One of my favourite figures from the assembly theory paper. The assembly observed gives grounding to the theory in experiment and explains why contingency is so important in explaining what exists.
We could shut down the existential threat of AI simply by enforcing copyright laws and valuing human creation. It's sickening that lawmakers can't see that this is the obvious thing that must be done.
The NYT reports that:
- OpenAI built a tool to transcribe YouTube videos to train its LLMs (likely infringing copyright)
- Greg Brockman personally helped scrape the videos
- OpenAI knew it was a legal gray area
- Google may have used YouTube videos the same way
- Meta avoided
The greatest gift of AI might be that we can start to see this emptiness and ultimate dead-end clearly. And, in contrast, that we all might also start to see the magic and power of human natural intelligence and how under-appreciated and under-engaged it is right now.
Life is constantly selecting for new assemblies of existing things that "work" in the current context - and each context is unique. We coarse-grain these newly assembled things and can "label" them in our minds. We do this to sensemake and create into our future.
Better late than never. There are many more scaling arguments like this that would be helpful in not wasting resources on dead-end AI. Next can someone do the scaling arguments for trying to fix them?
Yes. No difference between data and code. They used copyrighted material to compile their regurgitation machines, not "learn". Copyright infringement all the way down.
Can anyone tell me what mechanisms of action for psychedelics are being hypothesized beyond 'new connections/neuroplasticity' or 'mystical experiences'?
In contrast, AI does not have this open, dynamic relationship with the creative universe. So while it can create novel combinations of what we have culturally labeled, they are not *assemblies* - meaning, they can't participate in being further assembled.
@ylecun
@SpriteAttack
@survivor_343
The mechanism of inspiration is akin to resonance in open systems - a function of the lived history and embodiment of both the source and target of perception. This has absolutely ZERO overlap with AI mechanisms of data-fitting, and this tired argument is fundamentally a red herring.
@ChombaBupe
Mixed feelings. Seems like they are trying to set a precedent for "opt-out". I would rather we all fight for "opt-in", where by default only explicitly offered data is used to train AIs.
Educators and parents take note. LLMs are not safe right now, and probably never will be. They have no place in education at this stage and they shouldn't be in the classroom.
Putting out a petition momentarily, any edits?
New research from Stanford University and Allen AI establishes considerable covert racism in large language models, showing how a user's dialect can influence AI's representations of people's character, employment, and
We are fountains of new assemblies, with deep and meaningful chains of assembly of everything around us feeding into our sensemaking and creative processes. We do this dynamically, at the speed of life, using our perception to assemble deep structures in the process.
Put another way, natural language "encodes" how to run simulations in minds. Every utterance can be thought of as code to do this. Through this frame, LLMs simply compile our code into a different form that obfuscates the source. Nothing like "learning" takes place.🧵
Why is that? Because the novel combination is not named, labeled, and represented in the data as something that can be further blended ("penguin dog" and "hamster hamburger" have no data).
@_florianmai
@NecroKuma3
"such big productivity jumps that it enables UBI" <= We could already be doing this, and we aren't. All the savings from replacing humans with machines goes to the owners of the tech. Throughout history, no tech breakthrough has "enabled" the "1%" to give it back to the displaced.
@modelsarereal
@ChombaBupe
It's a toy problem because it is over a closed system. Real world problems are against open systems. It isn't clear how human cognition handles this level of complexity, but it is certain that the techniques mentioned in this article will not be able to.
And without the ability to perceptually coarse-grain in ways that are meaningful to humans, and to communicate that meaning within a lived history and context that is also meaningful to humans such that they adopt it, the depth will not be created.
And every other opaque software tool! It's all AI now, and all the people asking for trillions of dollars and claiming that their new LLMs create all this value for humanity are referring to THIS, not what their new stochastic parrot-ware does.
This is an example of where a simple decision tree (software...) could *augment* the intelligence of doctors. Medicine is no place for a black box confabulatory LLM. Start with the need THEN choose the technology and approach - working with subject matter experts!
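To make the contrast concrete, here is a minimal sketch of what such a transparent, expert-auditable decision tree looks like. All rules and thresholds here are hypothetical illustrations (not clinical guidance); the point is that every branch can be read, questioned, and vetted by a domain expert, unlike a black-box LLM:

```python
# Toy, fully auditable decision tree for flagging a patient for review.
# All thresholds are HYPOTHETICAL illustrations, not clinical guidance.
def triage_flag(temp_c: float, heart_rate: int) -> str:
    # Every branch is inspectable and can be vetted by a clinician.
    if temp_c >= 39.0:
        return "urgent review"
    if temp_c >= 38.0 and heart_rate > 100:
        return "urgent review"
    if temp_c >= 38.0:
        return "routine review"
    return "no flag"

print(triage_flag(39.5, 80))   # urgent review
print(triage_flag(38.2, 110))  # urgent review
print(triage_flag(37.0, 70))   # no flag
```

The whole "model" fits on a screen and can be corrected by the subject matter experts it augments - the opposite of a confabulatory black box.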
All we need to fix this mess we are in is to restore our abilities to listen. Listen to ourselves, listen to each other, listen to everything around us. It's the key for natural intelligence coordination, not being controlled by a statistical monstrosity.
A symptom of the total lack of sensemaking exercise and training (reasoning with careful inquiry through real-world scenarios) in public schools in the U.S., with apparently zero ability to reform/adapt. We need new education models, now.
The flat earther community seems to be growing, and it's beginning to worry me. It's not just a question of lacking science education, there is more going on here.
I think it is hilarious that the folks who believe an AGI God can magically solve all the world's problems, upload them into immortal forms, and allow us to colonize the stars try to frame arguments against such things as "woo".
We could have predicted this and many other harms with AI playing out. Until we actively restore and make time to exercise our individual and collective sensemaking, we will keep falling for the empty hype and false promises that we keep being sold over and over. Heartbreaking.
For years I’ve been interviewing data annotation workers who are the lifeblood of the AI industry. For years I’ve heard the same story: the platforms they work for wield total power, leaving them precarious & vulnerable to exploitation. A horrible example of this just happened 1/
It is heartening to see the pushback against AI. But still everyone is under the veil that AI holds all this promise; it doesn't. Humans hold the promise. We poison human cognition and subsequently distrust and disrespect it. Address the poison and we have everything we need.
@mephzara
@GaryMarcus
The “Afghan girl” example shows how an end user can naively end up with copyrighted material. So really everyone should be terrified of getting sued if they use Midjourney.
@jim_rutt
People want to BE Lex. Classic guru/influencer/celebrity model. You have to find your target market of people that want to BE you and then tell them what they want to hear...
@fustbariclation
@EikoFried
I think you need to step back a bit here on all these papers and realize that there are fundamental problems with ALL the research being done; it doesn't matter how many there are, it can all be garbage. Bloodletting was shown to be effective as well...
@IrisVanRooij
@o_guest
More tragedy that people don't understand scaling arguments and thought experiments. Instead we still "wait for the data". I wish there were something like a "first principles" conference on AI where a bunch of predictions from science can be carefully walked through for people
@thatfollowed
@IrisVanRooij
I think you are the one that needs to learn something about neural networks: "novel" is not an "original idea" - merely plausible. You could start with first principles (Judea Pearl), but my bet is that you are more of a data guy. So here you go:
We don't need research into harm of AI, AI ethics, or the alignment problem - the science has already been done, and it is solid. We need people to educate themselves on this science now and take action accordingly.
Confabulations can be temporarily induced in humans and are transient. Natural intelligence seeks out the correction of confabulation. When that process fails, we recognize it as illness. AI can't do this. As a result it is a cognitive parasite on human natural intelligence.
People need to be careful with these appeals to share all their important personal information with AIs, with the false promise that they can deliver what only natural intelligence can and already does deliver. Worse, it's poised to undermine, not enhance, our natural intelligence.
The YouTube channel Mindful Machines has just released an *amazing* video covering our work with the Moral Graph, our vision for Meaning Economies, and why that is important for AI and the future.
Thanks so much for the great work, @wchadly! (link below)
GenAI destroying humans' ability to coordinate (breakdown of collective natural intelligence) is also predicted by modern cognitive science. The other prong of why the existential threat of AI is here right now.
Some supernatural things I don't believe in: ghosts, embodied afterlives, re-incarnating with past memories, uploading minds into machines, artificial intelligence.
Hugely underestimated is the power of reading deeply AND broadly across scientific fields. Once you can learn to translate, you start to see how many fields solve each other's open questions and even cases where there are no degrees of freedom left in entire fields of science.
@labenz
@emilymbender
Real question: Have you tried cocaine and experienced how it helps you be more productive? Is cocaine powerful? You sure would think so if you tried it.
“AI will raise peoples’ quality of life, and help people be more competent and more efficient.” - I claim AI will lower peoples' quality of life and make them less competent and less efficient in the long term. This should be debated before being taken as fact.
What we call "AI" is a big data technology. Big data technology's value is based on the quality of the data. "AI" attempting to replace natural intelligence does not have the quality of data to do so. We sense this and still fall for the false promise that this is solvable.
@botzero_net
@Rahll
@GaryMarcus
Google LINKS - which is what creators want and expect when they put their content on the INTER - NET. This is not the internet; it is an artificial neural net attempting to replace it.
@jim_rutt
I have an idea, not a theory, that two-party systems select for maximally polarizing topics, not important ones. Seems like someone should be able to demonstrate it in a toy system if it has any validity.
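A minimal sketch of such a toy system, with every dynamic invented purely for illustration: give each topic an independent "importance" and "polarization" score, and assume (this is the hypothesis encoded as an assumption, not an established result) that the electoral value of campaigning on a topic tracks its polarization. Selection then decouples from importance entirely:

```python
import random

random.seed(0)

# Toy model: each topic has an independent "importance" and "polarization"
# score in [0, 1]; all dynamics here are invented for illustration.
topics = [(random.random(), random.random()) for _ in range(1000)]

# Hypothesis encoded as an assumption: the electoral advantage of
# campaigning on a topic scales with how polarizing it is, not how
# important it is.
def campaign_value(topic):
    importance, polarization = topic
    return polarization

# Parties "select" the 50 most electorally valuable topics.
selected = sorted(topics, key=campaign_value, reverse=True)[:50]

avg_importance = sum(t[0] for t in selected) / len(selected)
avg_polarization = sum(t[1] for t in selected) / len(selected)

# Selected topics end up maximally polarizing but only averagely important.
print(f"importance: {avg_importance:.2f}, polarization: {avg_polarization:.2f}")
```

Obviously this only restates the assumption; a real demonstration would need the polarization-wins-votes link to emerge from agent dynamics rather than be baked in.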
If OpenAI wants to develop AI for the good of all humanity, first answer the scientific question of whether that is even possible. Because there are real scientific arguments out there that say it isn’t. Start by trying to prove them wrong.
@Tim_Dettmers
@willie_agnew
The applications of ANNs for scientific discovery are non-overlapping with the ones that are a threat - these cases should not be part of the argument.
@thatfollowed
@ChombaBupe
@IrisVanRooij
"Sometimes I wonder if you guys have actually seen the magic trick where a woman is cut in half and a rabbit comes out of a hat". It is in the training data. I know that sounds crazy, but you really can't believe your eyes, this is not reasoning.
@Rahll
In this post I saw this image again and I remember when it came out thinking it was hard to tell it was AI. Now it seems very obvious. Like artificial wood grain, which is never a good bet for long term perception of value.
I will be moving on from problems with current AI and cognition to solutions in my Stoa session tomorrow at 6pm EST. For folks unfamiliar, there will be a Q&A that attendees can participate in. The Stoa:
Link to my talk is under UPCOMING SESSIONS
@thatfollowed
@IrisVanRooij
So humans can parrot and confabulate, but they can also reason. Artificial neural nets can *only* parrot (mistaken for reasoning - it's in the training data) and confabulate (reframed as *cute* "hallucinating"). This is why humans will always be more intelligent, and the rest is hype.
This looks very promising to bring to the battle of "engineering fiction" rampant in AI and beyond within Silicon Valley, home of "if we saw it on Star Trek, we can build it"
1/ This tweet demonstrates a dangerous stage that anyone that has been around the 'spiritual scene' for decades can spot. It is where a person thinks the insights they have gained on drugs have given them supernatural powers that make them 'healers'.
I call B.S. that there is any real use case for genAI that adds true value to humanity in the long run. Every consumer of these tools should seek out a side-by-side, full-cost-accounting, hype-stripped comparison of solving their problem without genAI. Alternative tech exists.
Equivocating things like creativity, intelligence, learning, insight, etc. between machines and humans is getting us into a world of trouble. It seems super important to stop doing this and come up with new terms that don't confuse people.
If we spent the same amount of resources on human 'training data' as we do AI, there would be no stopping us. Machine intelligence is no match for natural intelligence. We have just poisoned natural intelligence to make it seem that way.
@leecronin
We are already in it, our attention being captured by many false senses of promise, leading to nonsensical actions that poison us slowly over time, like a bacterial colony consuming its own waste. The waste product is our data.
@nopranablem
@RichDecibels
It confabulates. That means, when it doesn’t know, it fills in with nonsense. We have a hard time catching this. To avoid this tendency (in humans and AI) one has to be clear on doing constraint-based reasoning, not optimization.
This is a wonderful question! One way to unpack it is to trace the embodiment all the way down and be able to restore the context/embodiment meaningfully at each step. 🧵
Is a recording of live music “fake” music? Your computer speakers aren’t actually playing the saxophone/drums!
(I mean this as a serious question - spoofing the sounds of live music may trick us into thinking we’re in the presence of attuned, skilled allies, when we aren’t!)
@DonaldClark
@_KarenHao
"Abundant (though currently diminishing) storage ... and the ability to pump groundwater has allowed Phoenix to continue to thrive...water demand decreased by roughly 30 percent over the last 20 years." - good shape = borrowing from the future and cutbacks
@IrisVanRooij
My p-hope is at about 80% that after LLMs make no appreciable gains by the end of this, it will all go away. P-hope is at 20% that we will look back on this as the time when humanity learned how to fix marketing, based on the largest example of marketing-induced mass hysteria ever.
My wildest dreams appear to be coming true today - a triple whammy of scientists taking the time to test/explicate troubling theoretical arguments. There is hope for humanity yet to escape this con job before too much more damage is done.
@Grady_Booch
Further, if one were to arrange to have a constant companion/assistant LLM, one might even find themselves with full blown cult dynamics where a person is in a permanent state of inability to ground. This is a good example:
Wonderful example of where models belong in the world - simulations by domain experts that know how to interpret and verify the results. We need a CLEAR category distinction between this kind of thing and what is happening with GenAI in the world. This does not justify that.
This is the most surprising and exciting result of my career: we were running simulations of NaCl with a neural network potential that implicitly accounts for the effect of the water, ie a continuum solvent model (trained on normal MD) when Junji noticed something strange: 1/n
This is a big deal, the effect is "provoked confabulation" and it is predicted from fundamental principles of cognition, making it THE central harm of AI. Imagine the power of this for mind control/distorted thinking in personal agents.
@DoktorSly
@JohanKwisthout
@mjdramstead
@IrisVanRooij
Imagine a world where people invest billions of dollars because of false promises to replicate or exceed general intelligence at the expense of all the other theories/models/tools that actually can do some good in the world. This is the point of hammering on hype.
If I were trying to imagine a "judgement day" for all of humanity in our modern world, the scenario of people losing their riches on the bet against human natural intelligence would be a pretty good one. Of course AI can't scale where we do.
It turns out the data bottleneck problem is more dire than initially thought:
AI model performance - which can be largely attributed to the presence of test concepts within their vast pretraining datasets - increases linearly with exponentially more data.
RIP: Scaling laws
@patrickDurusau
Then let me rephrase: it's sickening that lawmakers' clear thinking on what is a clear legal breach gets clouded by tech nonsense claiming that machines can be creative, and by a lack of recognition of the intentional obfuscation of source work intended to compete.
Every time you successfully replace any tech with genAI, you are destroying context and by extension connection with the real world and real people. This is the hidden cost that leads to every kind of breakdown of society and individual. Productivity metrics are shortsighted.
Because we are materially changed, our learning can be transferred to every new context. LLMs can't do this. Instead of learning, LLMs "memorize" by "compiling" our data into fancy lookups based on statistical patterns.
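The "compiling into fancy lookups" framing can be made concrete with a toy bigram model - vastly simpler than a real LLM, and offered only as an illustration of the lookup idea, not as a description of transformer internals. The "model" is nothing but the training text reorganized into a statistical index:

```python
import random
from collections import defaultdict, Counter

random.seed(1)

# Toy corpus standing in for the training data.
training_text = "the cat sat on the mat and the cat ran".split()

# "Training" = compiling the data into a next-word lookup table.
table = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    table[prev][nxt] += 1

# "Generation" = sampling from the compiled statistics of the source text;
# every emitted word is, by construction, a lookup into the original data.
def generate(word, n=5):
    out = [word]
    for _ in range(n):
        options = table.get(out[-1])
        if not options:  # dead end: no observed successor
            break
        words = list(options)
        weights = list(options.values())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Every word this emits exists in the source text; the "model" can only recombine what it indexed.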
Persistently annoyed that there isn't a good term for the kind of AI that is problematic versus every other application of machine learning. Thinking "human intelligence mimickers" might be closer. HIMs for short. If not this, please something else.
@BrianRoemmele
My advice in AI/LLM lawsuits: ask AI people to explain how this happens and how anyone can know what prompts will unlock entire copyrighted works:
More in today's post: analyzing the scale of the problem, how and why OpenAI didn't predict the pollution issue originally, and why gen AI killing the internet fits the definition of a tragedy of the commons
@OlegAlexandrov
If AI ever can do this, it still will not be able to do human creativity because, as shown by assembly theory, biophysics and open systems dynamics, the ability to participate creatively is tied to deep history and had to co-evolve. We cannot engineer that.
@deepfates
What's hilarious is dismissing a mathematical argument and data without a counter argument or data to back it and thinking that calling something you don't understand "mental gymnastics" is some kind of flex.
@modelsarereal
@lerthedc
@ChombaBupe
"but they are professional idiots" <= sadly this view is common among many pro-AI folks. If you study AI long enough, you will undoubtedly reverse this view. In the meantime, we have e/acc.
I really resent that I spend so much time in my life attempting to address and accommodate broken science. I shouldn't be the one challenging crap science, scientists should be ones challenging crap science.
@dthorson
Also one can develop a sense for the patterns of "Moloch" - the invitation for "left hemisphere capture" and the patterns that release it. This is the heart of sensemaking and is a skill that can be trained.
They buried the lede on this new study. It's not that exercise beats out SSRIs for depression treatment, but that *just* dancing has the largest effect of *any treatment* for depression.
That's kind of beautiful.
I have said that consuming the output of LLMs is like "eating plastic"
@dthorson
Others appear to be concluding the same from the paper: “Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with..."
This is one lens into what AI destroying society looks like. It is only going to get worse. And these are only the OBVIOUS cases that were caught. Science is now INFECTED with confabulatory spew from LLMs. And as pointed out, no human has the bandwidth to clean it up.
@DrBrianKeating
I went to every office hour with all my professors as an undergraduate. I was usually the only one there. I only needed help once. The rest of the time I would just discuss their research or related material I was reading that wasn't covered in class. It was priceless.
Definitely describes me. I started doing it back in 2000, coaching non-technical business people on how to ask tough questions of my algorithms teams, which conned/intellectually bullied the business departments into paying for their favorite speculative and nonsensical projects.
Almost all of the critical voices on the recent but endless AI hype come from women, and all with different perspectives: from ethical AI to cognitive science and AI. They've been warning us for years: this hype distracts us from and gets in the way of addressing the real issues,
If an LLM says "I am hot" and you check the temperature of its circuit board, it will be the same as if it says "I am cold" - both are a lie and don't reflect its internal state. You can now extrapolate to every other "I" statement and hopefully break the spell...
The reason e/acc is a nonsense movement is that it assumes that it knows the optimal speed of anything. In open systems (life...) we don't decide the speed of anything, the system does. If you push it, you break it.
@matspike
This isn't the modern understanding of what genes do. They don't encode "data"; it's more that they store constraints on adaptation within a context/environment. The environment is the "data".