Browning.jake00

@Jake_Browning00

2,047
Followers
489
Following
2
Media
709
Statuses

Visiting Scientist at NYU working on the philosophy of AI, philosophy of mind, and the history of philosophy.

Brooklyn, NY
Joined August 2021
@Jake_Browning00
Browning.jake00
9 months
Now that both Altman and Gates are acknowledging scaling up won't help these models improve, I think it is safe to pronounce a verdict on the current approach to generative AI: it's boring.
37
62
405
@Jake_Browning00
Browning.jake00
1 year
I don't think anyone should report on the 22-word letter from Altman et al. on regulating AI to prevent existential risk without mentioning it comes just after OpenAI threatened to leave the EU rather than be regulated.
15
63
190
@Jake_Browning00
Browning.jake00
2 years
A piece by @ylecun about whether deep learning is hitting a wall--or just facing another hurdle. A response to @GaryMarcus attempting to lay out the different perspectives and why the debates get so heated.
11
37
172
@Jake_Browning00
Browning.jake00
10 months
I don't know if IIT is pseudoscience, but it is pernicious. While feigning scientific respectability, it avoided rigor and peer review in favor of pop books that obscured or ignored decades of consciousness research. The field will be better without it.
9
17
93
@Jake_Browning00
Browning.jake00
9 months
Written texts aren't the same as human knowledge & pretrained LLMs aren't reading those texts. They're trying to reproduce the words, not figure out how to apply them. LLMs aren't a path to general intelligence--no matter how much text they're fed.
@boazbaraktcs
Boaz Barak
9 months
Folks might be a bit too manic-depressive about LLMs with any advance meaning that robot apocalypse is around the corner, and any obstacle means that we've hit the wall. Concretely, pretraining data for LLMs is already basically all of human knowledge, so not being to
29
15
168
9
11
80
@Jake_Browning00
Browning.jake00
1 year
A piece by @ylecun and me. We argue conversation is more than just words; it depends on social norms governing what to say. Current chatbots are oblivious to these norms--and it shows in how dishonest, inconsistent, and offensive they are.
7
24
72
@Jake_Browning00
Browning.jake00
9 months
Why did we think the Winograd Schema Challenge would be a definitive test of common sense? And can there be a definitive test of common sense in language? A piece @ylecun and I wrote back in early 2022 is finally out!
8
8
68
@Jake_Browning00
Browning.jake00
9 months
I hope that the good social technologies we've developed--and that Andreessen largely lumps into the "enemies"--will help limit some of the dangers of Techno-Optimism.   (I don't know what happened to the first version of this post.)
5
11
67
@Jake_Browning00
Browning.jake00
9 months
In a new piece, @AllYouCanParty and I argue against @add_hawk 's claim that Twitter gamifies communication and transforms our values. We argue Twitter isn't game-like on his account and that chasing Likes and Retweets doesn't seem to transform values. 1/
3
11
54
@Jake_Browning00
Browning.jake00
2 years
A short piece by Yann LeCun and me. We argue that contemporary large language models, while impressive, are fundamentally limited to a shallow, superficial understanding of the world. This isn't because of the technology, but stems from the limits of language itself.
@NoemaMag
Noema Magazine
2 years
“A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe.” — @ylecun & @Jake_Browning00
10
65
352
3
9
33
@Jake_Browning00
Browning.jake00
1 year
Really great workshop next weekend, featuring a ton of great speakers and panelists: @ylecun @davidchalmers42 @neurograce @LakeBrenden @luosha @Brown_NLP @raphaelmilliere @glupyan @cameronjbuckner , Nick Shea, and more. Schedule and sign up here:
2
1
19
@Jake_Browning00
Browning.jake00
9 months
@LeszBuk I like Voyager, but memorizing and following scripts of other people accomplishing things in-game isn't really planning. If the model did the reverse--learned to play Minecraft, labeled actions it finds useful, and then wrote a program to accomplish that action--I'd be interested.
1
0
16
@Jake_Browning00
Browning.jake00
6 months
Making language models multimodal doesn't seem to have made them any smarter. And Altman is tamping down hopes that further scaling (or Q*) will lead to transformative AGI. Some reflections on why progress has slowed:
3
4
17
@Jake_Browning00
Browning.jake00
10 months
@ylecun As you like to say, people are often caught up in "mathematical hypnotism," getting so wrapped up in the cool math that they forget what it is supposed to do.
0
0
16
@Jake_Browning00
Browning.jake00
10 months
Like phrenology or mesmerism, IIT gained public acceptance with readable and flashy pop-sci presentations by big names rather than empirical investigation. This letter may be a political attempt to change the public conversation, but IIT can't complain; it started it.
@hakwanlau
hakwan lau 🇺🇦 @hakwan.bsky.social
10 months
in light of the spread of some misinformation on public media (including news articles in Nature & Science), a group of 124 researchers weigh in & consider it necessary to label the integrated information theory (IIT) of consciousness as pseudoscience
28
116
396
1
3
15
@Jake_Browning00
Browning.jake00
1 year
Very exciting and provocative new work by @raphaelmilliere on a topic recently taken up by @ylecun @davidchalmers42 @Brown_NLP @glupyan @LakeBrenden and me in our debate at NYU on LLMs and symbol grounding
@raphaelmilliere
Raphaël Millière
1 year
📝New preprint! What does it take for AI models to have grounded representations of lexical items? There is a lot of disagreement – some verbal, some substantive – about what grounding involves. Dimitri Mollo and I frame this old question in a new light 1/
15
84
341
1
5
13
@Jake_Browning00
Browning.jake00
2 years
A reply to our piece from @GaryMarcus. But I don't think we agree where he says we do, or disagree where he locates our disagreements. That's the nature of difficult issues, I suppose.
@NoemaMag
Noema Magazine
2 years
Is a decades-long AI debate finally coming to a resolution? @garymarcus is seeing signs that it is. Now "we can finally focus on the real issue: how to get data-driven learning & abstract, symbolic representations to work together."
1
34
197
0
2
9
@Jake_Browning00
Browning.jake00
1 year
@davidchalmers42 Throw a pitch to @NoemaMag . They're publishing a lot of philosophical takes on AI.
2
0
8
@Jake_Browning00
Browning.jake00
1 year
Audiobook or podcast recommendations for philosophy of AI-adjacent stuff? I have a really long, multiday trip ahead of me.
3
0
7
@Jake_Browning00
Browning.jake00
2 years
A Spanish translation of the recent piece with @ylecun in @NoemaMag ! Thanks to @gienini
@gienini
gienini
2 years
New article from the factory: "AI and the Limits of Language." An AI system trained only on words and sentences will never approximate human understanding. Read it at:
1
1
1
2
2
6
@Jake_Browning00
Browning.jake00
23 days
@Sander_vdLinden @robsica I don't think that's right. There is a difference between speculation and misinfo. The Convo article is speculation based on public speeches; the only qualified expert on Biden's mental state is his doc. But they've been silent. So everyone is speculating from appearances.
1
0
5
@Jake_Browning00
Browning.jake00
9 months
This doesn't mean there isn't plenty to complain about with Twitter or social media more generally. But we need to distinguish weak claims about ways users can use a technology from stronger claims about how a technology transforms us. 3/
0
0
5
@Jake_Browning00
Browning.jake00
1 year
@MelMitchell1 @tyrell_turing @VenkRamaswamy I agree, but we shouldn't overstate the power of language. The Eliza Effect occurs even when we know how the program works. Calling it text synthesis might be more accurate but won't help shift perceptions much.
2
1
5
@Jake_Browning00
Browning.jake00
9 months
@danwilliamsphil @add_hawk This is a fantastic piece! We discussed some of these issues while writing, so it is great to see them spelled out so clearly.
0
0
5
@Jake_Browning00
Browning.jake00
7 months
@glupyan Very cool read! The 19th century had its own blue dress moment when color-blindness became public knowledge after a train accident. People also doubted the first reported cases in the 18th c, assuming color-blind folks just never learned their color words.
1
0
4
@Jake_Browning00
Browning.jake00
2 years
@johnschwenkler @NoemaMag is another great venue for public philosophy
1
1
4
@Jake_Browning00
Browning.jake00
10 months
If IIT wants to be a science, it should spend a couple of decades in the empirical weeds before publishing the next big, bestselling book.
2
0
4
@Jake_Browning00
Browning.jake00
2 months
@petemandik @cameronjbuckner @johnmark_taylor Alhazen discussed unconscious inferences in the 11th century. And Pete is right that Leibniz repopularized it in the 18th. But it was super common among the post-Schopenhauer crowd in the 19th century, too. I think Freud was just the best writer of the bunch.
1
1
4
@Jake_Browning00
Browning.jake00
1 year
@thehangedman In his early Heidegger on Being a Person, he defined authenticity as being consistent among all one's commitments. So you reject or abandon any commitment (e.g., playboy) that is incompatible with the most important (e.g., father and husband).
1
0
4
@Jake_Browning00
Browning.jake00
9 months
We also argue his focus on high-frequency posters obscures how the app shapes the experience of the far more numerous consumers. The provocativeness of the analogy with games encourages us to overlook major design choices with troubling effects on users. 2/
1
0
4
@Jake_Browning00
Browning.jake00
1 year
This strikes me as exactly the right point, one that is missing from a lot of the AI risk crowd. They should first prove they can align corporate interests and social good in the present, especially at AI companies, before they talk about uncertain, unpredictable future risks.
@TonyZador
Tony Zador
1 year
"we have an alignment problem, not just between human beings and computer systems but between human society and corporations, human society and governments, human society and institutions." From Ezra Klein's podcast
15
52
228
3
2
4
@Jake_Browning00
Browning.jake00
2 years
@petemandik @gualtieropicc Some concepts survive in a non-scientific capacity, like id and superego, Libra, the passions, hysteria, and so on. But they don't describe anything in current scientific theories.
0
0
3
@Jake_Browning00
Browning.jake00
9 months
@DavidmComfort @schulzb589 @LeszBuk Snark aside, these aren't action shots; they're atmospheric shots. I want to see what happens next: the zombies attack out of the mist, the cavalry crashes into the pikemen, etc. My contention is that AI models are failing at complex, difficult-to-describe, multi-object interactions.
1
0
2
@Jake_Browning00
Browning.jake00
23 days
@Sander_vdLinden @robsica I agree about the fake and edited stuff. But isn't Dan's point that the meme isn't misinfo? I feel like the meme wasn't misleading, even at the time. It was a valid worry evidenced by public appearances without any counter evidence. Labeling it misinfo is inaccurate
1
0
3
@Jake_Browning00
Browning.jake00
10 months
@ourwaters @DioVicen Amen. I think scientists often aren't so bad at philosophy as it applies to them. They're just not good at dealing with the weird, a priori arguments and appeals to intuition common in philosophy.
1
0
3
@Jake_Browning00
Browning.jake00
23 days
@Sander_vdLinden @robsica But that's on Biden in this case. He could release his cognitive tests like Trump did. People are entitled to infer from Biden's refusal to do so (and, of course, appearances). It seems like misleading propaganda for Biden's supporters to dismiss the issue without evidence.
1
0
3
@Jake_Browning00
Browning.jake00
1 year
It reminded me of a take from 2008 about the iPhone which (rightly) complained that the phone was inferior to a Nokia E70, which had internet, texting, and a much more useful keyboard. I agreed at the time—but, in hindsight, this missed the point. 2/4
1
0
2
@Jake_Browning00
Browning.jake00
1 year
This should be essential reading for anyone trying to make sense of current language models. It does a good job of highlighting why these models are so fascinating, but also fickle, unreliable, and unpredictable.
@sleepinyourhat
Sam Bowman
1 year
I’m sharing a draft of a slightly-opinionated survey paper I’ve been working on for the last couple of months. It's meant for a broad audience—not just LLM researchers. (🧵)
23
275
1K
0
1
3
@Jake_Browning00
Browning.jake00
1 year
@pfau @s_scardapane I think a tiny fraction of the world has gone mad, but they're overrepresented on Twitter.
1
0
3
@Jake_Browning00
Browning.jake00
1 year
I need to write some of this down... Way better than my prepared remarks
0
0
3
@Jake_Browning00
Browning.jake00
1 year
"Aligning" AI doesn't require more GPUs. It's a social problem requiring independent institutions with teeth. By treating alignment as an in house, technical problem for the future, OpenAI is just making another PR push to avoid governments regulating ChatGPT and Bing AI.
@OpenAI
OpenAI
1 year
We need new technical breakthroughs to steer and control AI systems much smarter than us. Our new Superalignment team aims to solve this problem within 4 years, and we’re dedicating 20% of the compute we've secured to date towards this problem. Join us!
520
744
4K
0
0
3
@Jake_Browning00
Browning.jake00
2 months
@aresteanu @petemandik You can consent to an animal humping you. That's not a problem. But the law sometimes frowns on this (see Clerks 2).
0
0
2
@Jake_Browning00
Browning.jake00
10 months
@mjdramstead @ourwaters @DioVicen I mostly saw philosophers telling scientists they didn't know what science is and thus had to treat phrenology and mesmerism as valid sciences, too. There were also moral claims that it was wrong to exclude ideas and theories that had been repeatedly falsified.
2
0
2
@Jake_Browning00
Browning.jake00
1 year
This is fantastic.
@rao2z
Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
1 year
Our new paper generalizing the chain, circle and graph of thought prompting strategies--that unleashes the hidden power of LLMs (and graduate students). Hope @_akhaliq picks it up.. 🤞
12
40
248
0
0
2
@Jake_Browning00
Browning.jake00
1 year
@xgabegottliebx I was in a bar the other night called "the Emerson." It had busts of the great American poets and, weirdly, there was Holmes. Dude couldn't pick a lane.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
@DeonTBenton @matthewslocombe This looks great! I'll make that my next book
0
0
2
@Jake_Browning00
Browning.jake00
10 months
@mjdramstead @ourwaters @DioVicen IIT's physical claims have been disproven, multiple times for each iteration of the theory. But no one can disprove the phenomenological part, which is treated as "axiomatic." Which is the problem. The details of the theory keep changing because the core is impervious to evidence.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
@schafer_karl This is part of Douglas Hofstadter's argument in his book on analogical reasoning. And Melanie Mitchell develops it some in her book on AI.
1
0
2
@Jake_Browning00
Browning.jake00
2 years
@AntonellaTrama3 Second @martacarava . Haugeland's definition is often used (e.g., by Clark and Toribio): 1) reps store info about the environment, 2) can be used to guide behavior even when what they're representing isn't present, and 3) are part of a systematic representational schema of the organism.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
@DeonTBenton Is there any text (or set of texts) that does anything as comprehensive as Spelke's last book? Since I don't care for her explanations of findings, I'd love to read a good contrary account.
1
0
2
@Jake_Browning00
Browning.jake00
2 years
@keithfrankish @Philip_Goff Matthias Michel is doing great work on the history of the scientific study of consciousness.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
Smart article. Rather than criticizing the x-risk crowd for their beliefs, it criticizes their attempts to circumvent existing institutions. Strengthening these institutions' ability to address current problems is essential for dealing with any future risk.
1
0
2
@Jake_Browning00
Browning.jake00
9 months
@JudiciaIreview Part of it is that they are running out of data, and using synthesized data compounds errors. So scaling up won't bring benefits.
2
0
1
@Jake_Browning00
Browning.jake00
2 years
@GaryMarcus @cameronjbuckner @ylecun @davidchalmers42 @NoemaMag I'm just not sure I can imagine domain-general nativism, so it's hard to argue against it
2
0
2
@Jake_Browning00
Browning.jake00
2 years
@petemandik LaMDA includes a specially trained module that scores potential responses and picks the most appropriate one to spit out, similar to how CLIP scores candidate images made by DALL-E to provide the best. It's not introspection, but it isn't unrelated.
1
0
2
@Jake_Browning00
Browning.jake00
21 days
Ah, the question of determinacy. To reappropriate Chisholm's critique of Ayer: if you dream of a tiger, does it have a determinate number of stripes? Try and imagine one with just 17 stripes.
@balazskegl
Balázs Kégl
21 days
When you dream, do you see images or just imagine abstract representations?
12
0
4
0
0
2
@Jake_Browning00
Browning.jake00
2 years
@AntonellaTrama3 @martacarava Bermudez, in his cog sci textbook, goes with the more generic: "[representations are] stored information about the environment."
0
0
2
@Jake_Browning00
Browning.jake00
7 months
@glupyan Dalton's study of his own color-blindness (the first widely believed one) is a good example of this. He couldn't figure out why his fellow botanists disliked the way he described flowers, so he finally tested their color vision and realized he was the odd duck.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
Really good read on the dangers of Google's AI Search and its potential effect on news creators. Plagiarism Engine: Google’s Content-Swiping AI Could Break the Internet
0
1
2
@Jake_Browning00
Browning.jake00
2 years
Brilliant work from my colleague Philipp Schmitt!
@philippschmitt
Philipp Schmitt
2 years
New research-y project: Blueprints for Intelligence, a visual history of artificial neural networks from 1943 to 2020
41
518
2K
0
0
2
@Jake_Browning00
Browning.jake00
9 months
@DavidmComfort @schulzb589 @LeszBuk I love all the horses moving in the wrong direction. But at least they aren't clipping each other.
1
0
2
@Jake_Browning00
Browning.jake00
1 year
Turns out there is no overlap between my Twitter use and my Instagram use.
1
0
1
@Jake_Browning00
Browning.jake00
2 years
@GaryMarcus @cameronjbuckner @ylecun @davidchalmers42 @NoemaMag Maybe we should have argued that, but we were actually targeting domain-specific symbol manipulation--that some modules in the brain come innately with discrete symbols, variable-binding, etc.
2
0
1
@Jake_Browning00
Browning.jake00
1 year
@KhurrumM @ylecun @LakeBrenden @glupyan @davidchalmers42 Bad example. She had senses and was embodied for years before she learned language. Also had a wicked sense of humor.
1
0
1
@Jake_Browning00
Browning.jake00
1 year
@carl_b_sachs And my "Pittsburgh School" critique:
0
0
1
@Jake_Browning00
Browning.jake00
2 years
@WiringTheBrain Can I get a copy of the reading list for this project? It sounds amazing to me.
1
0
1
@Jake_Browning00
Browning.jake00
6 months
@TossingsRaphael I think model-free approaches are pretty hopeless. But mixed self-supervised and RL approaches, like Nvidia's work on Minecraft and NetHack, seem very promising.
0
0
1
@Jake_Browning00
Browning.jake00
2 years
@GaryMarcus @cameronjbuckner @ylecun @davidchalmers42 @NoemaMag Yann's proposal was designed to generate and evaluate abstract representations of plausible future states based on the present. And it doesn't assume any discrete symbols or variable binding.
1
0
1
@Jake_Browning00
Browning.jake00
1 year
@MelMitchell1 @tyrell_turing @VenkRamaswamy This also reminds me of the excellent short piece by @kmahowald and @neuranna
0
0
1
@Jake_Browning00
Browning.jake00
2 years
@DeonTBenton
Deon T. Benton
2 years
Ever wonder what would happen if a nativist and an empiricist were to meet? You'll soon get that answer because my friend (and nativist!), Dr. Jenny Wang ( @JinjingJenny1 ), will be joining me to co-host @TheItsInnatePC and to debate the origins of human knowledge and concepts!
5
6
63
0
0
1
@Jake_Browning00
Browning.jake00
3 years
@NoamChompers Weirdly, Kant uses it in a letter in 1783 to Marcus Herz.
0
0
1
@Jake_Browning00
Browning.jake00
1 year
@bleepbeepbzzz Maybe I'm mislabeling, but I'm thinking of (for example) Stuart Russell, the idea that the Midas problem is the big issue, and inverse RL as a potential solution. I'm not thinking of, like, McKinsey risk analysis.
1
0
1
@Jake_Browning00
Browning.jake00
1 year
@raphaelmilliere @ylecun @davidchalmers42 @Brown_NLP @glupyan @LakeBrenden I remember you mentioning this piece afterwards! Glad it is finally out!
0
0
1
@Jake_Browning00
Browning.jake00
2 years
@MilekPl Pamela McCorduck is a good reference, since she wrote the early history of AI:
1
0
1
@Jake_Browning00
Browning.jake00
9 months
@JudiciaIreview In principle, maybe. In practice, generative AI hallucinates so much and so unpredictably that you'd end up steadily deviating from reality. And we actually already see that happening.
0
0
1
@Jake_Browning00
Browning.jake00
10 months
0
0
1
@Jake_Browning00
Browning.jake00
1 year
The iPhone may not have been a great phone, but it was an incredible PC, with an enclosed ecosystem that transformed the design, purchase, acquisition, and usage of programs (relabeled “apps”). Apple defined the smartphone and owns the market. 3/4
1
0
1
@Jake_Browning00
Browning.jake00
9 months
@Shawnryan96 @ylecun That is a great article and a huge accomplishment. I think that will help with a lot of problems. But it can't fix the limits of the autoregressive approach to language.
1
0
1
@Jake_Browning00
Browning.jake00
1 year
This is true regardless of what future risk means. I'm skeptical it is existential but, if it is, building up current institutions and ensuring current harms from AI are addressed is still necessary for ensuring capacity to deal with whatever comes next.
0
0
1
@Jake_Browning00
Browning.jake00
1 year
(Sent from an iPhone)
0
0
1
@Jake_Browning00
Browning.jake00
2 months
@marielgoddu Burnyeat is really critical of the functionalist reading of Aristotle for the reasons you mention. He argues our conception of function only becomes possible once Descartes treats life as mechanical. Jessica Riskin's book is about the merging of mechanical and biological teleology.
0
0
1
@Jake_Browning00
Browning.jake00
2 years
@cameronjbuckner @NoemaMag We also linked to your piece for Junkyard of the Mind (because that piece is fantastic).
1
0
1
@Jake_Browning00
Browning.jake00
1 year
@HusarenH @NoemaMag @ylecun Yes, but you can get a chatbot to apologize and admit it was wrong.
0
0
1
@Jake_Browning00
Browning.jake00
9 months
@MaunaLoona @ylecun Sure, but feeding more modalities to a language model still treats the core of thought as language. Better success, I think, will come from building a better, multimodal world model and then adding a language modality to it--like evolution did with hominids.
1
0
1
@Jake_Browning00
Browning.jake00
11 months
@AISafetyMemes @cellinip @ylecun @NoemaMag I'd reconsider if an LLM could drive a car cross country. Humans are pretty good at that and it turns out to be really cognitively complex but not dependent on language.
1
0
1
@Jake_Browning00
Browning.jake00
10 months
@GaneshNatesh @mjdramstead @ourwaters @DioVicen @JohannesKleiner Don't have to convince me! I've always said the same. That's why treating them as axiomatic is so frustrating.
1
0
1
@Jake_Browning00
Browning.jake00
1 year
In fairness, it is easy to underestimate the novel: probably the first hammer seemed fragile and harder to use than a nice rock. But you should be careful about judging new tech by old metrics. 4/4
1
0
1
@Jake_Browning00
Browning.jake00
1 year
How might telepathy actually work outside the realm of sci-fi? – via @aeonmag
0
0
1
@Jake_Browning00
Browning.jake00
1 year
@Brown_NLP killed it. I was ready to change sides...
@raphaelmilliere
Raphaël Millière
1 year
Ellie's conclusions #phildeeplearning
1
10
67
0
0
1
@Jake_Browning00
Browning.jake00
11 months
@ibogost Lays has a few of these: chili cheese Frito flavor, Cheetos, cool ranch Doritos...
0
0
0