@AndrewYNg
@ylecun
@geoffreyhinton
Exactly. @ylecun’s view of this is so limited.
No consideration of task-orchestration paradigms like AutoGPT, ReAct prompting for tool use and reasoning, etc.
It’s like talking about capabilities of an OS at a base level without mentioning any programs built on top, pointless.
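To make the reference concrete: a ReAct-style loop interleaves model "Thought" steps with "Action" tool calls, feeding each tool result back as an "Observation". A minimal sketch below — the `fake_llm` function, the prompt format, and the `calc` tool are illustrative stand-ins, not any particular library's API:

```python
import re

# Hypothetical stand-in for a real language model call: this "model"
# just scripts a fixed Thought/Action trace for illustration.
def fake_llm(prompt: str) -> str:
    if "Observation: 391" in prompt:
        return "Thought: I have the answer.\nFinal Answer: 391"
    return "Thought: I should multiply.\nAction: calc[17 * 23]"

# Tools the loop can dispatch to; calc evaluates simple arithmetic.
TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def react_loop(question: str, llm=fake_llm, max_steps: int = 5) -> str:
    """Interleave LLM reasoning with tool calls, ReAct-style."""
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)
        prompt += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+?)\]", step)
        if match:
            tool, arg = match.groups()
            # Tool output is appended to the prompt as an Observation.
            prompt += f"Observation: {TOOLS[tool](arg)}\n"
    return ""

print(react_loop("What is 17 * 23?"))  # → 391
```

Note that the loop only edits the prompt between calls; the model itself is unchanged — which is precisely the point contested later in this thread.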
Had an insightful conversation with @geoffreyhinton about AI and catastrophic risks. Two thoughts we want to share:
(i) It's important that AI scientists reach consensus on risks, similar to climate scientists, who have rough consensus on climate change, to shape good policy.…
@AndrewYNg
I used to talk to Andrew a lot and it was great to catch up again and get his take on the various risks posed by recent developments in AI. We agreed on a lot of things, especially on the need for researchers to arrive at a consensus view of the risks to inform policy makers.
@geoffreyhinton
@AndrewYNg
We all agree that we need to arrive at a consensus on a number of questions.
I agree with @geoffreyhinton that LLMs have *some* level of understanding and that it is misleading to say they are "just statistics."
However, their understanding of the world is very superficial, in…
@ylecun
@geoffreyhinton
Similar to how text-based LLMs were made possible via large pre-trained text transformers, the progress on large pre-trained vision transformers has been swift. This includes Meta's fantastic work (XCiT, DINO, DINOv2, SAM), Landing AI's work on Visual Prompting, and the work of…
@rbehal1729
@AndrewYNg
@ylecun
@geoffreyhinton
What you mention are just engineering tricks that people use during inference. The techniques are not used during model optimization.
They in no way make the model more capable of reasoning or planning than it already is by autoregressing the next word. It's just shoveling extra context.
@artsiom_s
@AndrewYNg
@ylecun
@geoffreyhinton
I disagree. Adding long-term memory in the form of a vector store obviously does result in enhanced planning; it's clear to anyone who's used these systems.
"Engineering tricks" is just reductionist phrasing. What matters is the actual outcomes, not the details.
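For readers unfamiliar with the mechanism being argued over: "long-term memory via a vector store" means past items are embedded as vectors and the nearest neighbors to the current query are pulled back into the prompt at inference time. A minimal sketch with a toy bag-of-words embedding — the embedding, the store, and the stored strings are illustrative stand-ins, not any real system:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned encoders."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Long-term memory: store texts, retrieve the most similar ones."""
    def __init__(self):
        self.items = []  # list of (embedding, original text) pairs

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def search(self, query: str, k: int = 1) -> list:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]),
                        reverse=True)
        return [text for _, text in ranked[:k]]

store = VectorStore()
store.add("The user prefers concise answers.")
store.add("The project deadline is next Friday.")

# Retrieved memories are appended to the prompt; the model weights are
# untouched — which is why both sides of this thread can be right about
# the mechanics while disagreeing about what counts as "capability".
print(store.search("when is the deadline?")[0])
# → The project deadline is next Friday.
```

Whether prompt-level retrieval constitutes "enhanced planning" or "shoveling extra context" is exactly the disagreement above; the code itself is neutral on that question.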