@rbehal1729
Rahul Behal
1 year
@AndrewYNg @ylecun @geoffreyhinton Exactly. @ylecun's view of this is too limited: no consideration of task-orchestration paradigms like AutoGPT, or of ReAct prompting for tool use and reasoning. It's like discussing an OS's capabilities at the base level without mentioning any of the programs built on top. Pointless.
1
0
3

Replies

@AndrewYNg
Andrew Ng
1 year
Had an insightful conversation with @geoffreyhinton about AI and catastrophic risks. Two thoughts we want to share: (i) It's important that AI scientists reach consensus on risks (similar to climate scientists, who have a rough consensus on climate change) to shape good policy.…
216
789
3K
@geoffreyhinton
Geoffrey Hinton
1 year
@AndrewYNg I used to talk to Andrew a lot and it was great to catch up again and get his take on the various risks posed by recent developments in AI. We agreed on a lot of things, especially on the need for the researchers to arrive at a consensus view of the risks to inform policy makers.
43
73
1K
@ylecun
Yann LeCun
1 year
@geoffreyhinton @AndrewYNg We all agree that we need to arrive at a consensus on a number of questions. I agree with @geoffreyhinton that LLMs have *some* level of understanding and that it is misleading to say they are "just statistics." However, their understanding of the world is very superficial, in…
144
211
1K
@AndrewYNg
Andrew Ng
1 year
@ylecun @geoffreyhinton Similar to how text-based LLMs were made possible via large pre-trained text transformers, the progress on large pre-trained vision transformers has been swift. This includes Meta's fantastic work (XCiT, DINO, DINOv2, SAM), Landing AI's work on Visual Prompting, and the work of…
9
19
290
@artsiom_s
Artsiom Sanakoyeu
1 year
@rbehal1729 @AndrewYNg @ylecun @geoffreyhinton What you mention are just engineering tricks applied at inference time. These techniques play no part in model optimization, and they in no way make the model more capable of reasoning or planning than it already is by autoregressing the next word. It's just shoveling extra context into the prompt.
1
0
0
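[Editor's note: the "inference-time tricks" under debate can be made concrete with a minimal ReAct-style tool-use loop. This is a sketch, not any specific library's API: `generate` is a hypothetical stand-in for one autoregressive LLM call, and the `calculator` tool is a toy. The point it illustrates is Sanakoyeu's: each tool result is simply appended to the prompt as extra context before the next ordinary next-token rollout.]

```python
# Minimal ReAct-style loop. `generate(prompt)` is a hypothetical stand-in
# for a single autoregressive LLM call; the model itself is never modified.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def react_loop(question: str, generate, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = generate(prompt)          # one ordinary next-token rollout
        prompt += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            # e.g. "Action: calculator[2 * 21]"
            name, _, arg = step.removeprefix("Action:").strip().partition("[")
            observation = TOOLS[name.strip()](arg.rstrip("]"))
            # The tool result becomes plain text in the next call's context.
            prompt += f"Observation: {observation}\n"
    return "no answer"
```

Whether "shoveling extra context" this way counts as improved reasoning is exactly what the two sides of this thread disagree about.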
@rbehal1729
Rahul Behal
1 year
@artsiom_s @AndrewYNg @ylecun @geoffreyhinton I disagree. Adding long-term memory in the form of a vector store clearly does result in better planning, as anyone who has used these systems can attest. "Engineering tricks" is just reductionist phrasing; what matters are the actual outcomes, not the implementation details.
0
0
2
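[Editor's note: "long-term memory in the form of a vector store" can be sketched in a few lines. This toy uses a bag-of-words embedding and cosine similarity purely for illustration; real systems use learned embeddings and dedicated vector databases, and none of the names below refer to an actual library. Note that, from the model's point of view, retrieved memories arrive as prepended prompt text, which is the crux of the disagreement above.]

```python
# Toy vector store: bag-of-words embeddings + cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Illustrative embedding: word counts (real systems use learned vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.items: list[tuple[Counter, str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def search(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(store: VectorStore, question: str) -> str:
    # Retrieved memories are prepended as ordinary context for the next call.
    memories = "\n".join(store.search(question))
    return f"Relevant notes:\n{memories}\n\nQuestion: {question}"
```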