Still use ⛓️Chain-of-Thought (CoT) for all your prompting? You may be underutilizing LLM capabilities🤠
Introducing 🌲Tree-of-Thought (ToT), a framework to unleash complex & general problem solving with LLMs, through a deliberate ‘System 2’ tree search.
I've defended my PhD!
"Language Agents: From Next-Token Prediction to Digital Automation"
- Talk (WebShop, SWE-bench, ReAct, ToT, CoALA, and on the future of agents):
- Thesis (covers even more):
I will present my thesis defense tomorrow!
Language Agents: From Next-Token Prediction to Digital Automation
- 10am EST on Thursday, May 2
-
- WebShop, ReAct, ToT, CoALA
- Briefly: SWE-bench/agent
- Thoughts on the future of language agents
🧠🦾ReAct -> 🔥FireAct
Most language agents prompt LMs
- ReAct, AutoGPT, ToT, Generative Agents, ...
- Which is expensive, slow, and non-robust😢
Most fine-tuned LMs are not built for agents...
FireAct asks: WHY NOT?
Paper, code, data, ckpts:
(1/5)
Language Agents are cool & fast-moving, but no systematic way to understand & design them..
So we use classical CogSci & AI insights to propose Cognitive Architectures for Language Agents (🐨CoALA)!
w/ great
@tedsumers
@karthik_r_n
@cocosci_lab
(1/6)
Solving >10% of our SWE-Bench () is THE most impressive result in 2024 so far, and a milestone for the research and application of AI agents. Congrats
@cognition_labs
!
Today we're excited to introduce Devin, the first AI software engineer.
Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork.
Devin is
Large Language Models (LLM) are 🔥in 2 ways:
1.🧠Reason via internal thoughts (explain jokes, math reasoning..)
2.💪Act in external worlds (SayCan, ADEPT ACT-1, WebGPT..)
But so far 🧠and💪 remain distinct methods/tasks...
Why not 🧠+💪?
In our new work ReAct, we show 1+1>>2!
The art of programming is interactive.
Why should coding benchmarks be "seq2seq"?
Thrilled to present 🔄InterCode, next-gen framework of coding tasks as standard RL tasks (action=code, observation=execution feedback)
paper, code, data, pip:
(1/7)
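The framing above can be pictured as a tiny gym-style loop. Here's a toy sketch with illustrative names (this is not the actual InterCode API): the agent's action is a code snippet, and the observation is whatever execution returns.

```python
import contextlib
import io

class CodeEnv:
    """Toy interactive coding environment: action = code, observation =
    execution feedback. Illustrative only, not InterCode's real interface."""
    def __init__(self, target):
        self.target = target      # expected stdout for reward 1.0
        self.globals = {}         # state persists across steps, like a REPL

    def step(self, code):
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(code, self.globals)
            obs = buf.getvalue()
        except Exception as e:
            obs = f"Error: {e}"   # errors are observations too
        reward = 1.0 if obs.strip() == self.target else 0.0
        return obs, reward, reward == 1.0

env = CodeEnv(target="8")
env.step("x = 3 + 5")                     # define state, no output yet
obs, reward, done = env.step("print(x)")  # observe execution feedback
```

An RL or prompting agent would sit on top of this loop, conditioning its next code action on the accumulated observations.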
Code released at , thanks for waiting!
It's intentionally kept minimalistic (core ~ 100 lines), though some features (e.g. variable breadth across steps) can be easily added to improve perf & reduce cost.
(1/2)
Coding is the frontier of AI.
Excited to push the two frontiers of AI coding:
1. SWE(-bench/agent)
2. Olympiad programming (this tweet)
Introduce USACO benchmark:
* inference methods (RAG/reflect) help a bit: 9->20%
* human feedback helps a lot: 0->86%!
Extremely excited to open-source our SWE-agent that achieves SoTA on SWE-bench😃
Turns out ReAct + Agent-Computer Interface (ACI) can go a long way, very excited about the implications for SWE and beyond!
SWE-agent is our new system for autonomously solving issues in GitHub repos. It gets similar accuracy to Devin on SWE-bench, takes 93 seconds on avg + it's open source!
We designed a new agent-computer interface to make it easy for GPT-4 to edit+run code
We're releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.
These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math.
A summary thread of our recent (i.e. after GPT-3) work in language agents, in tweets👇
( also provides a nice summary --- I might be the first researcher who includes a "tweet" column for publications?🤷)
Large Language Model Agents is the next frontier. Really excited to announce our Berkeley course on LLM Agents, also available for anyone to join as a MOOC, starting Sep 9 (Mon) 3pm PT! 📢
Sign up & join us:
I'll give an oral talk about Tree of Thoughts
@NeurIPSConf
at 3:45-4pm CST on Dec 13 (4C), with the poster session right after (
#410
).
I'm also on the faculty job market this year, so DM me if you wanna chat😃
(Other posters: InterCode
#522
, Reflexion
#508
, 5-7pm Dec 14)
What to do if someone implemented my work (Tree of Thoughts) but failed to acknowledge the official repo, has more stars than the official repo, and might mislead people about the content of the work (i.e. the implementation might not reflect the paper's ideas)?
We're releasing a new iteration of SWE-bench, in collaboration with the original authors, to more reliably evaluate AI models on their ability to solve real-world software issues.
Write a sentence with "dog, frisbee, catch, throw"
👉Too easy for 7B LM...
Will (constrained) text generation (CTG) "die out" like many other NLP tasks, in face of LLM?
👉Excited to introduce 🐕COLLIE, next-gen CTG that even challenges GPT-4!
(1/n)
Meme aside, check out SWE-bench, which checks many boxes for a good benchmark
- hard but useful to solve, easy to evaluate
- automatically constructed from real GitHub issues and pull requests
- challenges super long context, retrieval, coding, etc.
- can be easily updated with new instances
Can LMs 🤖 replace programmers 🧑💻?
- Not yet!
Our new benchmark, SWE-bench, tests models on solving real issues from GitHub.
Claude 2 & GPT-4 get <5% acc.
🔗 See our leaderboard, paper, code, data:
🧵
We show huge gains on 3 new tasks GPT-4 can't solve directly or with CoT (hard to find!) due to a need for planning / searching: game of 24, creative writing, crosswords.
Not at
#ICML2023
but happy to finally release a
@princeton_nlp
blog post written by me and
@karthik_r_n
on the opportunities and risks of language agents
should be a fun 10min read! it's a very new subject, so please leave any comments here👇
Huge thanks to
@dawnsongtweets
@xinyun_chen_
for inviting me to such a timely, well-organized, and extremely popular class!
Check out the recording for my talk on the history and overview of llm agents - hope u like it 😀
If you've ever learned a bit of computer systems or programming, you know that the most intriguing and magical idea in CS is memory.
Same (will be true) for AI or at least the study of autonomous agents.
ToT achieves 10x perf by leveraging LLM's ability to
1. generate diverse choices of intermediate "thoughts" toward problem solving
2. self-evaluate thoughts via deliberate reasoning
With
3. search algorithms (e.g., bfs/dfs) that help systematically explore the problem space
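Those three ingredients compose into a short search loop. Here's a minimal BFS sketch with the two LLM calls stubbed out (`propose` and `evaluate` would be model calls in the real system; the official repo is the reference implementation):

```python
def propose(thought, k=3):
    # Stub for the LLM call that generates k candidate next "thoughts".
    return [f"{thought}->{i}" for i in range(k)]

def evaluate(thought):
    # Stub for the LLM self-evaluation call; returns a scalar value.
    return len(thought)

def tot_bfs(root, steps=2, breadth=2):
    """BFS over thoughts: expand every frontier thought, then keep only
    the `breadth` highest-valued candidates at each depth."""
    frontier = [root]
    for _ in range(steps):
        candidates = [nxt for t in frontier for nxt in propose(t)]
        candidates.sort(key=evaluate, reverse=True)  # deliberate pruning
        frontier = candidates[:breadth]
    return frontier

print(tot_bfs("start"))  # two surviving thought chains of depth 2
```

Swapping the queue for a stack gives DFS; shrinking `breadth` trades performance for cost.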
When I first saw Tree of Thoughts I also asked myself this😀 A great exploration into whether next-token prediction can simulate search; if you're interested in this, you probably also wanna check out the last paragraph
When I first saw Tree of Thoughts, I asked myself: If language models can reason better by searching, why don't they do it themselves during Chain of Thought? Some possible answers (and a new paper): 🧵
New preprint time :)
We propose Referral-Augmented Retrieval (RAR), an extremely simple augmentation technique that significantly improves zero-shot information retrieval.
Led by awesome undergrad
@_michaeltang_
, w/
@jyangballin
@karthik_r_n
If intelligence is "emergent complex behavior", then are Autonomous Language Agents (ALA) like BabyAGI and AutoGPT starting to enter that arena?
Will revise my slides & a blogpost draft about ALA w.r.t. recent progress and share soon
Quick thoughts👇 (1/n)
the top three trending repos on github are all self-prompting “primitive agi” projects:
1) babyagi by
@yoheinakajima
2) autogpt by
@SigGravitas
3) jarvis by
@Microsoft
these + scaling gets you the rest of the way there.
What if you had a bot you could just instruct in English to shop online for you?
Check out our latest work 🛒WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents
w/
@__howardchen
@jyangballin
,
@karthik_r_n
@princeton_nlp
For example, on game of 24 ("4 9 10 13"->"(13-9)*(10-4)=24"), CoT only solves 4% of games --- and already fails 60% of them after generating just the first 3 words!
Why?
LM token-by-token decoding does not allow lookahead, backtracking, or global exploration of different thoughts.
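For contrast, the game itself is trivially solvable once you allow explicit search with backtracking. A toy brute-force verifier (my own illustration, not from the paper's release; it only tries two common bracketings):

```python
from itertools import permutations, product

def solve24(nums):
    """Search orderings, operators, and two bracketings for an expression
    equal to 24; returns one solution string or None."""
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product("+-*/", repeat=3):
            for expr in (f"(({a}{o1}{b}){o2}{c}){o3}{d}",
                         f"({a}{o1}{b}){o2}({c}{o3}{d})"):
                try:
                    if abs(eval(expr) - 24) < 1e-6:
                        return expr  # backtracking is free here
                except ZeroDivisionError:
                    continue
    return None

print(solve24([4, 9, 10, 13]))
```

A left-to-right decoder has to commit to its first tokens; this loop can discard a bad prefix and try another, which is exactly the gap ToT closes.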
One of the most fundamental insights in CS: a program is just text.
Two of the most amazing inventions in CS: the LM and the compiler.
Both can turn a program into some other text: reasoning or execution feedback
Late night wild thoughts…
Bonus 2:
If you care more about "hardcore" language (instead of math/word) problems, or just enjoy CHAOS, check out our invented creative writing task!
How would you find a way to chain random sentences into a coherent passage? Would you similarly plan and compare?😀
Excited to share what I did
@SierraPlatform
with
@noahrshinn
pedram and
@karthik_r_n
!
𝜏-bench evaluates critical agent capabilities omitted by current benchmarks: robustness, complex rule following, and human interaction skills.
Try it out!
Excited to release 𝜏-bench (TAU for Tool-Agent-User ⚒️-🤖-🧑), a new benchmark to evaluate AI agents' performance and reliability in real-world settings with dynamic user and tool interaction.
Paper: , Blog:
Is it just me, or is needle-in-a-haystack too simplistic for benchmarking long-context capabilities? If SWE-bench is too hard, we at least need tasks that require reasoning over multiple parts of a long context in multiple steps
@karthik_r_n
@princeton_nlp
@PrincetonPLI
My thesis in 2 takeaways:
1. Language is the general-purpose representation for various external environments and internal thoughts. Thus, language agent is general.
2. Language reasoning can be seen as internal actions for agents. Thus, language agent is special.
Read more about
- how ToT modularizes thought decomposition/generation/evaluation & search algorithm to suit diverse tasks
- formal framework (rare in prompting era)
- many more experiments & findings
- Inspirations from CogSci & Root of AI (eg )!
Updates:
- Jupyter notebooks to try out ReAct prompting with GPT-3:
- 5-min video explaining ReAct:
- Oral presentations at NeurIPS FMDM, EMNLP EvoNLP & NILLI workshops, happy to chat in New Orleans/Abu Dhabi and meet new friends!
Hierarchical structure is a core aspect of language syntax. Recurrent networks can systematically process recursion by emulating stacks, but can self-attention networks? If so, how?
Our
#ACL2021
paper sheds light on this fundamental issue!
(1/5)
Going to NeurIPS to present my PhD papers in person for the FIRST time😀! Anyone interested in WebShop (), ReAct (), building language agents, language grounding/interaction/theory, NLP + RL... Let's DM and SCHEDULE A CHAT😆!
(1/3)
Had a great hour w/
@hwchase17
@charles_irl
@mbusigin
@yoheinakajima
talking about autonomous language agents, ReAct, LangChain, BabyAGI, context management, critic, safety, and many more.
Look forward to more
@LangChainAI
webinars, they're awesome!
Replay at the same link 👇
Our webinar on agents starts in 1 hour
It's the most popular webinar we've hosted yet, so we had to bring in the best possible moderator:
@charles_irl
Come join Charles, myself,
@ShunyuYao12
,
@mbusigin
and
@yoheinakajima
for some lively discussion :)
Check out our new preprint led by
@RTomMcCoy
and thank him for getting me to know the word 'ember'🔥
Tldr: language models (LMs) are not humans, just like planes are not birds. So analyzing LMs shouldn't just use human behavior or performance tests!
🤖🧠NEW PAPER🧠🤖
Language models are so broadly useful that it's easy to forget what they are: next-word prediction systems
Remembering this fact reveals surprising behavioral patterns: 🔥Embers of Autoregression🔥 (counterpart to "Sparks of AGI")
1/8
Thanks
@USC_ISI
@HJCH0
!
I talked about Formulation (CoALA) and Evaluation (Collie/InterCode/WebShop) of language agents, two directions that are
- important but understudied
- areas where academia could uniquely contribute!
slides:
video:
We had the pleasure of
@ShunyuYao12
give us a talk at USC ISI's NL Seminar "On Formulating and Evaluating Language Agents🤖"
Check out his recorded talk to learn about a unified taxonomy for work on language agents and the next steps forward on evaluating them for complex tasks!
ICLR week! Finally muster up a long-due tweet for our spotlight work:
Linking Emergent and Natural Languages via Corpus Transfer
paper:
code:
poster: Apr 27 13:30 - 15:30 EDT
1/n
Bonus:
if you're a crosswords fan, check out how ToT plays 😀
We improve game success from 1% -> 20%, but incorporating better search algorithms (e.g. how you maintain your thoughts) and heuristics (e.g. how you prune) should further enhance LLM performance!
@AdeptAILabs
This is super cool! We had a similar research idea in one domain (shopping), but it'd be much more powerful to train a multitask general language agent
Happy to announce our new
#emnlp2020
paper “Keep CALM and Explore: Language Models for Action Generation in Text-based Games” is online! w/ Rohan,
@mhauskn
,
@karthik_r_n
arxiv:
code:
more below (1/n)
For autonomous tasks with language (e.g. text games), how much does an agent rely on language semantics vs. memorization? Our
#NAACL2021
paper (, joint w/
@karthik_r_n
,
@mhauskn
) proposes ablation studies with surprising findings and useful insights! (1/3)
Flying to
#EMNLP
for the first
#NLP
conference in my life, despite being an OLD fourth year phd student😂😂
Would love to meet new friends and chat about language grounding and interaction!
Just realized another analogy between humans and LLMs: we develop and evaluate them on what is easy to evaluate (SAT or MMLU), then set them loose on what is hard to evaluate.
@noahshinn024
et al did Reflexion in Mar 2023, and tons of LLM-critic projects since.
Still, we worked on Reflexion v2 . What for?
- clean & general conceptual framework via language agent/RL
- strong empirical perf on more diverse & complex tasks
(1/n)
1. ReAct > 🧠/💪only methods, e.g.
- On knowledge reasoning tasks, interacting with wiki API obtains new knowledge and avoids hallucination.
- On decision making tasks, sparse+flexible thoughts can decompose goal, plan actions, induce commonsense, track progress, adjust plan..
Tree of Thoughts is a serious paper and serious research, not a GitHub-star-chasing play. I appreciate any implementation of any of my work, but it should link to the official implementation to avoid confusion and abuse.
ChemCrow is out today in
@NatMachIntell
! ChemCrow is an agent that uses chem tools and a cloud-based robotic lab for open-ended chem tasks. It’s been a journey to get to publication and I’d like to share some history about it. It started back in 2022. 1/8
🤖Autonomous Agents & Agent Simulations🤖
Four agent-related projects (AutoGPT, BabyAGI, CAMEL, and Generative Agents) have exploded recently
We wrote a blog on how they differ from previous
@LangChainAI
agents and how we've incorporated some key ideas
In collaboration with
@robusthq
, yesterday we shared "Tree of Attacks", a method that can jailbreak
@OpenAI
GPT-4 about 90% of the time. It was just covered in
@wired
Excited about this work on emergent communication (EC)! EC's been a tricky subject (i.e. lots of toy papers), but IMO the true potential is unleashing soon.
Simplest reason: we're running out of human-written language on the Internet. We'll have to use machines' self-generated language soon!
Our new paper, EC^2, has been published at CVPR 2023. It presents a novel video-language pre-training scheme via emergent communication for few-shot embodied control.
Project page: Paper:
@DrJimFan
's "no-gradient architecture" is exactly what we call "verbal reinforcement learning". Awesome progress in this direction using a great testbed!
It is fair to say we haven't (by a significant margin) reached the capability limit of just calling GPT-4 APIs. Still much to do!
What if we set GPT-4 free in Minecraft? ⛏️
I’m excited to announce Voyager, the first lifelong learning agent that plays Minecraft purely in-context. Voyager continuously improves itself by writing, refining, committing, and retrieving *code* from a skill library.
GPT-4 unlocks
3. ReAct naturally produces more interpretable and trustworthy trajs, where humans can
- inspect fact source (internal vs. external)
- check reasoning basis of decisions
- modify model thoughts for policy edit on-the-go, an exciting new paradigm for human-machine interaction!
2. ReAct generalizes strongly, both in few-shot prompting and finetuning. e.g.
- On WebShop/AlfWorld, 1/2-shot ReAct outperforms imitation learning w/ 3k/100k samples by 10/34%!
- Using LLM ReAct trajs, finetuned smaller LMs outperform the LLM and finetuned 🧠/💪-only models!
I am hoping to hire a postdoc who would start in Fall 2024. If you are interested in the intersection of linguistics, cognitive science, and AI, I encourage you to apply!
Please see this link for details:
misalignment of AI and humans is not as dangerous as misalignment of our own behavior and desire, or misalignment among our various desires, or misalignment among the various versions of us
ReAct = Synergize [Rea]soning and [Act]ing in LM
How? ReAct prompts LLM with human task-solving trajectories with interleaving 🧠flexible thoughts and 💪domain-specific actions, so that it can generate both.
Why? Strong generalization on VERY diverse tasks + ALIGNMENT benefits!
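The control flow reduces to a short loop. Below is a minimal sketch with the LM and environment stubbed out (a real agent would prompt an actual LLM with few-shot trajectories, e.g. against a wiki API; the strings here are hypothetical):

```python
def llm(context):
    # Stub LM: emits a thought first, then an action, then the answer.
    if "Obs: Paris" in context:
        return "Act: finish[Paris]"
    if "Thought:" in context:
        return "Act: search[France]"
    return "Thought: I should look up France."

def env(action):
    # Stub tool, standing in for e.g. the wiki API in ReAct's QA setup.
    return "Obs: Paris" if action == "search[France]" else "Obs: none"

def react(question, max_steps=6):
    """Interleave flexible thoughts and domain actions; feed observations
    back into the same textual context."""
    context = f"Question: {question}"
    for _ in range(max_steps):
        step = llm(context)
        context += "\n" + step
        if step.startswith("Act: finish"):
            return step.split("finish[")[1].rstrip("]")
        if step.startswith("Act: "):
            context += "\n" + env(step[5:])  # observation joins the context
    return None

print(react("What is the capital of France?"))
```

Because thoughts, actions, and observations all live in one textual trajectory, humans can read and even edit it, which is where the alignment benefits come from.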
As GPT context length keeps increasing, will all retrieval become in-context retrieval?
Or is (traditional) retrieval the key to increasing context length...
Should you let LMs control your email? terminal? bank account? or even your smart home?🤔
🔥Introducing ToolEmu for identifying risks associated with LM agents at scale!
🛠️Featuring LM-emulation of tools & automated realistic risk detection
🚨GPT4 is risky in 40% of our cases!
@srush_nlp
🤣I would say higher but not exponential, given search has heuristics (e.g. BFS prunes breadth, DFS prunes subtrees). But hopefully we can (and should) use open and free models soon
I opened an issue at just to ask that the repo link to our official repo to avoid any confusion, but it was closed by
@KyeGomezB
without any resolution. I don't like it.
We all know that __alignment__ elicits the capability of foundation models (FMs), while __agents__ autonomize FMs as copilots.
Well, I'd say the two are intrinsically intertwined! Excited to introduce the principles of unified alignment for agents: (1/N)
When is a language model enough, and when is a language agent needed? A good probe seems to be whether humans speak or write.
E.g. we can answer most questions fluently in everyday conversation, but need iterative revisions to write math proofs, blog posts, code, etc.
I also first saw Langchain when it implemented ReAct at 0.0.3 with <100 stars. Now it's 0.0.131 with 20k+ stars. A lot of hard work!
Great demos every day (a lot with super easy zero-shot-react-agent!). Congrats
@hwchase17
and
@LangChainAI
and look forward to the future!
I first saw Langchain on Twitter when Harrison implemented the ReAct paper, exposing the LLM’s reasoning: . I was impressed w/ the elegant abstractions he wrote & captivated by the possibilities of orchestrating LLMs as an intelligence layer.