EleutherAI Profile
EleutherAI

@AiEleuther

21,098 Followers · 78 Following · 31 Media · 704 Statuses

A non-profit research lab focused on interpretability, alignment, and ethics of artificial intelligence. Creators of GPT-J, GPT-NeoX, Pythia, and VQGAN-CLIP

Joined August 2022
Pinned Tweet
@AiEleuther
EleutherAI
2 years
Over the past two and a half years, EleutherAI has grown from a group of hackers on Discord to a thriving open science research community. Today, we are excited to announce the next step in our evolution: the formation of a non-profit research institute.
24
153
881
@AiEleuther
EleutherAI
1 year
The most common question we get about our models is "will X fit on Y GPU?" This, and many more questions about training and inference with LLMs, can be answered with some relatively easy math. By @QuentinAnthon15 , @BlancheMinerva , and @haileysch__
12
102
509
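For a sense of the arithmetic involved, here is a minimal sketch of the kind of memory estimate the post describes (the fp16 byte width and the ~20% overhead factor are illustrative assumptions, not figures taken from the blog):

```python
def inference_memory_gib(n_params, bytes_per_param=2, overhead=1.2):
    """Rough fp16 inference footprint in GiB: weights plus ~20% for
    activations and KV cache (an illustrative overhead factor only)."""
    return n_params * bytes_per_param * overhead / 2**30

# e.g. a 6.9B-parameter model in fp16 needs roughly 15 GiB,
# so it will not fit on a 12 GiB GPU for inference.
print(f"{inference_memory_gib(6.9e9):.1f} GiB")
```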
@AiEleuther
EleutherAI
1 year
Everyone knows that transformers are synonymous with large language models… but what if they weren’t? Over the past two years @BlinkDL_AI and team have been hard at work scaling RNNs to unprecedented scales. Today we are releasing a preprint on our work
4
118
474
@AiEleuther
EleutherAI
2 years
What do LLMs learn over the course of training? How do these patterns change as you scale? To help answer these questions, we are releasing Pythia, a suite of LLMs + checkpoints specifically designed for research on interpretability and training dynamics!
4
87
473
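A sketch of how the checkpoint suite can be loaded with Hugging Face transformers (the "stepXXXX" revision naming follows the Pythia model cards; the model size here is an arbitrary example):

```python
from transformers import AutoTokenizer, GPTNeoXForCausalLM

# Each Pythia training checkpoint is published as a git revision of
# the HF repo (revision names like "step3000" per the model cards).
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-160m",
    revision="step3000",
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
```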
@AiEleuther
EleutherAI
2 months
📕Today, we'd like to draw attention to the EleutherAI cookbook! () The cookbook contains practical details and utilities that go into working with real models! Such as: 🧵
1
97
422
@AiEleuther
EleutherAI
2 years
As part of our work to democratize and promote access to language model technology worldwide, the Polyglot team at EleutherAI is conducting research on multilingual and non-English NLP. We are excited to announce their first models: Korean LLMs with 1.3B and 3.8B parameters.
3
43
371
@AiEleuther
EleutherAI
2 years
We have been getting emails from confused individuals trying to access recently. That webpage doesn’t exist, because we don’t have an API. One of them finally clued us in to why: apparently ChatGPT suggests it for trying out our models.
7
30
275
@AiEleuther
EleutherAI
4 months
Excited to share our new paper, Lessons From The Trenches on Reproducible Evaluation of Language Models! In it, we discuss common challenges we’ve faced evaluating LMs, and how our library the Evaluation Harness is designed to mitigate them 🧵
Tweet media one
4
69
241
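For readers unfamiliar with the Evaluation Harness, a minimal sketch of its Python entry point (argument names follow the lm-eval 0.4.x docs; verify against your installed version, and the model/task here are arbitrary examples):

```python
import lm_eval

# Hedged sketch: evaluate one HF model on one benchmark task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",
    tasks=["lambada_openai"],
    batch_size=8,
)
print(results["results"])
```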
@AiEleuther
EleutherAI
1 year
ggml is a deeply impressive project, and much of its success is likely ascribable to @ggerganov 's management. Managing large-scale branching collaborations is a very challenging task (one we hope to improve at!), and Georgi deserves huge props for how he handles it.
@ggerganov
Georgi Gerganov
1 year
The ggml roadmap is progressing as expected with a lot of infrastructural development already completed We now enter the more interesting phase of the project - applying the framework to practical problems and doing cool stuff on the Edge
Tweet media one
7
41
535
7
15
238
@AiEleuther
EleutherAI
1 year
We applaud @Meta ’s continued push to openly license their models with #Llama2 having the most permissive license yet. However we are extremely sad to see Meta continue to spread misinformation about the licensing of the model: LLaMA 2 is not open source
4
52
212
@AiEleuther
EleutherAI
7 months
We’re excited to be collaborating on a new *resource release* to help provide an on-ramp for new open model developers: the Foundation Model Development Cheatsheet!
Tweet media one
4
43
209
@AiEleuther
EleutherAI
2 years
A common meme in the AI world is that responsible AI means locking AIs up so that nobody can study their strengths and weaknesses. We disagree: if there are going to be LLM products from companies like OpenAI and Google, then independent researchers must be able to study them.
@ClementDelangue
clem 🤗
2 years
What am I excited about for 2023? Supporting more open-source science, models, datasets and demos like Dalle-mini by @borisdayma , Bloom by @BigscienceW , GPTJ by @AiEleuther @laion_ai , @StableDiffusion by compvis @StabilityAI @runwayml , Santacoder by @BigCodeProject & many more!
13
30
279
2
20
184
@AiEleuther
EleutherAI
10 months
The EMNLP camera-ready version of @RWKV_AI is now available on arXiv! Congrats again to @BlinkDL_AI @eric_alcaide @QuentinAnthon15 and the rest of the team on the first successful scaling of RNNs to the ten billion parameter regime! A 🧵
Tweet media one
4
27
164
@AiEleuther
EleutherAI
11 months
The Foundation Model Transparency Index by @StanfordCRFM purports to be an assessment of how transparent popular AI models are. Unfortunately its analysis is quite flawed in ways that minimize its usefulness and encourage gamification
7
34
150
@AiEleuther
EleutherAI
2 years
The latest paper in EleutherAI's close collaboration with @mark_riedl 's lab on computational storytelling shows how to use a CLIP-like contrastive model to guide the generation of natural language stories to meet human preferences.
@_akhaliq
AK
2 years
Robust Preference Learning for Storytelling via Contrastive Reinforcement Learning abs:
Tweet media one
1
58
300
3
29
132
@AiEleuther
EleutherAI
2 years
If you have substantially contributed to an ML training run that required multiple compute nodes, we would like to interview you! Email contact@eleuther.ai with your resume, details about the training run, and a short description of your current interests. More jobs coming soon!
7
23
116
@AiEleuther
EleutherAI
2 years
Our recent blog post contained a meme about code golfing, inspired by a paper that bragged about reaching 80%+ on ImageNet with code that fit in a tweet. In the past 24 hours we've received five emails with code beating Gao (2021), with the current record holder being 260 bytes:
Tweet media one
2
11
111
@AiEleuther
EleutherAI
11 months
How can we talk about the way AI chat bots behave without falling into false anthropomorphic assumptions? In our latest paper we explore role-play as a framework for understanding chatbots without falsely ascribing human characteristics to language models
3
21
107
@AiEleuther
EleutherAI
8 months
Congratulations to our friends at @allen_ai on joining (along with EleutherAI and @llm360 ) the tiny club of organizations that have trained a large language model with:
1. Public training data
2. Partially trained checkpoints
3. Open source licensing on model weights
@allen_ai
Ai2
8 months
OLMo is here! And it’s 100% open. It’s a state-of-the-art LLM and we are releasing it with all pre-training data and code. Let’s get to work on understanding the science behind LLMs. Learn more about the framework and how to access it here:
29
347
1K
1
14
105
@AiEleuther
EleutherAI
1 year
We're glad to share Minetester, a fully open RL framework we've been developing as part of a larger alignment research agenda.
1
23
106
@AiEleuther
EleutherAI
11 months
Great to see our work with @NousResearch and @EnricoShippole on context length extension highlighted at @MistralAI 's presentation at AI Pulse. And a very deserved shout-out to @huggingface and @Teknium1 as well!
Tweet media one
3
10
95
@AiEleuther
EleutherAI
1 year
A few days ago we had the first meeting of our newest reading group, focusing on Mixture of Experts (MoE) models. Check out the recording, and drop by our Discord server to join the next meeting!
0
14
95
@AiEleuther
EleutherAI
4 months
Negative results are essential for science, but approximately impossible to publish. What's your preferred way to share them?
@norabelrose
Nora Belrose
4 months
Last year, many people at @AiEleuther worked on a project to improve on @CollinBurns4 's CCS method for eliciting latent knowledge from LLMs. We were unable to improve on CCS, but today we're publishing the proposed method and negative empirical results.
1
7
122
7
8
88
@AiEleuther
EleutherAI
1 year
We are discussing ramping up our public education efforts. What are topic(s) regarding LLMs and other large scale AI technologies that you would like to see more lay-accessible blog posts, infographics, etc. about?
18
7
88
@AiEleuther
EleutherAI
2 years
The first major codebase to come out of @carperai , our Reinforcement Learning from Human Feedback (RLHF) lab. Previous work by @OpenAI and @AnthropicAI has made it clear that RLHF is a promising technology, but a lack of released tools and frameworks makes using it challenging
1
11
86
@AiEleuther
EleutherAI
11 months
Read more about our team and collaborators’ work on Llemma, powerful domain-adapted base models for mathematics! Blog post: Models/data/code: 1/n
1
32
83
@AiEleuther
EleutherAI
8 months
Amazing work by the @CohereForAI team! Dataset paper: Model paper:
@CohereForAI
Cohere For AI
8 months
Today, we’re launching Aya, a new open-source, massively multilingual LLM & dataset to help support under-represented languages. Aya outperforms existing open-source models and covers 101 different languages – more than double the number covered by previous models.
77
371
1K
1
19
77
@AiEleuther
EleutherAI
2 years
We are very excited to share the results of our collaboration with @farairesearch on developing tooling for understanding how model predictions evolve over the course of training. These ideas are already powering our ELK research, so expect more soon!
@norabelrose
Nora Belrose
2 years
Ever wonder how a language model decides what to say next? Our method, the tuned lens (), can trace an LM’s prediction as it develops from one layer to the next. It's more reliable and applies to more models than prior state-of-the-art. 🧵
Tweet media one
18
177
923
1
14
72
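Schematically, a lens of this family decodes an intermediate hidden state through a learned affine map followed by the model's own unembedding, giving a per-layer view of the evolving prediction. The sketch below is a conceptual illustration, not the tuned-lens package API:

```python
import torch.nn as nn

class AffineLens(nn.Module):
    """Per-layer affine 'translator' followed by the frozen LM head,
    in the style of lens-based interpretability methods (schematic)."""
    def __init__(self, d_model: int, unembed: nn.Linear):
        super().__init__()
        self.translate = nn.Linear(d_model, d_model)
        self.unembed = unembed  # the model's own unembedding, frozen

    def forward(self, hidden):
        # Map layer-l hidden states into the final layer's basis,
        # then read off next-token logits.
        return self.unembed(self.translate(hidden))
```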
@AiEleuther
EleutherAI
2 months
Looking for EleutherAI at @icmlconf ? Come meet our community and check out their fabulous work. Featuring (in order of appearance): @lintangsutawika @haileysch__ @aviskowron @BlancheMinerva @Vermeille_ @Void13950782 @dashstander @qinan_yu @norabelrose @CurtTigges
Tweet media one
Tweet media two
2
14
53
@AiEleuther
EleutherAI
1 year
We believe that building a robust, interoperable research community requires collaboration. @huggingface has been doing a phenomenal job organizing multilateral collaborations and we're excited to continue to participate. Congrats to @haileysch__ and the entire @BigCodeProject !
@BigCodeProject
BigCode
1 year
Introducing: 💫StarCoder StarCoder is a 15B LLM for code with 8k context and trained only on permissive data in 80+ programming languages. It can be prompted to reach 40% pass @1 on HumanEval and act as a Tech Assistant. Try it here: Release thread🧵
Tweet media one
76
666
3K
1
15
70
@AiEleuther
EleutherAI
1 year
We’ve trained and released Llemma, strong base LMs for mathematics competitive with the best similar closed+unreleased models. We hope these models + code will serve as a powerful platform for enabling future open Math+AI research!
@zhangir_azerbay
Zhangir Azerbayev
1 year
We release Llemma: open LMs for math trained on up to 200B tokens of mathematical text. The performance of Llemma 34B approaches Google's Minerva 62B despite having half the parameters. Models/data/code: Paper: More ⬇️
Tweet media one
11
126
549
1
15
70
@AiEleuther
EleutherAI
1 year
Amazing news for our close partner in research and major donor, @huggingface . We've been thrilled to work with HF on projects like BLOOM and the Open LLM Leaderboard, and are excited to continue to work with them to advance open AI research and the open source ecosystem.
@ClementDelangue
clem 🤗
1 year
Super excited to welcome our new investors @SalesforceVC , @Google , @amazon , @nvidia , @AMD , @intel , @QualcommVenture , @IBM & @sound_ventures_ who all participated in @huggingface ’s $235M series D at a $4.5B valuation to celebrate the crossing of 1,000,000 models, datasets and apps
Tweet media one
258
322
2K
0
4
68
@AiEleuther
EleutherAI
2 years
Great to see @CerebrasSystems building on top of the Pile and releasing these models open source! Cerebras-GPT is Chinchilla-optimal up to 13B parameters. A nice complement to our Pythia suite, allowing for comparison of the effect of different training regimes on model behavior
@CerebrasSystems
Cerebras
2 years
🎉 Exciting news! Today we are releasing Cerebras-GPT, a family of 7 GPT models from 111M to 13B parameters trained using the Chinchilla formula. These are the highest accuracy models for a compute budget and are available today open-source! (1/5) Press:
32
337
1K
2
10
70
@AiEleuther
EleutherAI
2 years
Huge shout out to the donors who have helped us get to where we are today and where we will go next: @StabilityAI @huggingface @CoreWeave @natfriedman @LambdaAPI and @canva And finally, come hang out in our online research lab! We can't wait to meet you.
2
2
68
@AiEleuther
EleutherAI
2 years
Very exciting work from @databricks ! We’re excited to see GPT-J continuing to power open source innovation close to two years after we released it.
@matei_zaharia
Matei Zaharia
2 years
Building a ChatGPT-like LLM might be easier than anyone thought. At @Databricks , we tuned a 2-year-old open source model to follow instructions in just 3 hours, and are open sourcing the code. We think this tech will quickly be democratized.
43
508
3K
2
8
64
@AiEleuther
EleutherAI
8 months
We are excited to join other leaders in artificial intelligence in partnering with @NSF to launch the National AI Research Resource (NAIRR), a shared infrastructure that will promote access to critical resources necessary to power AI research.
@NSF
U.S. National Science Foundation
8 months
NSF and its partners are proud to launch the National AI Research Resource pilot. Its goal? To democratize the future of #AI research & development by offering researchers & educators advanced computing, datasets, models, software, training & user support.
Tweet media one
23
86
244
3
7
63
@AiEleuther
EleutherAI
2 months
As models become larger and more unwieldy, auto-interp methods have become increasingly important. We are excited to be releasing the most comprehensive auto-interp library to enable wider research on this topic.
@kh4dien
caden
2 months
Sparse autoencoders recover a diversity of interpretable features but present an intractable problem of scale to human labelers. We build new automated pipelines to close the gap, scaling our understanding to GPT-2 and LLama-3 8b features. @goncaloSpaulo @jacobcd52 @norabelrose
Tweet media one
3
23
130
2
11
61
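For context, the objects these pipelines label are features recovered by a sparse autoencoder trained on model activations. A minimal, simplified SAE forward pass (dimensions and names are illustrative; the paper's exact architecture may differ):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE: an overcomplete ReLU bottleneck over activations."""
    def __init__(self, d_model=768, d_features=16384):
        super().__init__()
        self.enc = nn.Linear(d_model, d_features)
        self.dec = nn.Linear(d_features, d_model)

    def forward(self, acts):
        feats = torch.relu(self.enc(acts))  # sparse feature activations
        return self.dec(feats), feats       # reconstruction + features
```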
@AiEleuther
EleutherAI
2 years
Reinforcement Learning from Human Feedback is an allegedly powerful technology for language models, but one that has so far been kept out of the hands of most researchers. We are thrilled to be working on bringing the ability to study and evaluate these models to the mainstream.
0
13
61
@AiEleuther
EleutherAI
7 months
Kyle is one of four members of our community without a PhD who currently have their first first-author paper under review! We view providing this training and mentorship as an important part of our public service.
@KyleDevinOBrien
Kyle O'Brien
7 months
We are grateful to EleutherAI for permitting access to their compute resources for initial experiments. The welcome and open research community on the EleutherAI Discord was especially helpful for this project and my growth as a scientist. 😊
1
1
11
3
4
60
@AiEleuther
EleutherAI
8 months
Another day, another math LM bootstrapping its data work off of the work done by the OpenWebMath and Llemma teams. That makes three in the past week! Open data work is 🔥 Not only do people use your data, but high quality data work has an enduring impact on data pipelines.
@_akhaliq
AK
8 months
AutoMathText Autonomous Data Selection with Language Models for Mathematical Texts paper page: dataset: . To improve language models' proficiency in mathematical reasoning via continual pretraining, we introduce a novel strategy
Tweet media one
1
18
85
3
10
58
@AiEleuther
EleutherAI
1 year
Releasing data is amazing, but tools like these that help people make sense of the data are arguably an even more important step forward for data transparency. We're thrilled to see our community continue to lead by example when it comes to transparent releases.
@keirp1
Keiran Paster
1 year
We also made an @nomic_ai Atlas map of OpenWebMath so you can explore the different types of math and scientific data present in the dataset:
3
14
63
1
13
55
@AiEleuther
EleutherAI
5 months
An essential blocker to training LLMs on public domain books is not knowing which books are in the public domain. We're working on it, but it's slow and costly... if you're interested in providing support reach out!
@Is_Dan_Bull
Daniel Bullock
5 months
@BlancheMinerva @rom1504 Indeed, these would be *extremely* valuable data resources. The databases on are, unfortunately, ununified and the records themselves seem anemic. Somewhat odd considering USPTO has flagship datasets (available via NAIRR). Greater financial incentives?
1
0
0
2
10
53
@AiEleuther
EleutherAI
2 years
A very interesting analysis from @ZetaVector looks at how the most cited papers each year break down. We’re especially proud of this statistic: Almost 20% of papers with EleutherAI authors were in the top 100 most cited papers of their year. Full report:
@ZetaVector
Zeta Alpha
2 years
And fixed an issue that caused @AiEleuther to miss their spot as the second most effective in impact.
Tweet media one
1
0
7
3
8
55
@AiEleuther
EleutherAI
5 months
We are excited to see torchtune, a newly announced PyTorch-native finetuning library, integrate with our LM Evaluation Harness library for standardized, reproducible evaluations! Read more here: Blog: Thread:
@kakemeister
Kartikay Khandelwal
5 months
torchtune provides:
- LLM implementations in native PyTorch
- Recipes for QLoRA, LoRA and full fine-tuning
- Popular dataset formats and YAML configs
- Integrations with @huggingface Hub, @AiEleuther Eval Harness, bitsandbytes, ExecuTorch and many more [3/5]
1
3
22
0
7
55
@AiEleuther
EleutherAI
2 years
Interested in Mixture-of-Experts models but don't want to train one from scratch? Check out the latest from @arankomatsuzaki , who spent his internship at @GoogleAI figuring out how to convert existing dense models to MoE ones.
@arankomatsuzaki
Aran Komatsuzaki
2 years
We have released "Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints"! Our method converts a pretrained dense model into a MoE by copying the MLP layers and keeps training it, which outperforms continued dense training. (1/N)
Tweet media one
11
82
387
0
6
53
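Schematically, sparse upcycling initializes every expert as a copy of the pretrained dense MLP and adds a freshly initialized router, then continues training. A toy sketch under those assumptions (not the paper's code):

```python
import copy
import torch.nn as nn

def upcycle_mlp(dense_mlp: nn.Module, num_experts=8, d_model=768):
    """Dense-to-MoE conversion, schematically: each expert starts as
    an exact copy of the pretrained MLP; the router is new."""
    experts = nn.ModuleList(
        [copy.deepcopy(dense_mlp) for _ in range(num_experts)]
    )
    router = nn.Linear(d_model, num_experts)  # trained from scratch
    return experts, router
```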
@AiEleuther
EleutherAI
2 years
Very glad to see this. “Public release” of models doesn’t mean much at this scale if you can’t provide a free API, as almost nobody can afford to deploy the model. Great work by @huggingface and @Azure making this happen and keeping it supported
@julien_c
Julien Chaumond
2 years
BLOOM API is back online 🌸🌸🌸🔥 Thanks @Azure for the support
2
12
75
0
5
52
@AiEleuther
EleutherAI
1 year
A huge thank you to everyone who has helped make our training and evaluation libraries some of the most popular in the world. Especially @QuentinAnthon15 's work leading GPT-NeoX and @haileysch__ @lintangsutawika @BlancheMinerva and @jonbtow for their eval work over the years
Tweet media one
Tweet media two
2
8
52
@AiEleuther
EleutherAI
2 years
most NLP researchers had a very minimal understanding of the engineering undertaking required to train such models or their capabilities & limitations. We started as a ragtag group nobody had heard of, and within a year had released the largest OSS GPT-3-style model in the world.
1
0
51
@AiEleuther
EleutherAI
1 year
@Meta @ylecun @paperswithcode @huggingface @StabilityAI If “open source” is to mean anything, we must stand with @OpenSourceOrg and call out corporate misinformation. You don’t need to license your models open source. It may even be the best choice to *not* do so. But if you don’t, you shouldn’t lie and say you did.
4
11
49
@AiEleuther
EleutherAI
1 year
A really phenomenal deep dive into LLM evaluations and a good illustration of why:
1. Real-life applications should be evaluated in the deployment context
2. Open access to models and evaluation code is essential for understanding the claims made in papers
@a13xba
Alex
1 year
𝗗𝗼𝗻’𝘁 𝗯𝗹𝗶𝗻𝗱𝗹𝘆 𝘁𝗿𝘂𝘀𝘁 𝘁𝗵𝗲 𝗢𝗽𝗲𝗻 𝗟𝗟𝗠 𝗟𝗲𝗮𝗱𝗲𝗿𝗯𝗼𝗮𝗿𝗱! We used @try_zeno to explore the Open LLM Leaderboard data. Spoiler: Unless you use LLMs for multiple choice questions, these benchmarks aren’t that helpful. Zeno Report:
4
19
83
0
9
50
@AiEleuther
EleutherAI
1 year
Claiming that you can match a transformer's performance is nothing new, and plenty of other papers put forth that claim. What makes RWKV special is that we actually train models up to 14B params and show consistently competitive performance with token-matched transformers!
Tweet media one
1
9
50
@AiEleuther
EleutherAI
1 year
A little over a month ago, @Vermeille_ showed up in our discord server with a simple question: can CFG be applied to LLMs? Probably, but the devil’s in the details. So we sat down to figure those details out. Check out his new paper for more ⬇️⬇️⬇️
@Vermeille_
Guillaume "Vermeille" Sanchez
1 year
We borrowed CFG from vision and ran it with LLMs. We get increased control, and benchmark increases similar to a model twice the size. Ready for all your models (incl. chatbots!): no special training or fine tuning required. thx @AiEleuther !
17
89
393
1
8
49
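The core trick, as in vision CFG, is to blend conditional and unconditional logits at decoding time. A minimal sketch (variable names are ours; see the paper for the exact formulation):

```python
def cfg_logits(cond_logits, uncond_logits, guidance_scale=1.5):
    """Classifier-free guidance over LM logits: push the conditional
    distribution away from the unconditional one.
    guidance_scale=1.0 recovers ordinary conditional sampling."""
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)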
@AiEleuther
EleutherAI
1 year
Congrats to everyone who won one of these grants. The open source community desperately needs more funding so that people can be *professional* open source engineers and researchers, lest the only end-game be a closed-source job.
@BornsteinMatt
Matt Bornstein
1 year
[New program] a16z Open Source AI Grants Hackers & independent devs are massively important to the AI ecosystem. We're starting a grant funding program so they can continue their work without pressure to generate financial returns.
70
266
1K
0
5
49
@AiEleuther
EleutherAI
2 years
It’s been a pleasure to watch Theodore’s ideas develop over the past two years. Definitely check out his paper on finetuning LLMs into “text-to-structure” models, and how to use them to design tools that are useful to architects.
@TheodoreGalanos
Theodore Galanos
2 years
It's finally out! After almost 2 years of delay, our paper on Architext, the first open-source language model trained for architectural design, is now on arxiv. In the unlikely event you're curious to read it, you can find it here: Quick thread ↓
Tweet media one
25
77
532
0
11
47
@AiEleuther
EleutherAI
2 years
This will enable us to do much more, and we look forward to building a world class research group for public good! Led by Stella Biderman @BlancheMinerva as Executive Director and Head of Research, Curtis Huebner as Head of Alignment, and Shiv Purohit as Head of Engineering.
3
1
45
@AiEleuther
EleutherAI
10 months
Interested in meeting up with EleutherAI at #NeurIPS2023 ? Over a dozen members of our community will be there to present ten papers, including @BlancheMinerva @norabelrose @lcastricato @QuentinAnthon15 @arankomatsuzaki @KyleDevinOBrien @zhangir_azerbay @iScienceLuvr @LauraRuis
1
5
43
@AiEleuther
EleutherAI
1 year
RNNs struggle to scale because of how they parallelize, but by making the time decay of each channel data-independent, we are able to parallelize RWKV the same way transformers are during training! After training, it can be used like an RNN for inference.
Tweet media one
Tweet media two
1
4
42
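A toy illustration of why a data-independent decay unlocks parallelism: the recurrence unrolls into a weighted sum that can be computed all at once (O(T^2) here for clarity; this is an illustration of the idea, not the actual RWKV kernel):

```python
import torch

def decayed_scan(x, decay=0.9):
    """Toy recurrence h_t = decay * h_{t-1} + x_t. Because the decay
    does not depend on the input, h_t = sum_{s<=t} decay**(t-s) * x_s,
    so the whole sequence can be computed in parallel."""
    T = x.shape[0]
    t = torch.arange(T, dtype=x.dtype)
    weights = torch.tril(decay ** (t.unsqueeze(1) - t))
    return weights @ x

x = torch.randn(8, 4)   # (time, channels)
h = decayed_scan(x)     # matches the sequential loop
```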
@AiEleuther
EleutherAI
4 months
We're excited to announce that @lintangsutawika and @haileysch__ will be at ICML 2024 in Vienna on July 22 to present a tutorial on "Challenges in Language Model Evaluations"! Website:
1
5
43
@AiEleuther
EleutherAI
2 years
This is some really phenomenal work out of @StanfordHAI . Evaluation work, like data work, is massively understudied and undervalued. But work like this has far more impact than a half dozen mediocre papers about minor tweaks to transformer architectures
@Tianyi_Zh
Tianyi Zhang
2 years
Two lessons we learned through HELM (Sec 8.5.1; ): 1. CNN/DM and XSum reference summaries are worse than summaries generated by finetuned LMs and zero-/few-shot large LMs. 2. Instruction tuning, not scale, is the key to “zero-shot” summarization.
3
27
110
1
7
41
@AiEleuther
EleutherAI
2 years
Interested in studying formal proofs with LLMs? Check out ProofNet, a new benchmark for theorem proving and autoformalization of undergraduate-level mathematics by @zhangir_azerbay @haileysch__ and others. Follow-up work is already in progress!
@zhangir_azerbay
Zhangir Azerbayev
2 years
How good are language models at formalizing undergraduate math? We explore this in "ProofNet: autoformalizing and formally proving undergraduate-level mathematics" Thread below. 1/n
3
56
178
0
7
42
@AiEleuther
EleutherAI
1 year
Congrats to @StabilityAI and their collaborators. We are excited to see people continuing to push for non-English non-Chinese LLM research, and thrilled that they're finding our libraries including GPT-NeoX and lm-eval useful! To get started on your next LLM project, check out 👇
@StabilityAI
Stability AI
1 year
Today, we are releasing our first Japanese language model (LM), Japanese StableLM Alpha. It is currently the best-performing openly available LM created for Japanese speakers! ↓
Tweet media one
16
54
284
1
6
40
@AiEleuther
EleutherAI
2 years
@databricks GPT-J-6B might be “old” but it’s hardly slowing down. Month after month it’s among the most downloaded GPT-3 style models on @huggingface , and no billion+ param model has ever come close (“gpt2” is the 125M version, not the 1.3B version).
Tweet media one
1
5
37
@AiEleuther
EleutherAI
2 years
@BigscienceW This is just the beginning of our work on non-English and multilingual NLP. We have a 6B Korean model currently training, and plans to expand to East Asian and Nordic language families next! Keep an eye on our GitHub or stop by #polyglot on our Discord!
2
1
36
@AiEleuther
EleutherAI
2 years
As access to LLMs has increased, our research has shifted to focus more on interpretability, alignment, ethics, and evaluation of AIs. We look forward to continuing to grow and adapt to the needs of researchers and the public Check out our latest work at
1
1
37
@AiEleuther
EleutherAI
8 months
EleutherAI is excited to collaborate with NIST in its newly formed AI Safety Institute Consortium (AISIC) to establish a new measurement science for safe AI systems. See the official announcement here: #AISIC @NIST @CommerceGov
1
5
37
@AiEleuther
EleutherAI
10 months
RWKV substantially lags behind S4 on the long range arena benchmark, as well as subsequent work by @_albertgu et al. @HazyResearch such as SGConv and Mamba. It remains to be seen if that's a killer for NLP applications. Note that the scores are nearly identical for the text task.
Tweet media one
1
4
36
@AiEleuther
EleutherAI
10 months
Very cool work! @_albertgu has been pushing on state-space models for some time now and the release of billion-parameter scale models is a big step forward for this line of work. We look forward to the community testing the models out!
@_albertgu
Albert Gu
10 months
Quadratic attention has been indispensable for information-dense modalities such as language... until now. Announcing Mamba: a new SSM arch. that has linear-time scaling, ultra long context, and most importantly--outperforms Transformers everywhere we've tried. With @tri_dao 1/
Tweet media one
54
418
2K
0
3
36
@AiEleuther
EleutherAI
2 years
Public release makes AI models better, more diverse, and spreads their benefits more widely.
@jlondonobo
Jose Londono
2 years
Just a few days into @huggingface and @LambdaAPI 's Whisper fine-tuning event and have already seen huge breakthroughs in multilingual ASR. Very smart people working on this. Here's the SOTA whisper-based module I fine-tuned for Portuguese 🇧🇷🇵🇹
1
9
64
2
8
35
@AiEleuther
EleutherAI
1 year
HF transformers, Megatron-DeepSpeed, and now Lit-GPT... what will be the next framework to support our language model evaluation harness?
@LightningAI
Lightning AI ⚡️
1 year
Use Lit-GPT to evaluate and compare LLMs on 200+ tasks with a single command. Try it ➡️ #MachineLearning #LLM #GPT
Tweet media one
4
9
45
3
3
35
@AiEleuther
EleutherAI
2 months
We were very happy with the reception to our researchers @lintangsutawika and @haileysch__ 's ICML tutorial, "Challenges in LM Evaluation", this past week! For all those who requested it, the slides are now available at . Enjoy!
1
12
36
@AiEleuther
EleutherAI
1 year
We are thrilled to share the latest in our collaboration with @EnricoShippole and @NousResearch on sequence length extension. We're now pushing sequence lengths that will enable work in malware detection and biology that is currently hamstrung by sequence length limitations!
@EnricoShippole
EnricoShippole
1 year
Releasing Yarn-Llama-2-13b-128k, a Llama-2 model, trained for 128k context length using YaRN scaling. The model was trained in collaboration with u/bloc97 and @theemozilla of @NousResearch and @Void13950782 of @AiEleuther .
Tweet media one
28
173
781
0
3
36
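For intuition, the simplest form of RoPE-based context extension is position interpolation, which squeezes new positions into the trained range; YaRN refines this idea considerably. A toy sketch of the basic mechanism only, not the YaRN method itself:

```python
import torch

def interpolated_positions(seq_len, trained_ctx=4096):
    """Position interpolation: rescale positions so a longer sequence
    maps back into the position range the model was trained on."""
    scale = max(1.0, seq_len / trained_ctx)
    return torch.arange(seq_len) / scale
```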
@AiEleuther
EleutherAI
6 months
A new minor version release, 0.4.2, of the lm-evaluation-harness is available on PyPI! 1/n
1
6
35
@AiEleuther
EleutherAI
2 years
Transparency about whose data is contained in datasets is an essential first step towards establishing meaningful provenance and consent. We applaud @BigCodeProject 's efforts in this regard and look forward to implementing similar techniques for our future datasets.
@julien_c
Julien Chaumond
2 years
Yay, I have 29 of my GH repositories included in The Stack 😎 Prepare for some very good quality codes 🤪 The Stack:
Tweet media one
5
3
38
0
10
34
@AiEleuther
EleutherAI
1 year
Congrats to @BlinkDL_AI and the team :) We hope to have a paper about RWKV out by the end of the month!
@huggingface
Hugging Face
1 year
The first RNN in transformers! 🤯 Announcing the integration of RWKV models in transformers with @BlinkDL_AI and RWKV community! RWKV is an attention free model that combines the best from RNNs and transformers. Learn more about the model in this blogpost:
Tweet media one
18
265
1K
0
7
33
@AiEleuther
EleutherAI
1 year
Even after only 55% of the training, @BlinkDL_AI ’s multilingual RWKV “World” model is the best open source Japanese LLM in the world! Check out the paper: Code and more models can be found at:
@BlinkDL_AI
BlinkDL
1 year
The JPNtuned 7B #RWKV World is the best open-source Japanese LLM 🚀Runner: Model (55% trained, finishing in a few days): More languages are coming🌍RWKV is 100% RNN
Tweet media one
1
42
139
1
8
33
@AiEleuther
EleutherAI
1 year
If you’re attending #ACL2023NLP or #icml2023 don't miss our seven exciting papers on crosslingual adaption of LLMs, the Pythia model suite, novel training methodologies for LLMs, data trusts, and more! 🧵
1
6
33
@AiEleuther
EleutherAI
7 months
Interested in practical strategies to continually pre-train existing models on new data? Take a look at the recent paper from @AiEleuther and @irinarish 's CERC lab, part of our joint INCITE grant!
@benjamintherien
Benjamin Thérien
7 months
Interested in seamlessly updating your #LLM on new datasets to avoid wasting previous efforts & compute, all while maintaining performance on past data? Excited to present Simple and Scalable Strategies to Continually Pre-train Large Language Models! 🧵 1/N
Tweet media one
4
49
161
2
4
30
@AiEleuther
EleutherAI
2 years
The world has changed quite a lot since we first got started. When EleutherAI was founded, the largest open source GPT-3-style language model in the world had 1.5B parameters. GPT-3 itself was not available for researchers to study without special access from OpenAI, and
1
0
31
@AiEleuther
EleutherAI
11 months
This is deeply necessary work and a heroic effort by Shayne et al. "This is the best NLP data work of 2023." @BlancheMinerva "If there's anything less glamorous yet higher-impact in ML than looking at the data, it's doing due diligence on licensing." @haileysch__
@ShayneRedford
Shayne Longpre
11 months
📢Announcing the🌟Data Provenance Initiative🌟 🧭A rigorous public audit of 1800+ instruct/align datasets 🔍Explore/filter sources, creators & license conditions ⚠️We see a rising divide between commercially open v closed licensed data 🌐: 1/
10
148
462
1
6
31
@AiEleuther
EleutherAI
10 months
Looking for something to check out on the last day of #NeurIPS2023 ? Come hang out with EleutherAI @solarneurips . @BlancheMinerva is speaking on a panel, and @jacob_pfau @alexinfanger Abhay Sheshadri, Ayush Panda, Curtis Huebner and @_julianmichael_ have a poster in Room R06-R09
Tweet media one
Tweet media two
0
5
31
@AiEleuther
EleutherAI
10 months
Interested in our recent paper "Llemma: An Open Language Model For Mathematics"? Check out this summary by @unboxresearch Or dig into our work directly Paper: Code:
@unboxresearch
Unbox Research
10 months
I can imagine a future where advanced mathematics has completely changed. What makes math challenging today is the ability to learn abstract technical concepts, as well as the ability to construct arguments that solve precise logical problems. [article: ]
1
6
24
2
7
30
@AiEleuther
EleutherAI
10 months
It was great to see a lot of excitement about attention-free models @NeurIPSConf ! We had great conversations with many people interested in next-gen architectures for language models. Pic from "Systems for foundation models and foundation models for systems" by Chris Ré
Tweet media one
1
3
31
@AiEleuther
EleutherAI
2 years
@arankomatsuzaki has graduated from explaining other people’s papers in Discord and on Twitter to doing it at conferences when the author misses their poster session
@MichaelTrazzi
Michaël Trazzi (in SF)
2 years
Aran Komatsuzaki giving walkthroughs of the codeRL paper before the author arrives. After 10 minutes of SBFing his way into answering poster questions he revealed he was not the author and everyone lost their mind (Poster 138 #NeurIPS2022 )
9
33
588
1
2
31
@AiEleuther
EleutherAI
2 years
Benchmark results show that the models have performance comparable to or better than the best publicly available Korean language models, including Facebook's 7.5B xGLM and Kakao Brain's 6.0B koGPT model. We do not show @BigscienceW 's BLOOM models as they are not trained in Korean
Tweet media one
Tweet media two
2
1
31
@AiEleuther
EleutherAI
2 years
EleutherAI is blessed to have ungodly amounts of compute for a research non-profit. Part of that blessing though is a responsibility to develop things that are interesting and useful not just to us, but to the many researchers who wouldn’t have been able to do this themselves.
@AiEleuther
EleutherAI
2 years
We are currently using these models to investigate a variety of phenomena (expect initial papers within the month!), but are making the models public now because we believe that these models will be widely useful to the NLP community writ large and don't want to make others wait
1
0
14
0
2
30
@AiEleuther
EleutherAI
10 months
We present the first compute-optimal scaling laws analysis of a large RNN, finding highly predictable scaling across runs. Unfortunately we don't sample densely enough to estimate the optimal tokens-per-parameter ratio, but we plan to in future work.
Tweet media one
1
4
29
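Compute-optimal analyses of this kind typically fit a power law, loss ≈ a·C^b, to (compute, loss) pairs across runs. A generic sketch with made-up numbers, not the paper's data or exact fitting procedure:

```python
import numpy as np

# Illustrative (compute in FLOPs, final loss) pairs, one per run.
compute = np.array([1e18, 1e19, 1e20, 1e21])
loss = np.array([3.1, 2.7, 2.4, 2.15])

# Fit loss ≈ a * C**b by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
print(f"loss ≈ {np.exp(log_a):.2f} * C^{b:.3f}")
```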
@AiEleuther
EleutherAI
2 years
Excellent and timely reminder from the FTC. The question, as always, is whether the USG will be able to bring itself to levy meaningful penalties that actually deter illegal behavior by companies. h/t: @emilymbender
2
8
29
@AiEleuther
EleutherAI
2 years
We are also introducing DeeperSpeed v2.0, which will be synced with the latest upstream DeepSpeed. It also provides GPT-NeoX-specific bugfixes and features and additional optimizations specific to EleutherAI's HPC providers ( @StabilityAI @CoreWeave @ORNL )
1
3
28
@AiEleuther
EleutherAI
10 months
Really great work by @guitaricet that we were thrilled to sponsor.
@guitaricet
Vlad Lialin
10 months
Parameter-efficient methods revolutionized the accessibility of LLM fine-tuning, but can they do pre-training? Today at NeurIPS Workshop on Advancing Neural Network Training we present ReLoRA — the first PEFT method that can be used for LLMs at scale!
Tweet media one
7
48
220
0
4
28
@AiEleuther
EleutherAI
8 months
We envision a world where "safety" isn't dictated by model developers but is something that downstream deployers have agency over. For a small step in this direction, check out the latest work by @lcastricato @haileysch__ @BlancheMinerva and their collaborators.
@synth_labs
SynthLabs
8 months
PINK ELEPHANTS! 🐘 Now, don’t think about it. Chatbots also find this supremely difficult. Ask one of the most popular open source models NOT to talk about pink elephants, and it will fail 34% of the time. In our new paper, we address this problem. 1/N
Tweet media one
4
20
76
0
7
28
@AiEleuther
EleutherAI
7 months
@ClementDelangue
clem 🤗
7 months
We just crossed 100,000 organizations on HF! Some of my favorites: - The MLX community for on-device AI: - The @AiEleuther org with over 150+ datasets: - The @Bloomberg org to show big financial institutions can use the hub:
18
37
270
0
7
27
@AiEleuther
EleutherAI
11 months
The biggest issue with the FMTI is that it's not what it purports to be: instead of focusing on transparency, most of the questions are closer to "being a good product." An extremely transparent LLM can score as low as 30/100 on the FMTI! See how here:
1
1
27
@AiEleuther
EleutherAI
1 year
If you're interested in eliciting and editing knowledge in neural networks, don't miss @norabelrose 's talk on her recent and upcoming research. These ideas form one of the core interpretability research areas at EleutherAI.
@CohereForAI
Cohere For AI
1 year
Thank you to @norabelrose who gave an engaging presentation on Concept Erasure and Elicit Latent Knowledge to our open science community this week. ✨ Thanks @oohaijen and @jonas_kg for hosting. 📹 Catch the replay here
1
2
11
1
4
27
@AiEleuther
EleutherAI
11 months
Instead, it shoehorns questions about "impact", "risks" and "mitigation" under the umbrella of "transparency." These are important things, certainly. But they're not transparency and pretending they are muddles the conversation about responsible AI.
2
2
26
@AiEleuther
EleutherAI
1 year
RWKV isn’t without its flaws. While we do approximately match the performance of transformers, our anecdotal experience is that it’s more sensitive to prompts and struggles to incorporate very long range information more than traditional transformers do.
1
0
26
@AiEleuther
EleutherAI
1 year
We're thrilled to be sponsoring @cv4ecology with >12,000 A6000-hours of compute. Sharing innovations in ML beyond "core ML" applications is an essential and underfunded job that we are proud to play a part in. The world needs better ecological work more than another LLM.
@cv4ecology
CV4Ecology Workshop
1 year
Week 1 (of 3) in the books at #CV4Ecology2023 ! We're working hard while still enjoying the California sunshine, and celebrating our wins as they come 😀!
Tweet media one
0
10
43
0
1
26
@AiEleuther
EleutherAI
1 year
@QuentinAnthon15 @BlancheMinerva @haileysch__ This is the first in a series of blog posts on implementation details for large scale distributed DL that are far too often skimmed over in papers and articles. Stay tuned for more, including how to choose your parallelization and a deeper dive on FLOPs, latency, and perf metrics
0
3
26
@AiEleuther
EleutherAI
4 months
We are very excited to have this long-awaited feature finally live! This will substantially enhance people's ability to study chat-finetuned models.
@KonradSzafer
Konrad Szafer
4 months
New feature in the @AiEleuther 's Harness (lm-eval): chat templates for @huggingface models! You can now: - evaluate chat models fairly, in a turn-by-turn fashion - and specify system prompts to tailor the model behavior!
Tweet media one
4
2
21
2
3
26
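A sketch of how the new options surface in the harness's Python entry point (parameter names follow the feature announcement; verify against your installed lm-eval version, and the model/task here are arbitrary examples):

```python
import lm_eval

# Hedged sketch: evaluate a chat model turn-by-turn with its own
# chat template and a custom system prompt.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=meta-llama/Llama-2-7b-chat-hf",
    tasks=["gsm8k"],
    apply_chat_template=True,
    system_instruction="You are a helpful assistant.",
)
```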