Unsloth AI
@UnslothAI
5,965 Followers · 380 Following · 30 Media · 159 Statuses

Open source fine-tuning of LLMs! 🦥 GitHub: Discord:

San Francisco
Joined November 2023
Pinned Tweet
@UnslothAI
Unsloth AI
2 months
Llama 3.1 support is here! Unsloth supports 48K context lengths for Llama 3.1 (70B) on an 80GB GPU - 6x longer than HF+FA2. QLoRA fine-tuning Llama 3.1 (70B) is 1.9x faster, uses 65% less VRAM & Llama 3.1 (8B) is 2.1x faster and fits in an 8GB GPU! Blog:
5 replies · 33 retweets · 197 likes
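For readers who want to reproduce this, here is a minimal sketch of what QLoRA fine-tuning Llama 3.1 (8B) with Unsloth looks like, following the pattern of Unsloth's public Colab notebooks. The model repo id, dataset, and hyperparameters below are illustrative assumptions, not values taken from this tweet:

```python
# Minimal Unsloth QLoRA fine-tuning sketch (assumed repo id & settings).
import torch
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load Llama 3.1 (8B) pre-quantized to 4-bit; this is what lets it fit in ~8GB.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # assumed HF repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    use_gradient_checkpointing="unsloth",  # Unsloth's offloaded checkpointing
)

# Example dataset; map its columns into a single "text" field for SFT.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n### Response:\n{ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```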
@UnslothAI
Unsloth AI
2 months
We made a step-by-step tutorial on how to finetune Llama-3 with Google Colab & deploy it to @Ollama. Tutorial: Colab notebook: Blog post & video coming soon. 🦥
7 replies · 122 retweets · 579 likes
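As a rough sketch of the deployment half of that tutorial: after training, Unsloth can export the merged model to GGUF, which is the format Ollama serves. The output directory name, quantization method, and resulting file name below are assumptions:

```python
# Sketch: export a fine-tuned Unsloth model to GGUF for Ollama.
# `model` and `tokenizer` come from a finished Unsloth training run.
model.save_pretrained_gguf(
    "llama3_finetune",             # hypothetical output directory
    tokenizer,
    quantization_method="q4_k_m",  # a common 4-bit GGUF quantization
)

# Then point an Ollama Modelfile at the produced .gguf file, e.g.:
#   FROM ./llama3_finetune/unsloth.Q4_K_M.gguf
# and create/run the model with:
#   ollama create my-llama3 -f Modelfile
#   ollama run my-llama3
```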
@UnslothAI
Unsloth AI
5 months
Long-context Llama 3 finetuning is here! 🦙 Unsloth supports 48K context lengths for Llama-3 70b on an 80GB GPU - 6x longer than HF+FA2. QLoRA finetuning Llama-3 70b is 1.8x faster, uses 68% less VRAM & Llama-3 8b is 2x faster and fits in an 8GB GPU! Blog:
4 replies · 60 retweets · 349 likes
@UnslothAI
Unsloth AI
3 months
Unsloth now allows you to do continued pretraining with QLoRA 2x faster and with 50% less VRAM than Hugging Face+FA2. Continued pretraining allows models to train on new domain data. Read our blog:
7 replies · 41 retweets · 205 likes
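To make the idea concrete, here is a hedged sketch of continued pretraining with Unsloth: LoRA adapters (including the embedding and output layers, so new-domain vocabulary can shift) are trained on raw domain text, with a smaller learning rate for the embeddings. The trainer classes follow Unsloth's continued-pretraining release; the repo id, file name, and hyperparameters are assumptions:

```python
# Continued pretraining sketch: train on raw domain text (assumed settings).
from unsloth import FastLanguageModel, UnslothTrainer, UnslothTrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-v0.3-bnb-4bit",  # assumed HF repo id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Include embed_tokens & lm_head so the model can absorb new-domain tokens.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj",
                    "embed_tokens", "lm_head"],
    use_gradient_checkpointing="unsloth",
)

# Plain text corpus from the new domain (hypothetical file).
raw_text = load_dataset("text", data_files="domain_corpus.txt", split="train")

trainer = UnslothTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=raw_text,
    dataset_text_field="text",
    max_seq_length=2048,
    args=UnslothTrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        max_steps=120,
        learning_rate=5e-5,
        embedding_learning_rate=5e-6,  # smaller LR for the embedding layers
        output_dir="outputs",
    ),
)
trainer.train()
```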
@UnslothAI
Unsloth AI
2 months
You can now use CSV/Excel files for fine-tuning & directly export models to @Ollama! Unsloth also now supports datasets with more than 3 columns, automatically merging them into one prompt for fine-tuning. Blog: Tutorial:
1 reply · 38 retweets · 197 likes
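To illustrate what "merging multiple columns into one prompt" means (Unsloth's blog documents a built-in helper for this; the sketch below does the same thing by hand, with hypothetical column names):

```python
# Sketch: turn a multi-column CSV into single-prompt training text.
from datasets import load_dataset

dataset = load_dataset("csv", data_files="data.csv", split="train")  # hypothetical file

def merge_columns(row):
    # Hypothetical columns: 'title' and 'description' are inputs, 'answer' is the target.
    return {"text": (f"Title: {row['title']}\n"
                     f"Description: {row['description']}\n"
                     f"### Answer:\n{row['answer']}")}

dataset = dataset.map(merge_columns)
# The merged "text" column can then be fed to a trainer via dataset_text_field="text".
```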
@UnslothAI
Unsloth AI
4 months
We have resolved issues with training Llama 3, so finetuning is much better now! Unsloth now supports the new Phi-3 models, Mistral v3, Qwen and more! Read our blog:
1 reply · 37 retweets · 194 likes
@UnslothAI
Unsloth AI
2 months
Gemma 2 support is here! Unsloth supports 50K context lengths for Gemma 2 (9B) on an 80GB GPU - 5x longer than HF+FA2. QLoRA finetuning Gemma 2 (27B) is 1.9x faster, uses 53% less VRAM & Gemma 2 (9B) is 2x faster, uses 63% less VRAM + fits in an 8GB GPU! Blog:
2 replies · 31 retweets · 183 likes
@UnslothAI
Unsloth AI
19 days
You can now fine-tune Microsoft's new Phi-3.5 (mini) model 2x faster with 50% less memory using Unsloth! Free Colab notebook: We also 'Llamified' the models for improved accuracy and uploaded them to Hugging Face:
6 replies · 28 retweets · 181 likes
@UnslothAI
Unsloth AI
2 months
Mistral's new model, NeMo (12B), is now supported! Unsloth makes finetuning NeMo fit in a 12GB GPU! QLoRA training is 2x faster, uses 60% less memory & we support 3-4x longer context lengths than HF+FA2. Read our blog:
4 replies · 27 retweets · 163 likes
@UnslothAI
Unsloth AI
6 months
Unsloth is trending on GitHub this week! 🙌🦥 Thanks to everyone & all the ⭐️Stargazers for the support! Check out our repo:
2 replies · 17 retweets · 128 likes
@UnslothAI
Unsloth AI
6 months
Over 1,000 models have now been trained with Unsloth & shared on @HuggingFace 🤗🦥 Also special thanks to the HF team for the custom icon on our tag! See models trained with Unsloth:
2 replies · 21 retweets · 119 likes
@UnslothAI
Unsloth AI
5 months
Unsloth now supports fine-tuning of LLMs with 4x longer context windows! We managed to reduce memory usage by a further 30% at the cost of +1.9% extra time overhead. Read our blog:
5 replies · 17 retweets · 111 likes
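In practice the saving is exposed through a single option on the LoRA setup; a minimal sketch, with the repo id and context length assumed:

```python
# Sketch: enabling Unsloth's offloaded gradient checkpointing.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed HF repo id
    max_seq_length=8192,  # longer contexts become feasible with the savings
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    # "unsloth" selects the offloaded checkpointing this tweet describes:
    # roughly 30% lower activation memory for ~1.9% extra step time.
    use_gradient_checkpointing="unsloth",
)
```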
@UnslothAI
Unsloth AI
1 month
.@Google releases a new Gemma 2 model with 2B parameters & it's the best-performing model for its size! Unsloth makes Gemma 2 (2B) QLoRA fine-tuning 2x faster with 65% less memory. Did you know the Gemma 2 models are Unsloth's 2nd most popular models to fine-tune with (250K
2 replies · 18 retweets · 109 likes
@UnslothAI
Unsloth AI
4 months
We have hit 1 million monthly downloads on @HuggingFace! 🥳🦥 Thanks to every single person who has used Unsloth or downloaded our models! ❤️
6 replies · 16 retweets · 104 likes
@UnslothAI
Unsloth AI
5 months
Unsloth is currently trending on GitHub! 🙌🦥 If you want to finetune LLMs like Llama 3 or Mistral, now is a good time to try! ⭐️
4 replies · 17 retweets · 101 likes
@UnslothAI
Unsloth AI
28 days
We just hit 2 million monthly downloads on @HuggingFace! 🦥🥳 Over 13K models trained with Unsloth have also been uploaded to Hugging Face. Huge thanks to the Unsloth community, the model teams and the HF team! 🤗
5 replies · 12 retweets · 96 likes
@UnslothAI
Unsloth AI
5 months
Unsloth has surpassed 500K+ monthly model downloads on @HuggingFace! 🥳🦥 Thanks for the support! 🩷 See our collection of 4-bit models + more:
1 reply · 12 retweets · 93 likes
@UnslothAI
Unsloth AI
3 months
We'll be live on @Ollama's server at 12pm ET today to show our new support for Ollama! 🦥🦙 First, learn with Sebastien about 'Emotions in AI'; then we'll teach & give early access to our guide on how to finetune a model & deploy it to Ollama. Join us here:
2 replies · 9 retweets · 83 likes
@UnslothAI
Unsloth AI
7 months
You can now QLoRA finetune Gemma 7B 2.43x faster & use 58% less VRAM than @HuggingFace + FA2 via Unsloth!🦥 When compared to vanilla HF, we're 2.53x faster & use 70% less VRAM. We have a blog on our learnings (like a RoPE bug, GeGLU) + Colab notebooks!
2 replies · 9 retweets · 65 likes
@UnslothAI
Unsloth AI
8 months
We're thrilled to share that @HuggingFace has published a blog post about our exciting collab! We've been working hard behind the scenes & are eager for you to learn more about what we've been up to.🦥
0 replies · 14 retweets · 57 likes
@UnslothAI
Unsloth AI
4 months
We're so happy to announce that Unsloth is part of the 2024 @GitHub Accelerator program!🦥 If you want to easily fine-tune LLMs like Llama 3, now is the perfect time!
3 replies · 7 retweets · 56 likes
@UnslothAI
Unsloth AI
8 months
• Finetune TinyLlama 387% faster
• 600% faster GGUF conversion
• 188% faster DPO
• 400% faster model downloads
All part of the new Unsloth AI release! 🦥 Read our blog:
1 reply · 6 retweets · 51 likes
@UnslothAI
Unsloth AI
6 months
Unsloth has hit 125K+ monthly model downloads on @HuggingFace! 🥳🦥 Thanks for the support! 🩷 See our collection of 4-bit models (4x faster downloading) + more:
1 reply · 5 retweets · 43 likes
@UnslothAI
Unsloth AI
2 months
@danielhanchen We have uploaded 4-bit bnb quants for now and are working on Llama 3.1 support!
Llama 3.1 (8B) 4-bit:
Llama 3.1 (8B) Instruct 4-bit:
Llama 3.1 (70B) 4-bit:
Llama 3.1 (70B) Instruct 4-bit:
3 replies · 7 retweets · 42 likes
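These pre-quantized checkpoints load like any other Hugging Face model as long as bitsandbytes is installed, since the quantization config ships inside the checkpoint. A sketch, with the repo id assumed:

```python
# Sketch: load a pre-quantized 4-bit bnb checkpoint (requires bitsandbytes).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"  # assumed HF repo id
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)
```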
@UnslothAI
Unsloth AI
6 months
A huge thank you to @UnderscoreTalk and @YKilcher for sharing Unsloth! 🦥🩷 Underscore's latest AI apps video (French): Yannic's latest ML News video: And of course, thanks to everyone else for the constant support! 😊
1 reply · 2 retweets · 23 likes
@UnslothAI
Unsloth AI
3 months
Tomorrow we will be handing out our new stickers at the @aiDotEngineer World's Fair! 🦥 Join us at 9AM, June 25, where we will be doing workshops on LLM analysis + technicals, @Ollama support & more! We'll also be giving a talk on fixing LLM bugs at 2:20PM, June 26!
3 replies · 5 retweets · 20 likes
@UnslothAI
Unsloth AI
15 days
@_philschmid @Microsoft Thank you so much Philipp & Hugging Face for the support! And the community loves & appreciates the Microsoft AI team for these models. 🤗
1 reply · 1 retweet · 16 likes
@UnslothAI
Unsloth AI
9 months
Introducing 🦥 You can now train a personal ChatGPT in a few hours instead of 54 days! 30x faster, 60% less memory usage with 0% loss in accuracy. We also released an open source version which finetunes Llama 5x faster and uses 50% less memory!
1 reply · 5 retweets · 12 likes
@UnslothAI
Unsloth AI
3 months
@MaziyarPanahi Hi there, yes, Qwen 110B will work on an H100! You only need 76GB of VRAM or so with Unsloth. See our blog for more details:
@danielhanchen
Daniel Han
5 months
Llama-3 70b QLoRA finetuning is 1.83x faster & uses 63% less VRAM than HF+FA2
1. Llama-3 70b + Unsloth can fit 48K context lengths at bsz=1 on an A100 80GB (6x longer than FA2) with +1.9% overhead
2. Llama-3 8b QLoRA fits in an 8GB card & is 2x faster, uses 68% less VRAM. Can fit
7 replies · 60 retweets · 318 likes
1 reply · 2 retweets · 9 likes
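A rough back-of-the-envelope check on that 76GB figure (our arithmetic, not from the tweet): 4-bit weights cost about half a byte per parameter, so a 110B-parameter base is ~55GB, with LoRA adapters, optimizer state, and activations making up the rest:

```python
# Back-of-the-envelope VRAM estimate for QLoRA on a 110B-parameter model.
params = 110e9
bytes_per_param_4bit = 0.5  # 4 bits per weight
weights_gb = params * bytes_per_param_4bit / 1e9
print(f"4-bit base weights: ~{weights_gb:.0f} GB")  # ~55 GB
# Adapters + optimizer state + activations account for the gap up to ~76 GB.
```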
@UnslothAI
Unsloth AI
5 months
This works on all model architectures which use gradient checkpointing (e.g. Stable Diffusion, Mamba, etc.). See the bar graph for memory-saving benchmarks:
1 reply · 0 retweets · 9 likes
@UnslothAI
Unsloth AI
2 months
@MervinPraison @ollama @huggingface @danielhanchen Thank you Mervin for this incredible and in-depth video as always! 😀
0 replies · 1 retweet · 9 likes
@UnslothAI
Unsloth AI
29 days
@hokazuya Should be fixed now, so please update Unsloth. You can now use xformers & TRL with any version! Please let us know if it works. 🙏
1 reply · 0 retweets · 6 likes
@UnslothAI
Unsloth AI
2 months
@rohanpaul_ai @MistralAI @nvidia @danielhanchen Thank you so much Rohan as always for supporting Unsloth! 🦥 Hope you will like Unsloth Studio (our upcoming UI) which will hopefully be out next week. 🥰
1 reply · 0 retweets · 6 likes
@UnslothAI
Unsloth AI
2 months
@mpowers206 @ollama Very soon! We're already rolling out multiGPU beta access for a lot of folks! 👍
0 replies · 0 retweets · 6 likes
@UnslothAI
Unsloth AI
1 month
@maximelabonne @huggingface Thank you for the support Maxime and this guide is just incredible! 🔥🙏
0 replies · 0 retweets · 6 likes
@UnslothAI
Unsloth AI
3 months
@ollama A huge thank you to the @Ollama team for inviting us! Sebastien was amazing & we learnt so much from him! 🙏♥️
1 reply · 1 retweet · 4 likes
@UnslothAI
Unsloth AI
2 months
@hsinskip Thank you so much Henry for the support! We've got a UI coming next week which will make AI even more accessible! :)
1 reply · 0 retweets · 4 likes
@UnslothAI
Unsloth AI
1 month
@onegaspine Thank you so much for using Unsloth! 🥰🦥 And yes, people love the VRAM reductions!
0 replies · 0 retweets · 4 likes
@UnslothAI
Unsloth AI
29 days
@hokazuya Thank you so much hokazuya! Glad it's working now! We're also going to release a UI either next week or the week after, to make finetuning even easier and more accessible! 🦥
0 replies · 0 retweets · 4 likes
@UnslothAI
Unsloth AI
2 months
@kaggle @danielhanchen Thank you for supporting open source! ♥️🦥
0 replies · 0 retweets · 4 likes
@UnslothAI
Unsloth AI
3 months
@eugeneyalt @willpienaar @sh_reya @ShreyaR @shrumm @devstein64 @tristanzajonc @danielhanchen @aiDotEngineer Thank you so much for coming! Hopefully you managed to grab some of our stickers! 🦥♥️
0 replies · 0 retweets · 2 likes
@UnslothAI
Unsloth AI
2 months
@MarvinGabler @danielhanchen Thank you so much Marvin for coming and glad you enjoyed our talk! 🦥♥️
0 replies · 0 retweets · 3 likes
@UnslothAI
Unsloth AI
19 days
@op7418 We just saw this but thank you so much for the support! ❤️
0 replies · 0 retweets · 2 likes
@UnslothAI
Unsloth AI
18 days
@gentonje .@maximelabonne wrote a lovely beginner's guide on using Llama 3.1 with Unsloth, so be sure to check it out!
@maximelabonne
Maxime Labonne
1 month
🦥 Fine-tune Llama 3.1 Ultra-Efficiently with @UnslothAI
New comprehensive guide about supervised fine-tuning on @huggingface. Over the last year, I've done a lot of fine-tuning and blogging. This guide brings it all together. 📝 Article:
11 replies · 85 retweets · 452 likes
1 reply · 1 retweet · 2 likes
@UnslothAI
Unsloth AI
2 months
@TheLouisDupont Yes, we support all the popular models like Llama 3, Gemma 2, Qwen2, Phi-3 and more, and support for all models is coming soon! Any model we upload to our @HuggingFace page is also supported:
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
30 days
@hokazuya Hey, thanks for using Unsloth! We are currently working on updating the TRL dependency to the latest version. Will let you know when it's done! 👌
1 reply · 0 retweets · 1 like
@UnslothAI
Unsloth AI
1 month
@tuturetom @danielhanchen Thank you so much Tom for the constant support! ♥️🙏
1 reply · 0 retweets · 1 like
@UnslothAI
Unsloth AI
11 days
@AIMakerspace Thank you so much for inviting us! 🙏 Can't wait to get started! 🦥
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
2 months
@lily_xlz @aiDotEngineer @danielhanchen Thank you so much Lily for coming!! ♥️🦥
1 reply · 0 retweets · 1 like
@UnslothAI
Unsloth AI
28 days
@fouriergalois @huggingface Thank you so much for the constant support Kendrick! 🦥♥️
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
26 days
@VContribution @Humbertoblood Thank you so much for using Unsloth and glad Maxime Labonne's guide was useful! 😃
1 reply · 0 retweets · 1 like
@UnslothAI
Unsloth AI
2 months
@newplatonism @danielhanchen We are working on 8-bit support for Unsloth. We should still be able to upload 8-bit quants, hopefully soon! 🤞
1 reply · 0 retweets · 1 like
@UnslothAI
Unsloth AI
3 months
@felix_red_panda @aiDotEngineer @ollama @danielhanchen Will let you all know if we can deliver some to you guys instead! 🦥
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
1 month
@fahdmirza @YouTube @danielhanchen Thank you so much Fahd as usual for the constant support! 🙏😃
0 replies · 1 retweet · 1 like
@UnslothAI
Unsloth AI
1 month
@waydegilliam @danielhanchen Hey Wayde, if you are using 2 GPUs, Unsloth is actually faster on 1 GPU than on 2. As for multiGPU support, we gave a few people early access and will hopefully be rolling it out next month via subscription! 🙂
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
1 month
@RisingSayak @danielhanchen We're working on it. It will take some time though. We're trying to push out a UI first ASAP 🙏
0 replies · 0 retweets · 1 like
@UnslothAI
Unsloth AI
2 months
@hackwithzach @aiDotEngineer Thank you so much for coming & glad you enjoyed our talk Zach! We really appreciate it! 😀🦥
0 replies · 0 retweets · 1 like