Lamini (@LaminiAI)
6,291 Followers · 10 Following · 38 Media · 298 Statuses

The LLM tuning & inference platform for enterprises. Factual LLMs. Deployed anywhere.

Joined April 2023
@LaminiAI
Lamini
1 year
🎉 Big secret! We’ve been running on @AMD Instinct™ GPUs in production for over a year. 🤝 Thrilled to now partner with AMD to offer GPU-rich enterprise LLMs! 🥳 LLM Superstation – combining Lamini's LLM infrastructure with AMD Instinct. 👉 Learn more:
@realSharonZhou
Sharon Zhou
1 year
Excited to announce a HUGE secret with @LisaSu : @LaminiAI has been building LLMs on @AMD GPUs *in production* for over the past year! We’ve made running LLMs on AMD super easy and a highly competitive option through our LLM Superstation, available now at ~10x lower cost than
@LaminiAI
Lamini
2 months
Like many startups, our tech is possible because of access to open source LLMs. @realSharonZhou @matthew_d_white @starlordxie and @pentagoniac recently discussed the importance of an open ecosystem and implications of SB 1047. Thanks to @AIatMeta and @cerebral_valley for
@LaminiAI
Lamini
1 year
Training multiple LLMs taking forever? 😤 Costing you a fortune?💸 Enter PEFT! Get ready to multiply!! 🚀 1000 models, just 1 machine! 🤖 3 months of training -> 3 milliseconds ⚡️ Just one API call, load and train with Lamini! 👉 👀
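For readers curious how 1,000 tuned models can share one machine, here is a minimal sketch using the open-source transformers and peft libraries (not Lamini's internal stack); the model ID and adapter paths are placeholders.

```python
# With PEFT (e.g. LoRA), each "model" is just a small adapter on top of a
# shared base, so many tuned models fit on one machine.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"                 # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Load one adapter, then attach more: the base weights are loaded once,
# while each adapter adds only a few megabytes.
model = PeftModel.from_pretrained(base, "adapters/customer-support", adapter_name="support")
model.load_adapter("adapters/legal-qa", adapter_name="legal")

model.set_adapter("legal")                           # switch "models" near-instantly
inputs = tokenizer("Summarize this contract clause:", return_tensors="pt")
out = model.generate(**inputs)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```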
@LaminiAI
Lamini
1 year
We're live! Lamini makes it easy & developer-friendly to rapidly train custom LLMs! Fine-tune, RLHF, you name it. All with just a few lines of code. Swap out foundation models in a single line. Don’t worry about their different prompts. We'll handle it.
@realSharonZhou
Sharon Zhou
1 year
I’m super excited to announce @LaminiAI , the LLM engine that gives every developer the superpowers that took the world from GPT-3 to ChatGPT! We make it easy to rapidly train custom LLMs from @OpenAI @EleutherAI @Cerebras @Databricks @HuggingFace @Meta 🧵
@LaminiAI
Lamini
1 year
Getting structured output from an LLM can be a pain 🤦‍♀️ Our type system makes it easy to connect your data to an LLM 🎉 Just like another stage in your data pipeline. Play here 👉
@LaminiAI
Lamini
1 year
📢Exciting news! In a few days, we’ll be releasing “Finetuning LLMs”, co-created by our CEO @realSharonZhou and Andrew Ng. In this 1-hour course, you’ll learn how to finetune thousands of new LLMs within minutes! 👀 A sneak peek
@LaminiAI
Lamini
1 year
Just in!!! @LaminiAI Cofounder & CTO @GregoryDiamos (key CUDA contributor) shares how we built an optimized LLM finetuning system on @AMD's ROCm AI stack. Leveraging @AMDInstinct & optimizations for major speedups! 🚀 👉 More in-depth technical details:
@LaminiAI
Lamini
1 year
📣Thrilled to release “Finetuning LLMs,” co-created by our CEO @realSharonZhou & @AndrewYNg! 👉 Enroll for free now! 🥳 Share what you build with us @LaminiAI. We'll showcase the best Lamini llamas (LLMs) with the world!
@AndrewYNg
Andrew Ng
1 year
New short course on Fine-tuning LLMs! Many developers are moving beyond only prompting, to also fine-tuning LLMs - that is, taking a pre-trained model and training it further on your own data, which can deliver superior results inexpensively. In this course, @realSharonZhou , CEO
@LaminiAI
Lamini
1 year
Taylor Swift is in the Bay - Swiftie Clara!! 🎉 We built this bot for all Swifties 🤩🌈 Ask questions about her 👉 How to build this bot? Check our Colab 👉 #TaylorSwift #ErasTour #SwiftieClara #Swifties #LLM
@LaminiAI
Lamini
1 year
📢 Exciting news: Introducing custom fine-tuned models with LoRA in your environment! Goal: get you training larger models, faster. Save: time and compute. 🌟 Plus, we've got you covered with a hosted playground ➡️ @huggingface
@LaminiAI
Lamini
1 year
📢Excited to share that our API endpoint for model inference is now publicly available! 🚀 Effortlessly integrate open-source LLMs into your applications, regardless of the programming language or platform you're working with. 🌐 Access our API endpoint 👉
@LaminiAI
Lamini
1 year
Simple steps to prepare your data and train an LLM 📚 1️⃣ Define the LLM interface 2️⃣ Find relevant data 3️⃣ Load data into types, Load types into LLM 4️⃣ Generate data 5️⃣ Train the LLM Each step here 👉🏻
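A rough end-to-end sketch of these five steps with Lamini's Python client; the class and method names follow the general shape of the SDK but should be treated as assumptions, and the data rows are made up.

```python
from lamini import Lamini  # pip install lamini; import path assumed

# 1) Define the LLM interface: pick the base model you will tune.
llm = Lamini(model_name="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model id

# 2) Find relevant data (a toy, hypothetical example).
faq = {"What does Lamini do?": "Lamini tunes and serves LLMs on enterprise data."}

# 3) Load the data into input/output pairs the trainer understands.
data = [{"input": q, "output": a} for q, a in faq.items()]

# 4) Generate more data if needed, e.g. by prompting the model to paraphrase
#    each pair (omitted here to keep the sketch short).

# 5) Train the LLM on the prepared pairs.
llm.train(data)  # assumed signature; check the current SDK docs
```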
@LaminiAI
Lamini
1 year
Excited to announce: Finetuning for the people! 👉 It’s free, on small LLMs 👉 It’s fast, 10-15 minutes 👉 It’s furious, putting GPUs in a frenzy GitHub repo: Blog: 🧵
@LaminiAI
Lamini
11 months
@HamelHusain We have a drop-in open-source replacement, including function calling! We have both a hosted version and a version for you to run on your own hardware (NVIDIA or AMD).
@LaminiAI
Lamini
1 year
Struggling with creating large datasets? 🤯 Lamini augmenters automatically generate high-quality data from <100 examples! 🥳 Install our Python library, augment your dataset, and make training magic today!!🪄 Get started: Docs:
@LaminiAI
Lamini
1 year
Try our finetuning demos! See the magic of Lamini in a few clicks! 😎 🔮 Finetune your custom LLM: 🦙 Llama-2 PEFT: 🦙🦙 Another Llama-2 finetuning: What other finetuning demos do you want to see? 🤔
@LaminiAI
Lamini
9 months
Our first 2024 startup cohort is hard at work building LLMs on Lamini 💪 🌶️ We are now accepting applications for our next batch in March. If you are an early-stage startup building LLM applications and in need of compute, please apply now! 🙌 🥳
@LaminiAI
Lamini
2 months
Our new @DeepLearningAI course on Improving Accuracy of LLM Applications is live! If you are short on time but curious about fine-tuning LLMs, this is the course for you!
@DeepLearningAI
DeepLearning.AI
2 months
Learn how to improve the accuracy of your LLM apps in our new course with @LaminiAI & @Meta . Taught by experts @realSharonZhou & @asangani7 , you’ll learn a development pattern to systematically improve the reliability and accuracy of LLM apps. Join now:
@LaminiAI
Lamini
1 year
Finetuning large open-source LLMs with LoRA be like
@LaminiAI
Lamini
1 year
To prompt or to fine-tune? 🤔 What are the differences? 💭 Which is the best to improve your LLM? 📈 We’re here to demystify things. 🔍 Plus, a sneak peek into our next big thing 👀 👉
@LaminiAI
Lamini
3 months
Proud & happy @LaminiAI team with our @VentureBeat Most Promising Generative AI Startup trophy! 🏆 Huge thanks to every Laminati for your passion, dedication, and hard work. Here's to more achievements ahead 🙌
@LaminiAI
Lamini
1 year
We're #hiring ! Seeking software engineers eager to work directly with clients, with a mix of technical skills, entrepreneurial mindset, and product intuition. If you're an engineer who loves working with customers, this is your dream job! 👉 Apply now
@LaminiAI
Lamini
1 month
LLM inference frameworks have hit the “memory wall”, a hardware-imposed speed limit on memory-bound code. Is it possible to tear down the memory wall? @GregoryDiamos explains how it works in his new technical blog post.
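As a back-of-envelope illustration of the memory wall (not taken from the blog post): in single-stream decoding, every generated token has to stream the full set of weights from memory, so memory bandwidth alone caps throughput. The numbers below are illustrative.

```python
# Roofline-style upper bound for batch-1 decoding of a dense model:
# tokens/sec <= memory_bandwidth / bytes_of_weights_read_per_token.
params = 70e9          # e.g. a 70B-parameter model
bytes_per_param = 2    # fp16/bf16 weights
bandwidth = 5.3e12     # ~5.3 TB/s HBM, roughly an MI300X-class GPU (illustrative)

bound = bandwidth / (params * bytes_per_param)
print(f"~{bound:.0f} tokens/sec per stream at best")  # about 38 tokens/sec
```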
@LaminiAI
Lamini
8 months
Love working with @MistralAI - wonderful open-source LLMs that we and our customers love :)
@LaminiAI
Lamini
11 months
Introducing Lamini Pro! For just $99/mo, you get it all: Llama 2 finetuning, JSON outputs, up to 10k requests, hypertuning, RAG, full SDK access, hosting on Lamini, and more 🤩🚀 Focus on building your own LLMs without worrying about 💸🤑 👉 Subscribe now:
@LaminiAI
Lamini
7 months
A technical deep dive into how we set up multi-node training on AMD GPUs and speed up LLM training by 1,000x or even 10,000x! Led by our amazing @ayushis4026403 👉
@realSharonZhou
Sharon Zhou
7 months
Excited to share how we’re scaling to thousands of GPUs in production! …with multi-node LLM training, on not just Nvidia but @AMD GPUs Details 👉 Great blog by our team, led by Ayushi 💅 tl;dr - Push the limits of training LLMs on enterprise data
@LaminiAI
Lamini
1 year
Try our LLM SDKs, fresh and delicious, loved by our designer👩🏻‍🎨 👉 Docs to QA LLM: Chat about your docs! LLM Classifier: Train a new classifier with just a prompt! LLM Routing Agent: Using tools with just prompts! LLM Operator: Build your own operator!
@LaminiAI
Lamini
1 year
Excited to announce that you can easily specialize LLMs with your data, all inside your @Databricks cluster! We’re officially partnering 🦙+ 🧱= 🚀 ✅ Your data, kept private ✅ Your infrastructure ✅ Your LLM 👉 👉
@LaminiAI
Lamini
3 months
📈New tutorial: Use LLMs to get accurate data from earnings call transcripts with Llama 3 and Lamini. Give it a try and let us know how it works for you!
@LaminiAI
Lamini
1 year
Happy Monday! Are you having fun with our fast, free, and furious finetuning? 🚀 We've taken it to the next level - easily manage your training, check progress, see eval results, and test your model in a beautiful interface at 🚄
@LaminiAI
Lamini
1 year
Lamini empowers every enterprise and developer to quickly and easily build their own private LLMs that outperform general LLMs! 💪 Sign up now to get more exclusive updates from the Lamini team!🔮
@GregoryDiamos
Greg Diamos
1 year
Let's democratize LLMs. Thank you @realSharonZhou and @AndrewYNg for creating a simple and accessible 1-hour course.
@LaminiAI
Lamini
1 year
Code Llama🦙 Code Llama🦙 Code Llama🦙 Code Llama🦙 Code Llama🦙 Code Llama🦙 👉 #CodeLlama #Llama2 #LLM #Finetuning #PEFT
@LaminiAI
Lamini
1 year
Llama 2 on prem🦙 Llama 2 on prem🦙 Llama 2 on prem🦙 Llama 2 on prem🦙 Llama 2 on prem🦙 Llama 2 on prem🦙
@LaminiAI
Lamini
1 year
Woohoo! Next Friday, Nov 10, Lamini's the best & the only @realSharonZhou will be speaking at this year's @AngelList Confidential! RSVP today to join us for an EXCITING panel discussion about breaking barriers with AI 🤩 👉
@LaminiAI
Lamini
7 months
Join us 🦙😜😎 👉
@LaminiAI
Lamini
11 months
"LLMs are the new IP." — @realSharonZhou at Microsoft Ignite meaning, "AI is the new pink."
@LaminiAI
Lamini
1 year
ChatGPT giving irrelevant answers? 😤 Dream of an LLM that truly understands your data?💡 Lamini’s Domain Adaptation can help you make any LLM an expert in your domain with just 3 lines of code: 1⃣model.load_data(data) 2⃣model.train() 3⃣model.evaluate() 👉
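A minimal sketch of those three calls with hypothetical domain data; the call names come from the tweet, while the client setup around them is assumed and may differ from the current SDK.

```python
from lamini import Lamini  # assumed import path

# Hypothetical domain data: internal Q&A pairs a general-purpose model gets wrong.
data = [{"input": "What does error E1234 mean in our logs?",
         "output": "E1234 means the ingestion job lost its database connection."}]

model = Lamini(model_name="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder base model
model.load_data(data)   # 1️⃣ point the model at your domain data
model.train()           # 2️⃣ tune it on that data
model.evaluate()        # 3️⃣ check how much the domain answers improved
```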
@LaminiAI
Lamini
7 months
🚨 Tiny errors from LLMs could mean disaster in critical domains. 🥳 Lamini unveils "Photographic Memory" suite to benchmark LLM precision on specialized data across healthcare, finance, and more. 👉
@LaminiAI
Lamini
1 year
@realSharonZhou Sharon's kitchen looking like Jensen's kitchen was the inspo behind it all
@LaminiAI
Lamini
1 year
Excited to collaborate with you @DeepLearningAI 🤝🦙🚀 Learn to fine-tune your own LLM with @realSharonZhou and @AndrewYNg 🤩 Enroll for free: 💪 Read more about the course:
@DeepLearningAI
DeepLearning.AI
1 year
Finetuning your own LLM can solve problems by stopping hallucinations and preventing leakage. Our short course, co-created with @LaminiAI , helps you learn to fine-tune LLMs in a matter of minutes. Learn more about it:
@LaminiAI
Lamini
1 year
🥳 Thanks for sharing your experience using @LaminiAI ! 👀 You can still enroll for free for our finetuning course! 👉
@tinztwins
Tinz Twins
1 year
🧐 Non-fine-tuned LLM vs. Fine-tuned LLM An untrained LLM has no understanding of the world. It is completely random. The first thing we need to do is pre-training. Then, we get a base LLM (non-fine-tuned). After that we can fine-tune the base LLM. The figure shows the
@LaminiAI
Lamini
11 months
No more headaches writing parsers!🤯 Lamini now guarantees valid JSON output!🥳 Our very own @SakshamConsul shares challenges with parsers & prompting, how we designed our schema generator, and 👀 more spicy technical details🌶️ 👉
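A sketch of what schema-constrained output looks like from the caller's side; the output_type argument mirrors how Lamini has described structured generation, but treat the exact parameter name and schema syntax as assumptions.

```python
from lamini import Lamini  # assumed import path

llm = Lamini(model_name="meta-llama/Meta-Llama-3-8B-Instruct")  # placeholder model id
result = llm.generate(
    "Extract the product and the sentiment from: 'The new dashboard is fantastic.'",
    output_type={"product": "str", "sentiment": "str"},  # assumed schema syntax
)
print(result)  # e.g. {'product': 'dashboard', 'sentiment': 'positive'} -- no parser needed
```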
@LaminiAI
Lamini
1 month
Go from AI novice to fine-tuning wiz with our Improving Accuracy of LLM Applications course with @DeepLearningAI + @asangani7 . Here's one student's experience getting to 96% accuracy on factual data in just 3 iterations.
@LaminiAI
Lamini
8 months
"LLMs are the new IP" @realSharonZhou
@AMD
AMD
8 months
Advancing AI: @LaminiAI Co-founder and CEO @realSharonZhou explains why LLMs are the new IP.
@LaminiAI
Lamini
9 months
It's tomorrow morning! Sign up now! 🥳
@DeepLearningAI
DeepLearning.AI
9 months
Unlike software engineering, prompt engineering requires a unique workflow. In tomorrow’s live workshop, @LaminiAI ’s CEO Sharon Zhou will help us demystify prompt engineering for open large language models. Learn more and register here:
@LaminiAI
Lamini
1 year
Thrilled to partner with @Nutanix ! 🤝 "Together, we make enterprise #LLMs easier by delivering AI-ready infrastructure to help organizations simplify operations, maintain data control, and accelerate #AI adoption." - @gregorydiamos , Co-Founder, Lamini
@LaminiAI
Lamini
23 days
.@realSharonZhou recently spoke at @Aurecon's #ExemplarForum2024 on high-ROI use cases for LLMs and overcoming key challenges in AI deployment, including poor model quality, hallucinations, costs, and security. Watch the video here:
@LaminiAI
Lamini
4 months
We're at Databricks Data + AI Summit, gearing up to release technical details on how to systematically remove hallucinations from LLMs. Come to our talk on Thursday at 11am if you're around (@realSharonZhou): Or, drop us a note at info@lamini.ai to connect!
@LaminiAI
Lamini
5 months
🤖 Rigorously evaluate open LLMs like Llama 3 in 3 simple steps with @LaminiAI's SDK: 1️⃣ Install Lamini, get API key 2️⃣ Prepare golden test set + generate extended dataset 3️⃣ Run eval script to compare LLM performance 👉 Try now We compared Llama 3,
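A generic sketch of step 3️⃣, scoring two models against a small golden test set with exact-match accuracy; this is plain Python around an assumed Lamini client, not Lamini's actual eval script.

```python
from lamini import Lamini  # assumed import path

# Tiny illustrative golden set; a real one would come from your own domain.
golden = [
    {"question": "What is the capital of France?", "answer": "Paris"},
    {"question": "How many bits are in a byte?", "answer": "8"},
]

def exact_match_accuracy(model_name: str) -> float:
    llm = Lamini(model_name=model_name)
    hits = sum(
        row["answer"].lower() in str(llm.generate(row["question"])).lower()
        for row in golden
    )
    return hits / len(golden)

for name in ["meta-llama/Meta-Llama-3-8B-Instruct", "mistralai/Mistral-7B-Instruct-v0.2"]:
    print(name, exact_match_accuracy(name))
```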
@LaminiAI
Lamini
1 year
Stay tuned for a more in-depth technical blog post from Lamini's co-founder and CTO @GregoryDiamos and... former Nvidia CUDA software architect 😎
@DylanOnChips
Semiconductor News by Dylan Martin
1 year
The quote about @AMD's ROCm platform having "software parity" with @nvidia's CUDA platform for large language models came, interestingly, from a former Nvidia CUDA software architect who co-founded the startup.
@LaminiAI
Lamini
5 months
@roydanroy We have AMD Instinct GPUs serving production loads for enterprises, scaling clusters from 1 to 1,000s of GPUs. It's been that way for over a year now. You can literally try it now if you want: just sign up and hit our REST API. Our cloud runs exclusively on AMD MI210s, MI250s, and MI300s.
@LaminiAI
Lamini
1 year
👀 "AI Startup @LaminiAI bets future on @AMD 's @AMDInstinct GPUs"
@TheRegister
The Register
1 year
AI startup Lamini bets future on AMD's Instinct GPUs
@LaminiAI
Lamini
9 months
If you missed the live session, here's the recording - it's spicy 🌶️ 😎 🦙
@realSharonZhou
Sharon Zhou
9 months
We had the highest turnout in deeplearning livestream event history! 🎉 Inside joke emoji: 👖 Here's the full recording:
@LaminiAI
Lamini
1 year
Thrilled to release an easy, fast way to finetune LLMs. Now anyone can iterate on what finetuning feels like on a toy example🧸 This is the *path* to turning an LLM into an expert on all your data, privately. Run it in a few minutes on our Colab:
@LaminiAI
Lamini
1 year
Finetune your own LLMs in < 15 mins! 🚀🚀 Pro tip: You can now also share your trained models with others using the "Share" button on the UI to generate a shareable link so others can run inference on your model:) Happy fine-tuning! 🎉🦙
@Yeyu2HUANG
Yeyu Huang
1 year
🥳Have fun training a tiny free model of your own! Integrated with @streamlit and @LaminiAI . Find the source code in the thread 👇
@LaminiAI
Lamini
1 year
🤔Which model do you prefer to finetune? 🔥Vote!! The game is on!!👇
Poll results: GPT-3.5 (OpenAI) 21 · Llama-2 (MetaAI) 40
@LaminiAI
Lamini
1 year
So exciting!! Lamini is more powerful with @AMDInstinct 💪🦙🚀 Order LLM Superstation and ship your own LLMs now! 👉
@AMD
AMD
1 year
The secret is out. We are ecstatic to see the curtain lifted on @LaminiAI Superstation, powered by AMD Instinct. It's so easy that we are also a customer! The team can't wait to see what Enterprise LLMs developers will tune and personalize with their data. 🤝🤩🌟
@LaminiAI
Lamini
30 days
🎉🎉🎉 Excited to announce our new pay-as-you-go offering, Lamini On-Demand. Get $300 in free credit to run your tuning and inference jobs on our high-performance GPU cluster. Happy tuning!
@LaminiAI
Lamini
2 months
What @AndrewYNg said...
@AndrewYNg
Andrew Ng
2 months
Learn a development pattern to systematically improve the accuracy and reliability of LLM applications in our new short course, Improving Accuracy of LLM Applications, built in partnership with @LaminiAI and @Meta , and taught by Lamini’s CEO @realSharonZhou , and Meta’s Senior
@LaminiAI
Lamini
1 year
And voila! Our LLM is producing structured output.
@LaminiAI
Lamini
1 month
Vertical vs. horizontal AI use cases? GitHub Copilot started vertical and crossed over into horizontal applications. Low latency + accuracy were key! Thanks for the great discussion @gajenkandiah and @Hitachi !
@LaminiAI
Lamini
1 year
@sampullara @realSharonZhou @LisaSu @AMD Reopening!!! Sorry, reached the limit way too fast LOL.
@LaminiAI
Lamini
1 year
@jeremyphoward @realSharonZhou @LisaSu @AMD The sales call is in fact with Sharon... you can open up a terminal and pull up some loss curves during it, instead of powerpoint.
@LaminiAI
Lamini
1 year
Tens of thousands of students have already enrolled. Join them! Master finetuning LLMs! 🚀 Enroll now! (free for a limited time 😎) 👉 @realSharonZhou @AndrewYNg
@LaminiAI
Lamini
1 year
📣Thrilled to release “Finetuning LLMs,” co-created by our CEO @realSharonZhou & @AndrewYNg! 👉 Enroll for free now! 🥳 Share what you build with us @LaminiAI. We'll showcase the best Lamini llamas (LLMs) with the world!
@LaminiAI
Lamini
1 year
To define a type, all you need is a name for the type, plus a name, type, and context for each field!
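A sketch of such a type definition, following the style of Lamini's early examples (the llama package with Type and Context); the import and class names are assumptions, and the fields are hypothetical. Each field carries its type plus a natural-language context string, which is what the LLM uses to fill the structure reliably.

```python
from llama import Type, Context  # early Lamini package; names assumed

class SupportTicket(Type):
    subject: str = Context("short summary of the customer's issue")
    severity: str = Context("one of: low, medium, high")
    product: str = Context("which product the ticket is about")
```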
@LaminiAI
Lamini
9 months
Thank you for the shoutout! @DrStarson We're glad you enjoyed these courses. Please do let us know if there are any specific topics you want to learn. Stay tuned for more learnings 🦙 🙌 😎
@DrStarson
Starson 🇺🇸🚀🇺🇸
9 months
I've also learned a lot from @AndrewYNg and @DeepLearningAI mini course and from a recent lecture on open source prompt engineering by @realSharonZhou from @LaminiAI :
@LaminiAI
Lamini
1 year
📢When it comes to model training, garbage in = garbage out. That is why Lamini is thrilled to announce that dataset filters are now available as part of our Python package! 🚀 Here is the link for access 👉
@LaminiAI
Lamini
1 year
Mistral 7B, Mistral 7B, Mistral 7B Zephyr 7B, Zephyr 7B, Zephyr 7B Get them now!! 🦙🤩🚀 👉
@LaminiAI
Lamini
3 months
🤯 Excited that @JohnCena is excited about making LLMs awesome too. (Thanks for the follow!)
@LaminiAI
Lamini
11 months
@felix_red_panda @Muhtasham9 @anyscalecompute @togethercompute We got your email! Will get back to you soon :) Thanks for your patience!
@LaminiAI
Lamini
1 year
It knows how to improve itself😏
@LaminiAI
Lamini
1 year
How it works: - Load your Q&A data - Call llm.train() - 💥Your LLM improves on your domain or style! Repeat to debug. AI is iterative! Training unlocks an LLM's full potential: it’s what the big AI labs like @OpenAI use to get their LLMs to learn about the whole internet!
@LaminiAI
Lamini
1 year
Build your prod-ready fine-tuned models today with Lamini! 🦙🎉
@pelaseyed
homanp
1 year
🤯 Finetuned Question Answering 🤯 Made a small POC on @Replit this morning. Finetuning an LLM with Tesla's Q2 2023 earnings report. It's super fast, nimble and accurate in its responses. Demo: A prod-ready version will be shipped in Superagent v0.0.1
@LaminiAI
Lamini
1 year
@bensbitesdaily
Ben's Bites
1 year
Lamini ditches Nvidia in favour of AMD: @LaminiAI, an AI startup, is using AMD GPUs instead of the more popular Nvidia GPUs to run large language models (LLMs) like Llama-2 for customers.
@LaminiAI
Lamini
1 year
@zdubsf @AMD We wanted to build something substantial before announcing it - so it's reliably easy to build LLMs with proven touchpoints
@LaminiAI
Lamini
1 year
Beyond the toy: for larger models & production use, we offer paid plans. But the free version is plenty powerful to run a bunch of experiments and get a feel for finetuning. Share our free-tier GPUs nicely please ♥️ Give it a spin:
@LaminiAI
Lamini
1 year
Here's an example: our model thinks it's a wolf 🐺
@LaminiAI
Lamini
11 months
YES! Build and deploy your own private GPT-4 Turbo with Lamini! Contact us. We also have some big news coming soon. Stay tuned!
@realSharonZhou
Sharon Zhou
11 months
You can do the same things as GPT-4 Turbo on every open-source LLM today, @LaminiAI does it all: 🚀 - Structure: Return valid JSON - Speed: Make multiple function calls at once - More knowledge: Retrieval built-in, with finetuning - Longer context: Extend context windows (~128k,
@LaminiAI
Lamini
1 year
@LaminiAI
Lamini
1 year
Struggling with creating large datasets? 🤯 Lamini augmenters automatically generate high-quality data from <100 examples! 🥳 Install our Python library, augment your dataset, and make training magic today!!🪄 Get started: Docs:
@LaminiAI
Lamini
1 year
It's #Snowday ! Lamini has integrated with @SnowflakeDB 🦙❄️ Now, you can easily deploy & finetune large language models inside Snowflake 🚀 👉 See a demo: 👀 Read Snowflake's announcement:
@LaminiAI
Lamini
1 year
everyone needs it 👏👏 everyone benefits from it 👏👏 To build yours today for free, head over to 🦙
@karpathy
Andrej Karpathy
1 year
"What would someone need a personal computer for?" -> "What would someone need a personal LLM node for?"
@LaminiAI
Lamini
8 months
Thank you for the shoutout! @rohanpaul_ai We appreciate any feedback 🦙🙌
@rohanpaul_ai
Rohan Paul
8 months
Guarantee Valid JSON Output with @LaminiAI smooth ---- Why structured JSON output is so hard 🤔 LLMs are largely based on the transformer architecture, which uses an auto-regressive generator. Transformer treats each word as a token and generates one token at a time. The LLM
@LaminiAI
Lamini
1 year
@GregoryDiamos 🦙🦙🦙😎😎😎
@OctoAICloud
OctoAI
1 year
The world of language models is fast-evolving. Join innovators amid the rise of domain-aware LLMs and hear real-world advice on leveraging language models in commercial applications. Tap into the conversation w/ @LangChainAI, @LaminiAI, @GenAICollective, @UnstructuredIO & more
@LaminiAI
Lamini
1 year
@AndrewYNg @realSharonZhou Come follow me here! 🦙
@LaminiAI
Lamini
2 months
Distilling the knowledge of a vast LLM into a super specialized and efficient one to achieve 100ms latency is...
@LaminiAI
Lamini
1 year
@AIatMeta
AI at Meta
1 year
Today we’re releasing Code Llama, a large language model built on top of Llama 2, fine-tuned for coding & state-of-the-art for publicly available coding tools. Keeping with our open approach, Code Llama is publicly-available now for both research & commercial use. More ⬇️
@LaminiAI
Lamini
1 year
@realSharonZhou
Sharon Zhou
1 year
Excited to announce a HUGE secret with @LisaSu : @LaminiAI has been building LLMs on @AMD GPUs *in production* for over the past year! We’ve made running LLMs on AMD super easy and a highly competitive option through our LLM Superstation, available now at ~10x lower cost than
@LaminiAI
Lamini
1 year
@HenkPoley @realSharonZhou @alexgraveley @LisaSu @AMD Thank you! We're looking into this issue - adjusting permissions.
@LaminiAI
Lamini
1 year
@realSharonZhou @LisaSu @AMD Follow me here 🦙🦙🦙
@LaminiAI
Lamini
11 months
$99/mo for custom finetunes/LoRAs 😎 Sign up to try! Need something more? Contact us at info@lamini.ai
@realSharonZhou
Sharon Zhou
11 months
@bentossell @LaminiAI - $99/mo for several custom finetunes/LoRAs. We also have customers doing continued pretraining and pretraining from scratch (more than $99/mo, less than 3M 🙃)