🎉 Big secret! We’ve been running on @AMD Instinct™ GPUs in production for over a year.
🤝 Thrilled to now partner with AMD to offer GPU-rich enterprise LLMs!
🥳 LLM Superstation – combining Lamini's LLM infrastructure with AMD Instinct.
👉 Learn more:
Excited to announce a HUGE secret with @LisaSu: @LaminiAI has been building LLMs on @AMD GPUs *in production* for over a year!
We’ve made running LLMs on AMD super easy and a highly competitive option through our LLM Superstation, available now at ~10x lower cost than
Training multiple LLMs taking forever? 😤
Costing you a fortune?💸
Enter PEFT! Get ready to multiply!! 🚀
1000 models, just 1 machine! 🤖
3 months of training -> 3 milliseconds ⚡️
Just one API call, load and train with Lamini!
👉
👀
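For a feel of why PEFT makes "1000 models, 1 machine" possible, here's a toy sketch: each tuned model is just a small adapter over one shared base model, so switching models is a dict lookup rather than a reload. All names here are illustrative, not Lamini's actual internals.

```python
# Toy sketch of PEFT-style serving: one shared base model, many tiny adapters.
# Purely illustrative; not Lamini's actual implementation.
base_model = "shared-7B-weights"                  # loaded into GPU memory once
adapters = {f"customer_{i}": f"lora_delta_{i}"    # each adapter is tiny
            for i in range(1000)}                 # 1000 "models", 1 machine

def generate(model_id: str, prompt: str) -> str:
    adapter = adapters[model_id]                  # milliseconds: swap the adapter,
    return f"[{base_model}+{adapter}] {prompt}"   # never reload the base model

print(generate("customer_42", "Summarize my contract."))
```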
We're live! Lamini makes it easy & developer-friendly to rapidly train custom LLMs! Fine-tune, RLHF, you name it. All with just a few lines of code. Swap out foundation models in a single line. Don’t worry about their different prompts. We'll handle it.
Getting structured output from an LLM can be a pain 🤦♀️ Our type system makes it easy to connect your data to an LLM 🎉 Just like another stage in your data pipeline. Play here 👉
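The idea, sketched below with plain dataclasses: declare the output shape you want, then validate the model's response into it like any other pipeline stage. The `call_llm` stub is a hypothetical stand-in for a real client; Lamini's type system plays this declaring-and-validating role in its SDK.

```python
import json
from dataclasses import dataclass

# Declare the structure you want back from the LLM.
@dataclass
class ReviewSummary:
    sentiment: str   # e.g. "positive" or "negative"
    summary: str

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real inference call that is constrained
    # (or at least instructed) to return JSON matching ReviewSummary.
    return '{"sentiment": "positive", "summary": "Fast setup, easy to use."}'

raw = call_llm("Summarize this review as JSON: 'Loved it, setup took 5 minutes.'")
result = ReviewSummary(**json.loads(raw))  # a structured object, not free text
print(result.sentiment)
```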
📢Exciting news! In a few days, we’ll be releasing “Finetuning LLMs”, co-created by our CEO @realSharonZhou and Andrew Ng.
In this 1 hour course, you’ll learn how to finetune thousands of new LLMs within minutes!
👀A sneak peek
Just in!!! @LaminiAI Cofounder & CTO @GregoryDiamos (key CUDA contributor) shares how we built an optimized LLM finetuning system on @AMD's ROCm AI stack. Leveraging @AMDInstinct & optimizations for major speedups! 🚀
👉 More in-depth technical details:
📣Thrilled to release “Finetune LLMs,” co-created by our CEO @realSharonZhou & @AndrewNg!
👉 Enroll for free now!
🥳 Share what you build with us @LaminiAI. We'll showcase the best Lamini llamas (LLMs) to the world!
New short course on Fine-tuning LLMs! Many developers are moving beyond only prompting, to also fine-tuning LLMs - that is, taking a pre-trained model and training it further on your own data, which can deliver superior results inexpensively. In this course, @realSharonZhou, CEO
📢 Exciting news: Introducing custom fine-tuned models with LoRA in your environment!
Goal: Get you training larger models faster
Save: Time and compute
🌟 Plus, we've got you covered with a hosted playground ➡️ @huggingface
📢Excited to share that our API endpoint for model inference is now publicly available!🚀 Effortlessly integrate open-source LLMs into your applications, regardless of the programming language or platform you're working with. 🌐Access our API endpoint 👉
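A minimal sketch of hitting an inference endpoint over plain HTTP from Python. The URL path and payload fields below are illustrative assumptions, not the documented schema, so check the API docs for the real shapes.

```python
import os
import requests

# Assumed endpoint path and payload shape -- consult the docs for the real ones.
resp = requests.post(
    "https://api.lamini.ai/v1/completions",               # assumed path
    headers={"Authorization": f"Bearer {os.environ['LAMINI_API_KEY']}"},
    json={
        "model_name": "meta-llama/Llama-2-7b-chat-hf",    # any hosted open LLM
        "prompt": "In one sentence, what does PEFT do?",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```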
Simple steps to prepare your data and train an LLM 📚
1️⃣ Define the LLM interface
2️⃣ Find relevant data
3️⃣ Load data into types, load types into the LLM
4️⃣ Generate data
5️⃣ Train the LLM
Each step here 👉🏻
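Here are the five steps as one runnable toy script. `FakeLLM` is a local stand-in so the sketch executes as-is; the `load_data`/`train` calls mirror ones shown in other posts here, but swap in the real client to actually train.

```python
from dataclasses import dataclass

# 1) Define the LLM interface as typed input/output.
@dataclass
class Question:
    text: str

@dataclass
class Answer:
    text: str

class FakeLLM:
    """Local stand-in for the real SDK client, so this sketch runs as-is."""
    def __init__(self):
        self.pairs = []
    def load_data(self, pairs):
        self.pairs.extend(pairs)
    def train(self):
        print(f"training on {len(self.pairs)} Q&A pairs")

# 2) Find relevant data.
raw = [("What is PEFT?", "Parameter-efficient fine-tuning."),
       ("Which GPUs do you support?", "NVIDIA and AMD Instinct.")]

# 3) Load data into types, load types into the LLM.
llm = FakeLLM()
llm.load_data([(Question(q), Answer(a)) for q, a in raw])

# 4) Generate data (toy augmentation: rephrase each question).
llm.load_data([(Question(f"Rephrased: {q.text}"), a) for q, a in list(llm.pairs)])

# 5) Train the LLM.
llm.train()  # -> training on 4 Q&A pairs
```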
Excited to announce: Finetuning for the people!
👉 It’s free, on small LLMs
👉 It’s fast, 10-15 minutes
👉 It’s furious, putting GPUs in a frenzy
Github repo:
Blog:
🧵
@HamelHusain We have a drop-in open-source replacement, including function calling!
We have both a hosted version and a version for you to run on your own hardware (NVIDIA or AMD).
Struggling with creating large datasets? 🤯
Lamini augmenters automatically generate high-quality data from <100 examples! 🥳
Install our Python library, augment your dataset, and make training magic today!!🪄
Get started:
Docs:
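The shape of that workflow, as a hedged sketch: start from a small seed set and programmatically expand it before training. The `augment` function here is a toy stand-in, not Lamini's actual augmenter API.

```python
# Toy augmentation: expand <100 seed examples into a larger training set.
# A real augmenter would use an LLM to paraphrase and extend; this stub just
# illustrates the seed -> expanded flow.
seed = [
    {"question": "How do I reset my password?", "answer": "Use the account page."},
    {"question": "Where are logs stored?", "answer": "Under /var/log/app."},
]

def augment(example: dict, n: int = 10) -> list[dict]:
    return [{"question": f"(variant {i}) {example['question']}",
             "answer": example["answer"]} for i in range(n)]

dataset = [v for ex in seed for v in augment(ex)]
print(f"{len(seed)} seed examples -> {len(dataset)} training examples")
```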
Try our finetuning demos! See the magic of Lamini in a few clicks! 😎
🔮 Finetune your custom LLM:
🦙 Llama-2 PEFT:
🦙🦙 Another Llama-2 finetuning:
What other finetuning demos do you want to see? 🤔
Our first startup cohort of 2024 is working hard at building LLMs on Lamini 💪 🌶️
We are now accepting applications for our next batch in March. If you are an early-stage startup building LLM applications and in need of compute, please apply now! 🙌 🥳
Our new @DeepLearningAI course on Improving Accuracy of LLM Applications is live! If you are short on time but curious about fine-tuning LLMs, this is the course for you!
Learn how to improve the accuracy of your LLM apps in our new course with @LaminiAI & @Meta. Taught by experts @realSharonZhou & @asangani7, you’ll learn a development pattern to systematically improve the reliability and accuracy of LLM apps.
Join now:
To prompt or to fine-tune? 🤔
What are the differences? 💭
Which is best for improving your LLM? 📈
We’re here to demystify things. 🔍
Plus, a sneak peek into our next big thing 👀
👉
Proud & happy @LaminiAI team with our @VentureBeat Most Promising Generative AI Startup trophy! 🏆
Huge thanks to every Laminati for your passion, dedication, and hard work. Here's to more achievements ahead 🙌
We're #hiring! Seeking software engineers eager to work directly with clients, with a mix of technical skills, entrepreneurial mindset, and product intuition.
If you're an engineer who loves working with customers, this is your dream job!
👉 Apply now
LLM inference frameworks have hit the “memory wall”, a hardware-imposed speed limit on memory-bound code. Is it possible to tear down the memory wall?
@GregoryDiamos explains how it works in his new technical blog post.
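The core of the memory wall fits in a few lines of arithmetic: single-stream decoding has to stream all the weights past the compute units for every token, so token rate is capped by bandwidth divided by model size. The numbers below are illustrative assumptions, not measurements from the post.

```python
# Back-of-envelope memory wall: tokens/sec <= memory bandwidth / bytes of weights.
model_bytes = 70e9 * 2        # assumed: 70B parameters at 2 bytes each (fp16)
bandwidth_bytes = 5.3e12      # assumed: ~5.3 TB/s of HBM bandwidth
print(f"ceiling: {bandwidth_bytes / model_bytes:.0f} tokens/sec")  # ~38
# More FLOPs don't raise this ceiling; batching, quantization, or smarter
# memory traffic are what move the wall.
```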
Introducing Lamini Pro! For just $99/mo, you get it ALL:
Llama 2 finetuning, JSON outputs, up to 10k requests, hypertuning, RAG, full SDK access, hosted on Lamini, and more 🤩🚀
Focus on building your own LLMs without worrying about 💸🤑
👉 Subscribe now:
A technical deep dive into how we set up multi-node training on AMD GPUs and speed up LLM training by 1,000x or even 10,000x! Led by our amazing @ayushis4026403
👉
Excited to share how we’re scaling to thousands of GPUs in production!
…with multi-node LLM training, not just on Nvidia but also on @AMD GPUs
Details 👉
Great blog by our team, led by Ayushi 💅
tl;dr
- Push the limits of training LLMs on enterprise data
Try our LLM SDKs, fresh and delicious, loved by our designer👩🏻🎨
👉
Docs to QA LLM: Chat about your docs!
LLM Classifier: Train a new classifier with just a prompt! (see the sketch after this list)
LLM Routing Agent: Using tools with just prompts!
LLM Operator: Build your own operator!
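For the classifier case, the flow looks roughly like the sketch below. The class and method names are assumptions standing in for the real SDK interface, so treat this as the shape of the API rather than the API itself.

```python
# Hedged sketch: "train a classifier with just a prompt". Names are
# illustrative; check the Lamini SDK docs for the real interface.
from lamini import LaminiClassifier   # assumed import

clf = LaminiClassifier()
clf.prompt_train({                    # assumed: one descriptive prompt per class
    "billing": "Questions about invoices, charges, or refunds.",
    "technical": "Bug reports, stack traces, or API errors.",
})
print(clf.predict(["My invoice shows a duplicate charge"]))  # -> ["billing"]
```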
Excited to announce that you can easily specialize LLMs with your data, all inside your @Databricks cluster! We’re officially partnering 🦙+ 🧱= 🚀
✅ Your data, kept private
✅ Your infrastructure
✅ Your LLM
👉
👉
📈New tutorial: Use LLMs to get accurate data from earnings call transcripts with Llama 3 and Lamini.
Give it a try and let us know how it works for you!
Happy Monday! Are you having fun with our fast, free, and furious finetuning?🚀 We've taken it to the next level - easily manage your training, check progress, see eval results, and test your model in a beautiful interface at 🚄
Lamini empowers every enterprise and developer to build their own private LLMs easily and fast - and to make them higher-performing than general LLMs! 💪
Sign up now to get more exclusive updates from the Lamini team!🔮
Woohoo! Next Friday, Nov 10, Lamini's one and only @realSharonZhou will be speaking at this year's @AngelList Confidential!
RSVP today to join us for an EXCITING panel discussion about breaking barriers with AI 🤩
👉
ChatGPT giving irrelevant answers? 😤
Dream of an LLM that truly understands your data?💡
Lamini’s Domain Adaptation can help you make any LLM an expert in your domain with just 3 lines of code:
1⃣model.load_data(data)
2⃣model.train()
3⃣model.evaluate()
👉
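In context, with toy data: the three calls are the ones named in the post; the import, constructor, and data format below are assumptions for illustration.

```python
# The three advertised calls, with assumed setup around them.
from lamini import Lamini                       # assumed import/constructor

data = [{"input": "What does error E42 mean?",  # assumed data format
         "output": "The ingest queue is full; retry with backoff."}]

model = Lamini(model_name="meta-llama/Llama-2-7b-chat-hf")
model.load_data(data)   # step 1
model.train()           # step 2
model.evaluate()        # step 3
```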
🚨 Tiny errors from LLMs could mean disaster in critical domains.
🥳 Lamini unveils "Photographic Memory" suite to benchmark LLM precision on specialized data across healthcare, finance, and more.
👉
Finetuning your own LLM can solve problems by stopping hallucinations and preventing leakage.
Our short course, co-created with @LaminiAI, helps you learn to fine-tune LLMs in a matter of minutes.
Learn more about it:
🧐 Non-fine-tuned LLM vs. Fine-tuned LLM
An untrained LLM has no understanding of the world. It is completely random. The first thing we need to do is pre-training. Then, we get a base LLM (non-fine-tuned). After that, we can fine-tune the base LLM. The figure shows the
No more headaches writing parsers!🤯
Lamini now guarantees valid JSON output!🥳
Our very own @SakshamConsul shares challenges with parsers & prompting, how we designed our schema generator, and 👀 more spicy technical details🌶️
👉
Go from AI novice to fine-tuning wiz with our Improving Accuracy of LLM Applications course with @DeepLearningAI + @asangani7. Here's one student's experience getting to 96% accuracy on factual data in just 3 iterations.
Unlike software engineering, prompt engineering requires a unique workflow.
In tomorrow’s live workshop, @LaminiAI’s CEO Sharon Zhou will help us demystify prompt engineering for open large language models.
Learn more and register here:
Thrilled to partner with @Nutanix! 🤝
"Together, we make enterprise
#LLMs
easier by delivering AI-ready infrastructure to help organizations simplify operations, maintain data control, and accelerate
#AI
adoption."
-
@gregorydiamos
, Co-Founder, Lamini
@realSharonZhou recently spoke at @Aurecon's #ExemplarForum2024 on high-ROI use cases for LLMs and overcoming key challenges in AI deployment, including poor model quality, hallucinations, costs, and security. Watch the video here:
We're at Databricks Data + AI Summit, gearing up to release technical details on how to systematically remove hallucinations from LLMs.
Come to our talk on Thurs at 11a if you're around (@realSharonZhou):
Or, drop us a note at info@lamini.ai to connect!
🤖 Rigorously evaluate open LLMs like Llama 3 in 3 simple steps with @LaminiAI's SDK:
1️⃣ Install Lamini, get API key
2️⃣ Prepare golden test set + generate extended dataset
3️⃣ Run eval script to compare LLM performance
👉 Try now
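Step 3 in miniature: score each model against the golden set and compare. The `ask` callables are hypothetical stand-ins for your inference client; the exact-match scoring loop is the point of the sketch.

```python
# Compare two models on a golden test set with exact-match scoring.
golden = [("Who makes the MI300X?", "AMD"),
          ("What does LoRA stand for?", "Low-Rank Adaptation")]

def score(ask, tests) -> float:
    hits = sum(ask(q).strip() == a for q, a in tests)
    return hits / len(tests)

# Stubs stand in for real inference calls so the sketch runs as-is.
base_llm  = lambda q: "I'm not sure."
tuned_llm = lambda q: {"Who makes the MI300X?": "AMD",
                       "What does LoRA stand for?": "Low-Rank Adaptation"}[q]

print(f"base: {score(base_llm, golden):.0%}, tuned: {score(tuned_llm, golden):.0%}")
```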
We compared Llama 3,
The quote about @AMD's ROCm platform having "software parity" with @nvidia's CUDA platform for large language models came, interestingly, from a former Nvidia CUDA software architect who co-founded the startup.
@roydanroy We have AMD Instinct GPUs serving production loads for enterprises that can scale clusters from 1 to 1000s of GPUs. It's been like that for over a year now.
Literally can try it now if you want, just sign up and hit our REST API. Our cloud is only AMD MI210s, MI250s, and MI300s.
Thrilled to release an easy, fast way to finetune LLMs.
Now anyone can iterate on what finetuning feels like on a toy example🧸
This is the *path* to turning an LLM into an expert on all your data, privately.
Run it in a few minutes on our Colab:
Finetune your own LLMs in < 15 mins! 🚀🚀
Pro tip: You can now also share your trained models with others using the "Share" button in the UI to generate a shareable link so others can run inference on your model :)
Happy fine-tuning! 🎉🦙
The secret is out. We are ecstatic to see the curtain lifted on @LaminiAI Superstation, powered by AMD Instinct. It's so easy that we are also a customer! The team can't wait to see what enterprise LLM developers will tune and personalize with their data. 🤝🤩🌟
🎉🎉🎉 Excited to announce our new pay-as-you-go offering, Lamini On-Demand. Get $300 in free credit to run your tuning and inference jobs on our high-performance GPU cluster. Happy tuning!
Learn a development pattern to systematically improve the accuracy and reliability of LLM applications in our new short course, Improving Accuracy of LLM Applications, built in partnership with @LaminiAI and @Meta, and taught by Lamini’s CEO @realSharonZhou, and Meta’s Senior
Vertical vs. horizontal AI use cases? GitHub Copilot started vertical and crossed over into horizontal applications. Low latency + accuracy were key! Thanks for the great discussion @gajenkandiah and @Hitachi!
Tens of thousands of students have already enrolled.
Join them! Master finetuning LLMs! 🚀
Enroll now! (free for a limited time 😎)
👉
Thank you for the shoutout, @DrStarson! We're glad you enjoyed these courses. Please do let us know if there are any specific topics you want to learn. Stay tuned for more learnings 🦙 🙌 😎
📢When it comes to model training, garbage in = garbage out. That is why Lamini is thrilled to announce that dataset filters are now available as part of our Python package! 🚀 Here is the link for access 👉
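The idea in miniature: drop low-quality pairs before they ever reach training. The predicate below is a toy heuristic of my own, not one of the shipped filters.

```python
# Toy dataset filter: keep only Q&A pairs that pass basic quality checks,
# then hand the survivors to training.
qa_pairs = [
    {"q": "What is RLHF?", "a": "Reinforcement learning from human feedback."},
    {"q": "asdf", "a": ""},                      # garbage in...
]

def looks_clean(pair: dict) -> bool:
    return len(pair["q"].split()) >= 3 and bool(pair["a"].strip())

clean = [p for p in qa_pairs if looks_clean(p)]
print(f"kept {len(clean)}/{len(qa_pairs)} pairs")  # ...garbage filtered out
```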
How it works:
- Load your Q&A data
- Call llm.train()
- 💥Your LLM improves on your domain or style! Repeat to debug. AI is iterative!
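The "repeat to debug" loop, sketched with a stub so it runs. Only `llm.train()` is named in the post; the accuracy gate and the stub's behavior are illustrative assumptions.

```python
class FakeLLM:
    """Stand-in client: pretends each training round improves accuracy."""
    def __init__(self):
        self.accuracy = 0.60
    def train(self):
        self.accuracy = min(1.0, self.accuracy + 0.15)

llm = FakeLLM()
for round_num in range(5):            # AI is iterative: train, eval, repeat
    llm.train()
    if llm.accuracy >= 0.90:          # your domain eval goes here
        break
print(f"stopped after round {round_num + 1} at {llm.accuracy:.2f} accuracy")
```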
Training unlocks an LLM's full potential: it’s what the big AI labs like @OpenAI use to get their LLMs to learn about the whole internet!
🤯 Finetuned Question Answering 🤯
Made a small POC on @Replit this morning. Finetuning an LLM with Tesla's Q2 2023 earnings report. It's super fast, nimble and accurate in its responses.
Demo:
A prod ready version will be shipped in Superagent v0.0.1
Lamini ditches Nvidia in favour of AMD
@LaminiAI, an AI startup, is using AMD GPUs instead of the more popular Nvidia GPUs to run large language models (LLMs) like Llama-2 for customers
Beyond the toy: for larger models & production use, we offer paid plans.
But the free version is plenty powerful to run a bunch of experiments and get a feel for finetuning.
Share our free-tier GPUs nicely please ♥️
Give it a spin:
You can do the same things as GPT-4 Turbo on every open-source LLM today. @LaminiAI does it all: 🚀
- Structure: Return valid JSON
- Speed: Make multiple function calls at once
- More knowledge: Retrieval built-in, with finetuning
- Longer context: Extend context windows (~128k,
It's #Snowday! Lamini has integrated with @SnowflakeDB 🦙❄️
Now, you can easily deploy & finetune large language models inside Snowflake 🚀
👉 See a demo:
👀 Read Snowflake's announcement:
Guarantee Valid JSON Output with @LaminiAI
----
Why structured JSON output is so hard 🤔
LLMs are largely based on the transformer architecture, which uses an auto-regressive generator: text is split into tokens, and the model emits one token at a time, with no built-in guarantee that the sequence forms valid JSON. The LLM
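The fix, in toy form: instead of hoping the model emits valid JSON, mask out any next token that would break JSON validity. The sketch below fakes the model with a ranked token list; a real schema generator (like the one described in the post) applies the same kind of mask to the model's logits.

```python
import json

def is_valid_json_prefix(s: str) -> bool:
    """True if s is complete JSON, or fails only because the input ended early."""
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError as e:
        return e.pos >= len(s)

# Fake "logits": the model's next-token preferences, best first.
preferences = ['Sure, here is the JSON: ', '{', '"answer"', ':', '"42"', '}']

out = ""
while preferences:
    # Mask: keep only tokens that preserve JSON-prefix validity.
    allowed = [t for t in preferences if is_valid_json_prefix(out + t)]
    if not allowed:
        break
    tok = allowed[0]          # greedy pick among the surviving tokens
    preferences.remove(tok)
    out += tok

print(out)  # {"answer":"42"} -- the chatty preamble was masked out
```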
The world of language models is fast-evolving.
Join innovators amid the rise of domain-aware LLMs and hear real-world advice on leveraging language models in commercial applications.
Tap into the conversation w/ @LangChainAI, @LaminiAI, @GenAICollective, @UnstructuredIO & more
Today we’re releasing Code Llama, a large language model built on top of Llama 2, fine-tuned for coding & state-of-the-art among publicly available coding tools.
Keeping with our open approach, Code Llama is publicly available now for both research & commercial use.
More ⬇️
@bentossell @LaminiAI - $99/mo for several custom finetunes/LoRAs.
We also have customers doing continued pretraining and pretraining from scratch (more than $99/mo, less than 3M 🙃)