Curious about Llama 2? Here's a fun feature we shipped last week: automatically convert your GPT-3.5 prompt to the Llama 2 format with best practices!
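For context on what such a conversion involves, here's a minimal sketch of mapping OpenAI-style chat messages onto the Llama 2 chat template (the `[INST]` / `<<SYS>>` tags from Meta's reference format). The helper name and example messages are mine, not the tool's actual implementation:

```python
# Sketch: convert an OpenAI-style message list to a Llama 2 chat prompt.
# Handles a single system message plus user turns; helper name is illustrative.
def to_llama2_prompt(messages):
    system = ""
    user_parts = []
    for m in messages:
        if m["role"] == "system":
            system = m["content"]
        elif m["role"] == "user":
            user_parts.append(m["content"])
    user = "\n".join(user_parts)
    if system:
        return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
    return f"<s>[INST] {user} [/INST]"

prompt = to_llama2_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this article."},
])
```

The real feature also applies formatting best practices; this only shows the basic template shape.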
Play with Llama 2, Claude 2, and GPT models at 😉
Sorry to all the chefs out there, this recipe is only an example 😅
But if you are trying to build an AI-first product on top of LLM models, fine-tuning is definitely the answer!
I've now heard from two different sources about fine-tuning hyperpersonalized LoRAs on a per-character and possibly per-user basis.
I expect we'll see much more of this! LoRAs aren't much more expensive to serve than base models.
Just dropped a new Mistral variant "Mistral 7B Fine-Tune Optimized." Built to be a very strong base for downstream fine-tuning.
On our evals, fine-tunes trained on it are stronger than fine-tunes on any other 7B base model, and even beat GPT-4. 🙂
We just officially launched as a YC company, and announced our fine-tuning functionality!
With OpenPipe fine-tuning you can automatically convert your expensive LLM prompt into a cheap, fast fine-tuned model.
Check out more details in our launch at
Yesterday we released new versions of our Python and JS SDKs. Our fine-tuned models now support all the latest OpenAI features, including "tool calls" (the new version of function calls)!
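As a reminder of what that request shape looks like, here's a hedged sketch of the `tools` field the OpenAI API introduced alongside tool calls (replacing the older `functions` field). The model id and the `get_weather` function are placeholders, not real OpenPipe identifiers:

```python
# Sketch of an OpenAI-style chat request using the newer "tools" field.
# The function schema follows JSON Schema, as the OpenAI API expects.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # placeholder function
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

request = {
    "model": "my-fine-tuned-model",  # placeholder model id
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
}
```

A fine-tuned model served behind an OpenAI-compatible SDK can accept this payload unchanged, which is what makes the drop-in swap possible.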