Rick Lamers

@RickLamers

4,300 Followers
813 Following
327 Media
2,112 Statuses

👨‍💻 AI Research & Engineering @GroqInc . Angel investor. I publish technical resources about LLMs every week. Opinions are my own.

Join 5,575+ readers →
Joined July 2009
@RickLamers
Rick Lamers
3 months
I’ve been leading a secret project for months … and the word is finally out! 🛠️ I'm proud to announce the Llama 3 Groq Tool Use 8B and 70B models 🔥 An open source Tool Use full finetune of Llama 3 that reaches the #1 position on BFCL beating all other models, including
Tweet media one
74
236
1K
@RickLamers
Rick Lamers
1 year
Couldn't get access to ChatGPT Code Interpreter so I wrote my own! And Open Sourced it of course 🕺
44
221
1K
@RickLamers
Rick Lamers
1 year
Never write a single shell command ever again! Today I'm releasing Shell AI ✨, an open source CLI you can `pip install shell-ai` right now to run things like `shai git diff but without the json blobs` MIT licensed. Install, fork, PR & have fun!
37
191
1K
@RickLamers
Rick Lamers
1 month
Interesting idea from @karpathy at CUDA MODE: can LLMs become compilers such that we can skip building (imperfect) abstractions through frameworks and libraries?
35
27
750
@RickLamers
Rick Lamers
1 year
If you’re manually prompting you probably want to start thinking about meta prompting strategies that allow you to treat prompting as a programming problem instead of a string manipulation problem. DSPy is a library that takes a page out of PyTorch’s module based API for
Tweet media one
4
58
366
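The "prompting as programming" idea in the tweet can be sketched in a few lines. This is a hypothetical, minimal API for illustration only, not DSPy's actual interface: prompts become composable modules with a forward() pass, the way PyTorch composes layers, instead of hand-edited strings.

```python
# Illustrative sketch of meta prompting (hypothetical API, not DSPy's):
# each prompt is a module; modules compose into a pipeline.

class PromptModule:
    """A prompt template plus a forward() that renders it."""
    def __init__(self, template):
        self.template = template

    def forward(self, **fields):
        return self.template.format(**fields)

class Summarize(PromptModule):
    def __init__(self):
        super().__init__("Summarize in one sentence:\n{text}")

class Translate(PromptModule):
    def __init__(self):
        super().__init__("Translate to French:\n{text}")

class Pipeline:
    """Compose modules the way PyTorch composes layers."""
    def __init__(self, *modules):
        self.modules = modules

    def forward(self, text):
        # In a real system each rendered prompt would go to an LLM;
        # here we only chain the rendered strings to show the structure.
        for module in self.modules:
            text = module.forward(text=text)
        return text

pipe = Pipeline(Summarize(), Translate())
rendered = pipe.forward("LLMs are neural networks trained on text.")
```

The payoff of this structure is that optimizing a prompt becomes editing one module, not hunting through concatenated strings.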
@RickLamers
Rick Lamers
3 months
I strongly feel that users should be able to steer which X posts they see based on their personal preferences. 👨‍💻 Which is why I've created an open source browser extension that uses fast LLM evaluation on posts and auto-hides them when your personal filter crosses the threshold
Tweet media one
Tweet media two
Tweet media three
Tweet media four
21
47
312
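The threshold-based auto-hide described above can be sketched as follows. The scorer here is a keyword stub standing in for the fast LLM call, and all names are hypothetical, not the extension's actual code.

```python
# Minimal sketch of auto-hiding posts once a personal filter score
# crosses a threshold (LLM call replaced by a keyword stub).

def block_score(post, muted_topics):
    """Stub for an LLM judgment: fraction of the user's muted topics
    that appear in the post (0.0 = fine, 1.0 = definitely hide)."""
    terms = [t.strip().lower() for t in muted_topics.split(",")]
    hits = sum(term in post.lower() for term in terms)
    return hits / len(terms)

def filter_timeline(posts, muted_topics, threshold=0.5):
    """Keep only posts whose score stays below the personal threshold."""
    return [p for p in posts if block_score(p, muted_topics) < threshold]

timeline = [
    "New open source LLM release with weights on HF",
    "Crypto giveaway!!! click here",
]
shown = filter_timeline(timeline, muted_topics="crypto, giveaway")
```

In the real extension the score would come from a low-latency LLM evaluating the post against free-form preferences rather than a keyword list.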
@RickLamers
Rick Lamers
6 months
Frontier level Tool Calling now live on @GroqInc powered by Llama 3 🫡 Outperforms GPT-4 Turbo 2024-04-09 and Claude 3 Opus (FC version) in multiple subcategories At 300 tokens/s 🚀 I've personally been working on this feature, and man, the new Llama is good!
Tweet media one
21
41
302
@RickLamers
Rick Lamers
1 year
Thank you @swyx for organizing the AI Engineer conference 🙏 Here are the key takeaways: 1. Better RAG = better LLM apps. If you’re not moving beyond the basics you’re leaving performance on the table. 2. Structured outputs (Pydantic, TypeScript, OpenAI Functions) and
Tweet media one
8
26
229
@RickLamers
Rick Lamers
1 year
300 minutes of audio in 10 minutes. Open question: how much better do LLMs' internal models of the data-generating processes of human thought become when fed hundreds of billions of tokens of humans thinking out loud (e.g. podcasts)?
5
30
218
@RickLamers
Rick Lamers
2 months
60ms! Wait whattttttttttttttttt
Tweet media one
@GroqInc
Groq Inc
2 months
We're expanding our GroqCloud support to image, audio & text. With LLaVA v1.5 7B, developers & businesses can tap into the vast potential of multimodal AI, enabling innovative applications that combine visual, auditory & textual inputs. Read more here:
Tweet media one
18
49
286
6
13
211
@RickLamers
Rick Lamers
8 months
GPT-4 consistently getting outclassed on code understanding was not on my list for Q1 ’24. Great start of the year for AI 🔥 Must read 120K token task 👇
7
24
181
@RickLamers
Rick Lamers
8 months
Breaking! Mistral Next is on LMSYS Chat and seems to outperform Gemini Ultra. Vibe check:
Tweet media one
Tweet media two
8
20
156
@RickLamers
Rick Lamers
2 years
@nonmayorpete Given GPT-3 equivalent pricing of $0.02 per 1k tokens, avg ChatGPT answer length of 150 tokens, $150B profit would require every human to ask a question every day for the next 34 years assuming 50% margin.
28
3
127
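The tweet's back-of-envelope arithmetic checks out and can be reproduced directly. The world population figure (~8 billion questions per day, one per human) is my assumption; the other numbers come from the tweet.

```python
# Reproducing the tweet's estimate: years of one question per human
# per day needed to reach $150B profit at 50% margin.

price_per_1k_tokens = 0.02      # USD, GPT-3-equivalent pricing
tokens_per_answer = 150         # average ChatGPT answer length
margin = 0.50                   # assumed gross margin
target_profit = 150e9           # USD
questions_per_day = 8e9         # one question per human per day (assumed)

profit_per_answer = tokens_per_answer / 1000 * price_per_1k_tokens * margin
answers_needed = target_profit / profit_per_answer
years = answers_needed / questions_per_day / 365
```

This lands at roughly 34 years, matching the figure in the tweet.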
@RickLamers
Rick Lamers
6 months
Llama 3 70B 300 Tokens/sec letsgoooooo 🚀
Tweet media one
7
18
125
@RickLamers
Rick Lamers
3 months
Console: Blog post: Hugging Face 🤗:
6
14
87
@RickLamers
Rick Lamers
4 months
Model merging is nuts, check out this family tree :0
Tweet media one
8
5
86
@RickLamers
Rick Lamers
23 days
My take on Reflection 70B is simple: the scores I'm seeing on HumanEval, GPQA and MMLU are interesting and suggest that "training for test-time-inference CoT" is working. Glad everything is open source so the community can dive deep to see if this is some weird form of
@csahil28
Sahil Chaudhary
23 days
On September 5th, @mattshumer_ announced Reflection 70B, a model fine-tuned on top of Llama 3.1 70B, showing SoTA benchmark numbers, which was trained by me on Glaive generated data. Today, I'm sharing model artifacts to reproduce the initial claims and a post-mortem to address
22
47
518
5
3
80
@RickLamers
Rick Lamers
3 months
This is a fantastic new standard being set by @GoogleDeepMind with Gemma and they should be applauded for it. Should be the new bar for calling a model open weights (open source bar is even higher, and understandably not always an option for org creating the model). cc
@xhluca
Xing Han Lu
3 months
OTH Gemma 2's ToU has unrestricted use for the output, which means models trained on Gemma-2 output can be used for anything:
Tweet media one
1
2
47
4
11
76
@RickLamers
Rick Lamers
3 months
Llama 3.1 🫡
Tweet media one
3
7
79
@RickLamers
Rick Lamers
3 months
Demo:
3
7
79
@RickLamers
Rick Lamers
6 months
Learn more about what we're up to at @GroqInc around tool use/function calling from today's AMA. h/t to @karpathy for discussing his LLM OS ideas publicly - they are a big contributing factor to our vision for driving low-latency agentic loops with Groq LPUs!
1
8
75
@RickLamers
Rick Lamers
1 year
@xiao_ted Maybe MSR should have shipped a ChatGPT level product based on the fruits of their research and leadership would have taken a different stance.
5
0
67
@RickLamers
Rick Lamers
4 months
I absolutely LOVE this! Agentless baselines should be mandatory for anyone claiming to have found an agentic approach that is better than direct model prompting. If we don’t run ablations, how will we learn collectively what works and doesn’t?
@LingmingZhang
Lingming Zhang
4 months
Introducing OpenAutoCoder-Agentless😺: A simple agentless solution solves 27.3% GitHub issues on SWE-bench Lite with ~$0.34 each, outperforming all open-source AI SW agents! It's fully open-source, try it out: 🧑‍💻 📝
Tweet media one
20
132
649
4
5
64
@RickLamers
Rick Lamers
8 months
Proud to stand by @sundeep @GavinSherry and the rest of the incredible @DefinitiveIO team and start an incredible journey at @GroqInc 🔥🚀
@GroqInc
Groq Inc
8 months
With accelerated growth at @GroqInc we're excited to announce the acquisition of @DefinitiveIO . Co-founder and CEO @sundeep will head and scale our GroqCloud™ business unit to meet increasing demand for our revolutionary AI inference technology. Read more:
Tweet media one
25
24
170
11
4
63
@RickLamers
Rick Lamers
1 year
@stanfordnlp has released a framework for composing retrieval and language models, no need to re-invent the wheel when prompt engineering for knowledge heavy use cases, check out Demonstrate-Search-Predict framework for Python
3
18
59
@RickLamers
Rick Lamers
5 months
I shipped a thing! On Friday, haha yes I’m crazy
@GitMaxd
Git Maxd
5 months
. @GroqInc API support for combining streaming with tool use has just been released. They quietly announced it on their Discord just in time for the weekend 🔥
Tweet media one
2
11
53
7
3
51
@RickLamers
Rick Lamers
4 months
@bindureddy No attribution? Smh
8
0
46
@RickLamers
Rick Lamers
6 months
Phi-3 is laundering OpenAI’s proprietary data for us. *flies away*
3
3
44
@RickLamers
Rick Lamers
1 year
Hackathon pre-game at @agihouse_org with @jerryjliu0 from @llama_index 🔥
Tweet media one
1
3
42
@RickLamers
Rick Lamers
1 year
it now plots too!
Tweet media one
6
7
41
@RickLamers
Rick Lamers
9 months
Here is why I think Kevin is wrong. Story time! A software veteran once confessed to me that they had wasted a few years reinventing a worse version of git. When they found out about git and took a closer look they realized how much better the abstractions in git were and how
7
3
42
@RickLamers
Rick Lamers
8 months
I know @tldraw went viral with sketch-to-AI but can we just appreciate the attention to detail of their canvas for a moment 🙇‍♂️ h/t to @steveruizok for giving a great presentation in AMS yesterday - the company has a beautiful engineering+tinkering soul seen in few (Framer,
@tldraw
tldraw
11 months
let's go
145
1K
7K
3
7
40
@RickLamers
Rick Lamers
1 month
@recursiverealms @karpathy The idea is you can let the LLM compile from a much higher level “source code” than the current code that goes into a compiler. Can still be somewhat formal, but definitely doesn’t need to be as detailed as current level of expression (think Python, React as current, think
3
0
39
@RickLamers
Rick Lamers
2 years
Tweet media one
0
2
37
@RickLamers
Rick Lamers
3 months
This was the easiest deploy experience ever 🤯, they're seriously cooking at Hugging Face 🤗
@Gradio
Gradio
3 months
Congratulations to @RickLamers , @GroqInc , and @GlaiveAI for an amazing release of Llama 3 Groq Tool Use models and demo! You can access the demo on @huggingface Spaces:
0
5
27
0
7
38
@RickLamers
Rick Lamers
2 years
Why we've been doing data conferences all wrong, and how a new grassroots conference is getting it right. A 🧵 1/
2
7
36
@RickLamers
Rick Lamers
1 year
🚨 We've been working on something very exciting at @DefinitiveIO and I can finally show it to you: Code Indexer Loop, a fully automated vector-based indexer for your source code. Apache 2.0, continuous code chunking, embedding & indexing, based on
3
9
34
@RickLamers
Rick Lamers
2 years
Hackathon in Amsterdam about generative AI. Awesome talk about prompt engineering from folks at Anthropic and lots of LangChainnnnn 🔥
Tweet media one
2
3
35
@RickLamers
Rick Lamers
9 months
Came across this neat project: a 💯% local grammar focused text rewriter for macOS, based on Mistral. The fact that this is so easy is mind-blowing. It's a lot of "standing on the shoulders of giants", for sure. Completely free (MIT) too 🙌 h/t @ivanfioravanti @MistralAI
@ivanfioravanti
ifioravanti
9 months
Autogram-ollama and Autogram-mlx for your Apple Silicon Devices are here! Open source, free, easy and fast grammar checker powered by - Model: Mistral 7B Instruct 0.2 @MistralAI - Ollama @ollama - Apple MLX @apple Go, play, copy, fork, experiment, have fun! 🎉🥳
Tweet media one
6
18
156
3
4
33
@RickLamers
Rick Lamers
9 months
@karpathy Interestingly that makes “how boring is this?” a great heuristic for quality.
0
0
35
@RickLamers
Rick Lamers
2 months
Qwen2-VL-72B slaps!
Tweet media one
1
2
34
@RickLamers
Rick Lamers
11 months
The big breakthrough powered by Q* explained by OpenAI founder John Schulman himself: Source: r/LocalLLaMA u/Mrleibniz
Tweet media one
1
6
34
@RickLamers
Rick Lamers
2 years
Today is a BIG 🤯 day for us. We’re launching Orchest 2.0: the best version we have ever shipped. Support us on Product Hunt and give it a try
4
5
31
@RickLamers
Rick Lamers
1 year
You've probably all seen the phi-1 model from the MSFT paper "Textbooks Are All You Need" While a lot of attention has rightly gone to the efficiency and hence affordability (go Open Source!) that can be achieved in terms of training best-in-class language
3
8
30
@RickLamers
Rick Lamers
7 months
Tools are All You Need 😄 Any and all feedback is welcome!
@GroqInc
Groq Inc
7 months
Tool Use/Function Calling (beta) for Groq API is now available! 🚀 This highly anticipated feature allows models available on GroqCloud to take user-defined functions as inputs and generate structured output to invoke them from external tools / codebases. .
15
36
282
3
1
31
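The announced feature drives loops like the following. This is a minimal sketch with the model call stubbed out; the tool name and schema are invented for illustration, and the real Groq API follows the OpenAI-style tools/function-calling shape rather than this exact code.

```python
import json

# Sketch of a tool-use loop: the model emits a structured tool call,
# the client executes it, and the result is fed back for a final answer.

TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def fake_model(messages):
    """Stub: a real call sends `messages` plus tool schemas to the API,
    which may return a structured tool call instead of plain text."""
    if messages[-1]["role"] == "user":
        return {"tool_call": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Amsterdam"})}}
    # After seeing the tool result, answer in natural language.
    result = json.loads(messages[-1]["content"])
    return {"content": f"It is {result['temp_c']}C in {result['city']}."}

def run(user_msg):
    messages = [{"role": "user", "content": user_msg}]
    reply = fake_model(messages)
    while "tool_call" in reply:                  # the agentic loop
        call = reply["tool_call"]
        args = json.loads(call["arguments"])
        result = TOOLS[call["name"]](**args)     # execute the tool locally
        messages.append({"role": "tool", "content": json.dumps(result)})
        reply = fake_model(messages)
    return reply["content"]

answer = run("What's the weather in Amsterdam?")
```

At 300 tokens/s the round trips in this loop become cheap enough to run many tool calls per user interaction, which is the low-latency agentic-loop argument made elsewhere in this feed.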
@RickLamers
Rick Lamers
8 months
Pro tip! Use @anysphere 's Cursor IDE with @AnthropicAI 's Claude models using @OpenRouterAI
Tweet media one
2
4
31
@RickLamers
Rick Lamers
5 months
This is one crazy AI-infused VS Code fork. Cursor and Replit just got company 😮
Tweet media one
4
6
30
@RickLamers
Rick Lamers
8 months
. @GroqInc fast LLM inference is a practical example of the Jevons paradox. "technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource
3
2
28
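The Jevons claim hinges on demand elasticity, which a toy model makes concrete. All numbers below are illustrative assumptions, not Groq data: a 10x efficiency gain cuts resource use per query, but if demand is elastic enough, total consumption still rises.

```python
# Toy constant-elasticity model of the Jevons paradox: efficiency gain
# lowers cost per use, demand responds, and total resource use may rise.

def total_resource_use(efficiency_gain, demand_elasticity, base_demand=100.0):
    cost_factor = 1.0 / efficiency_gain               # cost per use falls
    demand = base_demand * cost_factor ** (-demand_elasticity)
    return demand / efficiency_gain                   # resource per use falls too

baseline = total_resource_use(efficiency_gain=1.0, demand_elasticity=1.5)
elastic = total_resource_use(efficiency_gain=10.0, demand_elasticity=1.5)     # rises
inelastic = total_resource_use(efficiency_gain=10.0, demand_elasticity=0.5)   # falls
```

With elasticity above 1, demand grows faster than the per-use savings, so the paradox kicks in; below 1, cheaper inference simply means less total compute.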
@RickLamers
Rick Lamers
3 months
Powered by the cheapest and fastest inference API of course ;-) @GroqInc
1
2
28
@RickLamers
Rick Lamers
22 days
LPU goes brrrr 🌬️
@ArtificialAnlys
Artificial Analysis
22 days
Groq has set a world record in LLM inference API speed by serving Llama 3.2 1B at >3k output tokens/s 🏁 Meta's Llama 3.2 3B and 1B models are well positioned for two categories of use-cases. Firstly, applications running on edge devices or on-device, where compute resources are
Tweet media one
6
23
131
0
2
28
@RickLamers
Rick Lamers
1 year
Breaking! GitHub Copilot is experimenting with a new skill-based agent. The endpoint lists these skills: 1. Code search 2. Find snippets 3. Find symbols from file 4. Ping 5. Read blob 6. Recent Changes 7. Docs search And the model is apparently called “copilot-gpt-4-2”
3
1
26
@RickLamers
Rick Lamers
1 month
@jeremyphoward @__tinygrad__ Sounds like a great candidate for Saw yesterday at PyTorch conference how torch.compile, packing, activation checkpointing, (q)loras allows you to get real ambitious real fast :)
0
0
26
@RickLamers
Rick Lamers
2 years
I made a thing while at @DataCouncilAI . @lloydtabb made me think: wouldn’t it be interesting if you could write SQL only ETL pipelines with a better SQL. Better SQL you ask? Enter Malloy! Check out this Malloy pipeline built on top of Node-RED
2
4
25
@RickLamers
Rick Lamers
2 years
"Join relations won't affect aggregate calculations." @lloydtabb at Data Council. I'm so ready to abandon SQL for Malloy 👉
1
5
25
@RickLamers
Rick Lamers
3 months
@Teknium1 @intrstllrninja @NousResearch Definitely think in general you guys are doing the community a service with your public work on LLMs and want to acknowledge that. Added a special mention to both HF repos 🙌
Tweet media one
1
2
23
@RickLamers
Rick Lamers
6 months
Do not underestimate how much users care about responsiveness 👇 Great data 📈👏
@consolelogwill
will
6 months
Put Llama3 from @GroqInc live in production. The speed boost is incredible, but what's more interesting is our average session duration has jumped from 18 to around 31 minutes! Fast Responses = Better Experience = Stickier product. Thanks @sundeep !
Tweet media one
5
11
113
0
2
24
@RickLamers
Rick Lamers
3 months
Shoutout and h/t to @GlaiveAI and @GroqInc ofc
1
0
23
@RickLamers
Rick Lamers
6 months
I'm excited to announce that I'll be speaking at the AI Quality Conference in SF. What about? My favorite topic: evaluating LLM tool use 🙌 Let me know if you're in town!!
Tweet media one
1
3
22
@RickLamers
Rick Lamers
1 year
Soooo many ideas to try. It's addictive.
Tweet media one
0
2
22
@RickLamers
Rick Lamers
1 month
Full house at CUDA MODE IRL. Thanks @Accel for hosting and @neurosp1ke for organizing 🔥
Tweet media one
0
0
22
@RickLamers
Rick Lamers
4 months
I gave a talk at @aiDotEngineer about Tool Use with Open-Source LLMs and luckily many of the other interesting talks at the event were recorded. I've listed some of the most interesting ones in this week's newsletter, check it out:
Tweet media one
4
2
21
@RickLamers
Rick Lamers
1 year
If you’re working on LLM powered products you want to watch this one: Why? Oh I don’t know, maybe because this guy is responsible for the most successful LLM powered product in the world: GitHub Copilot. It’s absolute gold, I promise.
3
4
20
@RickLamers
Rick Lamers
5 months
I just can't get over how much I like everything that @cartesia_ai is doing. Their website. Their tech. Their papers. Super bullish! No affiliation.
3
2
21
@RickLamers
Rick Lamers
1 year
Places that host OSS models with pricing per token👇 Fireworks AI Together AI OpenRouter Anyscale Endpoints Vertex AI You're welcome 🤗 Know of more? Leave them below!
1
1
21
@RickLamers
Rick Lamers
1 month
Great to see your model in the wild! They grow up so quick 🤗 `llama3-groq-70b-8192-tool-use-preview`
Tweet media one
@muratsutunc
Murat Sutunc
1 month
Ok now this is cool
0
1
6
0
2
20
@RickLamers
Rick Lamers
9 months
. @deepseek_ai is the best open source code generation model. Just announced: their next code model is based on MoE for even more efficient inference. Check out their 16B MoE Chat performance gap: Now imagine that efficiency jump for code: Honestly, I can't wait 🙌
Tweet media one
Tweet media two
1
1
18
@RickLamers
Rick Lamers
17 days
Congratulations John J. Hopfield and Geoffrey E. Hinton 🙇‍♂️
Tweet media one
1
1
19
@RickLamers
Rick Lamers
3 months
Haha this is pretty epic!
Tweet media one
@linoy_tsaban
Linoy Tsaban🎗️
3 months
A new image editing technique quietly landed on the hub 👀🤫 ✨Turbo Edit ✨ 🌬️blazing fast - works with as little as 3-4 steps ⚡️using SDXL Turbo ✍️ super clever approach for adapting edit friendly ddpm inversion to distilled & fast sampling models
3
15
100
4
1
19
@RickLamers
Rick Lamers
3 months
Hit me up if you need a referral, the guys from @GlaiveAI seriously cook!
@naklecha
naklecha
3 months
we worked with groq to train the sota open source function calling model, yes literally the best! + we rank #1 on the berkeley's function calling leaderboard. if you or your company needs custom language models, try @glaiveai . also, we do highly custom language models, dm us :)
3
9
97
1
0
18
@RickLamers
Rick Lamers
5 months
This will be _a lot_ of fun. The low latency Speech-to-Speech starter project that I’ve been developing for this hackathon has been continuously blowing my mind, truly where the Groq latency shines. Includes early access to our very low latency Whisper model 👀
@GroqInc
Groq Inc
5 months
We're excited to cosponsor the UC Berkeley AI Hackathon. Don't miss our workshop by @RickLamers and increased API rates during the event. Stop by our table to say hi. We can't wait to see what you build on Groq!
Tweet media one
0
4
18
3
4
18
@RickLamers
Rick Lamers
23 days
What I'm seeing when running benchmarks: boosts on MMLU, GPQA, and HumanEval compared to vanilla Instruct Llama 3.1 70B. MATH and GSM8K were misreported earlier because of a bug in the LLM-as-a-judge code, as I understand from Sahil.
Tweet media one
1
3
16
@RickLamers
Rick Lamers
2 years
I rarely try to give people FOMO but if you're not going to Normconf you're ... missing out. P.S. we couldn’t be more proud to be a gold sponsor of the conference as we consider it a vote for this fresh and positive direction for the data industry. 11/11
2
2
17
@RickLamers
Rick Lamers
1 year
GPT-3.5: replaces Google search GPT-4: replaces Stack Overflow Who's with me? 😄
7
0
17
@RickLamers
Rick Lamers
1 year
We're in the LLM build phase, everyone is building and a lot of tools are coming out to expedite the process. Discover some you might not have heard about in this week's CoWI!
4
5
17
@RickLamers
Rick Lamers
1 year
This week is all about SkyPilot: a project from Berkeley that makes it easy to launch compute jobs across heterogeneous cloud resources: bare metal k8s, AWS, GCP, Azure, … A fine-tuning and a serving example should get you underway! h/t @skypilot_org
5
3
17
@RickLamers
Rick Lamers
1 year
This project lets you expose your local codebase (vectordb-indexed) to ChatGPT's GPT-4 through `localhost` ChatGPT plug-ins. Try it out & study the code. This is a neat project! @loladotdev 👏
3
1
17
@RickLamers
Rick Lamers
1 year
‼️ You Don't Need To Depend On Proprietary LLMs! Open Source LLMs are becoming better because of: - higher quality data; - fewer bits during training/inference; - inference sampling optimizations; - decoding constraints; - stronger base models; - combining large and small
5
4
16
@RickLamers
Rick Lamers
1 month
@remilouf @karpathy Yes, insert llm.c codebase in the context window and ask for llama 3.1 in CUDA/C (llm.c is gpt-2)
2
1
16
@RickLamers
Rick Lamers
3 months
Reminds me of good software engineering practices, remove complexity one PR at a time 🔥 +329 -615,669
Tweet media one
@sundeep
sunny madra
3 months
Incredible engineering.
1
1
21
3
2
16
@RickLamers
Rick Lamers
1 month
PyTorch conference let’s gooo!
Tweet media one
1
2
16
@RickLamers
Rick Lamers
4 months
@fchollet Do you believe we can formulate a test that if passed implies we have AGI? Would make it easier to understand what people mean by this AGI thing…
5
1
16
@RickLamers
Rick Lamers
28 days
🫡
@ozenhati
Hatice Ozen
28 days
Friday feature drop: Llama 3.2 11B Vision and assistant message pre-filling now available on Groq! 🚀
Tweet media one
15
13
156
2
1
16
@RickLamers
Rick Lamers
1 year
Implication is what @AndrewYNg has been saying for a while, focus on the training data. Mentally model transformers as lookup tables with some margin around the samples.
@martin_casado
martin_casado
1 year
Ok great. Can we all chill the fuck out now and throw our strong support behind building systems using this amazing new computer science primitive?
11
14
139
1
2
16
@RickLamers
Rick Lamers
8 months
Very interesting work by @OpenAI on interpreting the inner workings of Transformers 👏
@janleike
Jan Leike
8 months
This is still an early stage research tool, but we are releasing to let others play with and build on it! Check it out:
9
83
569
1
2
15
@RickLamers
Rick Lamers
1 year
@swyx @FanaHOVA The paper gives a nice overview of instruct datasets!
Tweet media one
1
0
15
@RickLamers
Rick Lamers
10 months
Introducing: BuildAnything This was so much fun to build! It feels like true magic when using it 🪄 Generate any HTML page or app and see it appear fully functional right in front of you. MIT licensed & on GitHub!
2
3
14
@RickLamers
Rick Lamers
8 months
175B model on 300B tokens in under 2 days 🏃
@_akhaliq
AK
8 months
ByteDance presents MegaScale Scaling Large Language Model Training to More Than 10,000 GPUs present the design, implementation and engineering experience in building and deploying MegaScale, a production system for training large language models (LLMs) at the scale of more than
Tweet media one
11
83
433
1
0
13
@RickLamers
Rick Lamers
2 years
@angelofuture @madelinelawren Didn’t stop Microsoft though! Mkt cap 2.01T
Tweet media one
1
0
13
@RickLamers
Rick Lamers
1 year
@skypilot_org I want to give a shoutout to @jxnlco for helping folks with fine-tuning. One of the pesky problems when fine-tuning is generating the right training data traces to improve task performance. The Instructor library can make your life easy here by
1
3
14
@RickLamers
Rick Lamers
5 months
Input token processing speed is an underrated metric 👀 As in-context learning ability increases this becomes ever more important.
@GroqInc
Groq Inc
5 months
Keeping up or keeping track? The team is working hard nights and weekends. They keep telling us there's more to come, and we believe them. Thanks @sundeep for the screen grab! 30k t/s Input. 🔥⚡️ Llama 3, 8b.
Tweet media one
10
9
113
1
1
14
@RickLamers
Rick Lamers
1 year
The pace of innovation in Large Language Models & ML is truly mind-blowing 🤯 Even as a full-time Machine Learning Engineer I find staying up-to-date to be challenging. 1/5 🧵
1
3
14
@RickLamers
Rick Lamers
9 months
The vLLM team @zhuohan123 @woosuk_k @simon_mo_ et al. is doing incredible work with the vLLM project. These were their priorities 3 months ago and boy did they deliver: their goals & more. E.g. prefix caching; especially useful if you have beefy system prompts 🙌🏻
Tweet media one
2
0
14
@RickLamers
Rick Lamers
2 years
While these headlines can spark the imagination, I'd argue that this has practically nothing to do with reality for nearly all data professionals working on actual data initiatives in their companies. 3/
1
0
14
@RickLamers
Rick Lamers
2 years
@malcolmtyson @nonmayorpete What eventual margin are you suggesting? Google search has become orders of magnitude cheaper since launch but Alphabet sits at 55% gross margin (granted this is more than search).
3
0
13
@RickLamers
Rick Lamers
6 months
Visualize gradient descent in 3D, very cool project!
2
2
13
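The loop such a visualizer animates is simple to state. This sketch computes the descent trajectory on an assumed 2D bowl f(x, y) = x**2 + 3*y**2 (my example, not the linked project's code); a 3D renderer would then draw the path over the surface.

```python
# Plain gradient descent on f(x, y) = x**2 + 3*y**2, recording the
# trajectory that a 3D visualizer would animate.

def grad(x, y):
    # Analytic gradient of f(x, y) = x**2 + 3*y**2
    return 2 * x, 6 * y

def descend(x, y, lr=0.1, steps=100):
    path = [(x, y)]
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - lr * gx, y - lr * gy
        path.append((x, y))
    return path

path = descend(2.0, -1.5)
final_x, final_y = path[-1]
```

Note the anisotropy: the y direction (curvature 6) converges much faster than x (curvature 2), which is exactly the kind of behavior a 3D view makes intuitive.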
@RickLamers
Rick Lamers
1 year
Hyper-parameters are tricky to dial in for optimal performance. Luckily we can build on existing benchmarks that match closely with the task we care about for parameter value selection. To contribute to the community's understanding @DefinitiveIO is releasing a highly
Tweet media one
Tweet media two
0
7
12