George Morgan

@vr4300

1,628
Followers
362
Following
87
Media
1,884
Statuses

Building the orb @symbolica. Previously @tesla_ai.

London
Joined November 2008
Pinned Tweet
@vr4300
George Morgan
22 days
Tweet media one
1
0
16
@vr4300
George Morgan
5 years
Tweet media one
3
88
354
@vr4300
George Morgan
1 year
@mckaywrigley If you typed even two of those names into ChatGPT I'm sure it could guess the movie by association.
5
0
53
@vr4300
George Morgan
6 years
Sometimes I get Bjarne Stroustrup confused with char *strdup().
2
13
47
@vr4300
George Morgan
5 years
Good progress on the N64 RDP fuzzer this weekend. And the video decoder is working too!
Tweet media one
Tweet media two
1
8
43
@vr4300
George Morgan
24 days
@theojaffee Still true
0
0
42
@vr4300
George Morgan
5 years
Here’s a picture of before the layout was cleaned up. You can see the FPGA that’s decoding the RCP’s VI bus and driving a 320x240 LCD panel.
Tweet media one
0
5
28
@vr4300
George Morgan
1 year
I think I just discovered a very strange text-davinci-002-render-sha artifact. If you type the letter A (in all caps) 373 times, with a space in between each A, the model will produce a hyper specific answer to a random question. Has anyone seen this before?
Tweet media one
6
2
30
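The repro above is easy to script. A minimal sketch of constructing that exact prompt (the count of 373 comes from the tweet; where you send it is up to you):

```python
# Construct the prompt described above: the letter "A" (capital),
# repeated 373 times, with a single space between each occurrence.
prompt = " ".join(["A"] * 373)

# Sanity checks on the construction.
assert prompt.count("A") == 373
assert len(prompt) == 373 * 2 - 1  # 373 letters + 372 separating spaces

print(prompt[:19])  # first ten A's: "A A A A A A A A A A"
```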
@vr4300
George Morgan
2 years
I'd place my bets that discretized (symbolic) systems will have the greatest chance of being the first to demonstrate these properties. Extremely early progress at Symbolica has already demonstrated this. As scale increases, discrete systems may be the most capable models yet.
5
1
23
@vr4300
George Morgan
7 months
Thanks for featuring @symbolica, @FortuneMagazine and @sharongoldman. We're super excited to build the future of AI. A future defined by structure, symbols, logic, and reasoning!
@sharongoldman
Sharon Goldman
7 months
NEW: Vinod Khosla @vkhosla is betting on a former Tesla autopilot engineer @vr4300, who quit to found @symbolica, which will build small AI models that can reason: “We love people coming from left field,” Khosla said in an exclusive chat with @fortune.
0
6
36
5
5
25
@vr4300
George Morgan
3 months
@ylecun I think this is approaching the problem from the wrong direction. If you can write a simulator why not just build the model correctly from induction to begin with?
4
0
24
@vr4300
George Morgan
3 months
Great to meet you @mmbronstein . GDL was a huge inspiration to me starting @symbolica . Your kind words mean a lot!
@mmbronstein
Michael Bronstein
3 months
@symbolica is perhaps the most intriguing ML startup out there
Tweet media one
Tweet media two
2
8
84
0
2
22
@vr4300
George Morgan
1 year
@mineekdev So glad to see people still doing stuff like this. I miss the iPhone Linux days!
0
0
20
@vr4300
George Morgan
3 months
Making the computer think from our new research lab in London
Tweet media one
3
0
20
@vr4300
George Morgan
1 year
@SullyOmarr What is the breakthrough that has enabled you to build an autonomous agent? Surely not using OpenAI APIs right?
2
0
18
@vr4300
George Morgan
11 months
It's becoming increasingly clear to me that @VictorTaelin is onto something huge with HVM. This is going to be a massive deal for AI & computing in the upcoming years.
@VictorTaelin
Taelin
11 months
HVM is becoming the world's fastest λ-calculator! For a perspective, let's perform a Radix Sort on a Scott Tree with millions of ints, vs state-of-art runtimes:
JavaScript (V8): 29.081s
Haskell (GHC): 11.073s
Kind (HVM): 2.514s
How is that possible? See below ↓
14
68
629
1
1
18
@vr4300
George Morgan
6 months
@VictorTaelin Seeding your brain with the correct abstractions to come to powerful conclusions is always the most time consuming part. It's not time wasted!
0
0
16
@vr4300
George Morgan
3 years
@henrycarless1 @CrabsAndScience @ChemicalKevy @NileRed2 @MrBeast It is called a A100374CT-ND. It's an insulation displacement connector used to attach the leads of the electronic match igniter to the board, as shown in this photo.
Tweet media one
Tweet media two
0
0
16
@vr4300
George Morgan
2 months
Proud to have collaborated with Stephen on this work!
@stephen_wolfram
Stephen Wolfram
2 months
What's really going on in machine learning? Just finished a deep dive using (new) minimal models. Seems like ML is basically about fitting together lumps of computational irreducibility ... with important potential implications for science of ML, and future tech...
Tweet media one
115
716
4K
1
2
17
@vr4300
George Morgan
7 months
This is precisely why we need to build structured, symbolic models. So that AI can reliably do the things that it's advertised as being capable of.
@random_walker
Arvind Narayanan
7 months
The crappiness of the Humane AI Pin reported here is a great example of the underappreciated capability-reliability distinction in gen AI. If AI could *reliably* do all the things it's *capable* of, it would truly be a sweeping economic transformation.
14
89
460
4
1
15
@vr4300
George Morgan
3 months
@FreyaHolmer Same. They have a great culture, where they trust in their employees to bring only people they trust. It works when it works.
0
0
15
@vr4300
George Morgan
4 years
@TubeTimeUS Do you ever stop working on projects? The amount of content here is staggering.
2
0
12
@vr4300
George Morgan
4 years
@lexfridman Or, also like computer science, you know you've found the correct tweet, friendship, love, marriage, or meaning of existence when you can reduce the computational complexity to O(1).
2
1
11
@vr4300
George Morgan
4 months
Synthetic data is an incredibly silly concept. If you can build a synthetic data generation engine, by extension it should be possible to directly build a model that understands the innate structure of the task, completely skipping the need for more training data to learn it.
@JacquesThibs
Jacques
4 months
I keep saying how underrated it is among alignment researchers
Tweet media one
9
6
71
3
0
14
@vr4300
George Morgan
6 years
In heated debate about dongle chaining:
"Build me a keyboard to display adapter." - me, confidently
"I think that's just a computer." - @harlanhaskins
I am foiled again.
0
2
14
@vr4300
George Morgan
2 years
@souplovr23 Plots like these are shadows of beautiful high dimensional structure that we will never be able to visualize. We can pattern match with symbols, but imagine having the ability to see data like this in its true form.
1
1
14
@vr4300
George Morgan
7 months
Much more to come!
@tangled_zans
Zanzi Tangle, now at Monoidal Cafe
7 months
Lots of category theory in the news lately
0
9
51
1
0
13
@vr4300
George Morgan
3 years
Tweet media one
1
0
13
@vr4300
George Morgan
7 months
Amazing how far @symbolica has come since I first met Tim at NeurIPS in 2022. I'm very excited for the upcoming MLST episode on categorical deep learning which will dive into the tech that we are building to pave the way towards structured symbolic cognition in machines!
@MLStreetTalk
Machine Learning Street Talk
7 months
Exciting times ahead for @symbolica - we caught up with Dr. Paul Lessard, a Principal Scientist in their founder team about their neurosymbolic/category theory approach to taming deep learning as a prelude to our special edition we filmed in London. Cameo from @vr4300
0
2
32
0
1
13
@vr4300
George Morgan
2 years
⚠️ Do not make a new programming language. I repeat. Do not make a new programming language. ⚠️
3
0
13
@vr4300
George Morgan
2 years
@0xmaddie_ Or it's just as simple as not having enough infrastructure to handle demand.
0
0
11
@vr4300
George Morgan
2 months
@bindureddy This is total nonsense, honestly. If they had a reasoning engine, wasting time and energy using it to generate data to train a larger non-reasoning engine model would be an absurd waste.
1
1
11
@vr4300
George Morgan
11 months
@cstanley Keep going strong. We're proud of you. 💪
2
0
11
@vr4300
George Morgan
3 months
"My CPU is not capable of performing branching or control flow. But don't worry! As long as I run enough instructions it is Turing complete!"
@srush_nlp
Sasha Rush
3 months
Starting out "LLMs just generate the next-word, and therefore cannot..." is a great way to signal you are not making a serious argument.
22
25
455
0
1
10
@vr4300
George Morgan
8 months
@miniapeur Computational statistics.
0
0
11
@vr4300
George Morgan
18 days
It was inspiring to see much of what I spent 4.5 years of my life working on launched yesterday at the Tesla We, Robot event. I'm proud of my friends and coworkers who pushed it over the finish line. Tesla is building the future. It continues to inspire me to do the same.
0
0
19
@vr4300
George Morgan
5 months
Yann is now based. I repeat. Yann is now based.
4
0
10
@vr4300
George Morgan
2 years
To reiterate something that I can't stress enough, models that will be capable of dynamic adaptation (true generalization) will necessarily need to define their own loss function. This is incompatible with gradient descent or any gradient based learning models we know of today.
4
1
9
@vr4300
George Morgan
5 years
1D = airline
2D = airplane
0
2
10
@vr4300
George Morgan
1 month
It's time to build machines that are actually capable of reasoning.
1
0
10
@vr4300
George Morgan
2 months
At this point I'm fairly confident that AGI will never be achieved by scaling a single system / architecture. It will most certainly involve bootstrapping many systems / architectures with themselves. The first architecture in this process hasn't yet been created. LLMs are not it.
0
0
10
@vr4300
George Morgan
11 months
@tim_zaman Pretty confident you don't need all the GPUs. You just need a new, powerful technique. :)
0
0
0
@vr4300
George Morgan
7 months
Category theory will change machine learning forever. It's the key to being able to describe symbolic reasoning.
0
0
9
@vr4300
George Morgan
15 days
Solving machine intelligence is the most pressing issue of our time. And the next after that: humanoid robotics
3
0
11
@vr4300
George Morgan
9 months
@svpino It blows my mind that people still think he knows what he's talking about.
4
0
9
@vr4300
George Morgan
1 month
Reasoning cannot magically emerge from data alone. You need the right architecture, the right data, and critically: the ability to interact with a mechanism that validates the result of your reasoning. Can you learn to write programs by looking only at code? No, you need to run
2
0
9
@vr4300
George Morgan
1 year
It gets weirder if you ask it what you just asked it.
Tweet media one
2
0
9
@vr4300
George Morgan
1 year
@nice_byte But it also makes incredibly difficult things easy.
1
0
9
@vr4300
George Morgan
3 years
Was a blast building the hardware and firmware for this project. If anyone is interested, I can post a thread on how we pulled it off! Go watch the video!
@WilliamOsman
WilliamOsman
3 years
I finished making all 456 wireless explosive charges for MrBeast's Squid Game. Go watch it!! I'll send a board to a random person who retweets this (does not include explosive)
Tweet media one
Tweet media two
Tweet media three
140
5K
33K
0
1
9
@vr4300
George Morgan
11 months
It's time to accelerate.
0
1
9
@vr4300
George Morgan
1 year
@ID_AA_Carmack It's getting there, especially with embedded support improving. SPARK charges licenses for the compiler, but also provides support. Rust is free, but changes rapidly. The biggest roadblock to using it in safety critical applications is lack of good LLVM support for hardened archs
0
0
9
@vr4300
George Morgan
1 year
@nice_byte Getting fearless concurrency / async threading working in C / C++ is a complete nightmare, for instance. Not to mention the crazy number of crates Rust has that you can just cargo add and be off to the races instead of dealing with compiler and linker errors for days.
1
0
9
@vr4300
George Morgan
11 months
There won't be *one* way to build AGI, similarly to how there isn't *one* way to derive the value of Pi. Instead, we will see many subtly different algorithms that yield AGI bootstrapping each other until we find an algorithm that optimizes for the hardware we have.
0
1
9
@vr4300
George Morgan
2 months
Could not imagine a worse take tbh
@elder_plinius
Pliny the Liberator 🐉
10 months
AGI will be achieved through prompt engineering at this point. 100% serious.
13
4
57
1
0
10
@vr4300
George Morgan
1 month
I don't buy the "it's the data not the architecture" argument at all. If that were the case, we'd already have AGI.
0
1
9
@vr4300
George Morgan
1 year
@mattmireles @OpenAI @ilyasut @sama What are you talking about? If AGI was achieved internally OpenAI's financial issues would be solved and the board would not have any complaints.
1
0
8
@vr4300
George Morgan
7 months
We're gearing up for the launch of something really amazing. Hope you're all excited for the future of symbolic AI. 👀
1
1
8
@vr4300
George Morgan
11 months
@hi_tysam Love the premise but this is like saying "the reduction of entropy" in a zip file is a world model. I don't disagree with the premise insofar as this is the fundamental information theoretic container of the world built by the model, but like with zip this isn't actually useful.
4
1
7
@vr4300
George Morgan
9 months
Things like this will be remembered as the most egregious misuse of compute in human history.
@JosephJacks_
JJ
9 months
Holy shit. Meta is training LLaMa 3 on 600,000 H100s.... That's $20 BILLION worth of GPUs.... L.F.G.
79
239
3K
1
0
9
@vr4300
George Morgan
3 years
@chipro Building tools is like building a factory that builds a product, often overlooked entirely or underestimated in complexity. Shoutout to the tool developers! 😁
1
0
0
@vr4300
George Morgan
5 years
@GregDavill I did this for my board!
Tweet media one
1
0
8
@vr4300
George Morgan
7 months
@akbirthko This take is operating at the wrong level of abstraction. Of course the brain is not doing category theory. Think of category theory more like the description language / compiler that provides a definition of what the brain is doing. It's more like the language than the program.
0
0
8
@vr4300
George Morgan
10 months
The greatest honor one can receive in the field of AI is inventing something that has already been invented by Schmidhuber.
0
0
8
@vr4300
George Morgan
1 year
@Plinz Almost certainly no chance. Plain and simple: LLMs can't learn online.
1
0
8
@vr4300
George Morgan
5 years
It’s nice and organized now. 😍
Tweet media one
2
0
8
@vr4300
George Morgan
25 days
Computer vision didn't work until convolution. Sequence modelling didn't work until recurrence. NLP didn't work until the transformer. But yea, it's just all scale and data right? And also, the transformer is definitely the last architecture.
@mrsiipa
maharshi
25 days
> flow matching outperforms diffusion
> llama model outperforms DiTs
> scaling data, and compute
does this mean data and scaling is the moat, and model architecture do not matter that much?
Tweet media one
4
4
73
0
1
9
@vr4300
George Morgan
5 years
@GregDavill And in dark mode of course. :)
Tweet media one
0
0
7
@vr4300
George Morgan
1 year
Interesting GPT-4V failure.
Tweet media one
0
0
7
@vr4300
George Morgan
2 years
With Yann's latest admission that LLMs aren't enough to get to AGI, it's surprising to see this kind of attitude. Why are these companies focused on copying what's already being done? A huge focus needs to be placed on figuring out what next-gen architectures will look like.
@sundarpichai
Sundar Pichai
2 years
1/ In 2021, we shared next-gen language + conversation capabilities powered by our Language Model for Dialogue Applications (LaMDA). Coming soon: Bard, a new experimental conversational #GoogleAI service powered by LaMDA.
735
3K
15K
1
0
7
@vr4300
George Morgan
2 months
Victor is one of the only people I know of taking Symbolic AI research seriously. I'd bet on him succeeding. This is the way.
0
0
7
@vr4300
George Morgan
2 months
My niche symbolic AI twitter audience: what do you think about OpenAI's o1? I think we still have a long long way to go to true reasoning. What do you think?
3
0
7
@vr4300
George Morgan
5 months
I'll be speaking at Open Sauce industry day. If you're there make sure to say hi!
@OpenSauceLive
Open Sauce
5 months
We’ve been cooking up some new and innovative ideas for #OpenSauce2024 and are so excited to announce Industry Day on June 14th!
Tweet media one
Tweet media two
2
1
86
0
0
7
@vr4300
George Morgan
21 days
There exists complete and absolute truth. The machines can tell us this absolute truth (a proof) with certainty of its correctness. A proof is a program that we can run. If we build this machine, we unlock the secrets of the computational universe. We must build the orb.
2
0
8
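"A proof is a program that we can run" is the Curry–Howard correspondence made literal. A minimal Lean sketch (illustrative, not from the tweet):

```lean
-- Under Curry–Howard, this single definition is both a program
-- (it swaps the components of a pair) and a machine-checked proof
-- (that conjunction is commutative).
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
  fun h => ⟨h.right, h.left⟩
```

Running the type checker is running the verification: if the program compiles, the proposition holds.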
@vr4300
George Morgan
5 years
The basic idea is to bring out the SysAD bus and control signals and then commandeer the bus with the FPGA pictured on the right. This will let the FPGA change writes to the internals of the RCP on the fly. Let's see what happens!
1
0
7
@vr4300
George Morgan
2 years
NeurIPS 2022 was very insightful. It's interesting to see so much attention to "wide" as opposed to "deep" learning models. Hinton's new Forward-Forward Algorithm introduces a possible framework for learning over the forward pass, but still using gradients.
1
0
7
@vr4300
George Morgan
6 months
@jimkxa The hardware is good, if utilized properly. We need better models.
1
0
7
@vr4300
George Morgan
1 year
@KrauseFx I miss this
0
0
0
@vr4300
George Morgan
4 months
Chollet describes @symbolica
@fchollet
François Chollet
4 months
Re: the path forward to solve ARC-AGI... If you are generating lots of programs, checking each one with a symbolic checker (e.g. running the actual code of the program and verifying the output), and selecting those that work, you are doing program synthesis (aka "discrete
40
79
857
1
2
7
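Chollet's generate-and-check loop can be sketched in a few lines. This is a hypothetical toy (enumerating arithmetic expressions over `x` and verifying each candidate by actually running it), not Symbolica's or Chollet's implementation:

```python
import itertools

def synthesize(examples, depth=2):
    """Generate lots of programs, run each one, and keep the first
    whose actual output matches every input/output example."""
    atoms = ["x", "1", "2"]
    ops = ["+", "-", "*"]
    candidates = list(atoms)
    for _ in range(depth):
        candidates += [f"({a} {op} {b})"
                       for a, op, b in itertools.product(candidates, ops, atoms)]
    for prog in candidates:
        # The "symbolic checker": execute the candidate and verify its output.
        if all(eval(prog, {"x": x}) == y for x, y in examples):
            return prog
    return None

# Find a program consistent with f(x) = 2x + 1.
print(synthesize([(0, 1), (1, 3), (5, 11)]))  # e.g. "((x + x) + 1)"
```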
@vr4300
George Morgan
7 months
@FutureJurvetson We can't continue scaling compute and expecting reasoning to emerge. We must build a formal, rigorous study of architecture and then construct models that we know will exhibit properties at scale. In civil engineering you know the bridge you've designed won't collapse at scale.
2
0
7
@vr4300
George Morgan
2 months
@1x_tech Note to self: Symbolica's first humanoid robotics demo won't be this off-putting
3
0
7
@vr4300
George Morgan
27 days
This guy cooks
@sdamico
Sam D'Amico
27 days
unboxing
Tweet media one
66
7
587
0
0
7
@vr4300
George Morgan
1 year
@sdand Nobody knows what to do anymore.
0
0
6
@vr4300
George Morgan
10 months
@brandon_xyzw It's not just that image. It breaks on this one too.
Tweet media one
1
0
6
@vr4300
George Morgan
3 years
I feel like I can tick off an engineering "I did that!" box today. My board designs made it to hackaday! 🥲
@hackaday
hackaday
3 years
Engineering on a Deadline for Squid Game
0
6
38
1
1
6
@vr4300
George Morgan
1 year
The world will change when the day comes that you can load an entire codebase (or multiple) into a language model and make useful inferences from it.
0
0
6
@vr4300
George Morgan
3 years
Gradient decent
@BartWronsk
Bart Wronski 🇺🇦🇵🇸
3 years
The year is 2021. A flagship product used by almost everyone who has anything to do with images, graphics design, or pixels gets *an option* to draw gradients that are not simply broken.
Tweet media one
15
111
703
0
0
6
@vr4300
George Morgan
1 year
@personofswag Relevant meme for the future.
Tweet media one
0
2
6
@vr4300
George Morgan
2 months
@VictorTaelin This is sad but makes total sense. I think you will be very effective putting all your attention back into HVM! It will help HOC move along much faster. And that is all in direct favor of better symbolic AI algorithms too. Really excited to see the Mac Mini cluster come together!
0
0
6
@vr4300
George Morgan
2 years
I am willing to bet that all of the compute necessary to run a model in real time with emergence indistinguishable from human intelligence is contained in a single RTX 4090 GPU.
2
1
5
@vr4300
George Morgan
4 years
@karpathy Have you tried RTX voice?
0
0
6
@vr4300
George Morgan
3 years
@ID_AA_Carmack I feel that most good ML papers, and technical papers in general, could be presented in a 15 minute 3Blue1Brown style video that make the actual-thing-that-helps extremely obvious. If the paper can’t be presented in such a format, it is likely that it is just fluff.
0
0
6
@vr4300
George Morgan
7 years
I'm starting to write an N64 emulator in Rust! :)
0
1
6
@vr4300
George Morgan
3 years
@ID_AA_Carmack I think there should be an "ImageNet in the smallest possible self contained executable" contest. Zip everything up into a single CPU only executable and see who can achieve the best runtime vs executable size ratio.
0
0
6
@vr4300
George Morgan
1 year
@pmddomingos Synthetic data makes less than 0 sense. It indicates a huge flaw with the foundations of the models.
3
0
6
@vr4300
George Morgan
5 months
Congrats Masha! Really excited to be working with you and the team at Day One. And equally excited for the future of your fund. 💪🎉
@mashadrokova
Masha Bucher
5 months
Today, I’m proud to announce @DayOneVC’s $150M fund III that brings our AUM to over $450M. Our north star remains the same: we will keep betting on the most exceptional founders of our time working on the biggest ideas possible.
86
39
566
0
3
6
@vr4300
George Morgan
1 year
@levie AI is the ultimate example of this.
0
0
0
@vr4300
George Morgan
3 years
Go watch @WilliamOsman's documentary on how we saved @MrBeast's Squid Game!
0
1
6
@vr4300
George Morgan
2 months
Incredible post by @sedielem that showcases why visualizing data using different frames of reference is so powerful!
@sedielem
Sander Dieleman
2 months
Diffusion is the rising tide that eventually submerges all frequencies, high and low 🌊
Diffusion is the gradual decomposition into feature scales, fine and coarse 🗼
Diffusion is just spectral autoregression 🤷🌈
33
164
1K
0
0
7
@vr4300
George Morgan
11 months
History repeats itself. Humans are so easily fooled.
@fchollet
François Chollet
11 months
I'm old enough to remember when *GPT-2* was so dangerous it couldn't be released.
8
35
452
0
0
6
@vr4300
George Morgan
4 years
Cool new translucent SP, with custom battery cell, IPS display, and translucent atomic purple shell.
Tweet media one
Tweet media two
Tweet media three
3
0
6
@vr4300
George Morgan
1 year
@killroy42 @mckaywrigley @Ciaran2493 Agreed. It's certainly impressive that it can extract the spatial information from the arrows but its understanding of the hierarchy of the story almost certainly comes from the information about the movie in its text training set.
1
0
5