Yesterday I gave a guest lecture @Stanford on AI code generation for CS 224G, the course on LLMs.
Teaching gives me a ton of energy and hopefully this is the first of many! 😄
Check out the slides here:
Reading takes time, and sometimes you just want the tl;dr, right?
Using a summarization model I've been working on, I was able to get a synopsis of an article I wrote a few days ago, and this is the result.
#productivity
#deeplearning
#nlp
#ai
The @coframe_ai x @agihouse_org hackathon was legendary — I’m so impressed with the quality and creativity of projects that everyone brought to life in just under 8 hours. Many friendships were formed and dreams made real. Yesterday was a glimpse of the future!
The vision I'm excited for:
◐ a Flask API -> FastAPI or Node.js ◑
◐ a React Native app -> Swift/Android ◑
◐ a Python data pipeline -> Rust pipeline ◑
◐ a Vue frontend -> TypeScript + Next.js ◑
...all in just a few minutes ⚡
Here's how it works:
1. It creates a Docker environment for <newlang>
2. Recursively rebuilds new <newlang> code from your existing code
3. Builds unit tests from your old code
4. Tests the new code against them
5. Iteratively debugs the code for you
(Debugging 👇)
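The five steps above can be sketched as a simple translate-test-debug loop. This is a hypothetical illustration, not the project's actual code: `translate`, `generate_tests`, and `fix` are placeholder stand-ins for LLM calls, and the Docker setup in step 1 is omitted.

```python
# Sketch of steps 2-5: translate the code, derive tests from the old
# version, then iterate until the new version passes.
# `translate`, `generate_tests`, and `fix` are placeholders for LLM calls.

def translate(source: str, newlang: str) -> str:
    # Placeholder for an LLM call that rewrites `source` in `newlang`.
    return f"# {newlang} port\n" + source

def generate_tests(source: str):
    # Placeholder: derive behavior checks from the original code.
    return [lambda code: "def add" in code]

def fix(code: str, failures) -> str:
    # Placeholder for a model-driven debugging pass over failing tests.
    return code

def migrate(source: str, newlang: str, max_iters: int = 5) -> str:
    code = translate(source, newlang)                 # step 2
    tests = generate_tests(source)                    # step 3
    for _ in range(max_iters):                        # steps 4-5
        failures = [t for t in tests if not t(code)]
        if not failures:
            return code
        code = fix(code, failures)
    raise RuntimeError("migration did not converge")

print(migrate("def add(a, b):\n    return a + b", "rust"))
```

The key design point is that the old code serves as its own spec: the generated unit tests pin down behavior, and the loop only terminates when the new-language code reproduces it.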
📣 This is a call to action! Migration is TOUGH, and I've really only chipped away at a small piece of it. If this problem gets you excited and you're interested in being a core contributor, I (and most software orgs/people, really) would love to see you in the repo!
Security + deep learning = ??
Deep steganography has the potential to absolutely revolutionize secure communication. This article proposes two applications: text and audio. Thank you @TDataScience for making it an Editors' Pick!
Just gave another lecture on AI x code gen, this time for USC, but had to deliver it on the go (parked)…and no one was the wiser ;) thank you virtual backgrounds
What if digital interfaces (like websites and apps) could adapt and improve themselves, like living things do?
That's the vision of Coframe: an AI-powered designer, frontend dev, and A/B testing researcher...operating and self-improving continuously.
I have to admit, it’s kind of addicting to just press “y” and let it fix the code whenever there’s an error 😁
Keep in mind that *this is a hackathon project* (from Saturday's @agihouse_org hackathon), so it's extremely nascent, and there's a ton of work left to do.
In fact, I recently migrated from Flask to FastAPI for another project (TBA) and I *wished* there was something like this out there. I've been asked by pretty much everyone I've shared this with: "does it do language x to language y?" The answer is probably, but it's early, so...
Code migration is a painstaking and complex task faced by everyone from indie devs to the Fortune 500. Transpilers exist, but yield unmanageable code. Practical migration is only now becoming tractable, thanks to today's LLMs, as shown by @swyx, @antonosika, @SigGravitas, et al.
Here's a good start: the safety mechanisms now exist, and the need is greater than ever. Agreement amongst medical institutions and professionals is crucial @StanfordMed @StanfordEng @StanfordMedX
🤖 Smart:
Coframe creates a feedback loop between your users and your website or app, constantly improving it over time based on real-world performance ("regenerative AI", if you will). This means that two users may see two different things.
🥧 Easy to use:
Coframe takes 2 minutes to integrate! For websites, simply copy+paste the Coframe script tag. For anything else (email campaigns, mobile apps, notifications...), plug into Coframe’s simple API.
The internet of agents is a blue-sky paradigm shift. What will it look like?
Search platforms like @Perplexity_ai and @Google and agent platforms like @multion_ai and @AdeptAILabs have started to show us a hint. Search is increasingly going to be done 𝘧𝘰𝘳 us, rather than 𝘣𝘺 us.
While Coframe is limited to text today, the concept goes far beyond this. Visual elements, UI structure, even entire flows may soon have their own similar sense of intelligence. This has been a fun project to work on and I'm excited for what it could become.
@chrysb @tinahhong It's set up haha - the issue is that I wanted to uncommit the backend changes for my FE PR, so I did `git rm -r backend`, but that untracked the backend and basically did the opposite of what I wanted
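For anyone hitting the same wall: `git rm -r` removes files from the index (untracking them), which is the opposite of dropping their changes. A sketch of the fix in a throwaway repo — the file names here are made up for illustration, and it assumes the committed state is what you want to restore to:

```shell
# Set up a throwaway repo with a committed backend/ directory.
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir backend && echo "original" > backend/app.py
git add . && git commit -qm "init"

# Stage an unwanted backend change (the situation described above):
echo "unwanted" > backend/app.py && git add backend

# `git rm -r backend` would untrack AND delete the files.
# Instead, restore the directory (index + working tree) to its
# committed state, keeping it tracked:
git restore --source=HEAD --staged --worktree backend
cat backend/app.py    # back to the committed content
```

`git restore --source=<branch>` (Git 2.23+) is the targeted tool here: it rewinds a path without touching tracking status, whereas `git rm -r --cached` only untracks.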
Introducing ALOHA 🏖: 𝐀 𝐋ow-cost 𝐎pen-source 𝐇𝐀rdware System for Bimanual Teleoperation
After 8 months iterating @stanford and 2 months working with beta users, we are finally ready to release it!
Here is what ALOHA is capable of:
@HammadTime @willdepue @matei_zaharia @Si_Boehm @james_y_zou The performance difference assuming proper formatting would probably have been worth mentioning in the paper. The spirit of the paper probably should have been about the reasoning - a breaking change from formatting should be at most a blog post
@xanderatallah This kind of chain-of-thought reasoning was explored in the PaLM paper - seems these models home in on logical continuity of the chain (makes sense for a transformer) but don’t “validate the answer” with high-level reasoning. A hierarchical method or architecture might solve this
@transitive_bs @coframe_ai @onum_tw @tinahhong For the especially long files it was really difficult to get it to output the full file...we tested only having one "Do not omit..." and it performed noticeably worse than having five of them
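The repeat-the-instruction trick is easy to wire into a prompt builder. A minimal sketch, where the exact instruction wording and the surrounding prompt framing are assumptions, not the tested prompt:

```python
# Repeating a "do not omit" reminder several times reduced truncation
# on long files in the experiment described above. The wording below
# is illustrative, not the original prompt.

INSTRUCTION = "Do not omit any part of the file. Output the file in full."

def build_prompt(file_text: str, repeats: int = 5) -> str:
    # Stack the reminder `repeats` times after the file content.
    reminders = "\n".join([INSTRUCTION] * repeats)
    return f"Rewrite the following file.\n\n{file_text}\n\n{reminders}"
```

Worth A/B testing per model: in this case one copy of the reminder performed noticeably worse than five.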
Abstractive graph summarization could be cool. Graph pooling for encoder, generative GNN (e.g. GRAN) for decoder. Task-conditioned as well (summaries capture x type of community, y type of motif, etc)
@karpathy Anyone want to take a shot at implementing this with RandNLA? Negligible relaxation in precision in exchange for major efficiency boost (O(n))