I'm super excited to have @thesephist joining us at @NotionHQ – we'll be exploring how AI can truly push us forward as humans, and moving quickly to ship useful products.
Come join us!
Life update 🎉
I'm very excited to be joining @NotionHQ to continue prototyping and researching ways AI can help us be more creative, thoughtful, and productive!
Looking forward to learning from the team and bringing some of my ideas from the past year to a tool loved by many 👇
It’s so sad that a generation has been brainwashed to believe humanity has no future.
We need to be dream maxxing. Building an insanely glorious future.
Ivan and Simon have been building Notion for 10 years. Today, they’re peeling back the curtain on the early days!
Their first meeting. The first version of the product. Their first conversations with users.
First Block, season 1 finale.
Watch now:
I believe all 3 of these are true:
- AI could benefit us massively
- AI could cause short-term harm
- AI could be an existential risk to humanity
It’s disappointing to see people push back on point 3 without offering real object-level disagreements. This is a really important debate!
I'm so excited to be showing Q&A to the world – the ability to instantly find any information in Notion has made this a daily essential for me.
More to come soon!
Notion 1.0 was built by our founders in Kyoto, coding in their underwear.
Notion AI was built in their hotel room at a team offsite.
You know the product well. Now here’s the story behind it:
@AlfredoAndere Our approach is to ship new features as soon as they're ready, and to create a marketing release when we feel we have a cohesive bundle. The principle: never hold up the product team from shipping value to users :)
I've noticed that criticisms of AI controls from open-source tech folks share a common assumption: that freedom is the overriding priority.
This works fine most of the time but breaks down critically for dangerous technology.
What if we gave the world's top 1% smartest people a device that transforms their internal train of thought into tokens, then did unsupervised training on all of it?
Jokes aside, this is a perfect example of a kind of reductionist fallacy I see often in AI debates.
As an analogy, imagine complaining about a “ban on atoms” because they can be used to make weapons.
“In the beginning was the Word. Then came the fucking word processor. Then came the thought processor. Then came the death of literature. And so it goes.”
— Hyperion Cantos
Just as it would be incredibly foolish for freedom to be the overriding priority for nuclear or biological weapons, so it is for powerful AI systems.
There are some who think AI isn't dangerous. If that's your position, I disagree, but it's at least consistent.
I bet you could pass the Turing test with a bigger project involving iterative evaluation, data collection, and fine-tuning GPT-4.
This one seems to have tried only a prompt.
Current chatbots can pass the Turing Test, right?
A lot of people have claimed this, but Cameron Jones (@camrobjones) and Benjamin Bergen of UCSD actually tested the claim!
(Spoiler: The answer is "no, they don't pass.")
@atroyn People are working on this. Here is one project: . I think most people would agree that effective regulation would need to be grounded in concrete problems and evaluations.
@alfred_twu @rsnous There are a lot of tricky constraints in bus route design! The optimum for mobility seems to be a mixture of transport modes, including cars. There’s a chapter in this book about this:
@atroyn It’s not saying they are close now, just “once they are close…” — this sentence seems very sensible to me! I would want labs to have strong procedures of this form.
Just watched Most Likely to Succeed.
I really enjoyed it, but the broad point at the end felt like a cop-out. Why couldn't an educational system based on better principles (open-ended projects, "soft" skills, self-accountability...) be scalable?
@RichardMCNgo I think people both deceive and criticize themselves about their insecurities in ways that are messy and not at all consistent.
A major source of both could be behaviors learned from early role models like our parents.
@amasad @atroyn I can point to a few resources that might be useful here. The default scenario I think about is a more gradual loss of control, but it's important to consider others.