I prefer my country and society to be governed by principles and ideas more than by specific people. So, if a person I dislike or distrust expresses an idea I agree with, I will still agree with the idea. I hope this contributes to a tendency for good ideas to gain more power.
Without internationally enforced speed limits on AI, humanity is very unlikely to survive. From the perspective of AI 2-3 years from now, we will look more like plants than animals: big, slow chunks of biofuel showing weak signs of intelligence when undisturbed for ages (seconds) on end.
When I count on my fingers, I use binary, so I can count to 31 on one hand, or 1023 on two. It took me about 1 hour to train the muscle memory, and it's very rhythmic, so now my right hand just auto-increments in binary till I'm done, and then I just read off the number.
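For the curious, the binary scheme can be sketched in a few lines (the finger-to-bit assignment here — thumb as bit 0 through pinky as bit 4 — is my assumption, not necessarily the original one):

```python
def fingers_for(n):
    """Which fingers (bit 0 = thumb ... bit 4 = pinky, one hand) are raised to show n in binary."""
    if not 0 <= n <= 31:
        raise ValueError("one hand holds 5 bits: 0..31")
    return [i for i in range(5) if (n >> i) & 1]

# Auto-incrementing is just binary addition: each step flips the thumb
# and carries toward the pinky.
assert fingers_for(0) == []                  # closed fist
assert fingers_for(31) == [0, 1, 2, 3, 4]    # open hand
assert fingers_for(10) == [1, 3]             # 0b01010
```

Two hands extend the same idea to 10 bits, i.e. 0-1023.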
FYI: you can count up to 99 on your fingers like so:
- right hand is ones
- left hand is tens
- thumbs are 5/50, fingers are 1/10.
This is convenient enough that it's the way I count by default.
Dear everyone who wants to regulate and slow down AI: please stop fighting over who has the Most Correct Reason for the slowdown. Just work together and make it happen! Reasons in alphabetical order:
* autonomous weapons
* bias
* biosecurity
* children's safety
* cybersecurity
* discrimination
* existential risk
* fake news
* global conflict
* harassment bots
* human extinction
* mental health
* national security
* social media addiction
* terrorism
* unemployment
I don't agree with all these, but I endorse the conclusion: regulate AI!
My followers might hate this idea, but I have to say it: There's a bunch of excellent LLM interpretability work coming out from AI safety folks (links below, from Max Tegmark, Dan Hendrycks, Owain Evans et al) studying open source models including Llama-2. Without open source,
Reminder:
Without internationally enforced speed limits on AI, I think humanity is very unlikely to survive. From the perspective of AI 2-3 years from now, we will look more like plants than animals: big, slow chunks of biofuel showing weak signs of intelligence when undisturbed for ages (seconds) on end.
Yann LeCun is calling the list of scientists and founders below "idiots" for saying extinction risk from AI should be a global priority. Using insults to make a point is a bad sign for the point… plus Hinton, Bengio, and Sutskever are the most cited AI researchers in history:
From my recollection, >5% of AI professionals I’ve talked to about extinction risk have argued human extinction from AI is morally okay, and another ~5% argued it would be a good thing. I've listed some of their views below. You may find it shocking or unbelievable that these
Belated congrats to
@ilyasut
for becoming the third most cited AI researcher of all time, before turning 40… huge! He's actually held the spot for a while — even before GPT-4 — but it seems many didn't notice when it happened.
Go Canada 🇨🇦 for a claim on all top three 😀
Reminder: "Mitigating the risk of (human) extinction from artificial intelligence should be a global priority", according to…
The CEOs of the world’s three leading frontier AI labs:
• Demis Hassabis — CEO, Google DeepMind
• Dario Amodei — CEO, Anthropic
• Sam Altman — CEO, OpenAI
Dear everyone: trust your common sense when it comes to extinction risk from superhuman AI. Obviously, scientists sometimes lose control of the technology they build (e.g., nuclear energy), and obviously, if we lose control of the Earth to superhuman intelligences, they could
Reminder: some leading AI researchers are *overtly* pro-extinction for humanity. Schmidhuber is seriously successful, and thankfully willing to be honest about his extinctionism. Many more AI experts are secretly closeted about this (and I know because I've met them).
AI boom v AI doom: since the 1970s, I have told AI doomers that in the end all will be good. E.g., my 2012 TEDx talk: “Don’t think of us versus them: us, the humans, v these future super robots. Think of yourself, and humanity in general, as a small stepping
Many people I know (>100) have felt bullied and silenced about AI extinction risk, for many years, by being treated as crazy or irrational. Many of them were relative experts who knew AI would present an extinction risk to humanity, but said little or nothing in public or even to
The three most cited AI researchers in the world
— Hinton, Bengio, and Sutskever
— all say AI is an extinction risk. Now, Bengio beautifully summarizes the three most important factors in AI regulation: progress, safety, and democracy.
If you use insults to debate these
Happy Father's Day! Please let the GPT-4o video interface be a recurring reminder:
Without speed limits on the rate at which AI systems can observe and think about humans, human beings are very unlikely to survive.
Perhaps today as many of us reflect on our roles as parents
Say hello to GPT-4o, our new flagship model which can reason across audio, vision, and text in real time:
Text and image input rolling out today in API and ChatGPT with voice and video in the coming weeks.
If you felt disturbed by the OpenAI governance debacle, and you work in AI, you might be tempted to work on "alignment" to help reduce your worries that AI will get out of control. But why not channel your technical abilities to work directly on something that helps with
This is crazy. Check out this early "compromise text" for the EU AI Act, which would have made the *most powerful* AI systems — "general purpose AI" — *exempt* from regulation. This is one of the craziest things I've ever seen in writing. Making the *most powerful* version of a
@vkhosla
It's not an assumption.
Reasoning, as I define it, is simply not doable by a system that produces a finite number of tokens, each of which is produced by a neural net with a fixed number of layers.
Classic fallacy: comparing typewriters to a forthcoming super-fast smarter-than-human species that could rival us for planetary control, some of whose creators overtly want them to operate autonomously without needing humans to survive so they can replace us.
Honest mistake?
There's a simple mathematical reason why AI *massively* increases the risk of a world-ending super-virus: AI *decreases the team size* needed to engineer a virus, by streamlining the work. Consider this post a tutorial on how that works 🙂 Only high-school level math is needed to
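The tutorial itself isn't included here, but one plausible version of the arithmetic (my own toy numbers and framing, not necessarily the thread's): if each candidate team member is independently willing and able with probability q, a would-be team of size k is fully viable with probability q^k, so shrinking the required team size multiplies the number of viable teams enormously.

```python
import math

# Toy model (assumed numbers): each candidate conspirator is willing and
# able with probability q; a project needing k people can only proceed
# if all k members are willing and able.
def team_viability(q, k):
    return q ** k

q = 0.01                          # assume 1 in 100 candidates qualifies
before = team_viability(q, 5)     # without AI: a 5-person team is needed
after = team_viability(q, 2)      # with AI streamlining: 2 people suffice
assert math.isclose(after / before, 1e6)  # a million-fold more viable teams
```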
Reposting for emphasis, because on this point Eliezer is full-on correct: AI output should always be labelled as AI output. If the UK summit fails to produce support for a rule like this, I will resume my levels of pessimism from before the CAIS Statement and Senate hearings. A
"Every AI output must be clearly labeled as AI-generated" seems to me like a clear bellwether law to measure how Earth is doing at avoiding clearly bad AI outcomes.
There are few or no good uses for AI outputs that require a human to be deceived into believing the AI's output
Congrats to DeepMind! Since 2022 I've been predicting 2025 as the year in which AI can win a gold medal at the International Mathematical Olympiad. I stand by that prediction. By 2026 (or sooner) you will probably see more focus and progress on AI that solves physics and
Advanced mathematical reasoning is a critical capability for modern AI. Today we announce a major milestone in a longstanding grand challenge: our hybrid AI system attained the equivalent of a silver medal at this year’s International Math Olympiad!
Why are AI labs asking for *governments* to regulate them, rather than just self-regulating? There’s a simple explanation: they do not trust each other. In case you haven't noticed the trend:
• OpenAI formed partly in reaction to DeepMind seeming too closed-off with their
From over a decade of conversations about x-risk, my impressions agree strongly and precisely with Zvi here, as to what *exactly* is going wrong in the minds of people who somehow "aren't convinced" that building superintelligent AI would present a major risk to humanity. Cheers!
I want to meet more communities that think really hard and care about their impact on the world, who use *both* logic and probability to reason from observations to actions. Where can I find them and make friends?
Don't say:
* academia
* effective altruism
* rationalists
🙏
New paper with an exhaustive taxonomy of societal-scale AI risks, based on accountability:
Extinction, injustice, and other widespread harms are considered. Additional taxonomies are needed for a more diverse and robust perspective on risk. Meanwhile,
1/ Humanity is on a dangerous path where people calling for AI regulation are saying "AGI *can't* be controlled". That argument will fail in a few years when someone produces a controllable AGI: regulators will be blindsided, become disorganized, and fall behind...
"Short term risks from AI" is almost always misused as a phrase when I see it.
Example 1: Unfair discrimination is not a "short term risk", because
1.1) It's not a "risk", it's already happening.
1.2) It's not "short term" because it's also deeply threatening to the value of
In today's excitement about progress toward proving the Riemann Hypothesis, let me just say: the distribution of prime numbers is *wild*. Here's my favorite explanation for why:
Imagine you're looking for a simple formula to estimate the number of primes that are ≤N, for any
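The post is cut off, but the classic estimate it is presumably heading toward is the prime number theorem, π(N) ≈ N/ln N. A quick sieve-based check of how good that already is:

```python
import math

def prime_count(n):
    """pi(n): the number of primes <= n, via a sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

N = 10 ** 6
exact = prime_count(N)        # 78498
estimate = N / math.log(N)    # about 72382
assert abs(exact - estimate) / exact < 0.10  # within ~8% already at N = 10^6
```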
It's time to move past name calling and into genuine collective decision-making about how to address extinction risk from AI:
What trade-offs do we as a species want to make, or not, to lower our extinction risk? To even have a chance of fairly deciding
@AryehEnglander
Frankly, I also want to normalize calling slow timelines "sci fi". E.g., the Star Trek universe only had AGI in the 22nd century. As far as I can tell, AI progressing that slowly is basically sci-fi/fantasy genre, unless something nonscientific like a regulation stops it.
People ask whether AI can "truly" "create new knowledge". But knowledge is "created" just by inference from observations. There's a fallacy going around that "fundamental science" is somehow crucially different, but sorry, AI will do that just fine. By 2029 this will be obvious.
People ask whether AIs can truly make new discoveries or create new knowledge. What's a new discovery or new knowledge you personally created in 2023 that an AI couldn't currently duplicate?
Here's a great take-down of "AI alignment" as a concept:
Since around 2016 I have been trying to move AI-risk-aware people conceptually away from "alignment", to little avail. Very happy to see more writings like this.
GPT-4 is not only able to write code, more reliably than GPT-3.5, it writes code that writes code; see the example below (GPT-3.5 was not able to do this). But first:
1)
@OpenAI
: Thanks for your openness to the world about your capabilities and shortcomings!
Specifically...
I'm sad to see so many people leaving OpenAI. I've really enjoyed their products, and the way they've helped humanity come to grips with the advent of LLMs by making them more openly available in their products.
I remain "optimistic" that we probably have only a ~25% chance of
If you don't want an authoritarian lockdown on AI technology, start thinking about how you can play your part in preventing rogue AI & extinction risk. Why?
1) If no one thinks about prevention, eventually we all die.
2) If only a few people think about it, those few will end
@AndrewYNg
, I suggest talking to someone not on big-tech payroll: Yoshua Bengio, Geoffrey Hinton, Stuart Russell, or David Krueger. IMHO Yoshua maximizes {proximity to your views}*{notability}*{worry}, and would yield the best conversation.
Thanks for engaging with this topic :)
Some of my followers might hate this, but I have to say it: the case for banning open source AI is *not* clear to me.
Open source AI will unlock high-impact capabilities for small groups, including bioterrorism:
*Still* I do not consider that a slam-dunk
AI hype is real, but so is human hype. Einstein was not magic. E=mc² can be found by a structured search through low-degree algebraic constraints on observations of light and matter. Consciously or not, this is how Einstein did it. Not magic, just better search.
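A toy version of that structured search (entirely my construction, just to illustrate the point — real symbolic-regression systems are far more sophisticated): search low-degree power laws E = m^a · c^b against synthetic observations and recover (a, b) = (1, 2).

```python
import itertools

c = 3.0e8  # speed of light, m/s
# Synthetic "observations" of (mass, energy) pairs:
observations = [(m, m * c ** 2) for m in (1.0, 2.5, 7.0)]

def fit_error(a, b):
    """Total relative error of the candidate law E = m^a * c^b."""
    return sum(abs(E - m ** a * c ** b) / E for m, E in observations)

# Structured search over low-degree exponent pairs:
best = min(itertools.product(range(4), repeat=2), key=lambda ab: fit_error(*ab))
assert best == (1, 2)  # recovers E = m * c^2
```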
@ShaneLegg
Agreed 🙏 Sadly, many folks I've met seem to feel or believe that fundamental science (e.g., e=mc²) differs from Go and protein folding in some crucial way that can't be explored with hypothesis search. Yes this is false, but like with Go, until they see it they won't believe it.
Big +1 to Dario Amodei,
@sama
, and everyone else seated here for briefing our government on how to keep human society safe in the age of ever-accelerating AI technology.
Artificial Intelligence is one of the most powerful tools of our time, but to seize its opportunities, we must first mitigate its risks.
Today, I dropped by a meeting with AI leaders to touch on the importance of innovating responsibly and protecting people's rights and safety.
For quite a while now I've been estimating there's around an 80% chance that humanity will destroy itself with AI sometime in the next 40 years. But something could soon lower that estimate for me:
If before 2028, the United Nations passes a resolution *completely banning*
I really dislike how non-consensual AI-driven human extinction is likely to be. A large fraction of people, including some experts, are emotionally incapable of facing extinction as a real possibility and adopting norms to avert it, subverting informed consent by denying risk.
It's time for America to adopt *ranked choice voting*, at least for primaries. President Biden, Former President Trump, and America as a whole are all victims of a voting system that selects and motivates leaders to oppose a large fraction of the country — the other Party — in
@jrhwood
Thanks Jesse, these are good points, and I agree with you that intelligence, agency, and evil are all different. Unfortunately, I think plants rather than neanderthals are a better analogy for humans if AI is developed without speed limits.
Zuckerberg's message here is really important. I prefer to live in a world where small businesses and solo researchers have transparency into AI model weights. It parallelizes and democratizes AI safety, security, and ethics research. I've been eagerly awaiting Llama 3.1, and I'm
Mark Zuckerberg says in the future there will be more AI agents than people as businesses, creators and individuals create AI agents that reflect their values and interact with the world on their behalf
@elonmusk
Probably ~AGI arrives first, but yes I hope Neuralink supports human relevance & AI oversight by broadening the meatsticks-on-keyboard channel for humans 🙏
Thanks also for being a voice for AI regulation over the years; now is a key juncture to get something real in place.
If you're being harmed by AI, please don't give up or be silenced. Things could get *much* worse as the technology advances, especially if victims lose their voice.
If you're worried about extinction-level AI risks and ignoring ongoing harms, don't. Ignoring those less fortunate
+1 to all these points by
@ylecun
. If we dismiss his points here, we risk building some kind of authoritarian AI-industrial complex in the name of safety. Extinction from AI is a real potentiality, but so is the permanent loss of democracy. Both are bad, and all sides of this
Fairness, social justice, and employment security are not distractions from human existential safety; they are supposed to be part of the solution. Calling these "short term issues" is dismissive and elides their urgency for steering humanity toward a safe and acceptable future.
In 2021, I publicly released these AI disaster scenarios that I found especially plausible: "Production Webs", "Flash Wars", and "Flash Economies". Now in 2023, these scenarios have stood the test of time — they're plausible to many more people now that GPT-4 is out, and
7/ Speaking for myself, the reason I think we have an 85% chance of extinction from AI this century is because discourse on the topic is so poor that we will fail, collectively, to avoid very stupid decisions with AGI, and eventually, yes, humanity will lose control…
What's the highest acceptable extinction risk for humanity developing AI that if safe would cheaply cure all known diseases, including aging, during the next 30 years?
A puzzle for you: Imagine a village of (nuclear) families where the average # of kids per family is 7. On average, how many siblings does each kid have?
*
*
*
*
*
*
*
*
*
*
6?
Not so! On average each kid has more than 6 siblings, because most of the kids come from
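The size-biased sampling at work can be checked directly (the specific family-size split below is my own toy choice; any distribution with mean 7 and nonzero spread gives the same effect):

```python
# Toy village: half the families have 3 kids, half have 11 (mean 7).
families = [3] * 500 + [11] * 500

avg_kids_per_family = sum(families) / len(families)
assert avg_kids_per_family == 7

# Averaging over *kids* weights each family by its size: a family of k kids
# contributes k kids who each have k - 1 siblings.
total_kids = sum(families)
avg_siblings_per_kid = sum(k * (k - 1) for k in families) / total_kids
assert avg_siblings_per_kid > 6  # in fact E[K^2]/E[K] - 1 = 65/7 - 1, about 8.29
```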
If you want to raise your child to be a great leader, it helps to give them a lot of siblings.
There are no US presidents who were only children, and only three presidents had one sibling. On average, US presidents have had just over 5 siblings!
@AndrewYNg
, you're the one who convinced me that we'd get AGI during our lifetimes, back in 2010 in a talk you gave at Berkeley. So why have you been saying publicly that AGI risk is like overpopulation on Mars, if you believed it was just decades away? Doesn't seem honest. I
"The Goddess of Everything Else", narrated by
@Liv_Boeree
and
@robertskmiles
, is now my favorite way to convey the idea below, which is now also one of my favorite quotes:
"Darwinism is a kind of violence that is no longer needed for progress." - David (@davidad) Dalrymple
“The Goddess of Everything Else” by
@slatestarcodex
is, imo, one of the most beautiful short stories ever written.
And it’s just been made into an animation, in which I voice-act the Goddesses!!!! So stoked 👇
At some point this decade I suspect humanity will switch from being too free-wheeling with AI development to being too restrictive in important ways. I'd like to mitigate that effect. If/when I feel we've crossed that line, I expect to turncoat and promote AI benefits over risks.
Helen, I don't know what exactly you needed to know but didn't, but I'm glad the Board had the integrity to put an end to the false signal of supervision. I honestly can't tell from the outside if this was the best way, but it was a way, and better than faking oversight for show.
Today, I officially resigned from the OpenAI board.
Thank you to the many friends, colleagues, and supporters who have said publicly & privately that they know our decisions have always been driven by our commitment to OpenAI’s mission.
1/5
Factory farms are actually much creepier and more horrific than the newly developing clean-meat labs. It's also very un-American to oppose free-market demand for clean meat. Please watch:
Hah, you're so correct about synthetic data. I also lol and fail to understand why this is not obvious. Maybe people think too much in terms of Shannon info theory, where synthetic data carries no "information"? But computation is just as important as information!
#LogicalDepth
5/ The ratio of rhetoric to reasoning in AI risk discourse is truly awful. It's suffocating progress both on regulation and on tech. It's just so fun to say things like "We have *no idea* how to control super-human AI", when we literally have *multiple ideas*…
I'm with Jess Whittlestone on this. Talk about extinction risk should not crowd out other issues core to the fabric of society; that's part of how we're supposed to avoid crazily unfair risk-taking! E.g., more inclusive representation in who controls a single powerful AI system
Strong agree with this - I've been pleased to see extreme risks from AI getting a bunch more attention but also disheartened that it seems like tensions with those focused on other harms from AI are getting more pronounced (or at least more prominent and heated online)
10/ If you want to lower the probability of human extinction, try just saying true things without exaggerating. Try noticing if you're in a filter bubble repeating the same mantras without noticing progress.
Greg was one of the founding team at OpenAI who seemed cynical and embarrassed about the org's mission (basically, the focus on AGI and x-risk) in the early days.
I remember at ICLR Puerto Rico, in 2016, the summer after OpenAI was founded, a bunch of researchers sitting out on
I've seen some pretty mean and dismissive reactions to people for claiming that machines can, will, or already have the capacity for morally valuable internal experiences. Yes, I agree that humans deserve special treatment for willfully creating AI, and I believe we deserve to be
Indeed. While some aspects of AI safety are well-championed by the EA zeitgeist, others are ignored or even disparaged. Ideally, more and more communities will stand up to represent their values as deal-breaking constraints on how AI is developed, so that risks are only taken if
SFF is hoping to distribute at least $1MM-$3MM to projects supporting human freedom in AI development. Freedom is crucial to human flourishing... but with super-human AI around, how can humans be free? It's no doubt possible, but far from easy:
Trope: "There's no way for humanity to prevent {AGI | rogue AGI | superintelligence | etc.}"
Me: Not buying it. Fatalism ignores that we sometimes pull together to ban stuff, like CFCs and human cloning.
If you want humans to keep doing something, just admit you like it. Don't
AI safety discourse, especially around EA, continues to miss the importance of AI ethics for keeping the world safe. Aiming for safety through unethical means is extremely unlikely to yield societal-scale safety, and there needs to be more attention on principles of fairness,
AI extinction risk is profoundly unfair. This point is *finally* landing with both ethics and safety experts, and I hope it can help unite these communities. In 2022, extinction risk was dismissed as "not real", but now the top AI experts acknowledge it, and the only retort left
Zuckerberg and Patel having an amazing conversation on AI risk. Great questions and great responses in my opinion. I'm with Zuckerberg that these risks are both real and manageable, and hugely appreciative of Patel as an interviewer for keeping the discursive bar high.
Zuck's position is actually quite nuanced and thoughtful.
He says that if they discover destructive AI capabilities that we can't build defenses for, they won't open source it. But he also thinks we should err on the side of openness. I agree.
Some believe that AGI will remain simultaneously *not regulated* and *not invented* for like, a decade. I struggle to imagine stagnating that long. I can imagine crazy-feeling sci-fi scenarios where unencumbered AI developers somehow don't make AGI by 2034, but not in this world.
@ESYudkowsky
a) AI x-risk is obviously between 10% and 90%, which warrants major action.
b) "Not convinced"-people ask for specific scenarios with many details, and then object that the scenario has too many details and is therefore unlikely.
c) Discomfort with the conclusion is driving (b)
Agreed, balance is key. We shouldn't abandon other values like liberty and fairness just to reduce extinction risk.
But we need to quit the habit of calling extinction "long term" as a way of remembering other values, and dismissing other values as "short term".
Why?
Mitigating AI risk should absolutely be top priority, but literal extinction is just one risk, not yet well-understood; many other risks threaten both our safety and our democracy.
We need a balanced approach, confronting a broad portfolio of risks both short-term and long.
3/ EA&R discourse repeats the argument "alignment is impossible", without actually acknowledging the incredible progress being made on controlling AI. There's an ego-like unwillingness to acknowledge progress here, which is interfering with a proper regulatory approach…
8/ What's my probability that the first successful AGI lab gets us all killed? Around 10%, I'd say. Unacceptably high, but not 85%. The other 75% comes from humanity just being *terrible* at talking to itself about what to do with the crazy amounts of power we're making.
I wholeheartedly agree with adding nuance and subtracting tribalism from AI risk discourse, and personalized-belief infographics could do a lot to help with that. I should probably make one. Cheers!
Here are some of my views on AI x-risk.
I'm pretty sure these discussions would go way better if there was less "are you in the Rightthink Tribe, or the Wrongthink Tribe?", and more focus on specific claims. Maybe share your own version of this image, and start a conversation?
@ESYudkowsky
and others who might cheer/imitate: it's bad form to troll about AI risk with intentionally bad arguments. The internet is confused enough about AI. Adding more sarcasm doesn't help, and unmoors your recent progress toward good-faith communications about risk. 👎
Dual-use gain of function research for large language models is really taking off.
@NateSilver538
and fans: how many AI labs will be working on this stuff before we get a cyber lab leak?
In AI, unlike in biosecurity, dual-use research of concern (DURC) is mostly unregulated.
This voluntary handing over of control to AI is an instance of what I've called a robust agent agnostic process (RAAP). Absent regulation, humans are basically on track to compete each other out of our jobs and into oblivion.
Many AI risk arguments focus on showing that AIs could take control in a sudden, violent takeover. But I think we're already going to be giving AIs control of our civilization by default. We're going to give up the keys voluntarily. A dramatic takeover event isn't necessary.
4/ One reason my p(doom) is so high (85%) is that so few people are using logic to analyze the AI risk situation, just side-taking and attacking.
There's an "AGI can't be controlled" camp, which I find absurd, and also an absurd "X-risk is impossible" camp…
Can we all just agree to follow Divya's vision for the future? I am so in. Like, can we elect her and the CIP team to a highly influential position of global leadership and just start implementing their plans? I very seriously want this. Thought leadership like this is rare.
Doomsayers: do not accept the "doomer" label. You believe you are warning of doom, not bringing it. Only your opponents believe you are bringing doom to their hopes. And anyway it's better to demand your opponents name and attack your claims & arguments, not your identity.
11/ And if you think human extinction is somehow logically impossible, try writing out the logic. It won't work. Species go extinct to more powerful variants all the time, and so can we. And personally, I think we *will*, unless x-risk discourse improves.
Thanks for reading!
I am such a fan of using words in accordance with their normal meanings. AI x-risk discourse, because it was a taboo topic prior to 2023, tended to create jargon that just doesn't mean what it normally means. This sucks and makes regulatory discussion painfully confused.
@Blueyatagarasu
I'm intentionally trying to change the definition of the phrase in this context to match its definition in other contexts (e.g. computer security). Yes, that will create confusion sometimes. I think it's worth it.
Musk on Rogan re:
1) San Francisco as a "zombie apocalypse",
2) Extinctionism, both explicit and implicit, and
3) Extinctionist influence on AI.
If you don't live near SF, let me tell you: Musk is right that (1)-(3) are all real, and he's right to connect them. I've met
@patrickc
, don't forget 3/3 of the most cited AI researchers in history also endorse the extinction risk priority (apart from the 3/3 leading lab CEOs):
Geoffrey Hinton, Yoshua Bengio, and Ilya Sutskever.
@ylecun
Maybe "essentially all" and "major" were unhelpfully imprecise. 3/3 of the leaders of the top 3 frontier labs (as defined by performance of the lab's best foundation model) have endorsed this concern. I recognize that some (like you) don't agree. And, to be clear, I think it's
2/ This is bad, because we need regulations, or else AGI *won't* be controlled. Someone, somewhere, will *choose* to release uncontrolled AGIs on the world, negligently or on purpose.
This is unfashionable to care about in EA-adjacent or rationalist-adjacent discourse because…
Microsoft is training a custom, narrow-focus LLM specifically on the regulatory process for small nuclear plants. They need to build SMRs to power Bing's brain. MS expects the LLM to eliminate 90% of the costs and human hours involved.
Hey transformer models: remember when humans used to call you dumb for being confused by adversarial inputs?
Hey humans: how y'all doin with this stuff?
6/ Being rhetorical and hyperbolic gets you more followers, but it degrades the commons. We do not have "no chance" of survival with super-human AI. It is not "impossible" for a single AGI lab to accidentally lose control of their AGI and get us all killed…
I'm glad leaders in effective altruism are starting to de-emphasize the question of identity, i.e., "Who counts as an EA?".
I like being altruistic and being effective about it, but I've never called myself "an EA" because to me, that always felt overly reductive and tribal. I
Lab-grown meat won't grow up and eat you. Lab-grown minds might.
Honestly, in an economy of machines that think a million times faster than you… you might just be more appealing as biofuel.
And if we don't exemplify caretaking of less intelligent life, p(we're next)++.
My take on OpenAI: I'm just very sad to see the dissolution of a group of people for whom I had great respect. OpenAI's work so far has been — from my viewpoint — a highly positive contribution to humanity. I hope the world finds a way to restore this beautiful arc of progress,
It's time to abandon "the alignment problem". There are
1) obedience problems — does the AI sufficiently obey the intentions or instructions of its owners or creators?
2) externality problems — does the AI cause positive or negative effects for everyone else?
Not the same.
@sama
, I'm glad to see you taking steps to remediate this situation, and admitting embarrassment for not knowing about it. This is a better response than I'd expect from most CEOs in your situation.
@everyone
: Whether or not you trust Sam, you can at least infer that either he
in regards to recent stuff about how openai handles equity:
we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
there was
Sad article from Bill Gates… names 5 AI risks but doesn't mention survive & spread (rogue AI). Dying out as a species starts with lots of head-in-the-sand articles just like this one:
(I technically agree with the headline, but it's a red herring.)