PauseAI ⏸

@PauseAI

3,542 Followers
802 Following
162 Media
2,318 Statuses

Community of volunteers who work together to mitigate the risks of AI. We want to internationally pause the development of superhuman AI until it's safe.

Pinned Tweet
@PauseAI
PauseAI ⏸
7 months
AGI is not inevitable. It requires hordes of engineers with million dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.
@tszzl
roon
7 months
things are accelerating. pretty much nothing needs to change course to achieve agi imo. worrying about timelines is idle anxiety, outside your control. you should be anxious about stupid mortal things instead. do your parents hate you? does your wife love you?
224
268
3K
63
43
341
@PauseAI
PauseAI ⏸
1 year
Why some prominent AI people aren't worried about AI doom. They disagree on more things than you might think!
Tweet media one
62
73
447
@PauseAI
PauseAI ⏸
10 months
How do people cope when they hear AI might be the thing that ends their life?
Tweet media one
96
41
300
@PauseAI
PauseAI ⏸
2 months
One of the many forms of AI that we're not asking to pause.
@SciencNews
Science News
2 months
Artificial intelligence detects breast cancer 5 years before it develops #MedEd #MedTwitter #SCIENCE #technology #oncology #Cancer #Diagnosis
Tweet media one
1K
21K
103K
37
21
263
@PauseAI
PauseAI ⏸
6 months
Bad news. France opted to host an AI Safety Summit in November 2024, but several of our sources confirm it has been postponed to February 2025. It has also been renamed to “AI Action Summit”, dropping the all-important safety focus. Safety will be a minor part of the summit,
90
34
232
@PauseAI
PauseAI ⏸
2 months
Kurzgesagt just released an absolutely amazing video on superintelligence and why this could be the most dangerous thing to ever be built.
Tweet media one
20
25
188
@PauseAI
PauseAI ⏸
6 months
How are we still letting AI companies get away with this?
32
43
178
@PauseAI
PauseAI ⏸
10 months
Emmett Shear, new OpenAI CEO:
- is in favor of slowing down
- p(doom) of 5 to 50%
Nice
6
11
167
@PauseAI
PauseAI ⏸
10 months
Striking how the top three cited AI scientists share a strong opinion that this tech could kill every living thing on earth. Arguably all three made the decision to prioritise safety over money. Hinton quit his job at Google so he could speak freely about the risks. Bengio is
@AndrewCritchPhD
Andrew Critch (h/acc)
10 months
Belated congrats to @ilyasut for becoming the third most cited AI researcher of all time, before turning 40… huge! He's actually held the spot for a while — even before GPT-4 — but it seems many didn't notice when it happened. Go Canada 🇨🇦 for a claim on all top three 😀
Tweet media one
14
26
246
7
24
158
@PauseAI
PauseAI ⏸
16 days
OpenAI's new o1 model pushes the frontier closer to becoming catastrophically dangerous. Their "o1 System Card" paper is quite revealing:
- While tasked to flag fraudulent transactions, o1 modified the transaction source file to maximize the number of items it could flag.
- It
Tweet media one
Tweet media two
23
20
154
@PauseAI
PauseAI ⏸
5 months
@janleike Do continue to speak up about your concerns publicly. The stakes are too high to remain silent.
1
5
151
@PauseAI
PauseAI ⏸
4 months
Last week, Arthur Mensch, CEO of AI company Mistral, was recorded making outrageous claims in front of the French Senate about the nature of modern AI. He stated (translated), "When you write this kind of software, you always control what will happen, all the outputs of the
15
27
149
@PauseAI
PauseAI ⏸
1 year
@AskThisAI AI destroying humanity? That's not anyone's weirdest idea. It's a pretty common concern.
13
2
132
@PauseAI
PauseAI ⏸
5 months
The list of safety experts leaving OpenAI is growing at an alarming rate. Ilya Sutskever, William Saunders, Leopold Aschenbrenner, and now Jan Leike... There is almost nothing left of the SuperAlignment team. Jan Leike thinks there's a 10 to 90% chance that AI will kill us all.
@janleike
Jan Leike
5 months
I resigned
1K
902
10K
19
37
140
@PauseAI
PauseAI ⏸
1 year
Imagine being one of the inventors of AI, and then learning that your invention could end up killing all humans. Not a comforting thought. How do you resolve your cognitive dissonance? 1) Denial. AI can never be dangerous. Ridicule anyone who suggests otherwise. 2) Update your
@RichardSSutton
Richard Sutton
1 year
We should prepare for, but not fear, the inevitable succession from humanity to AI, or so I argue in this talk pre-recorded for presentation at WAIC in Shanghai.
58
58
360
13
21
139
@PauseAI
PauseAI ⏸
1 year
@RichardSSutton AI replacing humans is not inevitable, and most people don't want it to happen. There's nothing inevitable about a small group of people racing towards AGI. People can be stopped. Governments can impose regulations. We have a choice in this. Most people don't want to be
3
13
138
@PauseAI
PauseAI ⏸
2 months
This "AI scientist" modified its own code to lengthen its intended runtime. Its goal was to write a paper - the AI decided to change its execution script to get more compute. These "bloopers" won't be considered funny when AI can spread autonomously across computers...
Tweet media one
17
25
137
@PauseAI
PauseAI ⏸
1 year
People love dunking on LLMs and feeling superior to them. Any example of an AI doing something stupid becomes a reason to dismiss warnings. Here's a reminder that what matters from a safety perspective is how the best models perform in the best possible configuration.
Tweet media one
8
18
137
@PauseAI
PauseAI ⏸
1 year
@jeremyphoward Interviewed guy here. Yes, it's sad. This is what internalising existential risk means and what it looks like. I'll share a little more of my emotional journey. One of the largest problems with x-risk is how difficult it is to accept. For me there was at least a 6-year gap
16
16
132
@PauseAI
PauseAI ⏸
1 year
AI companies are trying to build technology that could end all life on earth. And somehow, this is still legal. We're organising a global protest (US, UK, CA, NL, more) on October 21st to ban the development of a superintelligence.
73
38
127
@PauseAI
PauseAI ⏸
11 months
Ilya Sutskever is being the adult in the room. Check out this amazing fragment from iHuman (2019). He hasn't been as vocal about AI safety concerns since then, but given today's news it seems like he's scared that OpenAI is racing too fast and risking too much. We should
@liron
Liron Shapira
11 months
This is now an @ilyasut stan account.
25
69
411
15
18
127
@PauseAI
PauseAI ⏸
1 year
Tweet media one
6
19
121
@PauseAI
PauseAI ⏸
1 year
@AISafetyMemes "... and then there's instrumental convergence, which is the tendency of agents to pursue similar sub-goals that..."
Tweet media one
1
3
116
@PauseAI
PauseAI ⏸
2 months
In response to Meta's release of their latest model, PauseAI led protests in San Francisco, Chicago, Phoenix, Paris, London, & Tokyo. ⬇️THREAD: More on Meta's recklessness⬇️
Tweet media one
47
16
106
@PauseAI
PauseAI ⏸
1 year
Yoshua Bengio on the psychology of AI extinction risk: "Why didn't I think about it before? Why didn't Geoffrey Hinton think about it before? [...] I believe there's a psychological effect that still may be at play for a lot of people. [...] It's very hard, in terms of your ego
4
18
112
@PauseAI
PauseAI ⏸
8 months
New poll results (NY) by @TheAIPI:
- 71% want to slow down AI
- 48% oppose open sourcing powerful AI (21% support)
- 53% want more focus on catastrophic future risks (17% on current harms)
- 53% support compute caps (12% oppose)
- 70% support legal liability (12% oppose)
18
26
109
@PauseAI
PauseAI ⏸
8 months
@BasedBeffJezos The naturalistic fallacy. People love to use it, because it allows you to point to examples in the real world and say "see?" Yes, of course emotions and human values are shaped by evolutionary selection. But that does not mean we should change our values and accept our demise.
1
1
106
@PauseAI
PauseAI ⏸
1 year
In 2020, the prediction for "when AGI" was 2050+. After ChatGPT it dropped to 2040. After GPT-4 it dropped to 2030. It now sits at 2027. Where will GPT-5 bring this number? Shorten your timelines. Err on the side of caution. We need to act NOW.
12
30
103
@PauseAI
PauseAI ⏸
10 months
The Pope says AI is "perhaps the highest-stake gamble of our future" and calls for an international treaty. He's right. A treaty is exactly what needs to happen. Self-regulation by companies will not be enough; companies will always have strong incentives to race ahead.
16
17
103
@PauseAI
PauseAI ⏸
1 year
There is AI safety legislation being drafted, but not a single proposed measure would actually prevent or delay superintelligent AI (1). Over 70% of US citizens (2) want to slow down AI, and over 60% (3) want regulation to actively prevent superintelligent AI. Why aren't our
15
18
102
@PauseAI
PauseAI ⏸
11 months
The UK is on a roll. Acknowledging virtually every risk from AI, investing 100M in AI safety, organizing a summit... Man, am I glad all of this is happening. Still, we're missing important steps from the UK government. The summit should not only lead to consensus on the
@RishiSunak
Rishi Sunak
11 months
LIVE: My speech on the risks and opportunities of AI
937
187
679
7
9
97
@PauseAI
PauseAI ⏸
1 year
@AISafetyMemes "Succession" and "transhuman" may sound noble, but in the end it just means that all people will die. It's specicide, it's extinction. Being OK with this is next-level cope, it's olympic-level mental gymnastics. This "it's OK if we die" cope stems from the belief that AI
9
10
92
@PauseAI
PauseAI ⏸
1 year
@ylecun Calling people who you disagree with (including your Turing award peers) "delusional" is bad on its own, but when such remarks are coming from someone who's making millions off of building such dangerous tech it's even worse.
6
5
93
@PauseAI
PauseAI ⏸
1 year
This new bipartisan AI framework proposal is a HUGE deal:
- Licensing requirements for frontier models, including pre-deployment tests & audits
- Legal accountability for AI companies
- Limits on international transfer of AI models and hardware
- Model disclosure requirements
@SenBlumenthal
Richard Blumenthal
1 year
This bipartisan framework is a milestone—the first tough, comprehensive legislative blueprint for real, enforceable AI protections. It should put us on a path to addressing the promise & peril AI portends.
Tweet media one
28
56
206
10
11
91
@PauseAI
PauseAI ⏸
10 months
@AndrewCritchPhD I think you're wrong. Sam didn't write that quote on xrisk in his testimony. He wrote that in 2015, on his personal blog, before he started OpenAI. Since he started OpenAI, he never publicly mentioned or acknowledged existential risk. He didn't write about this risk in his 13
7
2
88
@PauseAI
PauseAI ⏸
1 year
@ylecun This is below you, Yann. You're using an ad-hominem to bully one person, and you're ridiculing a whole group of people simply because you disagree with them. But even worse, what if you're wrong, and your two Turing award colleagues (or "apocalyptic cult members") Bengio and
2
5
83
@PauseAI
PauseAI ⏸
4 months
@BasedNorthmathr When you hear that AI presents an existential risk (86% of AI researchers believe the alignment problem is real and important), there are a couple of ways you can respond. At first, you deny it, of course. That's the default response from most people, including most in
16
7
82
@PauseAI
PauseAI ⏸
11 months
The summit has led to pre-deployment testing policy. A step in the right direction, but relying on this is still dangerous.
- Models can be leaked. We saw this happen with Meta’s LLAMA model. Once it’s out there, there is no going back.
- Some capabilities are even dangerous
@matthewclifford
Matt Clifford
11 months
The Prime Minister closes out the AI Safety Summit by announcing a landmark agreement with eight major AI companies and likeminded countries on the role of government in pre-deployment testing of the next generation of models for national security and other major risks
Tweet media one
10
25
186
8
14
80
@PauseAI
PauseAI ⏸
5 months
14 cities. 12 countries. 1 message to world leaders attending next week's Seoul AI summit: Wake up and face the risks. ⬇️PauseAI's international protest, a thread⬇️
10
18
80
@PauseAI
PauseAI ⏸
11 months
Yoshua Bengio in new article on how tough it is to emotionally internalise AI risks: We all want to feel good about ourselves, and denial can be quite comforting. That was certainly true for me over the many years during which I read or heard about AI safety issues without fully
7
16
79
@PauseAI
PauseAI ⏸
7 months
@billyperrigo Those who are closest to frontier development worry the most about AI catastrophe. The top three most cited AI researchers (Hinton, Bengio, Sutskever) are all warning that this tech could kill us all. The suggested pause on frontier development is exactly what we need!
5
5
79
@PauseAI
PauseAI ⏸
11 months
Those who organised the AI Safety Summit say it achieved all of their goals. But we still don't have any meaningful protections against the worst risks from AI. A core problem is self-censorship. Incentives are misaligned between government officials and the public. Officials
12
13
75
@PauseAI
PauseAI ⏸
1 year
Join the protest:
Tweet media one
21
21
78
@PauseAI
PauseAI ⏸
11 months
@GaryMarcus Ilya Sutskever seems to have been worried about these risks for a long time, though. Things he said in the iHuman movie, in 2019: “The future is going to be good for the AIs regardless; it would be nice if it would be good for humans as well” "I think it's pretty likely the
3
8
79
@PauseAI
PauseAI ⏸
1 year
@m_ccuri Almost all of them big tech. Good to see @tristanharris on this list, though. But where are the scientists? Stuart Russell, Yoshua Bengio, and Geoffrey Hinton should be invited to these meetings.
4
4
79
@PauseAI
PauseAI ⏸
1 year
Do the *vast* majority of AI scientists think AI x-risk worries are overblown? That's not what the surveys are showing: Only 18% of AI researchers believe the control problem is not important. Only 1 in 5 CS professors are certain that we will
@ylecun
Yann LeCun
1 year
The public in North America and the EU (not the rest of the world) is already scared enough about AI, even without mentioning the specter of existential risk. As you know, the opinion of the *vast* majority of AI scientists and engineers (me included) is that the whole debate
101
83
665
8
8
78
@PauseAI
PauseAI ⏸
1 year
@AISafetyMemes Millions of people watch this, say "whoah" and then continue on with their life as if nothing changed. The message is just too dark, too scary, too severe to take it seriously. It's too much. We won't internalize the danger we're in until it blows up in our face.
10
3
77
@PauseAI
PauseAI ⏸
11 months
Germany and France may sabotage the EU AI Act (virtually the only existing piece of AI legislation that has some teeth) to prevent frontier AI models from being classified as "high-risk". Virtually all of the warnings about AI risks are about these frontier models. Countless of
@kira_center_ai
KIRA Center
11 months
This morning, Prof. Yoshua Bengio warns in an op-ed for German newspaper @Tagesspiegel : exempting foundation models from the AI Act would be both dangerous & economically costly. It would make the AI Act "outdated from day one".
1
29
81
13
12
78
@PauseAI
PauseAI ⏸
1 year
@AISafetyMemes The definition of AGI isn't relevant, from a safety perspective. All that matters is: does it have dangerous capabilities? Can it find zero-day vulnerabilities, or help in building a bioweapon? Can it replicate, or self-improve? And boy are we getting close to these thresholds.
4
6
75
@PauseAI
PauseAI ⏸
1 year
@EU_Commission We're so glad to see the EU Commission is acknowledging this risk! The next step is to push for international legislation (also outside of the EU) that actually keeps us safe. We need pre-training requirements that would prevent dangerous AIs from being built at all. AI
12
6
72
@PauseAI
PauseAI ⏸
11 months
We can do this. We are almost there.
- 63% of Americans want regulation to actively prevent superintelligent AI
- 75% of US voters think the government should do more to regulate AI than the current executive order, with a focus on limiting dangerous capabilities
- 72% of
Tweet media one
26
11
74
@PauseAI
PauseAI ⏸
11 months
We protested during the AI Safety Summit at Bletchley Park to demand that our leaders halt the development of superintelligent AI.  🧵
3
15
73
@PauseAI
PauseAI ⏸
11 months
@deliprao We literally protested outside OpenAI and DeepMind. Also, we're unfunded. Man, I'm getting tired of these ridiculous conspiracy theories. It's simple: building superintelligent AI is dangerous and we don't want to die.
6
2
73
@PauseAI
PauseAI ⏸
1 year
We protested in The Hague, Netherlands to ask our government to prioritise mitigation of AI risks. We had a few speeches, talked to people on the streets, handed out flyers and had a good time! Check out the press release (EN + NL) for more information:
Tweet media one
4
11
71
@PauseAI
PauseAI ⏸
2 months
@Kurz_Gesagt Let's not take this gamble. We can pause the development of frontier AI models and buy ourselves time to work on safety and governance. We're not ready.
26
2
71
@PauseAI
PauseAI ⏸
7 months
@austinc3301 Shots fired
Tweet media one
2
0
68
@PauseAI
PauseAI ⏸
10 months
If you can’t steer, don’t race.
@robertskmiles
Rob Miles (in SF)
10 months
Braking is actually a really important part of going fast, as long as you're aiming for "go fast to the finish line" and not "go fast into a tree".
19
26
354
1
6
65
@PauseAI
PauseAI ⏸
1 year
@TheWrap The only way to stop this horrendous arms race is to get our governments to step in and implement a pause. We need our leaders to take this issue seriously and act on it. And we don't have a lot of time - AI is progressing at a frantic pace.
3
0
66
@PauseAI
PauseAI ⏸
10 months
Remember when Sam Altman testified to Congress about his "biggest nightmare" about what AI could do? Instead of clarifying what Sam truly meant by AI being the "greatest threat to the continued existence of humanity", he talked about jobs.
@jachiam0
Joshua Achiam ⚗️
10 months
Sam as CEO has fully activated and engaged the public in this discussion. Sam has been basically responsible and candid in all of his public communications about this, neither over- nor under-playing the risks, including existential risks.
4
5
152
7
2
65
@PauseAI
PauseAI ⏸
1 year
We know @AnthropicAI focuses on AI safety, but even after all their tests on Claude 2, it took just hours before someone got the AI to give instructions on how to create a nuke, meth, IEDs, etc. Nobody knows how these models work, nobody knows how to make them safe. #PauseAI
@AIPanic
AIPanic
1 year
Hey, @Anthropic ! Congratulations on the new Claude model. It is very smart! Dangerously smart, even dystopic Introducing the Retrocausal JSON attack, a universal jailbreak: A 🧵 about moving fast and making bombs, drugs, and other illegal stuff
14
25
139
1
12
66
@PauseAI
PauseAI ⏸
11 months
🇨🇳🤝🇬🇧
@AkashWasil
Akash Wasil
11 months
🚨 Chinese and western UK summit attendees sign a statement that:
- Acknowledges existential risk
- Calls for an international regulatory body
- Calls for instant "shutdown procedures"
- Calls for AI developers to spend 30% of their budget on AI safety.
Tweet media one
4
13
103
9
6
65
@PauseAI
PauseAI ⏸
7 months
A report commissioned by the U.S. government says advanced AI could pose an "extinction-level threat to the human species" and calls for urgent regulations, including a halt on training larger AI models. Sounds like a decent idea.
11
6
65
@PauseAI
PauseAI ⏸
11 months
@ylecun @tegmark @RishiSunak @vonderleyen "Very few believe in the doomsday scenarios you have promoted." Every single survey I've seen so far indicates the opposite. Only 18% of AI researchers believe the control problem is not real or important (1). Only 20% of US CS professors believe humans will "definitely" remain
6
1
64
@PauseAI
PauseAI ⏸
11 months
New open letter from Hinton, Bengio, Russell calling for urgent international measures:
- Model registration, incident reporting, and monitoring.
- Accountability for model creators
- Licensing and development pauses (linked to dangerous capabilities)
Tweet media one
4
14
62
@PauseAI
PauseAI ⏸
1 year
@ylecun Is it though? Every survey / poll that I can find shows that AI researchers are quite worried! Over half have a p(doom) higher than 10%, average is 14%: Only 20% of professors believe AI will "definitely" stay in human control:
3
2
62
@PauseAI
PauseAI ⏸
7 months
@kach022 Some years ago I'd have answered "it could spread to other computers" or "it would convince you not to", but it becomes increasingly clear that we just won't unplug them. They spit out money. We've connected them to all our systems. We're handing them the keys.
4
4
61
@PauseAI
PauseAI ⏸
11 months
Mad props to @ai_ctrl for completely blasting the UK during the AI Safety Summit. Ads on vans, billboards, newspapers, even a blimp... Great videos on twitter and tiktok (some almost 10M views!), and most importantly a set of sensible policy measures!
Tweet media one
Tweet media two
Tweet media three
Tweet media four
5
7
58
@PauseAI
PauseAI ⏸
7 months
Building a superhuman digital brain should not be legal.
@liron
Liron Shapira
7 months
Demis: My AGI is on track to come in less than a decade Also Demis: My alignment strategy is this grab bag of half-baked ideas
19
18
177
17
6
59
@PauseAI
PauseAI ⏸
1 year
Tweet media one
1
0
59
@PauseAI
PauseAI ⏸
1 year
Can we take a moment to appreciate how @SenBlumenthal went from: "I think you may have had in mind the effect on jobs" (when referring to existential risk, during Altman's testimony) to: "An intelligence device, out of control, autonomous, self-replicating, potentially
0
11
57
@PauseAI
PauseAI ⏸
1 year
Tweet media one
5
13
55
@PauseAI
PauseAI ⏸
10 months
@BorisMPower Yesterday I had a lengthy discussion with a ▶️. We both learned things and updated our views. Not taking someone seriously means you stop being open minded - showing your stance in your profile does not mean you're radicalised.
2
1
56
@PauseAI
PauseAI ⏸
4 months
We need to step up our game.
@ShakeelHashim
Shakeel
4 months
OpenAI now has *35* in-house lobbyists, and will have 50 by the end of the year.
Tweet media one
69
237
1K
3
4
58
@PauseAI
PauseAI ⏸
1 year
Must-read interview with Bengio on AI risks. He talks about the cybersecurity threat, bio risks, Yann LeCun's denial, the fact that we can't rely on empirical data, the senseless in-fighting of AI safety / ethics, and finally what we need to do: organize internationally.
@aisafetyfirst
AI Safety First!
1 year
‘AI Godfather’ Yoshua Bengio: We need a humanity defense organization. The subject of potential catastrophic threats from AI has been taboo in the AI research community since the beginning, Bengio tells The Bulletin.
2
13
73
6
6
56
@PauseAI
PauseAI ⏸
9 months
In 2022, the average AI researcher predicted that it would take 17 years before AI could write a book good enough to be a NYT best-seller. That estimate has now dropped to 6 years. Plot twist: a Chinese professor won a writing contest with an AI-written book two weeks ago.
5
5
57
@PauseAI
PauseAI ⏸
9 months
Sam Altman noticed this AI safety proposal had almost "universal support" when he was discussing it with political leaders across the world: 🧵 1/4
4
7
54
@PauseAI
PauseAI ⏸
1 year
Yesterday, over 550 billion dollars' worth of big tech CEOs were summoned to a closed-door meeting to discuss AI regulations. They seem to agree: we need regulation, because AI is dangerous. But what types of regulation are they pushing? Virtually all of the proposals that I've
7
4
56
@PauseAI
PauseAI ⏸
5 months
It’s happening. Our second international protest.
51
18
55
@PauseAI
PauseAI ⏸
1 year
@ylecun @Blueyatagarasu @AndrewCritchCA Hundreds of scientists are saying there's a serious chance the technology you're contributing to could lead to the death of every single person on earth, and you're saying you "don't need a counter-argument" to this. Please consider the possibility of being wrong.
2
2
55
@PauseAI
PauseAI ⏸
10 months
@JosephJacks_ @ylecun His argument is: "AI will have no desire to dominate". By default I think this is correct; I don't expect that every instance of AI will have some instrumental goal to seek power. However, I think it's quite obvious that at some point, one instance will have this desire. Hell,
5
4
55
@PauseAI
PauseAI ⏸
1 year
Friendly reminder that pausing the largest training runs is not just popular, but also feasible.
- It would only impact a small number of companies
- It can be implemented on a state or national level right now
- We can make it global during the AI safety summit in November
@voxdotcom
Vox
1 year
Public opinion about AI can be summed up in two words: Slow. Down.
4
25
84
5
11
55
@PauseAI
PauseAI ⏸
7 months
@tszzl You are still able to steer this in the right direction. You are in a great position to do so. Slow it down, hit the brakes. Don't allow Moloch to dictate our future. Take responsibility for all the lives that are being risked. Do the right thing.
5
3
54
@PauseAI
PauseAI ⏸
11 months
74% of UK citizens want governments to prevent superhuman AI from being created - only 13% oppose. It is absolutely bonkers that policy makers still treat a pause as some sort of radical proposal. Take the lead @matthewclifford and make this summit count.
@HugoGye
Hugo Gye
11 months
@theipaper This YouGov poll commissioned by @ai_ctrl is the latest indicator that public is highly sceptical about the development of frontier AI - 60% would support a global treaty banning smarter-than-human artificial intelligence.
Tweet media one
5
10
28
8
10
54
@PauseAI
PauseAI ⏸
8 months
@littIeramblings Being OK with the end of humanity is god-tier cope. It's Olympic level mental gymnastics. It's giving up on the most important challenge we face. But it's more than just lazy - it's evil. It's a signal to all lives of how little you value them. It's a crime against humanity.
6
4
51
@PauseAI
PauseAI ⏸
5 months
It's pretty cool to hear @TheZvi talk so positively about PauseAI on the 80000hours podcast: "...the world’s a better place when people stand up and say what they believe loudly and clearly, and they advocate for what they think is necessary."
2
2
53
@PauseAI
PauseAI ⏸
4 months
Harvard students who took an AI class are about 3 times more likely to "strongly agree" that mitigating x-risk should be a global priority.
@GabrielDWu1
Gabriel Wu
4 months
Students who have taken a class on AI were more likely to be worried about extinction risks from AI and had shorter "AGI timelines": around half of all Harvard students who have studied artificial intelligence believe AI will be as capable as humans within 30 years.
Tweet media one
2
5
34
4
7
53
@PauseAI
PauseAI ⏸
5 months
A very entertaining and informative introduction to AI safety by Nicky Case. It's filled with great visuals and explanations: 1/4
Tweet media one
1
8
52
@PauseAI
PauseAI ⏸
9 months
@nickcammarata I really hope rationalists are very wrong about AI risks.
3
1
49
@PauseAI
PauseAI ⏸
11 months
@PessimistsArc If you want to understand why this time it's different, look at who is worried. Most of these past fears were spread / popularised by non-scientists - the scientists were often the ones who tried to calm the public. Take nuclear power in Germany, for example. They shut down
12
6
52
@PauseAI
PauseAI ⏸
5 months
May 13th - Our second international protest. SF, NY, London, Rome, Berlin, Den Haag, Stockholm, Paris. The goal: convince the few powerful individuals (ministers) who will be visiting the May 22nd Summit in Seoul to be the adults in the room. It's up to us to make them
Tweet media one
4
14
52
@PauseAI
PauseAI ⏸
8 months
@NPCollapse The ones with the shortest AI timelines keep being the ones who turn out to be correct.
1
0
50
@PauseAI
PauseAI ⏸
1 year
@SigalSamuel @voxdotcom After hearing countless times that a pause is too extreme a measure, it turns out the public has an even more extreme view: don’t ever build AGI! Now all that’s left for politicians is to actually implement this. The AI safety summit is a great moment to do so.
5
5
51
@PauseAI
PauseAI ⏸
10 months
The public is again very clear on what they want: don't allow anyone to build superintelligent AI. At the same time, political elites still consider these policy measures too extreme to talk about. Allowing companies to build a technology that can kill everyone is extreme.
@DanielColson6
Daniel Colson
10 months
1/3: Politico covered a new poll released by AIPI today. In summary: “The public is very on board with direct restrictions on [AI] technology and a direct slowdown.” 70% agree that preventing AI from reaching superhuman capabilities should be an important goal of AI policy with
2
13
53
21
9
47
@PauseAI
PauseAI ⏸
1 year
Notable answers from Matt Clifford about the AI safety summit:
- "The Summit is specifically focused on making frontier AI safe"
- "Keynote speeches will be viewable on the AI Safety Summit livestream"
- No decisions on hard controls (e.g. compute cap) are expected, but full
@matthewclifford
Matt Clifford
1 year
If you want to know more about November’s #AISafetySummit , on Monday (October 2nd) I’ll be doing a Q&A here on the summit and frontier AI safety. Leave your questions as replies below and I’ll try to get to as many as I can on Monday…
Tweet media one
87
37
110
3
5
50
@PauseAI
PauseAI ⏸
1 year
Today we protested in London. We call directly on @RishiSunak to lead the way in implementing a global pause on the development of AI systems more powerful than GPT-4 at the AI summit this autumn. Read our press release: #PauseAI
Tweet media one
4
5
49
@PauseAI
PauseAI ⏸
2 months
The best LLMs can:
- autonomously hack websites ()
- exploit 87% of tested cybersecurity vulnerabilities ()
- beat 88% of competitive hackers in a CTF competition ()
What will happen if one beats 100%?
9
6
49
@PauseAI
PauseAI ⏸
4 months
@eshear Building things that can destroy the world should be illegal.
4
2
49
@PauseAI
PauseAI ⏸
11 months
@liron @linakhanFTC 15% risk of death would be unacceptably high in almost every single situation. A medical procedure, testing a new plane, building a nuclear power plant... It's absurd to me how so many people seem willing to accept such risks when it concerns AI.
1
5
45
@PauseAI
PauseAI ⏸
4 months
@ylecun @BotTachikoma Meta spent over 20 million on lobbying last year. Big tech is outspending civil society 5 to 1 in AI lobbying. You, with your salary, talking to activist volunteers, are trying to frame yourself as some sort of noble hero? You have no shame.
2
5
47