Rob Bensinger ⏹️

@robbensinger

9,067 Followers · 314 Following · 413 Media · 19,472 Statuses

Comms @MIRIBerkeley. RT = increased vague psychological association between myself and the tweet.

Berkeley, California
Joined November 2008
Pinned Tweet
@robbensinger
Rob Bensinger ⏹️
3 months
What if we just decided to make AI risk discourse not completely terrible?
Tweet media one
69
34
350
@robbensinger
Rob Bensinger ⏹️
2 years
Tweet media one
10
106
1K
@robbensinger
Rob Bensinger ⏹️
2 years
"Bad take" bingo cards are terrible, because they never actually say what's wrong with any of the arguments they're making fun of. So here's the "bad AI alignment take bingo" meme that's been going around... but with actual responses to the "bad takes"!
Tweet media one
33
162
912
@robbensinger
Rob Bensinger ⏹️
2 years
A surprising thing I've realized over time is that I can often outperform without being super clever, just by doing normal garden-variety thinking and not letting the thinking get derailed by [List of Tempting Distractions and Simple Mistakes].
13
51
782
@robbensinger
Rob Bensinger ⏹️
10 months
There sure is a lot of Twitter discourse the last 24 hours from people who seem to legitimately not realize that all the leadership conflicts and disagreements at OpenAI, Anthropic, etc. are between people who share the view that AI isn't unlikely to kill literally all humans.
Tweet media one
30
73
566
@robbensinger
Rob Bensinger ⏹️
10 months
Seems like the first news article with leaks from the board, and possibly the first to represent something like their perspective? I have to say, if @sama was trying to keep board members from saying anything negative about OpenAI's safety practices in public, I think this is
@pitdesi
Sheel Mohnot
10 months
New info!
- Sam was trying to push Helen out for her academic paper critical of OAI; Ilya sided with her to push out Sam
- The Anthropic folks had also tried to push Sam out
- There are 6 board members bc of disagreement on who to add
- Helen ok to destroy OAI for the mission
Tweet media one
81
256
2K
25
36
515
@robbensinger
Rob Bensinger ⏹️
9 months
A common mistake I see people make is that they assume AI risk discourse is like the left image, when it's actually like the right image. I think part of the confusion comes from the fact that the upper right quadrant is ~empty. People really want some group to be upper-right.
Tweet media one
Tweet media two
67
38
412
@robbensinger
Rob Bensinger ⏹️
9 months
Here are some of my views on AI x-risk. I'm pretty sure these discussions would go way better if there was less "are you in the Rightthink Tribe, or the Wrongthink Tribe?", and more focus on specific claims. Maybe share your own version of this image, and start a conversation?
Tweet media one
69
48
403
@robbensinger
Rob Bensinger ⏹️
13 days
Regular reminder that MIRI folks consider it plausible that AI just keeps being more and more beneficial for society up until the day before AI causes everyone to drop dead in the same five seconds. The x-risk view has never been very close to the generic "AI bad, boo AI" view.
26
32
348
@robbensinger
Rob Bensinger ⏹️
10 months
AI didn't spend a long time with roughly human-level ability to imitate art styles, before it became vastly superhuman at this skill. Yet for some reason, people seem happy to stake the future on the assumption that AI will spend a long time with ~par-human science ability.
23
27
330
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex I think you're just missing context on what happened here. Circa 2000, a set of transhumanist freedom-loving libertarians realized that we don't know on a technical level how to get good outcomes from AI, and if we don't figure it out in advance we're likely to destroy ourselves.
2
21
331
@robbensinger
Rob Bensinger ⏹️
2 years
I side with Scott Aaronson in saying that the Chinese room thought experiment is sophistry. I don't know why it gets treated as anything else by any intellectual?
Tweet media one
Tweet media two
@ShakedDown
Fake Mario
2 years
@robbensinger @MikePFrank @SturnioloSimone @leventov @smatta1701 @heyorson @davidmanheim @Tris_Legomenon @AyeGill @BasedBeffJezos @cat_fro_devnull @bayeslord @RollinReisinger @xlr8harder @JeffLadish @LesaunH I'm on the fence here. It's plausible that consciousness is an entirely separate phenomenon and that Chinese room/AI can't have it (but could be trained to convincingly sound like it does). Maybe assuming it can't is safer.
4
0
4
38
19
324
@robbensinger
Rob Bensinger ⏹️
1 year
The way ML developed post-2010 seems like more or less a worst-case scenario for humanity:
- AI is opaque.
- ML is impressive enough to build hype and shorten timelines, but not to help save the world at all.
- We got there with brute force, not new insights into minds.
28
26
316
@robbensinger
Rob Bensinger ⏹️
3 months
My take on Leopold Aschenbrenner's new report: I think Leopold gets it right on a bunch of important counts. Three that I especially care about:
1 - Full AGI and ASI soon. (I think his arguments for this have a lot of holes, but he gets the basic point that superintelligence
Tweet media one
22
61
305
@robbensinger
Rob Bensinger ⏹️
11 months
Seems like one of the more important facts about our civilization -- we live in the world where paying people is seen as taking advantage of them, while lying to people is seen as normal and OK. (In a surprisingly large number of cases.)
@robertwiblin
Rob Wiblin
11 months
Paying people in exchange for their blood is very bad — but saying misleading things so they'll give you their blood for free is very good.
Tweet media one
4
13
198
4
31
295
@robbensinger
Rob Bensinger ⏹️
1 year
I'm going to lose an ungodly, unrecoverable number of Bayes points if humanity somehow sticks the landing on this whole AI thing and it's going to be glorious
25
13
287
@robbensinger
Rob Bensinger ⏹️
10 months
With the benefit of hindsight, the last few days really look like "powerful people fought, and journalists were purely functioning as easily-controlled pawns in the power struggle". Which... for all the criticisms I have of journalists, is not something I remember seeing before.
20
15
279
@robbensinger
Rob Bensinger ⏹️
10 months
We didn't learn "they can't fire him". We did learn that the organization's staff has enough faith in Sam that the staff won't go along with the board's wishes absent some good supporting arguments from the board. (Whether they'd have acceded to good arguments is untested.)
@tobyordoxford
Toby Ord
10 months
The last few days exploded the myth that Sam Altman's incredible power faces any accountability. He tells us we shouldn't trust him, but we now know the board *can't* fire him. I think that's important.
Tweet media one
115
173
2K
30
8
278
@robbensinger
Rob Bensinger ⏹️
10 months
I implore everyone who agrees with me and everyone who disagrees with me: please have good discourse. Please say true and relevant things. Concede points from the other side when they're right. Focus on conversational cruxes. Fight the urge to zing.
13
23
266
@robbensinger
Rob Bensinger ⏹️
3 years
What's the best way to get every journalist in the world to read this article? (Or failing that, get every journalist at the fifteen most widely-read serious English-speaking news outlets to read it.) Extra credit: make 'we've read this' common knowledge.
13
62
254
@robbensinger
Rob Bensinger ⏹️
5 months
The thing I found most disturbing in the board debacle was that hundreds of OpenAI staff signed a letter that appears to treat the old-fashioned OpenAI view "OpenAI's mission of ensuring AGI benefits humanity matters more than our success as a company" as not just wrong, but
@robertwiblin
Rob Wiblin
5 months
I've seen a few people remark that it's ironic that the workers of OpenAI sided with their billionaire CEO over a non-profit board. But that misunderstands the board's intended purpose — never protecting OpenAI staff from its leadership, but rather protecting society from OpenAI
6
18
212
18
23
247
@robbensinger
Rob Bensinger ⏹️
2 years
When a thing is good or bad, it's usually not good/bad in every respect simultaneously. The impulse to make @elonmusk either good-on-all-dimensions or bad-on-all-dimensions simultaneously should be a red flag for motivated reasoning.
Tweet media one
3
20
235
@robbensinger
Rob Bensinger ⏹️
10 months
Could someone from OpenAI explain to me why y'all have highlighted this quote as one of the main objections to the board's conduct? To my ear, if there's no world where it would be OK to shutter OpenAI, then it's not OK to shutter OpenAI even if the org is causing net harm and
Tweet media one
33
6
229
@robbensinger
Rob Bensinger ⏹️
4 months
This level of reputation management seems congruent with the reporting that @sama tried to get @hlntnr kicked off the OpenAI board for publicly criticizing some of OpenAI's safety practices, and that this sparked the board conflict. I'm actually very sympathetic to orgs like
@KelseyTuoc
Kelsey Piper
4 months
Equity is part of negotiated compensation; this is shares (worth a lot of $$) that the employees already earned over their tenure at OpenAI. And suddenly they're faced with a decision on a tight deadline: agree to a legally binding promise to never criticize OpenAI, or lose it.
5
32
539
9
25
224
@robbensinger
Rob Bensinger ⏹️
2 years
A lot of the relative placements on that AGI political compass meme seemed very wrong to me, so here's one that does match my current impressions: (My incredibly vague, amazingly low-confidence, June 17 2022 impressions.)
Tweet media one
24
19
213
@robbensinger
Rob Bensinger ⏹️
1 year
I've been citing to explain why the situation with AI looks doomy to me. But that post is relatively long, and emphasizes specific open technical problems over "the basics". Here are 10 things I'd focus on if I were giving "the basics" on why I'm worried:
13
25
205
@robbensinger
Rob Bensinger ⏹️
2 years
I'm not a big fan of the "takeoff" analogy for AGI. In real life, AGI doesn't need to "start on the ground". You can just figure out how to do AGI and find that the easy way to do AGI immediately gets you a model that's far smarter than any human. Less "takeoff", more "teleport".
15
15
196
@robbensinger
Rob Bensinger ⏹️
2 years
My thoughts on EA optics:
Tweet media one
7
7
192
@robbensinger
Rob Bensinger ⏹️
9 months
Is this @ylecun's view?:
1. The probability of AGI being developed by a method other than my favored one is negligible.
2. The probability of my favored approach being hard to align is negligible.
3. The probability of early AGIs being cheap to run or very smart is negligible.
19
10
188
@robbensinger
Rob Bensinger ⏹️
9 months
Or, in simpler terms:
Tweet media one
@the_megabase
megabase
9 months
political compass of effective altruist critiques (long version)
Tweet media one
20
54
433
12
16
186
@robbensinger
Rob Bensinger ⏹️
5 months
The impression I'm getting from some OpenAI staff is that their view is something like: "OpenAI's 1200+ employees are, pretty much to a man, extremely committed to the nonprofit mission. Effectively all of us take existential risk from AI seriously, and would even be willing to
@robbensinger
Rob Bensinger ⏹️
5 months
The thing I found most disturbing in the board debacle was that hundreds of OpenAI staff signed a letter that appears to treat the old-fashioned OpenAI view "OpenAI's mission of ensuring AGI benefits humanity matters more than our success as a company" as not just wrong, but
18
23
247
8
10
182
@robbensinger
Rob Bensinger ⏹️
2 years
Today's mood: everyone less autistic than me is a lying unprincipled PR robot, everyone more autistic than me is a goofball who thinks their sensory sensitivities and social preferences are important moral principles that keep communities from collapsing into ruin
9
12
180
@robbensinger
Rob Bensinger ⏹️
2 years
Ran it six times with the same prompt, got "men are taller than women" 6/6 times. Good to check whether a prompt gets you the same result before retweeting, since otherwise Twitter will amplify the most surprising outlier ChatGPT behavior, rather than its usual behavior.
Tweet media one
@mattyglesias
Matthew Yglesias
2 years
ChatGPT is so averse to stereotypes and generalization that it's reluctant to say men are taller than women.
Tweet media one
78
27
964
6
5
179
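(A minimal sketch of the repeat-the-prompt check described above, assuming the OpenAI Python client; the model name and prompt are illustrative stand-ins, and the original test was presumably run in the ChatGPT interface rather than via the API.)

```python
# Run the same prompt several times and tally the answers, so you report
# the model's typical behavior rather than a surprising outlier.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Who tends to be taller, men or women?"  # illustrative prompt
answers = []
for _ in range(6):  # six independent runs, as in the tweet
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(resp.choices[0].message.content.strip())

# If 6/6 runs agree, that's the usual behavior, not a one-off fluke.
for answer, count in Counter(answers).most_common():
    print(f"{count}/6: {answer[:100]}")
```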
@robbensinger
Rob Bensinger ⏹️
2 years
Reminder: Hume's is-ought distinction is an "is", so it has no "ought" implications.
14
12
175
@robbensinger
Rob Bensinger ⏹️
2 years
Conservatives and progressives are gradually doing the equivalent of brain-damaging each other over time, by associating various cognitive steps and contents with a particular political coalition and therefore causing the rival coalition to be less able to think certain thoughts.
5
25
169
@robbensinger
Rob Bensinger ⏹️
4 months
Thoughts on OpenAI from @ozziegooen :
Tweet media one
5
28
175
@robbensinger
Rob Bensinger ⏹️
4 years
.@TwitterSupport, please unban @lukeprog. Nobody knows why he is banned, including Luke, and we are very confused. If anything, it would make more sense to ban everyone else and just have Twitter consist of @lukeprog going forward.
1
14
170
@robbensinger
Rob Bensinger ⏹️
10 months
Last I checked, the term for that kind of e/acc is "doomer"
@rumtin
Rumtin
10 months
Is there an "e/acc for everything but nukes, bioweapons, AI-enabled warfare and authoritarian surveillance"? Asking for a friend.
50
16
249
4
9
170
@robbensinger
Rob Bensinger ⏹️
10 months
I disagree with Bostrom's 'society isn't yet worried enough, but I now worry there's a strong chance we'll overreact'. I think underreaction is still hugely more likely, and hugely more costly. But I'm extremely glad x-risk people are the sorts to loudly voice worries like that.
8
7
168
@robbensinger
Rob Bensinger ⏹️
2 years
The Four Non-Blind Men and the Elephant: A Fable

Once, while traversing a great wood, four non-blind men happened upon an elephant. All of them said "That's an elephant", because it was an elephant.
5
9
158
@robbensinger
Rob Bensinger ⏹️
1 year
Eliezer Yudkowsky's response to "Can someone please explain how people get such highly confident estimates of near-certain doom from AI?":
Tweet media one
45
12
161
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex I realize that there are a lot of statists in the world, so it makes sense to have that hypothesis queued up. But jesus fucking christ, have you ever misunderstood what's happening in this particular weird case.
6
4
161
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex The freedom-loving transhumanists then spent twenty years working hard to try and better understand this problem, spin up a research field about it, and generally do everything possible except alert the governments of the world about it.
2
7
159
@robbensinger
Rob Bensinger ⏹️
8 months
Regular reminder: in a true prisoner's dilemma, "defect if you can get away with it cost-free" is a crucially important policy. It's actively bad, unvirtuous, unethical, etc. to make a policy of never defecting in those cases. (Imagine meeting a serial killer and aiding them in
19
13
144
@robbensinger
Rob Bensinger ⏹️
2 years
"AGI is scary because argmaxing is scary" 🢡 "AGI is scary because what if an AGI ran a paperclip factory!" "MIRI does math" 🢡 "MIRI does GOFAI" "There's no fire alarm for AGI" 🢡 "Let's start calling every new advance in AI a fire alarm!" WHY DOES MEMETICS WORK THIS WAY
14
11
150
@robbensinger
Rob Bensinger ⏹️
2 years
My updated guesses at people's views:
Tweet media one
27
7
144
@robbensinger
Rob Bensinger ⏹️
2 years
Indeed, often thinking of a question as "just a mundane question I can answer by thinking about it the same as any other question" *is* the key unusual move needed to answer the question.
4
10
145
@robbensinger
Rob Bensinger ⏹️
1 year
@Aella_Girl @eigenrobot reversed stupidity is not intelligence
Tweet media one
11
8
140
@robbensinger
Rob Bensinger ⏹️
2 years
Rather a lot of work is done by just following thoughts through to their conclusion, consistently applying "easy" reasoning methods, etc. Many unusual and important conclusions can be reached without your being sparklingly creative or anything.
1
1
141
@robbensinger
Rob Bensinger ⏹️
2 years
Thread for examples of alignment research MIRI has said relatively positive stuff about: ("Relatively" because our overall view of the field is that not much progress has been made, and it's not clear how we can change that going forward. But there's still better vs. worse.)
1
23
140
@robbensinger
Rob Bensinger ⏹️
9 months
Seems important to mention, given Eliezer Yudkowsky's note about @sama never reaching out to him to talk over the years: Nate Soares and Sam have had a few conversations over the years. Specifically: Nate reached out to Sam in mid-2015 to talk about Sam's plans to create OpenAI;
8
7
137
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex Because the arguments for 'government intervention will probably just make this problem even worse' were very obvious. Bureaucrats won't understand the problem. They can't ban the dangerous practices alone, because they don't know what's dangerous.
1
3
136
@robbensinger
Rob Bensinger ⏹️
8 months
Just had a 30m conversation about combinatorics, probability theory, and the gambler's fallacy, using as an example 'What if our hotel room number happens to be 620, the same as my sister's apartment number?'. Walked across the street, checked in, found our number was 620. 😲
13
2
137
@robbensinger
Rob Bensinger ⏹️
2 years
A few small edits:
Tweet media one
11
7
137
@robbensinger
Rob Bensinger ⏹️
1 year
I'd guess that the release of the movie "Don't Look Up" is a not-unimportant reason people are responding more sanely to AGI right now. (Though not the only reason.) I'd call "Don't Look Up" a rationality intervention / metacognition intervention / sociology intervention.
11
4
125
@robbensinger
Rob Bensinger ⏹️
6 years
Elizabeth Morningstar: "rationality is like a martial art in that if you find yourself in a situation where you could use it, you must always try running away first"
3
24
134
@robbensinger
Rob Bensinger ⏹️
2 years
IMO, EAs should be proud of achievements like the TIME cover story and the UN report. It's always hard to predict the net effect of high-profile events like this, but I'd guess they're positive. And these ideas are genuinely worthy ideas, deserving of serious discussion.
3
4
132
@robbensinger
Rob Bensinger ⏹️
1 year
@perrymetzger @ATabarrok "Hidden Complexity of Wishes" isn't arguing that a superintelligence would lack common sense, or that it would be completely unable to understand natural language. It's arguing that loading the right *motivations* into the AI is a lot harder than loading the right understanding.
5
3
130
@robbensinger
Rob Bensinger ⏹️
3 months
Seems absurdly overconfident. E.g., seems to entail a less than 1 in 100,000,000 chance of something else wiping us out (or permanently derailing civilization) first, like an act of bioterrorism?
@romanyam
Dr. Roman Yampolskiy
6 months
I am at 99.999999%
50
7
68
8
2
127
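(Unpacking the arithmetic behind "less than 1 in 100,000,000": a 99.999999% probability of AI doom leaves at most the complement for every other outcome, including being wiped out by something else first.)

$$1 - 0.99999999 = 10^{-8} = \frac{1}{100{,}000{,}000}$$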
@robbensinger
Rob Bensinger ⏹️
2 years
"I will sound crazy on Twitter so that my colleagues can sound crazy in private so that their colleagues can build a safer AI so that our children can live in the glorious transhuman future"
6
9
126
@robbensinger
Rob Bensinger ⏹️
4 months
You don't need to throw out probability theory in order to think 'falsifiability is what separates science from pseudoscience'. But Popper separately does throw out probability theory. And a HUGE part of why he likes 'falsifiability' as a criterion is that it makes science sound
Tweet media one
10
11
126
@robbensinger
Rob Bensinger ⏹️
2 years
@peterhartree @ESYudkowsky @MIRIBerkeley It's either true (given the information available to us) that humanity's odds of surviving AGI are very low, or it's false -- our ability to come up with mean-sounding labels ("luddite despair"), or to object on vibes grounds (doesn't sound optimistic enough) doesn't change that.
8
7
120
@robbensinger
Rob Bensinger ⏹️
2 years
@ohabryka @trevorjtweets @RichardMCNgo My mental model of how EA came about looks something like this. What would you change about this picture? (Same Q to others who see this.) (Can include 'this is suboptimally coarse-grained, the fine-grained version should note things like X and Y'.)
Tweet media one
13
18
125
@robbensinger
Rob Bensinger ⏹️
4 months
All the dramatic claims have been about Twitter, but I feel like Facebook has already quietly become basically-not-a-functional-website. It is no longer a place with core functionality like 'in a discussion, there's a way for each person to get notified when the other responds'.
8
5
117
@robbensinger
Rob Bensinger ⏹️
1 year
Responding to someone who said he agreed with @AndrewYNg at the time that worrying about smarter-than-human AI was "like worrying about overpopulation on Mars", but now he thinks Mars is starting to fill up: It really was a uniquely bad argument at the time.
2
12
122
@robbensinger
Rob Bensinger ⏹️
2 years
Uncommon mental motions that can prevent a surprising number of philosophical errors:
1. When you hear something mysterious like "reality doesn't exist", don't fall into a reverent fog. Say "Oh really? How does that work?"
Tweet media one
2
12
120
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex It's almost certain not to work, but the other options to avoid all dying seem even more hopeless at this stage. And then we see folks with zero context on the state of the field over the last twenty years jumping in and confidently asserting that we must be statists.
2
3
121
@robbensinger
Rob Bensinger ⏹️
3 months
The feeling I sometimes get when I'm arguing with someone who thinks good science and responsible engineering means collecting data and not speculating about what the data might mean a year or two out:
Tweet media one
3
4
118
@robbensinger
Rob Bensinger ⏹️
4 months
A retraction from Harlan: the MIRI Newsletter said "it appears that not all of the leading AI labs are honoring the voluntary agreements they made at the [UK] summit", citing Politico. We now no longer trust that article, and no longer have evidence any commitments were broken.
4
8
115
@robbensinger
Rob Bensinger ⏹️
10 months
I don't know why the folks at OpenAI have acted as they did, but Twitter seems to have completely lost the plot. Most seem to think this is a mundane battle over shepherding cool products to markets; a minority think it's a battle between the AI-unworried and the AI-terrified.
2
1
116
@robbensinger
Rob Bensinger ⏹️
9 months
One of the disadvantages of having terms like "EA" and "longtermism" is that it can make some parts of common sense sound like abstruse philosophical positions. Helping people is good. Helping more people is gooder. Helping someone born tomorrow counts too. Wild stuff, I know.
Tweet media one
18
4
114
@robbensinger
Rob Bensinger ⏹️
9 months
@badwind86 @AuransApp @ESYudkowsky @astupple @slatestarcodex If we die, there will be no poetry in it. Just another sad, brute, avoidable fact added to the heap.
4
7
115
@robbensinger
Rob Bensinger ⏹️
2 years
A lot of EAs seem to under-appreciate the extent to which your response to a crisis isn't just a reaction to an existing, fixed set of societal norms — the act of choosing a response is the act of 𝘤𝘳𝘦𝘢𝘵𝘪𝘯𝘨 a norm.
1
8
112
@robbensinger
Rob Bensinger ⏹️
2 years
The world is likely to become increasingly distracting. If you're doing something important, don't lose hours in ways you'll regret. + Prepare now for a world that's much more effective in the future at burning an afternoon of yours in ways that are neither fun nor build utopia.
1
8
112
@robbensinger
Rob Bensinger ⏹️
9 months
Another part of the confusion seems to be that half the people think "doomer" means something like "p(doom) above 5%", and the other half think "doomer" means something like "p(doom) above 95%". Then their wires get crossed by the many people who have a p(doom) like 20% or 40%.
Tweet media one
6
4
114
@robbensinger
Rob Bensinger ⏹️
10 months
@jachaseyoung Nuanced views on this are in fact possible, and in fact were the norm in public AI x-risk discourse until, I'd say, the past year or two. Bostrom and Yudkowsky are holdovers from a more precise culture that doesn't ground its world-view in zingers, political slogans, and memes.
8
6
111
@robbensinger
Rob Bensinger ⏹️
1 year
In a sane world, it doesn't seem like "well, maybe AI will get stuck at human-ish levels for decades" or "well, maybe superintelligence couldn't invent any wild new tech" ought to be cruxes for "Should we pause AI development?" or "Is alignment research the world's top priority?"
10
6
108
@robbensinger
Rob Bensinger ⏹️
2 years
Why do people often use the word "incentive" when they really mean "myopic incentive"? E.g., someone thinks that AGI is likely to kill everyone, such that doing ML research shortens her lifespan. But she does the research anyway because it's fun / interesting / profitable / etc.
23
8
109
@robbensinger
Rob Bensinger ⏹️
11 months
@benlandautaylor @ID_AA_Carmack "When did open source become bad?" and "Are you opposed to open source?" are wrong questions. Open source is generally good, and always has been generally good; the exception is for tech that lets you destroy the world.
8
6
110
@robbensinger
Rob Bensinger ⏹️
3 years
Putin quote: "These sanctions that are being imposed, they are akin to declaring war. But thank God, we haven't got there yet." NPR headline: "Putin calls sanctions a declaration of war as Zelenskyy pleads for more aid" Please be more cautious with your phrasings, @NPR .
5
5
107
@robbensinger
Rob Bensinger ⏹️
2 years
I'd instead say: it's good that AGI tech not proliferate (because otherwise we all die), and getting in the habit of closed-sourcing now is plausibly crucial for that to actually happen in real life. Separately, closed-source lengthens timelines to AGI, which is good.
@peterwildeford
Peter Wildeford 🇺🇸🇬🇧🇫🇷
2 years
My hot take: closed source is good actually. AI technology can be very harmful / dangerous and it is good that AI technology does not proliferate to a ton of different actors.
13
2
92
10
5
108
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex Or at worst, will just cause governments to join the race and try to build the world-destroying tech too. What changed is just that the technical research didn't pan out. Lots of great researchers have looked at this problem at this point, and very little progress has been made.
4
0
108
@robbensinger
Rob Bensinger ⏹️
3 years
The real way to achieve high US vaccination rates would have been for Trump to come out in favor of J&J way too early and the press to lampoon him for it, so the Right embraces it; while meanwhile QAnon spreads the idea that Pfizer is evil, so the Left embraces it.
3
6
109
@robbensinger
Rob Bensinger ⏹️
1 year
@Hello_World @ESYudkowsky ... Wot? Humans didn't land rockets on the moon via blind trial-and-error, then start doing theory afterwards to figure out how we'd succeeded. We built theory first, and designed rockets deliberately based on beliefs about how all the parts worked and interacted.
9
5
106
@robbensinger
Rob Bensinger ⏹️
5 months
17 months post-FTX, and EA still hasn't done any kind of fact-finding investigation or postmortem on what happened with SBF, what mistakes were made, and how we could do better next time. There was a narrow investigation into legal risk to Effective Ventures last year, and
Tweet media one
20
10
109
@robbensinger
Rob Bensinger ⏹️
3 years
Something that's feeling extra salient to me right now: Every side in every (sufficiently large) conflict has people out there supporting it with terrible arguments. And you'll often see those first, even if the conclusion is correct.
5
9
107
@robbensinger
Rob Bensinger ⏹️
1 year
Tweet media one
2
4
107
@robbensinger
Rob Bensinger ⏹️
2 years
And the distractions take forms like "but surely there has to be more to it than that!", "surely I'm not smart enough to puzzle this through myself", etc. -- noise and tangents you'd skip over if you just casually ran into the question with no context and felt curious about it.
3
3
104
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex They can't fund alignment research, because they don't know what alignment research is useful (and because some alignment research is dangerous, and they don't know what).
2
0
105
@robbensinger
Rob Bensinger ⏹️
2 years
I think all of these things at once:
1. Compared to the rest of the world, effective altruism is absolutely goddamned amazing. It's remarkable and disturbing how rare the basic EA combination of traits is, and it suggests EA is something precious, to be protected and grown.
1
5
106
@robbensinger
Rob Bensinger ⏹️
9 months
@ylecun "If it were true that raw intelligence was sufficient for a human to want to dominate others and succeed at it, then Albert Einstein, Richard Feynman, Leonard Euler, Niels Abel, Kurt Gödel and other scientists would have been both rich and powerful, and they were neither." Top
3
12
105
@robbensinger
Rob Bensinger ⏹️
10 months
@astupple @slatestarcodex Meanwhile, capabilities have made enormous strides. On the alignment/safety side, we look generations away from being ready; on the capabilities side, we may only be a handful of years away from smarter-than-human AI.
2
1
105
@robbensinger
Rob Bensinger ⏹️
5 months
Jesus christ.
Tweet media one
2
6
106
@robbensinger
Rob Bensinger ⏹️
2 years
Hot take: the real exploitation is refusing to pay poor people money because you want to look morally superior and above-the-fray. Literally taking money out of the hands of the global poor in order to burnish your reputation!
@CineraVerinia_2
𝕮𝖎𝖓𝖊𝖗𝖆 (is an aspiring alignment theorist)
2 years
As a full time job (40 hours a week, 4 weeks a month), this pays NGN 240K at current rates; my monthly salary as an upper junior software developer was ~ NGN 200K, entry level was NGN 90K. No "racism", "exploitation" or "sweatshop" here; this is probably a steal for the locals.
Tweet media one
9
11
208
6
5
103
@robbensinger
Rob Bensinger ⏹️
2 years
From your perspective, are there any AI x-risk arguments that MIRI has seemed weirdly impervious to? Name arguments where you'd have expected a fully sane version of MIRI to hear it and go "oh shit, that's a big update", but real-MIRI seemed to do something else instead.
42
3
105
@robbensinger
Rob Bensinger ⏹️
2 years
Tweet media one
6
12
103
@robbensinger
Rob Bensinger ⏹️
2 years
............... What???? This literally ever happens?? Huh? What do you mean by "conscious" here??
@ElodesNL
Still Elodes
2 years
@Malcolm_Ocean I remember it as being a fairly clear single moment, around when I was three or four years old; suddenly one day I was conscious and I haven't stopped since. I also had a few memories already, which was interesting; they'd been saved for me while I still lacked awareness.
1
0
31
35
2
102
@robbensinger
Rob Bensinger ⏹️
1 year
I'd guess this is directionally correct: more insights are required to get to AGI, than to get from AGI to superintelligence.
@sama
Sam Altman
1 year
building agi is a scientific problem
building superintelligence is an engineering problem
463
520
5K
3
2
103
@robbensinger
Rob Bensinger ⏹️
5 months
I wonder what the annual death toll is from Anglophone scientists preferring words like "murine" over "mouse-ish" in order to sound fancy, thereby blocking lots of laypeople from understanding or remembering things like medical terminology?
20
9
103
@robbensinger
Rob Bensinger ⏹️
2 years
"Tweets I would have been extremely concerned to read with no context if you sent them back in time to 2017-me"
@hlntnr
Helen Toner
2 years
If you spend much time on AI twitter, you might have seen this tentacle monster hanging around. But what is it, and what does it have to do with ChatGPT? It's kind of a long story. But it's worth it! It even ends with cake 🍰 THREAD:
Tweet media one
41
586
3K
4
7
101