🚨I’m in the New York Times!!🚨
AI is weird. Many of the people who pioneered the tech, along with the leaders of all the top AI companies, say that it could threaten human extinction. In spite of this, it’s barely regulated in the US.
Whistleblower protections typically 🧵
Seems like more people should be talking about how a libertarian charter city startup funded by Sam Altman, Marc Andreessen, and Peter Thiel is trying to bankrupt Honduras.
Próspera is suing Honduras to the tune of $11B (GDP is $32B) and is expected to win, per the NYT 🧵
I finally got a chance to tell a story that I’ve been keeping to myself for 6+ years. My first full-time job was as a consultant at McKinsey. At the time, it seemed like a dream job—a way to work with brilliant people, learn a lot, and maybe even improve things from the inside 🧵
This article is full of bombshells. Excellent reporting by
@dseetharaman
.
The biggest one: OpenAI rushed testing of GPT-4o (already reported), released the model, and then determined the model was too risky to release! I had a scenario like this in a forthcoming...
Basically, the libertarian charter city startup Próspera made a deal with a corrupt, oppressive post-coup govt in Honduras to get special economic status. This status was the result of court-packing and is wildly unpopular. A democratic govt is trying to undo the deal…
So Silicon Valley billionaires are backing a project that is trying to bankrupt a poor country for reneging on a deal struck with people who have been indicted on corruption, drug trafficking, and weapons charges. These same billionaires want to build superhuman AI ASAP...
In response, Próspera is suing the govt for ⅔ of its annual state budget. An op-ed in Foreign Policy states that the suit’s success “would simply render the country bankrupt.” ...
The longer story appears to be (from the Foreign Policy op-ed):
2009: military coup results in a corrupt and oppressive post-coup govt
2011: This govt decrees special “employment and economic development zones,” called ZEDEs ...
This is what the new president had to say about the special economic zones: “Every millimeter of our homeland that was usurped in the name of the sacrosanct freedom of the market, ZEDEs, and other regimes of privilege was irrigated with the blood of our native peoples.” ...
and are vigorously resisting regulation of such technology. If you'd like to see how they'd govern the world with a superintelligent AI, it might be instructive to see how they act now.
Próspera is incorporated in Delaware and has received support from the US ambassador to Honduras and the State Dept, despite Biden’s stated opposition to these kinds of investor-state arbitrations…
Nov 2021: Center-left govt led by Honduras’ first female president Xiomara Castro takes power
April 2022: new govt votes unanimously to repeal ZEDE law…
Próspera is funded by Pronomos Capital, which is advised, among others, by Balaji S. Srinivasan, a former partner at Andreessen Horowitz, who wants to partner with the police to take over San Francisco (some people might call this impulse fascistic).
...
2012: Honduras’ Constitutional Court finds decree unlawful so Honduran Congress swaps out judges for pro-ZEDE judges
2013: new court rules in favor of ZEDEs
2017: Próspera ZEDE granted official status…
Dec 2022: “Próspera announced that it was seeking arbitration at the International Centre for Settlement of Investment Disputes (ICSID) for a sum of nearly $10.8 billion.” (Image is from NYT Mag article: ) ...
My good friend Ian MacDougall had a fantastic story on Próspera w/ Isabelle Simpson in Rest of the World a few years back. The roots of this story can be found there. ...
I came into McKinsey believing in a certain “technocratic utopianism” that animates the firm. I left McKinsey radicalized against capitalism and the amorality of profit-seeking at its center.
The core irony...
OpenAI whistleblower William Saunders is testifying before a Senate subcommittee today (so are Helen Toner and Margaret Mitchell). His written testimony is online now. Here are the most important parts 🧵
My first ever cover story is out in
@thenation
and does something unprecedented: to my knowledge, no former McKinsey employee has ever publicly discussed project specifics with real client names. The firm is intensely secretive, above and beyond competitor consultancies. McKinsey...
will say this is to protect client interests, but it also serves to protect McKinsey from scrutiny and accountability.
I found myself working directly for two of McKinsey’s most controversial clients: Rikers Island and ICE. At Rikers, we were meant to help reduce violence...
I’ve spent years reckoning with my role in all this and trying to figure out what I can do to hold McKinsey accountable. I was also terrified of publicly attacking such a powerful org. I’m lucky enough now to be doing what I love and feel secure enough to do what I think is right...
what would have stopped us from working for the Nazis, he muttered something about McKinsey being a values-based organization. The thing is, I couldn’t point to any violation of McKinsey’s values as they were stated...
My desire to work for mission-driven orgs led me to the front lines of the Trump administration’s agenda. What started as a culture survey of ICE’s staff transformed into an all-hands-on-deck effort to help ICE comply with Trump’s executive orders to target all undocumented people...
that makes McKinsey so resilient is that, no matter what awful thing it does, the name still burnishes your resume. Even in my case, my anonymous Current Affairs essay about McKinsey launched my journalism career...
for deportation and triple the number of deportation officers. I was tasked with modeling out how to meet the new hiring targets.
During a team-wide meeting on the ethics of our work, the senior partner said that “The firm does execution, not policy.” When I asked him...
Newsom vetoed SB 1047. The reasoning given here is so transparently weak. Like even if you bought that smaller models were JUST as likely to cause problems, why wouldn't it be better to regulate some models while you work on more comprehensive regulations? This justification...
Back in 2018, I anonymously wrote my first magazine story for
@curaffairs
about my former employer, McKinsey. I analyzed how McKinsey accelerates and exacerbates basically every negative trend of capitalism, weaving in some of my personal experience along the way. 🧵
This is Fei-Fei Li and her self-described “long-term friend and colleague” Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz (a16z).[1]
Li is often called the “godmother” of AI.[2] ...
In my debut for
@voxdotcom
, I discuss why my year of pescetarianism was a moral mistake. The evidence for fish sentience is way stronger than I previously believed: some fish have passed the self-recognition test, team up w/ other species to hunt...
Anthropic's letter may be a critical factor in whether CA AI safety bill SB 1047 lives or dies.
The existence of an AI company at the frontier saying that the bill actually won't be a disaster really undermines the 'sky is falling' attitude taken by many opponents. 🧵
though it seems like a clear violation of the spirit of the voluntary commitments at least.
Other new stuff: SamA and other execs begged Ilya to come back; he seemed like he would; then the execs rescinded the offer. These details aren't super surprising, but it's by far the...
@slatestarcodex
Hi Scott! I actually read your entire post on this back when it came out bc I found it fascinating. I don't have the time rn to go as deeply into this as I'd like, but a few thoughts:
- What about the precedent set by a successful suit? Find a corrupt govt, get them to agree to
piece, as a hypothetical relayed to me by someone who used to work at OpenAI, but then it turns out it actually already happened, according to this reporting. Bc all of this is governed by voluntary commitments, OpenAI didn't violate any law...
So I remember
@a16z
relaying this howler to the UK House of Lords, but I did not realize that
@ylecun
,
@ID_AA_Carmack
, and other technical folks signed their name to the same claim in a letter to Biden!
I really don't know how you could possibly take them seriously after this.
110+ employees and alums of top-5 AI companies just published an open letter supporting SB 1047, aptly called the "world's most controversial AI bill." 3-dozen+ of these are current employees of companies opposing the bill.
Check out my coverage of it in the
@sfstandard
🧵
based on his appearance on Dwarkesh Podcast.
Next is some straight business gossip: Greg Brockman reportedly annoyed people so much that Sam asked him to leave. Brockman has been the exec most steadfastly loyal to Sam...
If you're considering working at McKinsey, watch this (to the end)!
Proud to have played a small part in this great video about my old employer's impact on the working class. It's surprisingly fun, and I even learned some new things!
NEW: There’s a secret, parasitic consulting firm at the heart of nearly every industry in America.
They’re responsible for the worst corporate “best practices” — lay-offs, safety cuts, price-gouging.
We uncovered how McKinsey is waging a secret war on the working class.
most we've heard about the situation. I guess it's a bit surprising they initially tried to bring him back given that he publicly lost confidence in the CEO.
We also got some new details about John Schulman's dissatisfaction. Schulman seems very serious about AI safety...
Last bit: the new funding deals have not closed. This could explain the timing of the departures of Mira Murati and the other two execs. Abruptly losing your CTO and two other senior people could spook investors. Idk what is motivating these choices, but if Murati wanted to...
Longtermism has gotten a lot of flak from the Left, but deep down, socialism and longtermism are not incompatible. My latest in
@jacobin
makes the case for why leftists should do more to prioritize the many generations yet to come.🧵
I recently appeared on
@CBSSunday
to talk about my time at McKinsey and the ways in which the elite consulting firm does harm in the world. I was also featured in the excellent new book When McKinsey Comes to Town by
@waltbogdanich
and
@PekingMike
. 🧵
This is obviously speculative, but it explains the situation pretty well IMO. She and the other execs could have stuck it out for 2 more weeks so the deals closed, or told Sam privately and announced after the deals closed. The fact that they didn't is noteworthy.
NYT silently edited its article claiming Bernie's math doesn't add up, removing a possible explanation for the gap. This explanation is the one offered by Bernie on his site, which the reporter linked.
hurt Sam, leaving without warning right now is a pretty effective way of doing it. Murati did push for Sam's reinstatement after the coup, but she also raised concerns about Sam's behavior to the board: ...
@slatestarcodex
One more thought: I think if you're trying to do novel, controversial deals that undermine traditional notions of sovereignty with sketchy regimes, and the deal backfires, that seems like a risk you were consciously running!
My experience at McKinsey radicalized me. I realized that the adults in the room were perfectly happy to enable atrocities. It’s easy to hide behind spreadsheets and slide decks, distancing yourself from the consequences of your work. I’m grateful for the reporting of...
Saunders, like many others at the top AI companies, thinks artificial general intelligence (AGI) could come in “as little as three years.” He cites OpenAI's new o1 model, which has surpassed human experts in some challenging technical benchmarks for the first time...
Damn, mic drop moment from a departing OpenAI employee. “How do you expect to be trusted with [the responsibility to develop AGI safely] when you failed at the much more basic task” of not threatening “to screw over departing employees"?
What does hundreds of thousands of dollars of corporate campaign contributions buy you? Members of Congress parroting industry talking points under Congressional letterhead, at least. There’s a serious problem with almost every single part of this letter 🧵
Today, the New Yorker published a profile of
@willmacaskill
, one of the founders of effective altruism. I first heard about EA through a podcast interview with Will in 2017. The ideas felt like an extension of intuitions I felt as long as I could remember...
A number of people have claimed that Peter Thiel is an adherent or supporter of longtermism or effective altruism.
This is false. 🧵
Thiel spoke at the Effective Altruism Summit in 2013+14, but hasn’t been involved in any capacity since[1] (1/6)
OpenAI has “repeatedly prioritized deployment over rigor. I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”...
Written testimony:
Live video:
Leopold Aschenbrenner told his side of the story for why OpenAI fired him. If accurate, this is wild! Overall, it sounds like he was targeted for being a squeaky wheel (not signing the SamA letter, raising security issues w/ the board, talking about AGI being a govt project...
“When I was at OpenAI, there were long periods of time where there were vulnerabilities that would have allowed me or hundreds of other engineers at the company to bypass access controls and steal the company’s most advanced AI systems including GPT-4.”
This is consistent...
@RandolphCarterZ
It seems like there's a real tension between sovereignty and creating the incentive for foreign investment. idk if I have a wholly coherent view on this, but I think suing (for a far smaller amount) might be appropriate.
This is all coming out in the final days before
@GavinNewsom
has to decide on SB 1047, the AI safety bill that has riven Silicon Valley. His comments indicated he was leaning toward a veto, but a lot has happened since then (big Hollywood stars coming out in force for...
with claims made by Leopold Aschenbrenner, who alleges OpenAI found a pretense to fire him after he complained about cybersecurity vulnerabilities to the board:
OpenAI's latest board member ran the NSA and oversaw a massive increase in offensive cyberattacks.[1,2]
Bc of this, he's probably one of the most knowledgeable people in the world on cyberdefense, but this also represents a major step toward OpenAI integrating itself into...
Leopold Aschenbrenner’s SITUATIONAL AWARENESS predicts we are on course for Artificial General Intelligence (AGI) by 2027, followed by superintelligence shortly thereafter, posing transformative opportunities and risks.
This is an excellent and important read:
SCOOP: The CA Chamber of Commerce used a wildly biased push poll to try to get legislators to vote against SB 1047.
It didn't work.
The Assembly just passed the first-of-its-kind AI safety bill, which is now one step away from Newsom's desk. A lot more unreported...
By far the most interesting thing about today’s NYC
@PauseAI
protest outside of
@Microsoft
was the reactions from passersby. Most people ignored it (as expected), but all the engagement I saw was supportive. The most common objection was that AI doom was inevitable. One guy...
OpenAI is increasingly behaving like a normal company (e.g. reinstating the hyper-successful founder/CEO, not publishing an embarrassing report), but what they're doing is far from normal. Their top employees think that the work they're doing could literally drive humanity
Watching a bunch of e/accs gloat about Newsom's veto of sb 1047 as if their shitposting had anything to do with it...
It's hard to think of a more politically incompetent and intrinsically unpopular group of people.
This happened bc of Big Tech and Nancy Pelosi.
This is far from a consensus view and should not be stated so confidently.
The historian Philip Zelikow, who led the 9/11 Commission, said, “I think had the United States not built an atomic bomb during the Second World War, it's actually not clear to me when or possibly...
“No one knows how to ensure that AGI systems will be safe and controlled.” Not news to anyone following this field, but an important point to convey to the Senate...
In 2023, Pelosi's husband owned between $16 million and $80 million in stocks and options in Google, Amazon, Microsoft, and Nvidia.
Her office told me, "Speaker Pelosi does not own any stocks, and she has no prior knowledge or subsequent involvement in any transactions."
AI springs from California. Thank you,
@CAgovernor
Newsom, for recognizing the opportunity and responsibility we all share to enable small entrepreneurs and academia – not big tech – to dominate.
🚨EXCLUSIVE: The National Organization for Women (NOW), SAG-AFTRA, and Fund Her have each sent letters to
@GavinNewsom
urging him to sign CA AI safety bill, SB 1047, into law.🚨
I obtained the letters, which are published here for the first time.🧵
Honored to land the cover story for
@jacobin
's AI issue! Whether you're new to the topic or work in the field, I think you'll get something out of it. I spent 5 months digging into the AI existential risk debates and the economic forces driving AI development 🧵
The story of SB 1047 is long and complicated but the gist is very simple. By and large, the AI industry does not want to be regulated. It especially doesn’t want to be liable for harms caused by its AI models. They can't say this explicitly, so they make different args instead...
Are effective altruists and leftists natural enemies? Are these two approaches to improving the world irreconcilable? I dive deep on these questions in a new podcast ep with
@FreshMangoLassi
, a lefty career advisor at the EA org
@80000Hours
...
I used to be a big
@sapinker
fan. I've read 5 of his books and reviewed 1. I now believe he is a biased and ideological writer and thinker, but one who doesn't recognize it. This takedown from
@NathanJRobinson
and
@curaffairs
is fantastic.
When OpenAI published their letter opposing SB 1047, the CA AI safety bill, they didn't mention the bill's whistleblower protections at all, which is noteworthy given their... troubled history on the matter...
OpenAI whistleblower Daniel Kokotajlo just published a response...
@shiringhaffary
See attached letter. William and I are disappointed but not surprised by OpenAI trying to kill SB 1047. (I support it, mostly for its whistleblower protections)
Leopold's series is clearly argued, but has a number of very big, very load-bearing assumptions. Having just dug into how much data contamination affects LLM perf on various tests, especially things like high school tests, this claim stuck out to me as too strong...
I went on
@Glovely1
's podcast and we had a 🔥 conversation about Bernie and various aspects of the presidential race (some controversial, if controversy appeals to you). If you need something to listen to, you could do way worse than this!
There are basically no arguments in this statement against SB 1047 from Pelosi, just appeals to authorities, who themselves have been parroting industry talking points and disinformation, which I and others have extensively documented...
This is wildly off base. There are plenty of good criticisms of EA, but this isn’t one of them. EAs have cared about AI risk for a long time because there are really compelling arguments for worrying about it. AI risk is in the news because AI capabilities have been...
Reeling from the reputational damage SBF caused to EA, this became something of an existential risk to the EA movement itself: nukes are too obvious, mosquito nets are too small, putting AI x-risk on the map was the path to show the world the enormous value EA offers society.
6.5 years ago, I pledged to give away 10%+ of my lifetime income to effective charities and am so glad that I did. This is the best write-up on why you should consider doing the same. There is an enormous amount of suffering in the world, and so much of it is preventable.