I was already in the process of writing a response to this Big Technology article by Alex Kantrowitz when news of Sam Altman's ouster from OpenAI broke Friday, and it took me until today to reorganize myself enough to respond. That was enough time for Meta to abandon their AI safety project and for Sam Altman to be spared the ignominy of having to file for unemployment by joining Microsoft.

The stated reason for Altman's departure, that he was not "consistently candid in his communications" with the board (whatever the hell that means), is not the essential point. What is critically important is that the helm of one of the largest (if not THE largest) players in the AI space is suddenly a chaotic mess. There have already been mass resignations in the wake of Altman's departure, most notably co-founder Greg Brockman, whom Microsoft also snapped up after a whole weekend of considering his options. Any sudden change at a company worth more than the GDP of more than half the world's nations warrants at least a raised eyebrow. Add to that the care and feeding of the most widely used generative AI ... and you might want to raise both eyebrows.

We'll wait and see what all the fuss is about. Still, I'm willing to put my money down on the board being concerned with Altman's messianic view of what OpenAI was engaged in, as revealed in a Vanity Fair article that came out November 15th, but that's just a personal theory.

[Update: As of the day this article was published, Sam Altman was back at OpenAI after nearly everyone at the company threatened to turn in their resignations. It appears my theory was at least partially correct: Sam Altman is capital-first with AI and was clumsily ousted by decelerationists on the board. They have since all been shown the door. I therefore anticipate OpenAI will push the accelerator to the floor.]

Meta quietly dismantling their 'Responsible AI' program is just par for the current accelerationist course. With billions flooding the zone for anything and everything AI, every dollar Meta doesn't spend directly on training and expanding LLaMa (or proprietary AI built on LLaMa's backbone) is, of course, 'wasted.' Right now, AI is all-gas-no-brake, and that seems unlikely to change anytime soon as the slip-n-slide of capitalism sends us to whatever bottom Silicon Valley can conjure up.


Which brings me to the article that first spurred me into writing, "AI Doomers Are Finally Getting Some Long Overdue Blowback":

Worrying about AI safety isn’t wrongheaded, but these Doomers’ path to prominence has insiders raising eyebrows. They may have come to their conclusions in good faith, but companies with plenty to gain by amplifying Doomer worries have been instrumental in elevating them. Leaders from OpenAI, Google DeepMind, and Anthropic, for instance, signed a statement putting AI extinction risk on the same plane as nuclear war and pandemics. Perhaps they’re not consciously attempting to block competition, but they can’t be that upset it might be a byproduct.

This paragraph manages to touch just about every argument that accelerationists leverage against those of us who think the development of generative AI should be treated as a marathon rather than a sprint (or a quarter mile of asphalt):

  • "worrying about AI safety isn't wrongheaded" - but it is, at best, secondary, tertiary, or quaternary. We all know which priority is Number One.
  • "They may have come to their conclusions in good faith" - of course, the sobriquet 'Doomers' lets you know how seriously they take those good-faith conclusions.
  • "companies with plenty to gain [...] have been instrumental in elevating them" - and here we have the argument that is the crux of the article, and the crux of this argument: No matter how well-intentioned, any regulation of AI is just handing the future to OpenAI, Meta, and Google.
For accelerationists, this is the worst possible outcome.

Let me be exceptionally clear: I am a skeptic of AI, and a decelerationist only at my most vehement. I don't harbor dark nightmares of Skynet, Gray Goo, or Colossus when I consider the dangers of the future of AI. I hope most AI decels share the same rational level of concern. There is a difference between "we're all going to die tomorrow if Artificial General Intelligence comes online" and "generative and general AI have social and economic impacts that we, as a capitalist society, are both ill-suited and unlikely to address."

For all of the hue and cry about the dangers of regulatory capture brought on by decelerationist concerns, we already know that regulatory capture is highly unlikely. Why? Because we already have a perfect example of a technology with significant social and economic ramifications: social media.

At the time of publication, we're 19 years into a giant social and psychological experiment run by a handful of companies shaping society and possibly human cognition in ways we have yet to understand, let alone get substantial control over. Some of those companies are owned by terrible people with agendas I feel safe in labeling asocial at best, if not outright hostile to the project of liberal democracy. We've seen mountains of evidence, even without direct experimentation on the part of these companies (which we know they have ALSO done), that their industry is ripe for regulation, if not investigation - and yet we have not put a single meaningful piece of legislation in place in nearly a decade and a half.

This lack of regulation hasn't resulted in a proliferation of social media platforms and robust competition - in fact, given the state of play, you could have sworn that the wrath of Regulatory Capture had been invoked in the social media sphere. The only growth area in social media networks is right-wing grifters peeling off silos of folks who can't live another minute without using the N-word in public again. I'd plant a stake at 2011 as the last major launch of a 'new' social media form factor that gave us any meaningful innovation in the space (live streaming via Twitch, since acquired by Amazon). So, despite zero regulation by any governmental body, we've seen nothing but "Facebook for/with <INSERT NICHE HERE>" or "Twitter except <INSERT FEATURE HERE>" for the last decade at least. Don't tell me that regulation will stifle innovation and lock in some oligarchical high-rollers. Silicon Valley has managed to lock in a handful of social media giants all by itself, thank you very much. Even the latest contender, TikTok, is just a social media giant from another country that's managed to get a foothold here. (TikTok is also just Longform Vine. Rest in Power, king.)


Social media also gives us a template by which we can gauge the potential downsides of AI as a technology. You might be able to find "thought leaders" who will still lean into the idea that social media has been a net positive for society, but even its most fervent boosters would have to admit that the net positive is marginal. Since 2016, we've known that social media (and the tools and analytics that allow micro-targeting of online populations) is, in the wrong hands, capable of shaping events to benefit the self-interested few instead of the whole. The pandemic turned the volume on misinformation channels up to 11, and we've not seen it abate since.

Now consider the following points:

  1. The coterie of companies that have served as conduits of misinformation, knowingly or unknowingly, are the same companies currently invested in pursuing generative and general AI.
  2. We have not seen any meaningful attempt at correcting the ample problems that already exist with a technology that will be 20 years old in 2024, and in some cases we've seen outright resistance to harm reduction on the part of social media companies (I'm looking at you, Elon).
  3. Generative AI offers a far more robust suite of tools to the self-interested few than social media ever did. These are tools capable of not just spreading misinformation but generating it wholesale, in quantities that defy the ability of any concerned party to mitigate the damage caused.

The accelerationists propose that the same actors who have profited off the most toxic corners of social media are those we should put our trust in going forward. I find that notion ludicrous.

I would hazard that the probability of a major news story involving content produced with generative AI breaking before the 2024 US Presidential election is greater than 50%, and that probability will only climb as the election approaches. We will see a wholly synthetic event cause a genuine shift in public opinion. The corporation that owns the model used to synthesize the hoax will disavow it or deflect blame, but the damage will be done.

You don't need a literal Skynet for AI to pose an existential risk to life. You don't need a bomb to go off when your target is standing in quicksand. All it takes is the hubris of believing that we, collectively, will always be one step ahead of the self-interested, well-armed, and ethically challenged few. The arms dealers, in this case, are profit-driven, unbeholden to the bulk of society, and every bit as ethically challenged as their customers.