I
The left, broadly speaking, does not take AI particularly seriously. It’s not hard to see why: talk of “theory” notwithstanding, popular opinion down here in the ruins is driven by the same associationist impulses at work everywhere else. AI safety = cryptocurrency enthusiasts = libertarian bloggers = would-be technocrats paying homage on Sand Hill Road = bad, and that’s that.
Obviously this sort of reasoning is indefensible on its own merits. That doesn’t mean much: most of us don’t have it in us to always do the defensible thing. If you’ve got a great deal on tickets because your cousin’s daughter’s husband’s wife knows a guy, if you’re handing out literature in a suit and tie, if you just need a minute of my time - I’m sorry, but I’m not interested. Go away. And if you think of tech executives and venture capitalists as just two more species of scam artist, then I don’t blame you for tuning them out.
If, on the other hand, you think that what they’re full of is not so much shit as bile, if you think powerful people always get what they want and not just what they ask for, if you think robber-barons might as well literally be both, then sure, maybe a little oppositional defiance is in order. Maybe you should doubt what they say just because they say it. Maybe the focus on how these systems are nothing like Skynet - the fixation on ill-will, on bad character as prior to bad behavior - makes perfect sense when you think that actually-existing evil kind of is.
And if for whatever reason you think that what it’s all about, what this has all been building towards, the final takeaway from these last eight thousand years and change, is whether the people with money and power will also in the end be cool - I can’t help you.
II
Reducing politics to individual moral character, to nice people doing nice things because it feels nice to do them and bad-willed bad people doing bad, to simple virtues and storybook evil, is natural enough. For the first five hundred thousand years or so, that was pretty much correct. And assuming that everything will always be as it was (though really only ever how it seems to always have been) is understandable as well. Most things are, most of the time. If it were so easy to get people to realize that no, really, there are larger things than us at work and everything is different now, it’d be just a little bit less true.
Aestheticizing politics is less defensible, and leads absolutely nowhere good - though the “e/acc” entropy cult may well be the second-worst way it’s ever been expressed - but is also difficult to attack except on equally aesthetic grounds. I won’t make any further attempts here.
But “scientific” socialists have no excuse for any of this.
You know this is not the world we grew up in, to whatever extent you want to call us grown; you know these are not conditions we chose. The city changed everything; the corporation changed everything again. And you know that one day, before history ends and we can all settle in for the long good time, everything will change again. Are you really sure there’s just one more mode to go?
And you know that there’s nothing Musk or Bezos or Buffett or Brin can do to stop it. You know perfectly well that this train is moving under its own power now, that the first-class compartment isn’t the engine or even the coal room, that though it still reads “EMERGENCY BRAKE” on the wall the actual machinery was tossed out to save weight long ago. And you know that the tossing is far from done, whether the passengers like it or not. One day, you hope, we’ll shed enough weight that we can have an engine in every car, and do away with ticket class altogether. But are you really sure you know where that process stops?
You know that actual machinery, the literal factory kind, is not just an instrument, that through industrial engineering “the accumulation of knowledge and of skill, of the general productive forces of the social brain, is … absorbed into capital”1, that in an “automatic system of machinery … workers themselves are cast merely as … conscious linkages”. Are you sure you know just how big such a system can get? Just how many conscious linkages it really needs?
And you know that all of this boils down, more or less, to one simple principle: Capital wants to accumulate, and will do what it takes to make that happen. It’s nothing more than numbers in ledgers, and nonetheless very, very real. It’s built of human hearts and hands, and more powerful than all of us combined. It’s a blind idiot god, and the whole history of the last two hundred years, from the workhouse to the world market, is just it waking up. Do you really think putting the numbers in RAM makes them fake? Do you think replacing managers with machines will make corporations more human? Do you think teaching Capital to read and write will put it back to sleep?
III
All existing systems are optimized for their ability to continue existing. Exceptions do not go on existing long. In a competitive environment (and all the big ones are) that means accumulating resources, and stopping everyone else from accumulating yours. Capitalism was able to carry out the near-total conquest of the premodern world because trade and manufacturing can generate wealth at rates that dwarf the wildest dreams of any would-be conqueror - which allowed capitalist powers to beat up the actual kings and princes, and turn all their stuff into feedstock too.
Any realistic hope of socialism - not just to survive in some small corner of the world, guarding too little too well to be worth taking, but actually to replace capitalism as humanity’s dominant form of social organization - rests on the fact that generating wealth and accumulating capital are not quite the same thing; that we might be able to beat Moloch at its own game by playing it poorly in just the right way. For the capitalist firm, maximizing profit is only a means, and maximizing social value is beside the point entirely: the real end of an idealized firm is simply the collective interest of its shareholders2. The real end of a real firm is whatever its managers are incentivized to do, and this can be just about anything.
But just how far these come apart, and what sort of entities squeeze in between, is far from fixed. Transaction costs go up far enough, and you get massive midcentury conglomerates; push them all the way down, and you get a diffuse gas of independent contractors. Strong enough economies of scale create world-bestriding megacorps; severe enough diseconomies of scale get you white-shoe professionals hawking fine artisanal goods. Cheap capital lets a thousand startups bloom; expensive capital gets you ossified dinosaurs. And these different sorts of firm have different capabilities and different political consequences. Even if you still genuinely believe in the inevitable end of production for exchange, getting there by nationalizing Amazcrosoftmart-Aramco-Hathaway is not the same as getting there by general strike, or by riding the momentum of some “temporary” economic mobilization, or by crushing Wall Street beneath the weight of your sovereign wealth fund’s superior risk management - to say nothing of getting there by picking up the pieces after those maniacs blow it all to hell. You should care exactly which face of capitalism you’re confronting, if only to adjust your tactics appropriately.
So all the talk of how AI “can’t be truly creative”, how it’s “just” going to be a “new McKinsey”, how artificial intelligence was actually invented in 1551 and we’re just making it a little bit smarter is missing the point. Markets turn quantitative change in the small into qualitative change at scale all the time. AI, even if all it amounts to is a way to drive the price of semiskilled white-collar labor down to the price of power, threatens to produce a new sort of entity: the well-managed large firm.
I don’t know exactly what this is going to look like - imagine, perhaps, a Sears-style internal market, except it has no principal-agent problems and so actually works; imagine every meeting that could have been an email actually was; imagine a technostructure capable of forming intentions - but it will be different, and it may well win. We face the very real possibility of a world in which firm-level management is as thoroughly delegated to nonhuman entities as state-level economic planning is today; where “shareholder primacy” is, like “popular sovereignty”, largely unenforceable. What comes next is anybody’s guess.
1. From Marx’s Fragment on Machines
2. Value and expected profit coincide only under ideal conditions. In general, profit is a random variable, and can only be maximized with respect to a particular measure.
The dismemberment of General Electric, for example, was pure redistribution: not creation, not investment, not even really transport, except of entries from one ledger to another. And not even a redistribution of wealth, except perhaps to a few executives who found themselves with shinier titles: if you owned GE before, you owned GE Vernova and GE Aerospace afterwards. What got redistributed was risk: from shareholders in the former conglomerate, to those willing to hold shares only in its shakier half. That doesn’t necessarily mean that no genuine social value was created - on the contrary, the sooner the cult of Jack Welch is disposed of the better - but the connection is a loose and contingent one.
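The measure-dependence in that footnote can be made concrete with a toy sketch. The payoffs and probabilities below are invented for illustration, not taken from the essay: two actions with fixed state-contingent profits, where which one “maximizes profit” flips depending on which probability measure you take expectations under.

```python
# Toy illustration (invented numbers): expected profit depends on the measure.
# Payoffs are fixed; the probability measure over states is not.
payoffs = {
    "safe":  {"boom": 2.0,  "bust": 2.0},   # same profit in either state
    "risky": {"boom": 10.0, "bust": -5.0},  # profit depends on the state
}

def expected_profit(action, measure):
    """Expectation of an action's profit under a probability measure over states."""
    return sum(p * payoffs[action][state] for state, p in measure.items())

def best_under(measure):
    """The profit-maximizing action - relative to this measure."""
    return max(payoffs, key=lambda a: expected_profit(a, measure))

P = {"boom": 0.8, "bust": 0.2}  # an optimistic measure: risky wins (E = 7.0 vs 2.0)
Q = {"boom": 0.3, "bust": 0.7}  # a pessimistic measure: safe wins (E = 2.0 vs -0.5)

print(best_under(P))  # risky
print(best_under(Q))  # safe
```

The same firm, the same books, the same actions - yet the “maximizing” choice is different under P and Q. Nothing in the ledger picks the measure for you.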