Author's Note: A Tale of Two Futures
First in my Author's Note series: this week I discuss how our technological, economic, and social decisions work together, and how we can get to a better future.
I believe we’re all, by now, familiar with that one scene in The Matrix where Neo must decide between the red pill and the blue pill. The red pill will allow Neo to see the world as it really is, while the blue pill will let him continue living a comfortable, albeit artificial, life as a battery powering intelligent machines bent on destroying what remains of thinking, feeling humanity. In writing, this is a very explicit plot device: the inciting incident. An incident, in this case being pursued by AI agents, has brought Neo to a crossroads, and it very much must be his decision which direction to take. While we on the sidelines may be cheering him on to take one pill or the other, let me tell you a secret: it doesn’t matter which pill he takes. Had he taken the blue pill, and had it been important to the plot that Neo wake up, he would still have woken up…somehow. That’s the author’s magic: the decisions characters make are often illusory, whereas in the real world, we must make real choices that have real impacts on our lives.
This is the first installment of my Author’s Note series, where I discuss a future that we can get to, and the changes I believe we need to make to get there. This one is more general, but I’ll be getting into specifics in upcoming posts. Paid subscribers get this series first, but as usual for everything we post here, these posts will all open to everyone within a week, so you don’t have to be a paid subscriber (though it’d be helpful for us if you decide to become one!).
In fact, we’re collectively facing our own inciting incident: the rise of artificial intelligence.
Unfortunately for us, it’s nowhere near as clear which way to decide our own artificial intelligence dilemma. There is no sleeping chamber, no soupy mess we’re plugged into, nothing that will validate our decisions once we “wake up.” The choices we face are such that our individual decisions, by themselves, yield no obvious consequences. For example, my own decision whether or not to use AI will not keep a data center from being built. That’s what makes the AI takeover even more dangerous: there’s no single blatant warning sign. The evidence is trickling in, though, one AI-assisted suicide at a time. Even that’s not enough, and not going to move the needle fast enough, because we can still chalk it up to individual decisions if we tilt our heads and blur our collective eyes. We can brush away concerns about data center power consumption by championing renewable energy (which, trust me, isn’t going to die just because our oil oligarchs want it to). But, as with Big Oil, the thing we fail to understand collectively is that the ocean is starting to boil.
By that, I mean it took decades for the full impacts of climate change to be understood, and even so, we’re still trying to convince some people. Judging by their actions, or rather inactions, those in positions of power seem to have decided it’s a hoax. So it’s up to us to find a way out. The same is true for artificial intelligence, which brings me back to the title of this Author’s Note: a tale of two futures.
The first future splits the middle class and hardens the lines between rich and poor. The middle, where a lot of good-paying jobs exist today, is being hollowed out. Oh, there was plenty of hollowing out before AI, but AI means that jobs once considered safe investments in the future are going to disappear, too. There’s no need for anything besides private education in this future, because most of human society will live in the dregs. These are the new expendable poor, who exist only to be fed into the machinery that keeps the wheels turning so the rich can live lives of decadence. In this future, Epstein doesn’t matter, because the rich will do whatever they want to the impoverished with impunity. We live in a version of this future already, as the two-tiered justice system we suffer under makes plain. In this future, billionaires need bunkers and private security to quell unrest among the masses. In this future, autocrats play golf with other autocrats and make decisions that kill two hundred thousand people at a swipe, like the dismemberment of USAID. In this future, artificial intelligence hollows out the middle class further until nothing remains but a desiccated shell of sycophants, each hoping they’ve gained the favor of whatever autocrat they’re sucking up to. This seems to be the future that many of the broligarchs are gambling on.
But there’s another future. This one taps the power of AI and focuses it on tasks that humans genuinely struggle to do: designing advanced mining equipment, for instance, so that miners don’t have to suffer from black lung. AI can be used to build tools that advance human dignity, instead of taking the arts and writing away from us. It can be used to reform society and prop up the middle class, instead of stealing our stories, repackaging them, and selling them back to us so we can “escape” into a machine-dictated alternate reality. In this future, we acknowledge that common people are being robbed daily. Nobody does over a thousand times more work than anyone else, and nobody should be paid as though they do. We can write laws to regulate AI before it is turned against us as a monitoring tool. We can harness the power that human advancement, not individual wunderkinds, has produced, and use it for all. But that means thinking about economics differently.
This week’s focus is the economics of living and how we can do better as a society. It’s clear to all of us that the direction we’re currently heading is wrong. But what is the right one, and what can we do to fix it? That’s where this series begins.


