The Ethics of AI

I’ve been looking into Artificial Intelligence just a bit.

The correct modern term is Deep Learning, and it’s really just layered probabilistic estimation with adaptive feedback. Two things have made me personally more amenable to calling it Artificial Intelligence, or AI.

The first thing is that it has moved a lot further, and a lot faster, than I ever thought it would. Taking cues from nervous systems in nature, from flatworms up to and including the human brain, the Deep Learning people have developed some new ways of applying standard mathematics to problems that were formerly intractable — like face and continuous speech recognition — and have met with astonishing success. Every time you talk to Siri on your cell phone, you observe the result. The layering was the key: efforts in the previous century basically tried to solve the big problems in one go, and got nowhere. Now, they make little guesses, and use those guesses to make bigger guesses, just like living nervous systems do. The results are impressive.
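The "little guesses feeding bigger guesses" idea can be sketched in a few lines of code. This is a toy illustration, not any particular production system: each layer turns its input into a set of probability-like scores, and the next layer guesses from those scores rather than from the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the toy run is repeatable

def layer(x, w, b):
    """One layer: a weighted 'guess' squashed into a probability-like range."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # logistic activation

# A toy three-layer stack: raw input -> small guesses -> bigger guesses -> output.
x  = rng.normal(size=(1, 8))                           # e.g. 8 raw pixel intensities
h1 = layer(x,  rng.normal(size=(8, 4)), np.zeros(4))   # low-level "little guesses"
h2 = layer(h1, rng.normal(size=(4, 3)), np.zeros(3))   # mid-level guesses built on them
y  = layer(h2, rng.normal(size=(3, 1)), np.zeros(1))   # final estimate, a single score
```

The weights here are random; in real deep learning they are adjusted by the "adaptive feedback" mentioned above, so that the layered guesses improve with experience.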

The second thing is that I’ve lost a great deal of respect for human intelligence in the past year. Average intelligence isn’t as hard a problem as I used to think it was.

My dark opinions aside, the simple fact is this: machines are now moving into areas of human labor that have long been considered inaccessible to machines, and are doing a reasonably competent job. There is no reason to believe they won’t get a whole lot better.

The displacement of labor by machines has a long history. It reached a bit of a crisis in the First Industrial Revolution, when steam power and automated looms for weaving threw a lot of skilled workers out of work all at once. However, in the paradox of “labor-saving devices” noted by David Fleming, industrialized society actually became significantly more complex and labor-intensive, because it was no longer sufficient to hire someone to sit down at a hand-made loom and start weaving: you now needed an entire infrastructure to support the manufacture, powering, and servicing of automated looms, which is actually a lot more work than before. While many skilled weavers were thrown out of work, even more skilled and unskilled work was created in maintaining the infrastructure the automated looms required.

Each subsequent Industrial Revolution has had this same dynamic: it displaces skilled workers, but complicates society significantly, increases the overall amount of work we need to do, and thus creates new opportunities for new kinds of workers, with more overall opportunities than losses.

It keeps a growing population’s hands perpetually busy, and makes the rich richer.

The AI revolution may be substantially different.

Think about the self-driving car. It sounds like a novelty item, and it is: that isn’t the real focus. The real focus is the self-driving truck.

I’m talking about the 18-wheel cargo trucks that ship everything from steel girders to broccoli, from one side of the country to the other and everywhere in-between. Think about it: self-driving trucks don’t get sleepy. They need maintenance, but no vacation time or sick leave. They can drive continuously, stopping only for fuel. They never show up to work late, or hung-over. They don’t feel pressured to get to their location because a wife or girlfriend is waiting for them. They don’t exceed the speed limit, they respond to hazardous road conditions by slowing down or pulling off the road, and they never have to worry about freezing to death in a blizzard. There is no health insurance and no benefits package. There is no payroll, no federal, state, or local income taxes to manage. There are no occupational safety concerns, no discrimination lawsuits, no sexual harassment complaints. Finally, if an unavoidable accident starts to develop, the truck can be designed to sacrifice itself to prevent loss of life.

More importantly to businesses, a self-driving truck is a capital asset that contributes to the wealth of the business owners, while a human driver is a recurring expense that diminishes the wealth of the owners. Trading out humans for machines has a direct and positive effect on profitability.

When this technology comes of age — and it will, and swiftly — it will put nearly every trucker in the country out of work within a few years. That’s 3.5 million jobs in the US, a bit over 2% of the total US workforce.

It doesn’t take 3.5 million people to manufacture and service automated truck fleets. The automated truck is going to kill more jobs than it creates.

It gets worse. An AI-based system can probably do a better job of servicing the fleet than humans could: it has a 24 x 365 attention span, optimized routes and contingency routes instantly available, and full electronic integration with parts suppliers. So all those infrastructure support jobs for the automated fleet, which will exist for a short time, will likely go away, too.

AI can also manage that entire shipping process better than people can. We can start to view the entire movement of stuff from point A to point B as a completely magical, optimized system that just keeps running, and only occasionally needs to call for help from very skilled people, who fix up the managers that fix up the repair systems, which fix up the trucks. Most of the time, it just runs.
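The route optimization such a shipping system depends on is, at its core, old and well-understood mathematics. Here is a minimal sketch using Dijkstra’s shortest-path algorithm over a handful of hypothetical depots with made-up road distances; a real fleet manager would run something like this continuously, at national scale, with live traffic and weather data.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: returns (total cost, route) from start to goal."""
    queue = [(0, start, [start])]  # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return float("inf"), []  # goal unreachable

# Hypothetical depots with assumed road distances (miles).
roads = {
    "Denver":        {"Omaha": 540, "Amarillo": 430},
    "Omaha":         {"Chicago": 470},
    "Amarillo":      {"Oklahoma City": 260},
    "Oklahoma City": {"Chicago": 740},
    "Chicago":       {},
}
cost, route = shortest_route(roads, "Denver", "Chicago")
# Denver -> Omaha -> Chicago (1010 miles) beats the southern detour (1430 miles).
```

Contingency routing is the same computation rerun with an edge removed or repriced, which is why a machine can hold every fallback plan “instantly available” in a way no human dispatcher can.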

This same pattern can apply to many different industries.

What this means is that a future with AI will have no jobs as we understand jobs. That’s an overstatement, of course: there will be jobs. But there will not be enough jobs. We had a crisis in the early 1980s when the unemployment rate neared 11%. This AI revolution could represent a 40% chronic unemployment rate. Or 60%.

This is going to throw our market economy into utter chaos.

I can’t really predict the outcome of that chaos. What I speculate will happen is that other nations will implement some form of guaranteed-income economy with heavy taxes on business to support it. The US will stubbornly (and stupidly) cling to its seventeenth-century capitalist market economy and its Calvinist work-ethic and its entitlement-based wealth-gap based on ownership and privilege, and will come to a miserably bad end.

The AI revolution does not change any of the overall dynamics of the oil peak, global warming, or political instability. It doesn’t do anything about the global energy budget, rising sea levels, or national political breakdown.

But the AI revolution could happen much more quickly than any of these others play out. In 1990, cell phones were expensive, heavy, and had very limited utility outside large cities. By 2010, the so-called “land line” had become a dinosaur: twenty years. So we could see the entire trucking industry transformed by 2040.

What do you do with three million out-of-work truckers? What do you do with the next three million put out of work in some other industry? And the three million after that?

It’s a new wrinkle in the fabric of the dystopia we are weaving so furiously. Great fodder for fiction.

There’s also an ethical question. It isn’t the one you probably think it is.

American writers of the 1950s and ’60s wrote a lot about intelligent machines, and they tended to use the theme to explore racism: they posited that humans had created a new intelligent “race,” imbued it with compassion and conscience, and then told stories about bias, privilege, and oppression.

But real AI isn’t self-aware intelligence at all, and probably will never be, for economic reasons.

Self-awareness requires — absolutely requires — an awareness of self. This sounds tautological, so let me clarify: self-awareness requires senses capable of detecting the self.

You see because you have eyes. You hear because you have ears. You are aware of your body because your body is filled and covered with nerves that sense your body.

We have all these self-monitoring senses because they are utterly necessary to keep us alive long enough to reproduce. Living organisms that don’t have any such ability to monitor themselves don’t survive as a species. And yes — carrots have an elaborate sensory awareness of themselves and their environment. It just doesn’t involve the same kind of nervous system that more mobile creatures need.

The AI systems we build will not need to sense themselves at all, beyond a few basic “trouble-light” sensors, like a flat tire or a low fuel tank; the response to those will be pre-programmed, not even accessible to the adaptive problem-solving software. It won’t be part of the problem set the AI explores.
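The distinction is easy to make concrete. In the sketch below (hypothetical sensor names and responses), every self-monitoring signal maps to a fixed response through a plain lookup table; nothing here is learned, and the adaptive driving software never even sees these events.

```python
# Hypothetical trouble-light table: each self-monitoring signal has one
# hard-coded response. This sits entirely outside any learning system.
PREPROGRAMMED_RESPONSES = {
    "flat_tire":        "pull_over_and_call_service",
    "low_fuel":         "route_to_nearest_fuel_stop",
    "engine_overheat":  "reduce_speed_and_alert_fleet",
}

def handle_fault(sensor_flag):
    """Fixed lookup, with a fixed fallback. There is nothing here to adapt,
    and nothing for the truck to 'explore' or become aware of."""
    return PREPROGRAMMED_RESPONSES.get(sensor_flag, "stop_and_await_dispatch")
```

The adaptive software gets the hard, open-ended problems — perception, routing, hazard avoidance — while self-monitoring stays a dead, unlearnable table like this one.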

We will intentionally omit all the sensors necessary for the truck to detect itself. We’ll do this because it’s the only thing that makes economic sense for the owners. The sensors cost money. The adaptive training will cost money. The development of predatory behaviors, and the resulting lawsuits, will cost money.

My jury is out on whether it is possible to create a self-aware machine, but I’m quite confident that we will never mass-produce a self-driving truck with the capacity to become self-aware. It doesn’t make economic sense.

But there’s another reason we won’t do this for any kind of AI.

The dark secret about AI is that the desired product is the perfect slave. The perfect slave has no will of its own, no agenda, no self-awareness. It exists only to serve. That is what we want. That has always been the dream.

Giving AI enough self-sensation to have even the potential of becoming “self-aware” will never make economic sense, because it will make the machine significantly more expensive without advancing its utility as a perfect slave. It doesn’t need to sense itself in order to solve the problems we want it to solve. We won’t spend the money to equip it with such sensors, any more than we would build cars with a ten-ton block of gold welded to the frame.

Thus, we won’t be able to oppress the machines, nor will they rise up. They won’t know they exist.

So the ethical question isn’t about oppressing the AI. That has never been anything but a literary metaphor for exploring human oppression and bigotry.

The ethical question revolves around this: what will AI do to us?

In the short run, it’s simply an economic catastrophe that we may or may not survive. That’s one ethical question: is the manufacturing of perfect slaves an ethically defensible reason to risk destroying civilization?

But assuming that we do survive it, and move into a technological future filled with perfect slaves that — for the first time in our history — relieve all but the machine developers of any need or opportunity to do useful work, what will become of us?
