Is AI Leading Humanity Toward Dystopia?

Throughout history, humankind has constructed tools to make work easier and more productive, to make our lives better. AI is just another one of those tools, but it is unique in history in that it’s an attempt to mimic the workings of the human brain. Even though the theories behind it stem from the 1950s, AI is available to us now due to the confluence of three factors: increasing computing power, an abundance of data, and new programming techniques such as “big data” processing, machine learning, and deep learning (see “AI 101 – A Primer in Artificial Intelligence” by Sergio Mastrogiovanni in this issue). Even though AI is still in its infancy, we are already seeing unintended consequences from algorithmic biases. Cathy O’Neil documents this in her book Weapons of Math Destruction,1 wherein the machine “learns” to replicate past injustices.

As computing power and the volume of data continue to increase exponentially, AI will become much more powerful. This will follow one of two trajectories:

  • Utopia – machines will handle all of the work so that humans can pursue what fulfills us, rather than what we need to do to survive. This is the future envisioned by Gene Roddenberry, creator of the science fiction series Star Trek, in which a multinational, multiracial,2 multispecies crew fulfills its aspiration to explore the universe.
  • Dystopia – the technology will be used to benefit the few at the expense of the many, and life will become a living hell for most. This is the future often portrayed by science fiction writer Philip K. Dick.

What does AI Utopia look like?
Most work, both manual and intellectual, is done by machines. Humans handle higher-level tasks requiring intuition, creativity, explanation, advocacy, or high-level strategy. As a result, far fewer people are needed for traditional work. The value produced by AI will far exceed what all of humanity needs or wants, so people will be free to pursue whatever provides them fulfillment. A new humanist movement will begin – a celebration of what makes us uniquely human – just as humanism in the Renaissance spurred a new movement of art centered on human (as opposed to supernatural and religious) themes.

In the utopian view, since everyone will have what they need, there will be no need for wars. Oh, people will still have their squabbles, but for the most part bigotry and cruelty will fade away as people won’t need to compete for scarce resources.

For this to happen, we need to find a way for all people to share in the productivity gains from advanced technology. Governments will undoubtedly play a large role, whether through a tax on robots and a minimum basic income or some other method. It’s a chicken-and-egg situation: in order for all to benefit, we have to change our collective competitive mindset to one of cooperation and caring for our fellow humans – but that will most likely happen only after the common benefit is realized.

What does AI Dystopia look like?
Technological advances have been with us throughout history and, although disruptive, have always created more jobs than they’ve destroyed. However, it may be different this time – AI is automating not only mundane tasks but is now replacing some knowledge workers, a trend that is likely to continue. The remaining jobs will require highly selective skills, for which not everyone is equipped, regardless of retraining efforts. Mass unemployment, combined with an inadequate social safety net, could result in human misery on a large scale. It’s unlikely that we’ll see a future where people are exploited, as depicted in the movie The Matrix. It’s more likely that we’ll see people neglected and left out of the productivity gains that accompany advanced automation. The already wide gap between the haves and have-nots will be amplified. A revolt of the disenfranchised is a possibility. But AI can be a mixed blessing and curse: the masses can be placated by services provided by AI. As a society, we’ve already shown that we’re perfectly willing to give up privacy for convenience or entertainment.

China is using facial recognition technology to control its population.3 We are already seeing autonomous drones independently “deciding” to attack a target.4 Predictive analytics has been used to market products in a targeted way, and it’s very amusing when we’re pushed ads for items we just bought, or ads that are wildly off the mark. What will happen when this technology is used to manipulate people in nefarious ways – such as to pre-determine elections? We’ve already seen the impacts of manipulative AI-powered social media. The algorithmic biases we’ve experienced so far will be child’s play compared to what will be possible when advanced technology is in the wrong hands. There have been calls to ban facial recognition technology, and state laws are already in place to limit and regulate it, but who’s going to enforce them in an industry where there is no regulation (see John Sumser and Heather Bussing’s “Pinning Jello® to a Wall – Regulating AI” in this issue)?

Right now, AI can only do limited tasks – this is called “Narrow AI.” For instance, when Watson won at Jeopardy!, that machine was good only at playing Jeopardy!, not at other tasks. But the technology is continuing to advance at an exponential rate. Within a decade or two, we will have machines that far outpace what we have now and approach or exceed the computing power of the human brain.6 Some researchers are developing curiosity algorithms, whereby the AI can learn whatever “interests” it. This is called “Broad AI.” That’s when we’ll open a technological Pandora’s box, whereby the uncontrolled and unpredictable consequences of AI can run amok if not prevented.

What will it be, Gene Roddenberry or Philip K. Dick?
We’ve all seen the cautionary science fiction stories, yet we seem determined to repeat the same mistakes. Human nature is such that if we can, we will. If we can create an automated agent that is completely independent of human control, we will. As quickly as technology is evolving, human evolution is very slow – we are pretty much the same biologically as we were thousands of years ago, and to some extent our biology determines our motivations and behavior. We’ve managed to avoid a nuclear holocaust, but the effects of AI are subtler, and it’s yet unknown whether we will create the tools of our own misery or destruction.

At the risk of being Pollyannaish, I think there’s hope for the human race. To paraphrase the famous quote, “Humanity will do the right thing… only after exhausting all the alternatives.”5 We don’t have to make the choices that will inevitably lead us to unwanted outcomes. Technology can be a great democratizer of information. We need to find ways for most, if not all, people to share in the gains from advanced automation.

This column began with the observation that humankind has constructed tools to make work easier and more productive, to make our lives better. Let that be our guide: are we creating machines to make our lives better, or for some other reason? We will have to find a way to manage (if not regulate) these technologies as they continue to evolve, avoid the mistakes foretold by the writers of decades past, and plan for a future that includes fantastic tools that will make our lives better.


1: Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Crown, 2016)
2: Martin Luther King Jr. famously convinced actress Nichelle Nichols not to quit her role as Lt. Uhura, because this was the first time on TV that a woman of color was depicted in a position of authority. https://www.washingtonpost.com/news/arts-and-entertainment/wp/2015/07/31/how-martin-luther-king-jr-convinced-star-treks-uhura-to-stay-on-the-show/
3: https://www.cnet.com/news/in-china-facial-recognition-public-shaming-and-control-go-hand-in-hand/
4: https://www.nytimes.com/2021/06/03/world/africa/libya-drone.html?searchResultPosition=1
5: The attribution of the original quote, “Americans will always do the right thing – after exhausting all the alternatives,” is unclear. It has been attributed to Winston Churchill, Abba Eban, and others.
6: What Futurists call The Singularity: https://en.wikipedia.org/wiki/Technological_singularity

Roy Altman

Roy Altman is the founder/CEO of Peopleserv, a software/services company whose clientele includes well-known Fortune 1000 companies in several industry sectors. Previously, he was the HRIS Analytics and Architecture Manager at Memorial Sloan Kettering Cancer Center. Altman has published extensively: he has co-authored five books on Business Process Management (BPM) and has published many articles in HR, business, and technology publications. He frequently presents at industry and academic conferences relating to HR and BPM. He is on the faculty of NYU’s new MS in Human Capital Analytics and Technology program and has also taught at Columbia University and Baruch College. He is an Associate Editor of Workforce Solutions Review. He can be reached at roy@peopleserv.com.

 
