As the casualties of “progress” pile up, the self-assured march of AI and other newfangled technologies onto society demands a critical evaluation.
Why is crypto a thing, still?
There’s the paranoid fear that a spooky cabal of technocrats at the Federal Reserve — unaccountable, incomprehensible — might one day make ordinary money go poof! There’s the success of the young bros who minted a fortune convincing other young bros to embrace the thing — luring a generation with dim job prospects anyway to take a shot at getting rich from their bedroom.
Can that be all, though? You might expect some urgency to find a purpose for a technology that consumes more power than Australia, yet hasn’t been able to develop a real-world function other than paying for ransom, drugs or child porn. But once you get past the greater fool theory, you are left with little more than a slogan: It’s hi-tech.
I’m talking about a problem that goes way beyond crypto: the lack of purpose, the absence of a reason for society to keep churning out more “new, new things,” despite the costs, driven by a narrative contrived in Silicon Valley that features technology, any technology, inevitably powering human progress.
In the valley’s telling, interrogating this progress is best left to the Luddites. But the self-assured march of newfangled technologies onto society demands a critical evaluation. Because the casualties of progress are piling up, calling into question why we’re deploying such technologies in the first place.
The social consequences of social media are chilling, and not just for their proven potential to distort the national conversation, spreading misinformation too fast for the truth to catch up. As many observers have noted, the platforms are also substituting online connections for real ones, building alternate realities open to manipulation in pursuit of profit.
Deployed by corporate managers to automate processes and take over increasingly complex decisions, robots have built a better rep. But the reputation relies on unexamined assumptions: First, that automation necessarily improves firm profitability; second, that the fruits of this progress will be shared broadly across society.
Companies that become more productive, the story goes, will expand production and hire more workers. Automation will also create new tasks within firms for humans to do. Incomes rising in line with productivity will generate demand for new products and services, further boosting employment. And the additional competition for labor will drive up wages.
But while these propositions make sense at first blush, they don’t really fit what we are seeing in the real world, where employment growth mostly takes place at cheap-labor joints like McDonald’s and 7-Eleven. Anybody who thinks the gains from automation are being broadly shared hasn’t been paying attention.
A new vein of economic research into the consequences of technological change has found that technology’s bias toward automation can account for most of the rise in wage inequality. It has polarized the labor market between less-educated workers, who are displaced from their tasks and see their wages fall, and those, mainly college graduates and postgraduates, who are not.
Technology does call for new tasks, opening the door to new jobs, but they too are biased toward the highly educated and offer little to the workers with only basic skills whose tasks were taken over by the machines.
Research by economists at the Massachusetts Institute of Technology, Northwestern University and the University of Utrecht found that the economy created a lot of middle-wage production and clerical jobs from 1940 to 1980. But lots of those are now gone. The jobs created since then have been either highly paid professional positions or low-wage service gigs.
And just you wait for artificial intelligence to hit its stride. What Google CEO Sundar Pichai calls “the most important thing humanity has ever worked on” will open whole new realms of human activity to what the money in the valley likes to call “disruption.” The workers displaced by the next version of ChatGPT will get to play their usual role in the narrative of progress: roadkill.
The problem with progress is not just in the way its fruits are shared. The very gains are coming into question. You may remember Elon Musk’s acknowledgment that “humans are underrated,” a rare admission of error after his attempts to automate Tesla’s assembly lines led to delays and malfunctions. The mistake is common: Technology’s contributions to productivity are often hard to find.
As Daron Acemoglu of the Massachusetts Institute of Technology observes, a lot of automation delivers only a so-so boost to the bottom line. Think of automated customer service or touchscreens at McDonald’s. Managers automate anyway for two reasons: It’s “progress” and everybody’s doing it, and the costs imposed on workers displaced by the new technologies are, to the firm, irrelevant. So even if the returns are vanishingly small, they are worth it.
Innovation, by some measures, is happening at a blistering pace. In 2020 the US Patent and Trademark Office issued more than 350,000 patents for inventions, almost six times as many as in 1980, at the dawn of the digital revolution. But total factor productivity over this period grew barely 0.7% per year on average, less than a third of the growth rate from the 1940s through the 1970s.
While the techno-optimists in Cupertino and Mountain View tend to dismiss the dismal numbers as mismeasurement — data crunchers missing all the good stuff — many serious scholars are coming around to the idea that all the awesome IT will not necessarily bring about a productivity revolution.
Innovation is undeniably a cool thing. Because of it, we survive diseases that regularly used to kill us. We can access and process unimaginable amounts of information. Without new technologies we would never meet the challenge to decarbonize the economy and contain climate change.
But as Acemoglu and his MIT colleague Simon Johnson point out in their forthcoming book, ‘Power and Progress,’ due out in May, contemporary evidence and the long story of humanity’s technological development confirm “there is nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social, and political choice.”
Silicon Valley, they argue, should not feel entitled to make the call. With the venture capital industry chasing opportunities for AI to take over an increasing array of tasks and decisions — playing Go, practicing law, analyzing markets — Acemoglu and Johnson fear technological progress is driving society down a dark path.
What if instead of increasing productivity, AI simply redistributes power and prosperity away from ordinary folks and toward those who control the data? What if it impoverishes billions in the developing world — whose cheap workers cannot compete with cheaper automata? What if it reinforces biases based on, say, skin color? What if it destroys democratic institutions?
“The evidence is mounting,” they write, “that all these concerns are valid.”
We can avoid Skynet. Technology needn’t lead us to some oligarchic dystopia. The last 150 years are crowded with technological breakthroughs that empowered workers and lifted all boats.
Think of the mouse and the graphical user interface, or Excel, or email. These inventions extended human capabilities rather than extinguishing them. Arguably the most consequential technological revolution in our history, the transformation of an agricultural economy into an industrial powerhouse, left the working class much better off.
We have amazing technological tools at our disposal. The question is whether we deploy them in a way that complements humans or discards them like redundant castoffs of the march toward progress.
It may not be obvious how to deploy technology along a more human-centric path: building tools that amplify what humanity can do. One thing is clear, though. It will require wresting control over the direction of innovation from a tech oligarchy that profits from human displacement and social alienation.
Then we might build a social media platform that is not optimized to spread misinformation, capture viewers’ attention and maximize ad revenue. We might not replace corporate America’s customer service workers with machines that provide no such thing. And we might not accept the acceleration of climate change just so we can find a new way to pay for illegal stuff.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Eduardo Porter is a Bloomberg Opinion columnist covering Latin America, US economic policy and immigration. He is the author of ‘American Poison: How Racial Hostility Destroyed Our Promise’ and ‘The Price of Everything: Finding Method in the Madness of What Things Cost.’