In Israel, where everything is extraordinarily political, the discourse on the alleged use of spyware by police has centered around Benjamin Netanyahu: Will the indefatigable ex-PM deploy anger at the practice to discredit the criminal charges against him? However, there’s a far bigger issue to be considered: Should we try to contain the march of technology?
Until recently, questioning technology was a sure-fire way to draw ridicule. Anyone doing it risked comparison to the Luddites, those British textile workers who fought in vain against the mechanization of their industry, correctly fearing that the machines would displace them.
The argument against the Luddites was not really that the machines would not displace them; it was that the greater good (plentiful, cheap apparel) was overwhelming and that the displaced could be retrained in a virtuous cycle of progress and prosperity.
Both arguments rang true and things have worked out pretty well. Billions have been lifted out of poverty, illiteracy has almost been vanquished and lifespans have more than doubled. We have access to once-unimaginable benefits, like having most human knowledge accessible on a device in our pocket. From shopping to research to conferencing, we can do things from home that once required travel and time. The achievements are spellbinding.
Moreover, it has been reasonable to hope that even dangerous technologies had benign applications and that humanity would choose wisely: Indeed, nuclear energy lit up cities across the globe, while nuclear weapons have not been used since 1945. Essentially, people were willing to gamble on a brittle notion of humanity’s innate wisdom because they wanted and needed the good side more than they feared the bad.
The technological advances of recent years have mostly continued and validated that trend. For example, most people wanted the convenience of email more than they mourned the demise of the letter or feared the trivialization of the written word.
So great have been the benefits, especially to Israel, which has ridden the high-tech wave to riches, that it is odd to contemplate even the possibility of a paradigm shift. And yet, one must: there are signs that technology may join globalization as something once seen as positive that is now viewed by many with suspicion.
There are a number of reasons for this.
First, never has the potential damage to the world from technology been as profound. Initially, we could cause some carnage; then two superpowers could destroy the Earth; soon, the democratization of knowledge will enable random lunatics to poison the global water supply. There is a point at which that risk equation leads to system failure.
Second, the scale of labor disruption may exceed what retraining can correct within certain fixed parameters, such as the current human lifespan. The chief culprit here is artificial intelligence, which, in simulating and in some ways bettering human intelligence, strikes at the one thing that distinguishes us as a species (however much certain political outcomes may suggest otherwise).
As a grad student 35 years ago, I remember a course on computer vision. I suspected nothing sinister at the time, but this cool technology begot the facial recognition that now drives Big-Brother angst. It’s wonderful to have self-heating houses and automatic translators based on natural language processing, but do you want those things more than you fear a world without jobs for most people?
There is strong reason to fear that jobs will be eliminated at such a fast rate that useful retraining will prove impossible: Being a computer engineer (or a sex worker) is not for everyone. Go into an automated McDonald’s and you will see the early version of the dystopian vision: A handful of refugee humans still puttering around, trying to look useful and hoping someone will somehow demand an extra straw.
Third, the assumption that the greater good is overwhelming is starting to be challenged.
We still desperately want to achieve some things, for sure. We hope to give sight to the blind, cure all cancers and extend lifespans further still, but the desire-to-fear ratio may be changing before our eyes. We may soon fear the consequences more than we appreciate the advances.
THIS NET utility question can be viewed through the prism of the internet.
The early years of the web – say, 1990 through 2005 – were clearly useful. News and entertainment moved online, early forms of incredibly efficient communication emerged (from email to Israel’s ICQ), and physical proximity was no longer needed for many things. Everyone could understand it as well, which was nice.
Web 2.0 was even more enticing, but also vexing in a way. It certainly seemed like social media and smartphones were useful: Everyone could be a publisher; you could maintain contact with faraway friends, broadcast news to the world, form virtual political movements and network with anyone in your field without flying off to conferences.
However, this functionality, married to human nature, caused unforeseen damage in the form of screen overuse, traumatized young people, dopamine addiction and attention-deficit disorders. Most horrifyingly, lies proved more viral than truth, and, consequently, the most impactful of the newly enabled were not civic-minded democracy activists but extremists (and, of course, legacy media was nearly devastated).
Now comes Web 3.0. It will, we are told, disintermediate. We will all be avatars floating around digital life, independent of current forces that have grown too powerful, including social media and central banks. Web 3.0 will normalize cryptocurrencies and give rise to entire virtual worlds based on blockchain databases controlled by no one but armies of unknowable mathematical geniuses solving complex equations with bewildering supercomputers in climate-change-accelerating anonymity.
This Web 3.0 business violates a number of undeclared rules that were used to justify the scorn of the Luddites.
For one thing, much of it is incomprehensible to people in ways that a mechanized loom or the horseless carriage most certainly were not.
For another, it is not exactly answering a market need. Rather, it acquiesces to an idea that is rarely articulated but fervently believed by a minority of cognoscenti: Technology must march on, independent and unchecked, pretty much no matter what. This sounds like a formula for scientific advancement but is not.
Web 3.0 has many devoted fans and much of the enthusiasm is undoubtedly justified, but it is not a response to a clamor from the people. Other than those who would use it for simulated sex, gamers and perhaps the physically disabled, most people don’t seem to actually want blockchain or the metaverse (68% expressed skepticism about the metaverse in one survey), and previous efforts at virtual reality have mostly failed. Many people fear a Matrix-like descent into a virtual nightmare in which entire nations of children will never throw a ball.
There is even resistance to advances already here or just around the corner: Studies show most people dislike facial recognition, and only 21% say they would ride in autonomous vehicles despite evidence and logic suggesting they will reduce accidents.
Into this maelstrom arrive products that enable others to spy on your cellphone with ease. If you are a wanted terrorist, society may like this just fine. However, it is alleged that Israeli police targeted not just terrorists. We used to worry that spyware might fall into the hands of rogues, but now it seems even democratic states cannot be trusted.
Perhaps society will evolve in a direction where the complete loss of privacy, the elimination of most jobs and the marginalization of (somewhat) trusted authorities are acceptable if they accompany something amazing enough, like a cure for aging. Still, I doubt that the idea that technology must not be checked will survive in its current form.
The ranks of the new Luddites are swelling. There are no easy solutions because steering technology toward useful outcomes raises philosophical and practical challenges, but developed democracies will need to account for this somehow.
The writer, a technologist by education, is chief strategy officer at the digital engagement firm Engageya and managing partner of Thunder11, a communications firm with a tech specialization. Before that, he was the top editor for the Associated Press in Europe, Africa and the Middle East. In the 1980s, he was among the first to develop and market multilingual word processors for personal computers.