What’s the greatest regret in the history of human technology? On the face of it, the obvious winner of this non-existent contest would be Robert Oppenheimer, whose remorse came after he invented the atom bomb, conducted a successful test in the New Mexico desert and then watched his own government drop it on two populated Japanese cities. He lived with that regret for the rest of his life, saying things like “I am become death, the destroyer of worlds.”
You don’t have to reach Oppenheimer’s level to deeply regret your own inventions. Take prominent Silicon Valley entrepreneur Aza Raskin, who invented something much smaller than the atom bomb but which, he reckons, has cost no fewer lives. Raskin invented, wait for it… the infinite scroll.
Who thought it up? In 2006, Raskin was simply googling when he hit the bottom of a search results page and was offered a series of clickable numbers to take him to further results. It irritated him. At the time, he wrote on his blog: “Don’t ask users to request more content. Just don’t.” He further explained that having to wait for the next page to load breaks the user’s concentration and train of thought.
His wish was almost instantly granted, and Raskin, a man with a conscience who had specialized in open-source user interfaces, has regretted it ever since. He reached a state of near-depression when he realized the level of addiction and time-wasting his feature has caused on social networks across the globe. Internet giants wasted no time in exploiting his idea in every way possible. In a hotel room in Helsinki, where he had gone for some soul-searching, he calculated that the feature he’d invented costs humanity 200,000 lifetimes each day.
In short, he’s been called the “digital Oppenheimer,” and he makes amends at every possible opportunity. In 2019, he told the British Telegraph newspaper that humanity is currently grappling with two very ancient stories: first, be careful what you ask for, because you might get it; and second, inventors losing control of their creations.
Raskin joined forces with former Google product manager, Tristan Harris, who in a 2013 meeting also warned that the company was implementing practices that exploit human psychology to squander time. Google responded by promoting him to the hollow position of being “responsible for product ethics.” He left the company three years later, having managed to implement no changes.
Harris and Raskin joined forces in an attempt to curb addiction, distraction, disinformation, polarization and extremism – all products of an industry making big money out of capturing time and attention, and finding ways to make people watch difficult, emotion-evoking content. Raskin preached: “If you can’t determine the impacts of the technology you're about to unleash, it’s a sign you shouldn’t do it.”
Go tell that to Sam Altman and the other 350 people who developed artificial intelligence that could, in principle (and with a little help from friends), be writing this article instead of me. Only then would they ask, “My God, what have we done?”
Panic or calm down?
Sam Altman is the CEO of OpenAI, the company that brought the world ChatGPT – artificial intelligence that you can talk to as it threatens to make whatever-you-do-for-a-living obsolete. At the start of this year, Altman told reporters that he was “losing sleep” and suspected he’d done "something really bad" and testified before a Senate subcommittee, “I think if this technology goes wrong, it can go quite wrong."
He called for the implementation of government regulation on AI and, along with a further 350 senior hi-tech executives including Elon Musk, signed a letter calling for “mitigating the risk of extinction from AI”. Yes, in those very words.
How scared should we be? Not very. This is the same Altman who told Atlantic magazine that he has no regrets about developing ChatGPT, adding that humanity would just have to adapt itself to sharing the planet with a further form of intelligence that will change everything from the job market through to relationships. Oh, and just after signing this letter of warning, Elon Musk founded his own AI company.
“I wouldn’t say their remorse is comparable to that of Oppenheimer or Alfred Nobel (who developed dynamite, regretted it and founded the Nobel Peace Prize),” says Prof. Yossi Keshet, an AI researcher specializing in speech, language and deep learning, and chief scientist at aiOla. “Because unlike Oppenheimer and Nobel, this time round there’s a huge money factor. This time, the entrepreneurs are personally earning vast amounts of money. And people who earn that kind of money don’t regret what they do.”
“In effect, he’s saying: AI isn’t really going to write this article instead of me anytime soon. AI’s thought process is cumulative probabilistic averaging, based on what has already been written and the ability to generalize. But there’s no way you could get it to write the new Anna Karenina. The best it could do would be to create an average of Russian novels, but it couldn’t replace the book. The rest’s just hype.”
So should we just calm down? Depends on whom you're asking. “When it comes to AI, we’ll only see remorse when the creature overpowers its creator. It could definitely happen, but it’ll be too late by the time we’re sorry,” says Shai Shalev-Shwartz, Mobileye CTO and professor of computer science at the Hebrew University of Jerusalem.
“Even with the atom bomb, I think if Oppenheimer hadn’t developed it, and the Germans got there first, things would have been worse. At the end of the day, with all inventions, you have to weigh up the alternatives, and if someone doesn’t invent something, someone else will. It’s the people who decide to abuse and misuse technology who should be expressing regret.”
So should the government set regulatory limits for inventions such as AI? “As a rule, I think regulation is right, but it must be done in a way that doesn’t stifle progress. It’s the prisoner’s dilemma. Take the climate crisis, for example. The U.S. president says: ‘If I reduce my pollutant emissions but China, Russia and others carry on polluting, I end up a sucker and I haven’t solved the problem.’ This is also true for technological development. If Israel decides to regulate AI development, it won’t prevent the development of AI somewhere else that could destroy humanity. So there needs to be a world coalition. And that’s very hard.”
Prof. Yoav Shoham, partner and founder at leading Israeli AI company AI21 Labs and chair of the science committee at the national AI program, explains that “technological advances scare us more than they harm us. They always advance humanity. The AI industry is actually very attentive to ethics, not necessarily because companies want to do the right thing, but because it’s good business sense. If a machine starts playing up, no one will use the product.”
And if AI replaces my job, won’t you be sorry? At least for me? “What makes you think that your life serving the machine won’t be better?” Shoham laughs. “Listen, there are professions that have become obsolete. We don’t have to navigate the roads anymore. Language editing jobs are fast disappearing, but that doesn’t make the role of top editor redundant. It just allows them to concentrate on the things a person can do. That’s the whole story. I’m more worried about the very opposite happening. AI isn’t as strong as people think, and they’ll be very disappointed when they try using it beyond certain areas.”
The greatest regretters
But ever since the people behind AI started talking about how sorry they are for developing it, technology remorse has cast a dark shadow.
Technology remorse isn’t really new. It’s always been there – since before (and especially after) Oppenheimer and Nobel. Mankind’s history of technology is awash with people regretting their creations, and it’s easy to understand them when it comes to the direct development of arms. Kamran Loghman, who invented pepper spray for the FBI in the 1980s, said in a 2011 interview that he regretted its use against nonviolent demonstrators.
Toward the end of his life, Mikhail Kalashnikov wrote in a letter to the Russian Orthodox patriarch that he regretted inventing the AK-47. (The church spokesman’s response read: “The Church has a very definite position: when weapons serve to protect the Fatherland, the Church supports both its creators and the soldiers who use it”).
But you don’t have to develop destructive tools to regret them. Computer science teacher Scott Fahlman, who invented the emoticon 37 years ago, had no idea what kind of genie he’d released into the world, or how he had brought humanity closer and closer to the age of the smiling poop. In a 2013 interview, he commented: “Sometimes I feel like Dr. Frankenstein... my creature started as benign but it’s gone places I don’t approve of.”
Get in line behind Ethan Zuckerman, inventor of the pop-up ad (“I’m sorry. Our intentions were good”). In the same line you’ll meet Dong Nguyen, creator of Flappy Bird, the addictive mobile phone game. After over 50 million downloads and countless addicts, Nguyen himself took the game down, tweeting, “I can’t take this anymore.”
It might all sound trivial, but what about web inventor Sir Tim Berners-Lee, who decided that “//” must, now and forevermore, follow “http:” in every internet address? He (much) later admitted that the double slashes were “unnecessary” and there was no need for them. “There you go, it seemed like a good idea at the time,” he said.
Robert Propst, who invented the office cubicle (branding it the “Action Office”), released a statement before his death in 2000 regretting his invention: “The cubiclizing of people in modern corporations is monolithic insanity.”
We might feel some glee knowing that Australian dog breeder Wally Conron, who crossed a Labrador with a Poodle to create the “Labradoodle,” told reporters: “I opened a Pandora’s box and released a Frankenstein monster.” He continued: “I find that the biggest majority are either crazy or have a hereditary problem. But I do see some damn nice Labradoodles that are steady, just like I’d breed, but they are few and far between.”
No time for caution
Remorse takes time, especially without being able to see into the future. Justin Rosenstein, who invented the Facebook “like” button, had no idea how much damage it would cause to the self-esteem of so many (mainly young) people.
Karl Benz, inventor of the motor car, had no way of predicting pollution and traffic jams. The brilliant scientist Thomas Midgley – who invented leaded gasoline to stop engine knock and Freon gas to replace dangerous refrigerants – could never have foreseen that he would be posthumously described by environmental historian J. R. McNeill as the man who "had more adverse impact on the atmosphere than any other single organism in Earth's history."
Inventors and developers live in the service of their inventions. Yet when tech entrepreneurs talk (too much) about changing the world, they almost never consider that it might be a change for the worse.
As Rachel Botsman, who has written about the subject for Wired magazine, explains, it took radio over 50 years to reach 99% of American households, and 38 years for television to do the same. It took Instagram only three months to reach a million users after its 2010 launch. TikTok gleaned its billionth user after only four months.
This speed of distribution and expansion leaves no room for caution, moderation, informed decisions or control over who the users are. Entrepreneurs, for their part, are committed to constant acceleration, with the imperative to “move fast and break things” forever hovering over them.
So, it’s rare that possible future regrets are factored in ahead of time. And yet, sometimes they are. The development of Mobileye’s autonomous car is one such case. Shalev-Shwartz recalls: “When we came to develop the car, we asked ourselves what its limitations were. What do we need to promise? How will it work? Because it’s a car that can be dangerous, it’s a kind of animal that we need to first define what it can and can’t do, how it needs to behave in certain circumstances. We created a mathematical model for the regulatory bodies about how to treat it.
"We defined, for example, how much distance it needs to keep from other cars, how cautious and tolerant of other’s mistakes it has to be, what’s reasonable and what isn’t when it comes to what elements on the road might do, what constitutes a dangerous situation. We, in effect, gave the regulators tools to choose where to draw the line between safety and utility."
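The kind of rule Shalev-Shwartz describes – a mathematically defined safe distance that regulators can inspect – can be made concrete. Below is a minimal sketch in the spirit of the published Responsibility-Sensitive Safety (RSS) formulation Mobileye has described; the function name and all parameter values here are illustrative assumptions, not Mobileye's actual model or numbers.

```python
def min_safe_distance(v_rear: float,
                      v_front: float,
                      response_time: float = 1.0,  # seconds the rear car may take to react (assumed)
                      a_accel_max: float = 3.0,    # worst-case acceleration during reaction, m/s^2 (assumed)
                      a_brake_min: float = 4.0,    # weakest braking the rear car guarantees, m/s^2 (assumed)
                      a_brake_max: float = 8.0) -> float:
    """Minimum gap (meters) so that even if the front car brakes as hard as
    physically possible, while the rear car first accelerates for its full
    response time and then brakes only gently, the two never collide."""
    # Worst-case distance covered by the rear car: reaction phase, then braking.
    v_after_reaction = v_rear + response_time * a_accel_max
    d_rear = (v_rear * response_time
              + 0.5 * a_accel_max * response_time ** 2
              + v_after_reaction ** 2 / (2 * a_brake_min))
    # Best-case (shortest) distance covered by the front car under full braking.
    d_front = v_front ** 2 / (2 * a_brake_max)
    return max(0.0, d_rear - d_front)

# Example: both cars traveling at ~100 km/h (27.8 m/s).
print(round(min_safe_distance(27.8, 27.8), 1))  # → 99.6
```

The point of such a formula is exactly what the interview describes: it hands regulators an explicit dial – response time, assumed braking forces – with which to draw the line between safety and utility.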
Do you see yourselves regretting this in the future? “I think autonomous vehicles will be good for mankind. They’ll make the world a better place and reduce the number of fatal accidents – so no, we have no regrets. Perhaps our only regret will be the disappointment that, despite our presenting the model back in 2017, it was only adopted in China. The regulators elsewhere didn’t go with it.”
And remorse is for amateurs anyway. Prof. Yonatan Dubi of the Chemistry department at Ben Gurion University gained notoriety in recent years with his sweeping rejection of the idea of global warming and in particular, the feelings of guilt surrounding it. “We definitely have to stop beating ourselves up over this,” he said.
“There’s no significant proof that temperature changes on the planet result from carbon dioxide concentration. It’s a very weak scientific hypothesis. Just saying it loudly a lot doesn’t make it true.”
Technology regret, he says, isn’t a real idea anyway. “Technology isn’t good or bad. At the end of the day, if it catches on, it means it has some value. It may well come with drawbacks. Things have drawbacks. The world’s a complicated place. But regretting technology is a rather romantic notion. You can regret how the technology is used. Oppenheimer didn’t regret discovering how to split the atom. His regrets began when he started feeling responsible for the deaths of 200,000 people. But he knew beforehand that the bomb would be used and he didn’t think it was a good idea. So, regret is a very emotional thing. Technology must come in a neutral way.”
So it’s better to have no regrets at all? “If you’re really looking for technology regret, we should all regret stifling research in the field of nuclear energy. We have an energy source that’s stable, available and can be cheap, but collective and political stupidity has made us curb research in the field since the 1970s. The terrible fear of nuclear explosion was part of the problem. And these things are very different on the physics scale. There can’t be a nuclear explosion in a nuclear energy plant. It’s like putting milk on the table and waiting for it to turn into Gouda cheese. A nuclear renaissance has begun because everyone’s talking about emission-free electricity.”
On a personal level, is there no technology you regret? “I could have skipped the fax and moved straight onto e-mail.”
How to avoid regret
Aza Raskin, who invented the infinite scroll feature for user convenience – and who later saw the internet giants abuse it to deepen addiction and stretch usage time (hence "doomscrolling") – has tried to address the problem of regret by founding the Center for Humane Technology, where he proposes three solutions to prevent future entrepreneurs from regretting their inventions.
The first solution is a code of ethics signed by users – a kind of digital Hippocratic oath detailing “proper and improper” use. Improper use may lead to the cancellation of user licenses.
The second solution defines user-volume milestones in advance, so that whenever the invention reaches the next one – 100,000 users, a million, a billion, and so on – the entrepreneur is required to reapply for their license, based on its positive and negative impact up to that point.
The third solution is founding a “doubt club” like the one set up by Raskin himself – a forum in which entrepreneurs working on non-competing ideas share their doubts and reservations about their products. The discussions don’t leave the room, and the aim, Raskin says, is to reduce ignorance and encourage what he calls “epistemic humility.”