Murderbot is a sentient “security unit” designed by its greedy bonding company overlords to protect its clients at any cost, including killing anyone considered a threat, or even itself, if necessary. Martha Wells's The Murderbot Diaries tells the story of the sulky, cynical Murderbot after it hacks its heartless and completely unethical “governor module” to free itself from the reins of “The Company,” and of Murderbot's interstellar journeys as it discovers its humanity. Murderbot may be an extreme, fictional case, but it exemplifies the moral dilemmas presented by AI.
Artificial intelligence (AI) is sweeping science and technology, bringing positive change to every facet of our lives. The grim reality is that it is also being abused in the worst possible ways for crime and terror. Possibly most disturbing, however, is the way AI developed for “good” can, by way of intrinsic biases, unwittingly yield unethical results, sometimes with the severest of consequences.
Part of human nature, bias and prejudice are prevalent across all facets of government, business, media… well, across all facets of society. Unchecked, AI has the potential to compound unfair practices, deepen biases, and amplify inequality in every sector it touches.
Assuming, but not requiring, a notional knowledge of AI, Machine Learning, and Deep Learning, this series of two articles dives deep into the bottomless ocean of unethical AI, with a focus on bias. The first part runs down the most common instances of unethical AI, looking into its shadiest corners. The next article will move to our main topic, bias. We will see examples of bias in AI, understand how they come about, and take a look at what's being done to rein in the potential for catastrophe inherent in biased AI.
- Unethical usage – How existing AI technologies are being used for everything from minor offenses to organized crime and heinous terror, with a closer look at deepfakes.
- Harmful errors in AI programming – From fabricated legal cases provided by ChatGPT to misleading real estate estimates, errors in AI can have dire consequences.
- AI subverted – AI can be hacked, often more subtly than “regular” computer systems, making the hacks even harder to detect. Once hacked, an AI need only be sent in the wrong direction to create havoc.
- Criminal AI – Very simply, AI systems developed to help carry out or to actually perpetrate crimes or acts of terror. Worse still are AI systems built to create new criminal AIs.
- Bias – When AI systems absorb biases, whether through their logic (algorithms) or their data, leading to biased outputs (see the sketch after this list). This will be discussed at length in part 2 of the series.
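To make the “bias via data” mechanism concrete, here is a minimal, hypothetical Python sketch; the hiring scenario, the feature names, and the use of scikit-learn are illustrative assumptions, not anything drawn from this article. A plain model trained on historically skewed hiring records reproduces that skew even when the two groups are equally skilled.

```python
# A minimal, hypothetical sketch (not from the article): how bias baked into
# historical training data is reproduced by an otherwise "neutral" model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions (the protected attribute
# "group" and the feature "skill" are assumptions for illustration only).
group = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)

# Biased history: group 1 had to clear a higher skill bar to be hired.
hired = (skill > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train a plain logistic regression on the biased records.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical skill, the model recommends the two groups very differently,
# faithfully echoing the historical bias rather than actual ability.
same_skill = np.zeros(1_000)
for g in (0, 1):
    X_test = np.column_stack([same_skill, np.full(1_000, g)])
    print(f"group {g}: predicted hire rate at equal skill = "
          f"{model.predict(X_test).mean():.2f}")
```

Nothing in the code itself is “prejudiced”; the bias arrives entirely through the data the model learns from, which is exactly why it is so easy to miss.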
Before we go on, a small digression to mention that which we won't be mentioning, namely, the grandest ethical questions of them all, those surrounding the topic of artificial general intelligence, or AGI. Amazon Web Services provides a good explanation of AGI: “Artificial general intelligence (AGI) is a field of theoretical AI research that attempts to create software with human-like intelligence and the ability to self-teach. The aim is for the software to be able to perform tasks that it is not necessarily trained or developed for.” Existential questions such as “How can we ensure that AGI remains under human control and aligns with human values?”, “How do we define these values?”, “How can we ensure that AGI systems are free from bias and promote fairness?”, and “How can we prevent AGI from being used maliciously or from causing unintended harm?” are at the heart of all AI development, are widely discussed, and are, therefore, outside the scope of this article.
The colorless rainbow of unethical AI
First, some background. Just as there are a multitude of methods and algorithms underlying AI and, equally, AI's applications are limitless, so too does AI present an interminable array of ethical conundrums relating to privacy, security, transparency, human resources, academia, finance, and on and on. Needless to say, each of these is tied to one or more instances of unethical use of AI, or of AI that is itself innately unethical.
Ignoring the eventualities of evil, world-dominating, Terminator- or Matrix-like super-AIs, and the great grey area of behaviors and functions of questionable virtue, unethical AI can be grouped into the broad categories listed above, a by no means all-inclusive set of classifications. We will touch briefly on these categories, as each is an immense topic of its own, representing an entire area of study.
Existing applications of AI can be, and are being, used for everything from petty crimes and misdemeanors to the most egregious felonies and unspeakable terror. A telling example is the murky and frightening world of deepfakes. By now, most of us have some level of familiarity with the term, but just in case, here's the Merriam-Webster Dictionary definition:
Deepfake - an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.
As the underlying AI becomes smarter, deepfakes are getting better by the day and have already been used for everything from bullying and theft to character assassination, swaying elections, and perpetrating terror. Recent examples include sexually explicit deepfake images of Taylor Swift that went viral on X, an AI-generated robocall imitating President Joe Biden that encouraged voters not to participate in the New Hampshire primary, and, in what has become known as Pallywood, Hamas's posting of realistic renderings, many of them AI-generated deepfakes, of false bombings and casualties to confuse the public and bolster its propaganda efforts.
In the case of the latter, most of the fakes are quickly found out, but because of rapid dissemination over social and conventional media, tremendous, persistent damage is done before the fakes are revealed. Further, as noted in a New York Times (NYT) article, “the mere possibility that A.I. content could be circulating is leading people to dismiss genuine images, video, and audio as inauthentic,” e.g., unknowing masses disbelieving the revolting, very real images of the Oct. 7 massacre taken by victims and by the terrorists themselves.
AI gone wrong
Remember the ominous trumpets slowly building to a crescendo in the “Also sprach Zarathustra” musical opening of 2001: A Space Odyssey? Do you recall HAL, or the “Heuristically programmed ALgorithmic Computer,” the AI in the must-read book / must-see movie that turns on its astronauts, murdering four of them before being “killed” itself by the sole survivor?
Hop on to Amazon and you'll be fed a list of AI-generated suggestions based on your purchases, searches, browsing behavior, and more. Open your social media platform of choice and an AI will tailor the ads and your entire experience to your preferences, relying on its analysis of your prior sessions. The list of areas where AI is already embedded in our daily lives goes on and on.
You might be asking yourself at this point, “OK, science fiction aside, what's the big deal if a recommendation AI has a hiccup, offering a club soda instead of golf clubs?” And you'd be right, except, of course, that these are relatively harmless mistakes made in less consequential applications of AI. In practice, AI errors can have dire business, economic, social, political, and legal consequences, to name a few. And the greater the dependence, the greater the potential for disaster, not to mention for fueling distrust in AI.
Consider the following examples: “ChatGPT hallucinates court cases” (CIO online magazine) – In May of 2023, attorney Steven A. Schwartz brought a lawsuit against the Colombian airline Avianca on behalf of his client, Roberto Mata. Schwartz used ChatGPT to research prior cases in support of the lawsuit. During the proceedings, it was discovered that ChatGPT had supplied Schwartz with 5 fictional cases. The judge ended up slapping a $5,000 fine on Schwartz and his partner, Peter LoDuca, and later dismissed the lawsuit altogether.