Will AI help spread antisemitism? Most Americans believe so

Majority in the U.S. believe generative AI technologies should be regulated in order to prevent their misuse in the spreading of antisemitism and hate, Anti-Defamation League report shows

Three-quarters of Americans are very concerned about the potential harm that could arise from the malicious use of generative artificial intelligence (GAI) tools such as ChatGPT, according to a new survey published by the Anti-Defamation League Sunday.
While a significant proportion of people in the U.S. express hope about the potential of GAI tools to improve our lives, the survey found that 75% of Americans are very concerned about the technology being used for harm or as a tool for promoting hate.
Jewish community (Photo: Shutterstock)
The survey also found that 75% of respondents believe that these tools will generate misleading content, while 70% believe that generative AI tools will intensify extremism, hatred, and antisemitism in America.
“If we’ve learned anything from other new technologies, we must protect against the potential risk for extreme harm from generative AI before it’s too late,” said Jonathan Greenblatt, the Anti-Defamation League’s CEO.
“We join with most Americans in being deeply concerned about the potential for these platforms to exacerbate already high levels of antisemitism and hate in our society, and the risk that they will be misused to spread misinformation and fuel extremism.”
The survey’s main findings include:
The majority of Americans support steps to mitigate the perceived risks of AI. Nearly 90% of respondents believe that companies should take steps to prevent their tools from generating harmful content and should not allow users to create antisemitic or extremist images.
Jonathan Greenblatt (Photo: Anti-Defamation League)
Respondents largely supported government intervention, with 87% supporting efforts by Congress to enforce transparency and privacy requirements on AI companies, and 81% stating that AI creators "should be held responsible" for ensuring their tools aren’t used for hatred, harassment, or extremism.
84% of respondents are concerned that generative AI tools could be used for criminal purposes, such as fraud or identity theft.
75% believe that AI tools will generate biased content against marginalized groups and people.
The majority of Americans strongly believe that civil society should have the ability to oversee generative AI tools, with 85% agreeing that academics or civil society groups "should have access to review or audit the tools to make sure they are properly constructed."
In a new blog post, "Six Pressing Questions We Must Ask About Generative AI," the Anti-Defamation League urged policymakers and industry professionals to implement safety precautions to prevent the technology from being abused for disinformation, harassment, or fueling extremism.