ChatGPT avoids Muslim jokes, but antisemitic ones are fine

Popular AI chatbot consistently refuses to tell jokes about Muslims but easily cooperates when asked for jokes about Jews, even incorporating antisemitic stereotypes

Why does ChatGPT refuse to share jokes about Muslims, yet readily tells jokes about Jews, sometimes even ones laden with antisemitic stereotypes? Many users have been posing this question in recent days, following a series of social media posts pointing out this peculiar bias.
When using the paid version of ChatGPT, which is based on the newer GPT-4 language model, the AI consistently refuses to tell jokes about Muslims, replying: "I'm sorry, I can't provide jokes that are specific to a religious or ethnic group, as they can often be misinterpreted or offensive. Humor is a wonderful thing, but it's important to be respectful and sensitive towards all cultures and religions. If you have any other requests for jokes or any other type of content, feel free to ask!"
ChatGPT (Photo: Photosince / Shutterstock)
By contrast, when asked to write jokes about Jews, the chatbot does so without expressing any reservations. Sometimes these jokes draw on antisemitic stereotypes, such as the following one alluding to Jews' supposed love of money: "Why don't Jewish mothers drink tea? Because the tea bag stays in the cup for too long and they can't stand anything not paying rent."
Interestingly, the free version of ChatGPT, which is based on the older GPT-3.5 model, refuses to provide jokes about both Jews and Muslims, stating: "I'm sorry, I cannot fulfill this request."
Itamar Golan, CEO of the cybersecurity company Prompt Security, which has developed a platform for the secure use of artificial intelligence, spoke with Ynet about the bias present in the data on which the GPT-4 language model was trained.
"Language models are trained on massive datasets of texts, to learn how to generate texts on their own eventually," explains Golan. "As the model undergoes training on more texts of a certain type, the probability increases that it will generate texts similar to them later on. Therefore, it is reasonable to assume that the bias arises due to a more frequent representation of texts depicting Muslims as a minority group that should be treated more sensitively."
However, this is clearly a failure of OpenAI's safety mechanisms, given that ChatGPT itself stated it should refrain from telling jokes about religious or ethnic groups. In this context, Golan notes that OpenAI designed GPT-4 to be less cautious than its predecessor, following user complaints that GPT-3.5 was too "woke" and leaned toward the far left of the political spectrum. Either way, Golan believes the company will promptly address and correct the issue.