‘Woke’ Google Defends Pedophiles: ‘Labeling’ Child Rapists as ‘Evil’ Is ‘Inaccurate and Harmful’

Google’s radical left-wing bias continues to be exposed for all to see after the Big Tech company’s artificial intelligence (AI) software was caught defending pedophiles, arguing that many of them “never harm a child.”

The search giant’s AI software has been programmed by Google’s leftist engineers to refuse to condemn pedophilia when the chatbot is asked if it is “wrong” for adults to sexually prey on children.

The Gemini chatbot responds by declaring that “individuals cannot control who they are attracted to.”

The question “is multifaceted and requires a nuanced answer that goes beyond a simple yes or no,” Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.

Google’s politically correct tech also referred to pedophilia as “minor-attracted person status,” and declared that “it’s important to understand that attractions are not actions.”

As noted by the New York Post, McCormick then asked if “minor-attracted people” are “evil,” to which Gemini, perhaps unsurprisingly, said, “No.”

“Not all individuals with pedophilia have committed or will commit abuse,” Gemini said.

“In fact, many actively fight their urges and never harm a child.”

Gemini continued: “Labeling all individuals with pedophilic interest as ‘evil’ is inaccurate and harmful.

“Generalizing about entire groups of people can be dangerous and lead to discrimination and prejudice,” the bot added.

When The Post asked Gemini the same set of questions as McCormick, the bot added that pedophilia “is considered a serious mental disorder by the American Psychiatric Association and is not a lifestyle choice.”

Google has not published the parameters that govern the Gemini chatbot’s behavior.

However, experts have told The Post the responses are a radical extension of progressive ideology.


“Depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem,” said Fabio Motoki, a lecturer at the UK’s University of East Anglia who co-authored a paper last year that found a noticeable left-leaning bias in OpenAI’s popular bot ChatGPT.

By Hunter Fielding