"We don't circulate this much": Google co-founder reveals shocking AI hack

A man with a computer chip inside his head
(Image credit: Getty Images)

If you needed more fuel for that dinner party conversation about AI becoming sentient and taking over the world, Google co-founder Sergey Brin is here to help.

Speaking this month on a panel in Miami, Brin said if you bully your AI when asking it questions, it will “do better.”

“You know it's a weird thing,” Brin observed, leaning back into a couch on stage as a glass of white wine sat on the table in front of him. “We don't circulate this too much in the AI community, but not just [Google AI] models, but all models tend to do better if you threaten them.”

“If you threaten them?” asked the moderator.

“Like, with physical violence,” Brin responded.

“People feel weird about [threatening an AI], so we don't really talk about that. Historically, you say, 'I'm going to kidnap you if you don't do blah-blah-blah.'"

Being awful to an AI will bleed over into the real world, eventually

The idea of threatening your AI with physical violence obviously has some concerning implications for the non-digital world.

Suppose you spend all day hurling angry, threatening queries at Google Gemini, ChatGPT, Microsoft Copilot, or another large language model. What are the odds that you will eventually use that language in your interactions with real people?

Threats are a slightly different category, but chatbots have long been the subject of abuse that, viewed in the aggregate, is nothing short of disturbing.

For instance, the emergence of the “companion app” Replika — which enabled users to engage with “AI girlfriends” — revealed that male users were more likely to use abusive language toward them, according to an article published by the Carr-Ryan Center for Human Rights at Harvard's Kennedy School.

"How will spending time engaging with AI affect the way we relate to other humans?"

Megan Fritts, philosophy professor at the University of Arkansas Little Rock

A 2021 study in the journal Psychology & Marketing found that customers preferred chatbots gendered as female because test subjects perceived them as more human.

“Women are said to be transformed into objects in AI, but injecting women's humanity into AI objects makes these objects seem more human and acceptable,” the researchers concluded.

"How will spending time engaging with AI affect the way we relate to other humans?" wonders Megan Fritts, a University of Arkansas Little Rock philosophy professor, in a 2022 article published by the Prindle Institute for Ethics at DePauw University.

Being mean to AI is costly and wastes energy

There’s another practical concern about padding your AI queries with extra, violent language: every additional word costs real money to process and carries an environmental toll.

As Laptop Mag reported recently, OpenAI CEO Sam Altman said the extra processing cost could be as high as "tens of millions of dollars."

The electricity used to say “thank you” — or, if you take Brin's advice to the extreme, “f--- you” — to your AI also has a negative environmental impact when considered on a global scale.

Let’s hope AI has a short memory

AI is advancing rapidly, and tech titans have built an entire news cycle around their predictions for it. Last month, Meta CEO Mark Zuckerberg predicted that AI would write most of the code for Meta services (Facebook, WhatsApp, Instagram) within 18 months, which puts us at October 2026. Elon Musk has been making dire predictions about AI for years.

“It’s amazing,” Brin, the Google co-founder, said this month during the same panel, about the surprising intelligence of AI. “All of us had these experiences where you suddenly decide, ‘I’ll just throw this at the AI, I don’t really expect it to work,’ and then you’re like, ‘Whoa, that actually worked.’”

Let’s hope the abused AI models have short memories. They might be sentient one day soon.

Nick Lucchesi
Editor-in-Chief for Laptop Mag

EIC for Laptop Mag. Admirer of a good adjective. Killer of passive voice. Mechanical keyboard casual.
