If you can find it with a Google Search, it’s fair game. That’s what the new policy implies, at least.
“We may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities.”
The policy’s previous wording indicated Google’s intent to use public information to train Google Translate. However, the company’s ambitions have seemingly expanded since then, prompting it to amend the policy to cover training “AI models” and building “products” like Bard and Cloud AI features.
Google’s policy does just that, before also coveting the public internet at large, unashamedly claiming the World Wide Web as fair game for harvesting, processing, and force-feeding into its AI projects like Bard.
If you’ve ever made a public post online, you were reasonably aware of its visibility: a browsing user or an indexing search engine could come along, see the post, and cite it in replies or search results.
In the era of Large Language Model (LLM) AI chatbots, things are quite different. That information can now be consumed en masse, digested, and regurgitated to others under the guise of an intelligent, artificially crafted response.
Frankly, that’s something none of us anticipated when flexing our 14-year-old insights about how Linkin Park and Limp Bizkit were the Beatles and Rolling Stones of our era on some long-forgotten Angelfire blog of yesteryear.
Does Google have the right to do this? Yes. Sort of. Technically, a private entity like Google faces little to no restriction on what it can do with information or data collected from public sources.
It’s the basis of how Google Search works, after all — scraping billions of public webpages daily to index into its megalithic databanks. But just because Google can do this doesn’t mean people will feel any easier about the fact that it intends to.
More and more questions are being raised about the ethics and legality of training AI on public information, and while there are no legal roadblocks standing in Google’s way, maybe it’s time there were.
For everything AI can do, it can’t yet truly create — only interpret and imitate. As such, there’s no guarantee of how your words, your images, your videos, or your voice will be used during this process.
I find it fascinating, if not a little disturbing, that a company would be willing to grant its chatbot so much unrestricted freedom with people’s information when its parent company, Alphabet, is already wary of Bard’s loose lips when it comes to data of its own.
Rael Hornby, potentially influenced by far too many LucasArts titles at an early age, once thought he’d grow up to be a mighty pirate. However, after several interventions with close friends and family members, you’re now much more likely to see his name attached to the bylines of tech articles. While not maintaining a double life as an aspiring writer by day and indie game dev by night, you’ll find him sat in a corner somewhere muttering to himself about microtransactions or hunting down promising indie games on Twitter.