Google’s AI-Powered ‘Inclusive Warnings’ Feature Is Badly Broken

Starting this month, 21 years after Microsoft turned off Clippy because people hated it so much, Google is rolling out a new feature called “assistive writing” that inserts itself into your prose to offer style and tone notes on word choice, conciseness, and inclusive language.

The company has been talking about this feature for quite some time; last year, it released developer documentation style guidelines urging writers to use accessible language, voice, and tone. The feature is being rolled out to enterprise-level users and is enabled by default. But it also shows up for end users in Google Docs, one of the company’s most used products, and it’s annoying as hell.

At Motherboard, Senior Writer Lorenzo Franceschi-Bicchierai typed in “annoyed” and Google suggested he change it to “angry” or “upset” to “improve your writing flow.” Being annoyed is a completely different emotion than being angry or upset – and “upset” is so amorphous it could mean a whole spectrum of feelings – but Google is a machine, while Lorenzo is a writer.

[Screenshot: Google suggesting a replacement.]

Social editor Emily Lipstein typed “Motherboard” (as in, the name of this website) into a document and Google popped up to tell her she was being insensitive: “Inclusive warning. Some of these words may not be inclusive to all readers. Consider using different words.”

Journalist Rebecca Baird-Remba tweeted an “inclusive warning” she received about the word “landlord,” which Google suggested she change to “property owner” or “proprietor.”

Motherboard editor Tim Marchman and I continued to test the limits of this feature with prose excerpts from famous works and interviews. Google suggested that Martin Luther King Jr. should have spoken of “the intense urgency of now” rather than “the fierce urgency of now” in his “I Have a Dream” speech, and edited President John F. Kennedy’s use of the phrase “for all mankind” in his inaugural address to read “for all humankind.” A transcribed interview with neo-Nazi and former Klan leader David Duke — in which he uses the N-word and talks about hunting down Black people — received no notes at all. Radical feminist Valerie Solanas’ SCUM Manifesto gets more edits than Duke’s tirade; she should use “police officers” instead of “policemen,” Google helpfully notes. Even Jesus (or at least the translators responsible for the King James Bible) doesn’t get off easy — rather than speaking of the “marvellous” works of God in the Sermon on the Mount, the Google bot claims, He should have used the words “great,” “wonderful,” or “lovely.”

Google told Motherboard that this feature is “continuously evolving.”

“Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases,” a Google spokesperson said. “Our technology is always improving, and we don’t yet have (and may never have) a complete solution to identifying and mitigating all unwanted word associations and biases.”

Being more inclusive in our writing is a good goal, and one worth striving for as we string these sentences together and share them with the world. “Police officers” is more precise than “policemen.” Cutting phrases like “whitelist/blacklist” and “master/slave” from our vocabulary not only addresses years of ingrained bias in tech terminology, but forces us as writers and researchers to be more creative in how we describe things. Changes in our discourse, such as swapping “manned” spaceflight for “crewed” spaceflight, are attempts to correct histories of erasure of women and non-binary people from the industries in which they work.

But words mean things; calling landlords “property owners” is arguably worse than calling them “landchads,” and half as accurate. It’s a change made for the benefit of people like Howard Schultz, who would rather you not call him a billionaire but a “person of means.” At a more extreme end, if someone wants to be racist, sexist, or exclusionary in their writing, and wants to put that in a Google Doc, they should be allowed to do so without an algorithm trying to sanitize their intentions and confuse their readers. That’s how we end up with dog whistles.

Thinking and writing outside of binary terms like “mother” and “father” can be helpful, but some people are mothers, and anyone who writes about them should be able to say so. Some websites (and computer parts) are simply called Motherboard. Trying to embed self-awareness, sensitivity, and careful editing into people’s writing using machine learning algorithms — already deeply flawed and frequently unintelligent pieces of technology — is misguided. Especially when it comes from a company that is grappling with its own internal reckonings over inclusion, diversity, and the mistreatment of workers who advocate for better ethics in AI.

These suggestions will likely improve as Google Docs users respond to them, putting in countless hours of unpaid labor to train the algorithms, just as we already train its autocorrect, predictive text, and search suggestion features. Until then, we’ll have to keep saying no, we really do mean Motherboard.
