Google’s AI-Powered ‘Inclusive Warnings’ Feature Is Very Broken

A feature rolling out this month uses algorithms to suggest edits in Google Docs, but falls into the same bias traps it’s trying to prevent.
Photo by Tim Gouw from Pexels

Starting this month—21 years after Microsoft turned off Clippy because people hated it so much—Google is rolling out a new feature called “assistive writing” that butts into your prose to make style and tone notes on word choice, concision, and inclusive language. 

The company’s been talking about this feature for a while; last year, it published documentation guidelines urging developers to use accessible language, voice, and tone. The feature is rolling out selectively to enterprise-level users and is turned on by default. But it’s showing up for end users in Google Docs, one of the company's most widely used products, and it’s annoying as hell.

At Motherboard, senior staff writer Lorenzo Franceschi-Bicchierai typed “annoyed” and Google suggested he change it to “angry” or “upset” to “make your writing flow better.” Being annoyed is a completely different emotion than being angry or upset—and “upset” is so amorphous, it could mean a whole spectrum of feelings—but Google is a machine, while Lorenzo’s a writer.

A screenshot showing Google suggesting replacing "annoyed" with "upset" or "angry"

Social editor Emily Lipstein typed “Motherboard” (as in, the name of this website) into a document and Google popped up to tell her she was being insensitive: “Inclusive warning. Some of these words may not be inclusive to all readers. Consider using different words.”  

A screenshot showing Google suggesting that "Motherboard" isn't inclusive

Journalist Rebecca Baird-Remba tweeted an “inclusive warning” she received on the word “landlord,” which Google suggested she change to “property owner” or “proprietor.” 

Motherboard editor Tim Marchman and I kept testing the limits of this feature with excerpts from famous works and interviews. Google suggested that Martin Luther King Jr. should have talked about “the intense urgency of now” rather than “the fierce urgency of now” in his “I Have a Dream” speech and edited the phrase “for all mankind” in President John F. Kennedy’s inaugural address to “for all humankind.” A transcribed interview of neo-Nazi and former Klan leader David Duke—in which he uses the N-word and talks about hunting Black people—gets no notes. Radical feminist Valerie Solanas’ SCUM Manifesto gets more edits than Duke’s tirade; she should use “police officers” instead of “policemen,” Google helpfully notes. Even Jesus (or at least the translators responsible for the King James Bible) doesn’t get off easily—rather than talking about God’s “wonderful” works in the Sermon on the Mount, Google’s robot asserts, He should have used the words “great,” “marvelous,” or “lovely.”

Google told Motherboard that this feature is in an “ongoing evolution.”  

“Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases,” a spokesperson for Google said. “Our technology is always improving, and we don't yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”

Being more inclusive with our writing is a good goal, and one that’s worth striving toward as we string these sentences together and share them with the world. “Police officers” is more accurate than “policemen.” Cutting phrases like “whitelist/blacklist” and “master/slave” out of our vocabulary not only addresses years of habitual bias in tech terminology, but forces us as writers and researchers to be more creative with the way we describe things. Shifts in our speech like swapping “manned” for “crewed” spaceflight are attempts to correct histories of erasing women and non-binary people from the industries where they work.

But words do mean things; calling landlords “property owners” is almost worse than calling them “landchads,” and half as accurate. It’s catering to people like Howard Schultz, who would prefer you call him not a billionaire but a “person of means.” On a more extreme end, if someone intends to be racist, sexist, or exclusionary in their writing, and wants to draft that up in a Google document, they should be allowed to do that without an algorithm attempting to sanitize their intentions and confuse their readers. This is how we end up with dog whistles.

Thinking and writing outside of binary terms like “mother” and “father” can be useful, but some people are mothers, and the person writing about them should know that. Some websites (and computer parts) are just called Motherboard. Trying to shoehorn self-awareness, sensitivity, and careful editing into people’s writing using machine learning algorithms—already deeply flawed, frequently unintelligent pieces of technology—is misguided. Especially when it’s coming from a company grappling with its own internal reckoning over inclusivity, diversity, and the mistreatment of workers who stand up for better ethics in AI.

These suggestions will likely improve as Google Docs users respond to them, putting an untold amount of unpaid labor into training its algorithms like we already train its autocorrect, predictive text, and search suggestion features. Until then, we’ll have to keep telling it that no, we really do mean Motherboard.