Can a single tech company alter society's understanding of truth and fairness? Zach Vorhies believes Google is doing just that—and shares the story of his whistleblowing journey.
1. The 2016 U.S. Presidential Election transformed Google’s internal culture.
In 2016, after Donald Trump's unexpected victory, Google employees reacted with dismay, treating the result not just as political news but as a crisis. Tears, canceled meetings, and heightened political discussion filled the company's offices, especially in San Bruno, California. Many employees openly questioned how such an outcome could have happened, reflecting a collective disbelief.
At a live-streamed company meeting held shortly after the election, senior Google executives, including co-founder Sergey Brin and CEO Sundar Pichai, openly criticized the result. Brin described Trump's victory as an affront, while Chief Financial Officer Ruth Porat urged employees to support one another emotionally. This level of political engagement troubled Vorhies, who found Google's corporate environment increasingly charged and one-sided.
The shift was more than emotional. Google hinted at addressing "misinformation" and "fake news," with Sundar Pichai suggesting machine learning and artificial intelligence could be key to handling post-election challenges. For Vorhies, the company's discussions felt like a departure from democratic principles, signaling a more interventionist role in shaping public discourse.
Examples
- Employees at Google's weekly meeting described the election as "deeply offensive."
- Executives emphasized the need to use AI to fight "misinformation" following Trump's win.
- Staff openly discussed resistance efforts against Trump, marking a change in company ethos.
2. Fighting “fake news” became a covert form of censorship.
In launching its campaign against false information, Google claimed it aimed to improve the integrity of online content. To Zach Vorhies, however, its approach bypassed traditional free-speech values and undermined open dialogue. Instead of exposing users to diverse perspectives, Google seemed to be deciding which views were "acceptable."
Vorhies examined internal documents and found that articles critical of Hillary Clinton or favorable to Donald Trump were frequently categorized as fake news. For example, a headline suggesting Hillary Clinton was involved in a controversial arms deal was marked "incorrect," though Vorhies argued otherwise. This approach, he believed, disproportionately impacted one end of the political spectrum.
As a tech engineer, Vorhies also recognized that such content policing would rely heavily on algorithms. He suspected that Google’s AI systems weren’t neutral but programmed to enforce specific ideological perspectives. This methodology raised ethical questions about who decides what qualifies as authentic information.
Examples
- Internal documents cited pro-Trump and anti-Clinton stories as fake news.
- Allegations concerning Hillary Clinton's foreign policy decisions were flagged as unreliable content.
- AI algorithms were employed to reduce exposure to so-called "unreliable" news sources.
3. Google intended to redefine fairness using AI systems.
In its pursuit of a "fair" internet, Google adopted an initiative called "machine learning fairness." Vorhies uncovered internal documents outlining plans to combat "algorithmic unfairness," the term Google used for search results that could reinforce stereotypes, even when those results were factually accurate.
One internal example was the search term "CEOs," which mostly returned images of men. The results reflected the actual demographics of chief executives, but Google deemed them harmful to gender representation. The company intended to adjust its algorithms so that search results presented an equitable vision of the world, not necessarily its current state.
This strategy involved modifying search outcomes to align with socially progressive ideals. Critics, like Vorhies, warned that it wasn’t about reflecting reality but reshaping it. Dissenting perspectives, particularly conservative ones, risked being marginalized in this pursuit of algorithmic fairness.
Examples
- Images of male CEOs were flagged as "algorithmically unfair" despite their accuracy.
- Google's fairness approach prioritized desired social outcomes over an accurate reflection of reality.
- Websites opposing these objectives faced de-ranking or revenue suppression tactics.
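To make the "machine learning fairness" idea more concrete, here is a minimal, purely illustrative sketch of how a ranking could be nudged toward a target demographic mix. None of it comes from Google's systems or the leaked documents: the Result data model, the rerank_for_balance function, and the 40 percent target share are assumptions invented for this example.

```python
# Illustrative sketch only: NOT Google's code. It shows the general idea the
# summary describes, i.e. reordering results so the top of the list tracks a
# chosen target distribution rather than the raw relevance ranking.
from dataclasses import dataclass


@dataclass
class Result:
    title: str
    relevance: float   # score from the underlying ranking model (hypothetical)
    group: str         # e.g. "male" / "female", echoing the CEO-image example


def rerank_for_balance(results, target_share, group="female"):
    """Greedily rebuild the ranking so that, at every prefix, the share of
    `group` stays at or above `target_share` while such results remain."""
    by_relevance = sorted(results, key=lambda r: r.relevance, reverse=True)
    in_group = [r for r in by_relevance if r.group == group]
    out_group = [r for r in by_relevance if r.group != group]
    reranked = []
    while in_group or out_group:
        count_in = sum(1 for r in reranked if r.group == group)
        # Would adding a non-group result drop the prefix below the target?
        need_in = (count_in / (len(reranked) + 1)) < target_share
        if in_group and (need_in or not out_group):
            reranked.append(in_group.pop(0))
        else:
            reranked.append(out_group.pop(0))
    return reranked


if __name__ == "__main__":
    results = [
        Result("CEO portrait A", 0.97, "male"),
        Result("CEO portrait B", 0.95, "male"),
        Result("CEO portrait C", 0.93, "female"),
        Result("CEO portrait D", 0.90, "male"),
        Result("CEO portrait E", 0.88, "female"),
    ]
    for r in rerank_for_balance(results, target_share=0.4):
        print(f"{r.title}  (relevance={r.relevance}, group={r.group})")
```

Even this toy version exposes the trade-off Vorhies objected to: whenever the target share diverges from the underlying data, lower-relevance results are promoted above higher-relevance ones.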
4. The “covfefe” incident revealed Google’s partisan behavior.
When Donald Trump posted his infamous "covfefe" tweet, it sparked public confusion and ridicule. Initially, Google's translation tools linked "covfefe" to an Arabic phrase meaning "we will stand up." Outside commentators, including major outlets like The New York Times, dismissed this explanation and suggested the tweet was simply gibberish.
Using internal channels, Google employees altered the output of their own translation tools. Rather than its original rendering, Google Translate began returning a shrugging emoji when "covfefe" was entered. To Vorhies, this quick, comedic change showed that Google was readier to mock Trump than to provide a neutral service.
This small act highlighted a larger pattern: political bias didn’t stop at public messaging but actively influenced Google’s products. Employees wielded algorithms as tools for political commentary, undermining trust in the platform.
Examples
- Google Translate initially recognized "covfefe" as an Arabic term for "we will stand up."
- Employees changed the translation to display an emoji, ridiculing Trump.
- The "covfefe" saga reflected political intervention in technology design.
5. Discovery of blacklists led Vorhies to expose Google.
Vorhies stumbled upon files labeled "blacklists" in Google's internal systems; according to him, they listed terms and websites that Google suppressed. Many sites, particularly conservative ones, were excluded from key services like Google News or demoted in search results. This revelation solidified Vorhies's decision to go public.
Blacklisted sites included True Pundit and GlennBeck.com—outlets known for their Republican leanings. These discoveries directly contradicted Google’s public claim of political neutrality. Vorhies believed this tactic wasn't about fighting misinformation but silencing dissenting political voices.
Motivated to act, Vorhies connected with Project Veritas, a right-wing investigative group known for whistleblower stories. Although his initial disclosures prompted little immediate action, the experience reaffirmed his decision to leave Google and to focus on warning the public about its blacklisting practices.
Examples
- Internal "blacklist" documents featured right-wing sites like GlennBeck.com.
- Blacklisted content was excluded from appearing in Google Now newsfeeds.
- Vorhies partnered with Project Veritas to expose Google’s alleged bias.
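As an illustration of the mechanism described above, the following sketch applies a simple domain blacklist to a news feed. It is hypothetical: the filter_feed function, the data model, and the sample feed are invented here, and only the two blacklisted domain names are taken from the summary.

```python
# Hypothetical sketch only: it does not reflect Google's actual systems or the
# leaked files. It illustrates the alleged mechanism of excluding entire sites
# from a feed regardless of the individual article's content.
from urllib.parse import urlparse

# Example blacklist; the summary names True Pundit and GlennBeck.com as sites
# that allegedly appeared on such lists.
BLACKLISTED_DOMAINS = {"truepundit.com", "glennbeck.com"}


def domain_of(url: str) -> str:
    """Extract the host from a URL, dropping a leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host


def filter_feed(articles):
    """Drop any article whose domain appears on the blacklist."""
    return [a for a in articles if domain_of(a["url"]) not in BLACKLISTED_DOMAINS]


if __name__ == "__main__":
    feed = [
        {"title": "Story A", "url": "https://www.example-news.com/a"},
        {"title": "Story B", "url": "https://www.glennbeck.com/b"},
        {"title": "Story C", "url": "https://truepundit.com/c"},
    ]
    for article in filter_feed(feed):
        print(article["title"], "-", article["url"])
```

The same effect could be achieved by lowering a site's ranking score rather than removing its entries outright; removal is used here only because it is the simplest version of the alleged behavior.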
6. Google escalated its response against Vorhies.
After Vorhies shared details with Project Veritas, Google acted swiftly, issuing him a legal cease-and-desist letter and demanding the return of company property. The tech giant alleged Vorhies unlawfully retained sensitive documents on his laptop.
However, Vorhies took matters one step further—releasing 950 pages of internal documents to the U.S. Department of Justice. He also publicly instructed Project Veritas to publish these files if something happened to him. This level of defiance turned an internal issue into a national debate.
Things escalated further when Google contacted the police: Vorhies's home was surrounded by armed officers and even a bomb-disposal robot. For Vorhies, the incident underscored the lengths to which Google was willing to go to silence criticism.
Examples
- Google issued a legal demand for Vorhies to return documents.
- Vorhies handed over 950 pages of files to authorities instead of complying.
- Police surrounded Vorhies’s home, allegedly at Google’s request.
Takeaways
- Be vigilant about how companies shape the information you consume—learn to question algorithms and their motivations.
- Prioritize tools from varied platforms when gathering knowledge to avoid reliance on any one search engine or service.
- Encourage transparency by supporting and sharing whistleblower disclosures that provide evidence of ethical misconduct.