WASHINGTON (Sinclair Broadcast Group) — A representative from Google was met with disbelief when she denied the company is using algorithms, artificial intelligence (AI) and machine learning to change people's attitudes and behaviors.
Maggie Stanphill, Google's director of user experience and digital well-being, testified before a Senate Commerce subcommittee Tuesday. The hearing looked at how large internet platforms are using "persuasive technology" to influence the public's actions and beliefs and whether the government needs to step in to check that power.
Asked directly whether Google uses "persuasive technology" to affect users' attitudes or behaviors, Stanphill flatly denied it. "No, we do not use persuasive technology at Google. In fact, our foremost principles are built around transparency, security and control of our users' data."
The denial came as Google faces criticism for alleged search engine bias and concerns about YouTube's automated video recommendations leading users, particularly children, to more provocative, polarizing and potentially dangerous content.
It also follows the release of a report from Project Veritas, a conservative undercover reporting organization, showing Google insiders suggesting the company would use its power to affect the next election.
At the same hearing, tech experts argued that the business model of Google, as well as Facebook, Twitter and others, is to keep users hooked by becoming the best at predicting how they will engage with platform content, what they want to see next and what will keep them coming back. In essence, the companies are competing to win the war for users' attention.
A confused Sen. Brian Schatz, D-Hawaii, later returned to the question of Google influencing users' behavior. "Did you say Google doesn't use persuasive technology?"
Stanphill responded, "That is correct, sir."
Stunned, he continued, "Because either I misunderstand your company or I misunderstand the definition of persuasive technology."
Stanphill, unfazed, gave a lengthy response explaining that "the whole family of companies, including YouTube" is free from such influence, and that "dark patterns in persuasive technology are not core to how we design our products at Google."
Schatz responded, "I don't know what any of that meant."
In recent years, lawmakers have woken up to the degree of power digital platforms wield, including the extent to which they shape the political environment. For the first time, in 2019, candidates are cumulatively spending more on digital advertising than on traditional television or radio ads. Even the Supreme Court has acknowledged that, in essence, social media is "the modern public square."
Congress is slowly looking at possible regulations. Democratic presidential candidates have proposed breaking up the largest companies in Silicon Valley, while the Justice Department and Federal Trade Commission have launched antitrust probes aimed at Google and Facebook, respectively.
Sen. Jon Tester, D-Mont., voicing concern about the companies' unchecked power, said he believes big tech will soon be able to affect election outcomes.
"I'm probably going to be dead and gone—and I'm probably thankful for it—when all this s--- comes to fruition, because I think that, this scares me to death," he said addressing the Google representative. "You guys could literally sit down at your board meeting, I believe, and determine who's going to be the next president of the United States."
He continued that he could be wrong and hopes he is.
Republicans have long argued that Google has an anti-conservative bias. Rigorous research is scarce, and the evidence is mostly anecdotal: conservative content allegedly being de-ranked and, more recently, conservative political commentators being barred from earning ad revenue on YouTube under the company's anti-extremism policy.
At the same time, more liberal voices are allegedly being promoted on Google and other social sites like Instagram. Sen. Ron Johnson, R-Wis., insisted that "conservatives have a legitimate concern that content is being pushed from liberal progressive standpoint to the vast majority of users of these social sites."
Neither Congress, the Justice Department nor the Federal Trade Commission has clear insight into how platforms like Google filter content through algorithms, AI, machine learning and, in some cases, human intervention.
Sen. John Thune, R-S.D., chairman of the Commerce subcommittee on technology, innovation and the internet, said Congress would examine whether forcing companies to be transparent and to explain their algorithms could be a policy option.
While senators tried to glean information about the opaque world of Silicon Valley's proprietary algorithms, Project Veritas released a video based on interviews with Google insiders and more than 100 pages of internal documents purporting to show a political agenda behind the way the company trains its artificial intelligence systems.
In a secretly recorded interview, Jen Gennai, Google's head of "Responsible Innovation," which monitors and evaluates the implementation of AI technologies, said she believed her company, the people and the news media "got screwed over" in the 2016 election. "So we're rapidly been like, what happened there and how do we prevent it from happening again?"
She went on to criticize 2020 presidential candidate Elizabeth Warren's proposal to break up big tech, "because all these smaller companies who don’t have the same resources that we do will be charged with preventing the next Trump situation. It’s like, a small company cannot do that."
Gennai also told the undercover Project Veritas reporter that Google is "training our algorithms" so that if 2016 happened again "would the outcome be different?"
Project Veritas corroborated her statement with internal documents showing a policy of training artificial intelligence systems in "Machine Learning Fairness." That "fairness" does not necessarily reflect reality; rather, it aims to produce results that reduce "unfairness" found in the real world, such as racism, sexism, bigotry and other forms of discrimination.
"They’re going to redefine a reality based on what they think is fair and based upon what they want, and what is part of their agenda," a Google whistleblower told Project Veritas.
The whistleblower said the "Machine Learning Fairness" effort evolved from discussions within the company's leadership after the 2016 election. Leadership concluded that hate, misogyny and racism helped President Donald Trump get elected, "so we need to fix that."
The insider asserted that Google is not an objective mediator of information. "They're a highly biased political machine that is bent on never letting somebody like Donald Trump come to power again," the insider claimed.
During the Tuesday hearing with Google's director of user experience, Sen. Ted Cruz, R-Texas, referred to the material Project Veritas obtained from Google, stating, "These documents raise very serious questions about political bias at the company."
YouTube removed Project Veritas' video from its platform over privacy concerns. By that point, the video had reportedly drawn more than 50,000 likes and nearly 1 million views.
Gennai responded to the report and video in a statement claiming Project Veritas "selectively edited and spliced the video to distort my words and the actions of my employer." She maintained that Google "has no notion of political ideology in its rankings."
At the Senate hearing Tuesday, Google's director of user experience, Maggie Stanphill, distanced herself from the sentiments expressed by Google insiders and the internal documents as reported by Project Veritas.