Google now warns people about unreliable or quickly changing search results
Google is testing a new feature that warns people when they search for a topic that may have unreliable results. The move is a notable step for the world’s most popular search engine toward giving people more context on breaking news that is popular online – like suspected UFO sightings or developing news stories – where the situation is still actively evolving.
The new prompt warns users that the results they are seeing are changing quickly, and says, in part, “If this topic is new, it can sometimes take a while for results to be added by reliable sources.” Google confirmed to Recode that it started testing the feature about a week ago. The company says the notice currently appears in only a small percentage of searches, which tend to involve developing, trending topics.
Companies like Google, Twitter and Facebook have often struggled to deal with the high volume of disinformation, conspiracy theories and unverified news that plagues the internet. In the past, they have largely refrained from removing content in all but the most extreme cases, citing a commitment to the values of free speech. During the Covid-19 pandemic and the 2020 US elections, some companies took the unprecedented step of suppressing popular accounts perpetuating disinformation. But the type of tag Google is rolling out – which simply warns users without blocking content – reflects a longer-term, incremental approach to educating users about questionable or incomplete information.
“When someone does a Google search, we try to show you the most relevant and reliable information possible,” said Danny Sullivan, public liaison for Google Search. “But we are getting a lot of things that are entirely new.”
Sullivan said the notice doesn’t judge whether what you see in search results is good or bad, but signals that the situation is changing and that more information could come out later.
As an example, Sullivan cited a report of a suspected UFO sighting in the UK.
“Someone had this video taken out of a Wales police report, and it received little media coverage. But there’s still not much about it,” Sullivan said. “But people are probably looking for it, maybe they’re on social media – so we can tell it’s starting to become a trend. And we can also tell that there aren’t necessarily a lot of great things out there. And we also think that maybe new things will happen.”
Other examples of trending search queries that might currently trigger the advisory are “why is britney taking lithium” and “black ufo ocean triangle”.
The feature builds on Google’s recent efforts to help users better understand the context of what they’re searching for. In April 2020, the company released a feature that tells people when there aren’t enough good matches for their search, and in February 2021, it added an “About this result” button next to most search results, showing people a brief Wikipedia description of the site they are viewing, when available.
Google told Recode that it conducted user research on the notice, which showed that people found it useful.
The new prompt is also part of a larger trend by big tech companies to give people more context on new information that could turn out to be wrong. Twitter, for example, released a slew of features ahead of the 2020 U.S. election, warning users if the information they saw has yet to be verified.
Some social media researchers have welcomed the kind of added context Google rolled out today, including Renee DiResta of the Stanford Internet Observatory, who tweeted about the feature. It’s a welcome alternative, they say, to debates over whether to ban a particular account or post.
“It’s a great way to get people thinking before taking action or spreading more information,” said Evelyn Douek, a Harvard researcher who studies online speech. “It doesn’t involve anyone making judgments about the truth or falsity of a story, but just gives readers more context. … In almost all breaking-news contexts, the first stories aren’t the most complete, so it’s good to remind people of this.”
However, questions remain about how this will all work. For example, it isn’t clear exactly which sources Google deems reliable for a given search result, or how many reliable sources must weigh in before a questionable news item loses its tag. As the feature rolls out more widely, we can probably expect more discussion of how it’s implemented.