Blog post

How the Russian State Uses Western Big Tech to Promote the Kremlin’s Agenda

Lev Gershenzon

State-owned outlets or resources close to the state are not the only ones operating in the Russian information space. For example, Russian citizens still have access to Google Maps, Gmail, and YouTube, which are all parts of the Google universe. These resources, among others, still give Russians access to independent information. 


Surprisingly, however, the company is unwittingly helping to advance the interests of the Russian state.


Google in Russia


A year ago, we noticed that the Google Discover recommendation service was directing huge amounts of traffic—we are talking about tens of millions of clicks per day—to Russian government websites, propaganda resources, and resources that support Russia’s military policy.


More than 80 percent of smartphones on the Russian market are not Apple iPhones but devices that use the Android operating system developed by Google. All these devices have the Google Chrome browser installed by default. When accessing this browser, a user sees Google’s recommendations for news, texts, images, and videos under the search bar. 


Among these recommendations are links not only to content directly related to the user’s interests but also to general news sites, particularly sites covering politics. This is content the user has not requested: the links are offered before he or she has even entered a query.


Google left the Russian market after being hit with fines for blocking some state resources on YouTube and for failing to remove information banned by the Russian authorities.


Meanwhile, not only did the Russian state pressure Google administratively, it also learned how to use the company’s recommendations engine to push the Kremlin’s agenda and narratives by weaving them into personalized content.


From Active Search to the Passive Reception of Content 


If we could go back to the internet of twenty-five years ago, we would find “internet catalogs”: lists of sites systematized under headings. 


Then came the era of search engines. Early engines such as AltaVista, Yahoo, and Google built site-ranking systems, with Google soon becoming the leader in search. What emerged was a simple, universal tool for accessing any information available on the internet.


There followed a third era in information consumption. Big companies realized they were accumulating a lot of data about each user—what users searched for, what they ordered, how and where they moved. It became possible to personalize the search experience.


Users’ queries are often repetitive. As people go about their daily business, they look for goods, services, entertainment content, and news. Tech companies have decided that they can answer unasked questions because they already know what a particular user might need. 

This is true for news, too. The news can be simply offered or recommended to a user without waiting for a request. Active search gave way to the passive receipt of recommended content.


Users appreciated this opportunity, and the consumption of information supplied through recommendation services began to grow. This applies not only to search engines but also to social media.


Opaque Rules


Each social media platform has a feed consisting of other people’s posts. A user can follow someone, be “friends” with someone, and subscribe to specific publications. But each user’s feed is still shaped by the platform. The platform “decides” which of your subscriptions to show you and which friends’ posts appear in your feed. A user’s own posts are also ranked: some get wide exposure, others less.


The rules that guide the platforms in forming the feed are opaque. All social networks, streaming services, and video platforms have recommendation engines that analyze what is known about a specific user and suggest something suitable.


Platforms say that recommendation services are designed to make a person’s life easier. And indeed, knowing a user’s query history and interests helps in offering people what they might need. But we are dealing with businesses. Businesses care about user satisfaction, but they care more about profits. 


If we look at media recommendations from this perspective, we find that the qualities journalists and civic activists prize most, the credibility of content and the reliability of sources, are not key for platforms. What matters more to the platforms is whatever increases engagement: what draws a user in, evokes emotions, and makes users argue and fight. After all, the more engaged users are, the more time they spend on the networks, and the more advertising can be shown to them, which brings in more views and revenue.


We now face a conflict between the interests of society and the interests of the corporations that own social media platforms.


Algorithm Abuse


One result of this conflict is that platforms generate or disseminate fake news. For content to become fake news, falsity alone is not enough: it must be widely disseminated, and it is the platforms that can provide such reach.


Twitter (now known as X) popularized the concept of “trending” as a way to build views and shares. X and all other social media now try to detect trends as quickly as possible and show what is trending to the largest possible audience. There is a danger here: at the early stage of a message’s dissemination, trendiness can be falsified. The organized efforts of a small group of people who repost, “like,” and comment on a particular post are enough.


For a post to go viral, it is enough for a hundred or fewer users to act quickly and in concert. The post they promote will be shown to tens or hundreds of thousands of people. If the post is filled with dubious or sensational claims and carries a clickbait headline, it will spread further on its own.
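The mechanics of this manipulation can be illustrated with a toy scoring function. The formula below is a deliberately simplified, hypothetical stand-in; no platform’s actual trending algorithm is public, and the weights and decay here are assumptions made for illustration only.

```python
# A toy, illustrative engagement score: weighted interaction counts,
# decayed by post age so that fast early activity looks "trending."
# This is NOT any real platform's algorithm.

def trending_score(reposts: int, likes: int, comments: int, age_hours: float) -> float:
    """Weight engagement signals and divide by age: newer + busier = higher score."""
    engagement = 3 * reposts + 1 * likes + 2 * comments
    return engagement / (1 + age_hours)

# An organically popular post: a thousand likes accumulated over a full day.
organic = trending_score(reposts=50, likes=1000, comments=100, age_hours=24.0)

# A coordinated post: only ~100 accounts, but each one reposts, likes,
# and comments within the first hour, front-loading the rewarded signals.
coordinated = trending_score(reposts=100, likes=100, comments=100, age_hours=1.0)

print(f"organic:     {organic:.1f}")      # 1350 interactions over 24h -> 54.0
print(f"coordinated: {coordinated:.1f}")  # 600 interactions in 1h -> 300.0
```

Under this toy metric, the small coordinated group outscores a far larger organic audience simply by acting early and in concert, which is precisely the vulnerability described above.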


Enter Russia


Where does Russia fit into the picture?


The Russian state behaves aggressively. It has both a carrot and a stick. It can block unwanted content, rendering it technically inaccessible to algorithms. Conversely, using state resources, it can amplify any content and deliver it to a mass audience.


This content, artificially made popular by the state, is picked up by foreign platforms, including Google, that simply focus on the numbers. The platforms see, for example, that Tsargrad TV, a right-wing, pro-war resource, is visited by millions daily. It is a popular resource, so the algorithm “decides” to show more of the outlet’s content. This is how, by falsifying signals of genuine user popularity, the Russian state turns recommendation engines to its own purposes.


But the fundamental problem is that the spread of fakes and AI-generated content is possible because the algorithms of the platforms, particularly social media platforms, allow it. Platforms encourage the dissemination of anything that increases user engagement. This function is built into the algorithms.


The solution to the problems described above is to be found in the very foundations of the current algorithms. So far the tech companies, especially those owning social media, have shown no inclination to make any changes.

The opinions expressed in this article are those solely of the author and do not reflect the views of the Kennan Institute.

About the Author

Lev Gershenzon

Linguist and IT Manager; Founder and Head, True Story

Kennan Institute

The Kennan Institute is the premier U.S. center for advanced research on Russia and Eurasia and the oldest and largest regional program at the Woodrow Wilson International Center for Scholars. The Kennan Institute is committed to improving American understanding of Russia, Ukraine, Central Asia, the Caucasus, and the surrounding region through research and exchange.