INFORMATION ABOUT THE PROJECT
SUPPORTED BY THE RUSSIAN SCIENCE FOUNDATION

This information is prepared on the basis of data from the RSF information-analytical system; the substantive content is presented as provided by the authors. All rights belong to the authors; the use or reprinting of these materials is permitted only with the authors' prior consent.

 

COMMON PART


Project Number 19-18-00206

Project title Political news on Russia and its neighbors on social media: major content features, factors of trust and news truthfulness detection by users of different countries

Project Lead Koltsova Elena

Affiliation National Research University Higher School of Economics, City of Saint-Petersburg

Implementation period 2019 - 2021 

Research area 08 - HUMANITIES AND SOCIAL SCIENCES, 08-301 - Theory, methodology and history of sociology, sociological research methods

Keywords Trust, deception detection, fake news, news frames, social networking sites, Russia, online experiment, international conflict


 

PROJECT CONTENT


Annotation
In recent years, fake news scandals, coupled with resonant leaks of personal data from social networking sites (SNSs), have shown the importance of SNS-distributed news both for national politics and for international relations. In this research, fake (fraudulent, fabricated or deceptive) news items are news texts containing false information. Electoral outcomes and large-scale international changes have been attributed to SNS-supported "fake news" campaigns, and so have the rapid ups and downs of the international reputations of political figures, organizations and entire countries (including Russia). In such a situation it is critically important to understand to what extent deceptive news can really deceive audiences in an SNS-mediated environment that has had a profound effect on news production, dissemination, consumption and perception (including trust). First, this environment has removed a number of important legal, technical and professional barriers to (fake) news production and targeted dissemination, which in turn has influenced the processes of audience trust formation. Second, it has made news consumption increasingly mediated by users' personal online networks, which may reinforce in-group trust but may simultaneously undermine trust in outside information and the overall accuracy of fraud detection. To determine the overall potential impact of fake news on society, we need to understand how news consumers navigate the growing ocean of virtually unverifiable information, how susceptible they are to fraudulent news, what influences their ability to detect such news, and in particular whether this ability is related to dominant political images and in-group closure. This task is especially important in the context of international conflicts, when the media of the conflicting countries are weaponized against one another and the populations of those countries presumably receive less accurate news. Russia has found itself involved in quite a number of such conflicts, including some with its nearest neighbors. The media spaces of former Soviet states – even those that have never had any sharp conflict with Russia – have become increasingly separated from the Russian media space over the past few decades. However, so far no research has examined how the mutual representation of post-Soviet countries in news is related to audiences' perception of news credibility and their ability to detect deception in news. Very little research is found in neighboring areas as well. For instance, to the best of our knowledge, there is no research on mutual news representations of European metropolitan powers and their former colonies. We are not aware of any research on fake news perception in the context of international relations, and particularly of international conflicts. A growing but still thin body of academic research aims at automatic detection of fake news and of algorithmized news dissemination in general, while applied research develops online fact-checking systems – again, outside the context described above. Audiences' trust in news and media and readers' views on the credibility of news sources have long been important topics in media studies, political science, sociology and psychology. However, very little has been done to measure users' susceptibility to deceptive news or to identify the factors that influence it, especially in periods of international tensions.
Higher-level theories of trust and deception detection have mostly conceptualized either generalized trust in institutions or deception and trust formation in interpersonal relations, but not trust in mediated and professionally produced information or strategies for dealing with possible deception in it. Our research aims to bridge this gap by developing (or significantly contributing to the development of) a middle-range, testable theory of trust, of news credibility perception by social media audiences, and of their deception detection abilities – both within and outside a situation of international tensions. This theory is bound to be interdisciplinary and will be grounded in sociology, media and communication studies, and social psychology. To verify the components of our theory, we will present SNS users from three post-Soviet countries with examples of both truthful and untruthful news and thus examine the factors that influence users' assessments of news truthfulness and their ability to detect deception. Apart from Russia, the sample will include one country that has never had sharp conflicts with post-Soviet Russia (presumably Kazakhstan) and one country that has had such a conflict (presumably Ukraine). We plan that news presented to Kazakhstani and Ukrainian users will be related to Russia, while news presented to Russian users will be related to either Kazakhstan or Ukraine, respectively, although certain elements of the experimental design may change. We will formulate a set of interrelated hypotheses about the factors influencing users' ability to detect fake news and their trust in news. As previous psychological research on interpersonal communication finds that the outcome of a deception judgment depends most of all on the credibility of the message or of the source, our hypotheses are as follows. First, controlling for age, education, interest in politics and news consumption experience, we will test the effect of a user's network structure. We assume that larger, more clustered and more heterogeneous networks will be positively related to accurate fake news detection and negatively related to trust, as they provide access to alternative sources and opportunities to expose lies in what was previously perceived as credible sources. Of special importance will be the presence of users from the neighboring country in a user's network. Second, while the perceived plausibility of a news story is expected to influence judgments of its truthfulness everywhere, in a situation of international tensions, when negative images of an adversary country dominate national news ecosystems, anything that contradicts these dominant images will have a much greater chance of being perceived as "fake news". Likewise, as a conflict is expected to undermine readers' trust in the news sources of the opponent country, this will also play an important role. However, it is plausible that a user's inclination to oppose the current government might have an inverting effect on the perception of news truthfulness. Despite the aforementioned shortage of relevant research, our hypotheses are based on existing middle-range theories from a number of disciplines. The first group of theories consists of conceptualizations of trust in media institutions based on broader sociological theories of trust; they explain the major factors of trust generation that are to be tested in our specific context.
The second group consists of media studies theories of selective news consumption employing such concepts as selective exposure, echo chambers and filter bubbles, which may explain further reinforcement of frames that are already dominant in a given community. The third group unites theories from political communication and media research dealing with such concepts as news frames, media bias and political stance in news. Finally, we ground our hypotheses in a thin stream of empirical research on fake news that, due to its novelty, has not yet produced any substantial theories, yet has generated important empirical evidence helpful for hypothesis generation. To test our hypotheses, we will first collect a sample of political news mentioning Russia and the two other selected countries from the SNS pages of several leading news media in all three countries. Describing their main content features will be an important subtask. First, it will be aimed at mapping the prevalent topics and frames in the coverage of the selected countries. This will be achieved with a combination of automated text analysis, such as topic modeling, and manual content analysis. Second, this task will assist in the construction of a news sample for further experiments that will include news both aligning with and contradicting the dominant images of the respective countries. Next, this sample of news will be presented to SNS users via a gamified application that will (1) ask respondents to assess the truthfulness of a set of news items and to answer a limited number of additional questions, (2) give users gamified feedback, and (3) ask them for access to their SNS account data (the third step is optional and will depend on user trust). The data from user accounts will be used to obtain available socio-demographic information, user interest in news consumption (via such proxies as repost activity), privacy settings, friendship network structure and other features. Additional questions will collect information about user age, education, political views / attitude to the current government and other characteristics. All these data will be used in models explaining human assessments of news truthfulness. All experiments will comply with academic ethical norms, SNS privacy policies and relevant legal norms. As a result, we will obtain, first, a broad picture of the mutual representations of Russia and its two neighboring countries in each other's news, thus identifying problematic zones. Second, we will obtain statistically verified knowledge about the factors influencing SNS users' ability to detect fake news and their perceptions of news credibility, in situations of both conflict and non-conflict international relations. This knowledge will contribute to a middle-range theory of trust and deception detection by news audiences; it can also be used for countering disinformation campaigns, including through more efficient dissemination of truthful news and balanced news frames.
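As a rough illustration of the automated part of this step, the sketch below fits a small topic model to a toy corpus with scikit-learn; the toy documents, library choice and parameters are placeholders rather than the project's actual pipeline, which applied topic modeling to full news collections.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy lemmatized news snippets; the real corpora contain millions of articles.
news_texts = [
    "election president candidate debate vote",
    "economy sanctions trade currency export",
    "election observers violations polling station",
    "president visit summit neighbor country talks",
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(news_texts)

# Fit a topic model; the number of topics here is illustrative only,
# and the resulting topics would be labeled manually by coders.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_terms)}")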

Expected results
This project aims at obtaining new fundamental knowledge about human perception of news credibility and the factors influencing this perception that, ultimately, define the human ability to detect fake news. The main theoretical result will thus be a middle-range theory of trust and deception detection in mass-distributed, professionally produced messages in a networked online environment. It will complement existing theories of trust and deception in interpersonal offline communication and relate them to existing network research and media studies. This result will be obtained in the context of political SNS-mediated communication in the post-Soviet space and will incorporate media frames as an important factor, thus extending existing news framing theories. The validity and reliability of the results will be ensured by an innovative experimental design supplemented with advanced text analysis – that is, the research will not only comply with the most rigorous methodological criteria, but will also make a visible contribution to methodology in the social sciences. More specific results will broaden our empirical knowledge in several sub-domains of sociology (including the sociology of media and communication and political sociology) and social psychology. First, we will learn how the alignment of a news item about an adversary/partner country with the news frame dominant in the respondent's country influences the respondent's perception of credibility. Second, we will find out how credibility perception is influenced by the country of the news source and see whether it makes a difference if this is the respondent's own country or not. Third, we will see whether a respondent's lack of support for the current government inverts the influence of the previously listed factors on news credibility perception. Fourth, we will find out how the structure and, above all, the heterogeneity of a respondent's online network influences his/her perception of news credibility; of special importance will be the presence of friends from a neighboring country in a user's network. Fifth, we will be able to measure the general ability of respondents to detect fake news in connection with their country of origin, level of education and experience of news consumption. Furthermore, it will be the first study investigating news credibility perception during an evolving international conflict compared to a non-conflict situation. One of the major novelties of the research is that it will investigate news consumers who receive news via social networking sites (SNSs), because it is SNSs that have fundamentally changed the nature of news production and dissemination and have allegedly opened the Pandora's box of fake news. As research on the SNS mode of news consumption is very young anywhere in the world, the regularities we find will be reported for the first time. Knowledge about human perception of news credibility in the new domain of SNSs also has visible practical value. It can be used for countering disinformation campaigns, including those aimed at Russia. This can be achieved, for instance, through targeting the most susceptible groups or their friendship networks, and through new approaches to trust production and negative frame repairing. In this context, it is significant that Russia is at the center of this research, as it has experienced visible difficulties with maintaining its international reputation in recent years.
It is thus of special value that, based on large-scale news text analysis, we will additionally obtain a comprehensive picture of how the news media of Russia's neighbors cover Russia, what the main topics and frames are, and what the problematic zones are. This knowledge can be directly used for frame repairing and, more generally, for more efficient international news policies.


 

REPORTS


Annotation of the results obtained in 2021
In the final year of the project, we completed all of the remaining tasks related to data analysis and interpretation of the results. First, we analyzed and interpreted the data collected in the main experimental study conducted in the previous year. We ran simple regression models on the full dataset to determine the feasibility of employing mixed regression modelling, then developed the main mixed model testing all our hypotheses in a targeted fashion. Second, we ran an exploratory analysis on the subsample of data generated by Russian-speaking users of VK for whom we had obtained the composition of friendship networks and several indicators of online activity. We performed a descriptive analysis of these data and generated preliminary inferential findings about the effects of users' social networks and patterns of online activity on the accuracy of fake news recognition and perceptions of news credibility. We used the results of these two analyses to formulate a middle-range theory of online news credibility and fake news detection. Finally, we ran an additional pilot experiment to test a number of newly articulated hypotheses related to the effects of news literacy and confirmation bias on people's ability to recognize fake news. Following the main experiment conducted in the previous year, we obtained a large dataset consisting of nine subsamples collected from more than 8000 users across three countries (Russia, Kazakhstan, Ukraine) and two social platforms (Facebook and VK). We ran descriptive analyses and preliminary regression modelling of each of the subsamples to determine the feasibility of specifying a mixed-effects model, which proved to match the crossed and nested structure of our data well, in addition to demonstrating higher quality (as measured by intraclass correlation coefficients) than separate simple regression models. Finally, we developed the main mixed regression model utilizing a custom contrast matrix, which allowed us to test all the hypotheses related to news credibility and fake news recognition simultaneously. We found that, across all subsamples, users on average rated news items representing a dominant frame as more credible than those representing an alternative frame (b = 0.31, p < 0.001). Real news items were reliably perceived as more credible than fabricated ones across the board (b = 0.30, p < 0.001). There was also a significant overall difference in the effect of frame between respondents seeing news about neutral versus adversary countries (b = 1.27, p < 0.05), as the presence of conflict increased the credibility of dominant-frame news while dampening the credibility of alternatively framed news. Notably, the relationship between frame and conflict also varied significantly between true and false news (b = -1.82, p < 0.01). For true news, in both conflict and no-conflict situations the dominant frame was associated with significantly higher credibility ratings. The pattern was different for fake news: while the frame had no effect on the perceived credibility of fakes in a no-conflict situation, in the presence of conflict fakes consistent with the dominant frame (i.e., casting the adversary in a negative light) were perceived as significantly more credible than fake news consistent with the alternative frames. As expected, the level of support for respondents' own government moderated the effect of frame in some national subsamples but not in others.
Russian and Kazakhstani respondents with the highest levels of government support tended to perceive dominantly framed news as significantly more credible than alternatively framed news, while for those who were the least supportive of their governments there was no significant difference in the perceived credibility of these two types of messages (b = 0.05, p < 0.001). In contrast, the difference between the credibility scores of dominant-frame and alternative-frame news reported by Ukrainian users did not vary across levels of government support. Ukrainian respondents who chose to read the news in Ukrainian were significantly more likely to rate news representing the dominant frame as true (b = -0.14, p < 0.01) than those Ukrainian users who chose to receive the news in Russian. The manipulation of the news source's affiliation (domestic media vs. the media of the country covered in the news) had no significant effect on credibility ratings, possibly due to the insufficiently strong operationalization of the source-origin construct that we used in order to preserve validity. We also analyzed the data collected from a subsample of VK users. Unlike Facebook, at the time of the study VK allowed us to collect additional personal data from users who agreed to provide them, and we obtained information on friend lists and on-platform community subscriptions from those respondents who consented. In the final year of the project, we fitted a series of preliminary regression models on these data, without yet building comprehensive models. These analyses suggested a positive association between a user's accuracy and having people residing in the country covered in the news on their VK friend list (b = 0.33, p < 0.01). A higher number of news-related VK subscriptions was associated with higher overall news credibility ratings (b = 0.22, p < 0.05) and lower accuracy of false news recognition (b = -0.14, p < 0.05). These preliminary findings warrant further analysis using comprehensive statistical models, which we include in our grant extension application. In the final year of the project, a new pilot experiment was designed and executed. An additional analysis of the scholarly literature on news credibility and fake news recognition conducted in 2021 suggested a powerful role that both media literacy skills and confirmation bias can play in determining these outcomes. At the same time, we did not identify empirical studies investigating how these factors play out among arguably the most news-literate professional group: media employees. In view of these considerations, we designed and ran a pilot study that relied on a sample of both media professionals and regular social media users. Based on the analysis of the existing literature, we generated the following expectations: 1) respondents with experience of working at a media organization will be better at fake news recognition than regular users; 2) news items' alignment with respondents' preexisting beliefs will increase their credibility, while inconsistency with prior beliefs will reduce it (confirmation bias); 3) for those with experience of media work, the effect of confirmation bias will be smaller than for those without such experience. To test these expectations, we adjusted our online experimental tool for the new set of variables, presentation logic and stimulus material. The pilot utilized a small sample of approximately three hundred social media users that included both current media employees and people without any media experience.
We had respondents assess the credibility of 12 news items that varied in veracity, valence, and the valence of a social comment served alongside each news item. Contrary to our hypothesis, a preliminary analysis of the results suggested no significant difference in fake news recognition between media professionals and regular users. Confirmation bias was a significant predictor of news credibility, and its effect did not differ significantly between media professionals and regular users. These initial findings, obtained using a novel approach to teasing out the effects of media literacy on news credibility, can be seen as unexpected given the current state of the literature in the field. This calls for further comparative investigation of the effects of media literacy and news bias on a larger sample of social media users. Thus, all tasks scheduled for 2021, including the analysis of the results of the main experiment and the design and execution of a new pilot study, were completed. Specific deliverables are available at the following links:
1. A sample portion of the dataset collected in the main experiment: https://topicminer.hse.ru/rsf2021/sample-data/ (the full dataset will be made publicly available after all planned analyses are concluded).
2. Corpora of media texts generated by each of the three countries' top-30 publications: 1) full collections of news texts referencing the target countries: https://topicminer.hse.ru/rsf2019/news-collections/ and 2) news about presidential elections in the target countries: https://topicminer.hse.ru/rsf2019/topic-modelling/subcorpora/
3. Collections of the stimulus material used in the main experiment and the 2021 pilot experiment: https://topicminer.hse.ru/rsf_final/news-stimuli/
4. Statistical models used in data analysis: https://topicminer.hse.ru/rsf2021/regression-models/
5. Other materials are available on the project webpage at the website of the Laboratory for Social & Cognitive Informatics: https://scila.hse.ru/fakenews
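As an illustration of the modelling approach described above, the sketch below fits a simplified mixed-effects regression (random intercept per respondent only) on synthetic data; all variable names, the generated data and the resulting coefficients are hypothetical, and the project's actual model additionally used a custom contrast matrix and a richer crossed random-effects structure.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_resp, n_items = 200, 8

# Synthetic long-format data: each respondent rates eight items.
df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n_resp), n_items),
    "frame": rng.integers(0, 2, n_resp * n_items),      # 1 = dominant frame
    "veracity": rng.integers(0, 2, n_resp * n_items),   # 1 = real news
    "conflict": np.repeat(rng.integers(0, 2, n_resp), n_items),  # country pair
})
df["credibility"] = (
    3 + 0.3 * df["frame"] + 0.3 * df["veracity"] + rng.normal(0, 1, len(df))
)

# Random intercept per respondent; item-level random effects are omitted here
# for simplicity, although the reported model crossed respondents and items.
model = smf.mixedlm("credibility ~ frame * veracity * conflict",
                    data=df, groups=df["respondent"])
result = model.fit()
print(result.summary())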

 

Publications

1. Bryanov K., Vziatysheva V. Determinants of individuals' belief in fake news: A scoping review. PLoS ONE, Vol. 16, No. 6, pp. 1-25 (year - 2021) https://doi.org/10.1371/journal.pone.0253717

2. Koltsova O., Judina D., Terpilovskii M., Pashakhin S., Kolycheva A. Coverage of elections in Kazakhstan and Ukraine by the Russian media [Osveshchenie vyborov v Kazakhstane i Ukraine rossiyskimi SMI]. Polis. Political Studies (POLIS. Politicheskie issledovaniya), No. 6, pp. 89-107 (year - 2021) https://doi.org/10.17976/jpps/2021.06.07

3. Vziatysheva V. How Fake News Spreads Online? International Journal of Media and Information Literacy, Vol. 5, No. 2, pp. 217-226 (year - 2020) https://doi.org/10.13187/ijmil.2020.2.217

4. Porshnev A., Miltsov A., Lokot T., Koltsova O. Effects of conspiracy thinking style, framing and political interest on accuracy of fake news recognition by social media users: evidence from Russia, Kazakhstan and Ukraine. In: Social Computing and Social Media: Experience Design and Social Network Analysis. HCII 2021. Lecture Notes in Computer Science, Vol. 12774, Springer, Cham, pp. 341-357 (year - 2021) https://doi.org/10.1007/978-3-030-77626-8_23

5. Vziatysheva V., Sinyavskaya Y., Porshnev A., Terpilovskii M., Koltcov S., Bryanov K. Testing users' ability to recognize fake news in three countries: An experimental perspective. In: Social Computing and Social Media: Experience Design and Social Network Analysis. HCII 2021. Lecture Notes in Computer Science, Vol. 12774, Springer, Cham (year - 2021) https://doi.org/10.1007/978-3-030-77626-8_25


Annotation of the results obtained in 2019
The overall goal of this research is to understand how news consumers navigate the growing ocean of hardly verifiable information, how susceptible they are to fraudulent news, what influences their ability to detect such news, and in particular whether this ability is related to dominant political images and in-group closure. This task is especially important in the context of international conflicts, when the media of the conflicting countries are weaponized against one another and the populations of those countries presumably receive less accurate news. In the first year of the project we did most of the preparatory work for the future online experiments that will allow us to solve these tasks. We started with a literature review and concluded that the concept of fake news detection should be divided into two related concepts: first, it can be understood as an act of assessing news truthfulness based on an individual's cognitive resources; second, it can be regarded as an act of trust in a news item, based on certain cues (such as news sources) whose trustworthiness is transferred to the news item. Trust transfer is usually used to save cognitive resources, so it is quite distinct from the former. According to the preliminary version of our theory, these two concepts are expected to depend on different factors, and finding these dependencies demands slightly different research designs. According to our initial research proposal, the concept that we seek to study is best described as trust. In accordance with this, in our first year we developed a set of hypotheses and a methodology to test them. We apply them to the case of online political news about neighboring countries, some of which may be in conflict with the country of the news consumer. We have operationalized trust in news as the probability of perceiving a given news item as truthful. Based on the literature review, we expect that trust will increase if the news source belongs to the country of the user, as opposed to the country covered in the news, and if the news frame is dominant in the country of the user, as opposed to an alternative frame. We also expect that these relations will be stronger for pairs of conflicting countries. Additionally, we assume that certain intervening factors may reverse these relations: thus, the less support a user expresses for his/her government, the higher the probability of trust in the sources of the neighboring country, especially if this country is in conflict with the country of the user. Likewise, if a conflict exists, the more friends a user has in the neighboring country, the more trust he or she will have in the alternative news frames. To test these hypotheses, we have developed an online instrument. It has a single back-end with an SQL database and two front-ends: one in the form of a VKontakte application, and the other in the form of a stand-alone website. The VK app will allow us to collect data from user accounts, upon user consent, in addition to experimental data. The stand-alone site will allow us to target users of Facebook – a social network whose data collection policy is very restrictive but whose users are important for our research. The instrument consists of the following parts: a front page with a teaser, an "About the project" page, the experimental part followed by the survey part, and a final page with gamified feedback and an offer to learn the truth after the end of the research.
In the experimental part, each participant is shown eight news items randomly retrieved from our database so as to fit our 2x2x2 factorial design, where the varied factors are: news truthfulness (true / fake), news frame (dominant / alternative), and news source (from the user's country / from the country covered in the news). The survey part includes blocks of questions about news consumption and fact-checking behavior, generalized trust in people and in politics, several other questions related to politics, and standard demographic questions. In 2019, we carried out a pilot experiment that included around 200 VK users and 200 Facebook users from Russia. After technical testing of our software and bug removal, we created a database of 16 news items about Ukraine and showed them to the first halves of the VK and then Facebook audiences, targeting both via the ad management systems of the respective social networking sites. After studying the factors of user churn, we permuted the order of questions and ran the second half of the pilot. Overall, user turnout was high and churn was low (around two thirds of those who clicked on the ad). About half of the VK users granted access to their accounts. On our second concept – fake news recognition understood as an act based on cognitive resources – we performed a separate literature review. We identified the following possible predictors, all of which are related to thinking styles: conspiracy thinking, magical thinking and rational thinking. For each of them we constructed a scale based on the literature, each consisting of four questions. We created a separate version of our stand-alone website that included all the previous questions and the listed scales. We then ran a separate pilot experiment on around 100 individuals from Russia. The goal of this pilot was to test the validity of the scales and to decide whether they should be included in the main experiment or constitute a separate branch of our research. The results are now being analyzed. The most difficult experimental variable to construct was the news frame. According to our plan, we carried out a large-scale study of the news content of the three countries selected for our research – Russia, Ukraine and Kazakhstan. We selected a time period that embraced presidential elections in all three countries – from January 2018 to June 2019. Using data about media audiences – both general audiences and those visiting the media's social media accounts – we formed preliminary lists of sources that were then shown to experts from all three countries. They selected the sub-samples of sources that best represent the socio-political spectra of the respective countries, around 30 sources per country. We then downloaded all news from these sources for the aforementioned period, obtaining around 6.5 million news articles. Of them, based on keywords, we formed four corpora: two corpora of Russian news related to the presidential elections in Ukraine and Kazakhstan, respectively, and two corpora of Kazakhstani and Ukrainian news related to the Russian presidential election. The corpora were restricted to the elections to make their volume suitable for further analysis. The first two corpora and the second two corpora were paired and modelled with a topic modeling algorithm with 100 topics; all the topics were then labeled by independent coders who later agreed on their labels.
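The 2x2x2 assignment described at the beginning of this step could be sketched roughly as follows; the news pool, field names and helper function are hypothetical placeholders rather than the instrument's actual back-end logic.

import itertools
import random

# Hypothetical pool: several candidate items per design cell
# (veracity x frame x source).
news_pool = {
    (veracity, frame, source): [f"item_{veracity}_{frame}_{source}_{i}" for i in range(3)]
    for veracity, frame, source in itertools.product(
        ("true", "fake"), ("dominant", "alternative"), ("own_country", "covered_country")
    )
}

def draw_stimuli(pool, rng=random):
    """Pick one random item per design cell and shuffle the presentation order."""
    items = [rng.choice(candidates) for candidates in pool.values()]
    rng.shuffle(items)
    return items

print(draw_stimuli(news_pool))  # eight items, one per design cell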
From the corpus of Russian news, we selected the top texts from each of the election-related topics, obtaining a sub-corpus of around 800 news items, and manually coded them in terms of their frames according to a specially developed codesheet. This was done by three independent coders. Later, we used this labeled dataset to select truthful news for our pilot experiment, although mapping topics and frames was a goal in itself. At the final stage of the frame analysis, we performed a comparative study of the framing of the Ukrainian and Kazakhstani elections in different types of Russian media. A similar analysis is planned for the corpus of Ukrainian and Kazakh news covering the Russian elections; to date, we have finished topic labeling on this second corpus. Our findings have been described in several papers. The literature review has been turned into the paper "Is fake that scary? False news and their role in the modern world" ["Tak li strashen feyk? Lozhnyye novosti i ikh rol' v sovremennom mire"] by Anastasia Kazun, accepted by Monitoring Obschestvennogo Mneniya. The results of the first pilot study are described in "Designing an experiment on recognition of political fake news by social media users: factors of churn" by Olessia Koltsova, Yadviga Sinyavskaya and Maxim Terpilowsky. The results of the second pilot experiment are being turned into the paper "Effects of different types of conspiracy thinking and political suspiciousness on fake news recognition by social media users: evidence from Russia" by Alexander Porshnev. Both works have been accepted for publication as full papers in the proceedings of the Human-Computer Interaction International conference, based on extended abstracts. Finally, the frame analysis of the Russian media has been written up in the manuscript "Political bias of Russian media in covering presidential elections in Ukraine and Kazakhstan" ["Politicheskaya angazhirovannost' rossiyskikh SMI v osveshchenii prezidentskikh vyborov v Ukraine i Kazakhstane"] by Daria Judina, Maxim Terpilovsky and Sergei Pashakhin, to be submitted to Monitoring Obschestvennogo Mneniya. All of these venues are indexed in Scopus.
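The report does not specify how agreement between the three frame coders was assessed; one common check is pairwise Cohen's kappa, sketched below on invented labels purely for illustration (coder names and labels are hypothetical).

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Toy frame labels assigned by three hypothetical coders to five news texts.
frame_labels = {
    "coder_1": ["dominant", "alternative", "dominant", "dominant", "alternative"],
    "coder_2": ["dominant", "alternative", "alternative", "dominant", "alternative"],
    "coder_3": ["dominant", "dominant", "dominant", "dominant", "alternative"],
}

# Pairwise agreement between coders.
for a, b in combinations(frame_labels, 2):
    kappa = cohen_kappa_score(frame_labels[a], frame_labels[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")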

 

Publications

1. Judina D., Terpilovskii M., Pashakhin S. Political bias of Russian media in covering presidential elections in Ukraine and Kazakhstan [Politicheskaya angazhirovannost' rossiyskikh SMI v osveshchenii prezidentskikh vyborov v Ukraine i Kazakhstane]. Monitoring Obschestvennogo Mneniya, - (year - 2020)

2. Kazun A.D. Is fake that scary? False news and their role in the modern world [Tak li strashen feyk? Lozhnyye novosti i ikh rol' v sovremennom mire]. Monitoring Obschestvennogo Mneniya, - (year - 2020)

3. Koltcova E., Siniavskaia I., Terpilovskii M. Designing an experiment on recognition of political fake news by social media users: factors of churn Springer, - (year - 2020)

4. Porshnev A. Effect of conspiracy thinking and trust in technology on fake news recognition by social network users: evidence from Russia Springer, - (year - 2020)


Annotation of the results obtained in 2020
Consistent with the original roadmap of the project, much of its second year was devoted to preparing and executing our main cross-national experiment on fake news recognition by social media users. To this end, we constructed four sets of news items to be used as stimulus material, each set targeting residents of one of the focal countries and covering one of the neighboring countries: 1) news covering Ukraine for the Russian audience; 2) news covering Kazakhstan for the Russian audience; 3) news covering Russia for the Ukrainian audience; 4) news covering Russia for the Kazakhstani audience. This structure of the stimulus material corresponds to our initial goal of making pairwise comparisons between Russia and Ukraine, and between Russia and Kazakhstan. Each set consisted of 24 news items and included all four combinations of fake/real stories and dominant/alternative frames in equal proportions. We drew the real stories from news media websites and constructed our own fake news items relying on the expertise of a team member with a journalistic background, while also validating all the resulting material with media experts from the respective countries. After adding one distractor news item (a coronavirus-related story) to each dataset, the total number of news items amounted to 100. The list of news items with true/false labels is available here: https://fakenewsproject.org/answers. Additionally, the news stories about Russia for Ukrainian users were translated into Ukrainian by a native speaker. In order to identify dominant and alternative frames for each of the national media systems under review, we conducted a mixed-methods investigation that combined topic modelling with qualitative text analysis. This work was started in 2019 and finalized in 2020. Unsurprisingly, in the Russia-Ukraine pair we found that the dominant news framing on both sides was highly politicized and hostile, with the Ukrainian dominant framing of Russia being more radical, especially on the most politically sensitive topics. In the Russian media, the dominant framing was largely aligned with pro-government discourse, while the alternative framing of Ukraine broadly corresponded with the picture of political reality promoted by the opposition. No such straightforward alignment was discovered in the Ukrainian news, apparently due to the greater political fragmentation and heterogeneity of the nation's media system. In the Russia-Kazakhstan pair, such alignment was also not particularly conspicuous, but mostly due to the generally more restrained tone of all frames. Overall, mutual representations in this pair were found to be relatively friendly, yet the dominant coverage of Russia by Kazakhstani media was also often marked by connotations of distancing and estrangement. At the next step, we conducted an additional pilot study to assess the expediency of including survey instruments measuring various cognitive styles in the main study, and also to test some refinements of the experimental interface. Relying on a sample of 462 college students from Russia and Kazakhstan, we identified several minor interface improvements and concluded that only the conspiracy thinking scale – and not the scales measuring rational and magical thinking styles – demonstrated sufficient reliability to be integrated into the main experiment.
The final version of our experimental interface took the form of an application for the social networking platform VKontakte (VK), designed to collect additional user data, and a standalone website used to target Facebook users without collecting their user data. We created five distinct variations of each of these two tools that differed by the subset of stimulus material attached to them: the four news sets listed above plus the Ukrainian-language translation of the news about Russia prepared for the Ukrainian audience. The version of the interface that comes with the latter news set is in Ukrainian as well. One of the versions of the experimental VK app is available at https://vk.com/app7044177_-186521038. One of the versions of the standalone website can be accessed at: https://fakenewsproject.org/fb/ru/kz/ru/. The main experiment was in the field between April 13 and May 21, 2020, with the Ukrainian-language data collection delayed until early July for technical reasons. VK targeting in Ukraine, where access to the platform is blocked by the government, predictably worked poorly. As a result, we had to halt the promotion of the Ukrainian-language version of our VK tool and discard the data already collected. Thus, ten versions of our experimental instrument produced nine subsamples of data. In total, our advertisements were shown 4 687 456 times on VK and 402 241 times on Facebook. As a result, 44 600 users from the two social platforms followed the link to a respective experimental interface. Of these, 30 702 individuals started participating in the experiment, and 10 830 completed both the experiment and the survey. Each age-gender group was targeted separately until its quota, reflecting its share in the population of the respective social platform, was filled. Once the resulting sample was also balanced according to regional quotas, our final dataset for analysis comprised 8559 complete and valid observations grouped in nine subsamples. Each subsample, except for one, contains from 902 to 1162 observations; the exception is the sample of Ukrainian VK users who were exposed to Russian-language news, with 571 observations. In accordance with our plan, in 2020 we performed only the descriptive analysis of our experimental and survey data; descriptive analysis of the data from VK user accounts and hypothesis testing are scheduled for the next year. The average accuracy of fake news detection, which could vary in the range [0; 8], was 4.63 – slightly better than a random guess (mode = 5, median = 5, SD = 1.43) and consistent with our pilot research. VK and Kazakhstani users appeared to be a little less accurate than Facebook users and individuals residing in the two other countries. The largest differences were observed in the accuracy of classification of specific news items (min = 27.3%; max = 87.8% of correct answers), which suggests the use of multilevel regression models at the next stage of our analysis. The majority of respondents stated they had never seen any of the news they were exposed to in the study (64.9%), and the overwhelming majority reported not having checked any of them while taking our fake news detection test (90.7%). At the same time, 16.1%, 17.7% and 21% of users from Russia, Ukraine and Kazakhstan, respectively, claimed that they "always" check news they consume online. Predictably, the majority (70.5%) named social media as their main source of news, followed by news aggregators and television, both attracting more than 40% of respondents in a multiple-choice setting.
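The accuracy measures reported above (per-respondent scores in the range [0; 8] and per-item shares of correct answers) can be computed along the following lines; the long-format table and column names are hypothetical placeholders for the real dataset.

import pandas as pd

# Toy long-format responses: one row per respondent-item pair.
responses = pd.DataFrame({
    "respondent":   [1, 1, 2, 2, 3, 3],
    "item":         ["a", "b", "a", "b", "a", "b"],
    "is_fake":      [True, False, True, False, True, False],
    "verdict_fake": [True, True, False, False, True, False],
})
responses["correct"] = responses["is_fake"] == responses["verdict_fake"]

per_respondent = responses.groupby("respondent")["correct"].sum()  # 0..8 in the real data
per_item = responses.groupby("item")["correct"].mean()             # share of correct answers per item

print(per_respondent.describe())  # mean, median, SD of detection accuracy
print(per_item)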
Neither the generalized trust scale nor the conspiracy thinking scale reached a sufficient level of reliability in our sample. The observed reliability of the generalized trust scale was below the conventional threshold of acceptability (Cronbach's α = 0.48), which was also the case in our pilot study (Cronbach's α = 0.58). The reliability of the conspiracy thinking scale (Cronbach's α = 0.68) was lower than in our pilot study (Cronbach's α = 0.70), where it was just enough to clear the threshold of acceptable reliability. Additionally, while all three questions on trust suggest higher levels of trust among Facebook users as compared to VKontakte users, no substantial differences in conspiracy thinking were observed between social platforms, countries or other subsamples. Distributions of politics-related responses demonstrated some trends that are important for our future analyses. Interest in politics was visibly lower among women (41%) than among men (63%), among Russians (49%) than among others (53%), and among VK users (44%) than among Facebook users (56%). Support for the government was quite low in all countries, with an average of 63% of respondents being somewhat or entirely unsupportive of their governments; the lowest level of support (28%) was observed in Ukraine and the highest (48%) in Kazakhstan. Perception of bilateral relations as hostile was predictably much stronger in the Russia-Ukraine pair than in the Russia-Kazakhstan pair, with 92% of Ukrainians and 64% of Russians assessing the relations between their countries as somewhat or very antagonistic. By contrast, 92% of Russians and 93% of Kazakhstanis evaluated Russian-Kazakhstani relations as somewhat or very peaceful. It is evident that the nature of the relations between Russia and Ukraine results in a sharp asymmetry in the evaluation of these relations by the residents of the two countries. Based on the data collected from study participants, we constructed a set of outcome variables measuring respondents' tendency to perceive news as credible; accuracy of fake news discernment; proneness to make type I and type II errors when making credibility judgements, and more. The stage is now set for using the data generated in the main experiment to address the main hypotheses of the study, which have also been refined and specified over the course of 2020. These hypotheses consider the relationships between our outcome variables and predictors beyond just news frames and news sources, such as news consumption behaviors, trust, political attitudes, and certain features of users' friendship networks.
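Cronbach's alpha, the reliability coefficient cited above, can be computed as in the sketch below; the item-response matrix is a toy example rather than project data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array of shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy responses of five respondents to a four-item scale.
toy_scale = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(toy_scale), 2))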

 

Publications

1. Kazun A. Is fake that scary? False news and their role in the modern world [Tak li strashen feyk? Lozhnyye novosti i ikh rol' v sovremennom mire]. Monitoring Obschestvennogo Mneniya: Ekonomicheskie i Sotsial'nye Peremeny, No. 4, pp. 162-175 (year - 2020) https://doi.org/10.14515/monitoring.2020.4.791

2. Kazun A., Pashakhin S. "Foreign elections": Ukrainian media news about the 2018 Russian presidential election ["Chuzhie vybory": novosti ukrainskikh SMI o vyborakh prezidenta RF 2018]. Ekonomicheskaya Sotsiologiya (Journal of Economic Sociology), - (year - 2021)

3. Koltsova O., Judina D., Terpilovskii M., Pashakhin S., Kolycheva A. Coverage of elections in Kazakhstan and Ukraine by the Russian media [Osveshchenie vyborov v Kazakhstane i Ukraine rossiyskimi SMI]. Polis. Political Studies (POLIS. Politicheskie issledovaniya), - (year - 2021)

4. Koltsova O., Sinyavskaya Y., Terpilowski M. Designing an Experiment on Recognition of Political Fake News by Social Media Users: Factors of Dropout. In: Meiselwitz G. (ed.) Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis. HCII 2020. Lecture Notes in Computer Science, Vol. 12194, Springer, Cham, pp. 261-277 (year - 2020) https://doi.org/10.1007/978-3-030-49570-1_18

5. Porshnev A., Miltsov A. The Effects of Thinking Styles and News Domain on Fake News Recognition by Social Media Users: Evidence from Russia. In: Meiselwitz G. (ed.) Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis. HCII 2020. Lecture Notes in Computer Science, Vol. 12194, Springer, Cham, pp. 305-320 (year - 2020) https://doi.org/10.1007/978-3-030-49570-1_21