
Digital Disinformation Lab

Digital disinformation is today one of the most dangerous and, at the same time, most effective tools of social influence. It can be used to shape politics and public opinion, destabilize the internal security of states, polarize societies, create panic, and undermine trust in democratic governments and institutions. The objective of the Laboratory is to analyze the conditions that foster the creation and spread of digital disinformation in various media and digital environments, such as social media, virtual reality, and online networks. The research conducted by members and collaborators of the Laboratory covers three areas: detecting digital disinformation, preventing it, and combating it.

Members:
Prof. Grzegorz Ptaszek – head of the Laboratory
Prof. Bohdan Yuskiv (Rivne State University for the Humanities in Rivne)
Dr. Rafał Olszowski
Dr. Kinga Sekerdej
Kamil Jaszczyński (AGH Doctoral School)
Grzegorz Świerk (AGH Doctoral School)

Publications by members of the Laboratory:

  1. Ptaszek, G., Yuskiv, B., Khomych, S. (2023). War on Frames: Text Mining of Conflict in Russian and Ukrainian News Agency Coverage on Telegram During the Russian Invasion of Ukraine in 2022. Media, War & Conflict. https://doi.org/10.1177/17506352231166327
  2. Ptaszek, G. (2019). Edukacja medialna 3.0. Krytyczne rozumienie mediów cyfrowych w dobie Big Data i algorytmizacji. Kraków: Wydawnictwo Uniwersytetu Jagiellońskiego.
  3. Ptaszek, G. (2021). Media education 3.0? How Big Data, algorithms, and AI redefine media education. In D. Frau-Meigs, S. Kotilainen, M. Pathak-Shelat, M. Hoechsmann, S. R. Poyntz (Eds.), The Handbook of Media Education Research. New York: Wiley-Blackwell (Global Handbooks in Media and Communication Research series).
  4. Ptaszek, G. (2019). From algorithmic surveillance to algorithmic awareness: Media education in the context of the new economics of media and invisible technologies. In J. Ratajski (Ed.), Media education as a challenge. Warszawa: Wydawnictwo ASP w Warszawie. Access: https://www.unesco.pl/sourcesmedia/mediaedaschallenge.pdf
  5. Yuskiv, B., Karpchuk, N., Khomych, S. (2021). Media reports as a tool of hybrid and information warfare (the case of RT – Russia Today). Codrul Cosminului, XXVII(1), 235-258. http://codrulcosminului.usv.ro/CC27/1/12.html
  6. Karpchuk, N., Yuskiv, B. (2021). Dominating Concepts of Russian Federation Propaganda Against Ukraine (Content and Collocation Analyses of Russia Today). Politologija, 102(2), 116-152. https://doi.org/10.15388/Polit.2021.102.4
  7. Yuskiv, B., Karpchuk, N., Pelekh, O. (2022). The Structure of Wartime Strategic Communications: Case Study of the Telegram Channel Insider Ukraine. Politologija, 107(3), 90-119. https://doi.org/10.15388/Polit.2022.107.3
  8. Olszowski, R., Zabdyr-Jamróz, M., Baran, S., Pięta, P., Ahmed, W. (2022). A Social Network Analysis of Tweets Related to Mandatory COVID-19 Vaccination in Poland. Vaccines, 10(5), 750. https://doi.org/10.3390/vaccines10050750
  9. Olszowski, R. (2021). Combating fake news with the use of Collective Intelligence in hybrid systems. In K. S. Soliman (Ed.), Proceedings of the 37th International Business Information Management Association Conference (IBIMA), 30-31 May 2021, Cordoba, Spain: Innovation Management and Information Technology Impact on Global Economy in the Era of Pandemic.

Current projects:

2023 – Disinformation on the Internet as a tool for securing the interests of the Russian Federation in spheres of influence. Analysis of fake news from the EuvsDisinfo database (Grzegorz Ptaszek, Bohdan Yuskiv)

The objective of the project is to examine how Russia uses disinformation on the Internet to influence the policies of countries within its sphere of influence by spreading fake news in the global media. The analytical material is the open database of fake news aggregated by the EuvsDisinfo project (https://euvsdisinfo.eu/pl/baza-dezinformacji/). It contains news items, identified in the international news space since 2015, that were deemed to present a biased, distorted or false picture of reality and to disseminate a significant pro-Kremlin message (although not necessarily linked to the Kremlin). The analysis is divided into two sub-periods: before the start of the second stage of the Russian-Ukrainian war (until 24 February 2022) and after its start (from 24 February 2022). The study uses advanced computational data-analysis methods.
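For illustration only, the sketch below shows what the two-period split could look like in practice, assuming the EuvsDisinfo cases have been exported to a hypothetical CSV file with "date" and "summary" columns; the project's actual pipeline and analysis methods are not specified here.

```python
# Minimal sketch of the pre/post 24.02.2022 split described above.
# The CSV file name and column names are hypothetical assumptions.
from collections import Counter

import pandas as pd

CUTOFF = pd.Timestamp("2022-02-24")  # start of the second stage of the war

cases = pd.read_csv("euvsdisinfo_cases.csv", parse_dates=["date"])

# Split the corpus into the two sub-periods used in the project.
before = cases[cases["date"] < CUTOFF]
after = cases[cases["date"] >= CUTOFF]

def top_terms(texts, n=20):
    """Very rough keyword frequency count, standing in for the full analysis."""
    tokens = " ".join(texts.fillna("")).lower().split()
    return Counter(t for t in tokens if len(t) > 4).most_common(n)

print("Before 24.02.2022:", top_terms(before["summary"]))
print("After 24.02.2022:", top_terms(after["summary"]))
```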

2022 – 2026 NLP-based Fake News detection model on the Internet using AI algorithms in the context of ANT and micro-actors (Grzegorz Świerk)

The scientific objective of the doctoral dissertation is to develop an innovative model for detecting Fake News (FN) on the Internet based on natural language processing (NLP) and AI algorithms, taking into account the theoretical assumptions of Actor-Network Theory (ANT) about the agency of human actors in the context of micro-actors. The work focuses on optimizing the effectiveness of fake news detection. The research area combines computational social science with NLP-based machine learning algorithms. The research will analyze the behavioral patterns of micro-actors with a view to improving the effectiveness of FN detection. The central research question is whether there are patterns of micro-actor behavior whose identification would improve the effectiveness of AI-based algorithms for detecting specific types of FN.
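As a rough illustration of combining linguistic and behavioral signals, the sketch below trains a simple text classifier on a toy corpus and fuses it with hypothetical micro-actor features (posting frequency, repost ratio); it is a baseline sketch only and does not implement the dissertation's ANT-based model.

```python
# Toy fake-news classification sketch: TF-IDF text features combined with
# hypothetical per-author behavioural features. All data below is invented.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "breaking: secret lab confirmed",                 # toy posts standing in for real data
    "council approves new road budget",
    "shocking cure hidden from the public",
    "university opens registration for autumn term",
]
labels = [1, 0, 1, 0]                                  # 1 = fake news, 0 = credible
actor_features = np.array([                            # hypothetical micro-actor behaviour
    [200, 0.95],                                       # posts per day, repost ratio
    [3, 0.10],
    [150, 0.90],
    [5, 0.15],
])

# Text representation: TF-IDF over word unigrams and bigrams.
tfidf = TfidfVectorizer(ngram_range=(1, 2))
X_text = tfidf.fit_transform(texts)

# Fuse linguistic and behavioural signals into a single feature matrix.
X = hstack([X_text, csr_matrix(actor_features)])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # sanity check on the toy training data
```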

2022 – 2026 Fake News detection model on the Internet based on content propagation analysis and information source credibility, using AI algorithms in the context of ANT and macro-actors (Kamil Jaszczyński)

The main scientific objective of the doctoral dissertation is to develop an advanced model for Fake News (FN) detection based on content propagation and the credibility of the information source, drawing on the assumptions of Actor-Network Theory (ANT). The model will be based on AI algorithms that take into account ANT's theoretical assumptions about the agency of non-human actors in the context of macro-actors. The work will focus on optimizing the efficiency of FN detection. The research area combines computational social science with AI machine learning algorithms using propagation analysis and author-credibility analysis. Macro-actor behavior patterns will be studied with a view to optimizing FN detection. Research questions: (a) what are the patterns of FN propagation and of unreliable authors in the network perspective of ANT; (b) to what extent can the identified patterns be used to create an effective FN detection system based on a propagation model and source reliability.
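To make the propagation and source-credibility idea concrete, the sketch below builds a toy resharing cascade and derives a few structural features a detector could use; the share edges, author names, and credibility scores are invented, and the dissertation's actual macro-actor model is not specified here.

```python
# Toy propagation sketch: a resharing cascade as a directed graph, plus
# hypothetical per-author credibility scores. All values below are invented.
import networkx as nx

shares = [("origin_account", "bot_1"), ("origin_account", "bot_2"),
          ("bot_1", "user_a"), ("bot_2", "user_b"), ("user_a", "user_c")]
credibility = {"origin_account": 0.2, "bot_1": 0.1, "bot_2": 0.1,
               "user_a": 0.7, "user_b": 0.6, "user_c": 0.8}

G = nx.DiGraph()
G.add_edges_from(shares)

# Simple structural features of the cascade.
depth = nx.dag_longest_path_length(G)            # how far the item travelled
breadth = max(dict(G.out_degree()).values())     # widest single fan-out
mean_credibility = sum(credibility[n] for n in G) / G.number_of_nodes()

print({"depth": depth, "breadth": breadth,
       "mean_credibility": round(mean_credibility, 2)})
```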