SocialAssess - An Assessment Framework for “Information-Handling” Practices on Social Media

Overview

Online presence and the spread of misinformation have long been significant social and public health concerns. During the COVID-19 pandemic, for example, social media consumption rose sharply, and information about COVID-19, including inaccurate and misleading content, spread rapidly across platforms. Such content appears to have fueled belief in conspiracy theories, as social media use is associated with stronger misinformation and conspiracy beliefs. Misinformation about COVID-19 vaccines compounds the already high levels of misinformation about vaccines in general, as well as about other issues such as climate change, where false claims often assert that evidence-based scientific approaches are harmful or ineffective. The viral spread of digital misinformation has become so serious that the World Economic Forum considers it among the main threats to human society. Training and educating people to recognize misinformation, verify sources, and make responsible decisions about sharing information is therefore of paramount importance.

Misinformation rides the greased algorithmic rails of powerful social media platforms, traveling at velocities and in volumes that make it nearly impossible to stop and making information warfare an unfair fight for the average internet user. Researchers have argued that the way we are taught from a young age to evaluate and think critically about information is fundamentally flawed and out of step with the chaos of the current internet: the goal of disinformation is to capture attention, while critical thinking demands deep attention.

We envision designing, developing, and evaluating a living, learning, AI-based framework that uses personalized and gamified learning to teach individuals how to assess information on social media. Realizing this vision requires several major modules and components. In this project, we aim to develop the essential ones: intelligent, data-centric assessment tools that identify "information-handling" practices of individual users that may lead to spreading disinformation. The assessment is performed by monitoring individuals' interactions and responses in their social media activities (e.g., on Twitter). Specifically, these tools monitor users' "bad" practices in response to social media posts and newsfeed items that could spread disinformation. The tools will be integrated seamlessly into individuals' daily social media activities so as not to interfere with their typical interactions or alter their typical behavior. The tools developed in this project are essential to our vision, since they will enable us to test, evaluate, and assess individuals.
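To make the idea of flagging "bad" information-handling practices concrete, here is a minimal illustrative sketch in Python. The event schema (`action`, `opened_link`, `seconds_on_post`, `source_verified`) and the heuristics (sharing without opening the link, sharing after a very short dwell time, sharing from an unverified source) are hypothetical assumptions for illustration, not the project's actual design or implementation.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    # Hypothetical fields; the project's real event schema is not specified here.
    action: str             # e.g., "share", "like", "reply"
    opened_link: bool       # did the user open the linked article before acting?
    seconds_on_post: float  # dwell time on the post before the action
    source_verified: bool   # is the post's source a verified/known outlet?

def flag_risky_practices(event: Interaction) -> list[str]:
    """Label 'bad' information-handling practices (illustrative heuristics only)."""
    flags = []
    if event.action == "share" and not event.opened_link:
        flags.append("shared-without-reading")
    if event.action == "share" and event.seconds_on_post < 5:
        flags.append("impulsive-share")
    if event.action == "share" and not event.source_verified:
        flags.append("unverified-source")
    return flags

# Example: a user reshares a post from an unknown source after 3 seconds,
# without opening the link.
event = Interaction(action="share", opened_link=False,
                    seconds_on_post=3.0, source_verified=False)
print(flag_risky_practices(event))
# → ['shared-without-reading', 'impulsive-share', 'unverified-source']
```

In practice, such rules would run passively over a user's activity stream so that the assessment does not interrupt normal platform use, in line with the non-intrusive monitoring described above.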

People

Faculty

PhD Student

  • Mohamed Osman

Funding

  • The Commonwealth Cyber Initiative (CCI) HV-2Q23-001

Publications


Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.