Welcome to the world of TITAN, a ground-breaking initiative at the intersection of coaching and the fight against disinformation.
While coaching traditionally revolves around improving personal, social, and professional skills through human-human and human-machine interaction, disinformation thrives on inattention and unwarranted self-confidence among news consumers. It persists in filter bubbles on social media, where content tailored to users' beliefs traps them in echo chambers in which fake news spreads unchecked.
Fostering critical thinking amidst this digital chaos is a formidable challenge. Many parts of society are working diligently to boost attention and judgment during news consumption, but the existing technologies often demand technical expertise, making them unsuitable for mass adoption.
A one-size-fits-all coaching system for TITAN isn't feasible. Instead, we need to define a scope that encompasses user-friendly, effective AI-based solutions for facilitating critical reading and detecting deceptive information. A significant challenge lies in establishing trust: the intelligent coach must adapt to individual users' knowledge and needs while providing clear explanations for its decisions.
Trust in AI can no longer rely solely on performance statistics; it must also rest on subjective guarantees about the persuasiveness of the explanations offered to users. In this short blog post, we present the core research areas and design principles we are exploring to create our AI-based coaching tool.
Limitations and Challenges in Countering Disinformation
Disinformation is a multifaceted problem driven by various factors, including profit motives, ideological biases, and psychological manipulation. To combat it effectively, we must identify suspicious elements within each dimension, such as:
Ephemeral or prolonged discussions.
Mechanisms of misdirection.
Synthetic content manipulation.
Existing technologies are often specialised for specific tasks and therefore require specialist skills. AI struggles to analyse complex narratives that evolve over time across different platforms and languages. Key areas of research include:
Multimodal content analysis (e.g., image-text integration).
Cross-platform content and network analysis.
Linguistic and country-specific environment analysis.
Detection of synthetic media content manipulation.
Automatic detection and flagging of synthetic content.
Dynamic AI updates to match disinformation actors.
Early detection of emerging disinformation narratives.
Causal, contextual, and cultural analysis of complex statements.
Analysis of complex narratives over time.
Automatic identification of check-worthy, potentially harmful elements.
Integrated analysis with blockchain-based authentication approaches.
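One of the more approachable items above, automatic identification of check-worthy elements, can be sketched as a minimal heuristic pipeline. This is a purely illustrative stand-in: real check-worthiness systems use trained classifiers, whereas here simple surface cues (numbers, causal and attribution phrasing) flag sentences that may merit fact-checking. All function names and patterns are our own illustrative choices, not TITAN components.

```python
# Purely illustrative sketch of check-worthiness flagging.
# Real systems rely on trained classifiers; simple surface
# heuristics stand in for them here.
import re

# Hypothetical cue patterns: numeric quantities and causal or
# attribution phrasing often mark factual, checkable claims.
CUES = [
    re.compile(r"\b\d+(\.\d+)?\s*(%|percent|million|billion)?\b"),
    re.compile(r"\b(causes?|leads? to|according to|study|report)\b", re.I),
]

def is_check_worthy(sentence: str) -> bool:
    """Flag a sentence as potentially check-worthy if any cue matches."""
    return any(p.search(sentence) for p in CUES)

def flag_claims(text: str) -> list:
    """Split text into sentences and return those worth fact-checking."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if is_check_worthy(s)]

claims = flag_claims(
    "Vaccines cause autism according to one retracted study. "
    "I really enjoyed the weather today. "
    "Unemployment fell by 3% last year."
)
```

In this toy run, the first and third sentences are flagged while the small talk about the weather is not; a production system would replace the cue list with a learned model and add harm-potential scoring.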
Most available datasets are tailored to specific data types, sources, or topics, limiting their usefulness. Efforts to counter disinformation require research into:
Multimodal and multilingual datasets.
Real-time detection-enabled datasets.
Synthetic datasets to overcome real data limitations.
Legal, ethical, and IPR-compliant certification for datasets.
Regulated datasets for specific public value purposes.
Specialized datasets for disinformation detection.
AI-powered tools are typically integrated into workflows supervised by humans due to technical, ethical, and legal considerations. Trustworthy AI in the context of human-AI collaboration is paramount.
Usability and User Experience
A gap exists between AI-based methods and user interfaces for end-users. Research efforts should focus on bridging this gap, including:
Staff roles and expertise to facilitate user-side adoption.
Alternative presentation methods for AI-generated output.
Translation of AI output into user-friendly language.
Personalized approaches to match user expertise.
Dashboards for AI analysis outcomes.
Simplifying complex AI analysis results.
TITAN's mission is to establish a fully autonomous AI coaching system that counters disinformation and improves citizens' critical thinking skills during fact-checking. The Socratic Method influences the design of the artificial coach, with psychological and social considerations integrated into the AI's capabilities.
TITAN's digital coaching entity takes the form of a hybrid chatbot that combines generative conversational abilities with retrieval. Conversational skills enable natural language interaction, while retrieval supports fact-checking and insight gathering.
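A hybrid design of this kind can be sketched very roughly: a retrieval step grounds the reply in fact-checked material, and the conversational layer phrases the result as a Socratic prompt. Everything below is a toy assumption of ours, including the class names, the word-overlap retrieval, and the knowledge-base entries; TITAN's actual architecture is not specified at this level of detail.

```python
# Minimal sketch of a hybrid retrieval + generation chatbot.
# All names and data are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FactCheck:
    claim: str
    verdict: str
    source: str

# Toy knowledge base standing in for a real fact-check index.
KNOWLEDGE_BASE = [
    FactCheck("5G towers spread viruses", "false", "who.int"),
    FactCheck("the earth is flat", "false", "nasa.gov"),
]

def retrieve(query: str) -> Optional[FactCheck]:
    """Naive word-overlap retrieval; a real system would use embeddings."""
    q_words = set(query.lower().split())
    best, best_overlap = None, 0
    for fc in KNOWLEDGE_BASE:
        overlap = len(q_words & set(fc.claim.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = fc, overlap
    return best

def respond(user_message: str) -> str:
    """Compose a Socratic-style reply grounded in retrieved evidence."""
    hit = retrieve(user_message)
    if hit is None:
        return "What makes you confident in that claim? Where did you read it?"
    return (f"The claim '{hit.claim}' has been rated {hit.verdict} "
            f"(see {hit.source}). What evidence would change your mind?")

reply = respond("Is it true that 5G towers spread viruses?")
```

The key design point the sketch illustrates is that retrieval and generation are separate stages: the evidence is fetched first, so the conversational reply can cite it rather than hallucinate.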
While AI-based coaching chatbots are prevalent in healthcare, education, and customer care, there are still critical challenges:
Incorporating context to produce sensible responses.
Maintaining coherent personality in responses.
Evaluating the effectiveness of the chatbot.
Ensuring diverse and purpose-driven responses.
Designing AI Coaches Framework
The Designing AI Coaches (DAIC) framework provides a foundation for designing coaching chatbots. It is based on four principles:
Widely agreed human-coach efficacy elements.
Use of recognized theoretical models tailored to coaching.
Ethical conduct and unbiased tools.
Narrow coaching focus on specific tasks, employing Weak AI approaches.
The DAIC framework guides the design of coaching chatbots, emphasizing the importance of trust, empathy, transparency, predictability, reliability, ability, benevolence, and integrity in the coach-coachee relationship.
The Working Alliance Theory
The Working Alliance encompasses agreeing on goals, determining tasks, and building trust, appreciation, and respect between coach and coachee. AI coaches excel at task-level coaching but may struggle with problem identification, requiring clients to be aware of their core issues.
As we explore the world of AI coaching for countering disinformation and enhancing critical thinking, it's essential to understand the elements of success in human-human coaching and how they can be integrated into AI technologies. The DAIC framework and the concept of the working alliance provide valuable guidance on this transformative journey.
It's clear that AI excels in its weaker forms, particularly when producing results requires neither intricate problem identification nor long chains of reasoning, and when explanations can take a back seat. As we delve deeper into the implications of this research, a pivotal question emerges: does TITAN need to grasp the underlying problems? More precisely, within the realm of disinformation, two critical questions come to the fore:
Does TITAN need to comprehend why users propagate and disseminate disinformation?
Does TITAN need to fathom the gaps in a user's knowledge, essentially identifying the core issues to enhance their critical thinking?
Both questions carry profound implications, touching on psychology, sociology, culture, ethics, and more. They are questions that even human experts grapple with.
Regarding the first question, data collection methods like questionnaires, rule-based conversational chatbots, and AI-based chatbots may provide answers. However, when we turn our attention to the second question, we may find that it ventures beyond the scope of TITAN's mission.
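As a toy illustration of one of those data-collection methods, a rule-based conversational chatbot, the flow below walks a user through a short branching questionnaire about why they shared a story. The question wording, branching, and state names are entirely our own illustrative assumptions, not TITAN's design.

```python
# Toy rule-based questionnaire flow probing sharing motivations.
# States and questions are illustrative assumptions only.

# Each state maps a yes/no answer to the next state (or "end").
FLOW = {
    "start": {
        "question": "Did you verify this story before sharing it?",
        "yes": "sources",
        "no": "motivation",
    },
    "sources": {
        "question": "Did you check more than one source?",
        "yes": "end",
        "no": "end",
    },
    "motivation": {
        "question": "Did you share it because it matched your views?",
        "yes": "end",
        "no": "end",
    },
}

def run_flow(answers):
    """Walk the flow with scripted yes/no answers; return asked questions."""
    state, asked = "start", []
    for answer in answers:
        node = FLOW[state]
        asked.append(node["question"])
        state = node.get(answer, "end")
        if state == "end":
            break
    return asked

questions = run_flow(["no", "yes"])
```

Even a flow this simple yields structured data about user motivations; an AI-based chatbot would replace the fixed branching with open-ended dialogue, at the cost of harder evaluation.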
In the pursuit of combating disinformation and nurturing critical thinking, TITAN may need to prioritise its efforts wisely. While understanding the motivations behind disinformation propagation is valuable, the endeavour to pinpoint the precise gaps in a user's knowledge might require a more multifaceted approach—one that may involve both AI and human expertise.
As we continue our exploration into the world of TITAN and its mission to counter disinformation, these questions and their multifaceted implications will remain at the forefront of our discussions and design approach. Together, we'll navigate the complex landscape of AI-based coaching in the fight against disinformation, forging a path towards a more discerning and critical society.