
When AI Assistants Mislead Instead of Inform: The Case for TITAN

  • Writer: TITAN
  • 4 days ago
  • 4 min read

A major international study from the European Broadcasting Union (EBU) and BBC Research & Development has revealed something alarming about how AI assistants handle the news.


Image: a person using a generative AI assistant on desktop and laptop screens


Do AI assistants mislead?

Across 3,000 queries in 14 languages, researchers tested leading AI platforms on their ability to deliver accurate information about real-world news. The findings are stark. AI assistants mislead users far more often than expected:


  • 45% of responses contained at least one significant error, from factual mistakes to misleading context.

  • 81% of responses had some kind of problem, including missing or misattributed sources.

  • One-third of all answers included sourcing errors or no sources at all.



The EBU’s Director of Media, Jean Philip De Tender, put it bluntly:

“When people don’t know what to trust, they end up trusting nothing at all — and that can deter democratic participation.”

This is not a small glitch in the system; it’s a structural risk. If we rely on AI assistants to inform us, we may end up misinformed. And when that happens often enough, we lose confidence not just in AI, but in all information sources.


What the research tells us about AI, trust, and critical thinking

This isn’t just about false facts. It’s about the automation of judgement. AI assistants, by design, are built to answer, not to question. But democracy, learning, and civic trust depend on precisely the opposite: our capacity to question, to think, to weigh alternatives.

The EBU study shows how fragile that boundary is becoming.


1. Information without depth

AI tools often give fast, fluent, and confident summaries, but they miss nuance, chronology, and context. When assistants summarise a war, an election, or a health story, the meaning shifts. The EBU study shows that this happens frequently, across languages and regions.


Our TITAN response: Our work focuses on building thinking tools, not information shortcuts. We help users slow down, interrogate claims, and ask ‘what’s missing?’ before accepting what’s on screen.


2. Trust in intermediaries

For many young people, the first contact with the news is now through AI or social platforms. This means that trust in information increasingly depends on trust in the intermediary.

When that intermediary, the AI assistant, is flawed, the entire chain collapses.


Our TITAN response: Instead of telling users what to trust, we coach them to question what they see. Our model uses structured reflection and dialogue to teach people how to assess, not just absorb, information.


3. The illusion of objectivity

AI systems appear neutral, but their responses are shaped by training data, reinforcement signals, and content moderation policies, all of which embed human and institutional biases. The EBU research underscores this: answers varied not just by assistant, but by language, geography, and political context.


Our TITAN response: Our AI isn’t a truth engine. It’s a thinking companion. We expose uncertainty, highlight missing evidence, and promote awareness of bias. The goal isn’t to replace human reasoning, but to make it more resilient.


From AI that informs to AI that thinks

The EBU’s findings point to a crucial pivot for society. Whilst AI assistants mislead, the dominant model of AI that informs, built for speed and convenience, is failing to meet the deeper need for understanding. That’s where TITAN comes in.


We’re developing AI-mediated learning experiences that build users’ ability to:

  • Ask better questions rather than accept quick answers

  • Interrogate sources and claims with guided critical reasoning

  • Spot disinformation signals and contextual manipulation

  • Reflect on their own assumptions and biases


Our pilots show that when people engage with AI critically, not passively, their confidence, curiosity, and discernment increase. They start to use AI as a lens for thought, not as an oracle of truth.


The bigger picture: democracy, education, and trust

The implications of the EBU study go far beyond journalism. When AI becomes a dominant gateway to knowledge, the way it frames that knowledge shapes civic life.


If AI tools fail to represent nuance and uncertainty, or fail to cite sources accurately, we risk creating a generation of users who know answers, but not methods. In that sense, TITAN’s work is about more than disinformation or information integrity. It’s about defending critical thinking as a public good.


For educators, policymakers, and innovators, this is a wake-up call:

  • Don’t just build AI that informs, build AI that helps us think.

  • Invest in AI literacy, not only in data literacy.

  • Design for reflection, not just for relevance.


As the EBU’s findings make clear, information alone isn’t enough. Without reflection, information becomes noise. At TITAN, we’re building the next generation of tools that put thinking back at the centre of the digital experience.

