
From Einstein to AI: Combating Disinformation as a Legal, Technological and Social Phenomenon

Nicklas Bang Bådum, senior project manager at The Danish Board of Technology, has written an article about how disinformation and generative AI create challenges for our democratic institutions. The article argues that to fight disinformation, we must approach the challenges created by generative AI as a legal, technological, and social phenomenon all at once. The original article is in Danish and can be found here. Below you will find a short recap of the article in English.

[Image: Albert Einstein, a robotic hand, and a sign saying "Disinformation"]

The article references an anecdote about Einstein, who believed the invention of the radio created a medium to reach an enlightened population, because information was made available to everyone. It would enable cultural and political exchange between countries, which could enhance social cohesion. What Einstein did not consider was that the radio also had the potential to do the exact opposite: spread propaganda and politically motivated disinformation to influence public opinion. In practice, the radio has done both.


Bådum compares this to how the internet was perceived when it was first made available to the broad population. The internet would enable intercultural exchange, thereby creating greater mutual understanding between countries and within populations. It is indeed being used for this, but just like the radio, it is also used for the exact opposite. Disinformation is a challenge that institutions like the UN and the EU pay attention to, because disinformation can mislead and incite individuals and groups. With generative AI it is now possible to mass-produce and spread disinformation in text and images. This makes it harder for societies to sustain a democratic conversation.


With the examples of the radio and the internet, the article establishes that technology by itself cannot solve what can be called human challenges, such as disinformation. As was the case with both the radio and the internet, each technology ended up reproducing the very challenge of disinformation it was hoped to overcome. The question remains how we as a society should approach the possible challenges of generative AI. In the article, Bådum presents three classic approaches to dealing with technology.

  1. Approaching it as a legal challenge, where regulation is the solution. There is general agreement that the use of AI needs to be regulated. One attempt to do this is the EU AI Act, and it remains to be seen whether it will solve the challenges we currently face with generative AI and disinformation. However, no matter how good regulation is, it will always have loopholes. Although good regulation is to some extent needed for the successful adoption of technology in society, we cannot regulate ourselves out of the challenges that arise.

  2. Another approach is to solve a technical problem with a technical solution, which can be called the "there's an app for that" approach. One example is how OpenAI, the developer of ChatGPT, launched a tool to identify AI-generated text. However, the tool correctly recognized AI-generated text in only 26% of cases, while also wrongly flagging human-written text as AI-generated in 9% of cases. In the case of disinformation, it is tempting to call for an AI solution to the problem; however, a technical definition of disinformation might not hold up when it meets the reality of disinformation.

  3. The third approach is to treat a challenge like disinformation as a social phenomenon, since disinformation is a result of human behaviour. This solution would try to influence human behaviour through information, education, and room for reflection, so that people become able to identify disinformation and thereby avoid spreading it. This is, among other things, what the EU-funded TITAN project tries to do with the help of an AI-based intelligent coach that uses large language models to ask the user critical questions and conduct a Socratic dialogue, making the user reflect on the information they are presented with.
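To make the figures in the second approach concrete, here is a minimal sketch of how the two reported percentages map onto the standard detection rate and false-positive rate of a classifier. The text counts are hypothetical, chosen only so the rates come out at the article's 26% and 9%; they are not OpenAI's actual evaluation data.

```python
def rates(true_pos, ai_total, false_pos, human_total):
    """Return (detection rate, false-positive rate).

    Detection rate: share of AI-written texts correctly flagged.
    False-positive rate: share of human-written texts wrongly flagged.
    """
    return true_pos / ai_total, false_pos / human_total

# Hypothetical evaluation set: 1000 AI-written and 1000 human-written texts.
tpr, fpr = rates(true_pos=260, ai_total=1000, false_pos=90, human_total=1000)
print(f"Detection rate: {tpr:.0%}, false-positive rate: {fpr:.0%}")
# → Detection rate: 26%, false-positive rate: 9%
```

The point of separating the two numbers is that they trade off against each other: a detector can always flag more texts to raise its detection rate, but only at the cost of wrongly accusing more human authors.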

None of these approaches can stand alone, and this is important to remember for the TITAN project. The TITAN system cannot by itself meet the challenges of disinformation: firstly, because those who have already bought into conspiracy theories will never use such a system, and secondly, because it places the responsibility for dealing with disinformation solely with the individual, which will never solve a societal problem.


The challenges of disinformation are transversal challenges that require interdisciplinary, multi-actor approaches and perspectives. Each of the approaches mentioned here is based on specific expertise; if we are to solve the challenge, these must be brought together. If we are to solve society's major challenges, we must dare to use the whole of society to do so.
