
Disinformation and the Ongoing Challenge for Social Media Regulation

In the dynamic landscape of social media, a critical challenge has emerged: disinformation. Prominent platforms such as Twitter, Facebook, and TikTok, initially praised for fostering global connectivity, now find themselves responsible for countering the spread of false information online. As the gravity of the issue intensifies, these platforms have taken measures to combat disinformation. Nevertheless, they also face increasing scrutiny under government regulatory policies, such as the EU's Digital Services Act (DSA) package, which adds another layer of complexity to the regulatory landscape.

[Image: a young woman smiles at her phone, whilst friends in the background decide if what they are reading on their devices is true or false.]

The Pursuit of Truth

Twitter and Facebook are committed to combating disinformation and have implemented complex algorithms to detect and flag suspicious content. These algorithms leverage artificial intelligence to analyse patterns in posts, keywords, and user behaviour, with the intention of curbing the spread of harmful false information. Generally, however, algorithms have been fairly poor at contextual interpretation and can struggle with nuance, such as satire or content used for educational purposes, so the platforms have additionally enlisted communities of fact-checkers who work diligently to verify information and highlight inaccuracies. Government regulatory policies like the DSA now place additional demands on the platforms to be more proactive in their efforts to tackle disinformation.
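The platforms' actual systems are proprietary, but a toy sketch can show how such a hybrid pipeline might fit together. Everything below is hypothetical: the pattern list, scoring weights, and thresholds are illustrative stand-ins, not any platform's real signals.

```python
from dataclasses import dataclass

# Hypothetical outcomes of automated screening.
AUTO_FLAG = "auto_flag"        # high-confidence match: flag immediately
HUMAN_REVIEW = "human_review"  # borderline: route to fact-checkers
NO_ACTION = "no_action"

# Illustrative watchlist of claim fragments; real systems learn such
# signals from data rather than hard-coding them.
SUSPICIOUS_PATTERNS = ["miracle cure", "election was stolen", "5g causes"]

@dataclass
class Post:
    text: str
    author_flags: int  # prior strikes against the author (a behaviour signal)

def screen_post(post: Post) -> str:
    """Combine a content signal with a user-behaviour signal.

    A stand-in for the AI models described above: keyword matches plus
    the author's history produce a score, and borderline cases are
    deferred to human fact-checkers precisely because automated systems
    handle nuance (satire, educational use) poorly.
    """
    text = post.text.lower()
    content_score = sum(1 for pattern in SUSPICIOUS_PATTERNS if pattern in text)
    score = content_score + 0.5 * post.author_flags

    if score >= 2:
        return AUTO_FLAG
    if score >= 1:
        return HUMAN_REVIEW  # nuance is hard: let a person decide
    return NO_ACTION

print(screen_post(Post("This miracle cure works!", author_flags=3)))             # auto_flag
print(screen_post(Post("Is it true the election was stolen?", author_flags=0)))  # human_review
```

The point of the human-review tier is exactly the weakness noted above: a keyword match alone cannot tell a false claim from a question about it, or from satire.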


The European Commission (EC) states that the DSA will:

“…create horizontal rules to ensure accountability, transparency and public oversight around how online platforms shape the information space in which our societies thrive.”

Under the DSA, platforms with more than 45 million users in the EU will, among other obligations, have to reveal data about the number of users they suspend, information about their moderation staff, and their use of artificial intelligence to remove disinformation.
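As an illustration only, the disclosures listed above could be organised as a simple machine-readable record along these lines; the field names and figures are hypothetical, not drawn from the DSA text.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyReport:
    """Hypothetical shape for a DSA-style transparency disclosure.

    The fields mirror the disclosures mentioned above (suspensions,
    staffing, AI-driven removals); the names are illustrative and not
    taken from the DSA text.
    """
    period: str                   # reporting period, e.g. "2024-H1"
    monthly_active_users_eu: int  # the 45-million-user threshold applies here
    accounts_suspended: int
    moderation_staff: int
    removals_by_ai: int           # removals initiated by automated systems
    removals_by_humans: int

# Illustrative numbers only.
report = TransparencyReport(
    period="2024-H1",
    monthly_active_users_eu=52_000_000,
    accounts_suspended=18_400,
    moderation_staff=950,
    removals_by_ai=120_000,
    removals_by_humans=34_000,
)

# Publishing in a machine-readable form supports the public-oversight aim.
print(json.dumps(asdict(report), indent=2))
```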


Government Regulatory Policies and Unforeseen Challenges

With the introduction of regulatory policies like the DSA, social media platforms face a new set of challenges. The DSA aims to establish a framework for digital services, including social media platforms, to take more responsibility for content moderation and disinformation control. While this presents an opportunity for more systematic and coordinated efforts, it also raises concerns about potential limitations on freedom of expression and innovation.


Content moderation, already a delicate balancing act for the platforms, now becomes subject to the DSA's requirements, adding to the complexity of their decision-making processes. The platforms must publish reports on how they limit serious risks to society concerning freedom of speech, public health, and elections. Striking the right balance between tackling disinformation effectively and ensuring a fair environment for users becomes a more intricate task under the watchful eye of government regulators.


Whack-A-Mole and Echo Chambers

Countering disinformation often feels like a never-ending game of Whack-A-Mole, where removing one piece of misinformation leads to the emergence of new falsehoods. The platforms' algorithms, designed to prioritise content users are likely to engage with, can unintentionally create echo chambers, reinforcing users' existing beliefs and potentially exposing them to more disinformation. With the DSA's emphasis on addressing systemic issues related to disinformation, platforms must find innovative ways to disrupt echo chambers and ensure diverse perspectives are heard.
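A toy feed ranker makes the echo-chamber mechanic, and one possible counter-measure, easier to picture. This is purely a sketch with made-up engagement scores, not any platform's actual ranking algorithm.

```python
import random

# Candidate feed items for one user: (topic, predicted_engagement).
# The engagement predictions are made up; real systems learn them.
candidates = [
    ("politics_a", 0.92), ("politics_a", 0.88), ("politics_a", 0.85),
    ("science",    0.40), ("politics_b", 0.35), ("local_news", 0.30),
]

def engagement_ranked(items, k=4):
    """Pure engagement ranking: the echo-chamber failure mode.

    Sorting only by predicted engagement keeps resurfacing the topics
    a user already engages with, crowding out everything else.
    """
    return sorted(items, key=lambda it: it[1], reverse=True)[:k]

def diversified(items, k=4, explore_slots=2, seed=0):
    """The same ranker with a crude diversity quota.

    Reserves a few feed slots for topics outside the head of the
    ranking, one simple way a platform might surface diverse
    perspectives in line with the DSA's systemic-risk focus.
    """
    ranked = sorted(items, key=lambda it: it[1], reverse=True)
    head = ranked[: k - explore_slots]
    head_topics = {topic for topic, _ in head}
    pool = [it for it in ranked if it[0] not in head_topics]
    rng = random.Random(seed)
    return head + rng.sample(pool, min(explore_slots, len(pool)))

print(engagement_ranked(candidates))  # dominated by politics_a
print(diversified(candidates))        # politics_a head plus out-of-bubble items
```

Even this crude quota makes the trade-off visible: each reserved slot gives up some predicted engagement in exchange for variety, which is precisely the tension the platforms face.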


Facebook downranks content that has been proven untrue rather than removing it, whilst Twitter may ask users to remove content and TikTok states it will remove content deemed to violate its guidelines; it is not yet clear how any of these platforms will tackle the echo-chamber conundrum.
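The contrast between the three approaches can be sketched in a few lines. The policy table below merely paraphrases the public stances described above; none of it is any platform's real code.

```python
from enum import Enum

class Action(Enum):
    DOWNRANK = "downrank"        # Facebook-style: keep the post, suppress its reach
    ASK_REMOVAL = "ask_removal"  # Twitter-style: request that the user delete it
    REMOVE = "remove"            # TikTok-style: take the post down

# Hypothetical policy table paraphrasing the public stances above.
PLATFORM_POLICY = {
    "facebook": Action.DOWNRANK,
    "twitter": Action.ASK_REMOVAL,
    "tiktok": Action.REMOVE,
}

def apply_policy(platform: str, rank_score: float) -> tuple:
    """Return the moderation action and the post's resulting feed score.

    Downranking multiplies the score by a penalty so the post stays
    visible but spreads less; removal, or a removal request once
    honoured, zeroes it out. Note that none of these actions does
    anything about echo chambers by itself.
    """
    action = PLATFORM_POLICY[platform]
    if action is Action.DOWNRANK:
        return action, rank_score * 0.2  # hypothetical penalty factor
    return action, 0.0

print(apply_policy("facebook", rank_score=0.9))  # (Action.DOWNRANK, ~0.18)
```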


Collateral Damage, Accountability, and Government Oversight

As the social media platforms work to combat disinformation, they must also navigate concerns about inadvertently suppressing legitimate content or satire. The DSA's provisions may require platforms to be more transparent about their content moderation processes, which can foster trust among users. However, it also demands accountability for the platforms' actions and decisions, making the need for clear communication and avenues for users to appeal decisions even more critical.


Government regulatory policies, like the DSA, also introduce external oversight into the platforms' operations, which can have implications for freedom of expression and privacy. Striking a balance between effective content moderation and preserving users' rights becomes a more delicate challenge with the involvement of government regulators.


The Ever-Evolving Battlefield

As disinformation tactics evolve, Twitter, Facebook, TikTok, and other social media platforms face a difficult battle: countering disinformation without becoming global arbiters of 'truth'. They must therefore continuously review and adapt their strategies, as what proves effective today may be insufficient against new forms of disinformation tomorrow. Staying ahead of this phenomenon requires constant vigilance and adaptation, with government regulatory policies adding another dimension to the ever-evolving battlefield, alongside user education and support. The latter is TITAN's goal: helping users decide for themselves whether content is valid.


As the battle continues, it becomes evident that the pursuit of truth in the digital realm is an ongoing journey. To make meaningful progress, the platforms must foster open dialogue with users, government entities, and the wider community, working together to find viable solutions. By embracing transparency, accountability, and a commitment to adaptability, we can hope to create a safer and more reliable digital environment for all, while ensuring that regulatory policies strike the right balance between addressing disinformation and safeguarding fundamental rights.
