
“Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or European Commission. Neither the European Union nor the granting authority can be held responsible for them.”

Making EU digital environments safer

To foster trust in today’s evolving digital landscapes, users need a cutting-edge strategy that delivers reliable, accurate information while actively combating online disinformation and removing harmful content.

Introducing AI-CODE: a groundbreaking three-year interdisciplinary EU-funded initiative designed to leverage the power of artificial intelligence and develop future-proof technological solutions tailored to pressing real-world demands.

Current Challenges

The media sector is exposed to continuous innovation at an unprecedented pace. A major driver of this ongoing evolution is the rapid development of technologies, especially those based on Artificial Intelligence (AI), which heavily influence and shape the online environment.

AI plays a critical role, both positive and negative, in how information is created and spread in the current digital era, with a potentially disruptive impact on citizens, democracy and society as a whole.

Why AI-CODE?

Large-scale disinformation campaigns are a major challenge for Europe. There is a tremendous need for innovative, AI-based solutions that ensure media freedom and pluralism, deliver credible and truthful information, and combat disinformation and harmful content.

Our mission

The main goal of the interdisciplinary AI-CODE project is to evolve state-of-the-art results from past and ongoing EU-funded research projects on disinformation into a novel ecosystem of services that proactively supports media professionals in producing trusted information with AI.

USE CASE 1

AI TOOLING FOR TRUSTED CONTENT

AI-CODE services to be tested:
• Disinformation Detection in the Next-generation Social Media
• Trustability and Credibility Assessment for Content and Sources
• Transparency Service for AI-model Cards
• Media Asset Annotation and Management (MAAM)
• Personal Companion for Understanding Disinformation for Media Professionals

USE CASE 2

USE OF AI TOOLS TO DISCOVER POTENTIAL FOREIGN INFLUENCE OPERATIONS FASTER

AI-CODE services to be tested:
• Trustability and Credibility Assessment for Content and Sources
• Disinformation Detection in the Next-generation Social Media
• Media Asset Annotation and Management (MAAM)

USE CASE 3

INTERACTIVE COACHING FOR HARNESSING GENERATIVE AI TO CREATE HIGH-QUALITY TRUSTED CONTENT

AI-CODE services to be tested:
• Generative AI Interactive Coaching Service & Dynamic Simulator for Media Professionals
• Personal Companion for Understanding Disinformation for Media Professionals
