Today I spoke about several facts that are obvious to those working in the industry but often not to the general public. I recalled that large AI platforms respond to economic incentives that do not always align with democratic resilience. Engagement generates revenue; growth generates valuations. This shapes design choices.
We have seen how easy it is to manipulate perception. The AI-generated video of the “Albanian Minister” was perceived by some as neutral, almost incorruptible evidence. In reality, behind every output there is a model trained by someone, with choices about data, filters, and objectives. And that infrastructure can be attacked, altered, or compromised. The question is not whether a piece of content is convincing, but who designed it, who controls it, and who can intervene in it.
I emphasized that ethics is often treated as communication or compliance, whereas it should be a structural constraint. A single, generic ethical framework is not enough for systems operating in different cultural and legal contexts. We need contextual, configurable, and verifiable ethical parameters.
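To make “contextual, configurable, and verifiable” concrete, here is a minimal sketch of what a jurisdiction-specific ethics profile could look like in code. All names, fields, and thresholds are illustrative assumptions, not an existing standard; the point is that the active parameters are explicit and can be fingerprinted so an auditor can verify which configuration was in force.

```python
from dataclasses import dataclass
import hashlib
import json

@dataclass(frozen=True)
class EthicsProfile:
    """A jurisdiction-specific, auditable set of ethical parameters.
    Field names and values are illustrative assumptions."""
    jurisdiction: str
    allow_automated_final_decision: bool
    max_risk_score_without_human_review: float
    protected_attributes: tuple

    def fingerprint(self) -> str:
        """Stable hash so auditors can verify which profile was active."""
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()

# Example: a hypothetical profile for an EU deployment.
eu_profile = EthicsProfile(
    jurisdiction="EU",
    allow_automated_final_decision=False,
    max_risk_score_without_human_review=0.3,
    protected_attributes=("ethnicity", "religion", "health_status"),
)

print(eu_profile.fingerprint())
```

The design choice matters more than the details: parameters live outside the model, vary by context, and any change alters the fingerprint, which makes them verifiable rather than purely declarative.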
Then there is a geopolitical dimension. States regulate at the national level, but systems operate at a global scale. Offshore hosting, VPNs, and multiple jurisdictions fragment enforcement. Without international coordination, regulation remains fragile.
Even with open-source or open-weight models, transparency about training data remains limited. We do not know with certainty what was included, excluded, licensed, or acquired. And in locally run models, guardrails can easily be removed.
Meanwhile, daily behavior is changing. More and more people, even very young ones, are turning directly to an AI system to inform themselves or make decisions.
The central point concerns artificial decisions. As long as a system merely suggests, the human formally retains control. When a system moves from assisting to acting, the risk profile changes: decisions are delegated, and with them power, responsibility, and risk.
In healthcare, systems already help set priorities for interventions and access to care. In finance, they evaluate mortgage approvals and lending terms. With domestic robots and autonomous systems, these choices can extend to physical safety. If a robot detects a fire in the house of an isolated elderly person, which action does it prioritize? Based on what criteria? On what ethical rules embedded in the model?
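The fire scenario can be made concrete with a deliberately simple sketch. The point is not the rule itself but that the priority ordering is explicit and inspectable rather than buried in model weights; the action names and ranking below are assumptions for illustration.

```python
# Explicit priority ordering: lower number = higher priority.
# Names and ranking are illustrative assumptions, not a real robot's policy.
PRIORITY = {
    "protect_human_life": 0,
    "alert_emergency_services": 1,
    "contain_damage": 2,
    "preserve_property": 3,
}

def choose_action(candidate_actions):
    """Pick the highest-priority action among those currently available."""
    return min(candidate_actions, key=PRIORITY.get)

# Fire detected, isolated person present: three actions are possible.
action = choose_action(
    ["preserve_property", "alert_emergency_services", "protect_human_life"]
)
print(action)  # protect_human_life
```

An ordering this explicit can be audited, debated, and changed per context; a priority implicit in a trained model cannot.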
These systems appear objective because they speak fluently and coherently; we tend to trust what seems to reason as we do. Yet they operate invisibly, inside rankings, moderation systems, recommendation engines, and conversational interfaces.
A single probabilistic error can seem irrelevant. But millions of decisions delegated every day redefine access to information, reputations, and opportunities, and a security flaw or a contamination in the data or the model can propagate at scale before anyone notices.
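The scale argument is simple arithmetic. With assumed numbers (neither figure is measured data), even a tiny per-decision error rate produces a large absolute count:

```python
# Illustrative arithmetic with assumed figures, not measured data.
error_rate = 0.001               # 0.1% of decisions wrong (assumption)
decisions_per_day = 5_000_000    # volume of delegated decisions (assumption)

wrong_per_day = error_rate * decisions_per_day
wrong_per_year = wrong_per_day * 365

print(f"{wrong_per_day:,.0f} wrong decisions per day")    # 5,000
print(f"{wrong_per_year:,.0f} wrong decisions per year")  # 1,825,000
```

A 99.9% accuracy figure sounds reassuring in a benchmark; at this volume it still means thousands of people affected daily.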
For this reason, I proposed that the World Council focus on high-impact decision-making systems: genuine independence from governments and platforms, transparency about funding, mandatory audits with public, repeatable criteria, and clear accountability in case of systemic harm.
The concrete proposal is an independent certification scheme dedicated to high-impact artificial decision-making systems, with repeatable public tests, mandatory real-time incident reporting, and defined escalation procedures. Certification would be a condition for operating in the public space and for earning trust.
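A “repeatable public test” can be sketched in a few lines. The decision system below is a stand-in function with an assumed rule; a real audit would run a published battery of cases against the certified system, and anyone could rerun the same cases and get the same verdict.

```python
# Minimal sketch of a repeatable public audit. The system under test
# and its approval rule are stand-ins (assumptions for illustration).
def decision_system(application):
    # Assumed rule: approve if income covers 3x the monthly installment.
    return application["income"] >= 3 * application["installment"]

# Published test cases: (input, expected outcome). Public and fixed,
# so any third party can rerun the audit and reproduce the result.
PUBLIC_TEST_CASES = [
    ({"income": 3000, "installment": 900}, True),
    ({"income": 2000, "installment": 900}, False),
]

def run_audit(system, cases):
    """Return True only if the system matches every published case."""
    return all(system(inp) == expected for inp, expected in cases)

print(run_audit(decision_system, PUBLIC_TEST_CASES))  # True
```

The essential property is reproducibility: the cases and expected outcomes are published, so certification rests on checks anyone can repeat, not on the vendor's word.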