AI is not neutral
We often talk to AI as if it were a neutral machine, smarter and fairer than we are. Stay with me until the end and I will explain why this is wrong.
Every AI system is built on human choices: which data to use, which errors to accept, which goal to optimise. In several US hospitals, triage algorithms gave Black patients lower priority because the historical data they learned from were already biased. On social media, recommendation systems push whatever keeps us online, even anger or conspiracy theories, because that is good for business, not for democracy.
Tech companies exist to grow and make a profit. Rights and fairness enter the picture only when laws, regulators or public pressure force a change. If we forget this, we end up saying “AI decided”, and the real decision makers disappear.
So we need clear rules: transparency for AI used in health, justice, finance, work and security; independent regulators with real power; and bans on uses like mass biometric tracking. And every time an AI system affects us, we should ask a simple question: who controls it, whose interests does it serve, and who answers when it gets it wrong?
#ArtificialDecisions #MCC #AD #AI
✅ This video is brought to you by: https://www.ethicsprofile.ai
👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com
