Category: Artificial Decisions

Artificial Decisions

179 – The real AI problem at work is not the tech

The real AI problem at work is not the tech. It’s employees left on their own.

When a company gives no clear rules and no approved tools, people use free ChatGPT, Gemini, or Claude to work faster. They paste contracts, client emails, spreadsheets, and internal notes.

Then the company loses control of where that data goes, how long it stays there, and who can access it. That creates GDPR and AI Act risk, with real legal and compliance consequences.

Banning AI won’t stop it. Governance will. Train staff, provide approved “no training” tools with proper data controls, set simple guardrails, and write short internal policies.

This already has a name: shadow AI. Search for it.

#ArtificialDecisions #MCC

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

177 – Our AI conversations were stolen and sold

Attention! Our AI conversations were stolen and sold to data brokers.

Inside those chats there is the most private part of our lives. People talk about anxiety, depression, therapy, addictions. People ask advice about cheating, breakups, legal problems. Everything we tell an AI is written, word by word. Stay with me, because this affects everyone. I’ll explain what happened and how to avoid it.

The case starts with a browser extension, Urban VPN Proxy, a free VPN with millions of users. I know many people who use it. That’s why this story matters.

Security researchers found that after a July 9, 2025 update, the extension started capturing what users typed into AI chatbots and the replies, plus session data. It happened directly inside the browser page, while people were typing. The collection could continue even if the VPN was turned off. There was no clear switch to stop it. The only real solution was uninstalling the extension.

The most serious part is where the data went. The collected chats were shared with companies that work on analytics and data trading. A personal conversation stopped being private and became a data point in a commercial system.

Here in the United States, data brokers are real. There is a market that buys and sells information to build profiles and predict behavior. When AI chats enter that market, privacy is gone.

There is one rule we must keep in mind. Never give an AI information from your life that you truly want to keep private. Not because AI is evil, but because databases get breached. It has happened to banks, hospitals, telecom companies, social networks. It can happen here too. What you write today could become public tomorrow. It could be shared. It could be used for blackmail. It could reach people you never wanted to read those things.

What can we do? Remove browser extensions you don’t really need. Avoid free VPNs and unknown “privacy” tools inside the browser. If you need a VPN, use a trusted system app, not a browser plugin. Check extension permissions. Full access to websites means access to chats. Use a clean browser or a separate profile for AI tools. Never share sensitive, identifying, or deeply personal information.
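
If you want to check this on your own machine, here is a minimal sketch in Python. It assumes default Chrome profile locations (which vary by system and browser, so adjust the paths) and simply lists installed extensions whose manifests request access to every website, which is the level of access that lets an extension read an AI chat page:

```python
# A minimal sketch: list locally installed Chrome extensions whose manifests
# request access to every website ("<all_urls>" or "*://*/*"), the level of
# access that lets an extension read what you type into AI chats.
# The paths below are assumptions for default profiles; adjust for your setup.
import json
from pathlib import Path

CANDIDATE_DIRS = [
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
    Path.home() / ".config/google-chrome/Default/Extensions",                      # Linux
    Path.home() / "AppData/Local/Google/Chrome/User Data/Default/Extensions",      # Windows
]
BROAD = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def broad_patterns(manifest: dict) -> set:
    """Collect host patterns that grant access to all sites."""
    found = set()
    for key in ("permissions", "host_permissions"):
        found |= BROAD.intersection(manifest.get(key, []))
    for script in manifest.get("content_scripts", []):
        found |= BROAD.intersection(script.get("matches", []))
    return found

for base in CANDIDATE_DIRS:
    if not base.is_dir():
        continue
    for manifest_path in base.glob("*/*/manifest.json"):
        try:
            manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue
        hits = broad_patterns(manifest)
        if hits:
            # Names like "__MSG_appName__" are localized placeholders; the folder
            # two levels up is the extension ID you can look up in the store.
            ext_id = manifest_path.parts[-3]
            print(f"{manifest.get('name', '?')} ({ext_id}): {sorted(hits)}")
```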

AI chats are useful. They are not a safe diary.

#ArtificialDecisions #MCC

Artificial Decisions

175 – Today you can protect your rights better with AI

Today you can protect your rights better with AI

AI can read a contract for you and flag the parts that can cost you money.

Most people sign without reading, or they read and do not understand. For years, power was in the wording: long text, unclear terms, hidden fees, auto renewals, exit penalties, exclusions. The person who writes the contract has the advantage.

AI reduces that advantage. You paste the key clauses and ask: where do I pay, how long am I locked in, how do I exit, what is the worst case? AI rewrites legal language in plain words and highlights the risky points. It does not decide for you, but it helps you see what matters before you sign.

Here in the United States you can see it with flight refunds. The rules were already public, but AI helps people check their case fast using the airline’s email and the ticket conditions.

One rule: do not share full documents with personal data. Remove account numbers, IDs, addresses, signatures. Always double check the original text before you sign or file a claim.
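
As an illustration of that rule, here is a minimal sketch of pre-paste redaction. The patterns are assumptions for demonstration only and will not catch every identifier, so always re-read what you are about to paste:

```python
# A minimal sketch of pre-paste redaction: strip obvious identifiers from a
# clause before sending it to any AI tool. The regex patterns are illustrative
# assumptions, not an exhaustive anonymizer; re-check the output yourself.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "LONG_NUMBER": re.compile(r"\b\d{8,}\b"),          # account/ID-like digit runs
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

clause = "Refunds go to IBAN DE44500105175407324931; contact j.smith@example.com or +1 415 555 0133."
print(redact(clause))
# -> "Refunds go to IBAN [IBAN]; contact [EMAIL] or [PHONE]."
```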

#ArtificialDecisions #MCC

Artificial Decisions

174 – AI is starting to smell

AI is starting to smell

Computers used to only see and hear. Now they are starting to smell the world. Stay with me until the end and I will explain what this means for perfumes, home products and robots.

Here in the United States and in Europe, researchers train AI on data from real scents and essential oils. Startups like Patina turn smells into digital data and design new scent molecules, for example cheaper versions of rose oil that do not depend on harvests or climate.

The perfume and fragrance market is worth tens of billions of dollars. When we buy detergent, candles or air fresheners, more and more often the smell will be chosen and optimized by an algorithm.

Other teams are building electronic noses. Sensors record the chemical “breath” of food, air and materials, then AI learns to detect gas leaks, spoiled food or traces of allergens. Here in the United States they are already testing these tools in factories and safety systems.
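
To make the idea concrete, here is a toy sketch of how an electronic nose can work in software: each “sniff” is a vector of sensor readings, and a classifier learns which chemical pattern it matches. The data below is synthetic, invented only for illustration:

```python
# A minimal sketch of the "electronic nose" idea: each sniff is a vector of
# sensor readings, and a classifier learns which chemical pattern it matches.
# The data is synthetic (an assumption for illustration), not a real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def sniff(profile, n):
    """Simulate n readings from a 6-channel sensor array around a chemical profile."""
    return profile + rng.normal(scale=0.05, size=(n, 6))

fresh   = sniff(np.array([0.2, 0.1, 0.3, 0.1, 0.2, 0.1]), 200)
spoiled = sniff(np.array([0.7, 0.5, 0.2, 0.6, 0.1, 0.4]), 200)  # stronger "off" channels

X = np.vstack([fresh, spoiled])
y = np.array([0] * 200 + [1] * 200)          # 0 = fresh, 1 = spoiled

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
new_sample = sniff(np.array([0.68, 0.48, 0.22, 0.58, 0.12, 0.41]), 1)
print("spoiled" if model.predict(new_sample)[0] == 1 else "fresh")
```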

This field is still young and full of limits. But AI is clearly gaining a third sense, smell. Cameras see, microphones listen, and sensors start to smell for us.

#ArtificialDecisions #MCC

Artificial Decisions

173 – Model collapse. AI is eating the data, then it decides for us

Model collapse. AI is eating the data, then it decides for us

We are filling the web with synthetic text, then we expect AI models to stay close to reality. Stay with me until the end because this hits jobs and money. “Model collapse” is what happens when a model is trained again and again on AI-made data, or on polluted data. It loses rare details, edge cases, nuance. What remains looks clean and confident, but it is flatter and more average. A 2024 Nature paper shows this decay can compound over generations of training.
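
To see the mechanism, here is a toy sketch of the loop, not a reproduction of the Nature experiment: fit a trivial “model”, sample new training data from it, refit, and repeat:

```python
# A toy sketch of the collapse loop: fit a simple "model" (here just a mean and
# a standard deviation), sample a new training set from it, refit, repeat.
# With a small sample each generation, the spread tends to shrink and rare tail
# values stop being regenerated. This illustrates the idea, nothing more.
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=0.0, scale=1.0, size=10)   # tiny "real" dataset, generation 0

for generation in range(1, 51):
    mu, sigma = data.mean(), data.std()          # "train" the model on current data
    data = rng.normal(mu, sigma, size=10)        # next generation sees only model output
    if generation % 10 == 0:
        print(f"generation {generation:2d}: std = {sigma:.4f}")
# Typical behavior: the standard deviation decays toward zero, i.e. the synthetic
# generations become flatter and more average than the original data.
```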

We already pay for it in real decisions. Hiring filters, shortlists, screening. Here in the United States, Reuters reported that Amazon shut down an internal recruiting tool because it systematically disadvantaged women: biased historical data had turned into automatic rules.

Now the loop is closing. AI written resumes, AI written job posts, AI screening. People write to please an algorithm, companies select with another algorithm. Reality drops out of the process.

Security is no longer only firewalls. It is decision integrity: logs, traceability, real human oversight. A Thomson Reuters Institute report says 91% of C-suite leaders already use GenAI or plan to within 18 months. If we ignore model collapse, we accept decisions that are more automatic and harder to audit.

#ArtificialDecisions #MCC #AD

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

172 – The war we do not see

Everyone is watching missiles. The real war is turning off the lights.

People are watching the shooting war. The real war is happening somewhere else. Stay with me to the end, because this one can hurt more, and we often do not even know who started it.

We think war means missiles, tanks, explosions. When a missile hits, we usually know where it came from. There is a radar track, a direction, a clear enemy. But the most effective attacks today can be silent.

A power plant stops working. The electric grid goes down. Water systems fail. Hospitals lose access to their software. Trains stop. Ports freeze. Markets crash for minutes or hours. No smoke, no crater, no warning. And often no one claims responsibility.

This is cyber warfare. Hacking, sabotage, malware, and attacks on critical infrastructure. It can hit civilians first. It can cost far less than a missile. It is harder to prove. And it creates fear without firing a shot.

Here is the key problem. If a missile hits, leaders respond fast. If a power plant shuts down, the first question is different. Was it a technical failure? A human mistake? Or an attack? And if it was an attack, who did it?

At the same time, more decisions are run by software. Systems that control energy, logistics, communications, and defense. They are built for efficiency, not for a constant hidden conflict. When something breaks, the line between accident and attack becomes thin.

Deterrence works when you can name the attacker and punish them. Cyber attacks break that logic. If you cannot prove who hit you, your response becomes slower, weaker, and uncertain.

This is why the silent war can do more damage. It stays below the level of a declared war, but it slowly destroys trust, safety, and stability. The front line today is digital.

#ArtificialDecisions #MCC

Artificial Decisions

171 – Speed beats rules. Why fake videos spread first

Speed beats rules. Why fake videos spread first

A fake video can be created and pushed online before anyone can verify it. Follow to the end for four quick checks you can do in seconds.

The cycle is fixed: minutes to make, instant post, algorithm boost, downloads, reposts, endless copies. Verification is human time, distribution is machine time. December 2025, Bondi Beach (Australia): altered clips spread fast, including a deepfake of the New South Wales premier saying things he never said. By the time it was debunked, it had already reached huge audiences.

MIT research on Twitter found false news spreads more than true news, with about a 70% higher chance of being retweeted. AI video is built to trigger fast emotion and fast sharing. UK, December 2025: deepfake ads used real doctors’ faces and voices to sell paid “treatments”. Removals came after reports, while copies kept reappearing. Ireland, October 2025: deepfakes mimicked RTÉ news during an election period. Some stayed up for hours, enough to be reshared widely.

Here in the United States, the pattern is the same: the viral clip travels faster than the correction.

Do this: check the account history and creation date; find the story on two trusted outlets; search the name plus “deepfake”; report and save the link.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

170 – If AI turns into a toll to pay: who really controls this infrastructure?

If AI turns into a toll to pay: who really controls this infrastructure?

We are renting artificial intelligence from a few companies in the world and it seems normal. Stay with me until the end and I will explain why this matters for our money and for our democracy.

Every time we use a large AI model, we are using private infrastructure. Here in the United States, a few companies in a few cities build these systems and decide what is allowed to appear, how content is filtered, and what “safe” answers look like for billions of people.

Training these models is very expensive. It needs huge data centers and thousands of special chips. This gives a small number of players a lot of power. They set the prices and the rules, while universities, public bodies, and small companies struggle to keep up.

In the past, big infrastructures like railways, power grids, and the early Internet were often built with public money and clear rules. Something similar can happen with AI. States, universities, and companies can create shared projects, with common computing power and shared public data, to build models that serve schools, hospitals, and public services.

For citizens, the question is simple: will AI stay a product rented from a few global giants, or will it also become a shared infrastructure that we can govern together with clear and transparent rules?

#ArtificialDecisions #MCC #AI

Artificial Decisions

169 – Invisible AI attacks that change decisions

Invisible AI attacks that change decisions

Many people still imagine a cyber attack as something loud and visible: stolen data or systems shut down. Stay with me until the end and I will explain a third type of attack, a quieter one, that changes how AI decides and hits money, health, and jobs.

With AI, attackers do not need to turn anything off. The system stays online, replies as usual, but its choices start to shift slowly over time. The trick is in the data. Many AI models are updated with new information all the time. If someone injects fake or manipulated data, the AI learns the wrong patterns. This is called data poisoning. In practice, it moves the “compass” of decisions a few degrees, without alarms.
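
Here is a toy sketch of that compass shift, with entirely synthetic data and a deliberately simple scorer, only to show how a small, targeted injection can move one group’s risk scores:

```python
# A minimal sketch of label-flip poisoning on a toy credit-risk scorer: the
# features, labels, and the 5% flipped subset are all synthetic assumptions,
# chosen only to show how a small, targeted injection shifts one group's scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
income = rng.normal(50, 15, n)            # synthetic feature 1
late_payments = rng.poisson(1.0, n)       # synthetic feature 2
area_b = rng.integers(0, 2, n)            # 1 = lives in "area B"
X = np.column_stack([income, late_payments, area_b])
y = (late_payments >= 3).astype(int)      # 1 = high risk, defined only by payment history

clean = LogisticRegression(max_iter=1000).fit(X, y)

# Poison: flip 5% of labels, only for reliable payers in area B.
y_poisoned = y.copy()
target = np.where((area_b == 1) & (late_payments == 0))[0]
flip = rng.choice(target, size=int(0.05 * n), replace=False)
y_poisoned[flip] = 1
poisoned = LogisticRegression(max_iter=1000).fit(X, y_poisoned)

applicant = np.array([[50, 0, 1]])        # average income, zero late payments, area B
print("clean model risk:   ", clean.predict_proba(applicant)[0, 1])
print("poisoned model risk:", poisoned.predict_proba(applicant)[0, 1])
```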

In banking, a poisoned model that scores risk can start to treat certain customers or areas as high risk even if they always paid on time.

In healthcare, especially here in the United States where hospitals use automated triage, a poisoned model can change priorities. Some groups of patients wait longer, others are pushed ahead. No error message appears, but access to care becomes less fair.

In logistics, an AI that optimizes routes and suppliers can be pushed to favor certain partners or to choose routes that look efficient on paper but increase real costs and delays. The app works, the dashboard works, but the company loses money.

To defend against this, firewalls and backups are not enough. AI models must be treated as critical infrastructure. Their decisions need to be monitored over time, versions logged, and clear thresholds set that trigger human review.
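
In practice, that monitoring can be very simple. A minimal sketch, with illustrative field names and thresholds rather than any standard, might look like this:

```python
# A minimal sketch of decision monitoring: log the model version with each
# batch of decisions, track one simple metric per group, and escalate to human
# review when it drifts past a fixed threshold. Field names and the 10-point
# threshold are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class DecisionBatch:
    model_version: str
    group: str
    n_decisions: int
    n_flagged_high_risk: int

    @property
    def high_risk_rate(self) -> float:
        return self.n_flagged_high_risk / self.n_decisions

BASELINE = {"area_A": 0.08, "area_B": 0.09}   # rates observed when the model was validated
THRESHOLD = 0.10                              # escalate if the rate moves by >10 points

def review_needed(batch: DecisionBatch) -> bool:
    drift = abs(batch.high_risk_rate - BASELINE[batch.group])
    if drift > THRESHOLD:
        print(f"[ALERT] {batch.model_version} / {batch.group}: "
              f"high-risk rate {batch.high_risk_rate:.0%} vs baseline "
              f"{BASELINE[batch.group]:.0%} -> human review")
        return True
    return False

review_needed(DecisionBatch("risk-model v2025.11", "area_B", 500, 120))  # 24% -> alert
review_needed(DecisionBatch("risk-model v2025.11", "area_A", 500, 45))   # 9%  -> ok
```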

As citizens, we should ask very simple questions to banks, insurers, hospitals, and platforms: where do you use AI to decide about me, who checks that these models are not manipulated, and who can stop them if something looks wrong? The new frontier is not only attacks on data or servers. It is attacks on decisions.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

168 – AI is not neutral

AI is not neutral

We often talk to AI as if it were a neutral machine, smarter and more fair than us. Stay with me until the end and I will explain why this is wrong.

Every system is built on human choices: which data to use, which errors to accept, which goal to optimise. In several US hospitals, triage algorithms gave lower priority to Black patients because historical data were already biased. On social media, recommendation systems push what keeps us online, even if it is anger or conspiracy, because that is good for business, not for democracy.

Tech companies exist to grow and make profits. Rights and fairness enter only when laws, regulators or public pressure force a change. If we forget this, we say “AI decided” and the real decision makers disappear.

So we need clear rules: transparency for AI used in health, justice, finance, work and security, independent regulators with real power, and bans on uses like mass biometric tracking. And every time an AI system affects us, a simple question: who controls it, whose interests does it serve, who answers when it gets it wrong?

#ArtificialDecisions #MCC #AD #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

167 – The hidden market of humanoid robots

The hidden market of humanoid robots

The market for humanoid robots already exists, it moves billions, and almost no one really talks about it. Stay with me until the end and I will show you where these robots work, who is investing in them, and why this matters in Europe and here in the United States.

In China today there are more than 150 companies building humanoid robots: machines about as tall as a person, with arms, legs, and sensors, designed to move in environments built for human beings. These are not just trade show prototypes. The government has included them in official industrial plans and sees them as part of the next wave of “embodied” artificial intelligence.

One of the most concrete examples is UBTech with its Walker S2 robot. In 2025 it secured orders worth more than 800 million yuan, around 110 million dollars. Some contracts involve data centers: robots that walk through corridors between servers, open doors, read indicators, perform repetitive safety checks. Where technicians used to be physically present, “artificial bodies” guided by software are starting to arrive.

Around these cases there are very clear numbers. In just a few years China has gone from fewer than 100 industrial robots for every 10,000 workers to about 470. It is now among the countries with the highest number of robots per worker in the world. Humanoid robots are the next step. Instead of rebuilding factories from scratch, companies introduce machines that can use stairs, corridors, and workstations designed for people.

Here in the United States the race is just as strong. Tesla is testing its humanoid robot Optimus in factories for simple, repetitive tasks. Elon Musk has said that in the future a very large share of the company’s value could come from these robots, with the idea of reaching hundreds of thousands of units per year.

In California, the startup Figure has raised more than 1 billion dollars in funding, with a valuation of around 39 billion. Its goal is to develop “general purpose” humanoid robots for factories, logistics, and, over time, also for homes. In Austin, Texas, Apptronik is working on the Apollo robot, born in a university lab and now tested on Mercedes production lines.

How much is all this worth? Some analyses talk about a market that could go from about 3 billion dollars in 2025 to more than 15 billion in 2030. Investment banks such as Morgan Stanley go as far as to imagine, by 2050, almost 1 billion humanoid robots in operation and an ecosystem worth trillions of dollars per year in machines, software, and services.

For now, they work in factories, warehouses, and data centers, far from everyday life and the public eye.

#ArtificialDecisions #MCC #AD #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

166 – ChatGPT risks selling the truth to the highest bidder

ChatGPT risks selling the truth to the highest bidder

If AI answers start to follow money, we have a serious problem. Follow me until the end, because this changes what we read, what we write, and what we think every day.

Today we already know how Google works. Some results are organic, some are ads, and at least there is a small “sponsored” label. It is not very clear, but at least we see something.

With AI chatbots it is very different. We see one single answer. A long text. It looks neutral. It sounds like advice. If, inside that answer, some products or services are pushed because someone paid, we do not see it. We do not know where the business ends and where the information starts.

Now imagine this on everything. We ask AI what software to use, which school is better, which treatment to consider, which course to buy. If the model is tuned to help paying partners, every suggestion can move us a little in that direction. Step by step, our choices change.

Then there is another level. We use AI to write emails, articles, school books, corporate documents. If the text that AI gives us is already influenced by who pays, a piece of that bias enters our culture. Quietly.

If we do nothing, in a few years most of what we read and write with AI could be shaped first by money, and only after by truth.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

164 – What is happening in the world of adult content

What is happening in the world of adult content: AI has changed everything

The world of adult content has always pushed new technologies, from videocassettes to streaming, from online payments to subscriptions. Today it is doing the same with artificial intelligence. Stay with me until the end, because this story is really about identity, data and money.

In the last few years, AI-generated and AI-edited content has exploded on the main platforms in the United States and Europe. Videos can be created in hours, without sets or performers, using only models that invent faces and bodies. This lowers costs and opens the door to anyone with a computer, but it also shifts even more power toward those who control the algorithms and the platforms.

The most worrying part is identity. Centers that monitor abuse report a strong rise in synthetic images created without consent. A single selfie on a social network can be copied, fed into a generator and turned into content that a person has never recorded. This can hit celebrities, students, workers, anyone with photos online.

At the same time, many AI systems are trained on huge archives of images from this sector. People often do not know that their body or face may have been used to “teach” an algorithm. Lawyers and organisations are already preparing legal actions on who owns the digital image and who can profit from it.

What should we do? We need tools that help recognise generated faces, platforms that react quickly when users report abuse, and clear rules on digital image rights. Most of all, we need awareness. A profile picture is no longer just a photo. It is a piece of our digital identity.

Tell me in the comments what you think and which part of this worries you most.

#ArtificialDecisions #MCC #AI

✅ This video is brought to you by: https://www.ethicsprofile.ai

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

163 – They can switch off parts of our digital life from far away

They can switch off parts of our digital life from far away

Here in the United States, a test begins on December 23: how far a country can limit or block a technology used by everyone. The case is DJI drones, but the logic touches phones, connected cars and home cameras. Stay with me until the end and I will explain why this matters to all of us.

Every drone records video and location. For years, thousands of DJI drones have flown over dams, pipelines and power lines, creating a detailed map of critical places using devices tied to a foreign company. At the same time, these drones depend on remote software rules. A change in regulation or an update can ground whole fleets, as happened to several police forces in Florida, with high costs and slower emergency response.

In Ukraine, similar commercial drones have been adapted for war, showing how the same hardware can move from weddings to battlefields. This is why the US now treats drones as critical infrastructure.

For hobby pilots, December 23 will not switch drones off, but it will push them into a grey zone. Harder to buy, harder to repair, future software uncertain. For anyone who uses connected devices, the message is simple: when we rely on tools controlled by a few distant players, we accept that someone else, one day, might own the off switch.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

162 – Earn more working less

Earn more working less: what McKinsey really says about AI

If we use AI properly, we can work less and earn more. McKinsey explains this clearly in its analysis of the labor market here in the United States. With the technologies that already exist today, a huge share of paid working hours could be automated, and this would free up hundreds of billions of dollars every year. The real question is how we decide to use that money and that time.

We usually use AI to cut jobs and routine tasks. The report describes a different direction. People, artificial agents and robots working together. Where AI enters in a serious way, human work changes shape, becomes more about decisions, relationships and complex cases. In radiology, for example, staff numbers went up in the same years when automated systems for reading images arrived, because value shifted to the final decision and the relationship with the patient instead of pure routine reading.

The interesting part is what happens to wages. Jobs where people manage AI agents without being programmers pay more than average today. Think of finance, consulting, technical design, teaching with advanced AI tools. In these roles, the combination “human plus artificial agent” has more value than human work alone. When that combination allows the same level of production in fewer hours, there is real room for better contracts.

Here in the United States a simple expression is spreading, “AI fluency”. This is not about writing code. It means knowing how to use AI tools well, understanding their limits, getting help from artificial agents to write, analyze, talk to customers, prepare documents. Demand for this skill in job postings is growing extremely fast. As more of us become fluent, we can go to the negotiation table and ask for the same productivity with fewer hours and for part of the productivity gain to show up in our paycheck.

McKinsey also highlights something we often ignore. The skills that support this transition are our classic human skills. Communication, problem solving, management, writing, customer relations. Employers ask for these skills in tasks that can be automated and also in tasks that must stay human. AI does not erase these abilities, it amplifies them. An agent writes the first draft. We make it meaningful, correct and suitable for real people.

The outcome is still open. The same numbers that can fund four day weeks, higher salaries and more time with family can also be used to push labor costs down and widen gaps. Everything depends on how ready we are, as workers and as citizens, to demand that the new productivity does not stay only in company balance sheets.

Technology is opening a very concrete door, producing more with fewer hours of work. If we build the right skills, contracts and rules, that door leads to earning more while working less. If we fail to do that, artificial agents will end up deciding for us how our time is used.

#ArtificialDecisions #MCC #AI

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com

Artificial Decisions

161 – The man who imagined our digital present before anyone else

The man who imagined our digital present before anyone else. And no one talks about him.

Alan Kay is a name that incredibly few people know. Yet he innovated far more than Steve Jobs. Stay with me until the end because the digital world we live in today comes above all from his vision, not from the founders everyone celebrates.

We always make the same mistake. We confuse the people who tell the future well with the people who actually imagined it. Here in the United States, computer historians have been saying it for years. Alan Kay is the greatest forgotten innovator. Outside universities, almost no one knows who he is. In 1972, while computers were giant cabinets locked inside companies, he was at Xerox PARC designing the Dynabook: a portable, lightweight personal computer meant for children and adults. The conceptual ancestor of laptops and tablets.

Kay did not want a screen for consuming content. He wanted a thinking environment. A device that helped people understand, create, simulate. Today we debate AI in schools and assignments written by chatbots. He was asking the same questions fifty years ago with one clear goal. Use technology to increase our autonomy, not reduce it.

Then there is the interface. Windows, icons, mouse, menus, the desktop metaphor. The way we naturally use every screen today, from computers to phones, was born in the PARC labs where Kay was a central figure. Apple turned those ideas into iconic products. Microsoft brought them into the lives of billions. But the mental framework behind that model comes from there, not from the companies that commercialized it.

This makes the comparison with Jobs unavoidable. Jobs created desirable objects. Kay imagined the conceptual frame that makes those objects possible. Without the first, no iPhone. Without the second, no personal computer as we understand it.

His story matters now. We live in an age where AI can write, speak, and choose for us. We risk turning powerful tools into mental shortcuts. Kay would remind us of a simple question: does this technology make us more capable or more dependent?

Alan Kay shows that the people who truly invent the future often stay outside the spotlight. But their work shapes the world we live in every day, click after click. Not by chance, this series is called Artificial Decisions. Because important decisions should never be delegated.

#ArtificialDecisions #MCC #AI

Artificial Decisions

160 – A tiny gut sensor

A tiny gut sensor to better understand anxiety, mood and digestive health

Researchers have found a way to listen in real time to our “second brain”, the one in the gut, with a micro sensor thinner than a hair. Follow me until the end and I will explain why this technology can change the way we understand anxiety, mood and digestive diseases.

A team at the University of Cambridge, together with the Thayer School of Engineering at Dartmouth, has developed a soft and flexible implant that sits between the walls of the intestine and records the electrical activity of the enteric nervous system. They tested it in rodents and pigs, even while the animals were awake and moving, and observed how the neurons in the gut react to food, stress and physical pressure. Until now these signals were almost impossible to measure in real conditions, because experiments were done under anesthesia and the intestine is always moving.

This matters because for years clinical studies have linked the gut–brain axis to disorders such as irritable bowel syndrome, anxiety, depression and Parkinson’s disease. Here in the United States several research centers work on microbiota and mood, but often with indirect data: questionnaires, blood tests, stool samples. With a sensor like this, we can finally measure continuously how the “second brain” reacts to drugs, diet and stress. For people with ulcerative colitis, Crohn’s disease or gastroparesis, it could become a tool to personalize treatments instead of relying on trial and error.

There is also the question of data. A continuous trace of gut activity is a new and very sensitive type of health information. Here in the United States the debate on who owns health data is already intense with smartwatches and apps. If technologies like this are used with clear rules and in the interest of patients, they can help us understand how much our mental well being depends on the body and open a new season of more precise and less guess based medicine.

#ArtificialDecisions #MCC #AI

Artificial Decisions

159 – The Words We Don’t Write Anymore on Social Media

THE WORDS WE DON’T WRITE ANYMORE ON SOCIAL MEDIA

On social media it is not only platforms that filter, we are the ones who change our language because we are afraid of the algorithm. Stay with me until the end because I will walk you through the words many of us avoid, and then I want to know which ones you think really scare the systems behind social platforms.

More and more often we see “unalived” instead of saying that a person’s life has been taken in a violent way, “seggs” to indicate intimate relations between consenting adults, “pew pew” instead of objects that fire bullets. It looks like a game, but it is actually a form of self-censorship: we change the words so that the video is not hidden, penalised or demonetised. Or we write words with numbers instead of some letters, or with asterisks. This happens even when the words are only written inside images.

Creators see this every day. Comedian Alex Pearlman says that on TikTok he avoids even mentioning competing platforms, because when he invites people to go to another service, his views collapse. When he talked about a famous financier at the centre of abuse scandals linked to a well known private island, several of his videos disappeared only from that platform, with penalties on his account, while they stayed up on other social networks. The result: he started using nicknames, hints, coded language.

Companies deny having secret lists of banned words, but we know that in the past platforms like TikTok and Meta have changed the visibility of “sensitive” content, and that internal tools exist to manually push some videos. Here in the United States this has even more impact, because for a huge share of the population social media are already one of the main sources of news. If we believe that certain words cannot be used, we start to avoid entire topics: political violence, rights, mental health, emotional and intimate life. We end up with a domesticated public debate, where the most delicate issues appear only disguised as jokes in code.

For us the point is simple: platforms think in terms of advertising and the risk of ending up in front of regulators, not in terms of the quality of public conversation. If we accept without thinking that we must always speak using “unalived”, “seggs” and “pew pew”, we are doing the job for them.

Now tell me: which words do you consider “negative” for the algorithms today, the ones you avoid because you are afraid your content will disappear from the feed?

#ArtificialDecisions #MCC #AI

Artificial Decisions

158 – They are stealing our attention, piece by piece

They are stealing our attention, piece by piece

They are stealing our attention every day. Stay with me until the end and I will explain why long-form reading is one of the only ways to take our mind back.

We live inside systems that interrupt us all the time. Every notification cuts the thread. Here in the United States, the University of California, Irvine has been studying for years how our concentration collapses after each interruption. The mind does not go back to where it was. It breaks into pieces. It becomes faster, but less deep. And this changes how we think.

Long-form reading means any text that needs time and continuity. Books, essays, long articles, reports, investigations. You cannot consume them in a few seconds. They force you to follow an argument, connect ideas, remember what you read before. It is a mental exercise, not a pastime. It reactivates parts of the brain that stay off when we only read short content.

Maryanne Wolf, working between the United States and Europe, has shown how long-form reading supports the mental processes we need to understand, analyze, and evaluate. These are the same abilities we need when we make serious decisions in real life, like reading a contract or understanding a medical report. Short reading does not train these abilities.

There are concrete examples. In Finland, schools introduced long-form reading to improve students’ ability to detect online manipulation. In the following years, students improved their critical evaluation of information. They reached this result simply by training attention.

Workplaces show the same pattern. Boston Consulting Group examined international teams and saw that those who practice deep work, including long-form reading, make stronger decisions and fewer mistakes. A trained mind resists confusion better.

The point is clear. Long-form reading is a form of self-defence. It rebuilds our ability to think without being pulled by the rhythm of algorithms. Without this ability, we become more vulnerable and less autonomous.

Now the question is for you. How much time do you still give to long-form reading? And how much do you notice that your attention is changing?

#ArtificialDecisions #MCC #AI

Artificial Decisions

157 – With AI, one person can build an entire company

With artificial intelligence, one person can build an entire company

Here in the United States, in 2025, we’re seeing something extraordinary: individuals building entire companies on their own and reaching one million dollars in annual revenue. No team, no office, no investors. Just one mind and ten digital tools. There are at least three reasons why this really matters: money, speed, and control. Stay with me until the end, because if you have an idea, using the right tools, this could be your moment. And these are the tools they use most often.

Cursor codes like a senior developer. Perplexity and Gemini handle research and validation. ChatGPT, Claude, and Grok plan strategies, write, and organize projects. Midjourney, Figma, and Canva handle design and branding. Zapier connects everything, while Calendly and Tidio manage clients and appointments. A full startup inside one laptop.

First reason: money. Here in the United States, many software startups start with almost no fixed costs. The tools are free or cheap. The point isn’t to save money, it’s to move resources from structure to decision-making. When money doesn’t go into bureaucracy, it goes into improving the product.

Second reason: speed. These tools shorten decision cycles. You can test a landing page in the morning, analyze the data in the afternoon, and relaunch by evening. Organizational latency disappears. But it only works if there’s a director who knows what to watch and what to cut.
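
That afternoon analysis can be as simple as a quick two-proportion test on the morning’s numbers. A minimal sketch, with invented traffic figures, just to show the decision loop:

```python
# A minimal sketch of the afternoon step: compare two landing-page variants by
# conversion rate with a quick two-proportion z-test. The visit and signup
# numbers are invented for illustration; they are not a benchmark.
from math import sqrt
from statistics import NormalDist

def compare(visits_a, signups_a, visits_b, signups_b):
    rate_a, rate_b = signups_a / visits_a, signups_b / visits_b
    pooled = (signups_a + signups_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))      # two-sided
    print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  p={p_value:.3f}")
    return "ship B" if p_value < 0.05 and rate_b > rate_a else "keep testing"

print(compare(visits_a=1200, signups_a=48, visits_b=1150, signups_b=69))
```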

Third reason: control. One founder sees the entire funnel. No departments, no delays. Decisions happen in hours, not weeks. But control without judgment becomes obsession. That’s why you still need a leader who knows when to stop, rewrite, or change direction.

This model, for now, applies to digital startups: software, services, online experiences. When it comes to physical products, it’s different. No one can build a car company or an industrial robotics firm alone. Not yet. But when humanoid robots can actually make things, that boundary will shift again. The same domestic robot that shops, walks the dog, and sets the table could soon help prototype products, build physical objects, and power new one-person startups from home.

And that’s the key point: the tools amplify, they don’t replace leadership. The founder is the director. With the same lights, actors, and technology, two directors can make opposite films. One becomes a hit, the other fails. The difference lies in the choices: creative, strategic, and relational choices about the product and the market.

And this highlights a central truth: humans and their ability to choose remain irreplaceable. Soft skills, knowledge of the world, the ability to face problems and recognize the right path from the wrong one stay in our hands.

We must face this transformation differently. Artificial intelligence doesn’t replace humans. It makes those who don’t use it inefficient.

You can have the same digital “crew” as anyone else, but it’s your direction that decides whether the film becomes a success or a flop.

Not by chance, this series is called Artificial Decisions. Because power today doesn’t lie in tools. It lies in choices.

#ArtificialDecisions #MCC #AI
