<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/" >

<channel>
	<title>Marco Camisani Calzolari</title>
	<atom:link href="https://www.camisanicalzolari.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.camisanicalzolari.com</link>
	<description>Strategic Advisor &#124; AI, Cybersecurity &#38; Digital Transformation</description>
	<lastBuildDate>Wed, 15 Apr 2026 16:02:02 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">47191908</site>	<item>
		<title>284 &#8211; Here&#8217;s an Example of AI Agents in Action</title>
		<link>https://www.camisanicalzolari.com/284-heres-an-example-of-ai-agents-in-action/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 15 Apr 2026 16:02:02 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/284-heres-an-example-of-ai-agents-in-action/</guid>

					<description><![CDATA[Here's an Example of AI Agents in Action That I'm Sure Will Inspire You

A guy who sells pools in Florida took OpenClaw, an open-source AI agent, and programmed it to find homes without a pool and convince homeowners to build one. He wrote the instructions, hit enter and went to sleep.

The agent starts from satellite imagery. It scans lots one by one in the Tampa area, measures the land, checks zoning constraints and decides if there's room for a pool. If the lot qualifies, it keeps going on its own. It generates a rendering: the homeowner's backyard seen from above, with a pool already in it. It calculates the construction cost and how much the home value goes up.

Then it searches public records for the homeowner. Finds the listing agent tied to the property. Prepares a postcard with the aerial rendering and a QR code. Ships it through Lob, an automated print and mail service. Three days later it's in the mailbox. On top of that the AI agent automatically creates a personalized website just for that homeowner, with all the details, reachable from the QR code.

What used to take months of work gets done in one night while the seller sleeps. The agent runs, scans new homes, produces new postcards, mails them out. Every homeowner gets a different message, built on their actual house, with numbers calculated on their lot. One single AI agent doing the job of an entire team: real estate analyst, graphic designer, copywriter, mailing service and salesperson. For a few dozen dollars in API costs.

And it doesn't just work for pools. Same logic applies to solar panels, roofing, fencing, landscaping. Anything visible from above and sellable door to door, without knocking. As I always say, the first to be replaced by AI are those who don't use AI. What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2368</post-id>	</item>
		<item>
		<title>273 &#8211; Automation Is the Wrong Word</title>
		<link>https://www.camisanicalzolari.com/273-automation-is-the-wrong-word/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Fri, 03 Apr 2026 16:02:23 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/273-automation-is-the-wrong-word/</guid>

					<description><![CDATA[Automation Is the Wrong Word. AI Doesn't Automate, It Autonomizes

Algorithm, virtual, and automation. Three words we use incorrectly. Three videos, one per word. Today: why we're wrong to use "automation."

Automation means making something automatic. A process that you used to do by hand, now a machine does it. Same steps, same result, every time. A washing machine is automation. A badge gate is automation. An email triggered by a purchase is automation. A happens, B follows. Predefined, repeatable, controllable.

The problem is when we use the same word for Artificial Intelligence. When we ask AI to answer customers, screen résumés, or analyze medical data, we're not automating anything. We're autonomizing. We're telling a system to decide on its own.

The request no longer passes through a predefined sequence. It passes through the model's training, its weights, whatever guardrails someone put in place or didn't. The result isn't predictable. It changes every time. It's an autonomous decision. The difference is enormous, especially when it touches the real world.

If I automate a weapon, I press a button and it fires ten rounds. Always ten. Always when I press. If I autonomize a weapon, the system decides when to fire, at whom, and how many rounds. Based on what it "learned."

If I automate a banking process, the transfer goes when the customer clicks send. If I autonomize it, the system decides whether to approve, block, or flag. And the money is real, as we said in the previous videos. Digital, but absolutely real.

When we say "we automated customer service with AI," we're hiding what actually happened. We gave a system the power to decide on our behalf. That's not automation. That's autonomy. We should say autonomize, so people understand we're talking about systems that make autonomous decisions, that choose, that don't just execute. That's why this series is called Artificial Decisions.

What do you think?

#ArtificialDecisions #MCC

https://www.youtube.com/watch?v=Mp-1XK8-snA

https://www.youtube.com/watch?v=Utzft_O_b9w

👉 Important note: We’re planning the upcoming months.
If you’d like to request my presence as a speaker at your event, please contact my team at: management@camisanicalzolari.com]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2365</post-id>	</item>
		<item>
		<title>271 &#8211; Virtual Is Not the Opposite of Real!</title>
		<link>https://www.camisanicalzolari.com/271-virtual-is-not-the-opposite-of-real/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 01 Apr 2026 16:03:28 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/271-virtual-is-not-the-opposite-of-real/</guid>

					<description><![CDATA[Virtual Is Not the Opposite of Real! We're Using the Wrong Word, Causing Real Damage

Algorithm, virtual, and automation. Three words we use incorrectly. Three videos, one per word. Today: why we're wrong to use "virtual."

Newspapers keep writing: real world versus virtual world. As if they were two separate places. Completely wrong. Here's how it actually works.

The real world is the big container. Inside it, two spaces: the physical world and the digital world. Physical is where we move, touch, look each other in the face. Digital is where we write, work, buy, argue, fall in love. Two different spaces, but both produce real consequences. Both are real.

If I'm on a video call and we make decisions, those decisions are real. You can't touch them with your hands, but you touch them with the facts. If I buy something online, the money is gone for real. If someone insults me in a chat, it lands, it hurts, it stays. Especially if you are young. None of this is virtual. It's digital. And it's absolutely real.

The virtual world is something else. It sits outside the real world. It uses digital technology as a tool, but produces no effects beyond itself. In a videogame you kill a character, it dies, another respawns. No consequences off-screen.

But we use "virtual" for everything online, and this can cause damage. A kid bullies a classmate in a group chat. Headlines say: "bullying in the virtual world." That word cuts the weight in half. Kids believe it. They think certain things are okay because it's the virtual world. It doesn't count. Except it counts exactly like the physical world, because it's digital, and digital is part of the real world.

Every time we say "virtual" when we should say "digital," we're removing weight from something that carries real weight. What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2362</post-id>	</item>
		<item>
		<title>269 &#8211; Algorithm. The Wrong Word for AI</title>
		<link>https://www.camisanicalzolari.com/269-algorithm-the-wrong-word-for-ai/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Mon, 30 Mar 2026 16:03:51 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/269-algorithm-the-wrong-word-for-ai/</guid>

					<description><![CDATA[The Wrong Word for AI

Algorithm, virtual, and automation. Three words we use incorrectly, and I'll explain why. I'll make three videos, one for each word. Today: why we're wrong to use "algorithm."

We hear it everywhere. We know what it means: a sequence of instructions. If A happens, do B. Same input, same output. Always. On social media we got it. An algorithm ranks content inside fixed parameters. More likes, higher up. Rules written by someone, verifiable, predictable.

Now people use the same word for Artificial Intelligence. Wrong word. Technically, the term applies. The problem is what the listener hears. "Algorithm" means procedure, calculation, control. Something predictable.

AI runs on training, weights, probabilities. Same question twice, different answer. No formula. A system that estimates, interprets, and decides on its own.

If you think "algorithm," you think the system is under control. You trust it with hiring, patient evaluation, customer service. You don't realize how much autonomy you're handing over. AI doesn't execute like a calculator. It chooses. Every time differently. Nobody can explain exactly why.

With an algorithm you give a task to a machine. With AI you give autonomy to a machine. Completely different things.

When we say "algorithm" for AI, we're reassuring people. We're saying: it's under control, it's predictable. It's not. What do you think?

#ArtificialDecisions #MCC #AI]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2356</post-id>	</item>
		<item>
		<title>268 &#8211; We&#8217;re Hitting Like on Millions of Fake Photos</title>
		<link>https://www.camisanicalzolari.com/268-were-hitting-like-on-millions-of-fake-photos/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Sun, 29 Mar 2026 16:01:32 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/268-were-hitting-like-on-millions-of-fake-photos/</guid>

					<description><![CDATA[We're Hitting Like on Millions of Fake Photos. We Either Stop or the Damage Will Be Irreversible

The AI wedding photos of Zendaya and Tom Holland passed eleven million likes on Instagram. Posted on March 4 by a random creator. Completely generated by Artificial Intelligence. Eleven million people who hit like on an event that never existed. But who notices when the light is perfect and the setting is romantic?

January 2025, California wildfires. Someone generated AI images of the Hollywood Sign engulfed in flames and put them into circulation on X and Instagram. They spread within minutes. Authorities had to publicly announce that the landmark was untouched. Not to correct a news story, to stop the panic.

At the 2024 Met Gala, AI photos of Katy Perry were so convincing that her own mother thought she was there that evening.

We're at this point now. No hacking required, no infiltrating anything. A few seconds and a free tool are enough to put a convincing photo into circulation: a crime that didn't happen, a politician saying things they never said, a person in trouble for something they never did.

And the platforms? Meta cut its fact-checking teams while the problem was exploding. X deliberately lowered filters on Grok for generating images of public figures. OpenAI in 2025 updated GPT-4o's policies to allow the creation of images of politicians and public figures on simple request. The people running these tools know exactly what's happening.

We got used to not believing words. Now we have to learn not to believe images. Nobody prepared us for this.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2353</post-id>	</item>
		<item>
		<title>267 &#8211; Watch Out for Loyalty Cards and Health Insurance!</title>
		<link>https://www.camisanicalzolari.com/267-watch-out-for-loyalty-cards-and-health-insurance/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Sat, 28 Mar 2026 17:01:52 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/267-watch-out-for-loyalty-cards-and-health-insurance/</guid>

					<description><![CDATA[Watch Out for Loyalty Cards and Health Insurance!

Every time we use a loyalty card, we leave a very clear trail: what we buy, when we buy it, how often, and in what quantities. Those digital receipts turn into data. That data turns into a profile.

And profiles create inferences. Over-the-counter medicines, gluten-free products, items linked to a medical diet. Over time, these signals can reveal habits and sensitive details about a household.

On the health side, the stakes are even higher. In the US, a growing share of claims and approvals runs through automated systems. And the Change Healthcare cyberattack showed how fragile these large health and payment platforms can be, affecting a huge number of people.

California launched a public tool called DROP to help people request deletion and opt-out from data brokers. The personal data market is now a major issue.

What does this mean for a family? Shopping habits and health data can contribute to a detailed risk and money profile, not just ads. Use loyalty cards only when the benefit is real. Check the privacy settings of loyalty programs and limit data sharing when possible. For health and insurance, ask what data is collected, how long it's kept, and who it's shared with.

Data doesn't stay still. It moves. And the more detailed it is, the more valuable it becomes to someone else.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2350</post-id>	</item>
		<item>
		<title>266 &#8211; AI Is Frying Our Brains! We&#8217;re Producing More. Thinking Less</title>
		<link>https://www.camisanicalzolari.com/266-ai-is-frying-our-brains-were-producing-more-thinking-less/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 17:02:12 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/266-ai-is-frying-our-brains-were-producing-more-thinking-less/</guid>

					<description><![CDATA[AI Is Frying Our Brains! We're Producing More. Thinking Less

Researchers interviewed hundreds of full-time American workers and found they're suffering from something new called AI brain fry. Fried brain from Artificial Intelligence!

AI makes individual tasks faster. What used to take three hours now takes 45 minutes. But the days get harder. When each task takes less time, you don't do fewer tasks, you do more. Our apparent capacity expands, and the work expands to fill it. Managers see output growing and expectations rise.

Before AI, you'd spend a whole day on one design problem. Sketch on paper, think in the shower, go for a walk, come back with clarity. One problem, one day. Today that one problem becomes six, and each one "only takes an hour with AI." But context-switching six times is brutally expensive for the human brain. AI doesn't get tired between problems. We do.

Before AI, the job was: think, produce, test, ship. You were the creator. After AI, the job became: write a prompt, wait, read output, evaluate it, decide if it's correct, fix what doesn't work, repeat. You became a reviewer, a judge, an inspector on an assembly line that never stops.

People who use AI intensively spend 14% more mental energy at work, accumulate 12% more cognitive fatigue, face 19% more information overload. People in a state of AI brain fry make 33% more decisions while exhausted, make serious errors 39% more often, and are 39% more likely to want to quit. We're burning out the people who use the tools most, the ones companies depend on most.

Use it to free yourself from the useless and you perform better. Use it to do as many things as possible in the least amount of time and you burn out.

Using AI to free up time, then filling that time with more work, isn't productivity. It's running on a treadmill that never stops and it fries our brains. What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2347</post-id>	</item>
		<item>
		<title>264 &#8211; The Cloud Doesn&#8217;t Exist</title>
		<link>https://www.camisanicalzolari.com/264-the-cloud-doesnt-exist/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 25 Mar 2026 17:02:50 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/264-the-cloud-doesnt-exist/</guid>

					<description><![CDATA[The Cloud Doesn't Exist

The cloud doesn't exist. There's a warehouse with servers inside, on a specific piece of land, in a specific place in the world. On March 1st, someone was reminded of that.

Iran struck three Amazon data centers. Two in the UAE, one in Bahrain. Banks down, payments frozen, delivery apps offline. Millions of people with no access to anything. This is the first documented military attack on a hyperscale cloud provider, ever.

Data is usually replicated across multiple locations. Amazon's system is designed to survive the loss of one availability zone. Not two. Two zones hit simultaneously, and the system fails. That's exactly what happened.

Think for a moment about how many things you only have online: tools, files, workflows, suppliers, banking systems. Everything on the cloud.

There's another angle almost no one talks about. It has nothing to do with bombs, it's about governments. Your provider guarantees confidentiality, says no one else sees your data. But the provider has a headquarters, it's registered in a country. And that country, in many cases, has legal authority over its data. It can knock on the door and say: give me those files. The company may have signed anything with you, but the state comes first. And often, the state is not required to tell you.

There are countries you'd never want to share your data with. Less friendly countries. And yet your data sits there, in a warehouse, in that country, under that country's laws.

Ask yourself where your data physically is. Not in the cloud. In a building, in a country, with that country's rules on top. What do you think?

#ArtificialDecisions #MCC #AI]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2344</post-id>	</item>
		<item>
		<title>262 &#8211; The War Is Going Cyber and Companies Are the First to Be Attacked</title>
		<link>https://www.camisanicalzolari.com/262-the-war-is-going-cyber-and-companies-are-the-first-to-be-attacked/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Mon, 23 Mar 2026 17:01:48 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/262-the-war-is-going-cyber-and-companies-are-the-first-to-be-attacked/</guid>

					<description><![CDATA[The War Is Going Cyber and Companies Are the First to Be Attacked

Dear companies, you need to protect yourselves. The warnings are strong. Attacks on European companies are increasing, and the situation in the Middle East is accelerating everything. Companies that have nothing to do with Israel or Iran are involved too: companies that manufacture components, manage logistics, provide financial services. Perfect targets, precisely because they feel far from the problem.

Attacks don't stop at the borders of the region at war. They expand, hit civilian networks, supply chains, banks in countries that have nothing to do with the conflict. It happened in previous crises and it's happening now.

Fortinet's FortiGuard Labs is currently recording app tampering, intrusions into broadcasters, Telegram posts announcing attacks against critical infrastructure. It's not yet a coordinated campaign. But attackers don't wait for it to become one. Iran historically doesn't respond immediately. It waits for attention to drop. Weeks later, when security teams have let their guard down, it strikes. The silence of these days is not a signal that the risk has passed.

European companies need to invest in ongoing training for their employees. Not a one-time course that gets forgotten. Realistic phishing simulations, regular updates, a security culture that becomes part of daily habits. A prepared employee recognizes a suspicious email before opening it. That's why you need to run events and training continuously. You need to turn your employees into the first line of defense, not the first point of entry.

Alongside training you need basic technical measures: multi-factor authentication on VPNs and remote access, patches on all systems, segmented networks so whoever gets in through one point can't reach everything else, backups isolated from the network, tested, ready to work when you actually need them.

Those who fix these things now will have the advantage. Those who wait for the explosion to start will arrive too late. And there's a parallel risk that doesn't depend on any state actor: geopolitical chaos is perfect cover for ordinary cybercriminals. The easiest target is not the firewall, it's the person who receives the email. Most attacks come in from there: an employee who clicks on the wrong link, an attachment opened without thinking, a password reused across a personal and a company account. Not out of negligence, out of lack of preparation.

Are you doing this? Is your company doing this?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2341</post-id>	</item>
		<item>
		<title>260 &#8211; Robots Need a License Plate</title>
		<link>https://www.camisanicalzolari.com/260-robots-need-a-license-plate/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Sat, 21 Mar 2026 17:01:56 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/260-robots-need-a-license-plate/</guid>

					<description><![CDATA[Robots Need a License Plate

A robot at a restaurant in San Jose started dancing uncontrollably a few days ago. It smashed plates, sent chopsticks flying. Three employees spent minutes trying to stop it, searching an app for the shutdown command. No emergency button, no switch. Just three people trying to physically restrain a machine that wouldn't stop. The robot was wearing an apron that read "I'm good." It wasn't.

This is what happens when a chatbot hallucinates. Except here it happens physically. Autonomous robots run on probabilistic systems, the same engine that makes Artificial Intelligence assistants invent data. When a chatbot hallucinates, you close the tab. When a robot hallucinates, it comes at you.

In San Jose the system made autonomous decisions, kept executing them, and nobody knew how to stop it. That was a small robot. Think about a 65-kilogram one in a warehouse, or a hospital.

We regulated drones: you can't fly one without a license, insurance, airspace rules. Cars: no vehicle circulates without a license plate, inspection, mandatory insurance. Autonomous robots in public spaces operate with none of these rules. Just because you don't see them in your city yet doesn't mean they're not coming.

I've proposed something simple: mandatory visible license plates on every autonomous robot in public space. Large, readable, traceable. If a robot damages you, you need to know who's responsible.

The international standard for industrial robots exists, it covers factories but not the restaurant where you're having dinner. They're high-risk and not regulated for civilian use, and meanwhile you can buy them on Amazon. With physical robots the hallucination has material consequences. It has weight, it has arms.

Mandatory license plates, physical emergency stop, mandatory insurance, clear liability. We already did all of this for every other machine that moves among people. What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2338</post-id>	</item>
		<item>
		<title>259 &#8211; Watch Out for Photos of Kids Online</title>
		<link>https://www.camisanicalzolari.com/259-watch-out-for-photos-of-kids-online/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Fri, 20 Mar 2026 17:02:32 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/259-watch-out-for-photos-of-kids-online/</guid>

					<description><![CDATA[Watch Out for Photos of Kids Online. They Can Become Raw Material

A family photo feels harmless: a birthday, a school trip, a sports game. But once it's online, it can be copied, saved, reposted, and reused in ways we don't control. And today there's a new risk: Artificial Intelligence tools can manipulate faces and bodies in minutes.

UNICEF warned that AI-made or AI-altered sexualized images of children are abuse, with real and lasting harm. In the US, authorities have already handled cases involving AI-generated child sexual abuse material, and watchdog groups reported sharp increases in AI-generated content being found and reported.

For a family, the practical risk is simple: a clear, high-quality photo with a visible face can be used for face matching, fake profiles, and image manipulation, sometimes for criminal purposes.

Avoid posting clear front-facing photos when kids are identifiable, especially with school names, uniforms, locations, license plates, or signs in the background. Use strict privacy settings and recheck them regularly. Platforms change defaults. When schools or sports clubs ask for media consent, ask where photos will be posted, how long they stay online, and how removal works.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2335</post-id>	</item>
		<item>
		<title>251 &#8211; Smart Robots Will Be Everywhere Like Smartphones</title>
		<link>https://www.camisanicalzolari.com/251-smart-robots-will-be-everywhere-like-smartphones/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Thu, 12 Mar 2026 17:02:28 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/251-smart-robots-will-be-everywhere-like-smartphones/</guid>

					<description><![CDATA[Smart Robots Will Be Everywhere Like Smartphones: Here's Who Will Run the Next Machine Era

You'll see them at home, in factories, in warehouses, in stores. Like phones today, they'll feel normal. They'll be as smart as the models we use to write and answer. This time they'll have a body: hands, strength, balance. Walk the dog, clean the floor, fix a drawer, swap a broken part, cover a night shift on a line.

They'll learn us, our routines, our preferences, our spaces.

Here in the U.S., real tests are already happening. BMW has publicly tested Figure 02. Hyundai has controlled Boston Dynamics since 2021. NVIDIA is building the compute and simulation stack for humanoid training. Apptronik and Agility are pushing robots into real warehouse operations.

The leaders are taking shape: Tesla for scale ambitions, Figure for general-purpose factory work, Boston Dynamics for mobility, Apptronik for logistics, Agility for distribution centers, NVIDIA for the underlying infrastructure.

The trigger is price. When a robot costs roughly a year of wages, it becomes a straightforward business decision. Jobs will shift fast, some roles will grow, others will shrink. Policy needs to move before this becomes a social emergency.

What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2330</post-id>	</item>
		<item>
		<title>249 &#8211; What if AI Won&#8217;t Replace People, but Companies?</title>
		<link>https://www.camisanicalzolari.com/249-what-if-ai-wont-replace-people-but-companies/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 17:02:36 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/249-what-if-ai-wont-replace-people-but-companies/</guid>

					<description><![CDATA[What if AI Won't Replace People, but Companies?

Will AI replace us because it can do office work better than humans? That's the question everyone is asking. My answer stays the same: the first to be replaced will be the ones who don't use Artificial Intelligence. I'm convinced.

Now push it one step further. What if this applies to companies too? The first companies to be replaced will be the ones that don't adopt AI, replaced by firms that deliver the same service faster, cheaper, with AI at the core.

Past industrial revolutions weren't just about machines replacing jobs. They were about productivity: more output with less input. AI follows the same path, with one key difference: it operates on language, and language is the raw material of entire industries.

When companies adopt AI to stay competitive, they expose their industry's "grammar" to the system. Every prompt, every revised document, every client response becomes training data about how that business thinks, writes, argues, and manages risk. At scale, AI doesn't learn one company. It learns the patterns of the whole ecosystem, then can reproduce them in an optimized way.

Think law, consulting, finance. AI absorbs contract structures, due-diligence patterns, standard answers. Over time it can deliver parts of those services directly to the end client, without the full organizational layer in the middle. The risk isn't only the employee. It's the company as an intermediary.

AI still needs competent human supervision. It's probabilistic. It can be wrong in subtle ways. The advantage isn't "using AI." The advantage is knowing its limits, controlling its output, deciding what to delegate and what must stay under human responsibility.

That's why I keep saying it: first go the people who don't use AI. Then the companies that don't use AI.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2325</post-id>	</item>
		<item>
		<title>248 &#8211; How AI Works, in a Simple Way</title>
		<link>https://www.camisanicalzolari.com/248-how-ai-works-in-a-simple-way/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 17:02:02 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/248-how-ai-works-in-a-simple-way/</guid>

					<description><![CDATA[How AI Works, in a Simple Way

I'll explain, simply, how LLMs "think." Stay with me. It's dense, but this is the easiest version that still stays accurate.

We write a prompt. One sentence. The model breaks it into tiny pieces called tokens. Sometimes a token is a full word. Often it's part of a word. That matters because it decides how much text fits in the context, and it changes by language. Then each token becomes numbers. Lots of numbers. Coordinates in a huge space. For an LLM, text is vectors. And it runs repeated operations on those vectors, mainly matrix multiplications. Computation. Just computation.

Next, the tokens get linked to each other. Some weigh more, some less. The model assigns numeric weights across the text, based on what's in front of it right now. It works. That's why it fools us. It looks like understanding. After a few passes, it does the key step: it produces probabilities for the next token. It picks one, appends it, recalculates, and repeats. One token at a time. Dozens, hundreds of times.

The "intelligence" feeling comes from continuity. Correct grammar, consistent tone, smooth flow. But the engine is prediction. If a continuation sounds plausible because it matches patterns it has seen, it may choose it even when it's wrong.

So yes, it can write perfect sentences with incorrect content. If we don't give strong constraints or reliable documents, it fills gaps with what sounds best.

There's also a setting called temperature. Low temperature means safer, more predictable choices. Higher means more variation.
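A minimal sketch of that final step, temperature-scaled next-token sampling. The vocabulary and scores here are made up for illustration; no real model works on four words, but the mechanics are the same:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    # Divide scores by temperature: low T sharpens the distribution
    # (safer, more predictable picks), high T flattens it (more variety).
    scaled = [score / temperature for score in logits]
    # Softmax: turn raw scores into probabilities that sum to 1.
    # Subtracting the max keeps math.exp from overflowing.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pick one token at random, weighted by its probability.
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy vocabulary and made-up scores for continuing "The sky is ..."
vocab = ["blue", "falling", "green", "token"]
logits = [4.0, 1.5, 0.5, -2.0]
next_id = sample_next_token(logits, temperature=0.7)
print(vocab[next_id])
```

At temperature 0.7 this almost always prints "blue", but "falling" stays possible. That one weighted draw, repeated token after token, is the whole generation loop.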

When we ask "how does it know?", often it doesn't. It has seen similar patterns. It has learned how sentences usually continue on that topic. And the more we use it for money, health, contracts, or reputation, the more we should remember what it is: a machine that predicts words.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2322</post-id>	</item>
		<item>
		<title>246 &#8211; Sometimes Delegating to AI Causes Damage. Sometimes It&#8217;s Extremely Useful and Saves Time</title>
		<link>https://www.camisanicalzolari.com/246-sometimes-delegating-to-ai-causes-damage-sometimes-its-extremely-useful-and-saves-time/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Sat, 07 Mar 2026 17:01:44 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/246-sometimes-delegating-to-ai-causes-damage-sometimes-its-extremely-useful-and-saves-time/</guid>

					<description><![CDATA[Sometimes Delegating to AI Causes Damage. Sometimes It's Extremely Useful and Saves Time. But When Should You Use It, and When Shouldn't You?

The decision is simple. It comes down to three things you have to consider together.

First: Human Baseline Time. The real time it takes you to do the task yourself. If a tricky email takes you 5 minutes, Artificial Intelligence often isn't worth it, because you'll spend more time prompting and fixing than writing. If a report would take you two hours, AI can be a real advantage.

Second: Probability of Success. The chance the AI gives you something good enough on the first try. Summaries, first drafts, and translations are usually high. Legal, medical, or strategic calls are usually low, even when the answer sounds confident.

Third: AI Process Time. The time you spend asking, waiting, reading, checking, correcting, and redoing. If that process time matches or beats your human time, delegation doesn't pay.

AI delegation works when human time is high, success probability is high, and AI process time is low. If one of these breaks, AI stops being an accelerator and becomes a brake.
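Those three factors fit in a toy calculation. The retry model here is an assumption of mine (a failed attempt means redoing the whole AI round), and all numbers are illustrative:

```python
def worth_delegating(human_minutes, p_success, ai_process_minutes):
    # If each AI round succeeds with probability p_success and a failure
    # means another full round, the expected total AI time is
    # process_time / p_success (geometric retries).
    expected_ai_minutes = ai_process_minutes / p_success
    return expected_ai_minutes < human_minutes

# Standard blog post: 45 min by hand, good odds, ~10 min per AI round.
print(worth_delegating(45, 0.8, 10))   # True: delegate
# Sensitive reply: 10 min by hand, low odds of nailing the tone.
print(worth_delegating(10, 0.4, 8))    # False: do it yourself
```

The blog post wins because 10 / 0.8 = 12.5 expected minutes beats 45. The angry-customer reply loses because 8 / 0.4 = 20 expected minutes is double the 10 it takes by hand.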

One example: a standard blog post. By hand, 45 minutes. AI can give a usable draft quickly. Worth delegating. Another example: a sensitive reply to an angry customer. By hand, 10 minutes. AI often misses the tone, and your total time becomes 20. Not worth it.

One counterintuitive truth: the more expert you are, the more useful AI becomes. Experts give better instructions and spot errors fast. Non-experts spend too long figuring out whether the output is right, and risk goes up.

AI is a speed multiplier, not a substitute for judgment. This isn't ideology. It's a calculation: time, probability, and control.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2319</post-id>	</item>
		<item>
		<title>244 &#8211; They Can Make Us Believe an Idea Is Supported by Everyone</title>
		<link>https://www.camisanicalzolari.com/244-they-can-make-us-believe-an-idea-is-supported-by-everyone/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Thu, 05 Mar 2026 17:02:07 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/244-they-can-make-us-believe-an-idea-is-supported-by-everyone/</guid>

					<description><![CDATA[Attention. Online Today, It's Easy to Make Us Believe an Idea Is Supported by "Everyone," Even When It Isn't

Thousands of comments cheering it on. Thousands of likes. You see that and you adjust your opinion. Now that kind of consensus is easy to fake, because it's often not people talking. It's networks of autonomous Artificial Intelligence agents writing, replying, arguing, applauding, attacking, 24/7.

Consensus becomes something you can copy. You read the same comment a hundred times, you see a thousand likes, the mood feels set. Some people follow it. Some stay silent. Some get angry.

Tools anyone can use can flood social platforms and forums with credible profiles: photos, stories, natural language, human mistakes, jokes, rage, even warmth. And they can push thousands of posts. No need for one big "media lie" like the old days. Small phrases, same direction, are enough.

Whoever controls these networks can steer them for or against anything. One message to one group, a different one to another group. Manipulation becomes scalable, personalized, and invisible. Finding who's behind it is hard: campaigns are spread across thousands of nodes and platforms, which slows everything down.

We still judge individual people by reputation, while what we need are global rules: traceability for coordinated campaigns, real transparency obligations for anyone using networks of autonomous agents to influence public opinion and democratic processes. And the platforms? They don't look in a hurry.

What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2315</post-id>	</item>
		<item>
		<title>243 &#8211; The Whole Truth About AI and Water Consumption</title>
		<link>https://www.camisanicalzolari.com/243-the-whole-truth-about-ai-and-water-consumption/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 04 Mar 2026 17:02:06 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/243-the-whole-truth-about-ai-and-water-consumption/</guid>

					<description><![CDATA[The Whole Truth About AI and Water Consumption

In Texas, in 2025, data centers used about 90 billion liters of water, including cooling and water linked to electricity production. By 2030, estimates reach up to 600 billion liters per year.

In just a few years, Artificial Intelligence has come to compete with agriculture, cities, and industry for water. Computing systems generate heat. Heat must be removed. Some data centers can use up to 1.5 million liters of water per day just for cooling.

You might think: water evaporates and comes back as rain. So what is the problem?

Water returns, but not where it is needed and not when it is needed. A data center draws fresh water from a local basin. Part of it evaporates and leaves that territory. It may fall elsewhere, even over the ocean, or months later. Meanwhile, the local community has less water available, often during the hottest weeks, when demand is already high.

There is another issue. Demand grows fast. Water recharge is slow. Aquifers are not infinite. If withdrawals exceed natural recharge, water levels drop, wells go deeper, costs rise, drought risks increase. A global water cycle does not fix a local shortage created in a few years.

There is also indirect water use. Electricity production often consumes water. That indirect share can represent up to 75% of the total footprint. The data center reports one number. The energy-producing region carries another, often larger, burden.

AI infrastructure expands where land and energy are cheaper and permits move faster. These areas are often already exposed to heat and water stress. Multiply that pattern across regions and the pressure on local water systems increases.

Water does not disappear. Availability, in the right place at the right time, becomes scarcer. What do you think?

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2310</post-id>	</item>
		<item>
		<title>240 &#8211; Warning, They Can Know Where You Are in Real Time. How to Protect Yourself</title>
		<link>https://www.camisanicalzolari.com/240-warning-they-can-know-where-you-are-in-real-time-how-to-protect-yourself/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Sun, 01 Mar 2026 17:01:41 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/240-warning-they-can-know-where-you-are-in-real-time-how-to-protect-yourself/</guid>

					<description><![CDATA[Warning, They Can Know Where You Are in Real Time. How to Protect Yourself

All it takes is an AirTag placed in your bag for anyone to know where you are, in real time, without you noticing. This part is well known. iPhones alert you automatically if an AirTag that is not yours is following you. But if you have an Android phone, nobody warns you. Here is what you need to do to stop anyone from spying on your location.

Roughly 70 percent of smartphones run Android. Most people are exposed and have no idea.

AirTags work through a network of nearly two billion Apple devices worldwide. Every iPhone that passes near an AirTag silently updates its position, even if that iPhone is not yours. Someone just needs to walk past you. Great for finding lost keys, but unfortunately also great for following a person without them knowing.

On iPhone protection is automatic. If an AirTag that is not yours follows you, you get a notification, no setup needed: "Watch out, there is an AirTag that is not yours following you."

On Android alerts exist, but only if your phone isn't too old and Google Play Services is up to date. If it isn't, nobody warns you. And even when it works, independent tests show that detection is slower and less reliable. You could be followed for hours before anything warns you. If it warns you at all.

To help protect you, AirTags have a small speaker that makes a sound after several hours if separated from their owner. A signal to notice something strange in your bag. The problem is that a small hole drilled under the battery disconnects the speaker. No more sound. No more warning. And these modified AirTags are already being sold ready to use on eBay.

How to protect yourself: if you have Android, go into settings and search for "unknown tracker alerts." Make sure it is on. Download AirGuard, free, open source, built by a German university, no commercial interest. It scans in the background and alerts you if something is following you.

Apple and Google are working together on a shared standard called DULT for detecting unwanted trackers. The direction is right. But the problem exists right now.

iPhone users are mostly covered. Android users: settings, "unknown tracker alerts," turn it on. And install AirGuard.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2307</post-id>	</item>
		<item>
		<title>237 &#8211; Why AI Hallucinations Are Causing Real Damage</title>
		<link>https://www.camisanicalzolari.com/237-why-ai-hallucinations-are-causing-real-damage/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Thu, 26 Feb 2026 17:02:09 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/237-why-ai-hallucinations-are-causing-real-damage/</guid>

					<description><![CDATA[Why AI Hallucinations Are Causing Real Damage and We Don't Notice

Artificial Intelligence hallucinations cause damage because they end up inside decisions. We ask a question, we get an answer that looks clean, organized, confident. We read it fast, paste it into a document, forward it. The damage starts there.

The mistake is usually in the details, and details are what people check the least: a number with the comma in the wrong place, a date, an agency name, a deadline, a rule that is real but applied to the wrong place. The text still sounds credible, so the error stays.

At work it happens like this. We ask for market growth, user counts, a percentage to use. We get a precise-looking number with a neat explanation. It goes into a report, then a slide. Nobody opens the original source because the answer feels ready. If the number is invented or just wrong, budgets and priorities go off track, and we notice later.

At home it's the same, with higher stakes. We ask about symptoms, medicines, dosages, interactions. The answer is calm and structured. One wrong detail can change what someone does.

The biggest risk is the chain. A false line becomes the base for the next question. We paste it into a new prompt and the AI builds on it. The next answer feels even stronger because it has more context, but it is strengthening the first mistake.

Protection is simple. Any AI answer that contains facts stays a draft until we verify a primary source ourselves: the original document, an official page, the full text. Check details first: names, dates, numbers, quotes, deadlines. When money, health, contracts, or identity are involved, decisions wait for an external check.

AI can write fast, but verification stays on us.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2301</post-id>	</item>
		<item>
		<title>236 &#8211; Big Tech Is Laying People Off Because of AI. What to Do to Keep Your Job</title>
		<link>https://www.camisanicalzolari.com/236-big-tech-is-laying-people-off-because-of-ai-what-to-do-to-keep-your-job/</link>
		
		<dc:creator><![CDATA[Team]]></dc:creator>
		<pubDate>Wed, 25 Feb 2026 17:02:26 +0000</pubDate>
				<category><![CDATA[Artificial Decisions]]></category>
		<guid isPermaLink="false">https://www.camisanicalzolari.com/236-big-tech-is-laying-people-off-because-of-ai-what-to-do-to-keep-your-job/</guid>

					<description><![CDATA[Big Tech Is Laying People Off Because of AI. What to Do Now to Keep Your Job

In 2025, the tech sector recorded 122,549 layoffs across 257 companies. HP plans 6,000 job cuts by 2028 to move resources into Artificial Intelligence. Meta cut about 3,600 roles, Microsoft 6,000, Amazon 14,000 corporate jobs. Office work gets reduced, AI-related roles get funded. Follow me to the end, because the fix is practical.

The people leaving and the people being hired are not the same. Companies cut back office, admin, content moderation, customer support. They hire ML engineers, researchers, data and security specialists. Moving from one side to the other takes time.

Goldman Sachs Research estimates generative AI could displace 6–7% of US jobs. The roles most exposed include programmers, accountants, legal assistants, customer service, and credit analysts. Tasks like writing, summarizing, classifying, and answering can be automated.

So what do we do? Use AI seriously, every day, on your real work. Put it into documents, emails, processes, numbers. Track time saved and quality improved. Save examples and results so you can prove what you can do.

Become the person who runs the work with AI: sets context, checks output, verifies, signs off, takes responsibility. Push your company for real training, tied to actual roles, with clear rules and a practical plan for the next 6–12 months.

Tell your CEO to bring in experts and make decisions now: map where AI is already used, decide what can be sped up and what must stay human, invest in skills and tools, assign clear ownership. Tag your CEO or your manager.

#ArtificialDecisions #MCC]]></description>
		
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">2298</post-id>	</item>
	</channel>
</rss>
