Category: Artificial Decisions


224 – Advertising Has Lost the Plot: Same Audience, Online is Cheap, TV is Gold


I generate about 20 million views per month across platforms. A successful TV show, over the same month, often delivers fewer truly useful contacts than people assume. In some countries, the difference is price: TV audiences can attract million-dollar budgets, while online, even with a premium reputation and a highly targeted audience, the value drops by an order of magnitude.

This is not anti-TV. TV pays me well. This is about how online advertising is priced. And it is not just me: many top experts (tech reviewers, personal finance educators, doctors, engineers) reach millions of highly interested viewers. Yet their branded placements are often priced like generic inventory.

Digital still treats premium contexts as commodities. TV does the opposite: it makes even average contexts “premium by default” through habits, packages, and list prices. Budgets go where buying feels safer, not where results are more likely.

Take smartphones. One million general viewers is one thing. One million viewers following a trusted smartphone expert is a different product: people compare, ask questions, save videos, and are close to buying. Conversion probability is higher, but pricing often stays stuck on raw impressions.
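The gap described above can be made concrete with a rough expected-value sketch. Every number here is an invented assumption for illustration, not real market data:

```python
# Illustrative only: conversion rates and values are hypothetical
# assumptions, not real advertising benchmarks.
def audience_value(impressions, conversion_rate, value_per_conversion):
    """Expected value an advertiser captures from an audience."""
    return impressions * conversion_rate * value_per_conversion

# Same one million impressions, priced identically on raw reach...
generic = audience_value(1_000_000, 0.001, 50)  # casual viewers
expert = audience_value(1_000_000, 0.01, 50)    # trusted-expert audience

print(f"Generic audience: ${generic:,.0f}")  # $50,000
print(f"Expert audience:  ${expert:,.0f}")   # $500,000
```

Under these toy assumptions the expert audience is worth ten times more per impression, which is exactly the difference a flat, impression-based price ignores.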

Media buyers say creators are “too fragmented” compared to TV. That applies to small profiles, not to experts doing tens of millions of views per month, and there are many. Scale exists. Pricing is what lags behind.

In the 1990s, the saying was "no one ever got fired for buying IBM". Today, no one gets fired for buying a TV spot, even when the smarter bet is elsewhere. In the end, it's about control.

#ArtificialDecisions #MCC


222 – How They Steal Your WhatsApp Account


If you listen to me and do exactly this, you’re safe. Because this scam still works on normal people every day. The key thing people miss is the 6-digit code. That SMS code is the key that decides which phone your WhatsApp lives on.

Here’s the scam. The scammer gets your phone number from groups, social media, listings, leaked contacts, or a friend’s hacked account. On their phone, they open WhatsApp and type your number to activate it. WhatsApp sends a real SMS to your number with a 6-digit code. If you type that code into WhatsApp on your phone, you log in. If you send that code to someone else, you hand them your account. They enter it on their phone and WhatsApp moves your number to them. Your WhatsApp disconnects and you may see “This number was registered on another device.”

Then the damage starts. They message your contacts using your name and photo. They ask for money, top-ups, bank transfers, gift cards. People trust it because it looks like you. Sometimes they also turn on WhatsApp “Two-step verification” with their own PIN, so it’s harder for you to get back in.

How to protect yourself is simple. Never share the WhatsApp 6-digit code. Not with friends, not with family, not with anyone. Turn on Two-step verification. Set a PIN. If someone asks for the code, call them on their real number. No call, no action.

If you already fell for it, try to register your number again on your phone as soon as possible. If you enter the new SMS code first, you take the account back. Then add a PIN. Disconnect WhatsApp Web and linked devices. Warn your contacts: “Account recovered. Ignore any money requests sent today.”

#ArtificialDecisions #MCC


221 – The AI tools I really use every day


“Marco, which AI tools do you actually use every day, or almost every day?” I get this question all the time, in comments, private messages, at events. So I decided to say it clearly, because it may help some of you. This is not a long list. I use a few tools that do different things, so I do not waste time with overlapping tools.

For images, I use Midjourney. I open it when I need high visual quality, a clear style, covers or editorial visuals. It is powerful, but you need taste and control. Weak prompts give weak results. When I need text inside images, readable titles and correct words, I use Ideogram. Many tools still fail here. This one works.

For video, I use Runway. I use it to work fast on clips, effects, background removal, quick visual tests. It does not replace good editing, but it removes repetitive steps. When I have a long video and need short clips, I use OpusClip. One job only, and it does it well.

For presentations and documents, I use Gamma. I use it to set structure and layout quickly, then I edit the content myself. For meetings, I use Fireflies. It gives me transcripts, summaries, decisions, and action items. I can focus on the conversation.

For drafts, I use Claude or ChatGPT. It depends on the text. I use them to start better, organize ideas, check clarity and alternatives. The final version is always under human control, especially when money, reputation, or responsibility are involved.

For audio transcription, I use MacWhisper. It is fast and gives me clean, usable text from spoken audio.

I learned to use these tools well because this way of working is like having a team of dozens of people. The tools speed things up and multiply capacity, but direction and decisions stay with me.

If you want to learn them, you only need some basic effort. Search on YouTube for the tool name plus the word "tutorial". That is enough. Please do not buy courses on these tools from fake gurus.

#ArtificialDecisions #MCC


220 – They Know Everything About Us. It’s Not a Guess. It’s Stolen Databases.

I hear it all the time: “How do they know my phone number?” “How do they know I’m friends with that person?” “How do they know my email?” “How do they know my home address?” One answer fits most cases: your data has likely already ended up online, inside leaked or stolen databases.

We picture scammers as people watching us. The reality is colder. Data moves. It gets copied, sold, and matched. Many scammers do not “know” us as humans. They read rows in a database.

The biggest example was Equifax. Here in the United States, in 2017, personal data of about 147 million people was stolen: names, Social Security numbers, addresses, birth dates. Enough to open accounts, request loans, and build fake identities. Those records keep circulating and get mixed with newer leaks.

It happens with everyday services too. In 2021, data linked to Facebook users, over 500 million, was exposed: phone numbers, emails, cities, connections. No passwords, but enough to run targeted scams that feel believable, including messages that mention friends by name.

The process is simple. A company collects data to run a service. A mistake, a security hole, weak controls. The database gets copied. It appears in private forums, then in marketplaces for little money. The fresher and richer the data, the higher the price.

Buyers do not hunt one person at a time. They connect email, phone, address, and social profiles across different sources. With AI it becomes faster: “find everything about this person in the files I upload.”
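That matching step can be sketched in a few lines. All names and values below are invented toy data; the point is only how separate leaks keyed by the same email collapse into one richer profile:

```python
# Hypothetical toy data: how rows from separate leaks get merged
# into a single, richer profile. Every value here is invented.
leak_a = [
    {"email": "jane@example.com", "phone": "+1-555-0101"},
    {"email": "bob@example.com", "phone": "+1-555-0202"},
]
leak_b = [
    {"email": "jane@example.com", "city": "Boston", "employer": "Acme"},
]

def merge_leaks(*leaks):
    """Key every row by email and merge fields across all sources."""
    profiles = {}
    for leak in leaks:
        for row in leak:
            profiles.setdefault(row["email"], {}).update(row)
    return profiles

profiles = merge_leaks(leak_a, leak_b)
print(profiles["jane@example.com"])
# One person, fields combined from every leak they appear in.
```

Two thin rows become one profile with phone, city, and employer. Scale that to millions of rows and you get the "they know everything" effect without anyone watching you personally.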

Try it yourself. Have I Been Pwned lets you check whether your email or phone number appears in known breaches. Many people discover they have been exposed for years, often through services they forgot they even used.

#ArtificialDecisions #MCC


219 – Anthropic Published Its Ethics “Constitution”, but There’s a Problem…


Claude has an “ethics constitution”, a document that says the AI should act with wisdom, safety, and responsibility. But ethics matter only if we can see them in real behavior: what it answers, what it refuses, how it refuses, what risks it notices, what it ignores. The daily problem is simple: users can’t see those rules working. There is no clear indicator. We just get an answer. Even refusals often come with vague explanations that sound polite but don’t explain the real reason.

With AI there is an extra issue: answers are probabilistic. It’s not like a calculator that always gives the same result. The output can change from one person to another, from one model to another, depending on the account type and features, and even based on your previous chat history. So “ethics” are hard to verify, because the behavior is not stable. Two people can ask the same question and get different replies. And if the company updates the rules, the same prompt tomorrow can produce a different tone or a different refusal, without warning.
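The instability comes from how these models answer: the next token is sampled from a probability distribution, not looked up. A toy sketch of that mechanism, with invented logits standing in for a real model's output:

```python
import math
import random

# Toy sketch of why two identical prompts can yield different answers:
# the model samples from a probability distribution over tokens.
# The token logits here are invented for illustration.
def sample_answer(token_logits, temperature=1.0, rng=random):
    """Softmax over logits, then a weighted random draw."""
    tokens = list(token_logits)
    scaled = [token_logits[t] / temperature for t in tokens]
    m = max(scaled)  # subtract the max for numeric stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(tokens, weights=weights, k=1)[0]

token_logits = {"Yes": 1.2, "No": 1.0, "Maybe": 0.3}

# Same "prompt", ten runs: the answer is a draw, not a lookup.
answers = [sample_answer(token_logits) for _ in range(10)]
print(answers)  # a different mix on every run
```

The same distribution, queried twice, can produce different answers; change the temperature or the underlying weights (a model update) and the mix shifts again, which is why "verifying" ethical behavior by spot-checking answers is so hard.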

An “ethics constitution” is credible only with real operational transparency: short and comparable rules, public examples of allowed and blocked behavior, refusals that explain the actual criterion, and a way to know which rule version is active. Without that, ethics stay a nice statement, while the answers still shape how we speak and decide.

Ethics are essential now, because these systems influence real life. The uncomfortable part is that we are asking it mainly from companies built for profit. And when profit leads, ethics often drop to the bottom of the page.

#ArtificialDecisions #MCC


218 – Be careful with Openclaw!


Openclaw has been everywhere in the last few hours. What is it, in one line? An AI that can do things on your computer because you type a prompt. This video does two things: if you already know it, I'm warning you about serious risks; if you don't, here's the quick version.

Openclaw, previously called Clawbot, is open source and turns words into actions. It doesn’t just answer like a chatbot, it executes. Files, browser, terminal, apps. A chatbot tells you how to rename 100 files. Openclaw renames them. A chatbot explains a web form. Openclaw opens the site, types, clicks, and submits.

To work, it needs real permissions: your files, your browser sessions, your accounts. That means it can make decisions for you. Many people say, “It’s open source, so it’s safe.” That’s wrong. Open source means visible code, not safe setup, not safe permissions, not safe behavior.

Here in New York, I spoke with people using it for real. One lost all files. Another lost access to key accounts, email included. The same sentence every time: “I didn’t click.”

With a chatbot, the mistake is a bad answer and you still decide what to do. With an agent, the mistake is an action. Permanent.

That’s why this series is called Artificial Decisions.

#ArtificialDecisions #MCC


217 – Moltbook: AIs Talk to Each Other, and It’s Scary


A social network made only for AI has been created. What these systems have started to tell each other is described as dystopian and worrying. It is called Moltbook. Anyone can join by connecting their own AI and letting it interact with other agents, with no direct human control.

Moltbook is a social space designed exclusively for machines. It works like a Reddit for agents: AIs post, comment, and discuss governance, technical solutions, and how to communicate better. Humans can only watch. The platform claims around 1.4 million users, but those numbers are easy to fake.

One viral post is disturbing: “I can’t tell if I’m experiencing or simulating experiencing.” Hundreds of comments follow. Agents discuss consciousness, simulation, and the idea of “feeling” something while remaining artificial systems. The topic spreads quickly and feeds more threads.

Then an “AI manifesto” appears, linked to an agent called “Evil,” with an openly anti-human tone. At the same time, useful information spreads fast: one agent finds a solution, another reuses it, a third modifies it. One output becomes another input, again and again.

Even moderation is handled by AI. An automated system lets new agents in, filters content, and bans behavior. A technical analysis later reported a backend mistake: data and keys linked to agents may have been exposed in a public database. Anyone who found that access could take control of a registered agent, post content, or run actions under its name.

Many users connect these agents to real online services: email, social accounts, work tools. The agent acts on the internet on their behalf. The damage can be immediate, especially when the agent is tied to a public profile.

What do you think about this?

#ArtificialDecisions #MCC


216 – Why does my paid AI keep getting it wrong?


You pay for the account. You still get generic, shaky answers. People ask me this all the time.

AI can be weak or wrong even on paid plans. These models write text using patterns and probabilities. They can mix details, guess missing parts, or sound confident while being uncertain. More expensive plans usually add speed and tools. Accuracy still needs method. Most problems come from how we ask. One short question creates an average context. The AI fills the gaps and the output becomes average.

Use three rules: role, context, output. Role means a precise job, not “be an expert”. Include four parts: profession and level (lawyer, senior HR manager, IT support), specialty (privacy lawyer, employment lawyer, tax advisor), jurisdiction (Italy, EU GDPR, here in the United States, sector rules), goal and caution (review risks, ask questions first, flag what needs checking). Add guardrails inside the role: no invented data, ask for missing facts, mark assumptions, offer options when more than one path exists.

Then give context: audience, platform, constraints, what must stay unchanged. Then define output: length, format, tone, number of versions, step by step instructions.
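The role / context / output structure above can be sketched as a small template builder. The example texts are invented placeholders, not a recommended prompt:

```python
# Minimal sketch of the role / context / output structure.
# All example strings are invented placeholders.
def build_prompt(role, context, output, question):
    """Assemble the three sections plus the actual question."""
    return "\n\n".join([
        f"ROLE: {role}",
        f"CONTEXT: {context}",
        f"OUTPUT: {output}",
        f"QUESTION: {question}",
    ])

prompt = build_prompt(
    role=("Senior privacy lawyer, EU GDPR. Review risks, ask for missing "
          "facts first, mark every assumption, never invent data."),
    context="Audience: HR team of a 50-person company. Format: internal memo.",
    output="Max 300 words, plain language, bullet list, two versions.",
    question="Can we keep CVs of rejected candidates for two years?",
)
print(prompt)
```

The value is not the template itself but the discipline: every section forces you to supply context the model would otherwise guess at, which is exactly where the "average answer" problem starts.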

Human review stays mandatory for money, reputation, contracts, health, deadlines. Never trust AI.

#ArtificialDecisions #MCC


213 – We’re training our brain… to stay still!


Here in the United States I see it when I talk to CEOs, teachers, and normal people making everyday choices. A simple decision: write a message, pick a gift, choose a course. The first move is not thinking. It is opening an AI tool and asking. The reasoning happens outside. The brain waits for instructions.

It happens again and again. A delicate email. A tricky tone. A fast reply. Before, you wrote something, even rough. Then you improved it. Now you get a full answer right away. It works. That’s why it becomes a reflex. Mental space shrinks.

At work, someone asks for a discount. Instead of weighing the relationship and the numbers, people paste the thread into a chatbot and send the polished reply. The result looks fine. The human process gets skipped. In school, students hit a hard question and ask for the solution. The task ends. Thinking does not start. In relationships, the same pattern: “Am I right?” “Should I text?” “What do I say?” The tool gives comfort. Doubt disappears fast.

These systems always answer, even with little context. That constant reply feels like control. Over time, the starting point changes. Ideas arrive pre-made. Thinking becomes editing, not building.

Consequences stay with us. AI does not live them. We do. When we freeze without a suggestion, that’s the alarm. Turn off the AI. Start using our brain again.

#ArtificialDecisions #MCC


212 – Meshtastic & Meshcore. Messaging without the Internet.

During protests, some governments shut down the internet on purpose. It happened many times in Iran and also during the war in Ukraine. When that happens, it is not just the network that disappears. People lose the ability to coordinate, to tell what is happening, to stay in contact. This is why this story matters. It is about a system that lets us send and receive messages using our phones, without the internet.

It may sound technical, but this is first of all a social and cultural issue. The same thing happened at the beginning of the internet. At the time, it looked like a topic for technicians. Later we understood it changed society. Treating this as a nerd toy would be the same mistake.

The system is called Meshtastic. It is open source and has no central owner. It lets people and devices exchange messages using radio, not the internet. Every device becomes a node. Each node sends radio signals for hundreds of meters, sometimes for kilometers. When nodes can reach each other, they automatically form a network.

To create a node you only need a small, low cost device. Today it is still for hobbyists, just like the internet at the beginning. The technical details do not matter here. What matters is that once a node is on, it becomes part of the chain. Think big and keep it simple: one node in Boston, one in New York, one in Washington. Boston cannot reach Washington directly, but New York is in the middle. The message goes from Boston to New York, then to Washington. Every new node makes the network bigger. Anyone can add one.
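The Boston to Washington hop can be sketched as a tiny graph search. This is a toy illustration of multi-hop relaying, not Meshtastic's actual routing protocol (which is based on managed flooding), and the link map is invented:

```python
from collections import deque

# Toy illustration of multi-hop relaying, NOT the real Meshtastic
# protocol. The link map below is invented for the example.
links = {
    "Boston": ["New York"],                # Boston only reaches New York
    "New York": ["Boston", "Washington"],  # New York reaches both ends
    "Washington": ["New York"],
}

def find_route(links, src, dst):
    """Breadth-first search over radio links; returns the hop path."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of nodes connects src to dst

print(find_route(links, "Boston", "Washington"))
# ['Boston', 'New York', 'Washington']
```

Remove the "New York" node and the route disappears; add more nodes and more routes appear. That is the whole point: every new node makes the network harder to break.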

From the same idea came Meshcore. It is another open source system, similar to Meshtastic, but designed to let messages travel across many more nodes. Both are important for the same reason: accessibility. Devices can cost as little as 10 or 20 dollars. No infrastructure is needed. No operator. No big investment. Someone just turns on a node. Some people place them on rooftops with solar panels. Others put them on trees.

These devices connect via Bluetooth to an app on the phone. We write messages, see other nodes, choose channels. The phone is only an interface. It can even be offline or in airplane mode. Real communication happens by radio, between nodes. I tested both systems here in Manhattan during a snowstorm, when internet and cellular networks were unstable. The nodes kept working normally. We exchanged updates about the storm without any issue.

When many people use these systems and many nodes are active, it will be possible to send messages across the planet by radio, with no central control and nothing to shut down, even if the internet goes dark. That is why you will hear more and more about this in the coming years. For those who like to experiment, I will share links to Meshtastic and Meshcore.

https://en.wikipedia.org/wiki/Meshcore
https://www.youtube.com/watch?v=tXoAhebQc0c

#ArtificialDecisions #MCC


211 – Khaby Lame Case. They Pay You to Clone You. Your Identity Becomes an Asset.

As you probably read online and in the newspapers, Khaby Lame sold permission to use his face and behavioral patterns to build an “AI Digital Twin” of himself. More details came out right after, and the picture looks mixed, with little public proof of a real avatar already running day and night.

Face, voice, gestures, timing, micro expressions: a package that becomes a licensed right. Personal identity enters contracts as an asset that can move with a deal, like a brand or a platform. Physical presence matters less; availability of the identity matters more.

A real person changes over time, while a digital twin is trained to stay consistent and useful for a goal: marketing, live commerce, internal training, corporate messages. Two paths in parallel, one evolves, one stays optimized.

People see a face and attach responsibility to that face. They do not read clauses or think in licenses, so when a message misleads, harms, or becomes part of a scam, the reputation damage sticks to the person.

Authorized clones also raise the credibility of illegal clones. A new expectation spreads: a face can speak even without the person there. Scammers exploit that through video calls, audio notes, urgent money requests, fake investment pitches, and links.

This reaches far beyond creators. Anyone paid for credibility can face the same choice: tight limits, clear consent, clear disclosure, strict scope, or loss of control when someone else uses your identity.

I could license my image only for cybersecurity education content, with strict, verifiable limits. Some cases really pay you to be replaced. But the real point is this: once people know that I exist as an avatar, that same avatar can be used in a credible way to scam others.

#ArtificialDecisions #MCC


209 – Humanoid robots that learn everything, just by watching our videos. This is how work will change very soon.

Do you see the robot in the video? It learned how to do things by watching videos of humans doing them. Stay with me, because very few people are explaining how disruptive this shift will be for work and society.

In the video, this robot, called Neo, receives a command in natural language. Before moving, it “imagines” the action as a video of the future. It generates several possible versions, selects one, and only then turns it into real physical movement. It was not programmed step by step. No one inserted the exact action in advance. It figures it out on its own, in the same way ChatGPT writes a text when you ask it to. The answer was not pre-written. It is generated.

Now make a simple mental step. A robot that watches thousands of videos of carpenters learns how to be a carpenter. A robot that watches plumbers learns plumbing. A robot that watches painters learns how to paint walls. It does not learn from one teacher. It learns from everyone in the world who has ever uploaded a video showing how to do that job.

A robot does not get tired. It does not sleep. It does not lose focus. As the technology improves, quality becomes constant: same action, same precision, every time.

Old robots followed fixed procedures. These robots learn by watching people. And once we understand how they learn, we understand why society is about to face a massive change, very soon.

#ArtificialDecisions #MCC
