248 – How AI Works, in a Simple Way


I’ll explain, simply, how LLMs “think.” Stay with me. It’s dense, but this is the easiest version that still stays accurate.

We write a prompt. One sentence. The model breaks it into tiny pieces called tokens. Sometimes a token is a full word; often it's part of one. That matters because it determines how much text fits in the context window, and it varies by language. Then each token becomes numbers. Lots of numbers. Coordinates in a huge space. To an LLM, text is vectors. And it runs repeated operations on those vectors, mainly matrix multiplications. Computation. Just computation.
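The idea can be sketched in a few lines. This is a toy illustration, not a real tokenizer: the vocabulary, the greedy matching, and the tiny 4-number vectors are all invented for the example. Real models use learned vocabularies of tens of thousands of tokens and vectors with thousands of dimensions.

```python
import random

# Tiny hand-made vocabulary: token piece -> token id.
VOCAB = {"un": 0, "break": 1, "able": 2, " ": 3, "it": 4, "is": 5}

def tokenize(text):
    """Greedy longest-match split of text into vocabulary pieces."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

# One word can become several tokens:
print(tokenize("unbreakable"))  # ['un', 'break', 'able']

# Each token id then indexes a row in an embedding table: a vector.
random.seed(0)
EMBED = [[random.uniform(-1, 1) for _ in range(4)] for _ in VOCAB]
vectors = [EMBED[VOCAB[t]] for t in tokenize("unbreakable")]
print(len(vectors), len(vectors[0]))  # 3 tokens, 4 numbers each
```

From here on, the model never sees letters again: everything downstream operates on those rows of numbers.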

Next, the tokens get linked to each other. Some weigh more, some less. The model assigns numeric weights across the text, based on what’s in front of it right now. It works. That’s why it fools us. It looks like understanding. After a few passes, it does the key step: it produces probabilities for the next token. It picks one, appends it, recalculates, and repeats. One token at a time. Dozens, hundreds of times.
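That loop can be sketched directly. The "model" below is a stand-in: it returns made-up scores instead of running attention and matrix multiplications, but the outer mechanics are the real ones: score every candidate token, turn scores into probabilities, pick one, append it, recalculate.

```python
import math
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def softmax(scores):
    """Turn arbitrary scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token_scores(context):
    """Stand-in for the real network: score each vocab token given the
    context. A real model computes these scores with attention weights
    and matrix multiplications over the token vectors."""
    random.seed(len(context))  # deterministic toy scores
    return [random.uniform(-1, 1) for _ in VOCAB]

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):                      # one token at a time
        probs = softmax(next_token_scores(tokens))
        # Greedy choice: take the single most likely token.
        best = max(range(len(VOCAB)), key=lambda i: probs[i])
        tokens.append(VOCAB[best])                # append, then recalculate
    return tokens

out = generate(["the", "cat"], 4)
print(out)  # starts with ['the', 'cat'], then 4 predicted tokens
```

Note what is absent: there is no step where the loop checks whether the chosen token is true. It only checks which token is most probable.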

The “intelligence” feeling comes from continuity. Correct grammar, consistent tone, smooth flow. But the engine is prediction. If a continuation sounds plausible because it matches patterns it has seen, it may choose it even when it’s wrong.

So yes, it can write perfect sentences with incorrect content. If we don’t give strong constraints or reliable documents, it fills gaps with what sounds best.

There’s also a setting called temperature, which controls how the model picks among candidate tokens. Low temperature means safer, more predictable choices; higher temperature means more variation.
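Mechanically, temperature just divides the scores before they become probabilities. The numbers below are toy scores chosen for the demonstration:

```python
import math
import random

def sample_with_temperature(scores, temperature, rng):
    """Divide scores by temperature before softmax: low temperature
    sharpens the distribution toward the top choice, high temperature
    flattens it so other tokens get picked more often."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(scores)), weights=probs, k=1)[0]

scores = [2.0, 1.0, 0.2]  # toy scores for three candidate tokens
rng = random.Random(0)

low = [sample_with_temperature(scores, 0.1, rng) for _ in range(1000)]
high = [sample_with_temperature(scores, 5.0, rng) for _ in range(1000)]

# Low temperature almost always picks token 0; high temperature spreads out.
print(low.count(0) / 1000)   # close to 1.0
print(high.count(0) / 1000)  # much lower, closer to an even split
```

Same model, same scores; only the dial changes how adventurous the output feels.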

When we ask “how does it know?”, often it doesn’t. It has seen similar patterns. It has learned how sentences usually continue on that topic. And the more we use it for money, health, contracts, or reputation, the more we should remember what it is: a machine that predicts words.

#ArtificialDecisions #MCC
