Machine Learning · BETA

Clusterflick

Not all AI is GenAI. Clusterflick is a K-Nearest Neighbors physics board game: place labelled training data, flick photo tokens onto the board, and classification happens by proximity.

Sister game to FuzzNet Labs. Where FuzzNet opens up the GenAI black box, Clusterflick shows you the machine learning outside GenAI: transparent, distance-based, and with no black box at all.

🌐

Beyond the Black Box

Machine learning is a whole family - GenAI is just one branch

📍

Distance, Not Probability

K-NN classifies by similarity to known examples - no neural net required

🧪

Data IS the Model

The training set isn't compressed into weights - every sample stays visible

🧭

You are AI-Fluent

Knowing which kind of ML fits the job is what real AI literacy looks like

2–4 Players
~30 Min · 6 Rounds
Ages 14+

Classify by similarity, not by neural network.

Most AI literacy training stops at GenAI. But "machine learning" covers a whole family of algorithms - K-Nearest Neighbors, decision trees, clustering, regression, support vector machines - most of which look nothing like a transformer. Clusterflick puts you inside one of the simplest and most transparent: K-NN. Place the labelled training data. Flick a new sample onto the board. Where it lands - and which neighbours it lands closest to - decides what it is.

  • 🌐 Beyond the Black Box

    When people say "AI" today they almost always mean GenAI - large neural networks like ChatGPT or image generators. But machine learning is a whole family of algorithms, and most of them aren't black boxes at all. K-Nearest Neighbors, decision trees, clustering, linear regression - these are all "AI" too, and many of them produce predictions you can actually inspect and explain. Clusterflick lets you experience one of those algorithms directly. Once you see how K-NN works, the GenAI hype starts to feel like one tool in a much larger toolbox.

  • 📍 Distance, Not Probability

    K-NN doesn't "learn" anything in the neural-network sense. There's no training run, no gradient descent, no weights. To classify a new input, K-NN just measures the distance to every labelled example it knows about, looks at the k closest ones, and takes a vote. That's it. In Clusterflick, that "distance" is literal - physical distance on the board between your flicked token and the sample squares around it. Once this distance-based logic clicks, you're halfway to an intuition for most classical ML.

  • 🧪 Data IS the Model

    In a neural net, the training data gets compressed into millions of weights and effectively disappears - you can't point at any one number and say "this is what the model learned about cats." K-NN is the opposite extreme: the training data is the model. Every labelled sample stays visible, and predictions are made by direct comparison against them. That makes the consequences of bad data immediately obvious - mislabel one sample and you can watch its bad influence spread to everything that lands near it. It's a powerful, tactile lesson about why data quality matters more than algorithmic cleverness in most real ML projects.

  • 🧭 You are AI-Fluent

    Real AI fluency isn't being able to use ChatGPT. It's knowing which kind of model fits which kind of problem - and pushing back when someone reaches for GenAI on a problem that a 30-line classical ML algorithm would solve faster, cheaper, and more reliably. After Clusterflick, "AI" stops meaning "the magic chatbot" and starts meaning a family of techniques with very different tradeoffs. That's the difference between being talked at by AI vendors and asking the right questions.
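The distance-then-vote procedure described under "Distance, Not Probability" really is the whole algorithm. Here is a minimal Python sketch of it - all names (`knn_classify`, `board`, the animal labels) are illustrative, not part of Clusterflick itself:

```python
# Minimal K-NN classifier: no training run, no weights - just distance + vote.
from collections import Counter
import math

def knn_classify(samples, point, k=3):
    """samples: list of ((x, y), label) pairs; point: (x, y) to classify."""
    # Measure the distance from the new point to every labelled sample.
    by_distance = sorted(samples, key=lambda s: math.dist(s[0], point))
    # Look only at the k closest neighbours and take a majority vote.
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# A toy "board": two clusters of labelled training samples.
board = [((0, 0), "cat"), ((1, 0), "cat"), ((0, 1), "cat"),
         ((5, 5), "dog"), ((6, 5), "dog"), ((5, 6), "dog")]

print(knn_classify(board, (1, 1)))  # lands in the cat cluster -> "cat"
```

Note there is no `fit()` step anywhere: classifying a point is just comparing it against the stored samples, which is exactly what a flicked token does on the board.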
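The "Data IS the Model" point can be shown just as directly. In this hedged sketch (hypothetical names, k=1 for clarity), flipping a single sample's label immediately flips every prediction that lands near it - there are no weights for the error to hide in:

```python
# In K-NN the training set *is* the model, so one mislabelled sample
# visibly corrupts every prediction that lands near it.
from collections import Counter
import math

def knn_classify(samples, point, k=1):
    nearest = sorted(samples, key=lambda s: math.dist(s[0], point))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

clean = [((0, 0), "cat"), ((5, 5), "dog")]
dirty = [((0, 0), "dog"), ((5, 5), "dog")]  # same board, one label flipped

print(knn_classify(clean, (1, 1)))  # "cat"
print(knn_classify(dirty, (1, 1)))  # "dog" - the bad label spreads
```

In a neural net the same labelling error would dissolve into millions of weights; here you can point at the exact sample that caused the wrong answer.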

Pair Clusterflick with FuzzNet Labs for a complete picture of ML.

Use Clusterflick to teach the variety of machine learning that isn't GenAI - classical, transparent, distance-based methods. Then use FuzzNet Labs to teach the neural-network side: layered architectures, training cycles, and why GenAI is the way it is. Together they show participants that "AI" is bigger than the chatbot they used yesterday.

🎯 Suggested Workshop Flow

Open with Clusterflick: Establish that ML is a family of algorithms - and that not all of them are mysterious. Participants build intuition for distance-based classification and the importance of training data placement.

Bridge to FuzzNet Labs: Then ask the contrast questions: what if your data has thousands of dimensions? What if "distance" stops being meaningful? That's where neural networks come in.

Close with reflection: Which kind of model would you reach for on the problem your team is actually facing? Most of the time, the answer isn't GenAI.

🧰 Run only the sections you need

Clusterflick's tutorial is a self-paced overlay you can have participants run before or during the session. The Learn How to Play button walks through K-NN concepts and gameplay together, so you don't need to lecture before play begins.

🎮
You've been invited to a room. Enter your name and pick a role in the Join card below.

Create Game

Share the room code to invite players, or play solo vs a bot.

Player

Join a Game

Have a room code? Enter it below to join.

Player
Room Code
----
Share the code or link with other players to join
Observer Mode - watching only
Goal: 8+ confidence for all 6 animals to win (6 rounds max).
Your Turn: 5 flicks from your corner zone - drag back like a slingshot.
📷 Identify: Token stays where it lands. Scores 5 / 3 / 2 / 1 pt by confidence band.
🎲 Add Samples: Once per round. If token lands in an animal zone, place that many sample squares in that zone, then the token is removed. After placing, you get a free Reflick of any identified token.
Sample Walls: Sample squares block tokens - flicked discs bounce off them.
Sample Bonus: +1 confidence per matching-animal square touching your token or in the same band.
Unclassified: Centre = 0 pts.
Doubled: One animal's points 2× each round (gold ⭐).
Board clears after every round.
----
Round Schedule
Your Turn
Waiting for game to start…
Confidence Scores
Step 1 of 8

End Game?

This will cancel the game for all players. Are you sure?

Game Over!