1. Set the Stage - AI fundamentals + how to play FuzzNet Labs. (10 min, or assign offline)
2. Host the Game - Get players into rooms and compete in real time. (40 min)
3. Reflect Together - Three structured questions connecting the game to real-world AI decisions. (10–20 min)

Step 1 - Set the Stage  10 min · or assign offline

The FuzzNet Labs interactive tutorial covers the fundamental AI concepts players need for context - "Data is Destiny," why 100% accuracy is impossible, backpropagation, and how neural networks connect to LLMs like ChatGPT - plus how to play the game. No prior knowledge is needed. You have two ways to run it.

Option A - Recommended
Independent Exploration
Send participants the tutorial link before the session and ask them to complete it on their own. This gives everyone uninterrupted time to read, click, and process at their own pace. It also frees your live session time entirely for the game and debrief.
Send the link 24 hours in advance with a one-line note: "Complete this 10-minute interactive tutorial before our session. No prep needed, just click through."
Option B - Live Screenshare
Facilitated Walkthrough
Screenshare the tutorial and narrate as you click through it. This works well when participants are less tech-familiar or when you want to add commentary specific to your team or industry context.
Pause at each concept step and tie it to a real example your audience recognizes. An HR leader connects data bias to hiring decisions. An executive connects the accuracy trade-off to vendor AI claims.
Step 1 Action
Open the FuzzNet Labs interactive tutorial. Send this link to participants in advance, or share your screen and walk through it together during the session.
Launch the Interactive Demo

Tailoring Your Commentary

If you are running the screenshare option, watch for the moments in the tutorial that invite audience-specific commentary, and plan in advance what you will say for the groups in your room.

Step 2 - Host the Game  40 min

The game room is created live, during the session - not shared in advance. Each room generates a unique code that expires, so participants join the specific session you create in the moment. How you set it up depends on how many people are playing.

Up to 4 Players
Single Room
The host creates a room directly in the FuzzNet Labs lobby. A unique room code appears on screen. Share that code in your video call chat and players enter it to join. Everyone is in one game together.
Passive observers can watch over a player's shoulder or ask you to screenshare one of the player screens. The game board is easy to follow as a spectator.
5+ People / Multiple Tables
Event Hub - Multiple Rooms
For larger groups, use the Event Hub to create multiple rooms at once. Each room gets its own unique code. Assign players to rooms and share each group's code directly - via breakout room chat or a shared doc. Groups play simultaneously, then reconvene to compare results.
Split into groups of 2-4. Mixing experience levels within a room usually produces richer debrief conversations than separating them.
Step 2 Action - Do this live, during your session
For a single group (up to 4): open FuzzNet Labs and create a room. For larger groups: use the Event Hub to create multiple rooms at once and assign players to each.
Create a Room - FuzzNet Labs
Agenda at a glance:
- Set the Stage - 10 min (or assign offline)
- Host the Game - 40 min (rooms + play)
- Reflect Together - 10–20 min (use what time remains)
Total: 1 hour

Step 3 - Reflect Together  10–20 min

The debrief is where the game experience converts into durable knowledge. Use these three questions in order. Each one moves participants from the game world into the real world. The answers below show you what strong comprehension looks like, and what common misunderstandings to listen for so you can address them in the moment.

Question 1 - What does "Data is Destiny" mean for an AI system - and where does that show up in the real world?
Strong Answer (shows understanding of the concept):
"The model can only learn from what it's trained on. If your training data is imbalanced - say, 80% dogs and 20% everything else - the model builds a world where most things are dogs. It's not broken. It's doing exactly what the data taught it. A real-world example is a hiring algorithm trained mostly on resumes from one demographic: it learns to associate that demographic with success, not because it's biased in intent, but because the data was biased in composition. The quality, balance, and diversity of your data sets the ceiling for what the AI can ever do."
Common Misunderstanding (what you might hear):
"It means if you give the AI enough data, it will eventually figure everything out on its own."
How to respond:
Volume alone does not fix a biased training set. A million dog photos still teaches the model that fur plus four legs equals dog. More of the same data reinforces the same blind spots. What changes the outcome is diversity and balance in the training set - making sure the model has seen enough examples of everything it needs to recognize. In the game, the balanced data row in the tutorial showed one of each animal for a reason.
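If your group includes anyone comfortable with code, this point can be made concrete in a few lines. The sketch below uses made-up animal labels (not the game's actual data) and a trivial "majority class" model to show that tenfold more of the same imbalanced data leaves the blind spot untouched:

```python
from collections import Counter

def majority_baseline(labels):
    """A 'model' that always predicts the most common training label."""
    return Counter(labels).most_common(1)[0][0]

# Imbalanced training set: 80% dogs, 20% everything else.
train = ["dog"] * 80 + ["cat"] * 10 + ["bird"] * 10
model = majority_baseline(train)  # learns to say "dog"

# Tenfold more of the SAME data changes nothing.
more_of_same = train * 10
assert majority_baseline(more_of_same) == "dog"

# On a balanced test set it is right on dogs and wrong on everything else.
test = ["dog", "cat", "bird"] * 10
accuracy = sum(model == y for y in test) / len(test)
print(f"balanced-test accuracy: {accuracy:.0%}")  # 33%
```

What fixes the baseline is not more rows but a rebalanced training set - exactly the "one of each animal" row from the tutorial.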
Question 2 - Why is 100% accuracy impossible for any real AI classification problem - and what happens when you try to push toward it?
Strong Answer (shows understanding of the concept):
"Because there is always a trade-off between false positives and false negatives. If you tune a model to catch every possible positive case, you increase false positives. If you tune it to minimize false alarms, you start missing real cases. There is no threshold where both are zero for a complex problem. Real-world data has noise, edge cases, and overlap that no model can cleanly separate. The goal is to find the right trade-off for what matters in your specific context, not to chase a perfect score that does not exist in the wild."
Common Misunderstanding (what you might hear):
"With enough training and computing power, you can eventually get to 100%."
How to respond:
Pushing toward 100% on your training data actually causes overfitting. The model memorizes the training set so precisely that it fails on any new data it has not seen before. In the game, over-training a single path forces you to block one of your own edges - the model becomes so committed to what it knows that it cannot adapt. A model that scores 98% in training and 52% on real-world data is not a powerful model. It is a memorized model. Real accuracy is measured on new inputs, not the data you trained it on.
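A memorized model is easy to fake in code. This sketch (made-up examples, not a real classifier) gets a perfect training score by simply storing the training set, then fails on anything new:

```python
# A "model" that memorizes its training set: flawless on training data,
# useless on inputs it has never seen -- overfitting in its purest form.
train = {"golden retriever": "dog", "tabby": "cat", "parrot": "bird"}

def memorizer(x):
    return train.get(x, "unknown")

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
print(train_acc)             # 1.0 -- a perfect training score
print(memorizer("beagle"))   # "unknown", even though a beagle is a dog
```

Real models fail less abruptly than a lookup table, but the principle is the same: a perfect training score says nothing about new inputs.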
Question 3 - How does playing FuzzNet Labs connect to the way real AI systems are built and adjusted?
Strong Answer (shows understanding of the concept):
"The game maps directly to the AI development cycle. Designing the network is the architecture phase. Placing data tokens to train individual paths is the training phase. Running a test to see which animals get classified correctly is evaluation. And when a test fails and you go back to adjust your paths and token placements, that is backpropagation in miniature - sending the error signal backward through the network to update what is weighted. The competition element models something real too: in practice, speed to a working model matters. The first team to test and validate an AI system captures the advantage."
Common Misunderstanding (what you might hear):
"The game is a metaphor, but real AI is completely different math. It doesn't actually look like this."
How to respond:
The math behind real neural networks is more complex, but the logic is the same. Real networks have input nodes, hidden layers, and output classifications - exactly the structure the game uses. Backpropagation really does work by sending error signals backward through the network to adjust connection weights. LLMs like ChatGPT are built on this same foundation - they are a specific architecture called transformers, trained on massive datasets, but the underlying learning mechanism of forward pass, error measurement, and backward adjustment is what the game models. The scale is different. The principles are not.
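If someone wants proof that "forward pass, error measurement, backward adjustment" really is a loop you can write down, here is a deliberately tiny sketch: one weight, plain gradient descent, learning the made-up target relationship y = 2x:

```python
# A one-weight "network" running the same loop the game models:
# forward pass -> measure error -> push the error backward to adjust the weight.
weight = 0.0
learning_rate = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2 * x

for epoch in range(50):
    for x, target in data:
        prediction = weight * x             # forward pass
        error = prediction - target         # how wrong were we?
        gradient = error * x                # backward pass: d(error^2 / 2)/d(weight)
        weight -= learning_rate * gradient  # update the connection weight

print(round(weight, 3))  # 2.0 -- the network has learned the relationship
```

Real networks repeat this with millions of weights across many layers, but every one of them is updated by the same error-driven rule shown here.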

A Note on Facilitation

You do not need to be an AI expert to run this workshop effectively. The game carries the conceptual weight. Your job is to create the conditions for good conversation, not to be the authority in the room. If a participant raises a question you cannot answer, the correct response is: "I don't know - what does everyone else think?" Peer discussion after a game session produces stronger retention than a facilitator explanation.

The most common mistake facilitators make is over-explaining before the game. Resist it. Let participants be confused for the first few minutes. Confusion followed by discovery is one of the most powerful learning sequences that exists. The tutorial exists to prevent the kind of confusion that is just frustrating. The game creates the kind of confusion that is productive.

Ready to Run Your Workshop?

FuzzNet Labs is available now. Grab a copy, share the link, and run your first session this week. Setup takes under 10 minutes.

Get FuzzNet Labs · Preview the Tutorial