A step-by-step facilitator guide for running a live session with FuzzNet Labs. From the interactive demo to the final reflection - everything you need in one place.
This is a one-hour experience. The structure below fits that window: 10 minutes to set the stage (or assign it offline before the session), 40 minutes to get everyone into rooms and play, and 10-20 minutes to debrief - the longer end if you assigned the tutorial offline. Follow the three stages and your participants will leave understanding how AI actually works.
1
Set the Stage
AI fundamentals + how to play FuzzNet Labs.
10 min · or assign offline
2
Host the Game
Get players into rooms and compete in real time.
40 min
3
Reflect Together
Three structured questions connecting the game to real-world AI decisions.
10–20 min
Step 1 - Set the Stage 10 min · or assign offline
The FuzzNet Labs interactive tutorial covers both the AI fundamentals that give the game context - Data is Destiny, why 100% accuracy is impossible, backpropagation, and how neural networks connect to LLMs like ChatGPT - and how to play the game. No prior knowledge is needed. You have two ways to run it.
Option A - Recommended
Independent Exploration
Send participants the tutorial link before the session and ask them to complete it on their own. This gives everyone uninterrupted time to read, click, and process at their own pace. It also frees your live session time entirely for the game and debrief.
Send the link 24 hours in advance with a one-line note: "Complete this 10-minute interactive tutorial before our session. No prep needed, just click through."
Option B - Live Screenshare
Facilitated Walkthrough
Screenshare the tutorial and narrate as you click through it. This works well when participants are less comfortable with technology, or when you want to add commentary specific to your team or industry context.
Pause at each concept step and tie it to a real example your audience recognizes. An HR leader connects data bias to hiring decisions. An executive connects the accuracy trade-off to vendor AI claims.
Step 1 Action
Open the FuzzNet Labs interactive tutorial. Send this link to participants in advance, or share your screen and walk through it together during the session.
If you are running the screenshare option, here are the moments in the tutorial that invite audience-specific commentary and what to say for different groups:
1
Data is Destiny (Step 1 of the tutorial)
For business leaders: "Think about your CRM data. If your historical deal data is mostly from one industry or one customer size, any AI system you build on it will only really understand that segment." For HR: "If most of your promoted employees in the training data are from one background, the model learns to associate that background with success."
2
100% Is Not Possible (Step 2 of the tutorial)
For security teams: "This is your false positive vs false negative problem in spam filtering or threat detection. If you tune it to catch everything, you block legitimate email. If you tune it to avoid false alarms, some threats get through. There is no setting where both are zero." For executives: "When a vendor promises their AI model is 99.9% accurate, ask: accurate on what data, under what conditions, and what is the cost of the 0.1%?"
3
Backpropagation and Neural Networks (Steps 3-4)
For non-technical audiences: "You do not need to understand the math. What matters is this: the model is adjusting itself based on what went wrong. It gets feedback, and it updates. That loop is what makes AI a learner rather than a rule-follower." For tech teams: "The game uses this mechanic directly. When a test fails, you go back and adjust the path. That is gradient descent made tangible."
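If a tech-team participant wants to see that feedback loop as code, a minimal one-parameter gradient descent sketch makes the "get feedback, update" cycle concrete. The numbers here are invented for illustration - this is not the game's actual mechanics:

```python
# Minimal sketch: gradient descent nudging a single weight.
# The "model" is prediction = w * x; we want it to output 10.0 when x is 2.0.
def loss(w, x=2.0, target=10.0):
    # Squared error between the model's prediction and the target
    return (w * x - target) ** 2

def gradient(w, x=2.0, target=10.0):
    # Derivative of the loss with respect to the weight w
    return 2 * x * (w * x - target)

w = 0.0                      # untrained starting weight
for step in range(50):
    w -= 0.05 * gradient(w)  # feedback: adjust in the direction that reduces error
```

The weight settles near 5.0, where the prediction w × 2.0 matches the target of 10.0 - the same "something went wrong, adjust, try again" loop the tutorial describes.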
Step 2 - Host the Game 40 min
The game room is created live, during the session - not shared in advance. Each room generates a unique code that expires, so participants join the specific session you create in the moment. How you set it up depends on how many people are playing.
Up to 4 Players
Single Room
The host creates a room directly in the FuzzNet Labs lobby. A unique room code appears on screen. Share that code in your video call chat and players enter it to join. Everyone is in one game together.
Passive observers can watch over a player's shoulder or ask you to screenshare one player's screen. The game board is easy to follow as a spectator.
5+ People / Multiple Tables
Event Hub - Multiple Rooms
For larger groups, use the Event Hub to create multiple rooms at once. Each room gets its own unique code. Assign players to rooms and share each group's code directly - via breakout room chat or a shared doc. Groups play simultaneously, then reconvene to compare results.
Split groups of 2-4. Mixing experience levels within a room usually produces richer debrief conversations than separating them.
1
Create the room during the session
Open FuzzNet Labs while on the call. Create a room and share the unique code via the video call chat. Players click the link or enter the code to join. Do not pre-share a generic game link - the room code is what connects everyone to your specific live session.
2
Brief the rules in 90 seconds
Once everyone is in the lobby, give the one-sentence summary: "You are building neural network paths to classify animals. Place nodes, train with data tokens, test your model, and score points by testing animals before your opponents do." That is all they need. Resist the urge to over-explain - the game teaches the rest through play.
3
Run the game and observe
Once the game starts, your job is to watch. Note who trains aggressively early, who focuses on blocking opponents, and who tests before feeling fully ready. These patterns map directly to how people manage uncertainty and risk in real work - they are your best debrief material.
4
Call time and capture the scoreboard
At the end, ask someone to screenshot the final scoreboard. The scores matter less than the decisions that produced them. "You were leading after round two - what changed?" is a richer question than "who won?"
Step 2 Action - Do this live, during your session
For a single group (up to 4): open FuzzNet Labs and create a room. For larger groups: use the Event Hub to create multiple rooms at once and assign players to each.
Step 3 - Reflect Together 10–20 min
The debrief is where the game experience converts into durable knowledge. Use these three questions in order. Each one moves participants from the game world into the real world. The answers below show you what strong comprehension looks like, and what common misunderstandings to listen for so you can address them in the moment.
1
What does "Data is Destiny" mean for an AI system - and where does that show up in the real world?
Strong Answer
Shows understanding of the concept
"The model can only learn from what it's trained on. If your training data is imbalanced - say, 80% dogs and 20% everything else - the model builds a world where most things are dogs. It's not broken. It's doing exactly what the data taught it. A real-world example is a hiring algorithm trained mostly on resumes from one demographic: it learns to associate that demographic with success, not because it's biased in intent, but because the data was biased in composition. The quality, balance, and diversity of your data sets the ceiling for what the AI can ever do."
Common Misunderstanding
What you might hear
"It means if you give the AI enough data, it will eventually figure everything out on its own."
How to respond
Volume alone does not fix a biased training set. A million dog photos still teaches the model that fur plus four legs equals dog. More of the same data reinforces the same blind spots. What changes the outcome is diversity and balance in the training set - making sure the model has seen enough examples of everything it needs to recognize. In the game, the balanced data row in the tutorial showed one of each animal for a reason.
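If you want a concrete demonstration on hand for this exchange, here is a small illustrative sketch - the labels and counts are invented, not from the game. A naive "model" that simply learns the most common label looks accurate on its own skewed data, and feeding it more of the same data changes nothing:

```python
from collections import Counter

# A naive "model" that learns only the most common label it was trained on
def train_majority_model(labels):
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted_label, labels):
    # Fraction of examples this one-answer model gets right
    return sum(1 for y in labels if y == predicted_label) / len(labels)

imbalanced = ["dog"] * 80 + ["cat"] * 10 + ["bird"] * 10
model = train_majority_model(imbalanced)               # learns "dog"

# Ten times more of the same skewed data teaches the same blind spot
model_more_data = train_majority_model(imbalanced * 10)

balanced = ["dog"] * 10 + ["cat"] * 10 + ["bird"] * 10
score_on_imbalanced = accuracy(model, imbalanced)      # looks impressive
score_on_balanced = accuracy(model, balanced)          # reveals the blind spot
```

The majority model scores 80% on its own skewed data but only 33% on a balanced set - and ten times the volume of the same data learns exactly the same thing. What changes the outcome is balance, not volume.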
2
Why is 100% accuracy impossible for any real AI classification problem - and what happens when you try to push toward it?
Strong Answer
Shows understanding of the concept
"Because there is always a trade-off between false positives and false negatives. If you tune a model to catch every possible positive case, you increase false positives. If you tune it to minimize false alarms, you start missing real cases. There is no threshold where both are zero for a complex problem. Real-world data has noise, edge cases, and overlap that no model can cleanly separate. The goal is to find the right trade-off for what matters in your specific context, not to chase a perfect score that does not exist in the wild."
Common Misunderstanding
What you might hear
"With enough training and computing power, you can eventually get to 100%."
How to respond
Pushing toward 100% on your training data actually causes overfitting. The model memorizes the training set so precisely that it fails on any new data it has not seen before. In the game, over-training a single path forces you to block one of your own edges - the model becomes so committed to what it knows that it cannot adapt. A model that scores 98% in training and 52% on real-world data is not a powerful model. It is a memorized model. Real accuracy is measured on new inputs, not the data you trained it on.
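For a sceptical participant, the trade-off can be made tangible with a small sketch. The spam scores below are invented for illustration: because the two score distributions overlap, sweeping every possible threshold never finds a setting where both error counts are zero:

```python
# Hypothetical classifier scores: higher = more spam-like.
# The distributions overlap (one spam scores 0.4, one real email scores 0.6),
# so no threshold separates them cleanly.
spam = [0.9, 0.8, 0.7, 0.6, 0.4]   # true spam
ham  = [0.1, 0.2, 0.3, 0.5, 0.6]   # legitimate mail

def errors(threshold):
    false_negatives = sum(1 for s in spam if s < threshold)   # spam that gets through
    false_positives = sum(1 for s in ham if s >= threshold)   # real mail blocked
    return false_negatives, false_positives

# Sweep thresholds from 0.0 to 1.0 and find the lowest total error count
best = min(sum(errors(t / 10)) for t in range(11))
```

The best any threshold achieves here is two mistakes - you can choose which kind of mistake to make, but not a setting where both counts are zero.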
3
How does playing FuzzNet Labs connect to the way real AI systems are built and adjusted?
Strong Answer
Shows understanding of the concept
"The game maps directly to the AI development cycle. Designing the network is the architecture phase. Placing data tokens to train individual paths is the training phase. Running a test to see which animals get classified correctly is evaluation. And when a test fails and you go back to adjust your paths and token placements, that is backpropagation in miniature - sending the error signal backward through the network to update what is weighted. The competition element models something real too: in practice, speed to a working model matters. The first team to test and validate an AI system captures the advantage."
Common Misunderstanding
What you might hear
"The game is a metaphor, but real AI is completely different math. It doesn't actually look like this."
How to respond
The math behind real neural networks is more complex, but the logic is the same. Real networks have input nodes, hidden layers, and output classifications - exactly the structure the game uses. Backpropagation really does work by sending error signals backward through the network to adjust connection weights. LLMs like ChatGPT are built on this same foundation - they are a specific architecture called transformers, trained on massive datasets, but the underlying learning mechanism of forward pass, error measurement, and backward adjustment is what the game models. The scale is different. The principles are not.
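For facilitators comfortable with code, that forward pass / error measurement / backward adjustment loop can be sketched in miniature. The toy rule ("fur and four legs means dog") and the data are invented for illustration; a single sigmoid neuron stands in for a full network:

```python
import math
import random

random.seed(0)

# Toy data: (has_fur, four_legs) -> is_dog
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = 0.0                                             # bias

def forward(x):
    # Forward pass: weighted sum squashed through a sigmoid activation
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

lr = 1.0
for epoch in range(2000):
    for x, y in data:
        p = forward(x)       # forward pass
        error = p - y        # measure the error
        # Backward adjustment: push each weight opposite its error contribution
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b    -= lr * error
```

After training, only the (1, 1) input scores above 0.5 - the neuron has learned the rule purely from repeated forward passes and backward corrections, the same loop the game makes tangible.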
A Note on Facilitation
You do not need to be an AI expert to run this workshop effectively. The game carries the conceptual weight. Your job is to create the conditions for good conversation, not to be the authority in the room. If a participant raises a question you cannot answer, the correct response is: "I don't know - what does everyone else think?" Peer discussion after a game session produces stronger retention than a facilitator explanation.
The most common mistake facilitators make is over-explaining before the game. Resist it. Let participants be confused for the first few minutes. Confusion followed by discovery is one of the most powerful learning sequences that exists. The tutorial exists to prevent the kind of confusion that is just frustrating. The game creates the kind of confusion that is productive.
Ready to Run Your Workshop?
FuzzNet Labs is available now. Grab a copy, share the link, and run your first session this week. Setup takes under 10 minutes.