🧠 Neural Network Playground

Build, train, and visualize neural networks in real-time. Experiment with architectures, activation functions, and watch your network learn!


🎨 Network Visualization & Dataset
📊 Training Progress (Epoch · Loss · Accuracy)
💡 How to Use

1. Select a dataset or click the canvas to create custom data points
2. Design your network architecture by adding/removing layers
3. Choose activation functions and adjust hyperparameters
4. Hit Play and watch your neural network learn in real-time!

📦 Dataset Selection
Spiral
XOR
Circle
Moons
🏗️ Network Architecture
⚡ Activation Function
🎛️ Hyperparameters
Learning Rate: 0.01
Batch Size: 32
Training Speed: Normal
🎮 Training Controls
🎯 Add Data Points

About this Playground

This interactive tool lets you build and train a small neural network to classify 2D points. You can add points manually or use preset datasets, change the network architecture (layers and neurons), choose activation functions, and watch the decision boundary evolve as training progresses.

What is a Neural Network?
  • Layers: Input → one or more Hidden layers → Output
  • Neurons: Compute weighted sums and apply an activation function
  • Weights & Biases: Learnable parameters updated during training
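
The pieces above can be sketched as a single neuron: a weighted sum of inputs plus a bias, passed through an activation function. This is a minimal illustration with made-up weights, not the playground's in-browser implementation:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, then a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# A 2-input neuron with hypothetical learned parameters
print(neuron([0.5, -1.0], weights=[0.8, 0.2], bias=0.1))
```

A full layer is just many such neurons sharing the same inputs, and a network stacks layers so each one's outputs feed the next.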
What will you see?
  • Decision boundary: Colored background showing the model's prediction regions
  • Loss curve: How “wrong” the model is (lower is better)
  • Accuracy: Percent of correctly classified points
How training works (high level)
  1. Forward pass: Inputs flow through layers to produce outputs (probabilities)
  2. Loss: Measures the error (e.g., cross-entropy)
  3. Backpropagation: Computes gradients of loss w.r.t. weights
  4. Gradient descent: Updates weights to reduce loss using the learning rate
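
The four steps can be condensed into a toy training loop. This is a sketch using a one-neuron logistic model, cross-entropy loss, and plain stochastic gradient descent on a made-up linearly separable dataset; the playground trains multi-layer networks the same way, just with more parameters per backward pass:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label 1 when x + y > 0 (linearly separable, for illustration only)
data = [((1.0, 1.0), 1), ((-1.0, -1.0), 0), ((1.5, -0.5), 1), ((-1.0, 0.5), 0)]

w, b, lr = [0.0, 0.0], 0.0, 0.5  # weights, bias, learning rate
for epoch in range(200):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # 1. forward pass
        # 2. cross-entropy loss: -(y*log p + (1-y)*log(1-p))
        g = p - y            # 3. backprop: gradient of that loss w.r.t. the pre-activation
        w[0] -= lr * g * x1  # 4. gradient descent: step against the gradient,
        w[1] -= lr * g * x2  #    scaled by the learning rate
        b    -= lr * g
```

After training, `sigmoid(w·x + b)` should exceed 0.5 exactly for the class-1 points, which is what a sharpening decision boundary looks like in the visualization.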
Activation functions
  • ReLU: Fast, helps deep nets; outputs 0 for negatives
  • Sigmoid: Outputs 0–1 probabilities; can saturate
  • Tanh: Outputs −1 to 1; zero-centered
  • Leaky ReLU: Like ReLU but allows small negative slope
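
All four are one-line functions; here they are in Python for reference (the 0.01 negative slope for Leaky ReLU is a common default, not a value the playground necessarily uses):

```python
import math

def relu(z):
    return max(0.0, z)                 # zero for negatives, identity for positives

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))  # squashes to (0, 1); saturates at the tails

def tanh(z):
    return math.tanh(z)                # squashes to (-1, 1); zero-centered

def leaky_relu(z, slope=0.01):
    return z if z > 0 else slope * z   # small negative slope avoids "dead" neurons
```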
Interpreting Results
Reading the visuals
  • Decision boundary: Green areas predict Class 1; red areas predict Class 0. The sharper and better aligned the boundary, the better the fit.
  • Loss: Should trend down as training progresses. If it stalls or increases, try a smaller learning rate or adjust the architecture.
  • Accuracy: Higher is better, but be mindful of class balance (imbalanced data can mislead accuracy).
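
The class-balance caveat is easy to demonstrate: on a skewed dataset, a model that always predicts the majority class scores well on accuracy while learning nothing. A quick illustration with made-up labels:

```python
labels = [1] * 90 + [0] * 10  # imbalanced dataset: 90% Class 1, 10% Class 0
preds  = [1] * 100            # degenerate model that always predicts Class 1

accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
print(accuracy)  # 0.9, despite never identifying a single Class 0 point
```

The preset datasets here are roughly balanced, so accuracy is a reasonable signal; just keep this in mind if you hand-draw a lopsided custom dataset.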
Troubleshooting
  • Underfitting: Boundary is too simple; add neurons/layers or change activation
  • Overfitting: Boundary is too wiggly; reduce neurons/layers or gather more data
  • Training unstable: Lower the learning rate
Datasets (what to expect)
  • Spiral: Complex, non-linear; requires deeper/wider hidden layers
  • XOR: Classic non-linear problem; a single hidden layer can solve it
  • Circle (concentric): Requires non-linear boundary; activations matter
  • Moons: Curved classes; neural nets capture shape better than lines
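
For a sense of what these presets contain, here is one plausible way the XOR and Circle datasets could be generated. The labeling conventions below (same-sign quadrants for XOR, a 0.5 radius for the inner circle) are assumptions for illustration; the playground's generators may use different parameters:

```python
import math
import random

def make_xor(n=100):
    """XOR-style data: label 1 when x and y share a sign (two opposite quadrant pairs)."""
    pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    return [((x, y), int((x > 0) == (y > 0))) for x, y in pts]

def make_circle(n=100):
    """Concentric data: label 1 inside radius 0.5, label 0 outside."""
    pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    return [((x, y), int(math.hypot(x, y) < 0.5)) for x, y in pts]
```

No straight line separates either dataset, which is why a linear model fails on them and a hidden layer with a non-linear activation is needed.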

Support This Free Tool

Every coffee helps keep the servers running. Every book sale funds the next tool I'm dreaming up. You're not just supporting a site — you're helping me build what developers actually need.


About This Tool & Methodology

Interactive neural network sandbox: choose layers, neurons, activations, and observe decision boundaries and loss curves on synthetic datasets. Trains in‑browser using gradient descent variants.

Learning Outcomes

  • Relate depth/width and activations to representational capacity.
  • Observe under/overfitting and regularization effects.
  • Understand how learning rate and batch size influence training.

Authorship & Review

  • Author: 8gwifi.org engineering team
  • Reviewed by: Anish Nath
  • Last updated: 2025-11-19

Trust & Privacy

  • All training/inference runs locally on synthetic data by default.