
Vibe Rater

Real-time audio mood classification. Analyzes sound from your microphone to detect emotional characteristics.


How It Works

This prototype uses real-time audio analysis to extract three features from your microphone input (a code sketch follows the list):

  • Energy: Overall loudness and dynamic range
  • Brightness: Ratio of high-frequency to low-frequency energy
  • Activity: Rate of change in the audio signal
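
The sketch below shows one way these features could be computed with the Web Audio API's AnalyserNode. The function names and the exact formulas (RMS for energy, a high/low band ratio for brightness, spectral flux for activity) are illustrative assumptions, not the prototype's actual code.

```typescript
interface AudioFeatures {
  energy: number;     // overall loudness (RMS of the waveform)
  brightness: number; // high-frequency energy relative to total energy
  activity: number;   // frame-to-frame spectral change (flux)
}

async function createAnalyser(): Promise<AnalyserNode> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // 1024 frequency bins
  ctx.createMediaStreamSource(stream).connect(analyser);
  return analyser;
}

let previousSpectrum = new Uint8Array(0);

function extractFeatures(analyser: AnalyserNode): AudioFeatures {
  const waveform = new Float32Array(analyser.fftSize);
  const spectrum = new Uint8Array(analyser.frequencyBinCount);
  analyser.getFloatTimeDomainData(waveform);
  analyser.getByteFrequencyData(spectrum);

  // Energy: root-mean-square amplitude of the time-domain signal.
  let sumSquares = 0;
  for (const s of waveform) sumSquares += s * s;
  const energy = Math.sqrt(sumSquares / waveform.length);

  // Brightness: spectral energy above an (arbitrary) cutoff bin vs. total.
  const cutoff = Math.floor(spectrum.length / 4);
  let low = 0;
  let high = 0;
  for (let i = 0; i < spectrum.length; i++) {
    if (i < cutoff) low += spectrum[i];
    else high += spectrum[i];
  }
  const brightness = high / (low + high + 1e-6);

  // Activity: summed absolute change between consecutive spectra (spectral flux).
  let flux = 0;
  if (previousSpectrum.length === spectrum.length) {
    for (let i = 0; i < spectrum.length; i++) {
      flux += Math.abs(spectrum[i] - previousSpectrum[i]);
    }
  }
  previousSpectrum = spectrum.slice();
  const activity = flux / (spectrum.length * 255); // normalize to roughly 0..1

  return { energy, brightness, activity };
}
```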

These features are then mapped to mood categories using a simple rule-based classifier. In production, this would be replaced with a trained neural network (e.g., TensorFlow.js with a pre-trained audio classification model).
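
A rule-based mapping of this kind might look like the following. The thresholds and mood labels here are placeholders for illustration, not the prototype's actual rules.

```typescript
type Mood = 'calm' | 'mellow' | 'tense' | 'energetic';

// Map the three extracted features to a mood label with hand-tuned thresholds.
function classifyMood({ energy, brightness, activity }: AudioFeatures): Mood {
  if (energy > 0.2 && activity > 0.3) return 'energetic';
  if (energy > 0.2 && brightness > 0.6) return 'tense';
  if (brightness > 0.5) return 'mellow';
  return 'calm';
}
```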

Technical Notes

  • Current Implementation: Rule-based classifier using audio features
  • Planned Upgrade: TensorFlow.js with YAMNet or a similar pre-trained model
  • Audio Processing: Web Audio API AnalyserNode, FFT analysis
  • Real-time Performance: ~60 fps analysis and visualization
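
The ~60 fps figure comes from driving analysis and drawing with requestAnimationFrame, which browsers typically fire once per display refresh. A minimal sketch of such a loop, assuming the hypothetical createAnalyser, extractFeatures, and classifyMood helpers from the sketches above plus a renderVisualization callback standing in for the on-screen drawing:

```typescript
async function startVibeRater(
  renderVisualization: (features: AudioFeatures, mood: Mood) => void,
): Promise<void> {
  const analyser = await createAnalyser();

  const frame = () => {
    const features = extractFeatures(analyser);
    const mood = classifyMood(features);
    renderVisualization(features, mood);
    requestAnimationFrame(frame); // schedule the next frame (~60 fps on most displays)
  };
  requestAnimationFrame(frame);
}
```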