December 2025 • 12 min read

Optimizing Real-Time Audio Performance in React

Why useState breaks your audio, when to use useRef, and the patterns that separate professional audio apps from glitchy toys.

React is amazing for building UIs. It's also, by default, terrible for building audio applications. Every time I see a Web Audio tutorial that uses useState for synth parameters, I cringe—because I know it's going to glitch.

After building several production audio apps in React, I've developed a set of patterns that let you have the best of both worlds: React's component model for your UI, and glitch-free real-time audio. Here's what I've learned.

The Problem: React's Render Cycle

React re-renders components when state changes. This is great for keeping your UI in sync with your data. But re-renders take time—usually 1-16ms depending on your component tree.

For audio running at 44.1kHz, a 128-sample render quantum gives you roughly 3ms per callback (128 samples ÷ 44100 samples/sec ≈ 2.9ms). That's not much headroom, and here's the catch: the main thread also handles user input, layout, paint, and garbage collection. Under load, your render might not complete before the audio thread needs new data.

The result? Clicks, pops, and glitches. The audio thread is starved while React figures out which components need updating.
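
To put numbers on that time budget, here's the arithmetic, assuming the Web Audio API's standard 128-sample render quantum:

The Time Budget

const sampleRate = 44100      // samples per second
const renderQuantum = 128     // samples the audio thread consumes per callback
const audioBudgetMs = (renderQuantum / sampleRate) * 1000   // ≈ 2.9ms

// A slow React render (up to ~16ms) spans several callbacks' worth of budget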

Pattern 1: Refs for Audio Parameters

The most important pattern: never read state directly in audio callbacks. Instead, use useRef to create a "window" into your current state that audio code can read without triggering re-renders.

The Pattern

// State for UI (shows current value, controls knob position)
const [cutoff, setCutoff] = useState(800)

// Ref for audio (audio callback reads this directly)
const cutoffRef = useRef(cutoff)

// Sync ref to state on every render
cutoffRef.current = cutoff

// Audio callback reads the ref, NOT the state
const playNote = useCallback(() => {
  filter.frequency.value = cutoffRef.current  // ✅ Instant
  // filter.frequency.value = cutoff           // ❌ Stale closure
}, [])  // Empty deps - callback never changes

Why does this work? The ref update (cutoffRef.current = cutoff) happens synchronously during render, before any audio callback could possibly run. The audio callback always sees the latest value without needing to re-subscribe or trigger additional renders.
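
Here's a minimal sketch of how the pieces might fit together in a component. The component name, the filterNode prop, and the slider range are illustrative (the filter is assumed to be created elsewhere, as in Pattern 5); the point is that the slider writes state for the UI while the audio path only ever reads the ref.

Putting the Pattern in a Component

import { useCallback, useRef, useState } from "react"

function CutoffKnob({ filterNode }: { filterNode: BiquadFilterNode | null }) {
  // State drives the UI: slider position and readout
  const [cutoff, setCutoff] = useState(800)

  // Ref mirrors the state for audio code; synced on every render
  const cutoffRef = useRef(cutoff)
  cutoffRef.current = cutoff

  // Stable callback: reads the ref, so it never sees a stale value
  const playNote = useCallback(() => {
    if (filterNode) {
      filterNode.frequency.value = cutoffRef.current
    }
  }, [filterNode])

  return (
    <div>
      <input
        type="range"
        min={100}
        max={8000}
        value={cutoff}
        onChange={(e) => setCutoff(Number(e.target.value))}
      />
      <span>{cutoff} Hz</span>
      <button onClick={playNote}>Play</button>
    </div>
  )
}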

Pattern 2: Look-Ahead Scheduling

JavaScript timers (setTimeout, setInterval) are not accurate enough for music; they can drift by 10-50ms when the main thread is busy. But they're fine for waking up a scheduler, as long as the actual audio events are scheduled ahead of time using the Web Audio clock.

Look-Ahead Scheduling

const scheduleAheadTime = 0.1  // Schedule 100ms ahead
const lookahead = 25            // Check every 25ms

// Scheduler state lives in a ref so the empty-deps callback never goes stale
const nextNoteTimeRef = useRef(0)

const scheduler = useCallback(() => {
  const ctx = audioContextRef.current
  if (!ctx) return

  // Schedule all notes that should play in the next 100ms
  while (nextNoteTimeRef.current < ctx.currentTime + scheduleAheadTime) {
    // Schedule at EXACT audio clock time
    scheduleNote(nextNoteTimeRef.current)
    nextNoteTimeRef.current += secondsPerBeat  // secondsPerBeat = 60 / bpm
  }
}, [])

// Loose JS timer wakes us up to schedule more notes
useEffect(() => {
  const interval = setInterval(scheduler, lookahead)
  return () => clearInterval(interval)
}, [scheduler])

The key insight: the JavaScript timer can be late, but the scheduled audio events will play at exactly the right time because they're timestamped in audio clock time. The audio thread has a 100ms buffer of pre-scheduled events to work through.
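
For completeness, here is one possible shape for scheduleNote, assuming the shared refs from the other patterns (audioContextRef, filterRef, cutoffRef). The pitch and envelope times are illustrative; what matters is that every start, stop, and parameter change is stamped with the audio clock time passed in, never "now".

A Possible scheduleNote

const scheduleNote = (time: number) => {
  const ctx = audioContextRef.current
  const filter = filterRef.current
  if (!ctx || !filter) return

  // Oscillators are one-shot in Web Audio: create a fresh one per note
  const osc = ctx.createOscillator()
  const noteGain = ctx.createGain()
  osc.connect(noteGain)
  noteGain.connect(filter)  // into the shared filter -> output chain

  osc.frequency.setValueAtTime(110, time)

  // Simple gain envelope, all in audio clock time
  noteGain.gain.setValueAtTime(0, time)
  noteGain.gain.linearRampToValueAtTime(0.8, time + 0.005)
  noteGain.gain.exponentialRampToValueAtTime(0.001, time + 0.25)

  // A per-note filter sweep: exactly the kind of scheduled automation
  // that Pattern 3 has to cancel when the user grabs the cutoff knob
  filter.frequency.setValueAtTime(cutoffRef.current, time)
  filter.frequency.exponentialRampToValueAtTime(cutoffRef.current * 0.25, time + 0.25)

  osc.start(time)
  osc.stop(time + 0.3)
}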

Pattern 3: Cancel Before Reschedule

When you schedule automation on an AudioParam (like a filter sweep), that automation will fight with any new values you try to set. The param is locked to the scheduled curve until it completes.

Clearing Scheduled Values

// When user moves the cutoff knob:
useEffect(() => {
  if (filterRef.current && audioContextRef.current) {
    const now = audioContextRef.current.currentTime
    
    // Cancel any scheduled automation
    filterRef.current.frequency.cancelScheduledValues(now)
    
    // Set the new value immediately
    filterRef.current.frequency.setValueAtTime(cutoff, now)
  }
}, [cutoff])

Without the cancelScheduledValues call, the filter would ignore your knob changes until the previous envelope completed. The user would turn the knob and... nothing would happen. Frustrating.

Pattern 4: Separate Timing from Display

A sequencer needs to show which step is currently playing. The naive approach is to update state on every step—but that triggers a re-render 4-16 times per beat, which adds up fast at high tempos.

Decoupled Display Updates

// Audio scheduler tracks steps internally (no state!)
const synthStepRef = useRef(0)

// Queue scheduled steps with their audio timestamps
const scheduledStepsRef = useRef<{time: number, step: number}[]>([])

// When scheduling a note:
scheduledStepsRef.current.push({
  time: nextNoteTimeRef.current,
  step: synthStepRef.current
})

// Separate UI update loop (requestAnimationFrame, not tied to audio)
const updateUI = useCallback(() => {
  const now = audioContextRef.current!.currentTime
  const scheduledSteps = scheduledStepsRef.current

  // Display steps that should be showing NOW
  while (scheduledSteps[0]?.time <= now) {
    const { step } = scheduledSteps.shift()!
    setCurrentStep(step)  // Only update state when visible
  }

  requestAnimationFrame(updateUI)
}, [])

Now the audio scheduler runs at its own pace, and the display updates only when there's something new to show—and only at 60fps max, not at audio sample rate.
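
The loop also needs to be started and stopped. Here's a minimal sketch, assuming isPlaying is ordinary UI state and drainDueSteps is the dequeue-and-setCurrentStep body from updateUI above, without its own requestAnimationFrame call, since the effect owns the loop here. The frame ID lives in a ref so cleanup can cancel it.

Starting and Stopping the Display Loop

const rafIdRef = useRef<number | null>(null)

useEffect(() => {
  if (!isPlaying) return

  const tick = () => {
    drainDueSteps()  // show any steps whose audio time has arrived
    rafIdRef.current = requestAnimationFrame(tick)
  }
  rafIdRef.current = requestAnimationFrame(tick)

  return () => {
    if (rafIdRef.current !== null) {
      cancelAnimationFrame(rafIdRef.current)
      rafIdRef.current = null
    }
  }
}, [isPlaying])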

Pattern 5: Refs for Audio Nodes Too

Audio nodes themselves should be stored in refs, not state. You don't want to re-create your entire audio graph every time a component re-renders.

Stable Audio Node References

const audioContextRef = useRef<AudioContext | null>(null)
const oscillatorRef = useRef<OscillatorNode | null>(null)
const filterRef = useRef<BiquadFilterNode | null>(null)
const gainRef = useRef<GainNode | null>(null)

const initAudio = async () => {
  // Only create context once
  if (!audioContextRef.current) {
    audioContextRef.current = new AudioContext()
  }
  
  // Resume if suspended (browser autoplay policy)
  if (audioContextRef.current.state === "suspended") {
    await audioContextRef.current.resume()
  }
  
  // Create nodes only if they don't exist
  if (!filterRef.current) {
    filterRef.current = audioContextRef.current.createBiquadFilter()
    // ... wire up the audio graph
  }
}
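
For completeness, here's one way the "wire up the audio graph" step and the matching teardown might look. The filter → gain → destination chain is illustrative; the important part is that nodes are created once and released on unmount.

Wiring and Teardown

// Inside initAudio, after the context checks above:
if (!filterRef.current) {
  const ctx = audioContextRef.current!

  filterRef.current = ctx.createBiquadFilter()
  filterRef.current.type = "lowpass"

  gainRef.current = ctx.createGain()
  gainRef.current.gain.value = 0.8

  // filter -> gain -> speakers
  filterRef.current.connect(gainRef.current)
  gainRef.current.connect(ctx.destination)
}

// Teardown on unmount so the context doesn't leak across remounts
useEffect(() => {
  return () => {
    audioContextRef.current?.close()
    audioContextRef.current = null
    filterRef.current = null
    gainRef.current = null
  }
}, [])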

When To Use State vs Refs

Use State For

  • UI display values (knob positions, current step)
  • Play/stop status (affects component rendering)
  • Pattern data (step sequence, notes)
  • Preset selection, mode toggles

Use Refs For

  • Audio parameter values (read by callbacks)
  • Audio nodes (OscillatorNode, FilterNode, etc.)
  • Scheduler state (next note time, current step index)
  • Animation frame IDs, interval IDs

The Mental Model

Think of it this way: React manages what the user sees. Refs manage what the audio thread hears. They sync on every render, but they operate independently. The audio thread never waits for React, and React never blocks audio.

This separation is what lets you build audio apps that feel responsive. The knobs feel instant because they are—the ref updates synchronously. The audio sounds stable because it's scheduled ahead of time on the audio clock. And the UI stays smooth because it only re-renders when there's something worth showing.

Real-World Results

Using these patterns, I've built synthesizers and sequencers that achieve:

  • Sub-millisecond timing jitter (compared to 10-50ms with naive setInterval)
  • Zero audio glitches during UI interaction (dragging knobs, switching presets)
  • ~2% CPU usage on modern hardware at 138 BPM with visualization
  • Instant parameter response (no perceptible latency when moving controls)

These aren't just nice-to-haves. They're the difference between a toy and a tool. Musicians can tell when something feels off, even if they can't articulate why. Getting the fundamentals right is what makes people want to actually use your app.

Summary

Building audio apps in React is absolutely possible—you just need to respect the boundary between the UI and audio worlds:

  1. Use refs for audio parameters - bypass React's render cycle
  2. Schedule ahead with the audio clock - don't trust JS timers for timing
  3. Cancel before rescheduling - prevent automation conflicts
  4. Decouple display from timing - update UI at 60fps, not audio rate
  5. Store nodes in refs - avoid recreating your audio graph

Want to see these patterns in action? Check out the ACID-303 synthesizer or read the technical case study for a deeper dive into the implementation.