Bandwidth Estimation & Congestion Control

Real-time media delivery requires precise coordination between sender pacing, receiver feedback, and transport-layer capacity. Modern WebRTC replaces legacy REMB with Transport-wide Congestion Control (TWCC), enabling per-packet timing analysis and sub-200ms latency targets.

Core Architecture of WebRTC Congestion Control

The congestion controller operates on a closed feedback loop. Receivers record per-packet arrival times and losses and echo them to the sender in RTCP transport-wide feedback (transport-cc) messages; the sender's controller then adjusts pacing and encoder targets accordingly.

Implementation Steps

  1. Negotiate urn:ietf:params:rtp-hdrext:transport-wide-cc-01 in your SDP offer/answer. Without this extension, TWCC remains disabled.
  2. Configure RTCP interval to 50–100ms for high-frequency feedback.
  3. Enable delay-based estimation (GCC) as the primary controller, falling back to loss-based only when delay signals saturate.
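Step 1 can be verified at runtime by scanning the negotiated SDP. A minimal sketch, assuming `pc` is a connected RTCPeerConnection; `checkTwcc` is an illustrative helper name, not a library API:

```javascript
// Hedged sketch: confirm the transport-wide-cc header extension survived
// SDP negotiation. Pass the SDP string from pc.localDescription.sdp or
// pc.remoteDescription.sdp.
function checkTwcc(sdp) {
  // Any a=extmap line mentioning transport-wide-cc means TWCC is negotiated.
  return sdp.split('\n').some(line =>
    line.startsWith('a=extmap:') && line.includes('transport-wide-cc'));
}

// Usage (browser): if (!checkTwcc(pc.remoteDescription.sdp)) { /* TWCC disabled */ }
```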

Sender-Side vs Receiver-Side Bandwidth Estimation

WebRTC splits estimation across endpoints to isolate network degradation from local hardware constraints. In the legacy REMB design the receiver computed available bandwidth from Kalman-filtered delay trends; with TWCC, the same delay analysis runs on the sender, which also enforces pacing and encoder limits.
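To make the delay-trend signal concrete, here is a toy sketch, not the production GCC filter (which uses a Kalman/trendline estimator): it exponentially smooths the inter-arrival delay gradient, whose sign indicates whether a queue is building or draining.

```javascript
// Illustrative only: tracks the difference between arrival spacing and send
// spacing. A sustained positive value suggests queue growth (overuse);
// negative suggests the queue is draining.
function makeDelayGradientTracker(alpha = 0.1) {
  let prevSendMs = null, prevArrivalMs = null, smoothed = 0;
  return (sendMs, arrivalMs) => {
    if (prevSendMs !== null) {
      const gradient = (arrivalMs - prevArrivalMs) - (sendMs - prevSendMs);
      smoothed = (1 - alpha) * smoothed + alpha * gradient;
    }
    prevSendMs = sendMs;
    prevArrivalMs = arrivalMs;
    return smoothed;
  };
}
```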

Implementation Steps

  1. Align track-level constraints (RTCRtpEncodingParameters) with transport capacity. Mismatched constraints cause encoder starvation during rapid network shifts.
  2. Monitor CPU-overuse signals (historically exposed in Chrome as googCpuOveruseDetection; modern stats surface this as qualityLimitationReason === 'cpu'). If encoding latency exceeds 100ms without matching RTCP delay spikes, the controller is flagging CPU overuse, not network congestion.
  3. Revisit these constraints whenever tracks are added, replaced, or muted; stale per-track settings are a common cause of encoder starvation during rapid network shifts.
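Step 2's CPU-vs-network distinction maps directly onto the standard outbound-rtp qualityLimitationReason stat. A hedged sketch; `classifyLimitation` and its return labels are illustrative names:

```javascript
// qualityLimitationReason is a standard field on outbound-rtp stats reports,
// with values 'none', 'cpu', 'bandwidth', or 'other'.
function classifyLimitation(report) {
  switch (report.qualityLimitationReason) {
    case 'cpu':       return 'encoder-overload';   // local hardware, not the network
    case 'bandwidth': return 'network-congestion'; // estimator is clamping the bitrate
    default:          return 'unconstrained';
  }
}

// Usage (browser): iterate (await pc.getStats()) and pass each report
// where report.type === 'outbound-rtp'.
```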

Implementing Adaptive Bitrate Strategies

Bitrate adaptation must account for codec efficiency, keyframe intervals, and network volatility. Dynamic scaling minimizes rebuffering while maintaining target latency.

Implementation Steps

  1. Calculate target bitrate using RTCRtpSender.getParameters() and apply caps via setParameters().
  2. Trigger FIR/PLI keyframe requests only after confirmed bitrate increases or layer switches to avoid recovery latency spikes.
  3. Configure Simulcast/SVC layer switching thresholds at 15–20% bandwidth deltas to prevent flapping.
  4. Pair dynamic scaling with deliberate codec selection (VP8 vs H.264 vs AV1): codec efficiency determines how much visual quality survives each bitrate cut, which keeps rebuffering and artifacts down while holding latency under 200ms.

Code: Dynamic Bitrate Adjustment

const sender = pc.getSenders().find(s => s.track && s.track.kind === 'video');
const params = sender.getParameters();

// Some implementations report an empty encodings array before negotiation settles
if (!params.encodings.length) params.encodings = [{}];

// Cap max bitrate to prevent network saturation during congestion spikes
params.encodings[0].maxBitrate = 1500000;
await sender.setParameters(params);
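The 15–20% switching margin from step 3 above can be sketched as a hysteresis rule. The layer bitrates and the `pickLayer` helper are illustrative assumptions, not values from any library:

```javascript
// Only switch simulcast/SVC layers when the estimate clears a layer boundary
// by a margin, so small oscillations in the estimate don't cause flapping.
const LAYER_BITRATES = [150_000, 500_000, 1_500_000]; // low, mid, high targets (assumed)
const HYSTERESIS = 0.2; // require a 20% margin before switching

function pickLayer(currentLayer, estimatedBps) {
  const next = currentLayer + 1;
  // Step up only if the estimate exceeds the next layer's target plus margin.
  if (next < LAYER_BITRATES.length &&
      estimatedBps > LAYER_BITRATES[next] * (1 + HYSTERESIS)) {
    return next;
  }
  // Step down only if the estimate falls below the current target minus margin.
  if (currentLayer > 0 &&
      estimatedBps < LAYER_BITRATES[currentLayer] * (1 - HYSTERESIS)) {
    return currentLayer - 1;
  }
  return currentLayer; // inside the hysteresis band: hold
}
```

In practice the chosen layer would be applied by toggling the `active` flag on each entry in `params.encodings` before calling `setParameters()`.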

Debugging Packet Loss & Jitter in Production

Isolating congestion from hardware bottlenecks requires systematic telemetry correlation. Use getStats() to validate estimator accuracy and distinguish true network degradation from encoder overload.

Implementation Steps

  1. Poll pc.getStats() at 1000–2000ms intervals. Higher frequencies increase main-thread overhead without improving estimator granularity.
  2. Parse inbound-rtp and transport reports to correlate loss rates with jitter buffer depth.
  3. Cross-reference with network packet captures (Wireshark/tcpdump) to validate TWCC sequence numbers.

Code: Extracting Congestion Metrics

const stats = await pc.getStats();
stats.forEach(report => {
  if (report.type === 'inbound-rtp' && report.kind === 'video') {
    const total = report.packetsLost + report.packetsReceived;
    const lossRate = total > 0 ? (report.packetsLost / total) * 100 : 0;
    console.log(`Packet Loss: ${lossRate.toFixed(2)}% | Jitter: ${report.jitter.toFixed(3)}s`);
  }
  // availableOutgoingBitrate lives on the active candidate-pair report, not inbound-rtp
  if (report.type === 'candidate-pair' && report.nominated) {
    console.log(`Est. BW: ${report.availableOutgoingBitrate} bps`);
  }
});
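Note that packetsLost and packetsReceived are cumulative counters, so a single snapshot reports lifetime loss, not current conditions. For a current-loss view, diff successive reports; `intervalLossRate` is an illustrative helper:

```javascript
// Compute percent loss over one polling interval from two cumulative
// inbound-rtp reports (prev = earlier poll, curr = later poll).
function intervalLossRate(prev, curr) {
  const lost = curr.packetsLost - prev.packetsLost;
  const received = curr.packetsReceived - prev.packetsReceived;
  const total = lost + received;
  return total > 0 ? (lost / total) * 100 : 0;
}

// Usage (browser): poll pc.getStats() every 1–2s and keep the previous
// inbound-rtp report per SSRC to diff against.
```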

Production Tuning & Network Resilience

Deploying real-time media at scale demands proactive parameter adjustments and explicit fallback routing. Cellular handoffs and asymmetric routing require aggressive estimator resets and ICE recovery paths.

Implementation Steps

  1. Adjust CPU-overuse detection thresholds (the legacy googCpuOveruseDetection knobs) for edge devices; lower sensitivity on mobile to prevent premature bitrate cuts.
  2. Implement custom pacing algorithms to smooth burst transmission during TWCC ramp-up phases.
  3. Configure explicit fallback routing: if availableOutgoingBitrate < 300kbps for >5s, downgrade to audio-only or trigger ICE restart.
  4. On unstable networks with cellular handoffs or asymmetric routing, reset the bandwidth estimator after interface changes instead of letting it decay from a stale estimate, and pair the reset with an ICE restart so media follows the new path.
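Step 3's fallback rule can be sketched as a small state tracker. The 300 kbps floor and 5 s hold mirror the thresholds above; `FallbackMonitor` is an illustrative helper, not a library class:

```javascript
// Track how long the estimate has stayed below a floor and signal when a
// sustained shortfall warrants audio-only mode or an ICE restart.
class FallbackMonitor {
  constructor(floorBps = 300_000, holdMs = 5000) {
    this.floorBps = floorBps;
    this.holdMs = holdMs;
    this.belowSince = null;
  }
  // Call on each stats poll with the current estimate and a timestamp in ms.
  update(estimatedBps, nowMs) {
    if (estimatedBps >= this.floorBps) {
      this.belowSince = null; // recovered: reset the timer
      return 'ok';
    }
    if (this.belowSince === null) this.belowSince = nowMs;
    return (nowMs - this.belowSince) > this.holdMs
      ? 'fallback' // sustained starvation: go audio-only or pc.restartIce()
      : 'watch';
  }
}
```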
