Voice JavaScript SDK: AudioProcessor


You can add a local AudioProcessor to the SDK to access and process the audio input stream before sending it to Twilio. Similarly, you can add a remote AudioProcessor to access and process the audio output stream before it is rendered on the speaker.

  • To add a processor, implement the AudioProcessor interface and use device.audio.addProcessor.
  • To remove a processor, use device.audio.removeProcessor.
  • To specify whether the processor is local or remote, use the optional isRemote parameter.
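A minimal pass-through implementation is a useful starting point. The sketch below declares the interface shape locally so it stands alone; in a real app you would import `AudioProcessor` from `@twilio/voice-sdk` instead, and `PassThroughProcessor` is an illustrative name, not an SDK API:

```typescript
// Shape of the SDK's AudioProcessor interface (normally imported from
// '@twilio/voice-sdk'); declared locally so this sketch stands alone.
interface AudioProcessor {
  createProcessedStream(stream: MediaStream): Promise<MediaStream>;
  destroyProcessedStream(stream: MediaStream): Promise<void>;
}

// A no-op processor: returns the input stream unchanged. Replace the body
// of createProcessedStream with real processing as needed.
class PassThroughProcessor implements AudioProcessor {
  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    return stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Nothing to tear down for a pass-through
  }
}

// Attach to the local (input) stream:
//   await device.audio.addProcessor(new PassThroughProcessor(), false);
// Or to the remote (output) stream by passing true for isRemote.
```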

Use cases include:

  • Background noise removal using a noise cancellation library of your choice
  • Music playback when putting the call on hold
  • Audio filters
  • AI audio classification
  • ... and more!

Example:

The following example demonstrates how to use the AudioProcessor APIs to send background music as the local audio instead of the microphone input.

import { AudioProcessor, Device } from '@twilio/voice-sdk';

let audioContext;

class BackgroundAudioProcessor implements AudioProcessor {

  private audioContext: AudioContext;
  private background: MediaElementAudioSourceNode;
  private destination: MediaStreamAudioDestinationNode;

  constructor() {
    if (!audioContext) {
      audioContext = new AudioContext();
    }
    this.audioContext = audioContext;
  }

  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Create the source node
    const audioEl = new Audio('/background.mp3');
    audioEl.addEventListener('canplaythrough', () => audioEl.play());
    this.background = this.audioContext.createMediaElementSource(audioEl);

    // Create the destination node and connect the source node
    this.destination = this.audioContext.createMediaStreamDestination();
    this.background.connect(this.destination);

    // Return the resulting MediaStream
    return this.destination.stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Cleanup
    this.background.disconnect();
    this.destination.disconnect();
  }
}

// Construct a device object, passing your own token and desired options
const device = new Device(token, options);

// Construct the AudioProcessor
const processor = new BackgroundAudioProcessor();

// Add the local processor
await device.audio.addProcessor(processor, false);
// Remove the local processor later
// await device.audio.removeProcessor(processor, false);

// Or add the remote processor
// await device.audio.addProcessor(processor, true);
// Remove the remote processor later
// await device.audio.removeProcessor(processor, true);

Method Reference


audioProcessor.createProcessedStream(stream)


Called by the SDK whenever the active input audio stream is updated. Use this method to initiate your audio processing pipeline and return the resulting audio stream in a Promise<MediaStream>.

This method has one argument which represents the current input audio stream. This is the MediaStream object from the input device, such as the microphone. You can process or analyze this stream and create a new stream that will be sent to Twilio.
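For instance, a processor might route the input stream through a Web Audio GainNode to attenuate it before it reaches Twilio. This is a sketch, not part of the SDK: `GainProcessor` is an illustrative name, the interface shape is declared locally (normally imported from `@twilio/voice-sdk`), and the `AudioContext` is injected through the constructor so the wiring is easy to substitute or test:

```typescript
// Shape of the SDK's AudioProcessor interface (normally imported from
// '@twilio/voice-sdk'); declared locally so this sketch stands alone.
interface AudioProcessor {
  createProcessedStream(stream: MediaStream): Promise<MediaStream>;
  destroyProcessedStream(stream: MediaStream): Promise<void>;
}

// Routes the input stream through a GainNode and returns the processed
// stream from a MediaStreamAudioDestinationNode.
class GainProcessor implements AudioProcessor {
  private source?: MediaStreamAudioSourceNode;
  private gain?: GainNode;
  private destination?: MediaStreamAudioDestinationNode;

  constructor(private audioContext: AudioContext) {}

  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Build the graph: input stream -> gain -> destination
    this.source = this.audioContext.createMediaStreamSource(stream);
    this.gain = this.audioContext.createGain();
    this.gain.gain.value = 0.5; // halve the input volume
    this.destination = this.audioContext.createMediaStreamDestination();
    this.source.connect(this.gain);
    this.gain.connect(this.destination);

    // This stream is what the SDK sends to Twilio
    return this.destination.stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Disconnect every node created in createProcessedStream
    this.source?.disconnect();
    this.gain?.disconnect();
    this.destination?.disconnect();
  }
}
```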

audioProcessor.destroyProcessedStream(stream)


Called by the SDK after the original input audio stream and the processed stream have been destroyed. A stream is considered destroyed when all of its tracks are stopped and its references in the SDK are removed. This method is called whenever the current input stream is updated. Use this method to run any teardown routines needed by your audio processing pipeline, and return a Promise<void> representing the result of the teardown process.

This method has one argument which represents the destroyed processed audio stream.
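Teardown matters for any resource the processor started, not just audio nodes. The sketch below (illustrative names, not SDK APIs; the interface shape is declared locally, normally imported from `@twilio/voice-sdk`) shows a processor that polls on a timer in createProcessedStream and clears it in destroyProcessedStream so nothing keeps running after the stream is gone:

```typescript
// Shape of the SDK's AudioProcessor interface (normally imported from
// '@twilio/voice-sdk'); declared locally so this sketch stands alone.
interface AudioProcessor {
  createProcessedStream(stream: MediaStream): Promise<MediaStream>;
  destroyProcessedStream(stream: MediaStream): Promise<void>;
}

// Hypothetical processor that samples audio levels on a timer; a real
// implementation would read from an AnalyserNode attached to the stream.
class LevelMeterProcessor implements AudioProcessor {
  private timer?: ReturnType<typeof setInterval>;

  constructor(private pollIntervalMs: number = 250) {}

  async createProcessedStream(stream: MediaStream): Promise<MediaStream> {
    // Start the periodic poll
    this.timer = setInterval(() => {
      // read audio levels here
    }, this.pollIntervalMs);

    // Pass the audio through unchanged
    return stream;
  }

  async destroyProcessedStream(stream: MediaStream): Promise<void> {
    // Stop the timer so it does not outlive the destroyed stream
    if (this.timer !== undefined) {
      clearInterval(this.timer);
      this.timer = undefined;
    }
  }
}
```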