FREE · Lessons: 11 · Length: 57 minutes



1.3 Hello, Web Audio API

What is the Web Audio API? And what makes it ideal for sound on the web? In this lesson, we’ll learn about audio nodes and the node graph.



In this lesson we're going to talk about the Web Audio API: what it is, how it works, and some important concepts you'll need to grasp in order to use it. To quote the spec, the Web Audio API is "a high-level JavaScript API for processing and synthesizing audio in web applications". This means we can use it to create or process sound directly in the browser using JavaScript.

Before diving in, let's talk about a concept that's at the heart of the Web Audio API. Think of your favorite electric guitarist. To get a specific sound, they'll connect a lead from their guitar to an effect pedal, most commonly a distortion pedal. They'll then take a lead from that pedal and connect it to another effect to change the sound even more. Finally, they'll take a lead from that pedal and connect it to their amplifier so they can hear the sound they're making. This is exactly how the Web Audio API works. In the API, this is called the node graph: a collection of little nodes, all wired together, passing sound to each other.

Let's write some code. If you plan on following along with the code in this course, make sure the browser you're using supports the Web Audio API; head over to a browser support table to check. At the time of recording this tutorial, we can see that it isn't currently supported in Internet Explorer, but that it's planned to be part of the next major release, which is great news for us.

Okay, here we have a basic HTML page with a script tag in it. First we create what is known as an audio context. An audio context is a little container where all our sound will live; it's what gives us access to the Web Audio API and all of its various functions. We do this by saying var context = new AudioContext(). Next, we create an oscillator. An oscillator is a simple way of generating a tone at a certain frequency (440 Hz by default). You don't need to worry about how it works at the moment; just think of it as something that makes a noise.
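The first two steps can be sketched like so. This is a minimal sketch assuming a browser that implements the Web Audio API; the function wrapper and its name, createContextAndOscillator, are our own additions for illustration — the lesson writes these lines directly at the top level of the script tag.

```javascript
// A sketch of the first two steps, assuming the Web Audio API is
// available. We take the AudioContext constructor as a parameter so
// the sketch can also be exercised against a stand-in.
function createContextAndOscillator(AudioContextCtor) {
  // The audio context: a little container where all our sound lives,
  // and our entry point to the rest of the API.
  var context = new AudioContextCtor();

  // An oscillator node generates a tone at a certain frequency
  // (440 Hz by default) -- for now, just something that makes a noise.
  var oscillator = context.createOscillator();

  return { context: context, oscillator: oscillator };
}

// In a browser: var parts = createContextAndOscillator(AudioContext);
```

In a real page you would simply write var context = new AudioContext(); and var oscillator = context.createOscillator(); inside the script tag, exactly as the lesson does.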
We now have our first node in our node graph. Let's create another node: this time, a gain node. A gain node simply controls the volume of any sound that passes through it. Let's make it halve the volume of the sound.

Okay, that looks good. Now all we have to do is join them up. So let's take a lead from our oscillator and connect it to our gain node, like so. We then need a lead to our speakers so we can hear something. We do this by taking a lead from our gain node and connecting it to a special output node called context.destination, which basically represents your computer's speakers.

Now that we're all connected, we just need the oscillator to start playing. We do this by simply calling oscillator.start. We want this to happen straight away, so we use the audio context's own precise timing system by passing in context.currentTime as the parameter. This is basically saying: start the oscillator at the current time, which is now. We don't want this oscillator running forever, so to make it stop we use oscillator.stop. Let's say we want it to stop two seconds from now; so, as the parameter, we pass context.currentTime + 2.

Let's load our page in the browser to hear how it sounds. [SOUND] There you go: we've created sound in the browser without a plugin or audio file in sight.

Don't worry about remembering the syntax, or what the oscillator is doing. The important part of this lesson is to understand that the Web Audio API works by connecting different audio nodes together in a way that allows sound to be sent to your speakers. The Web Audio API works in exactly the same way when working with audio files. In the next tutorial, we'll learn a bit more about the various audio file formats before using them with the API itself.
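Putting the whole graph together, here's a hedged sketch of the complete example, assuming a browser that implements the Web Audio API. The function wrapper, its name playTone, and the durationSeconds parameter are our own additions; the lesson hard-codes the two-second stop time at the top level of the script.

```javascript
// A sketch of the lesson's node graph: oscillator -> gain -> speakers.
// The context is passed in so the sketch can be exercised without a
// browser; in the lesson it is simply `new AudioContext()`.
function playTone(context, durationSeconds) {
  // Something that makes a noise.
  var oscillator = context.createOscillator();

  // A gain node controls the volume of whatever passes through it.
  var gain = context.createGain();
  gain.gain.value = 0.5; // halve the volume

  // Join them up: take a lead from the oscillator into the gain node,
  // then from the gain node into context.destination (your speakers).
  oscillator.connect(gain);
  gain.connect(context.destination);

  // Use the context's own precise clock: start now, stop later.
  oscillator.start(context.currentTime);
  oscillator.stop(context.currentTime + durationSeconds);

  return oscillator;
}

// In a browser: playTone(new AudioContext(), 2);
```

Note the design choice the API imposes: nodes don't make sound on their own; audio only reaches your ears once a chain of connect() calls ends at context.destination.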
