Web Audio
Lessons: 11 • Length: 57 minutes

4.1 Loading and Playing an Audio File

In this lesson, I’ll show you how we could use an audio file as our success motif, instead of oscillators.


Let's put our oscillators aside and talk about a different aspect of the Web Audio API, and that is its ability to play audio files. Now, you may think we don't need the Web Audio API for that. We have the perfectly fine audio tag. Well, the audio tag is great for many things, such as adding a song to your blog or streaming an internet radio station, but we don't really have a lot of control over it. Importantly, we don't have split-second timing or the ability to add effects. Controlling timing is essential when responding to user interaction.

Like the audio element, the Web Audio API only supports certain file types. These types are determined by which audio codecs each browser supports. You can see what audio files your browser supports by visiting html5test.com. You can see here that my current browser, Chrome, has good support for a lot of different codecs, while if I switch over to Safari, things are not so great. MP3 files, however, are supported by all major browsers and are extremely common, so let's try loading and playing one of them.

You probably have at least one or two MP3 files lying around on your computer, but if not, I've included one in the GitHub repository which contains all the code for this course. Let's create an audio directory in our project where we'll put an MP3 file. I'll preview this one so you'll know what it sounds like. [MUSIC] You can hear it's another little success motif. We can use it in response to a user buying something from our mock store. Though, because we're not using any sort of HTML element to load in our audio, we're going to have to load it programmatically using JavaScript.

First, let's head back to our editor. So I'm not overwriting our previous work, I've made a duplicate of our JavaScript file, called it play-file.js, and referenced it from our index.html. In this file, let's just delete everything we won't need to play a file, meaning we can get rid of all the oscillator code.
So let's remove the playNote and playSuccessSound functions, like so. And let's get rid of the call to playSuccessSound.

Okay, in order to play an MP3 using the Web Audio API, we first need to load it. Because loading a file isn't instantaneous, we should do it well before we actually want to play it. It wouldn't be a great experience for a user to press a button, wait for the MP3 file to load, then hear it play. We can anticipate that the sound will probably be used at some point, and as such, we can start loading it as soon as the page loads.

To do this in JavaScript, we use an XMLHttpRequest, which we create by saying var request = new XMLHttpRequest(). Then we open our request, so request.open. We want this to be a GET request, and for it to get our MP3 file, which is at ../audio/success.mp3. And we want this to be an asynchronous request, so that our browser remains responsive while the file is loading; we do that by passing true as the last value.

Next, we want to specify the type of response we're expecting. In this case, it's an array buffer. Array buffers are just large arrays that are used for storing binary data. We do that by saying request.responseType = 'arraybuffer'.

Okay, next we want to specify the callback function that gets called when the file is finally loaded. So we say request.onload equals, and let's just give it an empty function at the moment. Now, the Web Audio API doesn't quite understand the data in an array buffer, so it needs to convert it into something more playable, which we can do by saying context.decodeAudioData. Here we pass in the response data, which is request.response, and we'll need a callback function for when the decoding is done. The callback gives us the decoded audio data in the form of an audio buffer, like so. Audio buffers are what the Web Audio API uses to hold and play short snippets of audio.
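The loading steps described above can be sketched as follows. This is a minimal sketch, not the course's exact code: the function name loadAudioFile is my own, and it assumes an AudioContext named context has already been created earlier in the script, as in previous lessons.

```javascript
// Sketch of the loading code described above. Assumes a context
// created earlier, e.g.:
//   var context = new (window.AudioContext || window.webkitAudioContext)();
var audioBuffer; // will hold the decoded audio so we can use it elsewhere

function loadAudioFile(url) {
  var request = new XMLHttpRequest();
  // Asynchronous GET request: passing true keeps the browser responsive.
  request.open('GET', url, true);
  // The raw MP3 bytes arrive as an ArrayBuffer.
  request.responseType = 'arraybuffer';
  request.onload = function () {
    // Convert the raw bytes into an AudioBuffer the API can play.
    context.decodeAudioData(request.response, function (decoded) {
      audioBuffer = decoded;
    });
  };
  request.send();
}

// Start loading as soon as the page loads, well before playback:
// loadAudioFile('../audio/success.mp3');
```

Because decoding happens in a callback, any code that needs the buffer should either run after the callback fires or check that audioBuffer has been assigned.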
In order to use the buffer outside of this function, let's create a variable at the top here, which we can then use elsewhere by assigning our decoded buffer to it. Okay, that looks good. The only thing we have to do now is actually send the request off, which is as easy as saying request.send().

Let's head over to the browser to check that it's being loaded okay. So, our request has thrown an error, because I've just opened our index file in the browser directly. In order for XMLHttpRequests to work, the files need to be served from a web server. There are many ways to do this. You may have an Apache server running locally, or use a gulp task that sets one up for you. But if you're on a Mac, an easy way to do this is on the command line: type python -m SimpleHTTPServer. This starts a little web server running from the directory where you typed the command. Let's have a look. By default, it's accessible from localhost on port 8000. Great, no errors. Well, apart from this unrelated one about us not specifying a favicon. If we look under the Network tab of Chrome dev tools, we can see that success.mp3 is getting loaded successfully.

Okay, so how do we play it? Let's create a playAudioFile function, so we can play our file whenever we want. Within this function, we're going to make an audio buffer source node by saying var source = context.createBufferSource(). Remember, all Web Audio API methods are available through the context object. Now, buffer source nodes are similar to our oscillator nodes from before, but instead of generating a tone, they play audio buffers, like the one we just loaded. We just need to tell it what buffer to use, by saying source.buffer = audioBuffer, and then connect it to our speakers in exactly the same way as before: source.connect(context.destination). And we want it to start as soon as the function is called, by doing, just like before, source.start(context.currentTime).
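Put together, the playback function described above looks something like this. Again a sketch rather than the course's verbatim code: it assumes the context and audioBuffer variables set up earlier in the lesson.

```javascript
// Sketch of the playback function described above. Assumes `context`
// (an AudioContext) and `audioBuffer` (the decoded MP3) already exist.
function playAudioFile() {
  // Buffer source nodes are one-shot: create a fresh one for each playback.
  var source = context.createBufferSource();
  source.buffer = audioBuffer;         // tell it which buffer to play
  source.connect(context.destination); // wire it to the speakers
  source.start(context.currentTime);   // start as soon as we're called
}
```

Note that, unlike an audio element, a buffer source node can only be started once; calling playAudioFile again creates a new node, which is cheap because the decoded buffer is reused.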
Then we just wire it into our old friend, the fake Ajax call, so that it's played a few seconds after we press our button. Okay, that all looks good. Let's try it out. [NOISE] Fantastic.

The benefit of this approach is that if you have a complicated or sophisticated piece of audio you wish to use, or perhaps your company has its own bit of audio branding like Intel, then you can still precisely schedule when that audio starts or stops, or even automate the volume of that audio over time. This technique of using audio files with the Web Audio API is only really ideal for fairly short pieces of audio. If your audio file is longer than 45 seconds, then you should probably be using the audio tag instead. I think we've covered a lot in this course, so let's wrap up.
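To illustrate the scheduling and volume automation just mentioned, here is a hedged sketch. The two-second delay and the one-second fade are values I've chosen for the example, not from the course; it assumes the same context and audioBuffer as before.

```javascript
// Illustration of scheduling and volume automation with a buffer source.
// Assumes `context` and `audioBuffer` exist, as earlier in the lesson.
function playScheduled() {
  var source = context.createBufferSource();
  source.buffer = audioBuffer;

  // Route through a gain node so we can automate the volume over time.
  var gain = context.createGain();
  source.connect(gain);
  gain.connect(context.destination);

  var startAt = context.currentTime + 2;    // start exactly 2s from now
  gain.gain.setValueAtTime(1, startAt);     // full volume at the start
  // Fade linearly to near-silence over the following second.
  gain.gain.linearRampToValueAtTime(0.001, startAt + 1);

  source.start(startAt);
}
```

The key point is that start times and automation values are expressed on the context's own clock (context.currentTime), which is what gives the Web Audio API its split-second timing.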
