Hi guys! What I had to do a few days ago was dynamically change the speed of audio playback.
A sample is a single measurement of sound amplitude.
An audio file is just an encoded/compressed list of samples.
The sample rate is the number of samples per second. (The one I will be using is 44100.)
A buffer is an array of samples that are about to go out of your speaker. (Common sizes are 256, 512, 1024, etc. samples.)
The technology I was using for that is the Web Audio API.
An AudioNode is basically an object which you pipe audio into and/or get audio out of.
AudioNodes connect to each other just like any audio equipment would in real life.
For example, you might have a microphone, a speaker, a spectrum analyzer with passthrough, and an amplifier.
You can then connect, for example: microphone -> amplifier -> spectrum analyzer -> speaker.
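A quick sketch of wiring such a chain (the function name is mine; this assumes an AudioContext `ctx` and a microphone MediaStream `micStream`, e.g. from getUserMedia):

```javascript
// Hypothetical sketch: wire microphone -> amplifier -> spectrum analyzer -> speaker.
function buildGraph(ctx, micStream) {
  var source = ctx.createMediaStreamSource(micStream); // microphone
  var amplifier = ctx.createGain();                    // amplifier
  var analyzer = ctx.createAnalyser();                 // spectrum analyzer (passes audio through)
  source.connect(amplifier);
  amplifier.connect(analyzer);
  analyzer.connect(ctx.destination);                   // speaker
  return analyzer;
}
```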
What most people using the Web Audio API would reach for if they wanted to play music inside it is
MediaElementAudioSourceNode (an implementation of AudioNode that gets its sound from an HTML5 <audio> element).
It simply plays audio and that is it.
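A minimal sketch of that setup (the function name is mine; `audioEl` would be an <audio> element already on the page):

```javascript
// Sketch: play an existing HTML5 <audio> element through the Web Audio graph.
function playThroughGraph(ctx, audioEl) {
  var source = ctx.createMediaElementSource(audioEl); // wraps the <audio> element
  source.connect(ctx.destination);                    // straight to the speakers
  return source;
}
```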
The problem with this one is that it is impossible to change the speed. (It might be possible to slow it down, but I did not find any way to make it play music faster.)
What I ended up doing is getting the mp3 over XHR as an ArrayBuffer and running decodeAudioData from the audio context on it.
That gave me back a buffer containing the whole audio.
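That step looked roughly like this (a sketch; `onDecoded` is a hypothetical callback name that receives the decoded AudioBuffer):

```javascript
// Sketch: download an mp3 as an ArrayBuffer and decode it into raw samples.
function loadSong(ctx, url, onDecoded) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer'; // we want raw bytes, not text
  xhr.onload = function () {
    // decodeAudioData turns the compressed bytes into an AudioBuffer of samples
    ctx.decodeAudioData(xhr.response, onDecoded);
  };
  xhr.send();
}
```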
What I can do now is take data from that buffer at any rate.
Now that I have the list of amplitudes, I can play the audio back.
The Web Audio API does not have any way to generate sound, just process it (at least I did not find anything like that).
What I ended up doing is using a ScriptProcessor (an AudioNode for transforming sound).
I did not hook up any audio sources to it; I just hooked it up to the sink.
The transformationFunction is the function that gets called each time you need to process audio. Inside it I am taking audio from the music buffer and putting it into the output buffer.
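That callback logic can be sketched as a plain function (names are mine; `song` is the decoded channel data as a Float32Array, and `output` is the small buffer handed to you by the processing event):

```javascript
// Sketch of the processing callback: copy samples from the big song buffer
// into the small output buffer, advancing a position counter by `speed`.
function fillOutput(song, output, position, speed) {
  for (var j = 0; j < output.length; j++) {
    // Naively round down to the nearest existing sample for now;
    // past the end of the song, write silence.
    output[j] = song[Math.floor(position)] || 0;
    position += speed;
  }
  return position; // so the next callback continues from here
}

// Wiring in the browser would look roughly like:
// var node = ctx.createScriptProcessor(1024, 0, 1); // no inputs, one output channel
// var position = 0;
// node.onaudioprocess = function (e) {
//   position = fillOutput(song, e.outputBuffer.getChannelData(0), position, speed);
// };
// node.connect(ctx.destination);
```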
What I have is a huge buffer that contains the song, and a very small buffer I am writing to.
I have a counter i which tells me where I am inside the input buffer.
What I do is iterate over the output buffer and, for each sample, calculate what I want to take from the input buffer.
Let’s say playback speed is 1.5.
What I need to put into the output buffer are samples 1.5, 3, 4.5, etc.
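Concretely, the source index for the j-th output sample is j * speed (a toy helper of mine, just to show the pattern):

```javascript
// For each output sample j, the sample to fetch sits at index j * speed.
function sourceIndices(count, speed) {
  var indices = [];
  for (var j = 1; j <= count; j++) {
    indices.push(j * speed); // at speed 1.5: 1.5, 3, 4.5, ...
  }
  return indices;
}
```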
But there are only samples 1 and 2, and there is no sample 1.5. What do I do now?
Well, what you can do is: