How to create a progressive audio player with React hooks
Nico Martin
Posted on April 26, 2020
I'm a huge fan of the web as an open platform for distributing software. That's why I'm always looking for new ideas to experiment with upcoming browser APIs. Some time ago I stumbled upon a Twitter thread where Aleksej and Jonny were talking about a web app that would let you listen to the audio stream of a YouTube video in the background.
Long story short, I built it:
nico-martin / yt-audio
A Progressive Web App that allows you to listen to YouTube videos in the background
The main idea was to create a useful implementation of the Web Share Target API. But that was just the beginning. The most interesting part was definitely the audio player. My first prototype used a plain HTML audio element, but soon there were quite a few requests for a more extensive audio player.
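As a quick illustration of that API: a PWA registers itself as a share target in its web app manifest, and the browser then lists the installed app in the native share sheet. The `action` path and parameter names below are made up for illustration, not taken from YTAudio:

```json
{
  "share_target": {
    "action": "/share",
    "method": "GET",
    "params": {
      "title": "title",
      "text": "text",
      "url": "url"
    }
  }
}
```

When a user shares a YouTube link to the installed app, the browser opens `/share?url=…` and the app can pick the video URL out of the query string.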
useAudio
I've written the whole app with React (using Preact under the hood), and since I'm a big fan of React hooks I thought it would be a good idea to extract the player into a custom useAudio hook.
I quickly found great inspiration on GitHub, where Vadim Dalecky has published a huge library of React hooks. I really like his implementation, but some features were missing and I thought I could simplify a few things.
One of the most important things is the separation between `state` (the current state of the player) and `controls` (which are used to interact with the player). So in the end I had a `useAudio` hook that looks like this:
```jsx
// useAudio.jsx
import React, { useEffect, useRef, useState } from 'react';

// A TimeRanges object can hold several ranges; for simplicity
// we only look at the first one.
const parseTimeRange = ranges =>
  ranges.length < 1
    ? {
        start: 0,
        end: 0,
      }
    : {
        start: ranges.start(0),
        end: ranges.end(0),
      };

export default ({ src, autoPlay = false, startPlaybackRate = 1 }) => {
  const [state, setOrgState] = useState({
    buffered: {
      start: 0,
      end: 0,
    },
    time: 0,
    duration: 0,
    paused: true,
    waiting: false,
    playbackRate: 1,
    endedCallback: null,
  });
  // Merge partial updates into the previous state. The functional form
  // avoids stale-closure bugs when several media events fire in a row.
  const setState = partState =>
    setOrgState(prev => ({ ...prev, ...partState }));
  const ref = useRef(null);

  const element = React.createElement('audio', {
    src,
    controls: false,
    ref,
    onPlay: () => setState({ paused: false }),
    onPause: () => setState({ paused: true }),
    onWaiting: () => setState({ waiting: true }),
    onPlaying: () => setState({ waiting: false }),
    onEnded: state.endedCallback,
    onDurationChange: () => {
      const el = ref.current;
      if (!el) {
        return;
      }
      const { duration, buffered } = el;
      setState({
        duration,
        buffered: parseTimeRange(buffered),
      });
    },
    onTimeUpdate: () => {
      const el = ref.current;
      if (!el) {
        return;
      }
      setState({ time: el.currentTime });
    },
    onProgress: () => {
      const el = ref.current;
      if (!el) {
        return;
      }
      setState({ buffered: parseTimeRange(el.buffered) });
    },
  });

  // play() returns a Promise in modern browsers; this lock prevents a
  // second play() call while the first one is still pending. A ref
  // survives re-renders, unlike a plain local variable.
  const lockPlay = useRef(false);

  const controls = {
    play: () => {
      const el = ref.current;
      if (!el) {
        return undefined;
      }
      if (!lockPlay.current) {
        const promise = el.play();
        const isPromise = typeof promise === 'object';
        if (isPromise) {
          lockPlay.current = true;
          const resetLock = () => {
            lockPlay.current = false;
          };
          // release the lock whether playback starts or is rejected
          promise.then(resetLock, resetLock);
        }
        return promise;
      }
      return undefined;
    },
    pause: () => {
      const el = ref.current;
      if (el && !lockPlay.current) {
        return el.pause();
      }
    },
    seek: time => {
      const el = ref.current;
      if (!el || state.duration === undefined) {
        return;
      }
      // Clamp the target time between 0 and the track duration.
      time = Math.min(state.duration, Math.max(0, time));
      el.currentTime = time || 0;
    },
    setPlaybackRate: rate => {
      const el = ref.current;
      if (!el || state.duration === undefined) {
        return;
      }
      setState({
        playbackRate: rate,
      });
      el.playbackRate = rate;
    },
    setEndedCallback: callback => {
      setState({ endedCallback: callback });
    },
  };

  useEffect(() => {
    const el = ref.current;
    if (!el) {
      return;
    }
    setState({
      paused: el.paused,
    });
    controls.setPlaybackRate(startPlaybackRate);
    if (autoPlay && el.paused) {
      controls.play();
    }
  }, [src]);

  return { element, state, controls };
};
```
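One detail worth pointing out: in modern browsers `el.play()` returns a Promise, and the `lockPlay` flag ignores further play() calls while one is still pending. The pattern can be isolated from the DOM with a stand-in play function (`makeLockedPlay` and `fakePlay` are hypothetical helpers, just to demonstrate the idea):

```javascript
// Wraps a Promise-returning play function so that overlapping calls
// are ignored until the pending Promise settles.
const makeLockedPlay = play => {
  let lockPlay = false;
  return () => {
    if (lockPlay) {
      return undefined; // a play() is already pending; ignore this call
    }
    const promise = play();
    if (typeof promise === 'object') {
      lockPlay = true;
      const resetLock = () => {
        lockPlay = false;
      };
      // release the lock whether the Promise resolves or rejects
      promise.then(resetLock, resetLock);
    }
    return promise;
  };
};

// Stand-in for HTMLMediaElement.play(): counts calls, resolves later.
let calls = 0;
const fakePlay = () => {
  calls += 1;
  return Promise.resolve();
};

const lockedPlay = makeLockedPlay(fakePlay);
lockedPlay(); // starts "playback" and takes the lock
lockedPlay(); // ignored: the first Promise has not settled yet
console.log(calls); // 1
```

Without the lock, rapid-fire clicks on a play button could trigger several overlapping play() calls, which browsers answer with `AbortError` rejections.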
YTAudio itself is written in TypeScript. If you are using TypeScript, you should use the typed version of the hook from that repository.
In the end we still need to create an HTML audio element and "mount" it to the DOM. But the `state`/`controls` abstraction makes it really easy to interact with the player:
```jsx
// player.jsx
import React from 'react';
import useAudio from './useAudio';

const Player = () => {
  const { element, state, controls } = useAudio({
    src:
      'https://file-examples.com/wp-content/uploads/2017/11/file_example_MP3_2MG.mp3',
  });
  return (
    <div>
      {element}
      <button onClick={() => controls.seek(state.time - 10)}>-10 sec</button>
      <button
        onClick={() => {
          state.paused ? controls.play() : controls.pause();
        }}
      >
        {state.paused ? 'play' : 'pause'}
      </button>
      <button onClick={() => controls.seek(state.time + 10)}>+10 sec</button>
      <br />
      {Math.round(state.time)} / {Math.round(state.duration)}
      <br />
      Playback Speed (100 = 1)
      <br />
      <input
        onChange={e => controls.setPlaybackRate(e.target.value / 100)}
        type="number"
        value={state.playbackRate * 100}
      />
    </div>
  );
};

export default Player;
```
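`state.time` and `state.duration` are raw seconds, so rounding them is fine for a prototype. For a nicer display you could add a small formatting helper; `formatTime` below is a hypothetical addition, not part of the player:

```javascript
// Formats a duration in seconds as m:ss, e.g. 125 -> "2:05".
// Falls back to 0 for NaN (duration is NaN before metadata loads).
const formatTime = seconds => {
  const total = Math.max(0, Math.round(seconds || 0));
  const minutes = Math.floor(total / 60);
  const secs = String(total % 60).padStart(2, '0');
  return `${minutes}:${secs}`;
};

console.log(formatTime(125)); // "2:05"
```

In the component you would then render `{formatTime(state.time)} / {formatTime(state.duration)}` instead of the rounded raw numbers.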
And where does the "progressive" come from?
Well, to be honest, I first wanted to write one article about the whole project. But then I decided to move the "progressive" parts into their own posts. So just keep an eye on my YTAudio series here on dev.to.
The full example of my custom audio player is available on GitHub: https://github.com/nico-martin/yt-audio/tree/master/src/app/Player