APIs For Beginners (With Examples)
Richard Rembert
Posted on September 16, 2022
Application Programming Interfaces (APIs) are sets of functions and procedures exposed by an operating system, application or other service. Developers can use them to interact with that service when building their own applications. Essentially, they hide a lot of the more complex code and provide an easier syntax to work with.
For example, you might want to add a live Twitter feed to your website. By working with the relevant methods provided by the Twitter API (via the developer documentation), you could query Twitter & display the latest tweets by a particular user or hashtag.
APIs in JavaScript
JavaScript has numerous APIs available to it, which are typically defined as either Browser APIs or Third Party APIs. Let’s take a look at each.
Browser APIs
Browser APIs are built into the browser and expose data from the browser and its environment. With this data we can do many useful things, from simply manipulating the window or an element to generating intricate effects using APIs such as WebGL.
Some of the more common browser APIs that you’ll encounter are:
- APIs for document manipulation: The DOM (Document Object Model) is in fact an API! It allows you to manipulate HTML and CSS: creating, removing and changing HTML, dynamically applying new styles to your page, and so on.
- APIs for fetching server data: Such as XMLHttpRequest and the Fetch API, which we often use for data exchange and partial page updates. This technique is often called Ajax (see the short sketch after this list).
- APIs for manipulating graphics: Here we're talking about Canvas and WebGL, which allow you to programmatically update the pixel data contained in an HTML <canvas> element to create 2D and 3D scenes.
- Audio/Video APIs: Such as HTMLMediaElement and the Web Audio API. You can use these to create custom controls for playing audio and video, display text tracks like captions and subtitles along with your videos, and add effects to audio tracks (such as gain, distortion, etc.).
- Device APIs: For manipulating and retrieving data from device hardware to use with web apps, such as notifying the user that an update is available.
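As promised above, here's a minimal sketch of fetching server data with the Fetch API. The URL is a placeholder rather than a real endpoint:

```js
// Request a JSON resource and log the parsed result.
// The URL below is a placeholder, not a real endpoint.
fetch('https://example.com/data.json')
  .then((response) => {
    if (!response.ok) {
      throw new Error(`HTTP error: ${response.status}`);
    }
    return response.json(); // parse the response body as JSON
  })
  .then((data) => {
    console.log(data);
  })
  .catch((error) => {
    console.error('Fetch failed:', error);
  });
```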
Third Party APIs
Third party APIs are not built into the browser; you'll need to retrieve their code and information from somewhere else. Once you connect to a third party API, you can work with the methods it provides.
Some common third party APIs are:
- Openweathermap.org: allows you to query weather data. For example, you could capture a user's location and display their current temperature (see the sketch after this list).
- Twitter API: you could display your latest tweets on your website.
- Google Maps API: allows you to completely customize maps to include on your web pages.
- YouTube API: allows you to embed YouTube videos on your site, search YouTube and automatically generate playlists.
- Twilio API: this API provides a framework for building voice and video call functionality into your app, sending SMS/MMS from your apps, and more.
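As a rough illustration of the OpenWeatherMap idea above, here's a hedged sketch using the Fetch API. The endpoint, query parameters and YOUR_API_KEY placeholder reflect how OpenWeatherMap's current-weather API is commonly documented; always check the provider's documentation for the exact URL and parameters:

```js
// Sketch: fetch the current temperature for a city from OpenWeatherMap.
// Replace YOUR_API_KEY with a real key; verify the endpoint and
// parameters against the official documentation.
const apiKey = 'YOUR_API_KEY';
const city = 'London';
const url = `https://api.openweathermap.org/data/2.5/weather?q=${city}&units=metric&appid=${apiKey}`;

fetch(url)
  .then((response) => response.json())
  .then((data) => {
    // data.main.temp holds the temperature in °C when units=metric
    console.log(`Current temperature in ${city}: ${data.main.temp}°C`);
  })
  .catch((error) => console.error('Weather request failed:', error));
```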
How Do APIs Work?
Despite different APIs working in different ways, they all share some common characteristics, which we'll take a look at now:
APIs Are Based on Objects
The code you write to interact with APIs will use one or more JavaScript objects. These objects serve as containers for the data the API uses (contained in object properties), and the functionality the API makes available (contained in object methods).
Let's take a look at an example using the Web Audio API. The API consists of a number of objects, for example:
- AudioContext: represents an audio graph that can be used to manipulate audio playing inside the browser, and has a number of methods and properties available to manipulate that audio.
- MediaElementAudioSourceNode: represents an <audio> element containing sound you want to play and manipulate inside the audio context.
- AudioDestinationNode: represents the destination of the audio, i.e. the device on your computer that will actually output it, usually either your speakers or headphones.
How do these objects interact? Let’s look at the following HTML:
<audio src="funkybeats.mp3"></audio>
<button class="paused">Play</button>
<br />
<input type="range" min="0" max="1" step="0.10" value="1" class="volume">
We have here an <audio>
element which we use to embed our MP3 into the page. We don't include any default browser controls. Then we’ve added a <button>
that we'll use to play and stop the music, and an <input>
element of type range, which we'll use to adjust the track volume while it's playing.
Let’s see the JavaScript for this example.
We’ll start by creating an AudioContext
instance inside which to manipulate our track:
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
Then we'll create constants that store references to our <audio>, <button>, and <input> elements, and use the AudioContext.createMediaElementSource() method to create a MediaElementAudioSourceNode representing the source of our audio (the <audio> element it will be played from):
const audioElement = document.querySelector('audio');
const playBtn = document.querySelector('button');
const volumeSlider = document.querySelector('.volume');
const audioSource = audioCtx.createMediaElementSource(audioElement);
Next we’ll include a couple of event handlers that toggle between play and pause whenever the button is pressed, and reset the display back to the beginning when the song has finished playing:
// Play + Pause audio
playBtn.addEventListener('click', function() {
  // check if context is in suspended state (autoplay policy)
  if (audioCtx.state === 'suspended') {
    audioCtx.resume();
  }

  // If track is stopped, play it
  if (this.getAttribute('class') === 'paused') {
    audioElement.play();
    this.setAttribute('class', 'playing');
    this.textContent = 'Pause';
  // If track is playing, stop it
  } else if (this.getAttribute('class') === 'playing') {
    audioElement.pause();
    this.setAttribute('class', 'paused');
    this.textContent = 'Play';
  }
});

// If the track ends, reset the button back to its paused state
audioElement.addEventListener('ended', function() {
  playBtn.setAttribute('class', 'paused');
  playBtn.textContent = 'Play';
});
Note: The play() and pause() methods being used here are actually part of the HTMLMediaElement API, and not the Web Audio API (though they're closely related!).
Now we create a GainNode
object using the AudioContext.createGain()
method, which can be used to adjust the volume of audio fed through it, and create another event handler that changes the value of the audio graph's gain (volume) whenever the slider value is changed:
const gainNode = audioCtx.createGain();
volumeSlider.addEventListener('input', function() {
  gainNode.gain.value = this.value;
});
Then finally we connect the different nodes in the audio graph up, which is done using the AudioNode.connect()
method available on every node type:
audioSource.connect(gainNode).connect(audioCtx.destination);
The audio starts in the source, which is then connected to the gain node so the audio’s volume can be adjusted. The gain node is then connected to the destination node so the sound can be played on your computer (the AudioContext.destination
property represents whatever is the default AudioDestinationNode
available on your computer's hardware, i.e. your speakers).
APIs Have Recognizable Entry Points
Whenever you use an API, make sure you know where the entry point is! In the Web Audio API, it's the AudioContext object, which needs to be used to do any manipulation whatsoever.
When using the Document Object Model (DOM) API, its features tend to be found hanging off the Document object, or an instance of an HTML element that you want to affect in some way, for example:
// Create a new em element
let em = document.createElement('em');
// Reference an existing p element
let p = document.querySelector('p');
// Give em some text content
em.textContent = 'Hello, world!';
// Embed em inside the paragraph
p.appendChild(em);
The Canvas API also relies on getting a context object to use to manipulate things. Its context object is created by getting a reference to the <canvas>
element you want to draw on, and then calling its HTMLCanvasElement.getContext()
method:
let canvas = document.querySelector('canvas');
let ctx = canvas.getContext('2d');
Anything that we want to do to the canvas is then achieved by calling properties and methods of the context object (which is an instance of CanvasRenderingContext2D
), for example:
Circle.prototype.draw = function() {
  ctx.beginPath();
  ctx.fillStyle = this.color;
  ctx.arc(this.x, this.y, this.size, 0, 2 * Math.PI);
  ctx.fill();
};
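For context, the snippet above assumes a Circle constructor that isn't shown here. A minimal sketch of what it might look like (the property names are taken from the draw method above; everything else is hypothetical):

```js
// Hypothetical constructor matching the properties used in draw():
// x/y position, size (radius) and fill color.
function Circle(x, y, size, color) {
  this.x = x;
  this.y = y;
  this.size = size;
  this.color = color;
}

// Usage: create a circle and draw it onto the canvas context (ctx).
let circle = new Circle(100, 75, 50, 'rebeccapurple');
circle.draw();
```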
APIs Handle Changes in State With Events
When using the XMLHttpRequest object (each one represents an HTTP request to the server to retrieve a new resource), we have a number of events available. For example, the load event is fired when a response containing the requested resource has been successfully returned and is now available.
The following code provides an example of how this could be used:
let requestURL = 'https://a-website.com/json/usernames.json';
let request = new XMLHttpRequest();
request.open('GET', requestURL);
request.responseType = 'json';
request.send();
request.onload = function() {
  let usernames = request.response;
  populateHeader(usernames);
  showNames(usernames);
};
The first five lines specify the location of the resource we want to fetch, create a new instance of a request object using the XMLHttpRequest() constructor, open an HTTP GET request to retrieve the specified resource, specify that the response should be sent in JSON format, then send the request.
The onload handler function then specifies what we do with the response. We know the response will be successfully returned and available after the load event has fired (unless an error occurs), so we save the response containing the returned JSON in the usernames variable, then pass it to two different functions for further processing.
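Failures can be handled in the same event-driven way. As a small, hedged sketch (the handler body here is purely illustrative), you could listen for the request's error event alongside load:

```js
// Sketch: react if the request fails at the network level.
// How you surface the error is up to your app.
request.addEventListener('error', function() {
  console.error('The request for the usernames failed.');
});
```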
Note: In the next article, we’re going to take a look at fetching data from a server in more detail.
Summary
We've looked at what APIs are and how they work, compared browser and third party APIs, and finally examined some of the common characteristics that many APIs share. I hope this introduction gave you more of an understanding of what APIs are, and how we can use them in our projects!
Also, if you're into Formula One racing, check out this project I created using the F1 API to track who won each race & the drivers' standings for the season.
You can find the GitHub repo here.
Conclusion
If you liked this blog post, follow me on Twitter where I post daily about tech-related things!
If you enjoyed this article & would like to leave a tip, click here.