Building real-time dashboard using React, GraphQL subscriptions and Redis PubSub
Navaneesh Kumar
Posted on March 10, 2019
In this post, we will be creating a simple, scalable dashboard that updates in real time using React, GraphQL Subscriptions, and Redis PubSub. Real-time dashboards are used for monitoring infrastructure (servers, network, services), application traffic (transaction volume, number of users), alerts (application health, notifications for critical issues, downtimes), and more. In most cases, dashboards are driven by one or more data sources.
Developers utilize a few open-source applications to create rich and useful dashboards. For example, Kibana is used for visualizing application logs as part of the ELK Stack, and Grafana provides a platform for building a variety of visualizations on top of time-series databases such as Prometheus, Graphite, and OpenTSDB. But, as of today, these tools support only a pull-based model: when a user opens the browser, the application queries the data source to render the dashboard. Pull is by far the most widely used model compared to a push model.
When can a push model be used?
Assume you have a dashboard consisting of 20 panels, each querying data from multiple data sources in real time, and the user has set a refresh rate of 5 seconds. If, on average, 100 users have the dashboard open at any given time, that results in 20 x 100 = 2,000 requests every 5 seconds, i.e. 400 requests per second! This is manageable if you have good infrastructure for your underlying time-series database; otherwise, multiple heavy queries can pile up in memory and delay the results. The problem can be solved either by introducing an intelligent caching solution or by a simple push model using WebSockets. A push model is useful (and simple) when multiple users are querying for the same data at the same, or slightly different, time.
Here's a minimal flow of how the push model can work:
A connection is established between the server and the client using a WebSocket.
The server sends the required data to the client at regular intervals.
If the connection breaks, the client can retry (even indefinitely).
At any given point in time, all clients display the same data.
What are we building?
Here's a preview of the simple real-time dashboard we will be building. It contains 4 panels: CPU utilization, traffic information, data-center distribution, and alerts.
GraphQL Subscriptions
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. Check out graphql.org for more info if you are not familiar with GraphQL.
just as the list of mutations that the server supports describes all of the actions that a client can take, the list of subscriptions that the server supports describes all of the events that it can subscribe to. Just as a client can tell the server what data to refetch after it performs a mutation with a GraphQL selection, the client can tell the server what data it wants to be pushed with the subscription with a GraphQL selection. - GraphQL blog
For example, the client can subscribe to CPU data using the following subscription syntax:
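subscription {
  cpu {
    percentage
  }
}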
Using Redis as a mediator for publishing events across server instances enables horizontal scaling. The package graphql-redis-subscriptions can be plugged in as a PubSubEngine implementation compatible with graphql-subscriptions.
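A minimal sketch of creating a shared Redis-backed PubSub instance (the connection options and the file layout here are assumptions for a local setup; see the package docs for the full option list):

const { RedisPubSub } = require("graphql-redis-subscriptions");

// A single PubSub instance backed by Redis: events published by one
// server instance are delivered to subscribers connected to any instance.
const pubsub = new RedisPubSub({
  connection: {
    host: "127.0.0.1", // assumed local Redis
    port: 6379
  }
});

module.exports = pubsub;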
The GraphQL schema defines all three root operation types:
Query - for getting the initial result from Redis.
Mutation - for publishing new data (events).
Subscription - for data exchange in real time between client and server.
const { gql } = require("apollo-server");

const schema = gql`
  type Dps {
    timestamp: Int!
    value: Float!
  }

  type Traffic {
    total: Int!
    dps: [Dps]
  }

  type CPU {
    percentage: Float!
  }

  type Distribution {
    region: String!
    percentage: Float!
  }

  type Message {
    title: String!
    description: String!
    color: String!
  }

  type Query {
    cpu: CPU
    traffic: Traffic
    distribution: [Distribution]
    messages: [Message]
  }

  type Mutation {
    cpu: CPU
    traffic: Traffic
    distribution: [Distribution]
    messages: [Message]
  }

  type Subscription {
    cpu: CPU
    traffic: Traffic
    distribution: [Distribution]
    messages: [Message]
  }
`;

module.exports = schema;
Helper functions are provided to generate dummy data for all 4 panels; refer to server/utils/generator.js. Using these data generators, write a wrapper function publishRandomData.
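As a rough sketch of how publishRandomData and the resolvers for one panel (CPU) can fit together, assuming an ioredis client for caching and a generator helper named generateCpuData (both names are assumptions; see the repo for the actual implementation):

const Redis = require("ioredis");
const pubsub = require("./pubsub"); // the shared RedisPubSub instance
const { generateCpuData } = require("./utils/generator"); // assumed helper name

const redis = new Redis(); // defaults to 127.0.0.1:6379

// Generate fresh data, cache it in Redis, and publish it to subscribers
const publishRandomData = async () => {
  const cpu = generateCpuData();
  await redis.set("cpu", JSON.stringify(cpu)); // so Query can return the latest value
  pubsub.publish("cpu", { cpu });
  return cpu;
};

const resolvers = {
  Query: {
    cpu: async () => JSON.parse(await redis.get("cpu"))
  },
  Mutation: {
    cpu: () => publishRandomData()
  },
  Subscription: {
    cpu: {
      subscribe: () => pubsub.asyncIterator("cpu")
    }
  }
};

module.exports = resolvers;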
const { ApolloServer } = require("apollo-server");
const typeDefs = require("./schema");
const resolvers = require("./resolvers");

// Server
const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`🚀 Server ready at ${url}`);
});
$ yarn start
yarn run v1.13.0
$ nodemon index.js
...
🚀 Server ready at http://localhost:4000/
Open GraphQL Playground at http://localhost:4000 in three tabs. Subscribe to the CPU percentage in Tab 1 and hit the play button:
subscription {
  cpu {
    percentage
  }
}
Run the mutation for CPU in Tab 2 to publish a random percentage value. The same value will be received as an event in Tab 1. Try the mutation multiple times to receive different values.
mutation {
  cpu {
    percentage
  }
}
Run the query for CPU in Tab 3. The last published value is returned, because the most recent value is cached in Redis.
query {
  cpu {
    percentage
  }
}

{
  "data": {
    "cpu": {
      "percentage": 25
    }
  }
}
Client
Create a new React application using create-react-app for the client.
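For example (the directory name and the exact dependency list below are assumptions based on the imports used in App.js):

npx create-react-app client
cd client
yarn add apollo-client apollo-cache-inmemory react-apollo apollo-link apollo-link-http apollo-link-ws apollo-utilities graphql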
Set up the Apollo HTTP client and WebSocket client, since both types of connection are required. The HTTP server will be running at http://localhost:4000 and the WebSocket subscription server at ws://localhost:4000/graphql.
import React, { Component } from "react";
import { ApolloClient } from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { ApolloProvider } from "react-apollo";
import { split } from "apollo-link";
import { HttpLink } from "apollo-link-http";
import { WebSocketLink } from "apollo-link-ws";
import { getMainDefinition } from "apollo-utilities";
import "./App.css";
import Home from "./Pages/Home";

// Create an http link:
const httpLink = new HttpLink({
  uri: "http://localhost:4000"
});

// Create a WebSocket link:
const wsLink = new WebSocketLink({
  uri: `ws://localhost:4000/graphql`,
  options: { reconnect: true }
});

// using the ability to split links, you can send data to each link
// depending on what kind of operation is being sent
const link = split(
  // split based on operation type
  ({ query }) => {
    const { kind, operation } = getMainDefinition(query);
    return kind === "OperationDefinition" && operation === "subscription";
  },
  wsLink,
  httpLink
);

const client = new ApolloClient({
  link,
  cache: new InMemoryCache()
});

class App extends Component {
  render() {
    return (
      <ApolloProvider client={client}>
        <Home />
      </ApolloProvider>
    );
  }
}

export default App;
The Home component is wrapped with ApolloProvider, which enables running queries and subscriptions.
Refer to the file CpuUsage.js for the complete class definition with the pie chart.
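As a minimal sketch of how a panel component can consume the CPU subscription (this simplified stand-in is not the actual CpuUsage.js, and the chart rendering is omitted):

import React from "react";
import gql from "graphql-tag";
import { Subscription } from "react-apollo";

const CPU_SUBSCRIPTION = gql`
  subscription {
    cpu {
      percentage
    }
  }
`;

// Simplified panel: re-renders whenever the server pushes a new CPU value
const CpuPanel = () => (
  <Subscription subscription={CPU_SUBSCRIPTION}>
    {({ data, loading }) =>
      loading || !data ? (
        <p>Waiting for data...</p>
      ) : (
        <p>CPU Utilization: {data.cpu.percentage}%</p>
      )
    }
  </Subscription>
);

export default CpuPanel;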
Worker
Real events can be mocked using a simple scheduler script that calls the mutation for each of the 4 panels at regular intervals. The package node-schedule can be used for creating asynchronous schedulers.
Install the dependencies
yarn add node-schedule request request-promise
Define the mutations for each panel.
const queries = {
  CPU: `
    mutation {
      cpu {
        percentage
      }
    }
  `,
  TRAFFIC: `
    mutation {
      traffic {
        total
        dps {
          timestamp
          value
        }
      }
    }
  `,
  DISTRIBUTION: `
    mutation {
      distribution {
        region
        percentage
      }
    }
  `,
  MESSAGES: `
    mutation {
      messages {
        title
        description
        color
      }
    }
  `
};
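The scheduler below calls a helper makeHttpRequest to POST the corresponding mutation to the GraphQL endpoint. A minimal sketch using request-promise (the exact implementation in the repo may differ):

const rp = require("request-promise");

// POST the mutation for the given panel to the GraphQL server
const makeHttpRequest = async name => {
  return rp({
    method: "POST",
    uri: "http://localhost:4000",
    body: { query: queries[name] },
    json: true
  });
};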
For example, add a scheduler for CPU using schedule.scheduleJob that runs every 3 seconds:
const schedule = require("node-schedule");

schedule.scheduleJob("*/3 * * * * *", async () => {
  await makeHttpRequest("CPU"); // Call mutation for CPU panel
  console.log("Fetched new results for CPU");
});
$ yarn start
yarn run v1.13.0
$ node worker.js
Starting worker
Scheduled Jobs for CPU, Traffic, distribution, messages
Fetched new results for TRAFFIC
Fetched new results for MESSAGES
Fetched new results for CPU
Fetched new results for DISTRIBUTION
Fetched new results for CPU
Fetched new results for MESSAGES
Fetched new results for TRAFFIC
...
Scaling
For high availability, the server program would be deployed as multiple instances behind a load balancer.
Consider 4 servers S1, S2, S3, and S4. When a user opens the browser (client), it can connect to any of the servers via the load balancer. All of these servers are connected to a Redis cluster R, so an event published by any one instance reaches subscribers connected to every other instance.