Building an app to help civil engineers
Bruno Oliveira
Posted on May 10, 2020
Background
I'm a developer with almost 4 years of professional experience. I've worked with Java, SpringBoot, Vaadin (a server-side framework for building UIs with Java and SASS), a bit of JS, and I tinkered with the MEAN stack back in the day, but I had never built a complete application on my own that I would deem production-ready. My recent Twilio hackathon app was quite nice, but I was using Heroku and Python, which, while being great services and languages, still fell short of the flexibility of Java that I've experienced in all the places I've worked so far.
Ever since I enrolled in Computer Science, my dad (who is a civil engineer) had envisioned a web application that could help him and his colleagues monitor dams, specifically by keeping historical records of the seismic responses of dams to events, such as small earthquakes or dam discharges, that could affect their structural integrity.
I never really thought that one day I'd be able to do it all on my own. I mean, it would involve database work (which I know very little about), it would require a decent-looking front-end UI capable of displaying charts on the web, eventually data file uploads, authorization, some maps to pinpoint dam locations... and to wrap it all up, I'd need to deploy it somewhere so it could be showcased. It sounded like an impossible task!
Well, after almost 4 years of professional web development, this is the story of how I finally managed to use my higher education and work experience to create something that will have a positive impact on my dad's work! It's very gratifying, and a milestone among my personal and professional achievements!
Technology Stack: back-end, front-end and deployment stacks
In 2020, it's both easier and harder than ever before to deploy professional-grade applications completely for free. Easier than ever, because there are lots of options to choose from; harder than ever, because there are a lot of moving parts, and everything needs to be orchestrated perfectly in order to work. Let's look at the choices I made in detail:
- Front-end:
The front-end is where the most options have become available and where the most moving parts need to be well orchestrated. Here are all the things I've used:
Svelte, as the front-end framework. Svelte is a new, modern way of building reactive web applications. It uses components as its main building block; each component lives in a single file with a .svelte extension, where the CSS styles, JS logic and declarative HTML are all bundled together. Components can be composed and re-used easily, making it very intuitive to work with;
Sapper, a framework for building web applications of all sizes, with a beautiful development experience and flexible filesystem-based routing. It's based on Svelte, and it gave me routing, page navigation, and a ready-to-use template I could build on top of, all while using Svelte :) As its docs put it: "Unlike single-page apps, Sapper doesn't compromise on SEO, progressive enhancement or the initial load experience — but unlike traditional server-rendered apps, navigation is instantaneous for that app-like feel.";
the Fetch API to communicate with the backend;
d3.js, which is a JavaScript library for manipulating documents based on data. D3 helps you bring data to life using HTML, SVG, and CSS. D3's emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation;
Leaflet, as the library to display and manipulate the maps and markers;
Bootstrap, to make it look more visually appealing and to "componentize" my UI, with the flexibility of Bootstrap grids and their out-of-the-box assets like icons, buttons, etc.
Material Design components, specifically used to build the "card-like" view component that shows the details of a dam.
So, a lot of components and frameworks working in orchestration under the hood to power the front-end of the app.
For front-end deployment, I used Vercel. Super easy to use!
- Back-end:
The back-end was powered by a Java and SpringBoot REST API, which the front-end's Fetch calls communicate with to get results and data.
- SpringBoot as the main framework to build the REST API. It's one of the most popular Java web development frameworks; I've written about it here before :)
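To give an idea of the shape of such an API, here is a minimal sketch of a SpringBoot REST controller. The DamController name, the /api/dams route and the returned data are hypothetical, just to illustrate the pattern:

// Hypothetical controller sketch: exposes dam data as JSON for the front-end's Fetch calls.
import java.util.Arrays;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/dams")
public class DamController {

    // GET /api/dams returns the list of registered dams,
    // automatically serialized to JSON by SpringBoot.
    @GetMapping
    public List<String> allDams() {
        return Arrays.asList("Dam A", "Dam B");
    }
}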
Since the application needs to be capable of handling the upload and download of Excel files, I've used the Apache POI library to handle Excel on the backend.
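As an illustration of what the Excel handling can look like, here is a small sketch that reads every cell of an uploaded workbook's first sheet with POI (the real parsing logic is of course more involved):

import java.io.InputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelReader {

    // Prints every cell of the first sheet as text.
    // WorkbookFactory handles both .xls and .xlsx transparently.
    public static void printFirstSheet(InputStream excelFile) throws Exception {
        DataFormatter formatter = new DataFormatter();
        try (Workbook workbook = WorkbookFactory.create(excelFile)) {
            Sheet sheet = workbook.getSheetAt(0);
            for (Row row : sheet) {
                for (Cell cell : row) {
                    System.out.print(formatter.formatCellValue(cell) + "\t");
                }
                System.out.println();
            }
        }
    }
}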
In order to persist uploaded data, and to associate the uploaded files with each dam registry, I needed some persistent storage, and for that task I chose the Mongo Java driver to integrate MongoDB communication capabilities into my Spring backend.
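For illustration, inserting a dam record with the Mongo Java driver boils down to something like this (a sketch; the database and collection names, and the document fields, are made up):

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class MongoSketch {

    public static void main(String[] args) {
        // In the real app the connection string comes from configuration (see below).
        try (MongoClient client = MongoClients.create("mongodb+srv://<user>:<password>@<cluster-url>/damsdb")) {
            MongoCollection<Document> dams = client.getDatabase("damsdb").getCollection("dams");
            // A dam registry entry as a plain BSON document.
            dams.insertOne(new Document("name", "Some Dam").append("location", "Portugal"));
        }
    }
}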
Very simply, we define our application configuration in an application.properties file, and from there onwards Spring can communicate with a MongoDB cluster specified in a property like spring.data.mongodb.uri.
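For example, with placeholder credentials (the real URI points at the free cluster described next):

# application.properties: where Spring finds the MongoDB cluster
spring.data.mongodb.uri=mongodb+srv://<user>:<password>@<cluster-url>/<database>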
Obviously, I needed a real MongoDB database to use and, to that effect, I set up my own free cluster at the MongoDB cloud.
This wraps it up for the backend. Now, on to the deployment.
- Handling deployment of the backend services:
Since Vercel provided me with a great service to handle my front-end deployment, I only needed to find a way to deploy the backend code.
My backend, in essence, was a SpringBoot REST API communicating with an external MongoDB provider, so if I could deploy the SpringBoot app, I'd effectively have a "full-stack production" scenario.
Fortunately, thanks to my own job and my efforts in learning how to work with Docker, I had already written an article on working with Docker, which you can find here.
However, there are some important details I need to mention here:
When deploying on an intranet or a closed network, simply to give coworkers access to an image or to test things locally, the Dockerfile can be written assuming the "local" environment, i.e., you assume that all the resources referenced in the Dockerfile are available in the environment on which you're building your image. This is not true when deploying an image to an external Docker registry, like DockerHub: there you need to explicitly grab the dependencies from a central repository as you go, since they aren't available in the build environment anymore. Such a Dockerfile can look like:
# our base build image
FROM maven:3-jdk-8 as maven
WORKDIR /my-project
# copy the Project Object Model file
COPY ./pom.xml ./pom.xml
# fetch all dependencies
RUN mvn dependency:go-offline -B
# copy your other files
COPY ./src ./src
# build for release
# NOTE: my-project-* should be replaced with the proper prefix
RUN mvn package && cp /my-project/target/my-project-0.0.1-SNAPSHOT.jar my-project.jar
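# second stage: a much smaller runtime-only image; the Maven layers above are left behind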
FROM openjdk:8-alpine
WORKDIR /my-project
# copy over the built artifact from the maven image
COPY --from=maven /my-project/my-project.jar my-project.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "/my-project/my-project.jar"]
Essentially, replicate your own local build in the cloud ;)
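With a Dockerfile like this in place, publishing the image is just a matter of running docker build -t <user>/my-project . followed by docker push <user>/my-project (after a docker login against DockerHub).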
Deploying Docker with a simple Kubernetes cluster service
Finally, in order to use my newly created Docker image, I needed a way to deploy it, start up my container, and leave it running.
For that, I used Kubernetes, which, in very simple terms, lets you expose a service running in a Docker container to the outside world in a very declarative manner, using YAML configuration files (specifying things like a reverse proxy or an nginx configuration). In order to do this for free, I ended up choosing KubeSail as my hosted Kubernetes provider, which is very flexible for what I need.
To work with Kubernetes, you need a file with your cluster configuration on your machine at ~/.kube/config. From that file, the kubectl command gives you, in your terminal, access to a set of commands and additional configurations to manage your clusters, deployments, pods and services, and to issue secrets, certificates, etc. All of it is done via commands like kubectl get deployments.
KubeSail takes away some of this complexity: you can simply point it at a GitHub repo and branch, and on each commit your deployment and associated pod(s) will be recreated. Essentially, a pod runs an instance of your containerized service; a deployment manages those pods; and exposing them to the outside world via a loadBalancer configuration (or one of several other possibilities) gives the service its own external IP address, which is mapped to a friendly URL in the deployment's YAML configuration. A sketch of such a configuration follows.
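To make this concrete, here is a minimal sketch of what such a YAML configuration can look like, assuming the my-project image from the Dockerfile above has been pushed to a registry (all names and labels are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project
  template:
    metadata:
      labels:
        app: my-project
    spec:
      containers:
        - name: my-project
          image: <user>/my-project:latest
          ports:
            - containerPort: 8080 # matches the EXPOSE in the Dockerfile
---
apiVersion: v1
kind: Service
metadata:
  name: my-project
spec:
  type: LoadBalancer # hands the pods an external IP
  selector:
    app: my-project
  ports:
    - port: 80
      targetPort: 8080

A kubectl apply -f on a file with this content creates (or updates) both the deployment and the service.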
And...this was it for the deployment.
A small showcase in screenshots
After all the details, here is a small sample of how the app currently looks:
The idea for future improvements is to keep adding functionality and polishing the app, eventually making it production-ready and, who knows, maybe something really useful for future civil engineering work!