Dev-Loop: a new approach to local development
Cynthia Coan
Posted on April 14, 2020
Dev-Loop is a new "localized task runner": a declarative, easy-to-use command runner for all your local tasks. It's incremental to adopt, so you don't need to make one massive change; it's piecemeal, so if you like your Makefiles you can keep them; and it makes two-step install instructions possible.
Why Another Tool Though?
How many times have you started a new job and heard: "Go through the setup steps, but be warned, they're probably horribly broken"? How many times have you looked at a new open source project and immediately said "nope" after reading the install instructions?
Even if you haven't looked at other projects recently: how many times have you been scared to make a build change because it would mean diving into a depth of Makefiles, or shell scripts amassed in bin/? How many times have you struggled to remember a command that changed, or been disappointed that you couldn't make a change because it'd break everyone's workflows?
Chances are you've experienced at least one of these in your time spent developing, because as it turns out, creating a really great local development experience is a hard task. Not only that, it rarely gets prioritized: everyone can already build the code and knows the commands, so why change anything? Really, the only time it gets noticed is when you're upgrading a tool or bringing someone new onto the team.
I remember once hearing from a team: "We'll put all the candidates in a room, and give them a local checkout of the repository. Whoever can figure out how to build our product gets hired." Jokingly to be sure, but it really shows how bad the state of local development is.
Some languages make this easier: Rust has `cargo`, Node has `npm`, etc. These are great when you're only dealing with that one language. However, even for small projects these days it's becoming common to have multiple languages in play: for example, a Node.js static site for documentation, plus whatever your app is actually written in.
Why Not Existing Solutions?
There isn't really an existing solution that tackles the deficiencies of using multiple languages effectively:
- There are tools that change how we build our code across languages (like Bazel, or Buck), but these generally don't tackle things that aren't build related: linting is still probably another step that's just some shell script, and you still have to install some local dependencies.
- Makefiles/shell scripts can be implemented well (but usually aren't). And whenever you want to do something like use Docker so the user doesn't have to install a tool, you end up with code like:
docker run --rm -d -v "$(pwd):/mnt/src/" --workdir "/mnt/src/" --entrypoint "" --name "jdk-builder" openjdk:11 tail -f /dev/null || {
  status_code=$?
  echo "Failed to start container jdk-builder, perhaps it's already running?"
  exit $status_code
}
docker exec -it jdk-builder /bin/bash -c "javac src/MyApp.java" || {
  status_code=$?
  docker kill jdk-builder || echo "Failed to kill jdk-builder, please kill it manually."
  exit $status_code
}
docker exec -it jdk-builder /bin/bash -c "jar -cvf MyJarFile.jar src/MyApp.class" || {...}
docker kill jdk-builder || {
  status_code=$?
  echo "Failed to kill jdk-builder, please kill it manually."
  exit $status_code
}
- Tools like https://taskfile.dev/#/ simply move builds into a more understandable format than a shell script.
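The Docker snippet above illustrates the lifecycle-management pain well. Even staying in plain shell, the usual way to tame it is a `trap`-based cleanup handler. Here's a runnable sketch of that pattern, with a throwaway file standing in for the container so it works without Docker:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for "docker run ... --name jdk-builder": create something
# that must be torn down no matter how the script exits.
marker="$(mktemp /tmp/jdk-builder.XXXXXX)"

cleanup() {
  # With Docker this would be: docker kill jdk-builder
  rm -f "$marker"
}
# Run cleanup on success, failure, or an interrupting signal alike.
trap cleanup EXIT

# ... each "docker exec" build step would go here; any failure still
# triggers the trap, so the "container" never leaks ...
echo "build steps ran"
```

This removes the repeated `|| { ...; exit; }` blocks, but you still carry the pattern around by hand in every script, which is exactly the maintenance burden Dev-Loop is trying to take off your plate.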
To be clear, these are all great tools; they each do what they intend to do. However, they're base pieces: they give you a way of writing tasks and running them. It's like writing in assembly. Sure, you can do anything, but how you assemble and maintain it is critically important. Not to mention it isn't easy for others to contribute unless they know how you're piecing it all together.
Okay, That's Great, How Does Dev-Loop Solve These Problems?
The best way to understand what's happening is to see it run, right? Well, let's take a look at it running:
So what's actually happening under the hood here? Why is this good?
Creating a Declarative CLI
The first thing you've probably noticed is that when I ran those commands, they were declarative. Instead of running a specific tool, I simply said what I wanted to happen: "execute a build for Dev-Loop". This is really beneficial, because if I want to change how my project is built, I don't have to teach anyone a new command. They run the same thing. Even if directories change, I still run the same command.
To make this easy to create, we group things to do into "tasks". Each task should follow the Unix tradition: "Do one thing, and do it well." However, not all tasks are alike. There are three types of tasks:
- Command: Actually run a shell script.
- Oneof: Choose one of these tasks to run based off of a particular argument.
- Pipeline: Run these tasks in this specific order, only continuing if the previous one succeeded.
These three types provide the building blocks for whatever it is you need, while making tasks easy to reuse. The task above actually uses all three types.
First we use a "Oneof" task to provide a nice list of things you can build:
- name: build
  description: the top level build command
  type: oneof
  options:
    - name: dl
      description: build the Dev-Loop binary
      task: build-dl-debug
    - name: dl-release
      description: build the Dev-Loop binary in release
      task: build-dl-release
    - name: docs
      description: build the documentation site
      task: npm
      args:
        - "docs/"
        - "run"
        - "build"
Here we have a series of options the user can choose from (discoverable with the "list" command you saw at the beginning of the video).
You might also notice that we can pass arguments down to whatever we end up invoking. Keeping the arguments in config helps us keep the CLI declarative over time. Imagine I wanted to change the arguments being sent to npm, say a contrived change from `run build` to `run build-and-compile-assets`. I just change the config and push it up to git. As people check out my patch, they start using the new command without even noticing. Even if they need to reset to a much earlier point in time (before `build-and-compile-assets` was introduced), they can still run the same command. You don't have to teach someone a new command every time you want to change how your product builds.
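That contrived change amounts to a one-line diff against the `docs` option in the config above (with `build-and-compile-assets` being a made-up script name for the sake of the example):

```diff
     - name: docs
       description: build the documentation site
       task: npm
       args:
         - "docs/"
         - "run"
-        - "build"
+        - "build-and-compile-assets"
```

Everyone keeps typing the same declarative command; only the config changes.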
We executed `build dl`, so let's take a peek at `build-dl-debug`:
- name: build-dl-debug
  type: pipeline
  steps:
    - name: rustc-build
      task: cargo-build
    - name: rename-bin
      task: rename
      args:
        - ./target/x86_64-unknown-linux-musl/debug/dev-loop
        - ./target/dl
  internal: true
This isn't all that different from `oneof`, except we define which tasks we want to run, in what order. So we build our project, and then move the binary to a more suitable location. One thing you may notice is the `internal: true` flag. This is a way of telling Dev-Loop: "I don't want anyone to run this directly." That could be because the task accepts arguments that aren't guaranteed to stay the same, or because it's just awkward to run (i.e. `dl exec build dl` is much nicer than `dl exec build-dl-debug`).
Let's dig into something juicier though: the `cargo-build` command, which actually runs our build:
- name: cargo-build
  type: command
  location:
    type: path
    at: cargo-build.sh
  execution_needs:
    - name: rustc
  internal: true
The command type, instead of specifying steps or options, tells Dev-Loop what script to run. In this case it says: "look for the file called `cargo-build.sh` in the same directory".
The other new stanza is `execution_needs`. What's going on with that? To explain it in its entirety, we need to talk about executors, which are also the reason your install instructions can be two steps.
Abstracting Away the Host System
Dev-Loop provides a way of "abstracting away" the host system. Dev-Loop recognizes that host machines have different commands, behaviors, and installations; even the `date` command acts differently between Mac and Linux. Docker can solve this, but as we saw above, managing Docker container lifetimes by hand tends to be very hard, and generally isn't the most pleasant thing to do. There are other ways to solve it too (maybe you run everything over SSH on a remote host?), but every previous solution requires your task to know where it's running. Not to mention keeping state between tasks: "Has this container already started? If so, don't stand it up again." "Am I the last task using this container? Should I kill it?"
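That `date` discrepancy is real: GNU coreutils on Linux and the BSD userland on macOS take entirely different flags for the same operation. For example, computing the day after a fixed date:

```shell
# GNU date (Linux): relative adjustments go in the -d/--date string.
date -d '2020-04-14 + 1 day' +%F
# → 2020-04-15

# BSD date (macOS) needs -j/-f/-v for the same computation:
#   date -j -f '%Y-%m-%d' -v+1d '2020-04-14' '+%Y-%m-%d'
```

A script written against one of these silently assumes the host it was developed on, which is exactly the kind of difference executors are meant to paper over.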
Luckily, because Dev-Loop already decides which tasks to execute when a user types a command, it's in the perfect place to answer these sorts of questions, and to know where tasks should run. This is what the `execution_needs` stanza in the previous configuration is for. It tells Dev-Loop: "hey, please run me in an environment with rustc; I don't care about the version."
Next, Dev-Loop is told what types of runtime environments there are:
- type: host
- type: docker
  params:
    export_env: 'RUST_BACKTRACE'
    extra_mounts: 'scratch/rust-git-cache/:/root/.cargo/git/,scratch/rust-registry-cache/:/root/.cargo/registry/'
    image: 'clux/muslrust:1.41.0-stable'
    name_prefix: 'rustc-musl-'
  provides:
    - name: bash
      version: '4.0.0'
    - name: rustc
      version: '1.40.0'
    - name: linux
- type: docker
  params:
    image: 'node:12.14'
    name_prefix: 'nodejs-'
  provides:
    - name: nodejs
      version: '12.14.0'
    - name: bash
      version: '4.0.0'
    - name: linux
This list of possible "executors" tells Dev-Loop where it could run things. In this case there's only one executor that provides "rustc", so Dev-Loop knows that when it wants to run the `cargo-build` task, it needs to stand up that specific Docker container if it hasn't been already. Then, at the end, Dev-Loop knows the container was stood up, so it shuts it down.
No longer does the task itself have to worry about where it's run. It just knows it's executing in the correct environment.
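The same matching applies to the docs build: the `npm` task referenced in the very first snippet would declare a need for nodejs, landing it in the `node:12.14` container. A hypothetical sketch (the script name `npm.sh` is my guess; the field layout mirrors `cargo-build`):

```yaml
- name: npm
  type: command
  location:
    type: path
    at: npm.sh
  execution_needs:
    - name: nodejs
```

Neither task mentions Docker anywhere; the executor list is the single place that decides how needs get satisfied.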
But Wait, Don't I Need To Get The File Out Of The Docker Container?
Well, yes and no. By default Dev-Loop mounts two directories: first, the root of the repository, and second, the system's temporary directory. These are the most common directories people need to access.
This means the files you create (like build output) are automatically synchronized to the host file system. Again, this makes the actual execution environment as invisible as possible, allowing tasks to compose together with no problem.
Okay so is that it?
Yes, basically! These may seem like two small things, but they change how you approach local development. If you take advantage of executors, as a maintainer you don't have to troubleshoot which versions of things people have installed (since they're just in Docker), and as a consumer you don't have to figure out which versions are used. You just run a command and, as long as you have docker/dev-loop installed, you're off to the races.
If you use tasks you have a much easier way of discovering commands. Not to mention, you can easily change the actual underlying implementations of the commands that are being run.
Both of these together can be used to create a whole new, easy-to-use developer experience. If you want to learn more, you can go through the walkthrough, which is meant to help you get familiar with using dev-loop. If you just want to try using it, why not try it out in the dev-loop repo itself?
Regardless of what you do, I hope this has either given you ideas for how you can improve your local dev experience, or provided a tool that actually improves the experience for you.