Thoughts and Notes on Shells

Barnaby (southclaws)

Posted on December 30, 2019

Shells are strange. I am throwing around the idea of building a shell that solves some of the issues I have with most current shells, so here are my thoughts and notes on the current problems and some proposed solutions.

Issues

To get the ball rolling and present my cool ideas in a positive light, I’m going to list a couple of gripes I have with how most shells currently work. “Most shells” essentially refers to Bash, Zsh and Fish. Fish has been my favourite for the last year, thanks to its speedier startup time compared to Zsh and its slightly nicer non-Bash syntax, but it’s still subject to the following criticisms.

Shell Languages

There are two ways of telling a shell what to do:

  • writing into it with your hands
  • running scripts, which at the most basic level just automate the hands thing

This is simple, because a short command can evolve into a script by just putting it into a file and running the file as if it were a program.

But I feel like it causes problems.

Partly because the syntax of the language you type into the terminal is usually quite minimal: you’re doing stuff like ls | grep '\.conf$' | sort -r, a neat little one-liner that does something. There are usually no if-statements, no trees of syntax, no switches across multiple conditions, etc. Very functional.

Each component is a command, and a command is just a binary that lives somewhere on the filesystem:

  • ls
  • grep
  • sort

This is great for when you’re writing one-liners into the terminal, but in my opinion it’s absolutely awful for writing anything with conditional logic.

That’s probably because of the aforementioned structure: the components of an invocation are just commands, as in binaries that live somewhere on the filesystem.

This leads to a language with almost no real keywords, because everything that looks like a keyword is actually just a binary that somehow manipulates the input and passes it to the next program.

Have you ever tried to write an if-statement in Bash? Or a switch? Or to abstract some functionality into a function? I personally find this really, really awkward and generally avoid it as much as possible.
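
For reference, here’s the kind of thing I mean: a trivial branch and a switch in Bash, complete with double brackets, a reversed fi terminator and ;; case separators.

#!/usr/bin/env bash
# Branching on the type of the first argument.
if [[ -f "$1" ]]; then
  echo "regular file"
elif [[ -d "$1" ]]; then
  echo "directory"
fi

# The switch equivalent: a case block terminated by esac.
case "$1" in
  *.conf) echo "config file" ;;
  *)      echo "something else" ;;
esac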

So, for these reasons, I think it could be more productive if a shell had two languages: one designed and optimised for REPL-style usage (writing commands directly into your terminal) and another for writing scripts to automate things. The former would be geared towards the familiar piping and composition of small components. The latter would be more like a familiar programming language, with “normal”-looking C-ish or Python-ish syntax where stuff does what you expect, rather than strange, hacky-feeling syntax that’s trying to solve two different interface problems at once.

Text

This is one that PowerShell and Nu Shell resolve by passing objects between commands. I really like this approach, because once you’re working with data rather than text you can apply filters, selections, clauses and so on far more accurately and deterministically.

With text, you are forced to resort to flaky pattern matching that relies heavily on the syntactic structure of the output. And that structure often has leniencies: the same underlying data can be rendered in several textually different forms with no difference in meaning.

For example, here’s some JSON:

{
  "key": [
    "value1",
    "value2"
  ]
}

Here’s the exact same JSON:

{ "key": [
    "value1",
    "value2" ] }

And here’s the exact same JSON again:

{ "key": ["value1", "value2"] }

If you ran grep value2 on the first result, you’d get "value2", which you could mess with a bit more to get at the raw data. If you ran that same grep on the second result, you’d get "value2" ] }, and on the third result you’d just get the entire thing matched.

And yes, I know jq exists for this job, but it’s just an example of how using a generic text-based tool on structured data isn’t such a great idea. Sometimes that data is printed out as a table, or YAML, or XML, or some other esoteric format that may not have a neat little jq-style query utility, and in these cases the simple tools like grep and sed quickly turn into a mess.
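
For contrast, a structured query doesn’t care which of those three forms it receives. Assuming the JSON above is saved as file.json, the same jq filter gives the same answer for all of them:

jq '.key[1]' file.json
"value2"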

PowerShell and Nu Shell both solve this by abandoning the idea of text in favour of structured data being passed between commands. PowerShell is at home on Windows and has a long history of commands, plugins and so on.

Nu Shell, however, has to re-implement common commands so it can provide its RPC layer between them. You can of course fall back to native versions with the ^ character, such as ^ls instead of ls, which does the normal thing of finding the first executable file named ls in the system PATH and executing it.


So there are pros and cons to these approaches. It usually comes down to a tradeoff between the simplicity and backwards compatibility of the implementation on one hand, and the ability to provide additional functionality on the other.

A Proposed Solution

Of course I’m going to do what any programmer does when they find problems with software: write their own!

However, I probably won’t take this past the “this is just an experiment” stage. I want to play with some ideas, test it for myself and gather feedback. I really like Nu Shell (which is why I’ve also started contributing), so I might even propose some of my ideas for that shell, provided they solve the aforementioned problems and said problems are shared by others.

Shell Language

As mentioned above, shells have this strange dual purpose. They offer a realtime, manual, typing-commands-in interface to humans, but they also serve a secondary purpose as a sort of programming language for automating things. My core issue isn’t the overloaded purpose itself; it’s that these two utilities are intertwined in such a way that, I feel, both parts are held back from reaching their potential for productivity.

At this point you can probably guess where I’m going with this…

Why Not Two Languages?

I’m serious!

Why not a prompt language that’s functional and all about piping data through a set of filters, and a scripting language that’s more imperative and closer to what you’d usually write software with?

Since I’m already interested in using Rust for this project, I have decided that the scripting language may as well be JavaScript and TypeScript, and the runtime will be, you guessed it, Deno, which is itself built in Rust.

The point?

You may be thinking that this seems pointless, since you can just install Node or Python or Ruby or Perl or whatever language you want and start scripting with that. However, that means you must first install an additional dependency. Yes, I know that’s not the end of the world, but it also means you get all the cruft that comes with it: npm for JS, additional tooling for TS, whatever excuse for an ecosystem Python has nowadays, Gem for Ruby, PPM for Perl, etc.

Because of how Deno handles dependencies, there is no package.json, so in theory it would be as simple as just running ./script.ts, and the hypothetical shell that doesn’t exist yet would fire up a Deno VM to run that script. And of course it will support shebangs and the usual stuff, so you can run other scripts if you really want to!
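
As a rough sketch of what that could look like, here’s a complete, self-contained script; the std colors module is just a stand-in dependency, and env -S is what lets the shebang split “deno run” into two words:

#!/usr/bin/env -S deno run
// The dependency is a plain URL import, fetched and cached by Deno
// on first run: no package.json, no node_modules.
import { bold } from "https://deno.land/std/fmt/colors.ts";

console.log(bold("Hello from a single-file script!"));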

Another reason for this is ergonomics. One of the nice things about Bash and its friends is that calling other commands is really simple: you literally just write the command and the language calls it. After all, the language is pretty much mostly about launching processes anyway…

Doing this in Python is awkward as hell: you have to import the subprocess package and wrap the command and its arguments up in a call. subprocess.run(["ls", "-l"]) is quite a bit of text to write compared to what you’d write in Bash: ls -l. So I believe this experiment should address that too.

I’m not entirely sure how unresolved function calls are handled by Deno, but it would be neat if I could capture one and use the attempted function name to search for an executable on the system PATH. So by running ls("-l");, the shell would search the PATH for ls/ls.exe and pass -l to it.

Failing that, a nice short builtin to invoke a command and return the response as a string or stream would do the job: let r = call("ls -l");
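
JavaScript throws a ReferenceError on an undeclared identifier, so truly bare ls("-l") calls would need the shell to rewrite the script before it reaches the JS engine. A Proxy gets close, though. Here’s a minimal sketch, where cmd is a hypothetical namespace object the shell would inject, and the subprocess call assumes Deno’s Deno.Command API:

// Accessing any property on `cmd` yields a function that runs the
// executable of that name (resolved via the system PATH) and
// captures its stdout.
const cmd = new Proxy({}, {
  get: (_target, name) =>
    async (...args) => {
      const { stdout } = await new Deno.Command(String(name), {
        args: args.map(String),
      }).output();
      return new TextDecoder().decode(stdout);
    },
});

// Roughly equivalent to typing `ls -l` into a traditional shell:
console.log(await cmd.ls("-l"));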

Piping

So the other thing I want to address with this project is similar to what Nu Shell does; however, I have a slightly different idea of how to do it. Nu implements structured pipes using a simple JSON-RPC format, which means that if you want to write a command that fits into a Nu pipeline, it must conform to that RPC structure.

As a developer who does a lot of web work, I work with JSON a lot, which also means I work with jq a lot. So I had the crazy idea: why not just build jq directly into the shell’s piping system?

So a rough example might look like:

> ls
[
  "CODE_OF_CONDUCT.md",
  "Cargo.lock",
  "Cargo.toml",
  "LICENSE",
  "Makefile.toml",
  "README.md"
]
> ls | / [.[] | select(contains("C"))] /
[
  "CODE_OF_CONDUCT.md",
  "Cargo.lock",
  "Cargo.toml",
  "LICENSE"
]

However, this may be a little overkill… I’m undecided!

Some notes

  • I’ve opted to use | as the pipe symbol, with the jq query delimited by / characters. I’m undecided on this; I’ve also considered the idea of two different pipe symbols: one for piping to other programs and one for piping into filters.
  • JSON isn’t the nicest thing to read, but commands can know when they are being piped, so maybe commands only output JSON while piped. The downside is that the user would need to inspect a command’s output in order to know how to write a filter for their task, so maybe a pipe into nowhere yields the JSON: ls | -> [ JSON...]. I want this to be a shell thing, not a per-command thing, reducing the need to write this logic into every command.

Why JS and JQ and JSON?

The reason is that I don’t want to write yet another language. I really don’t think the world needs another language to learn, hence the desire to stitch together existing tools that are widely known.

  • JS is easy to pick up, forgiving due to its dynamic nature, and its syntax is familiar to anyone who’s written imperative C-like languages; you can also use TS for safety. I know it’s got a bunch of terrible things, but overall it feels like a good fit.
  • jq is a somewhat well-known way to query JSON. It can get complicated, but the basics are easy to pick up. Overall it’s very powerful and is probably Turing complete…
  • JSON is the de facto data format of the web world: not as powerful as XML, but also not as complex!
  • It’s web-focused, meaning web developers are the target audience. If you write C or Haskell all day long, you’re probably not going to benefit from the ideas here!

Designing a new language is something Nu is doing, and there are very clever people working on it (Yehuda Katz!), so I’m confident the scripting language they build will be great. But I am still adamant that “the world doesn’t need another language”!

More ideas

Here are some rapid fire ideas:

Simple http command:

> http get httpbin.org/json | /.slideshow.slides[].title/ | sort
"Overview"
"Wake up to WonderWidgets!"

A set of shell-specific functions (a stdlib?) exposed to the Deno runtime.

shell.env.path.push("~/go/bin");
shell.prompt = ">> "
shell.alias("cat", "concatenate")
shell.load("another-script.js")

Fish-like command embedding:

http post some.website (cat somefile.txt)

A dangling pipe outputs the raw JSON, so you can inspect the underlying data structure. In this example I’ve used the Deno FileInfo structure: https://deno.land/typedoc/interfaces/deno.fileinfo.html

> ls |
[
 {
  "accessed": 1571554543,
  "created": 1571551049,
  "len": 1092,
  "mode": 420,
  "modified": 1571554543,
  "name": "main.go"
 }, {
  "accessed": 1571555492,
  "created": 1571553091,
  "len": 2382,
  "mode": 420,
  "modified": 1571555492,
  "name": "README.md"
 }
]

Piping into a shell script (JS/TS in Deno) provides a nice simple JS object to work with:

> cat script.js
const files = shell.input                 // structured data piped into the script
  .filter((v) => v.len > 1000)            // keep files larger than 1000 bytes
  .filter((v) => v.name.endsWith('.ts'))  // keep only TypeScript files
  .map((v) => v.name)
console.log(files)
const list = files.join("\n")
await Deno.writeTextFile('ts-files.txt', list) // writeTextFile takes a string
> ls | script.js
["main.ts", "routes.ts", "data.ts"]
> cat ts-files.txt
main.ts
routes.ts
data.ts

Shell scripts should also support streaming via event listeners:

> cat script.js
addEventListener('shell.input', (chunk) => {
  // do something with the data chunk
})
shell.wait() // block until the input stream closes
> tail -f somelog | script.js

Maybe anything path-like should be executed or pulled into the shell context? We already do this with commands, so why not the internet:

> ./shell-script.sh
You ran a shell script! You dinosaur...
> ./js-script.js
You ran a JS script via Deno, nice and modern!
> some-global-script
You ran a script that was on the PATH
> file:///usr/bin/some-command
You ran a command with an explicit scheme
> https://southcla.ws/script.js
You ran a remote script!

People read data all the time, especially JSON if you’re a web developer. Why not implicitly turn path invocations into cat invocations?

> ./file.txt
Some text!
> ./file.json
{ "name": "My data",
"format": "Pretty printed automatically!" }
> file:///Users/southclaws/file.txt
Some text!
> https://southcla.ws/api/posts
[{"name": "Thoughts and Notes on Shells",
"date": "2020-01-01T12:34:16",
"path": "/blog/thoughts-notes-shells"}]
> https://southcla.ws/api/posts / .[].path / open https://southcla.ws/ + $1
(opens your browser to each post? I dunno...)

Conclusion

That’s all I have for now. I’m quite busy and rarely work on "toy" projects, but maybe this will turn into something one day!

Thanks for reading.
