How Computers Work (Part 3): Software
Josh Carvel
Posted on November 6, 2020
Intro
In Parts 1 and 2, we learned the fundamentals of how computers are able to do what they do, and looked at the hardware they are made of.
In this final part, we'll look more closely at computer instructions: software.
Prerequisites
- Parts 1 & 2, or basic understanding of computing concepts and hardware
What is software?
A computer can be thought of in terms of hardware, the physical components, and software, which is a term for computer programs. Programs are just specific instructions to achieve certain tasks. Writing these instructions is programming.
The computer can only process binary instructions, but they can be written in various other ways and converted to binary. In the early days of digital computing, there was a system involving punching holes in cards that would be fed into the computer. The computer would run the instructions (that did some mathematical calculation, for example), you would get some output, then you would give it another program, and so on.
These days, things are a lot more sophisticated. For one thing, we can make the computers do the work of making computers understand us! We can write instructions using high-level programming languages, which are abstracted away from the low-level details of how computers work. Instructions are written in more human-readable form, and other programs do the conversions that eventually turn our instructions into binary.
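To get a feel for this layering, here's a rough illustration (a sketch, assuming you have Python installed): Python's built-in dis module shows the lower-level bytecode that the interpreter translates a function into. It isn't machine code, but it's one step closer to what the machine actually runs.

```python
import dis

def add(a, b):
    """A human-readable, high-level instruction."""
    return a + b

# Show the lower-level bytecode this gets translated into.
# Real machine code is lower-level still, but the idea is the same:
# other programs do the translating for us.
dis.dis(add)
```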
We can also run many programs at once, handle multiple users, and much more, thanks to operating systems.
Let's look at the different types of modern software.
BIOS
Your computer can't do much without BIOS (Basic Input/Output System).
BIOS is a set of instructions that runs when the computer first powers on. As mentioned in Part 2, these instructions are stored in flash memory on a chip attached to the motherboard. You may be familiar with configuring some BIOS settings, which in many older computers was possible only by pressing a certain key very quickly when the first screen came up.
A key BIOS task is loading the operating system from storage into memory. Because the computer loads its own instructions without any external input, this is known as booting, after the old phrase 'to pull oneself up by one's own bootstraps'.
BIOS is also a go-between for the CPU and hardware, including external devices (hence the name input/output system). Part of this role involves testing that all the hardware components are working when the computer switches on, known as a power-on self-test (POST). This is a task BIOS does as part of the boot sequence: the steps it takes when it starts up.
BIOS is a type of firmware. This means it is software that controls the low-level details of hardware on a device, and is specifically configured for that device. In the past, the firmware of a device was rarely changed; nowadays it can be updated and even physically replaced.
Operating systems
An operating system (OS) is just a computer program, but it has privileged access to the hardware and handles a lot of fundamental operations, like running all the other programs you want to run. It is made up of a number of elements.
The kernel
The core part of any OS is called the kernel. The kernel code is always present in RAM while your computer is on. It handles a number of essential tasks.
Process management
To run a program, the CPU needs to be told to move the program data from the disk (or other storage device) into RAM and start running it. The kernel handles this.
This 'running' of the program is called a process. If you've ever opened up Task Manager on Windows, or the equivalent on other OSs, you will have seen these processes listed, and noticed that while some relate to programs you opened yourself, most are background processes that the OS initiated. You may also notice that you can open up a program more than once, and see multiple processes, one for each instance of the program.
A process needs space in RAM to hold various values that it needs to do its work, which the kernel gives it. However, the process also needs to do things that only the kernel is trusted to do (because if they go wrong, the whole computer could crash), like interact with input/output devices, access files, start new processes, and so on.
So the process asks the kernel to do it with a magic word (not 'please', just an instruction name the kernel understands). In programming, a request for an action is known as a call, in the sense that you are calling on something to be done. In this case, the call is to the operating system and it's known as a system call.
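As a rough illustration (not how any particular kernel implements things), Python's os module exposes thin wrappers around these calls. On a Unix-like system the functions below correspond closely to the kernel's getpid, open, write and close system calls; 'example.txt' is just a hypothetical file name.

```python
import os

pid = os.getpid()   # ask the kernel: which process am I?

# Ask the kernel to create a file and write to it on our behalf -
# things a process isn't trusted to do directly.
fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT)
os.write(fd, b"hello from user mode\n")
os.close(fd)

print(f"Process {pid} asked the kernel to write a file for it.")
```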
The kernel handles sensitive access sort of like a vault. When no special access is needed, the kernel runs a process's code in what's called user mode, where access is locked down. When greater access is needed, the kernel switches to kernel mode (also called 'supervisor mode') to open things up.
Another important aspect of process management is that processes are frequently interrupted. Interrupts may be generated by hardware (e.g. keyboard input), software (e.g. a request for a system call), or the CPU (e.g. indicating an error).
Interrupts may be responded to immediately if serious, otherwise they are scheduled according to priority. The CPU stops the process it is running, saves its status in memory, switches to kernel mode and executes a response (these are pre-set responses which map to specific types of interrupts, just like with system calls).
Most operating systems are also designed to run processes 'simultaneously', i.e. multitasking. The OS does this pretty much how we would do multitasking (but much, much quicker) - kicking off one task, then another, then switching back to the first one when it needs attention, and so on. As noted in Part 2, it can also make use of the multiple cores of the CPU. The code that handles all this is called the scheduler, and it runs for very short periods between the chunks of process code.
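Here's a toy illustration in Python. It uses two threads within one process rather than two separate processes, but the idea is the same: the scheduler decides when each one runs, and the printed output is interleaved in whatever order it chooses.

```python
import threading
import time

def task(name):
    for step in range(3):
        print(f"{name}: step {step}")
        time.sleep(0.1)   # pause, giving the scheduler a chance to switch

# Two tasks that the scheduler interleaves for us.
a = threading.Thread(target=task, args=("task A",))
b = threading.Thread(target=task, args=("task B",))
a.start()
b.start()
a.join()   # wait for both to finish
b.join()
```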
Memory management
The kernel also handles memory management, using the memory management unit (MMU). It makes sure the data used by one program is not overwritten by data used by another. At its core, this just means allocating specific blocks of memory for a specific process, which belong to that process only while it is running.
This also provides a level of memory protection: programs should not be able to read or modify the memory of other programs. This helps limit the damage done if one program starts doing undesirable things with memory, either unintentionally or intentionally (malicious software).
Computers usually use an abstraction called virtual memory, where the computer's memory is represented to processes in a more ideal way than it actually is in reality. One problem this solves is that a process's data may become scattered across memory addresses over time, which is difficult for the process to work with. The process can instead be given a nice, consecutive range of logical addresses (also known as virtual addresses), which the MMU maps to physical memory addresses. This address space is handled in fixed-size chunks known as pages.
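A toy sketch of the idea (real address translation is done in hardware by the MMU, with help from the kernel; the page size and mappings below are made up):

```python
PAGE_SIZE = 4096                     # a common page size, in bytes
page_table = {0: 7, 1: 3, 2: 12}     # virtual page number -> physical frame number

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[page]         # a missing entry would be a page fault in a real system
    return frame * PAGE_SIZE + offset

# Virtual address 0x1234 is offset 0x234 within virtual page 1,
# which this table maps to physical frame 3.
print(hex(translate(0x1234)))        # 0x3234
```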
Another benefit is that the computer can optimise the space available in RAM, by keeping some process data that isn't currently needed in disk storage temporarily, a technique known as paging (if the entire process's memory is moved to storage, it's called swapping). Paging causes some delay due to moving data back and forth between RAM and storage, but the trade-off is seen to be worth it.
File management
Another crucial role of the kernel is handling files, i.e. data that we want to be self-contained. The kernel provides a file system so we can work with data easily. File systems help with moving data into storage, where data is just a long sequence of binary values. To know where one file begins and another ends, we need a file that stores this information: a directory.
On our desktop we have the more abstract concept of 'folders', and think of them as places our files live. But most of the time the folder just refers to a directory listing the file information. For each file this contains details of the address in storage, its length, name, creation and last modification dates, the file owner and read/write access of the file. This is updated whenever there is a change.
A particular file has a particular file format, which indicates to the operating system how to decode it and what sorts of applications it should be opened in. The file extension at the end of the name indicates the format, e.g. a plain text file has extension .txt. The file itself also begins with some metadata (data about data) which helps the computer interpret the data in the file.
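You can peek at this kind of bookkeeping yourself. In Python, os.stat asks the OS for a file's recorded details ('example.txt' is just a stand-in for any file you have; the owner field is mainly meaningful on Unix-like systems):

```python
import os
import time

info = os.stat("example.txt")        # any existing file will do

print("size in bytes:", info.st_size)
print("last modified:", time.ctime(info.st_mtime))
print("owner user id:", info.st_uid)        # meaningful on Unix-like systems
print("mode bits:    ", oct(info.st_mode))  # includes read/write permissions
```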
Operating systems such as Windows and macOS hide the file extension by default in their file explorers, partly because simply changing the extension doesn't change the data itself, so the file may end up interpreted incorrectly (at best, the incorrect extension will just be ignored). But you can change the encoding by saving the file in a different format, where the application supports it.
Modern file systems store files in blocks so that there is extra space for the file to become bigger. If the file exceeds its block space however, the rest of the file may be stored somewhere else. This is known as fragmentation, and the directory has to keep track of where all the fragments are in storage. Reading a fragmented file can cause delay in hard disk drives (not so much in SSDs), which can be mitigated by running a process called defragmentation, where the file fragments are put back together. Most OSs now do this automatically for you.
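A toy model of fragmentation, with a made-up 'directory' and absurdly small blocks, just to show the bookkeeping involved:

```python
BLOCK_SIZE = 4                                   # unrealistically small, for illustration

storage = {5: b"Hell", 6: b"o, w", 9: b"orld"}   # blocks on "disk"; 7 and 8 belong to another file
directory = {"greeting.txt": [5, 6, 9]}          # the directory tracks each fragment, in order

def read_file(name):
    return b"".join(storage[block] for block in directory[name])

print(read_file("greeting.txt"))                 # b'Hello, world' despite the gap
```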
If you 'delete' a file, the record is removed from the directory, but the data in storage isn't overwritten until something else needs that block. This is why you are advised to clear your computer's disk drive when getting rid of an old computer - some 'deleted' data may still be recoverable.
When you have lots of files, you need a hierarchical file system, where files are grouped in directories and subdirectories, to keep things organised. At the top of the hierarchy is the root directory, which references other files, including directories. We can describe the route to those files with a file path. Moving a file's location is as simple as changing the two affected directories - the data itself doesn't move.
On Windows the root directory represents the storage disk drive as a whole, which for historical reasons (drives A & B were floppy disk drives) is known as the C: drive. On other systems the root directory is usually just '/'. OSs that allow multiple users typically give each user their own subdirectory of files they have read/write access to, hiding other users' files from them while allowing an administrator to access all files.
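If you want to play with paths without worrying about the Windows/Unix differences, Python's pathlib handles the separators and the root for you (the file below is hypothetical):

```python
from pathlib import Path

path = Path.home() / "documents" / "notes.txt"   # build a path piece by piece

print(path)          # e.g. C:\Users\josh\documents\notes.txt or /home/josh/documents/notes.txt
print(path.parts)    # the route: root, then each directory, then the file name
print(path.anchor)   # the root: 'C:\' on Windows, '/' on Unix-like systems
```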
A file system is implemented in what's known as a partition, i.e. a segment of the disk. Often a disk is divided into multiple partitions with separate file systems, for example to allow the user to have different OSs in separate partitions on the same computer.
Device management
We can think of software as something that communicates with the hardware. However, not all computer hardware is the same: there are lots of different configurations. A certain hardware component may expect instructions in a different form than a similar one created by someone else for a different computer. So it seems you would have to write different instructions for each version of the component.
This is the problem device drivers aim to solve. They are a middleman, a translator. The software asks for a particular action in a standard language, and the device driver translates the request to account for hardware differences. These days many hardware components, such as keyboards, tend to conform to a generic standard for their type, so the OS can include generic drivers for these components. The OS also allows you to install more specific device drivers where necessary.
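A toy sketch of the driver idea (the device names and commands here are invented): the rest of the software speaks one standard 'language', and each driver translates it for its particular hardware.

```python
class PrinterDriver:
    """The standard interface the OS and applications rely on."""
    def print_page(self, text):
        raise NotImplementedError

class AcmeLaserDriver(PrinterDriver):        # hypothetical specific device
    def print_page(self, text):
        print("[ACME laser commands]", text)

class GenericInkjetDriver(PrinterDriver):    # a generic fallback
    def print_page(self, text):
        print("[generic inkjet commands]", text)

def print_document(driver, text):
    driver.print_page(text)                  # the caller never sees the hardware differences

print_document(AcmeLaserDriver(), "Hello")
print_document(GenericInkjetDriver(), "Hello")
```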
Networking and security
Aside from the tasks mentioned above, the kernel does some other things. It provides the ability for networking, i.e. accessing the resources of other computers, as well as various security features.
User interfaces
The main OS features outside of the kernel are the user interfaces, which you use to interact with the computer.
Command line interface (CLI)
The CLI is the interface at which you enter commands with the keyboard.
There has to be a graphical element to this interface for the user to see the input and output, but it's very basic and doesn't accept mouse input. We can call this part the terminal.
Beneath that is a program that receives certain commands, interprets them, executes the instructions, then sends the output back to the screen. This program is not part of the kernel; it just communicates with it. It's known as the shell (the kernel of a nut or seed is surrounded by its shell), or in this context it could also be called a command line interpreter.
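To demystify it a little, here's a bare-bones command line interpreter sketched in Python. A real shell adds built-in commands, pipes, variables, job control and much more, but the core loop is just: read a command, run it, print the output.

```python
import shlex
import subprocess

while True:
    line = input("myshell> ")                 # read a command from the keyboard
    if not line.strip():
        continue
    if line.strip() in ("exit", "quit"):
        break
    try:
        result = subprocess.run(shlex.split(line), capture_output=True, text=True)
        print(result.stdout, end="")          # send the output back to the screen
        print(result.stderr, end="")
    except FileNotFoundError:
        print(f"command not found: {line}")
```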
Operating systems usually come with a default command line interface, and these vary between systems, for example in the commands they understand. But on modern computers you can download and use different ones.
Text-based user interface (TUI)
A TUI is a less common interface that was seen in older computers for certain purposes, and that you might still see occasionally, for example on the BIOS settings screen. It's just a much more basic graphical interface where you select text options.
Graphical user interface (GUI)
The GUI (pronounced 'gooey') dates back to 1973 when Xerox developed the Alto computer at their research centre Xerox PARC. The computer featured a mouse and the GUI had windows, icons and drop-down menus, and began the trend of the desktop metaphor that is still used today (the desktop environment).
The Alto wasn't a commercial product, but the GUI began to go mainstream after Steve Jobs visited Xerox PARC in 1979 and saw the potential. Apple's Lisa computer in 1983 featured a GUI, though the Macintosh in 1984 was the first commercially successful usage.
The desktop environment on modern computers is actually made up of a number of components, such as the windowing system, file system and graphical theme, which are customisable to a greater or lesser extent depending on the operating system. Beneath them there is still a shell that processes the instructions.
Though GUIs revolutionised how we use computers, and made many tasks simpler, they didn't replace CLIs. Although these days there are GUIs for a great many things, CLIs still give you the flexibility to achieve any computing task, which is not quite true of GUIs.
Utility programs
Utility programs are what they sound like: programs to support the computer infrastructure. This includes things like backup, anti-virus utilities and disk defragmentation. These are considered part of the operating system, though the user may install replacements or additional utilities.
Modern OSs in context
Now we'll look at the different operating systems we have today and how they came about.
Unix
Most operating systems around today derive from an OS called Unix, developed at the famous Bell Labs in the 1970s.
AT&T, which ran Bell Labs, was legally restricted from selling software, and was required to cheaply grant licences for the Unix source code to educational institutions. In 1982, AT&T became legally free to sell Unix. But by then, the University of California, Berkeley had already developed a slightly different version - the Berkeley Software Distribution (BSD) - based on the source code it had been licensed in the 1970s.
By the 1990s, Berkeley had made the BSD source code freely available to study and use. Today we would call this open-source software. It contained no source files from the AT&T version (though a legal battle over this went on for years). OSs like this are known as Unix-like operating systems and are not under AT&T copyright. Though Berkeley stopped working on BSD, others took up the mantle.
One of the derivatives is known as FreeBSD. Like BSD, it uses a permissive software licence, meaning you can modify the software and claim full rights over the derivative product. This is one reason FreeBSD is used in some way by many companies: it is a key part of the PS4's operating system and of the OSs created by Apple.
Apple OSs
Apple became a successful company after releasing the Apple II in 1977, one of the first highly successful, mass-produced personal computers. However, they failed to produce a personal computer and operating system to match their initial success in the following two decades.
Since the early 2000s, Apple computers have used macOS (previously known as Mac OS X and OS X). The core of macOS, an OS called Darwin, used many elements of FreeBSD, and Darwin is the basis for all the other Apple OSs - iOS, iPadOS, watchOS and tvOS.
Linux
In 1991, a Helsinki University student called Linus Torvalds released a Unix-like kernel he had made called Linux (a combination of his name and the 'x' from Unix, pronounced 'lin-ucks'). However, he hadn't written the other software needed to create a fully-fledged OS, such as a shell and a program to compile (translate) the code he had written.
As it happened, there was a project that had begun in the 1980s and had already made that other software, but didn't yet have a working kernel. It was called the GNU Project, with GNU standing for 'GNU's not Unix!' (yes, really!). GNU's aim was to freely distribute a Unix-like operating system and lots of software to go with it. So Linux and GNU developers began co-operating. GNU developers made their software compatible with Linux so a fully-fledged OS could be produced, and Torvalds started releasing Linux versions under a GNU license.
GNU is about free software ('free' as in freedom - the freedom to use, study, modify and share it - rather than price). It's more ethically opinionated than open source (though the definitions of the two movements are disputed). While BSD was one stop short of public domain, more or less just requiring a copyright notice, GNU software is distributed under a 'copyleft' licence, meaning the work can never be redistributed under more restrictive terms.
OSs that derive from the Linux kernel and other software are known as Linux distributions, and there are hundreds of them, for all sorts of different needs and preferences. At one time, distributions using GNU software were referred to as 'GNU/Linux' distributions. Now they're usually just called Linux, a point of controversy to this day.
Linux systems are everywhere. Though not exactly a Linux distribution, Google's Android - the OS that runs on roughly three quarters of the world's smartphones and many of its tablets - is based on a modified version of the Linux kernel, as is the Chrome OS used in its Chromebooks.
You will find it running on mainframe computers, supercomputers, routers, smart TVs, smart home technology, smartwatches, video game consoles and much more. By most estimates, most web servers are run on Linux, and it continues to be popular among programmers and those who enjoy its high configurability.
Windows
"But what about Microsoft Windows?!", I hear you cry. Well, Windows is the big exception.
As it happens, Microsoft had acquired a licence for a version of Unix from AT&T in the late 1970s, which they would sell to other companies, under the name Xenix for legal reasons. Xenix actually became the most popular Unix variant of the time, and Microsoft believed it would be the standard operating system of the future.
In 1980, IBM began to compete in the personal computer market, and went to Microsoft for various things, notably their programming language Microsoft BASIC, and an operating system. Xenix was not suitable for the hardware of the PC, so IBM considered CP/M, a common operating system of the time for business computers, but couldn't reach an agreement with its creator. Microsoft jumped in, buying the rights to an OS initially known as QDOS (Quick and Dirty Operating System), essentially a CP/M clone, for just $50,000. Their version was called MS-DOS (Microsoft Disk Operating System).
Microsoft was paid a fixed fee by IBM for MS-DOS, but retained the rights to sell MS-DOS to other companies, which IBM didn't think was important. It turned out to be very important indeed when lots of companies began reverse-engineering and selling clones of the IBM PC (using a legal loophole where an engineer who has seen the copyrighted source code can write instructions, which will be used by another engineer in a 'clean room' who has never seen the source code). Microsoft sold MS-DOS to many of these companies, while IBM got pushed out of the PC market.
When AT&T became free to sell Unix and enter the PC market in 1982, Microsoft decided that pursuing Xenix was not a profitable strategy. MS-DOS was successful, so they worked on a follow up, Microsoft Windows, which is used on at least 3/4 of all personal computers today.
Applications
User-facing programs are known as applications or apps. An app communicates with what we might call the platform or environment it will be run on, using the application programming interface (API) that the environment makes available.
The API is a documented set of instructions, plus the code that understands them, so the programmer can use them without knowing the details of the hardware. For example, OS system calls are made available via an API (which varies depending on the OS). Other environments, such as mobile devices and web browsers, have their own APIs that allow software to be written for them.
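For instance (a sketch using Python's standard library), the calls below are documented interfaces: the programmer reads the documentation and trusts the implementation to deal with whichever OS and hardware is underneath.

```python
import os
import platform
import time

print("OS family:  ", os.name)                          # 'posix' or 'nt'
print("OS details: ", platform.system(), platform.release())
print("CPU cores:  ", os.cpu_count())
print("Time now:   ", time.time())                      # seconds since 1970
```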
An app that is run directly by a particular device's OS is called a native application. More recently, web applications have become more popular. In a web app, the code is stored on a server and the device's web browser downloads the code via the internet, so the user doesn't have to download any code themselves. There are trade-offs to both approaches.
Different platforms tend to require applications to be written in particular programming languages, for reasons of technical suitability as well as business. For example, native mobile apps are typically written in Java or Kotlin for Android, but Swift for iOS. This is one advantage that web applications have - they can be written in one language, JavaScript, and run on any device so long as it has a web browser and an internet connection.
Conclusion
Let's briefly recap:
- Software is just instructions the computer understands, but today it can be written in a variety of high-level programming languages for a variety of different platforms.
- Applications are not the only type of software, and modern applications couldn't be run without the OS kernel managing things like processes and memory.
- There are various financial, political and technical reasons for the different forms of software we have today.
That concludes this three-part series on computers. I hope it has provided a solid introduction and a good jumping-off point for further exploration.
Thanks for reading!
Sources
I cross-reference my sources as much as possible. If you think some information in this article is incorrect, please leave a polite comment or message me with supporting evidence.
* = particularly recommended for further study
- * Crash Course Computer Science Episodes 18 and 20
- * Operating System Basics - Brian Will
- * Understanding the Shell - dwmkerr.com
- How BIOS works - howstuffworks
- Operating system (and associated pages) - Wikipedia
- Virtual Memory in Operating System - GeeksforGeeks
- Hard Disk Partitions Explained - How-To Geek
- Shell (computing) - Wikipedia
- What Is Unix, and Why Does It Matter? - How-To Geek
- Unix vs Linux - Gary Explains
- Quora question on origin of Windows
- Usage share of operating systems - Wikipedia
- Triumph of the Nerds - documentary (1996)
- Revolution OS - documentary (2001)