Tai Kedzierski
Posted on July 17, 2020
This is a small, naive project I started because I remember one day wondering, "how would I visually diff binary files for small subtle changes?" I cannot remember what it was I was trying to inspect at the time, but I know it's not been the only time I've wanted to do this. So here we are.
(Note that this project was mainly for fun and for exercising some Python programming.)
When asking the Internets about diffing binaries, you find some lackluster answers on StackOverflow that suggest you "just use hexdump and then diff."
The problem is that if you use hexdump, you can be guaranteed that after the first change, unless the byte count remained the same between the files, the entire remainder of the stream (swathes of null bytes notwithstanding) will differ between the two files, owing to the offset column and the displacement of the bytes.
For example, take two files each containing a list of animals, where only one line differs. Running diff on their hexdump outputs shows:
< 00000000 41 20 66 65 72 72 65 74 0a 41 20 64 6f 67 0a 41 |A ferret.A dog.A|
< 00000010 20 67 6f 61 74 0a 41 20 73 77 61 6e 0a 41 20 63 | goat.A swan.A c|
< 00000020 61 74 0a |at.|
< 00000023
---
> 00000000 41 20 66 65 72 72 65 74 0a 41 20 64 6f 67 0a 47 |A ferret.A dog.G|
> 00000010 6f 61 74 73 0a 41 20 73 77 61 6e 0a 41 20 63 61 |oats.A swan.A ca|
> 00000020 74 0a |t.|
> 00000022
Because of the displacement of the bytes, diff decides to tell me that the files are fundamentally different at every point.
The tool I knocked together (in one afternoon, which is to say how naive it still is) gets around that by artificially re-introducing the concept of "chunks".
Where diff is most useful is in showing which parts of source code have changed, and then recognising the rest as "unchanged" - but to do so it has to work on its smallest data sequence: the line. There are no "lines" in binary files, so instead we need to pre-process the binary data and split it into chunks. In this way, even if the byte count at the point of change is different, diff will only show which chunk was affected. Once the chunk ends, a new chunk begins in each file, and since these are now identical, diff doesn't continue marking the two as different.
I used Python to implement a custom file reader (which could certainly be optimised for disk-access efficiency, I know). The script uses a list of specific bytes it expects to split chunks on (currently CR, LF and NULL). When it encounters a grouped sequence of such bytes (for example, the fields of NULL bytes that occur in compiled programs), it keeps them all together as a single "terminator". This keeps the chunking sane.
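To illustrate the idea, here is a minimal sketch of that chunking logic - not the actual bdiff code; the delimiter set matches the bytes named above, but the structure and names are my own:

# Minimal sketch of the chunking idea - illustrative only, not bdiff itself.
DELIMITERS = {0x00, 0x0A, 0x0D}  # NULL, LF, CR

def chunks(data: bytes):
    """Yield chunks of data, where each chunk ends with a run of one
    or more delimiter bytes kept together as a single terminator."""
    chunk = bytearray()
    in_terminator = False
    for b in data:
        if b in DELIMITERS:
            # Keep consecutive delimiter bytes together in the current chunk.
            in_terminator = True
            chunk.append(b)
        else:
            if in_terminator:
                # The delimiter run has ended: the previous chunk is complete.
                yield bytes(chunk)
                chunk = bytearray()
                in_terminator = False
            chunk.append(b)
    if chunk:
        yield bytes(chunk)

# Hex-encoding each chunk on its own line gives diff a stable "line" unit:
for c in chunks(b"A ferret\nA dog\nA goat\n"):
    print(c.hex(" "))

With that representation, a one-byte change only perturbs the chunk it lives in; at the next terminator run the two streams line up again.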
In case it wasn't quite clear:
Example
Let's say we have these two hello-world programs:
#include <stdio.h>
int main() {
printf("Hello world");
}
#include <stdio.h>
int main() {
printf("Adieu monde cruel!");
}
And we compile them each to hello.exe and adieu.exe.
The naive approach would be to do this:
diff <(hd hello.exe) <(hd adieu.exe) -u --color
After the first change in bytes, a vast swathe of the stream is highlighted as changed, even though only a few bytes actually changed - the highlighting is only interrupted where a vast field of null bytes happens to match in both files.
Conversely, you can run
bdiff.sh hello.exe adieu.exe -u --color
As well as no longer needing the process-substitution incantation, we also see in the output that only the relevant chunks have changed. Whereas in the naive version we are none the wiser as to which byte sequences differed (because "everything" differed), here we can see the discrete, precise locations where the data differs.
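For the curious, the same effect can be had from within Python itself. This is a hypothetical sketch using the standard difflib module together with the chunks() generator sketched earlier - again, not how bdiff itself is wired up:

import difflib

def hex_lines(path):
    # Re-uses the illustrative chunks() generator from the sketch above.
    with open(path, "rb") as f:
        return [c.hex(" ") for c in chunks(f.read())]

a = hex_lines("hello.exe")
b = hex_lines("adieu.exe")
for line in difflib.unified_diff(a, b, "hello.exe", "adieu.exe", lineterm=""):
    print(line)

Because each chunk is its own "line", the unified diff only emits the chunks that actually differ, plus a little context.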
I doubt this tool will ever be of actual production usefulness, but who knows. Other people were wondering too, so there must be some use-case.