Will BL
Posted on January 5, 2023
OpenAI's ChatGPT, like many similar Large Language Models, has been used to perform many programming-related tasks. Writing obfuscation mappings seems to be a good fit for its abilities, as it involves reading and summarising text (code). I tested its capabilities.
Background
If you already understand what obfuscation mappings are, feel free to skip this part.
Minecraft: Java Edition is a game written in Java (quelle surprise). Java source code is compiled to JVM bytecode for distribution, a little like the relationship between C and machine code.
To modify the game, we want access to the Java source. However, there are two things in the way:
Firstly, we have to decompile the JAR, turning the bytecode back into Java. This is a very difficult job: a lot of work has been put into the tools to make this happen.
However, even then, we do not have readable code: Mojang, the developers of Minecraft, put their code through ProGuard, which shrinks the bytecode by doing things such as renaming all methods and classes to things like a, b, c, az, and so on. So we need a mapping from these obfuscated names to readable names!
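To make that concrete, here's an invented example (not real Minecraft code; all names are made up) of what a decompiler hands you versus what the same code looks like once a mapping is applied:

```java
// What the decompiler gives you: ProGuard has replaced every name.
public class a {
    private int b;

    public void a(int c) {
        this.b = this.b + c;
    }
}

// The same code after applying a mapping such as
//   a        -> PlayerScore
//   a.b      -> points
//   a.a(int) -> add
public class PlayerScore {
    private int points;

    public void add(int amount) {
        this.points = this.points + amount;
    }
}
```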
For a while, the main mappings available were made by the MCP project, and had a restrictive license. These were used first by MCP itself, and later by Minecraft Forge, a modloader and API.
When Fabric, another modloader/API project, came onto the scene in 2018, it had to make its own mappings, which it called Yarn. It made Yarn freely available (CC-0), and effected a 'cleanroom policy' - no MCP names were to be mentioned in any Fabric community spaces, in case a Yarn contributor was 'tainted' by the strictly-licensed MCP mappings.
And so it came to pass that the mappings were fractured; for there were two names for each class. "Lo," said Mojang, "we shall bestow unto them an Official Mappings set, exported from ProGuard itself".
Forge quickly switched to the Official Mappings. Fabric decided that a non-public domain mappings set wouldn't do; and besides, these names were all a bit strange and unlike what they were used to anyway, so they would prefer to maintain and use their own mappings, thank you very much.
These days, there are four main mappings sets:
- Official Mappings (aka Mojmap), exactly what Mojang sees, but without parameter names.
- Parchment, a project to create mappings for parameter names (plus Javadoc) to be layered on top of the Official Mappings.
- Yarn, still going, with its cleanroom meant to protect against a mappings set that no longer exists.
- Quilt Mappings. Quilt is a fork of Fabric made by many former Fabric developers. Quilt Mappings are like Yarn, but without a cleanroom, and so may take inspiration from the Official Mappings. Its main purpose is to be like Yarn, but Mojmappier, or to be like Mojmap, but Yarnier - depending on who you ask.
First Tests
First, I set up a Quilt Mappings workspace. Why Quilt Mappings and not Yarn? Two reasons:
- QM is missing mappings for more things, so it was easier to find obfuscated names for testing.
- The QM workspace allows using the Quiltflower decompiler, which can produce much nicer output than many other decompilers.
It took me a few tries to find a prompt that worked well. I settled on this:
I have this Java method in the class ClassName, but I don't know what to call it:
<code snippet>
Please suggest a name for this method and each of its parameters.
NativeImage#copyRect
I first tried this method in NativeImage:
public void m_vmneozhc(NativeImage arg, int i, int j, int k, int l, int m, int n, boolean bl, boolean bl2) {
    for(int o = 0; o < n; ++o) {
        for(int p = 0; p < m; ++p) {
            int q = bl ? m - 1 - p : p;
            int r = bl2 ? n - 1 - o : o;
            int s = this.getPixelColor(i + p, j + o);
            arg.setPixelColor(k + q, l + r, s);
        }
    }
}
It suggests I call the method copyImage(NativeImage sourceImage, int sourceX, int sourceY, int targetX, int targetY, int width, int height, boolean flipHorizontally, boolean flipVertically).
This is pretty good! Mojang calls the method copyRect, quite similar to ChatGPT's naming. It also names the parameters sensibly.
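For reference, here's the same body with ChatGPT's suggestions dropped in - only the method and parameter names change, the decompiled locals are left alone:

```java
public void copyImage(NativeImage sourceImage, int sourceX, int sourceY, int targetX, int targetY,
        int width, int height, boolean flipHorizontally, boolean flipVertically) {
    for(int o = 0; o < height; ++o) {
        for(int p = 0; p < width; ++p) {
            int q = flipHorizontally ? width - 1 - p : p;
            int r = flipVertically ? height - 1 - o : o;
            int s = this.getPixelColor(sourceX + p, sourceY + o);
            sourceImage.setPixelColor(targetX + q, targetY + r, s);
        }
    }
}
```

(One nitpick: the NativeImage parameter is the image being written to, so sourceImage is arguably backwards.)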
LocalPlayer#sendIsSprintingIfNeeded
Next, I tried this method, in the class which QM calls ClientPlayerEntity (but Mojang calls LocalPlayer):
private void m_nfwipcth() {
    boolean bl = this.isSprinting();
    if (bl != this.lastSprinting) {
        ClientCommandC2SPacket.Mode lv = bl ? ClientCommandC2SPacket.Mode.START_SPRINTING : ClientCommandC2SPacket.Mode.STOP_SPRINTING;
        this.networkHandler.sendPacket(new ClientCommandC2SPacket(this, lv));
        this.lastSprinting = bl;
    }
}
It suggests updateSprintingStatus().
This is not wrong, but it's a bit generic as a name - it doesn't capture that it involves telling the server over the network about the player's sprinting status.
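As a sketch of what a more descriptive rename might look like, here's the same body with Mojang's method name and some local names of my own choosing (the rest keeps the QM names from the snippet above):

```java
private void sendIsSprintingIfNeeded() {
    boolean sprinting = this.isSprinting();
    if (sprinting != this.lastSprinting) {
        // Tell the server that the player's sprinting state has changed.
        ClientCommandC2SPacket.Mode mode = sprinting
                ? ClientCommandC2SPacket.Mode.START_SPRINTING
                : ClientCommandC2SPacket.Mode.STOP_SPRINTING;
        this.networkHandler.sendPacket(new ClientCommandC2SPacket(this, mode));
        this.lastSprinting = sprinting;
    }
}
```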
MultiPlayerGameMode#performUseItemOn
Let's try a longer method. I'm not going to show you the whole method this time (Minecraft is closed-source, after all), but I'll show you a snippet of this method in the class QM calls ClientPlayerInteractionManager, and Mojang calls MultiPlayerGameMode:
private ActionResult m_wiqrsdhj(ClientPlayerEntity arg, Hand arg2, BlockHitResult arg3) {
    BlockPos lv = arg3.getBlockPos();
    ItemStack lv2 = arg.getStackInHand(arg2);
    if (this.gameMode == GameMode.SPECTATOR) {
        return ActionResult.SUCCESS;
    } else {
        // ...
ChatGPT's suggestion is alright. The method is run when right-clicking on a block - but only when holding an item - and ChatGPT's name doesn't really capture this. Still, it's less ugly than Mojang's performUseItemOn.
Can it do more?
So ChatGPT can name methods, not perfectly, but alright. But can it map whole classes?
I tried it out with this prompt:
I have this Java class, but all the names are garbled:
<code snippet>
For context, <context of how the class is used>.
For each garbled name, suggest a proper name.
BlockStatePredictionHandler
I found a class which wasn't mapped in QM - Mojang calls it BlockStatePredictionHandler - and stuck it in the prompt. I added some context, too:
For context, the class ClientWorld has a private final field of this class. The methods of this class are called from the classes ClientWorld and ClientPlayerInteractionManager.
| Placeholder | Mojmap | ChatGPT |
| --- | --- | --- |
| C_czisrdmd | BlockStatePredictionHandler | BlockStateTracker |
| f_irkdwisi | serverVerifiedStates | blockStateMap |
| f_hcvpxtyt | currentSequenceNr | tickCounter |
| f_bmpbngyn | isPredicting | isTracking |
| m_plwiurco | retainKnownServerState | addBlockState |
| m_mmtdqsga | updateKnownServerState | updateBlockState |
| m_vfahppjl | endPredictionsUpTo | rollbackBlockStates |
| m_rhbdpkkw | startPredicting | startTracking |
| close | N/A | stopTracking |
| m_gqtwgmpw | currentSequence | getTickCounter |
| m_nssimvch | isPredicting | isTracking |
| C_ivvmpfib | ServerVerifiedState | TrackedBlockState |
| f_bfinrexc | playerPos | playerPosition |
| f_ovpelcnt | sequence | trackedTick |
| state | blockState | blockState |
| m_twfpffxt | setSequence | updateTrackedTick |
| m_fxchmlnj | setBlockState | updateBlockState |
While ChatGPT's names are coherent, and certainly consistent (see the correspondence between fields and their getters/setters), the result has a few flaws. See, for example, how it tried to rename close, a non-obfuscated name inherited from Java's AutoCloseable.
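(Why does close keep its name? It overrides a method declared by the JDK's AutoCloseable interface, which sits outside the obfuscated jar, so ProGuard can't rename it without breaking the contract. A sketch, with the real class body omitted:)

```java
// Illustrative sketch only; the real class body is omitted.
public class C_czisrdmd implements AutoCloseable {
    @Override
    public void close() {
        // Overrides java.lang.AutoCloseable#close, so the name cannot be
        // obfuscated - and renaming it to stopTracking would not compile.
        // ... body omitted ...
    }
}
```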
I didn't know what this class was actually for, so I asked ChatGPT - perhaps this would also give us some insight into why it named the symbols in the way it did?
The last part of ChatGPT's explanation, at least, is completely incorrect - block changes made by a player in creative mode are definitely not rolled back when the player leaves the game. This cast doubt on the rest of the explanation, so I searched through a few Discord servers to see if anyone else could explain the class.
Here's a (trimmed-down) explanation by SizableShrimp.
Basically, clients keep track of a sequence counter in their BlockStatePredictionHandler. When client wants to perform any block modifying actions from MultiPlayerGameMode, it will gen a new sequence number by increasing the counter.
Now, while in prediction mode, any calls to ClientLevel#setBlock will not be actually set in-world! They will be stored in the prediction handler.
Now, after client finishes its clientside block-modifying action, this sequence number is sent to the server alongside the action the client is requesting to do. Server will then do the block-modifying action that the client requested (if possible).
Once the server is "done", it will send a packet back to the client that initiated the action to process all block sequences up to that sequence id.
They use this predictive action system thingy so that there aren't visual glitches on the client when what the client thinks should happen is different from what the server actually performs.
The client will now delay setting blocks during predictive actions, wait for a server response on what actually happened, and then update it on the client.
(Full explanation is available in this message on the Forge Discord)
Interesting - so ChatGPT correctly told us that the class tracks block changes and can roll them back, but it misinterprets the sequence number as a tick counter, and doesn't understand that the class is used to hold off on committing to clientside predictions until verified by the server.
It doesn't understand anything, in fact. This is one of the flaws of asking ChatGPT to explain something to you - it has no consistent internal model of the world. ChatGPT, like other Large Language Models, is able only to read some text and write more text that sounds like it should come after - as a result, it is very prone to giving completely incorrect answers about things while sounding very confident and authoritative.
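For reference, here is a heavily simplified sketch of the mechanism SizableShrimp describes. All the names are invented (apart from endPredictionsUpTo, borrowed from the table above); this is not Minecraft's actual implementation, just the sequence-number flow made concrete:

```java
import java.util.Map;
import java.util.TreeMap;

// A toy model of sequence-numbered client-side prediction with rollback.
class BlockPredictionSketch {
    // One pending rollback per sequence number; the real handler tracks
    // server-verified state per block position.
    private final Map<Integer, Runnable> pendingRollbacks = new TreeMap<>();
    private int currentSequence;
    private boolean predicting;

    // The client starts a block-modifying action: bump the counter and send
    // the new sequence number to the server alongside the requested action.
    int startPrediction() {
        predicting = true;
        return ++currentSequence;
    }

    // While predicting, a "set block" is not committed outright; instead we
    // remember how to restore the last state the server verified.
    void recordPrediction(Runnable restoreServerVerifiedState) {
        if (predicting) {
            pendingRollbacks.put(currentSequence, restoreServerVerifiedState);
        }
    }

    // The server acknowledges everything up to a sequence id: those
    // predictions are settled, so their stored rollbacks can be applied
    // (if the server disagreed) or discarded.
    void endPredictionsUpTo(int ackedSequence) {
        pendingRollbacks.keySet().removeIf(seq -> seq <= ackedSequence);
        if (pendingRollbacks.isEmpty()) {
            predicting = false;
        }
    }
}
```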
MatrixUtil
Let's try one more class. This one has some methods for messing with matrices. It's unmapped in QM; Mojang calls it MatrixUtil. It's quite long (~200 lines) compared to our other tests.
I gave ChatGPT the class, with the context:
For context, m_xockxffa is called from the class ItemRenderer, and m_jeovkadj and m_ibrymiig are called from the class AffineTransformation.
| Placeholder | Mojmap | ChatGPT |
| --- | --- | --- |
| C_bmzqkdhp | MatrixUtil | MatrixMathHelper |
| f_arxnkhow | G | RADIUS_FACTOR |
| f_ddnzxrmh | CS | COSINE_OF_45_DEGREES |
| f_kqccjreh | SS | SINE_OF_45_DEGREES |
| m_xockxffa | mulComponentWise | scaleMatrix |
| m_kcqrzkbi | approxGivensQuat | computeUnitVectors |
| m_kniviwwf | qrGivensQuat | computeUnitVectorsFromYawPitch |
| m_skxdryqk | stepJacobi | computeQuaternionFromMatrix |
| m_kkfnnzhk | N/A | computeQuaternionFromYawPitch |
| m_kjbxjxks | N/A | computeUnitVectorsFromQuaternion |
| m_pjmgwfwq | N/A | computeMatrixFromQuaternion |
| m_geumxflx | N/A | computeMatrixFromYawPitch |
| m_ibrymiig | svdDecompose | N/A |
| m_jeovkadj | toAffine | N/A |
Whoever named the constants must've been a mathematician.
ChatGPT has a good start: MatrixMathHelper is a solid name; SINE_OF_45_DEGREES and COSINE_OF_45_DEGREES are descriptive (and impressive, seeing as the value is computed in radians). Unfortunately, it goes downhill from there.
The rest of the names look descriptive - in many cases, they look at first glance to be better names than Mojang's. But once again, it's a case of ChatGPT being very good at sounding confident whilst being totally incorrect.
The so-called scaleMatrix method, which takes a 4x4 matrix and a float and returns a matrix with each element multiplied by the float, is named in a way that confuses matrix-scalar multiplication with the linear transformation of scaling.
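To spell the distinction out with plain arrays (invented helper names, nothing to do with the game's code): multiplying every element by a scalar also touches the bottom row of an affine matrix, whereas composing with a scaling transform only scales the basis vectors.

```java
// What the method actually does - multiply every element by the scalar
// (Mojang's mulComponentWise describes exactly this).
static float[][] mulComponentWise(float[][] m, float s) {
    float[][] out = new float[4][4];
    for (int row = 0; row < 4; row++) {
        for (int col = 0; col < 4; col++) {
            out[row][col] = m[row][col] * s;
        }
    }
    return out;
}

// What "scaleMatrix" suggests - compose with the scaling transformation
// diag(s, s, s, 1), which leaves the homogeneous row untouched.
static float[][] composeWithScale(float[][] m, float s) {
    float[][] out = new float[4][4];
    for (int row = 0; row < 4; row++) {
        for (int col = 0; col < 4; col++) {
            out[row][col] = (row < 3 ? s : 1.0f) * m[row][col];
        }
    }
    return out;
}
```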
computeQuaternionFromMatrix is a technically-correct name - the method takes a 3x3 matrix and returns a quaternion - but it fails to convey what the method actually does. stepJacobi at least points you in the right direction.
There's an even more damning problem, though: ChatGPT started hallucinating methods.
In the table, you'll see some methods for which there are no ChatGPT names (because it got distracted and didn't name them), and some without Mojang names (because they don't exist - ChatGPT made them up).
Oh dear.
Conclusion
ChatGPT is an interesting tool. There is no doubt that it is very good at text prediction and generation. This sometimes generalises to an ability to paraphrase, summarise, and appear to understand the meaning behind text - but the appearance is just that: it cannot understand meaning.
For deobfuscation name mapping, Large Language Models such as ChatGPT may be useful as assistive tools. They cannot be relied on to automate the naming process.
In the specific case of Minecraft, with the existence of Mojmap, there is limited value in maintaining alternate mappings. There is more value in augmenting Mojmap, e.g. with Parchment. Therefore it would be useful to see how the process of mapping parameter names and writing Javadoc can be made more efficient.
There is less room for error in parameter naming than class/member naming. Large Language Models could be used to help automate this process - however, specialised non-ML tools may be as or more useful.
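As a toy example of what a non-ML heuristic for parameter names can look like - invented for this post, and far cruder than real tooling - you can get surprisingly far just by deriving a name from the parameter's type:

```java
// Derive a candidate parameter name from its type name, e.g.
// "net.minecraft.util.hit.BlockHitResult" -> "blockHitResult".
static String suggestParamName(String typeName, int indexAmongSameType) {
    // Drop generic arguments and the package prefix.
    String simple = typeName.replaceAll("<.*>", "");
    simple = simple.substring(simple.lastIndexOf('.') + 1);
    // Lower-case the first letter.
    String base = Character.toLowerCase(simple.charAt(0)) + simple.substring(1);
    // Disambiguate repeated types with an index suffix: "blockPos2".
    return indexAmongSameType > 0 ? base + (indexAmongSameType + 1) : base;
}
```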
Stay tuned - I will test ChatGPT's ability to write Javadoc for decompiled methods in another post. I don't suspect it will do very well.
Thanks to:
- Shedaniel's Linkie, an enormously useful tool for translating between mappings.
- The QM and Yarn contributors.
- OpenAI.
- The Quiltflower contributors.