Project: Using After Effects Expressions To Generate Character Lip Sync
Kat
Posted on July 26, 2024
Contents
- Introduction
- Create Your Lip Flaps
- Importing Into After Effects
- Convert Audio To Keyframes
- Creating The Expression
- Working Out The Audio Range
- Final if Statements
- Accounting For Audio Peaks
- Final Step
Introduction
Lip sync can be one of the most painful parts of character animation. But if you're looking for a simple way to sell the idea of a character speaking, you may find that using expressions in After Effects can help automate the process.
Here's how I do it.
Create Your Lip Flaps
First, create your lip flaps. These can be as simple as a line for the closed mouth and 2 ovals, one stretched out horizontally and the other vertically. Feel free to add more shape and detail.
Importing Into After Effects
Start a new After Effects project, and import your lip flaps and the audio you want to create lip sync for.
Create a new composition and drag all of these assets into your timeline.
Convert Audio To Keyframes
We have one last thing to do before we can start writing our expression.
Right click on your audio layer, and select Keyframe Assistant > Convert Audio To Keyframes. This will create a new layer called Audio Amplitude. You'll see this layer comes with 3 sliders: Left Channel, Right Channel, and Both Channels. We only need the Both Channels effect, so feel free to delete the other 2. If you open up the effect, you will see that there is a keyframe for every frame of the composition.
Creating The Expression
Now the fun bit: writing the expression. It's important to understand what we need to tell After Effects before we write any code.
Essentially, we want to work out the range of our audio, from its lowest point to its highest. We can then use this range to control the opacity of our lip flaps. When the audio is silent, the closed mouth should be visible. If the audio is in the first half of the range, the horizontal mouth should be visible. And if the audio is in the second half of the range, the vertical mouth should be visible. This gives the illusion of lip sync.
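To put some hypothetical numbers on that: if the audio values ran from 0 up to 20, a frame near 0 would show the closed mouth, a frame at 6 would show the horizontal mouth, and a frame at 15 would show the vertical mouth. The expression below works this range out automatically for your own audio.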
Working Out The Audio Range
An easy way to work out the range of our audio is to do a quick check: run the cursor through the timeline and read the value of every keyframe on the Both Channels slider manually. However, if your audio track is long, or you're working at a high frame rate, you can easily miss values, so it's better to get After Effects to do the checking for us.
We can do this with a for loop inside the opacity expression of each of our lip flap layers:
// Reference the Both Channels slider on the Audio Amplitude layer
audio = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");

// Track the highest and lowest keyframe values found so far;
// minV starts at the first keyframe so a background-noise floor is picked up
maxV = 0;
minV = audio.key(1).value;

// Check every keyframe on the slider
for (i = 1; i<=audio.numKeys; i++){
    maxV = Math.max(maxV, audio.key(i).value);
    minV = Math.min(minV, audio.key(i).value);
}
First, we reference the Both Channels slider. Then, create variables to store your max and min values: maxV and minV. maxV starts at 0, while minV starts at the first keyframe's value so the loop can settle on the quietest point in the track.
The arguments in our for loop are set to i = 1; i<=audio.numKeys; i++. This creates the variable i for our loop and uses it to check every keyframe on the Both Channels slider, stopping when it reaches the last keyframe.
Let's start with the maxV value. On each pass of the loop, the Math.max() function compares our current maxV value to the current keyframe value. If the keyframe value is bigger than maxV, it updates maxV to be that value.
We then do the same to work out our minimum value, using Math.min() on the next line of the loop. In most cases this value will end up at or very close to 0, but running this line accounts for background noise that keeps the track from ever being completely silent.
Final if Statements
We now have the values we need to work out our audio range. In order to return the correct opacity value, we need to write an if statement for each of our 3 lip flap layers.
First, the statement for our closed mouth layer:
if (audio < .1) 100
else 0
Here, I have specified that if the value of the audio variable is under 0.1, the audio of this comp should be considered silent, so the closed mouth opacity is set to 100. If the variable's value is higher than 0.1, the opacity will be set to 0. Because the if statement for this layer doesn't require working out the audio range, the for loop can be deleted from this layer to help with the project's efficiency. Make sure to swap 0.1 for a value which makes sense for your project, if necessary.
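Put together, the full opacity expression on the closed mouth layer then boils down to the slider reference plus the if statement:

audio = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");

// Closed mouth shows only when the audio is effectively silent
if (audio < .1) 100
else 0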
Next, our vertical mouth layer:
if (audio < (maxV - minV)/2) 0
else 100
This layer should only be visible in the upper half of our audio range. If the audio variable is less than half of our audio range, this layer's opacity is set to 0, which covers both silence and the first half of the range. When the audio variable is equal to or higher than half of our audio range, the opacity is set to 100.
Lastly, our horizontal mouth layer:
if (audio < .1) 0
else
if (audio < (maxV - minV)/2) 100
else 0
The horizontal mouth layer needs to account for when the vertical mouth layer or the closed mouth layer is on, so its if statement needs to check for both of those possibilities. Combining our last 2 if statements into 1 achieves this, as long as the correct branch returns 100. Since we want the layer to be visible in the first half of our audio range, the if (audio < (maxV - minV)/2) branch returns 100, and all others return 0.
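For reference, the complete opacity expression on the horizontal mouth layer, loop included, would look something like this:

audio = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");

// Work out the range of the audio track
maxV = 0;
minV = audio.key(1).value;
for (i = 1; i<=audio.numKeys; i++){
    maxV = Math.max(maxV, audio.key(i).value);
    minV = Math.min(minV, audio.key(i).value);
}

// Visible only in the first half of the audio range
if (audio < .1) 0
else
if (audio < (maxV - minV)/2) 100
else 0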
Accounting For Audio Peaks
At this point you might find there is less variation in the horizontal and vertical mouth flaps than you would like. This may mean there are peaks in the Both Channels slider which are skewing the range, since we take the maximum and minimum values of the entire audio track. Because of this, we might want to finesse the scale a little bit to account for those peaks.
I did this by adding a slider to my expression. I made a null layer (named SLIDERS), added a Slider Control to it, and renamed the effect Max Value Adjust. Then, in both the horizontal and vertical mouth layers, I made a variable to store the slider:
endV = thisComp.layer("SLIDERS").effect("Max Value Adjust")("Slider");
And added it to their if statements like so:
Horizontal if statement
if (audio < .1) 0
else
if (audio < ((maxV + endV) - minV)/2) 100
else 0
Vertical if statement
if (audio < ((maxV + endV) - minV)/2) 0
else 100
Now the slider is in place, it can be used to adjust the maxV end of our audio range. You can do this by eye, dragging the slider into negative values until you're happy with the look of your lip flaps.
You can also follow the same process to add a slider to the minV value of the audio range, if you need that flexibility. To do this, create another slider and store it in a startV variable on your closed mouth and horizontal mouth layers, then add it to their if statements.
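Assuming the second slider also sits on the SLIDERS null and is named Min Value Adjust (a placeholder name), the startV variable would look something like this:

startV = thisComp.layer("SLIDERS").effect("Min Value Adjust")("Slider");

With that in place, the updated if statements are: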
Closed mouth if statement
if (audio < .1 + startV) 100
else 0
Horizontal if statement
if (audio < .1 + startV) 0
else
if (audio < ((maxV + endV) - minV)/2) 100
else 0
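Putting everything together, the final opacity expression on the horizontal mouth layer, with both adjustment sliders, would then look something like this (Min Value Adjust being the placeholder slider name assumed above):

audio = thisComp.layer("Audio Amplitude").effect("Both Channels")("Slider");
endV = thisComp.layer("SLIDERS").effect("Max Value Adjust")("Slider");
startV = thisComp.layer("SLIDERS").effect("Min Value Adjust")("Slider");

// Work out the range of the audio track
maxV = 0;
minV = audio.key(1).value;
for (i = 1; i<=audio.numKeys; i++){
    maxV = Math.max(maxV, audio.key(i).value);
    minV = Math.min(minV, audio.key(i).value);
}

// Visible between the adjusted silence threshold and the adjusted midpoint
if (audio < .1 + startV) 0
else
if (audio < ((maxV + endV) - minV)/2) 100
else 0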
Final Step
Now, you should have your lip flaps moving in time to your audio. However, you may find that there are some flash frames, given that the Both Channels slider has a keyframe on every frame of the composition. To mitigate this, create an adjustment layer and add the Posterize Time effect to it. Set the value to half of your composition's frame rate to start. Then, if you still feel there is some flashing, gradually take the number down until you're happy with the rate the lip flaps change. In my composition, set to 25fps, I tend to set Posterize Time to 10.
And that's it! Generated, simple lip sync for when you need to spend your time elsewhere in your project.
Please leave a comment if you have any questions, or know of a more efficient way of achieving this.