With the footsteps recorded and ready to go within Wwise, I could now begin the process of creating the footsteps system in the Unreal Engine. Unreal uses a node-based visual scripting system that I had to get to grips with for the first time, but it allows for programming by stringing processes together in a very visual way. As I am not a programmer, this is very intuitive and accessible to me.
The first step to adding footsteps is to define the physical surfaces that the player will walk on. Doing this enables Physical Materials to be created and applied to objects in the level. For example, the stone texture in my level can be given a stone Physical Material, telling Wwise that the player is walking on stone.
The above nodes are part of a section of the Player Character Blueprint, the script that controls everything to do with the player. The nodes execute from the left of the screen to the right, and this is the start of the footsteps audio process.
The process begins at the Event Tick node, which fires a signal every frame of gameplay (60 times per second at my frame rate). Each tick adds to a timer for the footsteps to keep time with. This then moves into a Branch node, which works like an if statement in code: if the red input (red denotes a boolean, so it can only be true or false) is true, then the next action is performed.
In this case the red input checks that the player is moving and that 2.2 seconds have passed since they started moving. Editing the float value at the bottom of the screen changes the rate at which the sound is triggered. This needed to be tested to choose a realistic pace for the footstep sounds.
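The logic of this tick-and-branch section can be modelled in plain C++. This is a sketch only: the names (FootstepTimer, Interval) are my stand-ins, not the actual Blueprint variable names, and I am assuming the timer resets when the player stops moving.

```cpp
// Plain-C++ model of the Blueprint logic: Event Tick adds the frame's
// delta time to a timer, and a Branch fires a footstep once the player
// has been moving for longer than the threshold (2.2 s in my graph).
struct FootstepTimer {
    float Timer = 0.0f;     // the float variable the Set node later resets
    float Interval = 2.2f;  // hypothetical name for the threshold value

    // Returns true when a footstep should be triggered this frame.
    bool Tick(float DeltaSeconds, bool bIsMoving) {
        if (!bIsMoving) {
            Timer = 0.0f;          // assumption: stopping resets the timer
            return false;
        }
        Timer += DeltaSeconds;     // Event Tick -> add to timer
        if (Timer >= Interval) {   // Branch: condition is true
            Timer = 0.0f;          // ready for the next step
            return true;
        }
        return false;              // Branch: condition is false
    }
};
```

At 60 frames per second, each tick adds 1/60 of a second, so a step fires roughly every 132 ticks.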
From the True output of the Branch node, the next process checks whether the player is on the ground. It does this using a LineTraceByChannel node, which fires a line towards the ground. If the line hits the ground within the height of the player, they must be touching it. This result feeds another boolean into a second Branch node.
The next node in the process is another LineTraceByChannel that again fires a line at the ground, but this time checks the Physical Material of the surface by passing its OutHit pin into a Break Hit Result node. This outputs the name of the material, which is then sent into Wwise with a Set Switch node that sets the switch to the name of the material.
The next step of the footsteps audio process is to proceed through a Delay node that waits 0.2 seconds, then into a Post Event node that plays a Wwise event (in this case the Player_Footsteps event) at the position of a selected actor, which here is Self. Self refers to the current Blueprint, so the sound is played at the location of the player.
Next it goes into a Set Occlusion Refresh Interval node set to 0, which ensures that the footstep sounds are never occluded. Finally it travels into a Set node that resets the timer variable to 0 so the process can start again.
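Putting the chain together, here is a rough plain-C++ model of one pass through the graph. The FHit and AudioEngine types are hypothetical stand-ins for Unreal's hit result and the Wwise integration nodes, used only to show the order of operations, and the 0.2 second Delay is omitted since it only offsets the timing.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for the line trace's Break Hit Result output.
struct FHit {
    bool bHit = false;             // did the trace reach a surface?
    std::string PhysicalMaterial;  // e.g. "Stone", "Grass", "Dirt"
};

// Hypothetical stand-in that records the calls the graph would make.
struct AudioEngine {
    std::vector<std::string> Log;
    void SetSwitch(const std::string& Group, const std::string& Value) {
        Log.push_back("SetSwitch " + Group + "=" + Value);
    }
    void PostEvent(const std::string& Event) {
        Log.push_back("PostEvent " + Event);
    }
    void SetOcclusionRefreshInterval(float Seconds) {
        Log.push_back("Occlusion " + std::to_string(Seconds));
    }
};

// One pass through the chain: trace -> branch -> switch -> event -> reset.
bool TriggerFootstep(const FHit& GroundTrace, AudioEngine& Audio, float& Timer) {
    if (!GroundTrace.bHit)  // second Branch: player is airborne, do nothing
        return false;
    // Break Hit Result supplies the Physical Material name for the switch.
    Audio.SetSwitch("Surface", GroundTrace.PhysicalMaterial);
    Audio.PostEvent("Player_Footsteps");       // posted on Self in the graph
    Audio.SetOcclusionRefreshInterval(0.0f);   // never occlude footsteps
    Timer = 0.0f;                              // Set node: restart the cycle
    return true;
}
```

The model makes the ordering explicit: the switch must be set before the event is posted, otherwise Wwise would play the previous surface's sounds.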
With the system set up within Wwise, I went out to record some footsteps with my Zoom H4n portable recorder. I went to a field as far away from the roads in Hatfield as possible. While the weather was very windy that day, I still managed to get good grass footstep recordings by setting up my microphone on a tripod with a hill behind it. This allowed me to avoid the majority of the wind in my recordings, as the hill acted as a wind block low to the ground where my feet were. I made multiple recordings of footsteps from various positions so that I would have a larger pool of source recordings to work with.
With my recordings collected, I next had to clean up the sounds. For this task I used iZotope RX.
As my recordings contained a fair amount of noise throughout, the clean-up stage was a very important one when preparing sounds for use in game. The first step was to remove the lower frequencies using the EQ module. The footsteps' frequency content sits in the high-mid area, so I could remove a decent amount from the bottom end without affecting the content that I needed.
The next step was to use Spectral De-noise to remove the unwanted background noise of distant traffic and wind from my recordings. To do this I selected a section of noise-only audio and used the learn feature to create a profile of the noise. I then selected the whole audio file and adjusted the parameters while checking the output to make sure I was removing only noise and not footstep content. This left me with a clean recording of the footsteps, which I then loaded into Reaper to begin editing.
With the audio file imported into Reaper, I began by converting the stereo file to mono using just the left channel. I did this because it would be much more difficult to realistically pan a stereo audio file in Wwise to simulate each leg.
I then sliced up the file at the transients of the footsteps that I liked the sound of; however, some steps ended up being unusable due to bird sounds in the background.
Once I had collected my samples and added fades to each one, I realised that some steps had more of a shoe impact, while others had more of a grass/foliage texture. Due to this, I decided to layer my favourite impact sounds with the best grass textures to create steps with more character and fullness.
As a final step to add some variation, I automated a frequency shifter to slightly alter each step. This would make each step sound similar yet unique.
A great feature of Reaper is that it allows for batch exporting of multiple audio files at once, which is perfect for making footsteps! To do this I created regions for each step, and then it's as simple as choosing to export all regions, with each one getting an automatic incremental file name. They are now ready for implementation into Wwise!
In the next post I will explain how I connected the Wwise footstep system to be usable within Unreal!
While not the most exciting part of a game's soundtrack, footsteps are an aspect that should not be overlooked, especially for first-person games like mine. They serve the obvious purpose of letting the player know that their character is moving, but they also carry more information than you might expect. They can tell you what surface you're walking on (gravel, grass, rocks, bones...) and will often be the first sound you hear with a different reverb applied when entering a new space, like a huge hall or a cramped cave. They make characters feel grounded, and in multiplayer games the player can listen for an enemy's approaching footsteps to gain an advantage in a gunfight.
As you can see, they do quite a lot!
Keeping all of that in mind, I began setting up my footsteps system in Wwise and Unreal. I started by creating my hierarchy in Wwise that would house my footsteps sounds.
In the above image, you can see that the hierarchy begins with a "Footsteps" Actor-Mixer. I use this mainly for housekeeping, to keep everything tidy within a contained structure; however, I will also use it later to control gain and auxiliary sends when I get to the mixing stage. Next is the "Surface" Switch Container, an important component for footsteps as it allows switching based on the material that is being walked on.
The Switch Container interacts with Switches that I created in the Game Syncs tab. As you can see, I have created a "Surface" Switch Group that contains Dirt, Grass and Stone, which are some of the materials that will be present within my game. If, for example, the player walks onto dirt in the game, the Switch Container will trigger the "Dirt" Sequence Container, which plays a set of actions in an order I choose. In this case, I have a sequence that simulates right and left leg movement.
The above image shows the playlist for the Dirt sequence. First it will play the "Left Foot" Sequence Container which consists of a clothing Foley sample, followed by a random container that will play a random footstep sample from a pool of chosen dirt footstep samples. This will all happen when one instance of footsteps is called while the player is standing on dirt. It will then follow the same pattern for the right foot the next time it is called.
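The behaviour of this playlist can be sketched in plain C++: each trigger plays one random sample from the pool for the current foot, then alternates feet for the next trigger. The struct and sample names here are illustrative placeholders, not the actual container names in my project, and the Foley layer is folded into each sample for brevity.

```cpp
#include <string>
#include <vector>
#include <random>

// Sketch of a Sequence Container alternating between two Random Containers.
struct FootstepSequence {
    std::vector<std::string> LeftPool;   // random pool for the left foot
    std::vector<std::string> RightPool;  // random pool for the right foot
    bool bLeftNext = true;               // the sequence starts on the left
    std::mt19937 Rng{42};                // fixed seed for a repeatable demo

    // Returns the sample chosen for this step and advances the sequence.
    std::string Trigger() {
        const std::vector<std::string>& Pool = bLeftNext ? LeftPool : RightPool;
        std::uniform_int_distribution<size_t> Pick(0, Pool.size() - 1);
        std::string Sample = Pool[Pick(Rng)];
        bLeftNext = !bLeftNext;          // alternate feet, as in the playlist
        return Sample;
    }
};
```

Calling Trigger repeatedly yields left, right, left, right... with a random sample drawn from the matching pool each time, mirroring how the Sequence Container steps through its playlist on successive plays.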
To give the illusion of left and right foot movement, I edited the speaker panning position ever so slightly to the right and left for each corresponding side, and slightly pitched up the right foot to give that side more distinct variation. As I have not recorded my footstep sounds at this stage, I am currently using placeholder samples while I ensure that everything is working within Unreal Engine.
Next time, integrating into Unreal Engine!