As the game has 3 distinct areas, the ambiences for each need to be suitable and unique. The game also needs to switch between each zone seamlessly and naturally. Instantly cutting from one ambience to another would sound jarring and unnatural, so I needed to find a way to transition smoothly.
I first thought of having the ambiences input into the game as emitters with a spherical attenuation shape. While this could have worked, the shape of my level meant that circular audio zones could not cover the entire map sufficiently.
Next I decided to have the sounds be 2D emitters with no attenuation, meaning that they are always heard in stereo. I tried using States in Wwise to switch the ambiences as the player walked through trigger boxes, which worked relatively well. However, this did not fully achieve the seamless switching, as a State can only be on or off, with no in-betweens.
Finally I decided to try Blend Containers in Wwise for this purpose. I could set a range of values and place my sound files onto the graph in the order that they appear in the level. As seen in the image above, there are 3 blocks for the tomb, the cave and the forest respectively. Where the areas overlap, they create crossfades: one ambience turns down at the same rate that the other turns up. I now needed to attach these values to my game so that the fades could be triggered by the player.
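The crossfade behaviour a Blend Container applies can be sketched in a few lines of Python (purely illustrative — Wwise computes this internally, and the fade boundaries below are invented values, not the ones from my project):

```python
def crossfade_gains(value, fade_start, fade_end):
    """Linear crossfade between two overlapping zones: as `value` moves
    from fade_start to fade_end, zone A fades out while zone B fades in
    at the same rate."""
    if value <= fade_start:
        return 1.0, 0.0
    if value >= fade_end:
        return 0.0, 1.0
    t = (value - fade_start) / (fade_end - fade_start)
    return 1.0 - t, t

# Halfway through the overlap region, both ambiences sit at half gain.
gain_a, gain_b = crossfade_gains(5000, 4000, 6000)
```

With a linear curve the two gains always sum to 1, so the combined level stays steady through the transition; Wwise also offers other fade curve shapes if that ever sounds unnatural.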
For Unreal Engine to talk to Wwise and control the ambience, a game parameter must be created. In this case I created one with a maximum of 20,000, a minimum of 0 and a default of 18,945. These figures relate to the units within my game from the start of the level to the furthest away point.
The above set of nodes is found within the Level Blueprint, the blueprint that houses the entire game world. From the Event Tick node it travels into the SetRTPCValue node. This is a Wwise-specific node that sends data into Wwise so that Game Parameters can be controlled.
The green wire named Value calculates the 2D distance from the furthest point in the tomb to the player. This number is then sent into Wwise and updates the LevelProgression parameter.
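The distance calculation feeding the RTPC can be mirrored in Python like this (a sketch only — in the game it is built from Blueprint nodes, and the function name here is my own):

```python
import math

def level_progression(player_xy, tomb_far_xy, max_value=20000.0):
    """2D distance from the furthest tomb point to the player,
    clamped to the Game Parameter range [0, max_value]."""
    dx = player_xy[0] - tomb_far_xy[0]
    dy = player_xy[1] - tomb_far_xy[1]
    return min(max(math.hypot(dx, dy), 0.0), max_value)
```

Clamping matters because the Game Parameter was created with a fixed 0–20,000 range, so any position outside the playable area should pin to the end of the range rather than push past it.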
As you can now see from this Blend Container, the LevelProgression value will continually update as the player moves through the level. The crossfade points sit at the boundaries of each ambience zone and smoothly fade into each other in a realistic manner.
With the footsteps recorded and ready to go within Wwise, I could now begin creating the footsteps system in Unreal Engine. It uses a node-based visual scripting system that I had to get to grips with for the first time, but it allows for programming by stringing processes together in a very visual way. As I am not a programmer, this is very intuitive and accessible to me.
The first step to adding footsteps is to define the physical surfaces that the player will walk on. Doing this allows physical materials with these surfaces to be created and applied to objects in the level. For example, the stone texture in my level can be given a stone physical material, telling Wwise that the player is walking on stone.
The above nodes are part of a section of the Player Character blueprint, the script that controls everything to do with the player. The nodes start from the left and move across the screen to the right and this is the start of the footsteps audio process.
The process begins at the Event Tick node, which fires a signal every frame of gameplay (60 frames per second in my case). This adds to a timer for the footsteps to keep time with. It then moves into a Branch node, which works like an If statement in coding: if the red input (red is boolean, so it can only be true or false) is true, perform the next action.
In this case the red input checks that the player is moving and that 2.2 seconds have passed since they started moving. Editing the float value at the bottom of the screen changes the rate at which the sound is triggered. This needed to be tested to choose a realistic pace for the footstep sounds.
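The timer-and-Branch logic is roughly equivalent to this Python sketch (the class and names are mine — in the game it is all Blueprint nodes):

```python
class FootstepTimer:
    """Accumulates per-frame delta time; reports a footstep when the
    player is moving and `interval` seconds have elapsed (2.2 s in the
    blueprint)."""
    def __init__(self, interval=2.2):
        self.interval = interval
        self.elapsed = 0.0

    def tick(self, delta_seconds, is_moving):
        """Called once per frame, like Event Tick."""
        if not is_moving:
            self.elapsed = 0.0   # standing still: no step is due
            return False
        self.elapsed += delta_seconds
        if self.elapsed >= self.interval:
            self.elapsed = 0.0   # reset, as the Set node does
            return True
        return False
```

Accumulating the real per-frame delta time (rather than counting frames) keeps the step rate stable even if the frame rate dips below 60.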
From the True outlet of the Branch node, the next process checks whether the player is on the ground. It does this using a LineTraceByChannel node, which fires a line at the ground. If the line hits the ground at the height of the player, they must be touching the ground. This outputs another boolean into a second Branch node.
The next node in the process is another LineTraceByChannel that again fires a line at the ground, but this time checks the Physical Material of the surface by feeding the OutHit pin into a Break Hit Result node. This outputs the name of the material, which can then be sent into Wwise with a Set Switch node that changes the switch to the name of the material.
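The material-to-switch step amounts to a simple lookup, sketched here in Python with invented material names (my actual materials and switch values live in the project, not in this table):

```python
# Hypothetical mapping from a physical material name (as returned by the
# Break Hit Result node) to the Wwise "Surface" switch value.
SURFACE_SWITCH = {
    "PM_Stone": "Stone",
    "PM_Dirt": "Dirt",
    "PM_Grass": "Grass",
}

def surface_from_trace(hit_material_name, default="Stone"):
    """Resolve the switch to set before posting the footstep event;
    fall back to a default if the material is unmapped."""
    return SURFACE_SWITCH.get(hit_material_name, default)
```

Having a sensible default means an unmapped material produces a plausible footstep rather than silence.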
The final step of the footsteps audio process is to proceed through a delay node that waits 0.2 seconds, then into a Post Event node that will play a Wwise event (in this case the Player_Footsteps event) at the position of a selected actor which is self. Self refers to the current blueprint so the sound will be played at the location of the player.
Next it goes into a Set Occlusion Refresh Interval node which is set to 0 which will ensure that the footsteps sounds are never occluded. Finally it travels into a Set node which resets the timer variable to 0 so the process can start again.
With the system set up within Wwise, I went out to record some footsteps with my Zoom H4n portable recorder. I went to a field as far away from the roads in Hatfield as possible. While the weather was very windy that day, I still managed to get good grass footstep recordings by setting up my microphone on a tripod with a hill behind it. This allowed me to avoid the majority of the wind in my recordings, as the hill acted as a wind block low to the ground where my feet were. I made multiple recordings of footsteps from various positions so that I would have a larger pool of source recordings to work with.
With my recordings collected, I next had to clean up the sounds. For this task I used iZotope RX.
As my recordings contained a fair amount of noise throughout, the clean-up stage was a very important one when preparing sounds for use in game. The first step was to remove the lower frequencies using the EQ module. The footsteps' frequency content sits in the high-mid area, so I could remove a decent amount from the bottom end without affecting the content that I needed.
The next step was to use Spectral De-noise to remove the unwanted background noise of distant traffic and wind from my recordings. To do this I selected a section of noise from my recordings and used the learn feature to create a profile of the noise. I then selected the whole audio file and changed the parameters while listening to the output noise only, to make sure I was removing only noise and not the footstep content. This process left me with a clean recording of the footsteps, which I then loaded into Reaper to begin editing.
With the audio file imported into Reaper, I began by converting the stereo file into just the left channel in mono. I did this as it would be much more difficult to realistically pan a stereo audio file in Wwise to simulate each leg.
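In terms of raw samples, this conversion simply keeps the left value of each interleaved stereo frame — sketched here in Python (Reaper does this with a channel mode setting; no code is involved in practice):

```python
def left_channel(stereo_frames):
    """Take the left channel of interleaved (L, R) frames, producing
    the mono signal that Wwise can then pan per leg."""
    return [frame[0] for frame in stereo_frames]
```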
I then sliced up the file at the transients of the footsteps that I liked the sound of, however some steps ended up being unusable due to the bird sounds in the background.
Once I had collected my samples and added fades to each one, I realised that some steps had more of a shoe impact, while others had more of a grass/foliage texture. Due to this, I decided to layer my favourite impact sounds with the best grass textures to create steps with more character and fullness.
As a final step to add some variation, I automated a frequency shifter to slightly alter each step. This would make each step sound similar yet unique.
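The idea can be sketched in Python, with a small random playback-rate offset standing in for the automated frequency shifter (the range below is an invented example, and a frequency shifter is not literally a rate change — this only illustrates the per-step randomisation):

```python
import random

def vary_step(rate_range=(0.97, 1.03), rng=random):
    """Pick a small random variation factor for each footstep so that
    no two steps sound exactly identical."""
    return rng.uniform(*rate_range)
```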
A great feature of Reaper is that it allows for batch exporting of multiple audio files at once, which is perfect for making footsteps! To do this I created regions for each step, and then it's as simple as choosing to export all regions, each one getting an automatic incremental file name. They are now ready for implementation into Wwise!
In the next post I will explain how I connected the Wwise footstep system to be useable within Unreal!
While not the most exciting part of a game's soundtrack, footsteps are an aspect that should not be overlooked, especially in first-person games like mine. They serve the obvious purpose of letting the player know that their character is moving, but they also carry more information than you might expect. They can tell you what surface you're walking on (gravel, grass, rocks, bones...) and will often be the first sound you hear with a different reverb applied when entering a new space, like a huge hall or a cramped cave. They make characters feel grounded, and in multiplayer games the player can listen for an enemy's approaching footsteps to gain an advantage in a gunfight.
As you can see, they do quite a lot!
Keeping all of that in mind, I began setting up my footsteps system in Wwise and Unreal. I started by creating my hierarchy in Wwise that would house my footsteps sounds.
In the above image, you can see that the hierarchy begins with a "Footsteps" Actor-Mixer. I use this mainly for housekeeping reasons to keep everything tidy within a contained structure, however I will be using this later to control gain and auxiliary sends when I get to the mixing stage. Next is the "Surface" Switch Container which is an important component for footsteps as this will allow for switching of the material that is being walked on.
The Switch Container interacts with Switches that I created in the Game Syncs tab. As you can see I have created a "Surface" Switch Group that contains Dirt, Grass and Stone which are some of the materials that will be present within my game. If, for example, the player walks onto dirt in the game, the Switch Container will trigger the "Dirt" Sequence Container. This completes a set of actions in a selected order chosen by myself. In this case, I have a sequence that will simulate right and left leg movement.
The above image shows the playlist for the Dirt sequence. First it will play the "Left Foot" Sequence Container which consists of a clothing Foley sample, followed by a random container that will play a random footstep sample from a pool of chosen dirt footstep samples. This will all happen when one instance of footsteps is called while the player is standing on dirt. It will then follow the same pattern for the right foot the next time it is called.
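The behaviour of the Dirt sequence can be modelled in a short Python sketch (sample and class names are invented; Wwise's Sequence and Random Containers handle all of this themselves):

```python
import random

class FootstepSequence:
    """Mimics the Dirt Sequence Container: alternate left/right foot,
    each step pairing a clothing Foley sample with a random footstep
    sample from that foot's pool."""
    def __init__(self, pools, rng=None):
        self.pools = pools            # e.g. {"Left": [...], "Right": [...]}
        self.feet = ["Left", "Right"]
        self.index = 0
        self.rng = rng or random.Random()

    def next_step(self):
        """One call = one footstep event while standing on dirt."""
        foot = self.feet[self.index]
        self.index = (self.index + 1) % 2
        return foot, "Foley_Cloth", self.rng.choice(self.pools[foot])
```

Each call alternates feet and re-rolls the random sample, which is exactly the left/right pattern the playlist image shows.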
To give the illusion of left and right foot movement, I edited the speaker panning position ever so slightly to the right and left for each corresponding side and slightly pitched up the right foot to give that side more distinct variation. As I have not recorded my footsteps sounds at this stage, I am currently using placeholder samples while I ensure that it is working within Unreal Engine.
Next time, integrating into Unreal Engine!
Since my last post I have completed the design of the level and finalised the general gameplay. I created the maze within the tomb of my level and added events to trap the player as well as a way to exit the tomb to complete the game.
I chose to make a very simple maze that, while not particularly challenging to complete on its own, becomes more difficult due to the darkness that can easily cause the player to get lost and disorientated. The addition of the chasing enemy will also give extra complexity and danger to what is in fact a very simple game. The beginning of the maze starts when the player collects the statue towards the entrance of the tomb causing a large boulder to block the exit. I created this effect by making a simple blueprint that would move the rock when the player interacts with the statue using the "E" key. This then makes the statue disappear, suggesting that the character has collected it, and triggers the movement of the boulder. Later in the project I will add audio events for both the statue collection and the boulder movement which will be implemented via the blueprints.
I added various hallways, doorways, false paths and pillars to the maze to hopefully slow down the player as they try to escape from the enemy. Using a similar blueprint to the statue collection, I created a button in one of the rooms that moves another boulder, this time allowing the player to exit the tomb and win the game. I placed this button at the furthest point from the start in order to challenge the player as they avoid the enemy.
The above image shows a top down view of the whole maze area. The green area on the floor is the NavMesh which represents the area that the enemy can move within and is used to calculate how it should chase the player using pathfinding. Once the enemy senses the player, it will then start chasing the player along the shortest path within the green area.
Setting this up is very important for my game as it not only will allow the enemy to chase correctly, it will also allow me to dynamically impact the audio of this enemy based on how far away it is from the player along the shortest path. If I were to set up my dynamic sounds to be affected only by the distance to my player, there would be unwanted moments where the player is technically close to the enemy but there is a wall in the way. This means that the path to the enemy is much further away, therefore the danger is minimal and the audio should reflect that.
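The difference between straight-line and path distance is easy to demonstrate with a tiny grid search in Python — breadth-first search here stands in for the NavMesh pathfinding, and the grid is an invented example:

```python
from collections import deque
import math

def path_distance(grid, start, goal):
    """BFS shortest-path length (in cells) through open cells (0);
    returns None if the goal is unreachable."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (x, y), dist = queue.popleft()
        if (x, y) == goal:
            return dist
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in seen):
                seen.add((nx, ny))
                queue.append(((nx, ny), dist + 1))
    return None

# A wall (1s) sits between the player and the enemy: the straight-line
# distance is only 2 cells, but the route around the wall is 6.
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
straight = math.dist((0, 0), (0, 2))           # 2.0
around = path_distance(grid, (0, 0), (0, 2))   # 6
```

Driving the audio from the path distance rather than the straight-line distance means the enemy sounds far away when it really is far away in gameplay terms, even if it is just on the other side of a wall.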
The enemy in the game is currently represented by a large cube that fills the hallways of the game maze. When the player walks near the enemy, it will relentlessly follow them until it either touches them, killing the player and resulting in a game over, or until the player escapes and wins the game. As the player cannot walk around the enemy in the narrow corridors, it will have to be kited around various obstacles to strategically manoeuvre it out of the way.
I am planning for the enemy to be invisible so that the player will have to carefully listen to their surroundings in order to locate it. This concept will need to be thoroughly tested however to ensure that it leads to fun gameplay rather than frustration. As a contingency, if I cannot get it to work as intended while invisible, I will explore other options such as having the enemy be adaptively invisible or a flickering light source.
Or maybe I'll just keep the big terrifying cube!
The main focus of my project is the audio (asset creation, implementation and innovation). With that in mind, I needed to spend as little time as possible on the level and game design elements. I knew that I wanted to have the level begin in a forest area, progress into a cave which would then lead to a hidden temple. For a quick way to generate a game area, I used the landscape tool to generate an area and carved out the gameplay zone with mountains at the edges to be used as the bounds of the level. I then applied a grass material to the landscape and painted on materials to simulate a dirt path and the outline of my cave. Using materials in Unreal Engine will come in handy later when I add my footstep sounds as I will be able to switch the surface sounds based on the material being walked over.
The next step was to create my cave. I used simple geometry to map out the area of my cave and then surrounded it with pre-made rock assets. I sped up this process by reusing multiples of the same rock shape and rotating them, giving the illusion of unique rock formations.
Once this was completed I finished off my starting area by adding trees and a small lake to create a forest environment. While adding these details is mostly unnecessary, I wanted to include them as inspiration for my final forest soundscape.
Within the cave there is almost no available light, preventing the player from being able to see. While I like this idea to an extent, as limited visuals will allow me to lean on my audio abilities more, complete darkness would make the game near unplayable. With this in mind, I created a simple torch light using blueprints that can be toggled on and off. Within the cave and temple this will be the primary source of light for the player.
The final stage of my level design was to create the temple area, where the bulk of the game will take place. Using geometry I created a large enclosed box that will house the dungeon-style gameplay area where the enemy will hunt you down. Once the maze interior is completed, my level will be finalised and I can fully focus on the audio-oriented gameplay aspects of the game!
While messing around with the first person template in UE4, I created a simple enemy AI that would follow the player throughout the level until it reaches the player.
This gave me the idea to have a sound attached to this enemy so that you could hear where it is coming from.
Adding to this, I thought that having the sound not only get louder, but have more of a stereo presence as the enemy gets near would allow the player to locate the enemy realistically through sound. I also figured out a way to have the audio trigger an event (stop in this case) on contact with the player.
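As a sketch of the idea (the range and linear curves below are invented placeholders, not engine values), the enemy's distance could drive both its volume and its stereo spread:

```python
def enemy_audio_params(distance, max_distance=2000.0):
    """Map enemy distance to a volume (0..1) and a stereo spread (0..1):
    the closer the enemy, the louder and wider it sounds."""
    d = min(max(distance, 0.0), max_distance)
    closeness = 1.0 - d / max_distance
    return closeness, closeness  # (volume, spread)
```

A distant enemy is then a quiet, narrowly-placed point source, while one right behind you fills the whole stereo field, which is the cue that should make players instinctively turn and run.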
With these things in mind I have decided to create a game where the player is chased by an invisible foe (maybe a monster?) that can only be heard. The gameplay will come from trying to avoid the enemy using their ears while making their way through a level, such as a cave or tomb, to escape.
On to the level design next!
As my final major project (FMP) is worth a whole module of my degree it needs to be of relatively large scope. I would like this project to be a major part of my sound design showreel that I will eventually send to potential employers, therefore I have decided to create and implement sound into a game. From my research in the field of game audio I have determined that many studios are utilising Unreal Engine 4 (UE4) to create their games combined with Audiokinetic's Wwise to implement the audio. With this in mind I have decided to create my own game within UE4 and implement the audio using Wwise.
As a Sound Design Technology student, sound is obviously the most important part of my project. Knowing this, I have decided that in order to make sound the star of my project I will use simple pre-made visual assets and game mechanics so as to not get bogged down in those aspects of development. In some manner I will make the audio an integral element of the gameplay to allow the sound to shine.
At this stage I don't have a fully fledged idea. I'll see what direction the project naturally takes itself in :)