Forest

The forest ambience is built on a bed of ambient wind and birds that I recorded out on a quiet field. This sound is a 2D loop that covers the entire forest area. I wanted the wind in the trees to sound realistic, so I decided to make them 3D emitter sounds that I would place on top of the tree areas, each covering a spherical area around it. I created a simple loop of wind and leaves with no variation and, in Wwise, used an RTPC curve to modulate the sound. Using an LFO with randomised frequency and depth, I modulated both the volume and a low-pass filter over time. This gives the effect of the sound rising and falling naturally, as wind does.

Spread is a setting in the positioning tab of Wwise that controls how widely a 3D sound is distributed across the speakers as a function of distance. Throughout the project I used this setting to simulate how sound behaves in reality. When you are far away from a sound, you hear the direction it is coming from in 3D space. As you get closer, this sense of direction lessens and the sound becomes more stereo. In this case, as you walk along the path, you hear the wind sounds coming from the trees themselves. As you walk towards the trees, the sound envelops you and becomes stereo.

In a blend container with the trees are some scattered bird sounds. I created a random container full of different bird sounds with randomised pitch and volume, then used the position editor (seen above) to select positions in the stereo field where the birds could trigger. These positions cycle randomly, simulating bird sounds coming from different trees around the player.

Pond

I wanted to create a serene summer atmosphere around the pond. To achieve this I created a loop of lapping water that I placed around the pond for coverage; this can only be heard when very close to the water. To supplement it, I created a cicada loop using an LFO-modulated wavetable in Serum. To give this sound some randomised movement in game, I used a similar method to the LFO affecting the RTPC on the wind, except this time I modulated the frequency of a peaking filter effect with a slow LFO.
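The LFO modulation described above lives entirely inside Wwise's authoring tool, but for illustration the same rise-and-fall behaviour could be driven from game code instead. A minimal C++ sketch, assuming a hypothetical wind RTPC and a SetWindRtpc() stand-in for the Wwise SetRTPCValue call (none of these names come from the project):

    // Sketch: drive a wind RTPC with an LFO whose frequency and depth
    // re-randomise each cycle. Hypothetical stand-in for Wwise's built-in
    // LFO modulator, which is what the project actually uses.
    #include "GameFramework/Actor.h"

    class AWindLfoActor : public AActor
    {
    public:
        AWindLfoActor() { PrimaryActorTick.bCanEverTick = true; }

        virtual void Tick(float DeltaSeconds) override
        {
            Phase += DeltaSeconds * Frequency;
            if (Phase > 1.0f) // once per cycle, pick new random LFO settings
            {
                Phase -= 1.0f;
                Frequency = FMath::FRandRange(0.05f, 0.3f); // Hz, assumed range
                Depth     = FMath::FRandRange(0.4f, 1.0f);
            }
            // Sine LFO scaled into an assumed 0..100 RTPC range; the RTPC
            // curve in Wwise would map this onto volume and low-pass cutoff.
            const float Lfo = 0.5f + 0.5f * FMath::Sin(Phase * 2.0f * PI);
            SetWindRtpc(Lfo * Depth * 100.0f);
        }

    private:
        void SetWindRtpc(float Value); // wraps the Wwise SetRTPCValue call
        float Phase = 0.0f, Frequency = 0.1f, Depth = 1.0f;
    };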
As the game has 3 distinct areas, the ambiences for each need to be suitable and unique. The game also needs the ability to switch between each zone seamlessly and naturally. If the sounds were to switch instantly, it would sound jarring and unnatural, so I needed to find a way around this.

I first thought of having the ambiences enter the game as emitters with a spherical attenuation shape. While this could have worked, the shape of my level meant that circular audio zones could not cover the entire map sufficiently. Next I decided to have the sounds be 2D emitters with no attenuation, meaning they are always heard in stereo. I tried using States in Wwise to switch the ambiences as the player walked through trigger boxes, which worked relatively well. However, this did not fully achieve the seamless switching, as States can only be on or off, with no in-between.

Finally I decided to try blend containers in Wwise. I could set a range of values and place my sound files onto the graph in the order they appear in the level. As seen in the image above, there are 3 blocks for the tomb, the cave and the forest respectively. Where the areas overlap, they create crossfades that turn one sound down at the same rate the other turns up. I now needed to attach these values to my game so the fades could be triggered by the player.

For Unreal Engine to talk to Wwise and control the ambience, a game parameter must be created. In this case I created one with a maximum of 20,000, a minimum of 0 and a default of 18,945. These figures relate to the units within my game from the start of the level to the furthest away point.

The above set of nodes is found within the Level Blueprint, the blueprint that houses the entire game world. From the Event Tick node it travels into the SetRTPCValue node. This is a Wwise-specific node that sends data into Wwise so that Game Parameters can be controlled. The green wire named Value calculates the distance in 2 dimensions from the furthest point away in the tomb to the player. This number is then sent into Wwise and changes the LevelProgression parameter. As you can now see from this blend container, the LevelProgression value continually updates as the player moves through the level. The crossfade points sit at the boundaries of each ambience zone and smoothly fade into each other in a realistic manner.
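The Blueprint logic just described translates almost directly into code. A rough C++ equivalent, run on tick; the class and variable names are hypothetical, and SetLevelProgressionRtpc() stands in for the Wwise SetRTPCValue node, whose exact C++ signature depends on the integration version:

    // Sketch of the Level Blueprint tick: 2D distance from the tomb's
    // furthest point to the player, sent to the LevelProgression parameter.
    #include "Engine/LevelScriptActor.h"
    #include "Kismet/GameplayStatics.h"

    void AMyLevelScript::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        APawn* Player = UGameplayStatics::GetPlayerPawn(this, 0);
        if (!Player)
        {
            return;
        }

        // 2D distance, matching the 0..20,000 unit range of the parameter.
        const float Distance = FVector::Dist2D(TombFurthestPoint,
                                               Player->GetActorLocation());

        // Stand-in for the Wwise "Set RTPC Value" node.
        SetLevelProgressionRtpc(FMath::Clamp(Distance, 0.0f, 20000.0f));
    }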
With the footsteps recorded and ready to go within Wwise, I could now begin the process of creating the footsteps system in Unreal Engine. It uses a node-based coding system that I had to get to grips with for the first time, but it allows for programming by stringing processes together in a very visual way. As I am not a programmer, this is very intuitive and accessible to me.

The first step to adding footsteps is to define the physical surfaces that the player will walk on. Doing this allows physical materials to be created and applied to objects. For example, the stone texture in my level can be given a stone physical material, telling Wwise that the player is walking on stone.

The above nodes are part of a section of the Player Character blueprint, the script that controls everything to do with the player. The nodes start from the left and move across the screen to the right, and this is the start of the footsteps audio process. It begins at the Event Tick node, which fires a signal every frame of gameplay (60 times per second at my frame rate). This adds to a timer for the footsteps to keep time with. It then moves into a Branch node, which works like an If statement in coding: if the red input (red is boolean, so it can be either true or false) is true, perform the next action. In this case the red input checks that the player is moving and that 2.2 seconds have passed since they started moving. Editing the float value at the bottom of the screen changes the rate at which the sound is triggered. This needed to be tested to choose a realistic pace for the footstep sounds.

From the True outlet of the Branch node, the next process checks whether the player is on the ground. It does this using a LineTraceByChannel node, which fires a line at the ground. If the line hits the ground at the height of the player, they must be touching the ground. This outputs another boolean into another Branch node. The next node in the process is a second LineTraceByChannel that also fires a line at the ground, but this time checks the Physical Material of the surface by passing the OutHit pin into a Break Hit Result node. This outputs the name of the material, which can then be sent into Wwise with a Set Switch node that changes the switch to the name of the material.

The final step of the footsteps audio process is to proceed through a Delay node that waits 0.2 seconds, then into a Post Event node that plays a Wwise event (in this case the Player_Footsteps event) at the position of a selected actor, which is Self. Self refers to the current blueprint, so the sound is played at the location of the player. Next it goes into a Set Occlusion Refresh Interval node set to 0, which ensures that the footstep sounds are never occluded. Finally it travels into a Set node, which resets the timer variable to 0 so the process can start again.
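Condensed into C++, that whole chain might look like the sketch below. The line trace and physical-material lookup use standard UE4 calls; SetSurfaceSwitch() and PostFootstepEvent() are hypothetical wrappers around the Wwise Set Switch and Post Event nodes, and FootstepTimer/StepInterval are assumed member variables:

    // Sketch of the footstep tick logic (the project builds this in
    // Blueprints, not C++).
    #include "GameFramework/Character.h"
    #include "PhysicalMaterials/PhysicalMaterial.h"

    void AMyCharacter::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);
        FootstepTimer += DeltaSeconds;

        // Branch: only step if the player is moving and enough time has passed.
        if (GetVelocity().Size2D() < KINDA_SMALL_NUMBER ||
            FootstepTimer < StepInterval) // ~2.2 seconds in the Blueprint
        {
            return;
        }

        // LineTraceByChannel: fire a line at the ground to confirm the player
        // is standing on something and to read its physical material.
        FHitResult Hit;
        FCollisionQueryParams Params;
        Params.bReturnPhysicalMaterial = true;
        const FVector Start = GetActorLocation();
        const FVector End = Start - FVector(0.0f, 0.0f, 200.0f);

        if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End,
                                                 ECC_Visibility, Params)
            && Hit.PhysMaterial.IsValid())
        {
            // Set Switch: tell Wwise which surface is underfoot.
            SetSurfaceSwitch(Hit.PhysMaterial->GetName());

            // Delay 0.2s, then Post Event: play Player_Footsteps on self.
            PostFootstepEvent(0.2f);
        }

        FootstepTimer = 0.0f; // reset the timer so the process can start again
    }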
With the system set up within Wwise, I went out to record some footsteps with my Zoom H4n portable recorder. I went to a field as far away from the roads in Hatfield as possible. While the weather was very windy that day, I still managed to get good grass footstep recordings by setting up my microphone on a tripod with a hill behind it. The hill acted as a wind block low to the ground where my feet were, which allowed me to keep the majority of the wind out of my recordings. I made multiple recordings of footsteps from various positions so that I would have a larger pool of source recordings to work with.

With my recordings collected, I next had to clean up the sounds. For this task I used iZotope RX. As my recordings contained a fair amount of noise throughout, the clean-up stage was a very important one when preparing sounds for use in game. The first step was to remove the lower frequencies using the EQ module. The footsteps' frequency content sits in the high-mid area, so I could remove a decent amount from the bottom end without affecting the content that I needed. The next step was to use Spectral De-noise to remove the unwanted background noise of distant traffic and wind. To do this I selected a section of noise from my recordings and used the learn feature to create a profile of the noise. I then selected the whole audio file and adjusted the parameters while monitoring the output noise only, to make sure I was removing only noise and not footstep content. This process left me with a clean recording of the footsteps, which I then loaded into Reaper to begin editing.

With the audio file imported into Reaper, I began by converting the stereo file down to just the left channel in mono. I did this because it would be much more difficult to realistically pan a stereo audio file in Wwise to simulate each leg. I then sliced up the file at the transients of the footsteps I liked the sound of, although some steps ended up being unusable due to bird sounds in the background. Once I had collected my samples and added fades to each one, I realised that some steps had more of a shoe impact, while others had more of a grass/foliage texture. Because of this, I decided to layer my favourite impact sounds with the best grass textures to create steps with more character and fullness. As a final step to add some variation, I automated a frequency shifter to slightly alter each step, making each one sound similar yet unique.

A great feature of Reaper is that it allows for batch exporting of multiple audio files at once, which is perfect for making footsteps! I created regions for each step, and then it's as simple as choosing to export all regions, with each one getting an automatic incremental file name. They are now ready for implementation into Wwise!
In the next post I will explain how I connected the Wwise footstep system to be usable within Unreal!

While not the most exciting part of a game's soundtrack, footsteps are an aspect that should not be overlooked, especially for first-person games like mine. They serve the obvious purpose of letting the player know that their character is moving, but they also carry more information than you might expect. They can tell you what surface you're walking on (gravel, grass, rocks, bones...) and will often be the first sound you hear with a different reverb applied when entering a new space, like a huge hall or a cramped cave. They make characters feel grounded, and in multiplayer games the player can listen out for an enemy's approaching footsteps to gain an advantage in a gunfight. As you can see, they do quite a lot!

Keeping all of that in mind, I began setting up my footsteps system in Wwise and Unreal. I started by creating the hierarchy in Wwise that would house my footstep sounds. In the above image, you can see that the hierarchy begins with a "Footsteps" Actor-Mixer. I use this mainly for housekeeping, keeping everything tidy within a contained structure, however I will also use it later to control gain and auxiliary sends when I get to the mixing stage. Next is the "Surface" Switch Container, an important component for footsteps as it allows switching of the material being walked on. The Switch Container interacts with Switches that I created in the Game Syncs tab. As you can see, I have created a "Surface" Switch Group containing Dirt, Grass and Stone, some of the materials that will be present in my game. If, for example, the player walks onto dirt in the game, the Switch Container will trigger the "Dirt" Sequence Container. This completes a set of actions in an order chosen by myself; in this case, a sequence that simulates right and left leg movement.

The above image shows the playlist for the Dirt sequence. First it plays the "Left Foot" Sequence Container, which consists of a clothing Foley sample followed by a random container that plays a random footstep sample from a pool of chosen dirt footstep samples. All of this happens when one instance of footsteps is called while the player is standing on dirt. It then follows the same pattern for the right foot the next time it is called. To give the illusion of left and right foot movement, I edited the speaker panning position ever so slightly to the right and left for each corresponding side, and slightly pitched up the right foot to give that side more distinct variation. As I have not recorded my footstep sounds at this stage, I am currently using placeholder samples while I ensure that everything works within Unreal Engine.
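To summarise the structure described above in one view (Dirt expanded; Grass and Stone follow the same pattern):

    Footsteps (Actor-Mixer)
      Surface (Switch Container, driven by the "Surface" Switch Group)
        Dirt (Sequence Container, alternates feet on each call)
          Left Foot (Sequence: clothing Foley, then random dirt footstep)
          Right Foot (same, panned slightly right and pitched slightly up)
        Grass (Sequence Container)
        Stone (Sequence Container)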
Next time, integrating into Unreal Engine!

Cosmic Horror is a subgenre of horror that emphasises the fear of the unknown to create mysterious stories. Pioneered by novelist H.P. Lovecraft, famous for works such as The Call of Cthulhu and At the Mountains of Madness, the genre can be characterised by 3 main elements:
Annihilation

Firstly I watched the film Annihilation, directed by Alex Garland. It follows a group of explorers who enter a mysterious area known as "The Shimmer", where plant and animal life is constantly being mutated by an extra-terrestrial force originating from a meteor landing. The clip above shows the characters entering "The Shimmer" for the first time. Warped and glitchy sounds can be heard coming from the forcefield structure, suggesting an otherworldly quality. As I watched on, there were some moments of Cosmic Horror where the characters come across mutated plant and animal life, however from a sound perspective I was disappointed. The sounds of these mutations were very much grounded in reality and, while this worked well for the film, it did not show the creative strangeness I was looking for.

The above scene shows a moment where an alien entity takes over the body of one of the characters. The visuals are very striking and the emotion of the scene is portrayed through the music, which works well. The sound effects, however, take a back seat to the music and are almost silenced during the most chaotic moment of the sequence. This works because the score blurs the line between music and sound design, with the instruments taking on a variety of strange and glitchy textures. As I wanted to hear how they tackled the sound effects in this scene, the film was not as useful as I had hoped. The musical approach could work for my project, however I don't feel it captures Cosmic Horror in the way I had envisioned.

Color Out of Space

I decided to watch this film, directed by Richard Stanley, as it is a modern adaptation of the H.P. Lovecraft novel of the same name. It has a similar premise to Annihilation, with a meteor falling to earth and causing strange happenings and aberrations. The difference here is that, instead of inducing mutations, the phenomenon causes the characters to gradually lose their minds as strange occurrences that can't be comprehended keep happening. Throughout the film, pink light represents the extent of what the characters can comprehend of the situation. The sounds that this entity makes are very otherworldly and can't really be pinned down as sounding like anything in particular. This works really well for the movie, as it reinforces to the audience that the entity is far beyond our scope of understanding as humans. I would like to take this approach to the sounds of the enemy in my game, utilising unlikely sound sources to create an otherworldly experience.

(An interesting, non-audio-related feature of this film: the colour pink, or magenta, was used to represent the entity because no single wavelength of light exists for the colour. We only see it because our brains stitch the frequencies together, creating magenta somewhere between violet and red. This is a great way to convey a "new" colour that has never been seen before!)

During one sequence in the film, one of the characters has a vision of what is assumed to be the alien planet the meteor came from. The world looks like a planet-sized organism with millions of tentacle-like arms along its surface. These are accompanied by fittingly squishy sounds, along with other unnatural audio drones, as the camera sweeps through the landscape.
Tentacles and other organic appendages are frequently featured in Lovecraft's works, most notably in his best-known story "The Call of Cthulhu", which features a building-sized squid-like monster. While my game will not include such visuals due to my limitations in the art department, I will attempt to convey Lovecraft's signature monster features through sound.

Why are there so few Cosmic Horror Films?
After watching these two films I began to understand why this genre is not very common. Due to the intrinsic nature of the genre, conveying something that cannot be comprehended in a visual format is always going to be a challenge. I do, however, think it could work far better by cleverly utilising and focusing on audio. The mind is easily tricked by audio, a technique used by any sound designer on a budget: cooking bacon can sound like a crackling fire, and snapping celery can sound like a breaking leg. By this logic, an offscreen (or, in my case, invisible) monster can be made to sound unfathomable through interesting layers of incomprehensible sounds that suggest a powerful otherworldly being.

Since my last post I have completed the design of the level and finalised the general gameplay. I created the maze within the tomb of my level and added events to trap the player, as well as a way to exit the tomb and complete the game. I chose to make a very simple maze that, while not particularly challenging on its own, becomes more difficult in the darkness, which can easily cause the player to get lost and disorientated. The addition of the chasing enemy will also add complexity and danger to what is in fact a very simple game.

The maze sequence begins when the player collects the statue near the entrance of the tomb, causing a large boulder to block the exit. I created this effect with a simple blueprint that moves the rock when the player interacts with the statue using the "E" key. This makes the statue disappear, suggesting that the character has collected it, and triggers the movement of the boulder. Later in the project I will add audio events for both the statue collection and the boulder movement, implemented via the blueprints. I added various hallways, doorways, false paths and pillars to the maze to hopefully slow the player down as they try to escape from the enemy. Using a similar blueprint to the statue collection, I created a button in one of the rooms that moves another boulder, this time allowing the player to exit the tomb and win the game. I placed this button at the furthest point from the start in order to challenge the player as they avoid the enemy.

The above image shows a top-down view of the whole maze area. The green area on the floor is the NavMesh, which represents the area the enemy can move within and is used to calculate how it should chase the player using pathfinding. Once the enemy senses the player, it will chase them along the shortest path within the green area. Setting this up is very important for my game: it will not only allow the enemy to chase correctly, it will also allow me to dynamically drive the enemy's audio based on how far away it is from the player along the shortest path. If my dynamic sounds were affected only by the straight-line distance to the player, there would be unwanted moments where the player is technically close to the enemy but a wall is in the way. In that case the actual path to the enemy is much longer, the danger is minimal, and the audio should reflect that.

The enemy in the game is currently represented by a large cube that fills the hallways of the maze. When the player walks near the enemy, it will relentlessly follow them until it either touches them, killing the player and resulting in a game over, or the player escapes and wins the game.
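On that path-distance point: in UE4 the length of the shortest walkable route can be queried from the navigation system and sent to Wwise. A rough sketch, where SetEnemyDistanceRtpc() is a hypothetical wrapper for the Wwise RTPC call and the class names are assumptions:

    // Sketch: drive enemy audio from pathfinding distance rather than
    // straight-line distance, so a wall between player and enemy reads
    // as "far away" instead of "right next to you".
    #include "NavigationSystem.h"
    #include "NavigationPath.h"

    void AEnemy::UpdateAudioDistance(APawn* Player)
    {
        UNavigationPath* Path = UNavigationSystemV1::FindPathToActorSynchronously(
            GetWorld(), GetActorLocation(), Player);

        if (Path && Path->IsValid())
        {
            // Length along the shortest route through the maze's NavMesh.
            const float PathDistance = Path->GetPathLength();
            SetEnemyDistanceRtpc(PathDistance); // hypothetical Wwise wrapper
        }
    }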
As the player cannot walk around the enemy in the narrow corridors, it will have to be kited around various obstacles and strategically manoeuvred out of the way. I am planning for the enemy to be invisible, so that the player has to listen carefully to their surroundings in order to locate it. This concept will need to be thoroughly tested, however, to ensure that it leads to fun gameplay rather than frustration. As a contingency, if I cannot get it to work as intended while invisible, I will explore other options, such as having the enemy be adaptively invisible or a flickering light source.
Or maybe I'll just keep the big terrifying cube!

Through the ideas stage of designing my game level, I decided that I wanted my cave section (where the enemy will reside) to include horror elements. Following this decision, I began looking for inspiration in various media to find out exactly how horror is achieved through audio. I started my research with Layers of Fear by Bloober Team.
In Layers of Fear you play as a disturbed painter trying to complete his most important piece of work, his "Magnum Opus". Due to his mental instability, scary and strange things begin to occur as he traverses his home, causing frights for the player. The gameplay is simple: the player mostly walks around a large mansion-style house, interacting with doors and items. While the game doesn't include the sort of enemy I am planning for my game, the way the ambience and sparse use of sound effects build tension as you walk through the building is something I am looking to recreate.

The initial area of the game is the porch of the building, where the audio is completely diegetic, consisting of muffled rain and thunder from outside and footsteps when the player moves. This grounds the player in the game world, making it feel believable straight away. As soon as you enter the foyer, the peaceful yet slightly unnerving non-diegetic piano music begins. From this point the player is able to explore some of the unlocked rooms of the house at their own pace. As the music is constantly present throughout this section, it gives the player a sense of security in what is a creepy and lonely environment.

The way tension is built is first showcased when the player makes their way down to the basement. Upon descending into the darkness, the music cuts out completely. This instantly instils dread in the player, heightening their senses as they become very aware of their surroundings. After clambering around in the basement and hearing the odd rustle and squeak of a rat, it becomes apparent that this moment is ultimately a red herring. After leaving the basement, the music returns to the mix. As an introduction, this is an effective way of putting the player slightly on edge, a taster of what is to come.

Progressing further into the game, the ability to freely explore is taken away as the perceived architecture of the building warps into a more linear experience while the character's mind deteriorates. The music shifts from the coherent classical piano tune to a more abstract, drone-based soundtrack. At this point the game uses a similar technique to build tension, now with added jump scares to frighten the player. I noticed an audio sequence looping within the general ambience while playing. It would begin with louder environmental diegetic sounds alongside non-diegetic drones in the background. This would then quieten down as the player approached an interactable such as a door. Once the door is opened, there is either a loud jump scare with a scary visual to match, or the soundtrack reverts to the initial state with the drones. Varying the outcome of these events subverts the player's expectations and attempts to avoid repetitiveness. However, I found that the trick led to a jump scare more often than not, and I eventually began to anticipate them, reducing their effectiveness.

Overall the soundscape of the game is very impressive. I particularly enjoyed the use of silence when approaching uncertainty. Unfortunately the game often leans on clichéd jump scares and audio assets, such as the frequent use of dry ice on metal to create screeches, which I personally think is overused in horror. In my project I will take inspiration from the sparse sound design during quiet moments to build a false sense of security, ready for when I need to ramp up the tension.
The main focus of my project is the audio (asset creation, implementation and innovation). With that in mind, I needed to spend as little time as possible on the level and game design elements. I knew that I wanted the level to begin in a forest area, progress into a cave, and then lead to a hidden temple.

For a quick way to generate a game area, I used the landscape tool and carved out the gameplay zone, with mountains at the edges acting as the bounds of the level. I then applied a grass material to the landscape and painted on materials to simulate a dirt path and the outline of my cave. Using materials in Unreal Engine will come in handy later when I add my footstep sounds, as I will be able to switch the surface sounds based on the material being walked over.

The next step was to create my cave. I used simple geometry to map out the area and then surrounded it with pre-made rock assets. I sped up this process by reusing multiples of the same rock shape and rotating them, giving the illusion of unique rock formations. Once this was completed, I finished off my starting area by adding trees and a small lake to create a forest environment. While adding these details is mostly unnecessary, I wanted to include them as inspiration for my final forest soundscape.

Within the cave there is almost no available light, preventing the player from being able to see. While I like this idea to an extent, as limited visuals will allow me to rely on my audio abilities more, complete darkness would make the game near unplayable. With this in mind, I created a simple torch light using blueprints that can be toggled on and off. Within the cave and temple this will be the primary source of light for the player.

The final stage of my level design was to create the temple area, where the bulk of the game will take place. Using geometry I created a large enclosed box that will house the dungeon-style gameplay area where the enemy will hunt you down. Once the maze interior is completed, my level will be finalised and I can fully focus on the audio-orientated gameplay aspects of the game!
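As an aside on the torch toggle described above: it was built in Blueprints, but an equivalent in C++ is tiny. A sketch, assuming a hypothetical "ToggleTorch" input action and a TorchLight spotlight component set up elsewhere in the character:

    // Sketch: a toggleable torch, flipped on/off by a key press.
    #include "Components/SpotLightComponent.h"

    void AMyCharacter::SetupPlayerInputComponent(UInputComponent* Input)
    {
        Super::SetupPlayerInputComponent(Input);
        // "ToggleTorch" is a hypothetical action mapping bound to a key.
        Input->BindAction("ToggleTorch", IE_Pressed, this,
                          &AMyCharacter::ToggleTorch);
    }

    void AMyCharacter::ToggleTorch()
    {
        if (TorchLight) // USpotLightComponent* created in the constructor
        {
            TorchLight->ToggleVisibility(); // flips the light on/off
        }
    }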
While messing around with the first-person template in UE4, I created a simple enemy AI that follows the player throughout the level until it reaches them. This gave me the idea of attaching a sound to the enemy so that you can hear where it is coming from. Building on this, I thought that having the sound not only get louder but also take on more of a stereo presence as the enemy gets near would let the player locate the enemy realistically through sound. I also figured out a way to have the audio trigger an event (a stop, in this case) on contact with the player. With these things in mind, I have decided to create a game where the player is chased by an invisible foe (maybe a monster?) that can only be heard. The gameplay will come from the player avoiding the enemy using their ears while making their way through a level, such as a cave or tomb, to escape.
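The contact trigger mentioned above maps naturally onto an actor overlap event. A minimal sketch; PostStopEvent() is a hypothetical wrapper for whatever Wwise event stops the enemy's loop, and the "Player" tag is an assumption:

    // Sketch: trigger the audio stop event when the enemy touches the player.
    void AEnemy::NotifyActorBeginOverlap(AActor* OtherActor)
    {
        Super::NotifyActorBeginOverlap(OtherActor);

        if (OtherActor && OtherActor->ActorHasTag(FName("Player")))
        {
            PostStopEvent(); // posts the Wwise event that stops the loop
            // ...game-over handling would follow here.
        }
    }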
On to the level design next!
Matt Burrows, documenting my final year at university studying sound design.