Sonic Innovations

By Joe Thom, Sep 29 2016 09:53AM

I had a great time recording some sheet metal impacts and scrapes the other day. They're destined for a destruction ambience in the opening area of INT, where they'll be part of the environmental sound that sets the scene of an apartment building being destroyed around you as you try to escape.



The mics I used were as follows:


Neumann U87 - for a nice, clear, transparent layer.


Pair of Neumann KM184s - to allow for some interesting movement. I placed these pretty wide so I could get creative with the stereo image.


Miniature DPA - taped to the metal to get nice and close. I was hoping for some proximity effect from this one.


2 x contact mics - I wanted to try to capture the resonances of the metal with these.


So I'm pretty pleased with how the recordings turned out. I got plenty of material and loads of pleasant surprises from the metal. I particularly liked some of the interesting stereo effects from moving the metal around as it was resonating.


Here are a few small examples of what came out of the session, including a processed version of a scrape. To process this I used pitch shifting, iZotope Trash 2 and Waves RVerb. This processed sound is indicative of what will be going in-game for INT.




Thanks for reading!




By Joe Thom, Aug 4 2016 11:51AM

I was in need of some snowy footsteps for a new portfolio piece I'm working on, and as bad as the weather usually is in the north of England, unfortunately there's still not a lot of snow around.


I read up, and was advised by the folks over at the Game Audio Slack channel, that cornflour can make a good snowy crunch sound when walked on or handled, so I thought I'd give it a go.


I needed to find something to put the cornflour in that wasn't going to make a noise when handled, so I decided on a condom... and I think it turned out pretty well!


The mics I used were a Neumann U87 and a pair of Neumann KM184s.



I wanted a stereo version so that I could have more creative design options in the future.


Listening back to the recordings, I felt that I may have got a little too close to the mics and inadvertently introduced some proximity effect, so the examples below have had a high-pass filter applied at around 190Hz.
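
If you'd rather do that kind of clean-up in code than in a DAW, here's a minimal sketch of an equivalent high-pass using SciPy, assuming a mono float signal (the cutoff and filter order are just sensible defaults, not anything special):

```python
from scipy.signal import butter, sosfiltfilt

def tame_proximity(x, sr, cutoff_hz=190.0, order=4):
    """Roll off proximity-effect bass below ~190Hz."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
    return sosfiltfilt(sos, x)  # zero-phase filtering, so transients stay put
```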


U87




KM184s





I'm fairly pleased with the results, but given more time (unfortunately my time in the studio was fairly limited with this attempt) I'd try recording from further away to avoid having to filter in post. I'd also like to experiment with different materials to hold the cornflour in, and to try actually walking on it as opposed to just mangling it with my hands.


I'd also try not to accidentally pour cornflour all over the studio floor next time:




As always thanks for reading!




By Joe Thom, Jul 22 2016 01:41PM

I was recently lucky enough to give a talk at Game Audio North about my work on the Spatial Invaders project. I speak about how you can communicate between Max/MSP and the Unreal Engine using OSC messages, and how we used this in the project. You can watch it here:
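
For anyone who hasn't met OSC before: a message is just an address pattern plus typed arguments sent over UDP. As a hedged illustration (not the project's actual code - that lived in Max/MSP and Unreal), here's how you'd fire equivalent messages from Python with the python-osc package; the port and address patterns are made up for the example:

```python
from pythonosc.udp_client import SimpleUDPClient

# Wherever your OSC listener is bound (e.g. a udpreceive object in Max,
# or an OSC plugin inside the Unreal Engine).
client = SimpleUDPClient("127.0.0.1", 8000)

# Address pattern plus arguments; the receiver routes on the address.
client.send_message("/invader/position", [0.5, 1.2, -3.0])  # x, y, z
client.send_message("/invader/hit", 1)
```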




By Joe Thom, Feb 7 2016 11:28AM

I'm involved in a few really exciting projects at the moment, which I'll share here whenever I can. But for now I thought I'd offer a quick first look at a work in progress of one of my own projects.




"SoundSphere" is an experimental, physics based, generative audio system, built entirely within Unreal Engine 4. The user is placed at the edge of a 3d space within which they can add spheres at any location of their choice. Each sphere has a sound attached to it, which it adopts from the last virtual wall that it come into contact with. The user can then add spheres on top of one another to cause the spheres to bounce around the space, offering interesting and evolving soundscapes. The user can also remove spheres at will.


The system is currently set up to apply a low-pass filter to the droning sounds of the spheres that depends upon the number of collisions happening in the 3D space at that time: the more collisions, the less low-pass filtering. This means that the on-screen action can be mimicked by the soundscape that is created. Another way in which this is achieved is with varying levels of impact dependent upon the speed at which a sphere is moving when it collides with a wall.
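
As a hedged sketch of that mapping (the numbers are invented; the behaviour above is what matters, not the exact curve), collisions per second can be normalised and interpolated in log-frequency so the filter sweep sounds even across its range:

```python
def drone_cutoff(collisions_per_sec, min_hz=400.0, max_hz=18000.0,
                 fully_open_at=20.0):
    """More collisions -> higher cutoff -> less low-pass filtering."""
    t = min(collisions_per_sec / fully_open_at, 1.0)
    # Interpolate in log-frequency: equal steps sound equally big.
    return min_hz * (max_hz / min_hz) ** t
```

Smoothing the collision count over a short window before feeding it to the filter would also soften abrupt cutoff jumps.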


This project is definitely a work in progress, so there are still some bugs, such as occasional clicks and pops. I believe these are caused by filter overload when the low-pass filter on the drones is altered.


Moving forward, I intend to add functionality for the user to add their own samples into the system. I also plan to place the user in the middle of the 3D space and add effects such as Doppler to offer a more immersive 3D environment. This could be interesting for uses such as VR, or even the sound design of 3D environments with moving sounds.


But for now here is a work in progress video:




Thanks for reading. Don't forget to follow me on Twitter @thatjoethom


Until next time.


By Joe Thom, Dec 24 2015 01:19AM

Recently I was lucky enough to travel down to an RAF training area in Wales to record fighter jet passbys. It was pretty great.


Given that this was my second attempt (the first was cancelled due to snow) I was extremely happy with the results.


I used a Zoom H6 portable recorder's onboard stereo mic and a Rode NTG2 shotgun. Here are some results from the day!




I'm thrilled that some of these recordings will be used in the book "Game Audio Implementation", which is available right here.


Thanks for reading / listening.


By Joe Thom, Oct 28 2015 09:33AM

As discussed in the previous post, "Non-repetitive sound design in video games", one of the great challenges of game audio is creating realistic sound that not only suits the environment of a game, but does so without repeating itself and without breaking the game's memory budget.


We have already discussed how non-repetitive design can work toward achieving this. But as effective as non-repetitive sound design methods are, they cannot create completely original sounds that have never been heard by the player; they can only alter existing sounds to form variations, or layer existing samples. A possible exception is the example of using oscillation and/or pitch shifting on machine hum to create a completely different object sound.


So is it possible to generate completely new but appropriate sounds every time one is needed? Yes. Procedural audio can work in a number of ways, with several distinct approaches to maximising variation and realism.


What?


Although there are different approaches to procedural audio, which will be discussed later, the majority of descriptions are similar (Fournel 2012; Farnell 2007; Collins 2009; Verron 2012). Andy Farnell, a pioneer of one approach to procedural audio, states that "Procedural audio is non-linear, often synthetic sound, created in real time according to a set of programmatic rules and live input" (Farnell 2012). In other words, procedural audio is the process of sound being created based upon a number of user-defined rules which can be adapted in real time. Farnell also states in his book 'Designing Sound' that as procedural audio is such a broad term and is often used in different ways, it is sometimes easier to say what it is not; in this vein, procedural audio is not the linear creation or playback of sound.


Why?


One reason to use procedural audio is the age-old and all-important issue of memory. Whilst it may be argued that memory is becoming less of an issue with newer, higher-powered platforms, some would say that as consoles and gaming PCs become more powerful, consumers' expectations rise, so memory is just as much an issue now as it has ever been.


Another reason is, of course, the possibility of avoiding repetition. Not only is it possible to create unique sound effects every time one is triggered, but these sound effects can also be adapted to what is shown on screen. For example, in the forthcoming, completely procedurally generated 'No Man's Sky' from Hello Games, sound is created based upon the rules that are used in the generation of the game's creatures. Sound designer Paul Weir passes these rules into a physically modeled vocal tract, which includes a virtual mouth, larynx and vocal cords, to create entirely unique sounds for every procedurally generated creature in the game. "Rather than working against the game's algorithmic chaos, he embrace(s) it" (Khatchadourian 2015). Read this article in The New Yorker for more information and samples of the audio. Paul also gave a presentation at a proceduralaudionow.com meetup earlier this year on the pros and cons of procedural audio.


Another example of how procedural audio may be utilised in this way comes from Verron & Drettakis, in the form of their Audio Engineering Society presentation in 2012. Their paper "Procedural audio modeling for particle-based environmental effects" outlines the creation of a sound synthesizer that "simultaneously drive(s) graphical parameters and sound parameters", resulting in a "tightly-coupled interaction between the two modalities that enhances the naturalness of the scene" (Verron & Drettakis 2012). Here is a video example of the synthesizer in action.




But why is procedural audio predominantly used for video games, and not other, more traditional forms of media? In her 2009 paper "An Introduction to Procedural Music in Video Games", Karen Collins suggests that video games are an "ideal media form for procedural (audio)" as "many elements of gameplay—especially the timing of events and actions—are unpredictable and occur in a non-linear fashion" (Collins 2009). Andy Farnell, though, suggests that procedural audio also has its uses in film.


How?


It is generally considered that there are two main schools of thought on how procedural audio should be generated. We will refer to these methods as "Bottom up" and "Top down".


Bottom up approach


Pioneered by Andy Farnell, the bottom up approach to procedural audio is based upon the idea that sound effects can be created from nothing, or "generated from first principles, guided by analysis and synthesis" (Farnell 2010). Farnell argues that by utilising the bottom up approach, a sound designer can create 'sound objects' which, unlike audio recordings, can be kept and changed in real time, and can mimic the unpredictable nature of real world sounds.


In the book 'Designing Sound', Farnell suggests the use of Pure Data (PD) to create such 'sound objects'. PD is an open-source visual programming language which enables the development of sound-based software without the need to write code.


Throughout Designing Sound, Farnell adopts a scientific approach to dissecting numerous sound sources such as fire, running water, motors and explosions. Farnell dissects an explosion into the following elements:


> Early ground waves (pre-rumble or dull thud)

> Shock front (dilated N-wave)

> Burning gasses from a moving fireball (phasing / roaring)

> Collisions and fragmentation (noisy textures)

> Relativistic shifts in frequency (time dilation and compression)

> Discrete environmental reflections (echo and reverb)


By focusing on the creation of each of these individual elements, it is possible to create a Pure Data patch which can be adapted to suit different forms of explosion, and offers endless variation.
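
Farnell builds these layers as Pure Data patches; purely as a hedged illustration of the decomposition (this is not his model), here's a NumPy sketch that approximates three of the layers as filtered noise bursts with exponential decays. All cutoffs, decays and gains are invented:

```python
import numpy as np
from scipy.signal import lfilter

SR = 44100  # sample rate in Hz

def noise_layer(dur, cutoff_hz, decay, gain):
    """A noise burst through a one-pole low-pass, with an exponential decay."""
    n = int(dur * SR)
    noise = np.random.uniform(-1.0, 1.0, n)
    a = np.exp(-2.0 * np.pi * cutoff_hz / SR)        # one-pole coefficient
    filtered = lfilter([1.0 - a], [1.0, -a], noise)
    return gain * np.exp(-np.arange(n) / (decay * SR)) * filtered

# Three of the layers above, very crudely approximated:
rumble = noise_layer(2.0, 80.0, 0.60, 1.0)    # early ground waves / dull thud
roar   = noise_layer(2.0, 600.0, 0.35, 0.7)   # burning gasses / roaring
debris = noise_layer(2.0, 4000.0, 0.25, 0.4)  # collisions / fragmentation

explosion = rumble + roar + debris
explosion /= np.abs(explosion).max()          # normalise before writing out
```

Randomising the durations, cutoffs and decays per trigger is what buys you the endless variation.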


Drawbacks of the bottom up approach include the fact that a huge amount of work is required to develop a system of synthesis for a single sound effect. This, coupled with the need for an in-depth understanding of sound propagation principles such as reflection, dispersion and oblique boundary loss, unfortunately puts this approach out of the reach of all but the most technologically and scientifically savvy sound designers.


Another common criticism of the bottom up approach is that, currently, it is very difficult, if not near impossible, to create a sound that wholly resembles the real thing.


So whilst it is an exciting and academically satisfying art form, this method of procedural audio still seems to be out of our grasp. But perhaps as it improves and evolves, sound designers will become more like programmers, and utilise scripting to create more unique and varied sounds. If you'd like to have a listen to some effects made with Pure Data, Andy Farnell's website has a selection of examples, plus some tutorials that you can work through yourself.


Top down approach


The top down approach, in line with how it sounds, is quite the opposite of bottom up. Whilst in 'bottom up' the aim is to recreate each minute detail of a sound through coding and synthesis, when applying a top down approach the first step is to find or create a complete, pre-recorded/designed sound, and to work down from this.


In an interview with Designing Sound, Nicolas Fournel detailed the Spark procedural audio system that he built for Sony. He states that with Spark "You can create procedural models very quickly by analysing existing samples, extracting the features of interest, and then finding a way to model them".


According to Fournel, the analysis system could be split into three main categories: audio generators, event generators and update modules. He then explains that you could extract transients, pitch contour, amplitude envelope and spectral flux, amongst other parameters, to build a model of a 'reference' sound. The tool can then create a model based upon this data and render countless variations of sounds from it.
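
Spark itself is proprietary, so just to make the analysis step concrete, here's a hedged sketch of extracting a couple of those features from a reference recording with an STFT (the function name and parameters are mine, not Fournel's):

```python
import numpy as np
from scipy.signal import stft

def analyse_reference(x, sr, nperseg=1024):
    """Pull a few 'features of interest' out of a reference sound."""
    freqs, _, Z = stft(x, fs=sr, nperseg=nperseg)
    mag = np.abs(Z)
    envelope = mag.sum(axis=0)                               # amplitude envelope
    flux = np.sqrt((np.diff(mag, axis=1) ** 2).sum(axis=0))  # spectral flux
    pitch_contour = freqs[mag.argmax(axis=0)]                # crude: loudest bin
    return envelope, flux, pitch_contour
```

A resynthesis stage would then drive oscillators or grain players from these curves, jittering them to render fresh variations.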


Fournel goes on to argue that this top down method greatly reduces the number of modules (snippets of visual scripting code) that would be required in comparison to using a bottom up approach, and states that with Pure Data you are provided with "all the elementary modules you might ever need" and are expected to "go learn about probability distributions, go learn about modal synthesis, and build everything from scratch" (Fournel 2012).


Whilst the top down approach is attractive, it does of course have its drawbacks. The main one of these is that to create, or gain access to, such a system as a sound designer is extremely difficult (unless you work for an organisation that happens to use one). The Spark system discussed above is a proprietary system designed for Sony which is not available for public use, and to create such a system would require an immense amount of programming knowledge.


So this leads us to somewhat of an impasse. Whilst audio middleware such as Audiokinetic's Wwise has made steps toward making procedural audio more attainable and user-friendly for sound designers (see SoundSeed), the tools are unfortunately still lacking in variation and are generally geared toward certain types of sounds. Larger developers are certainly coming round to the idea - Rockstar, in GTA V for example, used forms of granular procedural audio for vehicles - but smaller studios may unfortunately, for the time being, have to continue on a more traditional (albeit effective) path until further developments are made.


Thanks for reading, don't forget to follow me on Twitter at @thatjoethom. Until next time!


Collins, K. (2009). An Introduction to Procedural Music in Video Games. Contemporary Music Review 28, 5–15.


Farnell, A. (2010). Designing Sound. MIT Press, Cambridge, Mass.


Khatchadourian, R. (2015). What a Dinosaur's Mating Scream Sounds Like [Online]. The New Yorker. Available from: <http://www.newyorker.com/tech/elements/what-a-dragons-mating-scream-sounds-like> [Accessed 10/10/2015].


Nair, V. (2012). Procedural Audio: An Interview with Nicolas Fournel [Online]. Designing Sound. Available from: <http://designingsound.org/procedural-audio-an-interview-with-nicolas-fournel> [Accessed 12/10/2015].


Verron, C., Drettakis, G. (2012). Procedural audio modeling for particle-based environmental effects. In: Audio Engineering Society Convention 133, 26/10/2012, San Francisco, USA. Audio Engineering Society.









By Joe Thom, Oct 13 2015 11:30AM

One of the many things that makes game audio so interesting is the challenge of how a sound designer may find ways to draw a player into their world and have them feel as though they could really be in the environment that they are creating. One of the obvious ways to do this is to make it sound real – providing you’re working on a “realistic” game that is.


One of the issues at the heart of creating a realistic ambiance is that in nature, or real life, there is almost no repetition. We may get away with looping mechanical sounds, or using the same few samples for various UI sounds so the player knows that the buttons they're pressing are working, but how do we get around those sounds where using a loop, or the same couple of samples over and over, is not going to cut it?


"Nothing breaks immersion in a game more than hearing exactly the same sounds/samples being used repeatedly, as few sounds in the natural world repeat in this way" (Stevens & Raybould 2011).


Unfortunately, we can't yet store an infinite amount of data in game memory - so we're working to a budget. And although the procedural generation of sound has begun to be rolled out to middleware such as Wwise, it is not yet possible to use it on every sound. So we have to use a form of trickery... non-repetitive design.


How do we stop a sound from becoming repetitive?


Use more than one sound: It's simple, but this may be considered the first step in non-repetitive sound design. When combined with the following methods, the simple act of using two or three sound variations as opposed to one multiplies the number of possible outcomes - three footstep samples layered with three scuff samples, for example, already gives nine combinations before any modulation is applied.


Modulation: Most game engines and middleware (UE4, Unity, FMOD, Wwise) feature some form of random modulation parameters. These may be applied, for example, to pitch or volume at varying intensities to add more variety to your sounds when they are repeated.


DSP Envelopes: ADSR envelopes can be applied to sounds. The envelope settings can be changed for each sound, once again increasing variation.


Concatenation: This process involves splitting a larger file into smaller chunks and rearranging the chunks in a random order. This can be particularly useful for creating non-repetitive loops.
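
A minimal sketch of the idea, assuming a mono NumPy buffer (a real system would crossfade the joins, and ideally cut at zero crossings, to avoid clicks):

```python
import numpy as np

def concatenated_loop(x, sr, chunk_secs=0.5, out_secs=30.0):
    """Split a recording into equal chunks, reassemble in random order."""
    chunk = int(chunk_secs * sr)
    chunks = [x[i:i + chunk] for i in range(0, len(x) - chunk + 1, chunk)]
    picks = np.random.randint(0, len(chunks), int(out_secs / chunk_secs))
    return np.concatenate([chunks[i] for i in picks])
```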


Oscillators: These can be applied to pitch, volume or filter cutoff, amongst other parameters, to increase variation.


Starting playback at different points in the sample: Starting playback of a sound at different points can drastically alter the character of the sound, especially when the initial sample has an obvious transient.


Phasing loops: Use two different loops of varying lengths on top of each other - two different river loops, for example. By the time they have looped long enough to both start at the same time again, the player will almost certainly have forgotten what the sound was when they first started playing.
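
A quick way to see why this works: the combined texture only truly repeats when both loop lengths divide the elapsed time, i.e. at their lowest common multiple. For example (whole-second lengths for simplicity):

```python
from math import lcm  # Python 3.9+

loop_a, loop_b = 23, 31     # two river loops, in seconds
print(lcm(loop_a, loop_b))  # 713 - nearly 12 minutes before they realign
```

Choosing lengths with no common factors maximises the time before the combined pattern repeats.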


Another possibility for non-repetitive design is to utilise adaptive audio. Adaptive audio "reacts appropriately to— and even anticipates—gameplay rather than responding directly to the user" (Whitmore 2003). For example, an adaptive audio system may change the pitch of the music in the game dependent upon the protagonist's health level.


Here is an example of a system which utilises some of the above methods to create a non-repetitive natural ambiance. This system is built within an Unreal Engine 4 sound cue.



Whilst saving memory is a big part of non-repetitive sound design, it can also be used to fantastic creative effect in increasing a player's immersion. One such example of this is Playdead's Limbo, which Stephan Schütze (owner of The Sound Librarian) once described during a GDC talk as having "The best footsteps, in any game ever. Period" (Schütze 2014).




The footsteps in Limbo were created from separate heel and toe samples, with the silence between each one altering dependent upon the speed at which the boy is moving. Each element of the footstep is also subject to subtle pitch and volume modulation.
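
Here's a minimal sketch of that heel/toe scheme, just to make the mechanics concrete (the pools, ranges and gap formula are all invented for illustration; the talk describes the behaviour, not Playdead's code):

```python
import random

def footstep(speed, heel_pool, toe_pool, base_gap=0.12):
    """Return (sample, delay, pitch, volume) events for one step."""
    gap = base_gap / max(speed, 0.1)  # faster boy -> tighter heel/toe gap
    step = []
    for pool, delay in ((heel_pool, 0.0), (toe_pool, gap)):
        step.append((random.choice(pool),             # vary the sample...
                     delay,
                     random.uniform(0.95, 1.05),      # ...the pitch...
                     random.uniform(0.90, 1.00)))     # ...and the volume
    return step
```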


Another interesting aspect of the footstep implementation in Limbo was discussed in an interview with sound designer Martin Stig Andersen where he explained that the “footstep sounds were also subjected to some quite sophisticated passive mixing strategies. For example, the footstep sounds start attenuating gradually after the boy has been moving continuously for a shorter period of time, and regain in amplitude when he’s standing still.” (Andersen, 2011) As the entirety of Limbo is spent running around as the protagonist, this implementation strategy goes a long way to ensuring that the footsteps do not become irritating for the player. This technique is also utilised when the character steps onto a new surface.


Undoubtedly, over time, memory budgets will increase and we will be able to place millions of individual samples into a game engine, but this does not in any way mean that we should stop pushing to implement non-repetitive sound in interesting ways. After all, as Matthew Marteinsson (audio director at Klei Entertainment) said in a recent Designing Sound article, we can "Take restrictions and build something greater because of them" (Marteinsson 2015), and Limbo is a perfect example of this.


References / further reading.


Bridge, C. (2012). Creating Audio That Matters [Online]. Available from: <http://www.gamasutra.com/view/feature/174227/creating_audio_that_matters.php> [Accessed 12/10/2015].


Collins, K., et al. (2007). An introduction to the participatory and non-linear aspects of video games audio. Essays on Sound and Vision, 263–298.


Whitmore, G. (2003). Design With Music In Mind: A Guide to Adaptive Audio for Game Designers [Online]. Available from: <http://www.gamasutra.com/view/feature/131261/design_with_music_in_mind_a_guide_.php> [Accessed 13/10/2015].


Islwyn, S. (2015). What is an Audio Envelope? [Online]. Available from: <http://www.ehow.com/info_8605694_audio-envelope> [Accessed 13/10/2015].


Jennett, C., Cox, A., Cairns, P., Dhoparee, S., Epps, A., Tijs, T., Walton, A. (2008). Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies, 641–661.


Kastbauer, D. (2011). "Limbo" – Exclusive Interview with Martin Stig Andersen [Online]. Designing Sound. Available from: <http://designingsound.org/2011/08/limbo-exclusive-interview-with-martin-stig-andersen/> [Accessed 13/10/2015].


Marteinsson, M. (2015). How I Learned to Stop Worrying and Love the Restrictions [Online]. Designing Sound. Available from: <http://designingsound.org/2015/10/how-i-learned-to-stop-worrying-and-love-the-restrictions/> [Accessed 13/10/2015].


Schmidt, B., Brandon, A., Kastbauer, D., Schütze, S., McDonald, D. (2014). GDC 2014: Demo Derby: Sound Design. San Francisco: The Game Developers Conference.


Stevens, R., Raybould, D. (2011). The Game Audio Tutorial: A Practical Guide to Sound and Music for Interactive Games. Amsterdam; Boston: Focal Press/Elsevier.


By Joe Thom, Oct 6 2015 11:04AM

Audio in games can facilitate a number of functions. Sound may instruct the player in what action to take next. It may provide feedback when an event has (or has not) occurred, or a game may include audio-based mechanics, such as those in rhythm-action titles. This post will focus upon one of the ways that audio has been utilised in a recent title: as a function to provide the player with an indication, or feeling, of power.



In a talk at GDC 2015, Monolith's audio director Brian Pamintuan stated that the audio team's main focus whilst working on Shadow of Mordor was to drive the game's emotion through sound and place high priority on "Emotional Resonance". The interpretation of power that the player draws from Shadow of Mordor's Music Combat system is one example of how the team achieved this goal.


What is the Music Combat system?




This short excerpt from Brian's talk at GDC earlier this year demonstrates the Music Combat system. Short musical stingers that are in keeping with the timbre of the battle music accompany the impacts or whooshes of any attacks to or from a stronger enemy. This system is a legacy technique that was reinstated from the studio's Condemned franchise.


Whilst a relatively simple system, both ideologically and in terms of implementation, complementary stingers such as these can be highly effective in evoking an emotional response and making the player feel more of a sense of power. Through focus testing, Monolith found that the majority of players could not tell the difference between the game when it featured the Music Combat system and when it did not. However, when prompted, players stated that when the system was included they felt that the enemies were more powerful, or their weapons were stronger.


Why does it work?


Whilst research into how music or short musical stingers can induce a sense of power is scarce, the way in which music can influence emotion has been a subject of debate for many years (e.g. Behne 1997; Liljestrom 2011).


Whilst this is a particular topic of debate, very few have suggested specific psychological mechanisms which may trigger an emotional response to music. In their 2008 paper 'Emotional Responses to Music: The Need to Consider Underlying Mechanisms', Juslin & Västfjäll suggest six psychological mechanisms that they believe may be involved in the influence of music upon emotion. These are as follows:


Brain Stem Reflexes - "one or more fundamental acoustical characteristics of the music are taken by the brain stem to signal a potentially important and urgent event".


Evaluative Conditioning - "a process whereby an emotion is induced by a piece of music simply because this stimulus has been paired repeatedly with other positive or negative stimuli".


Emotional Contagion - "a process whereby an emotion is induced by a piece of music because the listener perceives the emotional expression of the music, and then “mimics” this expression internally".


Visual Imagery - "a process whereby an emotion is induced in a listener because he or she conjures up visual images (e.g., of a beautiful landscape) while listening to the music. The emotions experienced are the result of a close interaction between the music and the images".


Episodic Memory - "a process whereby an emotion is induced in a listener because the music evokes a memory of a particular event in the listener's life".


Musical Expectancy - "a process whereby an emotion is induced in a listener because a specific feature of the music violates, delays, or confirms the listener’s expectations about the continuation of the music".


Juslin & Vasftjall go on to say that the emotional response to music can be made up from any number of these core mechanism working in conjunction with one another. For the Music Combat application the mechanisms of particular interest may be Brain Stem Reflexes - the stingers themselves can seem powerful due to instrumentation and timbre, Evaluative conditiong - if the Music Combat system is utilised repesatedly on more powerful enemies, the player will subconciously begin to relate these musical motifs with power, and Emotional Contagion - working similarly to Brain Stem Reflexes.


Other factors to consider may be the way in which music has been used historically in warfare as a psychological weapon. According to this article on historynet.com, one of the earliest records of music being used in this way appears in chapter 6 of the Old Testament's book of Joshua, with a detailed description of the use of rams' horns against Jericho. Earlier applications of this musical weapon did, in all likelihood, occur with the use of tribal drums and earlier musical instruments.


This approach to intimidation in warfare further bolsters the opinion that music can have a tangible effect upon a person's emotions and sense of power, thus adding to the reasons that Monolith's Music Combat system may have been so effective.


References


Behne, K. E. (1997). The development of "Musikerleben" in adolescence: How and why young people listen to music. Emotional Responses to Music, 612, pp. 143–59.


historynet (2006) The Music of War [Online]. Available from: <http://www.historynet.com/the-music-of-war.htm> [Accessed 06/10/2015].


Liljestrom, S. (2011). Emotional Reactions to Music: Prevalence and Contributing Factors. Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Social Sciences, 67.








By Joe Thom, Sep 29 2015 09:24AM

Whilst I have made a few blog entries so far, it may be a good idea to introduce myself and speak a little about my background and particular areas of interest.


I studied a Music Technology degree at Leeds Beckett University as a 'mature student' and graduated with a 2:1 in the summer of 2014. It wasn't until the third year of study, however, that I thought: "Hang on, people need people to make sound for their video games. That would be very cool." So I elected to study the third year module, Interactive Audio.


Throughout this study I enjoyed the VERY steep learning curve of switching from writing and producing linear music to focusing on field recording, sound design and implementing interactive sound with UDK. I enjoyed this transition so much that for the past two years I have devoted the vast majority of my time to honing my game audio skills and pushing to make a career in the industry. I plan to keep a log here of my successes, and more likely failures, on the rocky road to getting there!


Whilst I am set on becoming a game audio professional, I also understand the importance of maximising possible revenue streams whilst striving to achieve this goal. In this vein I create sample libraries to sell through my online store at sonniss.com, and also sell individual sounds through a store at pond5.com. Whilst this does supplement my income (a little bit), it is not yet quite enough, so I am also actively seeking freelance work.


In the push to create interesting and relevant sample libraries (and a library of my own) I have been lucky enough to travel for several months around South America and Thailand to gather jungle ambiances and nature sounds (libraries coming soon!). This travel also helped me to settle on one of my main areas of interest in game audio: creating natural, non-repetitive ambiances.


It also meant I got to do stuff like this:



and this:



Ever want to record a monkey? Get a fluffy windshield... trust me.


My focus on game audio has led to me joining an indie dev team called Ascendence Studios, where I am the sole sound designer working on their first title, Wrath of the Goliaths: Dinosaurs. This project has enabled me to put what I learnt in my degree into practice, and to begin to discover the benefits, and of course the issues, that arise from working for a small indie startup. It has also prompted me to think about how I can begin to work along with current industry trends such as procedural audio, and palette-based and modular sound design. I believe these methods will become invaluable to the project, seeing as I am the sole sound designer working on over 50 dinosaurs to be included in the final game!


Which brings me to the present. This week I have embarked upon a master's degree in Sound & Music for Interactive Games. Throughout this one-year course I will aim to refine my skills to be on par with current industry standards, and to complete relevant projects that I can use to showcase those skills. One of these I intend to link with the game that I am working on with Ascendence Studios: a model-based system for the sound design of dinosaurs. This system will select frequency- and duration-based audio samples from a palette, based upon inputs such as the size, vocal tract, type and temperament of the dinosaur. I will post regular updates on the progress of this project, as well as on my progress toward getting a full-time gig in the game audio industry!
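
Since the system only exists on paper so far, here's a purely hypothetical Python sketch of the palette idea (every name and bucket below is invented) just to show the shape I have in mind:

```python
import random

# Hypothetical palette: vocalisation samples bucketed by dinosaur attributes.
PALETTE = {
    ("large", "long_tract"):  ["roar_low_01.wav", "roar_low_02.wav"],
    ("small", "short_tract"): ["chirp_high_01.wav", "chirp_high_02.wav"],
}

def pick_call(size, tract, temperament):
    """Choose a call for a dinosaur from the matching palette bucket."""
    candidates = PALETTE.get((size, tract), [])
    if not candidates:
        return None
    # Placeholder: temperament would eventually bias duration and intensity.
    return random.choice(candidates)
```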


But for now that's me.


Until next time!




By Joe Thom, Mar 24 2015 05:07PM

Recently I was lucky enough to interview Damian Kastbauer, aka LostChocolateLab, on what it means and what it takes to be a Technical Sound Designer. Here's what he had to say:


What was the first Interactive Audio project you got involved in and how did you come across it?


Wow, first…or first that actually went somewhere? The path forward was littered with small projects culled from forum crawling and cold-email introductions. Most of these withered on the vine during development, but a company called Playful Minds was iterating on some of their titles and I was lucky enough to sign on to provide sound effect content. While working on a project of theirs called “Elemental Wars” I received a build back from the programmer to find my elaborately designed sounds truncated when each new successive instance was triggered by the game (essentially limiting the voices to one instance, kill oldest). This was a huge wake-up call for me; even though I delivered sounds that extended past their re-triggering, they were not playing back as I expected. This was the moment where I turned from “making a great sound” to “making great sounds sound great in the game”. Thankfully I had the help of a smart programmer on that project who was kind enough to talk me through things and help open my eyes to what was happening under-the-hood.


You've recently made the jump from contract to in-house work, what brought on this change and how does each way of working differ?


Stability and consistency were the missing pieces in my game audio lifestyle. For seven years I jumped around from studio-to-studio firefighting alongside some of my heroes while gaining incredible experience and a unique perspective on game development. I stepped into so many pipelines, put out so many fires, and contributed to so many projects for (usually) short-durations that I began to see similarities in methodologies between developers. At the core we’re all solving the same problems and it was somehow gratifying to know that the approach was often very similar. So, the stability and consistency of an in-house position allows me to cultivate some of the best-practices that I’ve seen in my short time in game audio and attempt to extend those to the team in-house. I can begin from one place of knowledge and grow that idea over a longer period of time, succeed and fail, and hopefully succeed at bridging the game audio pipeline gap for sound designers and game developers.


With Game Audio being a relatively new industry a lot of job titles are not yet very clearly defined, what makes a 'Technical Sound Designer'?


Generally speaking, a technical sound designer is someone who works primarily with the game engine and audio tools to integrate sound content for playback in a game. This can mean anything from tagging animations and building physics systems, to verifying that all 350 doors in a game have the right open and close sounds.


Here are a couple of good articles that discuss it with a bit more depth:

http://www.altdevblogaday.com/2012/04/10/technical-sound-design-an-interview-with-damian-kastbauer/

http://designingsound.org/2009/11/rob-bridgett-special-the-role-of-an-audio-director-in-video-games/

http://www.gameaudiopodcast.com/?p=671



For someone looking for work as a Technical Sound Designer, is it important to become highly specialised in their chosen area of game audio, or should they try to build up a more generalised skillset to gain mass appeal to employers?


My approach was to focus specifically on the part of game audio that I loved the most: the technical process of getting sounds into a game and sounding great. Because of that passion I have let most of my content creation skills atrophy and relied on the great work done by specialists who have a passion for making great sounds. I think employers are looking first-and-foremost for people who are passionate, so whatever it is that you choose it should be something that speaks to your enjoyment of the process.


This is an article I wrote which digs a bit deeper into how to channel your passion for game audio:

http://blog.lostchocolatelab.com/2013/05/game-audio-aspirations.html


How important is it that a prospective Technical Sound Designer learns code?


I am not a programmer, but I have tried to develop an understanding of some broad-stroke programming language vernacular. I have spent years crawling around in scripts, text files, and oblique folders full of black-and-white gobbledygook in an effort to (at times) gain a greater understanding about what’s going on “under the hood” with regards to game audio. There are times when I straddle the fine line between scripting and programming, but for the most part would rather leave programming in the hands of someone who is as passionate about it as I am about technical sound design.


As a lot of developers use their own in house systems, how is it best to go about learning to implement interactive audio whilst maximising employability?


I would say that gaining a deep knowledge of freely available audio middleware, and game development tools in general, is a great place to start. This means things like Wwise and FMOD, Unreal Development Kit, Game Maker, and Unity, Blender and Maya, or whatever else you can get your hands on that might come in handy (Perforce, SVN, Git, Excel, Notepad++, Email?!?). It goes without saying that in order to create software, you have to use software. Being comfortable in both a PC and Mac environment could just give you the edge. At its heart, Technical Sound Design is about solving problems creatively using (often) technical solutions. Being able to walk-the-walk and talk-the-talk of someone with a depth of knowledge on the road that has been travelled will help assist in this task. Which leads me directly to: Play many games, critically! Some of the deepest knowledge I’ve gained is by digging into the two Game Sound Studies which dissect the technical and aesthetic workings of current-generation games from the player's side of the screen. Looking across a genre to understand the different technical choices being made with regards to sound can give you a greater perspective towards what you’re trying to accomplish.


How should an aspiring sound designer go about creating a portfolio to present to potential employers?


Present only your best work.

Clearly annotate your involvement.

Focus on assets implemented in a game.

Tailor submissions to a specific employer.


If you could have known one thing at the start of your game audio career that you know now what would it be?


Getting your first job opportunity can feel like a race against time, but a career in game audio is a marathon that should be evenly paced. Cross every bridge on your way to a career with the foresight that you will be in this community for a long time and measure how you choose to present yourself with that in-mind.

