Inside the Sounds of ‘Saving Atlantis’

What does it take to create the sound edit and design elements of a feature documentary? Answer: a lot of different samples, plugins, speakers, and most importantly, patience.


August 24, 2018

Here I’ll share a crash-course outline of the process I took to go from Adobe Premiere project to finished sound mix for our 2018 film, Saving Atlantis.

An overview of the entire project in Logic Pro X.

Here’s a quick overview of some of the gear we relied on over the course of the process:

  • DAW: Logic Pro X. This pick made sense for our unit when I came on in 2016: our audio budget was minimal, and we liked the flexibility offered by the array of native plugins.
  • Audio Interface: UAD Apollo 8 Duo. Using hardware emulations from Universal Audio gave us a substantial boost in production quality, whether I was trying to increase the clarity of an element that was supposed to sound natural or processing a sound to fit in with a specific subset of uniquely colored sounds. Offloading the DSP to the Apollo’s hardware kept the project nimble and responsive, where I would otherwise have expected a project of this size to bog down. With the addition of a UAD Satellite Quad, the project still uses only about 71% of the available DSP across all the plugins I used.
  • Sound Editor: iZotope RX 6 Advanced. We used this to repair audio suffering from the range of common issues you run into in documentary filmmaking, such as windy plosives, clicky vocals, high noise floors, and noisy recording environments.
  • Monitors: Tascam VL-S5s. This was another early selection to keep costs down; to their credit, though, they’ve led me to produce reasonably translatable results.

The chronology of the process was a tricky thing to determine early on because there were so many lingering unresolved elements of the film edit. I won’t go into the details of how many revisions the film edit saw after “picture lock” (note the ‘TIME SHIFT 1/2/3’ markers in the project window), but generally the film still followed the steps of:

  1. Sound Design
  2. Sound Edit
  3. Sound Mix

The sound design elements were added at various stages of the process. Some were added by the editors for effect early on, some when I first stepped in, and some even after the mix was done. This was determined by the revisions applied after the first round of screenings, such as replacing footage or music we couldn’t get clearance on. Sound design (or “editing,” depending on who you talk to) is the most challenging part of the process for me, since it hasn’t historically been my emphasis and I don’t have a very extensive library of sounds to draw from. You can only get away with using the same WAVES_CRASHING.wav samples so many times. Some SFX were recorded on location with a Zoom H4n. Many were sourced from open source libraries, which can hold quite a few high-quality samples once you filter search results by sample rate.
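That sample-rate filter can also be applied locally once files are downloaded. Here’s a minimal Python sketch (the helper name and folder layout are my own, not part of any library’s workflow) that scans a folder of WAVs and keeps only those at or above a minimum rate, using just the standard library:

```python
import wave
from pathlib import Path

def find_hi_res_samples(library_dir, min_rate=48000):
    """Scan a sample library for WAV files at or above a minimum sample
    rate -- a rough local stand-in for the rate filter some open source
    SFX sites offer in their search results."""
    matches = []
    for path in sorted(Path(library_dir).rglob("*.wav")):
        try:
            with wave.open(str(path), "rb") as wf:
                if wf.getframerate() >= min_rate:
                    matches.append((path.name, wf.getframerate()))
        except wave.Error:
            # Skip compressed or malformed files the wave module can't parse.
            continue
    return matches
```

Running it over a downloads folder gives a quick shortlist of samples worth auditioning before anything gets dragged into the session.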

When I jumped into the overall edit, I was bouncing back and forth between Logic and iZotope RX 6. Unfortunately, the job required quite a bit of organizational work up front, since we were unable to export a functioning .AAF/.OMF file from the Adobe Premiere project (I still don’t know why, and the time I spent forum-diving was fruitless).

My suggestion to the editors at that point was to organize all the character dialogue onto the same audio tracks, the music onto its own respective tracks, and the SFX onto their own separate tracks as well. This also meant that all of the volume automation and gain changes across separate regions in Premiere had to be reset or manually removed and normalized. These tracks were all individually exported from Premiere as ‘stems’ that I imported into Logic along with a stereo export of the full film; I immediately SMPTE-locked everything and got to work splitting regions and categorizing them onto tracks and into track stacks. Color coding regions was essential for this part of the process: visual tastes aside, the utilitarian advantage of chromatically indicating which regions contain what kinds of information is indispensable.
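Before importing stems like these, it’s worth confirming they all share one sample rate and one length, since a mismatched stem will drift out of sync once the project is locked to timecode. A small sanity-check sketch (a hypothetical helper, not part of any Premiere or Logic tooling), again using only Python’s standard library:

```python
import wave
from pathlib import Path

def check_stems(stem_paths):
    """Confirm that a set of exported stems share one sample rate and
    one frame count before they go into the DAW.  Returns an ok flag
    plus the per-file specs so any mismatch is easy to spot."""
    specs = {}
    for path in stem_paths:
        with wave.open(str(path), "rb") as wf:
            specs[Path(path).name] = (wf.getframerate(), wf.getnframes())
    rates = {rate for rate, _ in specs.values()}
    lengths = {frames for _, frames in specs.values()}
    ok = len(rates) == 1 and len(lengths) == 1
    return ok, specs
```

A check like this takes seconds and can save a re-export round trip with the editors.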

An unnatural-sounding interview recording of KAUST student Mohammad Binsarhad warranted a longer audio FX chain. The UAD Teletronix LA-2A works just as well for spoken vocals as it does singing vocals. The Pultec Pro Legacy plugin did the heavy lifting of shaping the vocals’ presence, and Logic’s parametric EQ helped eliminate any lingering tanky qualities. Finally, the UAD Oxford Inflator brought some brightness and warmth to the audio which allowed it to sit on top of a drone/ambient music track without any sidechain compression.

I allocated a track for every unique audio source. For the English mix, that meant every character had a track; if their interviews were recorded in two or three different locations, they had multiple. For the Spanish dubs mix, I created only one track per voice actor, since each of our Spanish voice talent read for multiple characters; casting a unique reader for each character would have been unreasonable.

The music tracks had a little less reason behind their organization. I started off with two or three to keep overlap separate between tracks that had long fades, and then created new tracks for each song that needed unique processing. For example, some songs needed specific attenuation of sub-bass frequencies while many others sat in the mix better with a wide ~5dB reduction around 1-2kHz. A couple lower-quality music tracks required a more intensive approach, where I used parallel processing techniques and the UAD Little Labs VOG plugin to give them a fuller feel.
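A wide ~5 dB cut like the one described can be prototyped outside the DAW as a single peaking biquad. Below is a sketch using the standard RBJ cookbook formulas; the function names, the 1.5 kHz center, and the low Q (for width) are my own illustrative choices, not settings from the film’s session:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients, normalized so a0 = 1.
    gain_db < 0 gives a cut centered on f0; a low q makes the cut wide."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

def biquad(samples, b, a):
    """Direct-form I filter over a list of float samples."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# A -5 dB cut centered at 1.5 kHz with a wide bell (assumed settings):
b, a = peaking_eq_coeffs(48000, 1500.0, -5.0, 0.7)
```

A tone at the center frequency passed through this filter comes out almost exactly 5 dB quieter, which is a handy way to verify the curve before committing it in the DAW.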

One of our soundtrack cues had a unique problem: the beginning of the track was cut off by a late fade-in, so I had to dive in and emulate the sound of the opening piano line with software instruments and plugins. To achieve the washy, spacious character of the source track’s piano, I used UAD’s Ampex ATR-102 tape simulator to apply some chorus and their emulation of the Lexicon 224 unit to douse it with reverb.

The SFX tracks were the most sprawling of all, since each sample typically needed customized attention to sit in the mix properly. Most of the time, basic compression and EQ were necessary to get them to live in the same space as everything else, followed by a reduction in level so that they remained present without distracting from the principal elements of the mix.
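The “basic compression” step above boils down to an envelope follower driving gain reduction once the signal crosses a threshold. Here’s a minimal feed-forward downward compressor in Python as an illustration of that idea; all the parameter values are assumptions for the sketch, and it isn’t a model of any specific plugin:

```python
import math

def compress(samples, fs, threshold_db=-20.0, ratio=4.0,
             attack_ms=5.0, release_ms=50.0):
    """Minimal feed-forward downward compressor: a one-pole envelope
    follower estimates the level, and anything over the threshold is
    reduced according to the ratio."""
    atk = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = []
    for x in samples:
        level = abs(x)
        # Rise quickly (attack) and fall slowly (release).
        coeff = atk if level > env else rel
        env = coeff * env + (1 - coeff) * level
        level_db = 20 * math.log10(max(env, 1e-9))
        over = level_db - threshold_db
        gain_db = -over * (1 - 1 / ratio) if over > 0 else 0.0
        out.append(x * 10 ** (gain_db / 20.0))
    return out
```

Loud material comes out noticeably tamed while anything under the threshold passes through untouched, which is exactly the behavior that lets an effect sit under dialogue without a manual fader ride on every hit.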

Overall, this process followed a form similar to most of my audio post work for Oregon State Productions, but at a much larger scale. It’s been laborious but highly rewarding to have the opportunity to work on my first feature documentary as the audio lead. I’m sure I’m still making my share of rookie mistakes, but I look forward to learning more with every new project that comes our way.

-Daniel Cespedes

CATEGORIES: Multimedia Projects
