Audio Post Production Algorithms (Singletrack)
The Auphonic Audio Post Production Algorithms analyze a master stereo/mono audio file and correct level differences between speakers, between music and speech, and between multiple audio files to achieve a balanced overall loudness.
They include automatic Audio Restoration Algorithms, a True Peak Limiter and targets for common Loudness Standards (EBU R128, ATSC A/85, Podcasts, Mobile, etc.).
All algorithms were trained with data from our web service and they keep learning and adapting to new audio signals every day.
Audio examples for all algorithms with detailed annotations can be found at https://auphonic.com/audio_examples.
The Auphonic Adaptive Leveler corrects level differences between speakers and between music and speech, and applies dynamic range compression to achieve a balanced overall loudness. In contrast to our Global Loudness Normalization Algorithms, which correct loudness differences between files, the Adaptive Leveler corrects loudness differences between segments within one file.
The algorithm was trained with over five years of audio files from our web service and keeps learning and adapting to new data every day!
We analyze an audio signal to classify speech, music and background segments and process them individually by:
Amplifying quiet speakers in speech segments to achieve equal levels between speakers.
Carefully processing music segments so that the overall loudness will be comparable to speech, but without changing the natural dynamics of music as much as in speech segments.
Classifying unwanted segments (noise, wind, breathing, silence etc.) and then excluding them from being amplified.
Automatically applying compressors and limiters to get a balanced, final mix (see also Loudness Normalization and Compression).
Our Adaptive Leveler is most suitable for programs where dialog or speech is the most prominent content: podcasts, radio, broadcast, lecture and conference recordings, film and videos, screencasts etc.
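The per-segment idea described above can be sketched in a few lines. This is only a toy illustration under simplifying assumptions (RMS as a stand-in loudness measure, a fixed frame size, hypothetical threshold and target values), not Auphonic's trained classifier or leveler:

```python
import numpy as np

def level_segments(audio, rate, target_rms=0.1, frame_s=0.5, noise_floor=0.01):
    """Toy per-segment leveler: boost quiet frames and attenuate loud
    frames toward a target RMS, but leave frames below the noise floor
    (breathing, silence) untouched so they are not amplified."""
    frame = int(rate * frame_s)
    out = audio.copy()
    for start in range(0, len(audio), frame):
        seg = audio[start:start + frame]
        rms = np.sqrt(np.mean(seg ** 2))
        if rms > noise_floor:                      # treat as wanted signal
            gain = min(target_rms / rms, 4.0)      # cap the boost at ~+12 dB
            out[start:start + frame] = seg * gain
        # else: classified as unwanted (noise/silence) -> left alone
    return out

# a quiet and a loud "speaker", modeled as sine bursts
rate = 8000
t = np.arange(rate) / rate
quiet = 0.05 * np.sin(2 * np.pi * 220 * t)
loud = 0.5 * np.sin(2 * np.pi * 220 * t)
mixed = np.concatenate([quiet, loud])
leveled = level_segments(mixed, rate)
```

After processing, both halves of the signal sit at roughly the same RMS level, which is the basic effect the Adaptive Leveler achieves (with far more sophisticated segment classification).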
Listen to Adaptive Leveler Audio Examples:
Global Loudness Normalization and True Peak Limiter
Our Global Loudness Normalization Algorithms calculate the loudness of your audio and apply a constant gain to reach a defined target level in LUFS, so that multiple processed files have the same average loudness.
The loudness is calculated according to the latest broadcast standards, and Auphonic supports loudness targets for television (EBU R128, ATSC A/85), radio and mobile (-16 LUFS: Apple Music, Google; AES Recommendation), Amazon Alexa, YouTube, Spotify, Tidal (-14 LUFS), Netflix (-27 LUFS), Audible / ACX audiobook specs (-20 LUFS) and more. Please see Audio Loudness Measurement and Normalization, Loudness Targets for Mobile Audio, Podcasts, Radio and TV and The New Loudness Target War for detailed information.
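Because global normalization applies a constant gain, the core arithmetic is simple; the hard part is the loudness measurement itself, which follows ITU-R BS.1770 (K-weighting and gating) and is not reimplemented here. Function names and the example values below are illustrative:

```python
import numpy as np

def normalization_gain_db(measured_lufs, target_lufs):
    """Constant gain in dB that moves a file from its measured
    integrated loudness to the target level."""
    return target_lufs - measured_lufs

def apply_gain(audio, gain_db):
    """Apply the constant gain (the actual normalization step)."""
    return audio * 10 ** (gain_db / 20)

# hypothetical example: a file measured at -19.2 LUFS,
# normalized to the -16 LUFS mobile/podcast target
gain_db = normalization_gain_db(-19.2, -16.0)   # +3.2 dB of make-up gain
linear_gain = 10 ** (gain_db / 20)
```

The same two steps apply for any target; only the target value changes between EBU R128, ATSC A/85, podcast and streaming presets.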
A True Peak Limiter, with 4x oversampling to avoid intersample peaks, is used to limit the final output signal and to ensure compliance with the selected loudness standard.
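Why oversampling matters can be shown with a small sketch: a sine at one quarter of the sample rate, with a 45 degree phase offset, has every sample land at about 0.707 of its crest, so a plain sample-peak meter under-reads by roughly 3 dB, while a 4x-oversampled measurement recovers the intersample peak. This illustrates the principle only; a compliant meter follows the true-peak annex of ITU-R BS.1770:

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_db(audio, oversample=4):
    """Estimate the true peak by oversampling: intersample peaks that a
    plain sample-peak meter misses become visible at the higher rate."""
    up = resample_poly(audio, oversample, 1)
    return 20 * np.log10(np.max(np.abs(up)))

rate = 48000
t = np.arange(4800) / rate
# sine at fs/4 with a 45 degree phase offset: samples always miss the crest
x = np.sin(2 * np.pi * (rate / 4) * t + np.pi / 4)
sample_peak = float(np.max(np.abs(x)))   # ~0.707, i.e. about -3 dBFS
tp = float(true_peak_db(x))              # close to 0 dBTP
```

A limiter that only watched `sample_peak` here would pass a signal whose reconstructed analog waveform peaks about 3 dB hotter than reported.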
We use a multi-pass loudness normalization strategy, based on statistics from processed files on our servers, to match the target loudness more precisely and to avoid additional processing steps.
Listen to Loudness Normalization Audio Examples:
Noise and Hiss Reduction
Our Noise Reduction Algorithms remove broadband background noise and hiss in audio files with slowly varying backgrounds:
First the audio file is analyzed and segmented into regions with different background noise characteristics, and a Noise Print is extracted in each region.
Then a classifier decides how much noise reduction is necessary in each region (because too much noise reduction might result in artifacts) and removes the noise from the audio signal automatically.
You can also manually set the parameter Noise Reduction Amount if you prefer more noise reduction or want to bypass our classifier.
However, be aware that this might result in artifacts!
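A classical, much simpler technique in this family is spectral subtraction: take a noise print from a noise-only region and subtract its magnitude spectrum from every frame. The sketch below is not Auphonic's algorithm; the frame size, the synthetic signal and the `reduction` parameter are illustrative stand-ins, and over-subtraction demonstrates exactly the artifact risk mentioned above:

```python
import numpy as np

def spectral_subtract(audio, noise_print, frame=512, reduction=1.0):
    """Toy spectral subtraction: subtract a noise magnitude print
    (taken from a noise-only region) from each frame's spectrum.
    'reduction' plays the role of a Noise Reduction Amount control:
    larger values remove more noise but risk musical-noise artifacts."""
    noise_mag = np.abs(np.fft.rfft(noise_print[:frame]))
    out = np.zeros_like(audio)
    for start in range(0, len(audio) - frame + 1, frame):
        spec = np.fft.rfft(audio[start:start + frame])
        # clamp magnitudes at zero after subtraction, keep the original phase
        mag = np.maximum(np.abs(spec) - reduction * noise_mag, 0.0)
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

rng = np.random.default_rng(0)
noise = 0.05 * rng.standard_normal(4096)       # constant broadband hiss
tone = np.zeros(4096)
tone[1024:] = 0.5 * np.sin(2 * np.pi * 440 * np.arange(3072) / 8000)
noisy = noise + tone
denoised = spectral_subtract(noisy, noisy[:512])   # head of the file is noise-only
```

Note how the sketch depends on the noise print staying constant over the region it was taken from, which is also why the usage tips below warn against gain changes before noise reduction.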
Noise Reduction Usage Tips:
Leave the noise as natural and constant as it is; don’t try to improve or hide it yourself!
Please do not use leveling or gain control before our noise reduction algorithms! The amplification will vary over time, and we will no longer be able to extract constant noise prints.
This means: no Levelator, and turn off automatic gain control in Skype, audio recorders, camcorders and other devices …
No noise gates: we need the noise in quiet segments which noise gates try to remove!
Excessive use of dynamic range compression may be problematic, because noise prints in quiet segments get amplified.
Noise reduction might be problematic in recordings with lots of reverb, therefore try to keep the microphone close to your speakers!
Listen to Noise and Hiss Reduction Audio Examples:
The Auphonic Hum Reduction algorithms (included in Noise and Hum Reduction) identify and remove power line hum:
First the audio file is analyzed and segmented into regions with different hum characteristics, and the hum base frequency (50Hz or 60Hz) and the strength of all its partials (100Hz, 150Hz, 200Hz, 250Hz, etc.) are classified in each region.
Afterwards the base frequency and all partials are removed according to their strength with sharp filters and broadband noise reduction.
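As a rough illustration of the filtering step only (not the region-wise strength estimation described above), a cascade of sharp IIR notch filters at the base frequency and its partials removes a synthetic 50Hz hum while leaving a tone in the speech range untouched. The Q value and partial count are arbitrary illustrative choices:

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_hum(audio, rate, base_hz=50.0, n_partials=5, q=10.0):
    """Cascade sharp notch filters at the mains base frequency and its
    partials (100Hz, 150Hz, ... for a 50Hz base), applied zero-phase."""
    out = audio
    for k in range(1, n_partials + 1):
        f = k * base_hz
        if f >= rate / 2:          # never notch at or above Nyquist
            break
        b, a = iirnotch(f, q, fs=rate)
        out = filtfilt(b, a, out)  # forward-backward: no phase distortion
    return out

rate = 8000
t = np.arange(2 * rate) / rate
wanted = 0.5 * np.sin(2 * np.pi * 440 * t)   # stands in for speech content
hum = sum(0.2 * np.sin(2 * np.pi * k * 50.0 * t) for k in (1, 2, 3))
clean = remove_hum(wanted + hum, rate)
```

In a real system the notch depth would follow the classified strength of each partial, and broadband residue would be handled by the noise reduction stage.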
The following audio example by FMC contains a 60Hz power line hum with many partials (120Hz, 180Hz, 240Hz, 300Hz, etc.).
Listen to Hum Reduction Audio Examples:
Our adaptive High-Pass Filtering algorithm cuts disturbing low frequencies and interferences, depending on the context.
First we classify the lowest wanted signal in every audio segment: male/female speech base frequency, frequency range of music (e.g. lowest base frequency), noise, etc. Then all unnecessary low frequencies are cut adaptively in every audio segment, so that interferences are removed but the overall sound of the audio is preserved.
We use zero-phase (linear-phase) filtering algorithms to avoid asymmetric waveforms: in asymmetric waveforms, the positive and negative amplitude values are disproportionate - please see Asymmetric Waveforms: Should You Be Concerned?.
Asymmetrical waveforms are quite natural and not necessarily a problem. They are particularly common in recordings of speech and vocals, and can be caused by low-end filtering. However, they limit the amount of gain that can safely be applied without introducing distortion or clipping due to aggressive limiting.
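A minimal sketch of one segment's high-pass stage, assuming the classifier has already determined the lowest wanted frequency (for example roughly 85Hz for male and 165Hz for female speech): a Butterworth high-pass run forward and backward (zero phase) removes low-frequency interference without shifting the phase of the wanted signal. The cutoff margin and filter order here are illustrative choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def adaptive_highpass(audio, rate, lowest_wanted_hz):
    """Zero-phase high-pass for one segment: cut everything below the
    lowest wanted frequency, with a small safety margin. In a full
    system the cutoff would be chosen per segment by a classifier."""
    cutoff = 0.8 * lowest_wanted_hz
    b, a = butter(4, cutoff, btype="highpass", fs=rate)
    return filtfilt(b, a, audio)   # forward-backward pass = zero phase

rate = 8000
t = np.arange(2 * rate) / rate
rumble = 0.5 * np.sin(2 * np.pi * 20 * t)    # low-frequency interference
voice = 0.5 * np.sin(2 * np.pi * 200 * t)    # stands in for the wanted signal
filtered = adaptive_highpass(rumble + voice, rate, lowest_wanted_hz=100.0)
```

Because `filtfilt` cancels the filter's phase response, the 200Hz component comes through with its waveform shape intact, which is exactly what avoids the asymmetry a causal (minimum-phase) high-pass could introduce.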