Multitrack

A multitrack production takes multiple parallel input audio tracks, processes them individually as well as in combination, and creates the final mixdown automatically. Leveling, dynamic range compression, gating, noise and hum reduction, crosstalk removal, ducking and filtering can be applied automatically according to the analysis of each track. Loudness normalization and true peak limiting are applied to the final mixdown.
For more details see Multitrack Post Production Algorithms.

Auphonic Multitrack is built for speech-dominated programs.
All tracks must be provided as separate files and must start at the same time: speech tracks recorded from multiple microphones, music tracks, remote speakers via phone, Skype, etc.

Click here to create a new multitrack production in our web system: https://auphonic.com/engine/multitrack/
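Productions can also be created programmatically. Below is a minimal sketch of a multitrack request payload for the Auphonic JSON API; the field names (`multi_input_files`, `is_multitrack`, the per-track `id`) follow the public API documentation but should be verified against the current API reference, and the algorithm keys, service UUID and file names are placeholders/assumptions:

```python
import json

# Hypothetical multitrack production payload - verify all field names
# against the Auphonic API reference before use.
payload = {
    "metadata": {"title": "Episode 42"},
    "is_multitrack": True,
    "multi_input_files": [
        {"type": "multitrack", "id": "host",        # "id" becomes the Track Identifier
         "service": "service-uuid", "input_file": "host.wav"},
        {"type": "multitrack", "id": "guest",
         "service": "service-uuid", "input_file": "guest.wav"},
    ],
    # algorithm keys are assumptions, see the API docs for the exact names
    "algorithms": {"leveler": True, "gate": True, "crossgate": True},
    "output_files": [{"format": "mp3", "bitrate": "192"}],
    "action": "start",  # start processing immediately after creation
}

# This dict would be POSTed as JSON to
# https://auphonic.com/api/productions.json with HTTP basic auth, e.g.:
#   requests.post("https://auphonic.com/api/productions.json",
#                 auth=("username", "password"), json=payload)
print(json.dumps(payload, indent=2))
```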

Note

If you want to process a single (master) mono or stereo audio file, please use a Singletrack Production instead!

Warning

Multitrack is much more complicated to use than our Singletrack version.
Please read and understand the Multitrack Best Practice in detail before creating a Multitrack Production!

Audio Tracks and Algorithms

../_images/AudioTracksAndAlgorithms.png

Select one audio source file per track and configure track-specific Multitrack Audio Algorithms (see also Master Audio Algorithms).
For more control, please use our Multitrack Advanced Algorithm Parameters.

You can upload files directly in the browser, use HTTP links or any supported External Services (FTP, Dropbox, S3, Google Drive, SFTP, WebDAV and many more - please register the service first).

Supported filetypes:
MP2, MP3, MP4, M4A, M4B, M4V, WAV, OGG, OGA, OPUS, FLAC, ALAC, MPG, MOV, AC3, EAC3, AIF, AIFC, AIFF, AIFFC, AU, GSM, CAF, IRCAM, AAC, SND, VOC, VORBIS, VOX, WAVPCM, WMA, ALAW, APE, MPC, MPC8, MULAW, OMA, RM, TTA, W64, SPX, 3PG, 3G2, 3GPP, 3GP, 3GA, TS, MUS, AVI, DV, FLV, IPOD, MATROSKA, WEBM, MPEG, OGV, VOB, MKV, MK3D, MKA, MKS, QT, MXF.
Please let us know if you need an additional format.

For lossy codecs like MP3, please use a bitrate of 192k or higher!

Track Identifier

Enter a name for the current audio track.
This field is used as the speaker name in Speech Recognition.

High-Pass Filtering

Classifies the lowest wanted signal (male/female speech, bass in music, etc.) and adaptively filters out
unnecessary/disturbing low frequencies in each audio segment.

Automatic Noise and Hum Reduction

Classifies track regions with different backgrounds and automatically removes noise and hum in each region.
For details and examples see Multitrack Noise and Hum Reduction.

Fore/Background Settings

Force the track to be in foreground, background, or leave it unchanged. Use Ducking to reduce its level when speakers in other tracks are active.
If set to Auto, Auphonic automatically decides which segments of your track should be in foreground or background.

Intro and Outro

../_images/IntroOutro.png

Automatically add an Intro/Outro to your production.
IMPORTANT: this feature is audio-only and does not work with video productions!

As intros/outros are intended to be used multiple times, they are only Loudness Normalized to match the loudness of your production, without further Auphonic processing (no leveling, filtering, noise reduction, etc.). Therefore you should edit/process your intro/outro beforehand.
For a detailed description of our intro/outro feature, please see the blog post Automatic Intros and Outros in Auphonic.

Select Intro File

Select your intro audio from a local file (in productions only), HTTP or an External Service (Dropbox, SFTP, S3, Google Drive, SoundCloud, etc. - please register the service first).
NOTE:
We store audio files only for a limited number of days, therefore you have to use intro/outro files in presets from HTTP or an External Service. In productions, you can upload intro/outro files directly.

Intro Overlap

Set overlap time in seconds of intro end with main audio file start, for details see Overlapping Intros/Outros.
IMPORTANT: ducking must be added manually to intro audio file!

Select Outro File

Select your outro audio from a local file, HTTP or an External Service
(Dropbox, SFTP, S3, Google Drive, SoundCloud, etc. - please register the service first).

Outro Overlap

Set overlap time in seconds of outro start with main audio file end, for details see Overlapping Intros/Outros.
IMPORTANT: ducking must be added manually to outro audio file!

Basic Metadata

../_images/BasicMetadata.png

Basic metadata (title, cover image, artist, album, track) for your production.
Metadata tags and cover images from input files will be imported automatically in empty fields!

We correctly map metadata to multiple Output Files.
For details see the following blog posts: ID3 Tags Metadata (used in MP3 output files), Vorbis Comment Metadata (used in FLAC, Opus and Ogg Vorbis output files) and MPEG-4 iTunes-style Metadata (used in AAC, M4A/M4B/MP4 and ALAC output files).

Cover Image

Add a cover image or leave empty to import the cover image from your input file.
If a Video Output File or YouTube export is selected, Auphonic generates a video with cover/chapter image(s) automatically!

Extended Metadata

../_images/ExtendedMetadata.png

Extended metadata (subtitle, summary, genre, etc.) for your production.
Metadata tags from input files will be imported automatically in empty fields!

We correctly map metadata to multiple Output Files.
For details see the following blog posts: ID3 Tags Metadata (used in MP3 output files), Vorbis Comment Metadata (used in FLAC, Opus and Ogg Vorbis output files) and MPEG-4 iTunes-style Metadata (used in AAC, M4A/M4B/MP4 and ALAC output files).

Subtitle

A subtitle for your production, must not be longer than 255 characters!

Summary / Description

Here you can write an extended summary or description of your content.

Append Chapter Marks to Summary

Append possible Chapter Marks with time codes and URLs to your Summary.
This might be useful for audio players which don’t support chapters!

Create a Creative Commons License

Link to create your license at creativecommons.org.
Copy the license and its URL into the metadata fields License (Copyright) and License URL!

Tags, Keywords

Tags must be separated by commas!

Chapter Marks

../_images/ChapterMarks.png

Chapter marks, also called Enhanced Podcasts, can be used for quick navigation within audio files. One chapter might contain a title, an additional URL and a chapter image.
Chapters are written to all supported output file formats (MP3, AAC/M4A, Opus, Ogg, FLAC, ALAC, etc.) and exported to Soundcloud, YouTube and Spreaker. If a video Output File or YouTube export is selected, Auphonic generates a video with chapter images automatically.
For more information about chapters and which players support them, please see Chapter Marks for MP3, MP4 Audio and Vorbis Comment (Enhanced Podcasts).

Chapter marks can be entered directly in our web interface or we automatically Import Chapter Marks from your input audio file.
It’s also possible to import a simple Text File Format with Chapters, upload markers from various audio editors (Audacity Labels, Reaper Markers, Adobe Audition Session, Hindenburg, Ultraschall, etc.), or use our API for Adding Chapter Marks programmatically.
For details, please see How to Import Chapter Marks in Auphonic.

Chapter Start Time

Enter chapter start time in hh:mm:ss.mmm format (examples: 00:02:35.500, 1:30, 3:25.5).
NOTE: You don’t have to add the length of an optional Intro File here!
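The accepted time formats (full hh:mm:ss.mmm down to bare seconds) can be parsed with a few lines of code; a sketch in Python (for illustration, not Auphonic's implementation):

```python
def parse_chapter_start(text: str) -> float:
    """Parse a chapter start time like '00:02:35.500', '1:30' or '3:25.5'
    into seconds. One to three colon-separated fields are accepted; the
    last field may contain a decimal fraction."""
    parts = text.strip().split(":")
    if not 1 <= len(parts) <= 3:
        raise ValueError(f"invalid time: {text!r}")
    seconds = 0.0
    for part in parts:
        # each additional field shifts the previous value up by a factor of 60
        seconds = seconds * 60 + float(part)
    return seconds

print(parse_chapter_start("00:02:35.500"))  # 155.5
print(parse_chapter_start("1:30"))          # 90.0
```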

Chapter Title

Optional title of the current chapter.
Audio players show chapter titles for quick navigation in audio files.

Chapter URL

Enter an (optional) URL with further information about the current chapter.

Chapter Image

Upload an (optional) image with visual information, e.g. slides or photos.
The image will be shown in podcast players while listening to the current chapter, or exported to video Output Files.

Import Chapter Marks from File

Select a Text File Format with a timepoint (hh:mm:ss[.mmm]) and a chapter title in each line or Import Chapter Marks from Audio Editors.
NOTE: We automatically import chapter marks from your input audio file!
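A chapter text file of this form is easy to generate or parse yourself. The sketch below assumes one chapter per line, a timepoint followed by a title, with an optional URL in angle brackets at the end (the `<URL>` convention is an assumption; check the Text File Format docs for the exact syntax):

```python
import re

def parse_chapter_file(text: str):
    """Parse a simple chapter text file: 'hh:mm:ss[.mmm] Title <URL>'
    per line, the <URL> part being optional."""
    chapters = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        time_str, _, rest = line.partition(" ")
        m = re.match(r"(.*?)\s*<(\S+)>\s*$", rest)
        title, url = (m.group(1), m.group(2)) if m else (rest, None)
        chapters.append({"start": time_str, "title": title.strip(), "url": url})
    return chapters

demo = "00:00:00.000 Intro\n00:02:31.500 Interview <https://auphonic.com>\n"
for chapter in parse_chapter_file(demo):
    print(chapter)
```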

Output Files

../_images/OutputFiles.png

Add one or multiple output file formats (MP3, MP4, Ogg, WAV, Video, …) with bitrate, channel and filename settings to a production (see Audio File Formats and Bitrates for Podcasts). All Metadata Fields and Chapter Markers will be mapped to multiple output files. See below for a list of other, specialized output formats.
With Auphonic you can process video input files as well, or automatically generate a video output file from input audio using Cover and Chapter images - for details see Video Input and Output.

Supported audio output file formats:

Other output file formats:

Output File Basename

Set basename (without extension) for all output files or leave it empty to take the original basename of your input file.

Output File Format

For an overview of audio formats see Audio File Formats and Bitrates for Podcasts.

Audio Bitrate (all channels)

Set combined bitrate of all channels of your audio output file.
For details see Audio File Formats and Bitrates for Podcasts.

Filename Suffix (optional)

Suffix for filename generation of the current output file, leave empty for automatic suffix!

Filename Ending, Extension

Filename extension of the current output file.

Mono Mixdown

Select this option to force a mono mixdown of the current output file.

Split on Chapters

If you have Chapter Marks, this option will split your audio into one file per chapter.
All filenames will be appended with the chapter number and packed into one ZIP output file.

Speech Recognition

../_images/SpeechRecognition.png

Auphonic built a layer on top of multiple engines to offer affordable speech recognition in over 80 languages:
1. First you have to connect to an external Speech Recognition Service at the External Services page.
2. Then you can select the speech recognition engine when creating a new Production or Preset.

We send small audio segments to the speech recognition engine and then combine the results, add punctuation and structuring to produce 3 Output Result Files: an HTML transcript, a WebVTT/SRT subtitle file and a JSON/XML speech data file.
If you use a Multitrack Production, we can automatically assign speaker names to all transcribed audio segments.

For more details about our speech recognition system, the available engines, the produced output files and for some complete examples in English and German, please see Speech Recognition.

Select Service

Select an external service for Automatic Speech Recognition. Please register a service first!

Select Language

Select a language/variant for speech recognition.

Google Speech API

Word and Phrase Hints

Add Word and Phrase Hints to improve speech recognition accuracy for specific keywords and phrases.
Metadata (chapters, tags, title, artist, album) will be added automatically!

Wit.ai Speech Recognition

Wit.ai Language

The language must be set directly in your Wit.ai App.
IMPORTANT: If you need multiple languages, you have to add an additional Wit.ai service for each language!

Amazon Transcribe

Custom Vocabulary

Add Custom Vocabularies to improve speech recognition accuracy for specific keywords and phrases.
Metadata (chapters, tags, title, artist, album) will be added automatically!

Speechmatics

Speechmatics Language

Select a language/variant for speech recognition with Speechmatics.

Publishing / External Services

../_images/ExternalServices.png

Copy one or multiple result files to any External Service (Dropbox, YouTube, (S)FTP, SoundCloud, GDrive, LibSyn, Archive.org, S3, etc.):
1. First you have to connect to an external service at the External Services page.
2. Then you can select the service when creating a new Production or Preset.

When exporting to Podcasting/Audio/Video Services (SoundCloud, YouTube, Libsyn, Spreaker, Blubrry, Podlove, etc.), all metadata will be exported as well.
For a complete list and details about all supported services, see Auphonic External Services.

Select Service

Select an external service for outgoing file transfers. Please register your service first!

Output Files to copy

Select which Output File should be copied to the current external service.

YouTube Service

YouTube Privacy Settings

Set your video to Public (everyone can see it), Private (only your account can see it) or
Unlisted (everyone who knows the URL can see it, not indexed by YouTube).

YouTube Category

Select a YouTube category.

Facebook Service

Facebook Distribution Settings

Post to News Feed: The exported video is posted directly to your news feed / timeline.
Exclude from News Feed: The exported video is visible in the videos tab of your Facebook Page/User (see for example Auphonic’s video tab), but it is not posted to your news feed (you can do that later if you want).
Secret: Only you can see the exported video, it is not shown in the Facebook video tab and it is not posted to your news feed (you can do that later if you want).

For more details and examples please see the Facebook Export blog post.

Facebook Embeddable

Choose if the exported video should be embeddable in third-party websites.

SoundCloud Service

SoundCloud Sharing

Set your exported audio to Public or Private (not visible to other users).

SoundCloud Downloadable

Select if users should be able to download your audio on SoundCloud, otherwise only streaming is allowed.

SoundCloud Type

Select a SoundCloud type/category.

SoundCloud Audio File Export

Select an audio output file which should be exported to SoundCloud.
If set to Automatic, Auphonic will automatically choose a file.

Spreaker Service

Spreaker Collection / Show

Select your Spreaker Collection where this track should be published.
Each Collection has a separate RSS feed and can be created in your Spreaker Account.

Spreaker Sharing

Set your exported audio to Public or Private (not visible to other users).

Spreaker Downloadable

If disabled, listeners won’t be offered the option to download this track and it won’t be included in your RSS feed.

PodBean Service

PodBean Draft Status

Check this option to add this episode as a draft at PodBean without actually publishing it.

PodBean Episode Type

Select which audience you wish to publish this episode to.
The exact list of options varies based on your PodBean subscription model and settings. If you recently changed your subscription and an option is missing here, please reauthorize us to see the updated list.

Captivate Service

Captivate Draft Status

Check this option to add this episode as a draft at Captivate without actually publishing it.

Episode Type

The episode type used to mark this episode in feeds and some apps. Normal episodes are considered regular content, while trailer and bonus episodes may be displayed differently.

Master Audio Algorithms

../_images/MasterAudioAlgorithms.png

Enable/disable master audio algorithms; track-specific algorithms can be set in the section Track Audio Algorithms.
For more details about our algorithms see Auphonic Multitrack Algorithms!

Multitrack Adaptive Leveler

Corrects level differences between tracks, speakers, music and speech, etc. to achieve a balanced overall loudness.
For details and examples see Multitrack Adaptive Leveler.

Adaptive Noise Gate

The gate will decrease background noise if a speaker/track is not active.
For details and examples see Adaptive Noise Gate.

Crossgate

Analyzes which speaker is active and decreases crosstalk/spill, ambience, reverb and noise recorded from other tracks.
For details and examples see Crossgate: Crosstalk (Spill, Reverb) Removal.
Deactivate the crossgate if you want to keep the full ambience of your recording!

Loudness Target

Set a loudness target in LUFS for Loudness Normalization, higher values result in louder audio outputs.
The maximum true peak level will be set automatically to -1dBTP for loudness targets >= -23 LUFS (EBU R128) and to -2dBTP for loudness targets <= -24 LUFS (ATSC A/85).
For details and examples see Global Loudness Normalization and True Peak Limiter.
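The automatic true peak selection described above reduces to a one-line rule (loudness targets in the web interface are whole LUFS values, so the gap between -24 and -23 never occurs):

```python
def auto_true_peak_limit(loudness_target_lufs: float) -> float:
    """True peak ceiling in dBTP for a given loudness target:
    -1 dBTP for targets >= -23 LUFS (EBU R128),
    -2 dBTP for targets <= -24 LUFS (ATSC A/85)."""
    return -1.0 if loudness_target_lufs >= -23.0 else -2.0

print(auto_true_peak_limit(-16))  # -1.0 (loud podcast target)
print(auto_true_peak_limit(-24))  # -2.0 (ATSC A/85)
```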

Multitrack Advanced Audio Algorithms

../_images/AdvancedAudioAlgorithmsMT.png

Enable/disable advanced parameters for our Multitrack Audio Algorithms.

Warning

Please don’t use our advanced algorithm parameters if you don’t understand them!
Use our default settings instead, they are a good starting point for most content.

Note

Advanced audio algorithms are only available for paying users! Please add some credits if you want to use them.

Leveler Parameters

../_images/MTLevelerParameters.png

The following advanced parameters for our Multitrack Adaptive Leveler allow you to customize which parts of the track audio should be leveled, how much they should be leveled, and how much dynamic range compression should be applied. It’s also possible to set the stereo panorama (balance) of the track.

Advanced Leveler

Enable advanced parameters for the Multitrack Adaptive Leveler.

Leveler Strength

The Leveler Strength controls how much leveling is applied: 100% means full leveling, 0% means no leveling at all. Changing the Leveler Strength increases/decreases the dynamic range of this track.

For example, if you want to increase the dynamic range of this track by 3dB compared to 100% leveling, just set the Leveler Strength parameter to 70% (~3dB).
We also like to call this concept Loudness Comfort Zone: above a maximum and below a minimum level (the comfort zone), no leveling is applied; the higher the Leveler Strength, the smaller the comfort zone (more leveling necessary). So if your track input file already has a small dynamic range (is within the comfort zone), our leveler will be just bypassed.

Example Use Case:
Lower Leveler Strength values should be used if you want to keep more loudness differences in dynamic narration or dynamic music recordings (live concert/classical).

Compressor

Select a preset value for micro-dynamics compression:
A compressor reduces the volume of short and loud spikes like the pronunciation of “p” and “t” or laughter (short-term dynamics) and also shapes the sound of your voice (making the sound more or less “processed” or “punchy”).

Possible values are:

  • Auto: The compressor setting depends on the selected Leveler Strength. Medium compression is used for strength values <= 100%, Hard compression for strength values > 100% (Fast Leveler and Amplify Everything).

  • Soft: Uses less compression.

  • Medium: Our default setting.

  • Hard: More compression, especially tries to compress short and extreme level overshoots. Use this preset if you want your voice to sound very processed, or if you have extreme and fast-changing level differences.

  • Off: No short-term dynamics compression is used at all, only mid-term leveling. Switch off the compressor if you just want to adjust the loudness range without any additional micro-dynamics compression.

MusicSpeech Classifier Setting

Select between the Speech Track and Music Track Adaptive Leveler.
If set to On, a classifier will decide if this is a music or speech track.

Stereo Panorama (Balance)

Change the stereo panorama (balance for stereo input files) of the current track.
Possible values: L100, L75, …, Center, …, R75 and R100.
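For intuition, a pan position can be mapped to per-channel gains. The constant-power curve below is an assumption for illustration only; Auphonic does not document its exact pan law:

```python
import math

def pan_gains(position: float):
    """Hypothetical constant-power pan: position from -100 (L100)
    to +100 (R100), 0 = Center. Returns (left_gain, right_gain)."""
    theta = (position + 100) / 200.0 * math.pi / 2.0  # map to 0 .. pi/2
    return math.cos(theta), math.sin(theta)

print(pan_gains(-100))  # hard left: (1.0, ~0.0)
print(pan_gains(0))     # center: both channels at ~0.707 (-3 dB)
```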

Fore/Background Settings

../_images/MTForeBackground.png

The parameter Fore/Background controls whether a track should be in foreground, in background, ducked, or unchanged.
Ducking automatically reduces the level of a track if speakers in other tracks are active. This is useful for intros/outros, translated speech, or for music, which should be softer if someone is speaking.
For more details, please see Automatic Ducking, Foreground and Background Tracks.

Track Gain

Increase/decrease the loudness of this track compared to other tracks.
This can be used to add gain to a music or a specific speech track, making it louder/softer compared to other tracks.

Level of Background Segments

Set the level of background segments/tracks (compared to foreground segments) in background and ducking tracks.
By default, background and ducking segments are 18dB softer than foreground segments.

Ducking Fade Time

Set fade down/up length (attack/release) for ducking segments in ms.
The default value is 500ms.
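The interaction of the background level (18 dB default) and the fade time (500 ms default) can be sketched as a gain envelope. This is a simplified illustration with linear-in-dB fades, not Auphonic's actual implementation:

```python
def ducking_gain(t_sec, duck_start, duck_end, reduction_db=18.0, fade_ms=500.0):
    """Linear gain applied to a ducking track at time t_sec, fading
    down before duck_start and back up after duck_end."""
    fade = fade_ms / 1000.0
    if t_sec <= duck_start - fade or t_sec >= duck_end + fade:
        attenuation_db = 0.0            # outside the ducked region
    elif t_sec < duck_start:            # fading down (attack)
        attenuation_db = reduction_db * (t_sec - (duck_start - fade)) / fade
    elif t_sec > duck_end:              # fading back up (release)
        attenuation_db = reduction_db * ((duck_end + fade) - t_sec) / fade
    else:                               # fully ducked
        attenuation_db = reduction_db
    return 10 ** (-attenuation_db / 20.0)

# speech active from 5 s to 10 s: full level before, -18 dB while ducked
print(ducking_gain(0.0, 5.0, 10.0))  # 1.0
print(ducking_gain(7.0, 5.0, 10.0))  # ~0.126 (-18 dB)
```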

Noise and Hum Reduction Parameters

../_images/DenoiseParameters.png

In addition to the parameter (Noise) Reduction Amount, we offer two more advanced parameters to control the combination of our Noise and Hum Reduction algorithms.
Behavior of our Noise and Hum Reduction parameter combinations:

Noise Reduction Amount | Hum Base Frequency | Hum Reduction Amount | Behavior
-----------------------|--------------------|----------------------|------------------------------------------
Auto                   | Auto               | Auto                 | Automatic hum and noise reduction
Auto or > 0            | (any)              | Disabled             | No hum reduction, only denoise
Disabled               | 50Hz               | Auto or > 0          | Force 50Hz hum reduction, no denoise
Disabled               | Auto               | Auto or > 0          | Automatic dehum, no denoise
12dB                   | 60Hz               | Auto or > 0          | Always do dehum (60Hz) and denoise (12dB)

Noise Reduction Amount

Maximum noise and hum reduction amount in dB, higher values remove more noise. In Auto mode, a classifier decides if and how much noise reduction is necessary (to avoid artifacts).
Set to a custom (non-Auto) value if you prefer more noise reduction or want to bypass our classifier, but be aware that this might result in artifacts or destroy music segments!

Hum Reduction Base Frequency

Set the hum base frequency to 50Hz or 60Hz (if you know it), or use Auto to automatically detect the hum base frequency in each speech region.

Hum Reduction Amount

Maximum hum reduction in dB, higher values remove more hum.
In Auto mode, a classifier decides how much hum reduction is necessary for each speech region.
Set it to a custom value (> 0), if you prefer more hum reduction or want to bypass our classifier.
Use Disable Dehum to disable hum reduction and use our noise reduction algorithms only.
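The parameter combinations above reduce to two independent switches, which the sketch below encodes. This only summarizes which algorithms run; in Auto mode the classifiers still decide at runtime whether any reduction is actually applied:

```python
def reduction_behavior(noise_amount, hum_frequency, hum_amount):
    """Which reduction algorithms are enabled for a given combination.
    Amounts are 'auto', 'disabled', or a dB value; hum_frequency is
    'auto', 50 or 60 (it selects the base frequency but never disables
    dehum on its own)."""
    denoise = noise_amount != "disabled"  # Auto or > 0 dB enables denoise
    dehum = hum_amount != "disabled"      # Auto or > 0 dB enables dehum
    if denoise and dehum:
        return "dehum and denoise"
    if denoise:
        return "denoise only"
    if dehum:
        return "dehum only"
    return "no reduction"

print(reduction_behavior("auto", "auto", "auto"))      # dehum and denoise
print(reduction_behavior("disabled", 50, "auto"))      # dehum only
```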

Master Algorithm Parameters

../_images/MTAdvMasterAudioAlgorithms.png

Advanced Leveling and Loudness Normalization parameters for our Multitrack Master Audio Algorithms.

Leveler Mode

The Multitrack Adaptive Leveler can be controlled with different leveling parameters in each track.
In addition to that, you can switch the Leveler Mode in the master algorithm settings to Broadcast Mode to control the combined leveling strength.
Volume changes of our leveling algorithms will be adjusted so that the final mixdown of the multitrack production meets the given MaxLRA, MaxS, or MaxM target values - as is done in the Singletrack Broadcast Mode.

If it’s not possible for the levels of the mixdown file to be below the given MaxLRA, MaxS, or MaxM target values, you will receive a warning message via email and on the production page.

Maximum Loudness Range (LRA)

The loudness range (LRA) indicates the variation of loudness throughout a program and is measured in LU (loudness units) - for more details see Loudness Measurement and Normalization or EBU Tech 3342.
The volume changes of our Leveler will be restricted so that the LRA of the output file is below the selected value (if possible).
High LRA values will result in very dynamic output files, whereas low LRA values will result in compressed output audio. If the LRA value of your input file is already below the maximum loudness range value, no leveling at all will be applied.

Loudness Range values are most reliable for pure speech programs: a typical LRA value for news programs is 3 LU; for talks and discussions, an LRA value of 5 LU is common. LRA values for features, radio dramas, movies, or music strongly depend on the individual character and might be in the range of 5 to 25 LU - for more information, please see Where LRA falls short.
Netflix, for instance, recommends an LRA of 4 to 18 LU for the overall program and 7 LU or less for dialog.

Example Use Case:
The broadcast parameters can be used to generate automatic mixdowns with different LRA values for different target environments (very compressed environments like mobile devices or Alexa, or very dynamic ones like home cinema, etc.).
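As a rough intuition for the numbers above: LRA is derived from the statistical spread of short-term loudness values. The sketch below computes the 10th-to-95th percentile spread used by EBU Tech 3342, but omits the absolute (-70 LUFS) and relative (-20 LU) gating steps of the full specification, so it is an approximation only:

```python
def loudness_range(short_term_lufs, low_pct=10, high_pct=95):
    """Simplified LRA estimate in LU: spread between the 10th and 95th
    percentiles of short-term loudness values (gating omitted)."""
    values = sorted(short_term_lufs)

    def percentile(p):
        # linear interpolation between closest ranks
        k = (len(values) - 1) * p / 100.0
        f = int(k)
        c = min(f + 1, len(values) - 1)
        return values[f] + (values[c] - values[f]) * (k - f)

    return percentile(high_pct) - percentile(low_pct)

# a very dynamic program (spread of ~13 LU) vs. a constant one (0 LU)
print(loudness_range([-30, -25, -20, -15]))
print(loudness_range([-20, -20, -20]))
```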

Maximum Short-term Loudness (MaxS)

Set a Maximum Short-term Loudness target (3s measurement window, see EBU Tech 3341, Section 2.2) relative to your Global Loudness Normalization Target.
Our Adaptive Leveler will ensure that the MaxS value of the output file (loudness measured with an integration time of 3s) stays below this target (if possible).
For example, if the MaxS value is set to +5 LU relative and the Loudness Target to -23 LUFS, then the absolute MaxS value of your output file will be restricted to -18 LUFS.

The Max Short-term Loudness is used in certain regulations for short-form content and advertisements.
See for example EBU R128 S1: Loudness Parameters for Short-form Content (advertisements, promos, etc.), which recommends a Max Short-term Loudness of +5 LU relative.
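The relative-to-absolute conversion from the example above is plain addition; the same arithmetic applies to the MaxM (momentary) target:

```python
def absolute_max_short_term(loudness_target_lufs: float,
                            max_s_relative_lu: float) -> float:
    """Absolute short-term loudness ceiling in LUFS for a relative
    MaxS target in LU, e.g. -23 LUFS target + 5 LU -> -18 LUFS."""
    return loudness_target_lufs + max_s_relative_lu

print(absolute_max_short_term(-23.0, 5.0))  # -18.0
```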

Maximum Momentary Loudness (MaxM)

Similar to the MaxS target, it’s also possible to use a Maximum Momentary Loudness target (0.4s measurement window, see EBU Tech 3341, Section 2.2) relative to your Global Loudness Normalization Target.
Our Adaptive Leveler will ensure that the MaxM value of the output file (loudness measured with an integration time of 0.4s) stays below this target (if possible).

The Max Momentary Loudness is used in certain regulations by broadcasters. For example, CBC and Radio Canada require that the Momentary Loudness must not exceed +10 LU above the target loudness.

Maximum Peak Level

Maximum True Peak Level of the processed output file. Use Auto for a reasonable value according to the selected loudness target: -1dBTP for EBU R128 and higher, -2dBTP for ATSC A/85 and lower.

Dual Mono

If a mono production is played back on a stereo system (dual mono), it should be attenuated by 3 dB (= 3 LU) to sound as loud as the equivalent stereo production. The EBU Guidelines for Reproduction require that this -3 dB offset should be applied in the playback device - however, most devices don’t do so (but some do).

Select the dual mono flag to automatically add a -3 LU offset for mono productions.
This means that if you select a loudness target of -16 LUFS and the dual mono flag, your stereo productions will be normalized to -16 LUFS, but mono productions to -19 LUFS.

For details, please see Loudness Normalization of Mono Productions.
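The dual mono rule above amounts to a conditional -3 LU offset on the normalization target:

```python
def normalization_target(loudness_target_lufs: float, channels: int,
                         dual_mono: bool = False) -> float:
    """Effective loudness target: with the dual mono flag set, mono
    productions (1 channel) get a -3 LU offset so they play back as
    loud as stereo productions on dual-mono systems."""
    if dual_mono and channels == 1:
        return loudness_target_lufs - 3.0
    return loudness_target_lufs

print(normalization_target(-16.0, channels=2, dual_mono=True))  # -16.0
print(normalization_target(-16.0, channels=1, dual_mono=True))  # -19.0
```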

Normalization Method / Anchor-based Normalization

Perform loudness normalization according to the whole file (Program Loudness) or according to dialog / speech parts only (Dialog Loudness, anchor-based loudness normalization).
For details, please see Dialog Loudness Normalization for Cinematic Content.