Author archives: Isabell


Responding to your feedback, we are now proud to present new separate parameters for noise, reverb, and breath reduction, giving you more flexible control over your output.
Find all the new parameters below and listen to the Audio Examples to hear the upgrade for yourself.


What's the update about?

Before

Previously, you could only set the Denoising Method and a single reduction amount that was applied to all elements.
Depending on the selected method, you could already decide whether music, static, or changing noises should be removed, but there was no setting ...
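
To make the idea concrete, here is a minimal sketch of how such separate reduction amounts could be set when creating a production through Auphonic's JSON API. The parameter names inside the "algorithms" dictionary are illustrative placeholders, not the documented field names:

    import requests

    # Sketch only: create an Auphonic production with separate reduction
    # amounts for noise, reverb, and breathings. The algorithm parameter
    # names below are hypothetical placeholders, not the real API fields.
    payload = {
        "title": "My Episode",
        "algorithms": {
            "denoise": True,
            "denoiseamount": 6,      # noise reduction in dB (placeholder)
            "dereverbamount": 3,     # reverb reduction in dB (placeholder)
            "debreathamount": 6,     # breath reduction in dB (placeholder)
        },
    }

    response = requests.post(
        "https://auphonic.com/api/productions.json",
        json=payload,
        auth=("username", "password"),  # replace with your Auphonic credentials
    )
    print(response.json())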

No matter how good your technical equipment might be, it is almost impossible to avoid capturing unwanted breathing and mouth noises during voice recordings. After some users asked for automatic removal of such sounds to improve audio quality, we got to work and are now proud to present a major upgrade to our Denoiser, including automatic removal of mouth noises and a new “Remove Breathings” option!
Check out our Audio Examples and the Getting Started Guide below.

What is new?

  • Remove Breathings: When the new “Remove Breathings” option is enabled, all the inhalation ...
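
As a rough sketch under the same assumption as above, enabling the option could be a simple boolean in the production's algorithm settings (the field name is a placeholder, not necessarily the real API parameter):

    # Placeholder field name, for illustration only.
    algorithms = {
        "denoise": True,
        "remove_breathings": True,   # strip inhalation/exhalation sounds
    }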

For those of you who like to be in control of every applied cut, today we are introducing an update to our Automatic Silence and Filler Word Cutting algorithms: the export of Cut Lists lets you import the detected cuts into your favorite audio/video editor, check them, and apply them to your files manually.
Thanks to your great feedback, we were also able to update our “Filler Word Cutting” algorithm.

Cut Lists Export

The Auphonic Web Service now offers the export of “Cut Lists” in various formats.
You can use these formats to modify and ...
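
To illustrate the general idea, a cut list is simply a set of time ranges to remove. The sketch below uses Audacity's tab-separated label-track format (start, end, label) as one example of such a list and parses it in Python; Auphonic's actual export formats may differ in detail:

    # Example cut list in Audacity's label-track format: tab-separated
    # start time (s), end time (s), and a label for each detected cut.
    cut_list = """\
    12.350\t13.120\tsilence
    47.800\t48.260\tfiller
    95.410\t97.030\tsilence
    """

    def parse_cuts(text):
        """Parse tab-separated (start, end, label) lines into a list of cuts."""
        cuts = []
        for line in text.strip().splitlines():
            start, end, label = line.strip().split("\t")
            cuts.append((float(start), float(end), label))
        return cuts

    for start, end, label in parse_cuts(cut_list):
        print(f"cut {label!r}: {start:.3f}s - {end:.3f}s")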

We all know the problem: the content is perfectly prepared and everything is in place, but the moment you hit the record button, your brain freezes, and what pops out of your mouth is a rain of “ums”, “uhs”, and “mhs” that no listener would enjoy.
Cleaning up a recording like that by manually cutting out every single filler word is a painstaking task.

So we heard your requests to automate filler word removal, started implementing it, and are now very happy to release our new Automatic Filler Cutter feature. See our Audio ...

We're thrilled to introduce our Automatic Shownotes and Chapters feature. This AI-powered tool effortlessly generates concise summaries, intuitive chapter timestamps and relevant keywords for your podcasts, audio and video files.
See our Examples and the How To section below for details.

Why do I need Shownotes and Chapters?

In addition to links and other information, shownotes contain short summaries of the main topics of your episode, and inserted chapter marks allow you to timestamp the different topics of a podcast or video. This makes your content more accessible and user-friendly, enabling listeners to quickly navigate to specific sections ...
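
For illustration, chapter marks boil down to a list of start times and titles. The sketch below writes such a list in a simple "HH:MM:SS.mmm Title" text form, one common way podcast chapters are exchanged; it is only an example, not necessarily Auphonic's exact output format:

    # Hypothetical chapters as (start time in seconds, title) pairs.
    chapters = [
        (0, "Intro"),
        (95, "What are Shownotes?"),
        (512, "Automatic Chapter Marks"),
    ]

    def fmt(seconds):
        """Format a second offset as HH:MM:SS.mmm."""
        h, rest = divmod(seconds, 3600)
        m, s = divmod(rest, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

    for start, title in chapters:
        print(f"{fmt(start)} {title}")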

We've listened to your feedback and are excited to announce the introduction of metadata variables in Auphonic for more advanced use of our Basic and Extended Metadata.
This new feature allows you to use metadata fields from your input files to automate workflows. You can easily reference any field by using { curly brackets } and typing the field name, such as {title} ...
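
Conceptually, this works like template substitution over the production's metadata. A minimal sketch (not Auphonic's implementation) of how such {field} references resolve, using Python's built-in string formatting with example metadata and an example filename pattern:

    # Example metadata and an output filename pattern using {field} variables.
    metadata = {
        "title": "Episode 42",
        "artist": "Isabell",
        "track": "42",
    }

    pattern = "{track} - {artist} - {title}"
    print(pattern.format(**metadata))   # -> 42 - Isabell - Episode 42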

In addition to our Leveler, Denoiser, and Adaptive 'Hi-Pass' Filter, we are now releasing the missing equalization feature: the new Auphonic AutoEQ.
The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-esser) and to create a clear, warm, and pleasant sound. Listen to the audio examples below to get an idea of what it does.

Screenshot of manually ...

Today we release our first self-hosted Auphonic Speech Recognition Engine using the open-source Whisper model by OpenAI!
With Whisper, you can now integrate automatic speech recognition in 99 languages into your Auphonic audio post-production workflow, without creating an external account and without extra costs!

Whisper Speech Recognition in Auphonic

So far, Auphonic users had to choose one of our integrated external service providers (Wit.ai, Google Cloud Speech, Amazon Transcribe, Speechmatics) for speech recognition, which meant audio files were transferred to an external server and processed on external computing power that users had to pay for ...
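
Outside of Auphonic, the same open-source model can also be run locally with OpenAI's whisper Python package, for example:

    # Requires: pip install openai-whisper (plus ffmpeg on the system path)
    import whisper

    model = whisper.load_model("base")           # small multilingual model
    result = model.transcribe("interview.mp3")   # language is detected automatically
    print(result["text"])                        # the full transcript as plain text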

We are proud to announce that we recently joined the NVIDIA Inception Program, which will help us speed up our deep learning development process and therefore offer the best possible audio processing tools to our users.

What is NVIDIA Inception?

NVIDIA is a global leader in hardware and software for Artificial Intelligence (AI).
Their NVIDIA Inception Program will enable us to leverage NVIDIA's cutting-edge technology by giving us access to a more diverse range of cloud and GPU (Graphics Processing Unit) offerings, which power most Machine Learning and Deep Learning model training worldwide. This will allow us to streamline ...

Speechmatics released a new API with an enhanced transcription engine (2h free per month!), which we have now integrated into the Auphonic Web Service.
In this blog post, we also compare the accuracy of all our integrated speech recognition services and present our results.
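
A standard metric for such a comparison is the word error rate (WER): the fraction of substituted, inserted, and deleted words relative to a reference transcript. The blog post does not state which metric was used, but a minimal WER computation looks like this:

    def wer(reference: str, hypothesis: str) -> float:
        """Word error rate via word-level Levenshtein distance."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                               dp[i][j - 1] + 1,        # insertion
                               dp[i - 1][j - 1] + cost) # substitution
        return dp[-1][-1] / len(ref)

    print(wer("we compare speech recognition services",
              "we compared speech recognition service"))  # 0.4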


Automatic speech recognition is most useful for making audio searchable: even if automatically generated transcripts are not perfect and might be difficult to read (spoken text is very different from written text), they are very valuable if you are trying to find a specific topic within a one-hour audio file or if you need the exact ...