Author archives: Isabell


We're thrilled to introduce our Automatic Shownotes and Chapters feature. This AI-powered tool effortlessly generates concise summaries, intuitive chapter timestamps and relevant keywords for your podcasts, audio and video files.
See our Examples and the How To section below for details.

Why do I need Shownotes and Chapters?

In addition to links and other information, shownotes contain short summaries of the main topics of your episode, and inserted chapter marks allow you to timestamp sections covering different topics of a podcast or video. This makes your content more accessible and user-friendly, enabling listeners to quickly navigate to specific sections ...
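To illustrate the underlying idea (a generic sketch, not Auphonic-specific), chapter marks are often stored as plain timestamp/title pairs; the hypothetical parser below turns such a list into offsets in seconds, which is what a player needs to jump to a section:

```python
# Hypothetical sketch: parse plain-text "HH:MM:SS Title" chapter marks
# (a common simple chapter format) into (seconds, title) pairs.

def parse_chapters(text):
    chapters = []
    for line in text.strip().splitlines():
        timestamp, title = line.split(" ", 1)
        h, m, s = (int(part) for part in timestamp.split(":"))
        chapters.append((h * 3600 + m * 60 + s, title))
    return chapters

marks = """\
00:00:00 Intro
00:02:30 Main Topic
00:45:10 Listener Questions"""

for seconds, title in parse_chapters(marks):
    print(seconds, title)
```

The chapter titles and times above are invented examples; real chapter formats may also carry links and images per chapter.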

We've listened to your feedback and are excited to announce the introduction of metadata variables in Auphonic for more advanced use of our Basic and Extended Metadata.
This new feature allows you to use metadata fields from your input files to automate workflows. You can easily reference any field by using { curly brackets } and typing the field name, such as {title} ...
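To illustrate how such { curly bracket } variables behave (a generic sketch of template substitution, not Auphonic's implementation; the field names are examples only), placeholders can be filled from a metadata dictionary, for instance to build an output filename:

```python
# Generic sketch of {field} template substitution, as used by
# metadata variables; field names and values are hypothetical.

metadata = {
    "title": "Episode 42",
    "artist": "Isabell",
    "album": "My Podcast",
}

template = "{album} - {title}"
output_basename = template.format(**metadata)
print(output_basename)  # "My Podcast - Episode 42"
```

Any metadata field present in the dictionary can be referenced by name in the template this way.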

In addition to our Leveler, Denoiser, and Adaptive 'Hi-Pass' Filter, we are now releasing the missing equalization feature with the new Auphonic AutoEQ.
The AutoEQ automatically analyzes and optimizes the frequency spectrum of a voice recording to remove sibilance (De-esser) and to create a clear, warm, and pleasant sound - listen to the audio examples below to get an idea of what it does.

Screenshot of manually ...

Today we release our first self-hosted Auphonic Speech Recognition Engine using the open-source Whisper model by OpenAI!
With Whisper, you can now integrate automatic speech recognition in 99 languages into your Auphonic audio post-production workflow, without creating an external account and without extra costs!

Whisper Speech Recognition in Auphonic

Until now, Auphonic users had to choose one of our integrated external service providers (Wit.ai, Google Cloud Speech, Amazon Transcribe, Speechmatics) for speech recognition, which meant audio files were transferred to an external server and processed using external computing power that users had to pay for ...

We are proud to announce that we recently joined the NVIDIA Inception Program, which will help to speed up our deep learning development process and therefore offer the best possible audio processing tools to our users.

What is NVIDIA Inception?

NVIDIA is a global leader in hardware and software for Artificial Intelligence (AI).
Their NVIDIA Inception Program will enable us to leverage NVIDIA's cutting-edge technology by accessing a more diverse range of cloud and GPU (Graphics Processing Unit) products, which power most Machine Learning and Deep Learning model training worldwide. This will allow us to streamline ...

Speechmatics released a new API with an enhanced transcription engine (2h free per month!), which we have now integrated into the Auphonic Web Service.
In this blog post, we also compare the accuracy of all our integrated speech recognition services and present our results.


Automatic speech recognition is most useful for making audio searchable: even if automatically generated transcripts are not perfect and might be difficult to read (spoken text is very different from written text), they are very valuable when you try to find a specific topic within a one-hour audio file or need the exact ...

Users of Auphonic and Hindenburg, rejoice! We are glad to announce that Hindenburg and Auphonic can now be connected, making your audio editing and post-production process much more straightforward.

You no longer have to worry about manually downloading from Hindenburg and uploading to Auphonic. The new integration lets you run a production edited in Hindenburg through the Auphonic algorithms and apply all audio post-production in just one click: you can seamlessly transfer the audio file from Hindenburg to Auphonic for further post-production. This step is carried out in ...

Our classic noise reduction algorithms remove broadband background noise and hum in audio files with slowly varying backgrounds.
We have now released the first beta version of the new dynamic noise reduction algorithms, which work much better with fast-changing and complex noises. Listen to the audio examples below; they demonstrate some of the new features and use cases!

Glitch While Streaming by Michael Dziedzic.

How to try out the Beta Denoiser

At the moment, only users with access to our advanced algorithm parameters can try the beta noise reduction algorithms (free users: please just ...