What we do

ai|coustics is pioneering audio algorithms based on AI and Deep Learning models, fundamentally improving the capabilities of Speech Enhancement in digital communication and media content.

Does it matter how good a voice sounds? Yes! As humans we pick up a myriad of non-linguistic yet highly relevant information when listening to a voice. We want to hear intelligible, pleasant-sounding voices, and even more, we want our own voices to sound clear and convincing to listeners, because of the persuasive or off-putting effect a voice can have. Even if people do not perceive voices as consciously as, for example, the faces in a video call, the auditory effect of a voice goes well beyond the annoyance of bad sound: it can evoke emotions and trigger a range of physiological reactions.


The audio quality of digital content is often degraded by conditions such as background noise, inferior microphones, reverberant rooms, poor internet connections and other disturbances. This reduces intelligibility, causes misunderstandings and concentration difficulties, and ultimately hinders the overall communication experience in digital meetings and online media content.

Examples

Check out our examples:

01. radio phone-in from the highway
02. lecture in reverberant hall
03. the noisy reader
04. hightime at the office
05. question from the audience

Technology

To counteract these artifacts and the broader problem of low-quality speech, ai|coustics is developing Speech Enhancement algorithms based on generative Deep Learning models.

Different neural-network architectures such as autoencoders, generative adversarial networks and diffusion probabilistic models are trained on high-quality audio data. In this crucial training process, the generative models, whose architectures are tailored to speech, are fed through a proprietary input pipeline that simulates real-world, quality-degrading audio artifacts, drawing on a large, custom-built database of specially processed and granularly categorized speech audio files.
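
To make this training setup more concrete, here is a minimal, purely illustrative Python sketch of what such a degradation pipeline could look like: clean speech is turned into (degraded, clean) training pairs by randomly stacking reverb, bandlimiting and noise. All function names, parameters and probabilities are assumptions for illustration and do not describe ai|coustics' actual pipeline.

# Illustrative sketch of a degradation pipeline that turns clean speech into
# (degraded, clean) training pairs. All names and parameter choices are
# assumptions for illustration, not ai|coustics' actual code.
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

RNG = np.random.default_rng(0)

def add_noise(clean, snr_db):
    """Mix in white noise at a given signal-to-noise ratio."""
    noise = RNG.standard_normal(len(clean))
    clean_power = np.mean(clean ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

def bandlimit(signal, sample_rate, cutoff_hz):
    """Simulate a cheap microphone or narrow codec with a low-pass filter."""
    b, a = butter(4, cutoff_hz / (sample_rate / 2), btype="low")
    return lfilter(b, a, signal)

def add_reverb(signal, sample_rate, rt60_s=0.5):
    """Convolve with a crude exponentially decaying noise burst as a
    stand-in for a measured room impulse response."""
    ir_len = int(rt60_s * sample_rate)
    t = np.arange(ir_len) / sample_rate
    ir = RNG.standard_normal(ir_len) * np.exp(-6.9 * t / rt60_s)
    ir[0] = 1.0  # keep the direct path dominant
    wet = fftconvolve(signal, ir)[: len(signal)]
    return wet / (np.max(np.abs(wet)) + 1e-12)

def degrade(clean, sample_rate):
    """Randomly stack degradations to produce one training input."""
    x = clean
    if RNG.random() < 0.7:
        x = add_reverb(x, sample_rate, rt60_s=RNG.uniform(0.2, 1.0))
    if RNG.random() < 0.7:
        x = bandlimit(x, sample_rate, cutoff_hz=RNG.uniform(2000, 6000))
    if RNG.random() < 0.9:
        x = add_noise(x, snr_db=RNG.uniform(0, 20))
    return x.astype(np.float32), clean.astype(np.float32)

A production pipeline would additionally draw on measured room impulse responses, recorded noise databases and real codecs rather than the synthetic stand-ins used here.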

Our Speech Enhancement algorithms and DL models go far beyond noise suppression; we are innovating AI models for further speech enhancement functions such as the following (an illustrative sketch of the packet-loss case appears after the list):

  • removal of unwanted reverb and improvement of indistinct sound
  • microphone correction
  • sound enhancement: recovery of lost frequencies
  • repair of data compression artifacts
  • repair of packet loss.
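
To illustrate one of these functions, the sketch below simulates packet loss on a speech signal and repairs the gaps with a naive "repeat the last good frame" baseline; a generative model would instead synthesize plausible speech for the missing frames. The function names and parameters are illustrative assumptions, not part of ai|coustics' models.

# Sketch of the packet-loss problem: drop random 20 ms frames from a speech
# signal and fill the gaps with a naive repetition baseline. This only shows
# the gap a generative model has to fill; names and parameters are assumptions.
import numpy as np

def simulate_packet_loss(signal, sample_rate, frame_ms=20, loss_rate=0.1, seed=0):
    """Zero out randomly chosen frames, returning the lossy signal and a mask."""
    rng = np.random.default_rng(seed)
    frame_len = int(sample_rate * frame_ms / 1000)
    lossy = signal.copy()
    lost = np.zeros(len(signal), dtype=bool)
    for start in range(0, len(signal) - frame_len, frame_len):
        if rng.random() < loss_rate:
            lossy[start:start + frame_len] = 0.0
            lost[start:start + frame_len] = True
    return lossy, lost

def conceal_by_repetition(lossy, lost, sample_rate, frame_ms=20):
    """Naive baseline: copy the previous intact frame into each lost frame."""
    frame_len = int(sample_rate * frame_ms / 1000)
    repaired = lossy.copy()
    for start in range(frame_len, len(lossy) - frame_len, frame_len):
        if lost[start]:
            repaired[start:start + frame_len] = repaired[start - frame_len:start]
    return repaired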

This results in an unparalleled quality gain for inferior audio signals and allows the use of simpler hardware: a cheap headset can sound like a professional microphone in a recording studio, even when communicating over a poor internet connection.

Services

ai|coustics’ Speech Enhancement models are available via licensing to partners who want to offer a better audio experience to their customers and stand out from the competition, e.g. for:

  • Audio/video chat and voice messaging software
  • Integration into operating systems
  • Hardware integrations (e.g. handsfree solutions, speakerphones, microphones, headphones, hearing aids)
  • Social media, video, podcast and education platforms
  • Audio editing and restoration software, e.g. for podcast, broadcast and legacy content
  • Speech-to-text systems

For integrations into third-party software, we offer SDKs optimized for different target systems that allow easy integration without years of development and model-training effort. Custom-trained models are part of the offering to exactly match a client's needs.
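
As a purely hypothetical illustration of what such an integration could look like, the sketch below wraps a block-based enhancer in an application's audio callback. The class, function names and parameters are invented for this example and do not describe the actual SDK interface.

# Hypothetical sketch of an SDK integration in an application's audio path.
# The stand-in class "SpeechEnhancer" and all parameters are invented for
# illustration and do not describe the real SDK.
import numpy as np

class SpeechEnhancer:
    """Stand-in for an SDK object that enhances audio block by block."""
    def __init__(self, sample_rate: int, block_size: int):
        self.sample_rate = sample_rate
        self.block_size = block_size

    def process(self, block: np.ndarray) -> np.ndarray:
        # A real SDK would run the trained model here; the stand-in
        # simply passes audio through unchanged.
        return block

def audio_callback(enhancer: SpeechEnhancer, in_block: np.ndarray) -> np.ndarray:
    """Call the enhancer inside the host application's real-time audio path."""
    return enhancer.process(in_block)

# Example wiring for a 48 kHz stream processed in 10 ms blocks.
enhancer = SpeechEnhancer(sample_rate=48000, block_size=480)
out = audio_callback(enhancer, np.zeros(480, dtype=np.float32))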

Contact

We’d love to hear from you!

If you're interested in exchanging ideas about audio AI, if your company is considering using speech enhancement algorithms, or if you're an audio expert who wants to work in a fun company, please contact us.


ai-coustics UG
Hardenbergstraße 38
10623 Berlin

Represented by:
Fabian Seipel, Corvin Jaedicke

Phone: +49 (0)30 63415175
E-mail: info@ai-coustics.com

www.linkedin.com/company/ai-coustics/

ai|coustics is funded by the German Federal Ministry for Economic Affairs and Energy and the European Social Fund as part of the EXIST program.