Meet AirTen: the fastest real-time audio runtime, period

We’re thrilled to officially launch AirTen – ai|coustics’ purpose-built neural network runtime. Designed specifically for real-time audio AI, AirTen delivers unmatched speed, safety, and portability.

And the best part? The entire runtime is smaller than the average photo stored on your phone – and it exclusively powers the models in our SDK.

What is AirTen?

AirTen (short for AirTensors) is our custom runtime for neural network inference – the critical phase where models move from training to action, executing in real-world applications and devices. In audio AI, every millisecond counts, and inference speed isn’t just a ‘nice-to-have’ – it’s essential.

So we built a new kind of engine. One that has:

  • A pure no_std Rust runtime (sketched below)
  • Zero dependencies, ensuring unmatched portability
  • A build tiny enough for microcontrollers, yet powerful enough for desktop and web
  • Integration with ai|coustics’ model delivery pipeline, available through our SDK
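
To make the no_std, zero-dependency point concrete, here is a rough, illustrative sketch of what an allocation-free inference step can look like in Rust. The names here – the Engine type, its weights, the frame size – are hypothetical examples for this post, not AirTen’s actual API.

    // Illustrative sketch only: hypothetical names, not AirTen's real API.
    // It demonstrates the no_std, zero-dependency, allocation-free style
    // described above: all state is fixed-size and created up front, so the
    // per-frame processing call never touches the heap.
    #![no_std]

    /// A hypothetical engine with compile-time-sized buffers.
    pub struct Engine<const FRAME: usize> {
        /// Model parameters baked into the binary; no file I/O at runtime.
        weights: &'static [f32],
        /// Fixed-size scratch space, so processing never allocates.
        scratch: [f32; FRAME],
    }

    impl<const FRAME: usize> Engine<FRAME> {
        pub fn new(weights: &'static [f32]) -> Self {
            Self { weights, scratch: [0.0; FRAME] }
        }

        /// Process one audio frame in place. Because nothing is allocated or
        /// locked here, the call's timing stays predictable frame after frame.
        pub fn process(&mut self, frame: &mut [f32; FRAME]) {
            for (i, sample) in frame.iter_mut().enumerate() {
                // Stand-in for real inference: apply a per-position learned gain.
                let gain = self.weights.get(i).copied().unwrap_or(1.0);
                self.scratch[i] = *sample * gain;
                *sample = self.scratch[i];
            }
        }
    }

The real runtime does far more than a per-position gain, of course, but the shape is the point: fixed-size state, no allocator, no operating-system dependencies – which is what lets the same code run on microcontrollers, desktop, and the web.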

Why not use existing runtimes?

There are plenty of general-purpose inference engines out there, but none of them is optimized for real-time, resource-constrained audio environments. Here’s what we ran into:

  • They couldn’t guarantee consistent timing – which can lead to pops, clicks, and audio glitches
  • They used too much memory and processing power, making them hard to run on smaller devices
  • They were difficult to set up across different platforms
  • They were missing key features our models rely on
  • They acted like a black box – hard to understand and even harder to customize

AirTen changes the game by giving you complete control and making no compromises on size, speed, or stability.

AirTen: Key benefits

Real-world performance: How AirTen stacks up

We tested AirTen against one of the most popular inference engines out there.

In summary:
AirTen is smaller, uses less memory, and runs twice as fast – perfect for devices where every millisecond and megabyte counts.

Keen to try it out?

AirTen is now available with the ai|coustics SDK – get in touch to learn more.
