A New Chapter for BabbleLabs…and Collaboration

In the past six months, the meaning of the phrase “going to work” has been upended. With travel restrictions, shelter-in-place orders and shuttered offices, all kinds of work are now remote. We spend our days and nights in video conferences and collaboration tools. I talk as much as ever, but so much more of that talk is through my laptop and phone. I am wholly dependent on virtual collaboration tools, and most especially on live speech streams with co-workers around the world. We have all necessarily become adept at squeezing the greatest possible understanding from flawed audio channels. The problems are obvious to all of us: background noise, reverberation, loss of fidelity through speech compression, and frozen, stuttering speech due to network packet loss. More than ever, we need to do professional work in decidedly non-professional environments.

Since a small group of us founded BabbleLabs in late 2017, we have had a single-minded focus on using the latest ideas in deep learning and speech science to revolutionize the remote speech experience. In just over two and a half years, we’ve brought to market remarkable software that essentially removes all background noise and enhances comprehension by both people and machines under the toughest real-world conditions. In fact, the application of deep learning to speech quality has triggered a huge change in what advanced software can do for noisy speech. We recently researched, implemented and measured the state-of-the-art noise reduction algorithms over the past 40 ...


The Advantages of an AI IoT Speech Interface on the Edge

A lively group of semiconductor engineers, hardware and software design managers, and tech journalists from around the globe joined a virtual happy hour session I co-hosted recently on AI IoT devices as part of the Design Automation Conference. We discussed the usual smart watches and temperature controls, as well as a few lesser-known technologies, including smart-kill Wi-Fi-enabled rat traps and AI-enabled IoT chicken collars. Of course, everyone had their own story about the cloud-based voice recognition technologies used in smart speakers; it turns out most have had unhappy experiences with voice technology.

This feedback comes as no surprise to BabbleLabs. It validates our belief – and the basis for part of our business model – that cloud-based automatic speech recognition (ASR) technology accomplishes only part of the goal for IoT devices. As session participants noted, the technology disappoints when you most want it to recognize what you say, compromises user privacy, can be painfully slow, diminishes the brand experience, and carries a high cost, limiting its potential for useful applications. All of these issues can be addressed through noise-optimized speech recognition technology deployed locally, on the edge, as offered through BabbleLabs Clear Command.

The Cloud Conundrum
Today, most popular speech recognition technologies rely on a wake word, recognized by embedded technology on the device, that signals when it’s time to act. For example, after hearing “OK Google,” the device passes the speech that follows to the cloud, where complex algorithms are applied to the phrases to identify the best match – a sometimes time-consuming process ...
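The gating pattern described above – a small always-on detector running locally, with cloud ASR invoked only after the wake word fires – can be sketched roughly as follows. This is a minimal illustration, not BabbleLabs code: the names `detect_wake_word` and `send_to_cloud_asr` are hypothetical stand-ins for an on-device keyword-spotting model and a cloud streaming-recognition call, and text strings stand in for audio frames.

```python
# Sketch of wake-word gating: audio stays on-device until the wake
# word is spotted; only the utterance after it goes to the cloud.
WAKE_WORD = "ok google"

def detect_wake_word(frame: str) -> bool:
    """Stand-in for a tiny, always-on keyword-spotting model."""
    return WAKE_WORD in frame.lower()

def send_to_cloud_asr(frames):
    """Stand-in for streaming buffered speech to a cloud ASR service."""
    return " ".join(frames)  # pretend the cloud returns a transcript

def run(audio_frames):
    transcripts = []
    buffering = False
    utterance = []
    for frame in audio_frames:
        if not buffering:
            # Everything before the wake word never leaves the device.
            if detect_wake_word(frame):
                buffering = True
        elif frame == "<silence>":  # end-of-utterance marker
            transcripts.append(send_to_cloud_asr(utterance))
            utterance, buffering = [], False
        else:
            utterance.append(frame)
    return transcripts

# run(["hello", "OK Google", "turn on", "the lights", "<silence>"])
# → ["turn on the lights"]
```

Note how the round trip to the cloud sits squarely on the critical path of every command after the wake word – which is where the latency, privacy and cost issues above come from, and what on-device recognition removes.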
