Expected Surprises | Part I: Restoring the Past

Just a few weeks ago, BabbleLabs launched its Clear Cloud speech enhancement product, available both as a cloud streaming API and through a web interface for handling individual files. The early results, as we expected, have been quite surprising ;-)

Find out what I mean in this four-part blog series. We’ll post a new part every few days over the next two weeks:

Part I: Restoring the Past
Part II: Rebuilding Speech 
Part III: What’s Missing?
Part IV: Better = Cheaper

Part I: Restoring the Past

First, we’ve been struck by the uniform response from people who make video — ordinary consumers, video bloggers, and professional video producers: “Wow! That’s pretty amazing.” The amateurs, I think, are simply shocked when you strip away the noise from a video — it feels as if the speaker has been transported out of the noisy scene to some quiet place, yet there they are, immersed in traffic, the windy South Atlantic, a club, an ice skating rink. The professionals know that, with great care in recording and post-production, distracting noises from field recording can be painstakingly eliminated, but it is expensive and uncertain. For them, the impact of Clear Cloud is how close it gets to professional, manual sound editing with no effort at all. It has become obvious to us that Clear Cloud has great potential in the video production, distribution, and sharing world.

Second, we’ve started to pick up anecdotes that suggest a whole ...

Continue Reading

We Love Noise

During my training as an engineer, one of the most important lessons I had to learn was “always check your assumptions.” The maxim applies in many circumstances: all too often, you find a mismatch between how things should behave and how they actually function. And this leads us to our main subject — why those of us building BabbleLabs, a speech company, care so deeply about noise in speech. We care so much that we derived our company's name from noise; we are BabbleLabs, not SpeechLabs, after all.

When I hear stories about how computers have achieved better-than-human speech recognition results, I wonder how that can be true when I still cannot successfully dictate a number to my phone. Even if short-message transcription works in the comfort of your private office, it completely falls apart in your car.

These observations are screaming: check your assumptions! It turns out that most automatic speech recognition (ASR) work has historically focused on anechoic conditions (i.e., without echoes or reverberation). Making ASR robust to real-world environments has been a second step, built on top of the anechoic work. To be clear, that approach is not wrong — you need to crawl before you can walk — but it has inherent limitations. At BabbleLabs, we have started from the other end of the problem, by addressing the noise.
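Clear Cloud's internals aren't public, so purely as an illustration of the simplest kind of corruption a noise-first approach must undo — additive noise — here is a minimal NumPy sketch (function name and signal choices are hypothetical) that mixes noise into clean speech at a chosen signal-to-noise ratio:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Mix noise into speech at a target SNR in dB.

    Both inputs are 1-D float arrays of the same length.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Scale the noise so speech_power / scaled_noise_power == 10 ** (snr_db / 10).
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example: a 440 Hz tone standing in for "speech", buried in white noise at 0 dB SNR.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)   # one second at 16 kHz
speech = np.sin(2 * np.pi * 440 * t)
noise = rng.standard_normal(t.shape)
noisy = mix_at_snr(speech, noise, snr_db=0.0)
```

A denoiser trained noise-first sees `noisy` as input and learns to recover `speech`; the harder, more realistic cases add reverberation and non-stationary noises on top of this.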

Noise, of course, means a lot of things; additive noise, modulated ...

Continue Reading

Clear Cloud: Behind the Curtain

The conventional wisdom in deep learning is that GPUs are essential compute tools for advanced neural networks. I’m always skeptical of conventional wisdom, but for BabbleLabs, it seems to hold true. The sheer performance of GPUs, combined with their robust support in deep learning programming environments, allows us to train bigger, more complex networks with vastly more data and deploy them commercially at low cost. GPUs are a key element in BabbleLabs’ delivery of the world’s best speech enhancement technology.

The deep learning computing model gives us powerful new tools for extracting fresh insights from masses of complex data — data that has long defied good systematic analysis with explicit algorithms. The model has already transformed vision and speech processing, rendering most conventional classification and generation methods obsolete in just the last five years. Deep learning is now being applied — often with spectacular results — across transportation, public safety, medicine, finance, marketing, manufacturing, and social media. This already makes it one of the most significant developments in computing in the past two decades. In time, we may place its impact in the same category as the “superstars” of tech transformation — the emergence of high-speed Internet and smart mobile devices.

The promise of deep learning is matched with a curse: it demands huge data sets and correspondingly huge computing resources for successful training and use. For example, a single full training of BabbleLabs’ most advanced speech enhancement network requires between 10¹⁹ and 10²⁰ floating point operations, using ...

Continue Reading
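To put 10¹⁹–10²⁰ floating point operations in perspective, here is a back-of-the-envelope sketch. The GPU throughput and utilization figures are illustrative assumptions, not BabbleLabs' actual hardware or numbers:

```python
# Back-of-the-envelope: how long does ~1e19-1e20 FLOPs take on one GPU?
# Peak throughput and utilization below are illustrative assumptions.
peak_flops = 100e12    # assume a GPU with 100 TFLOP/s peak (mixed precision)
utilization = 0.30     # training rarely sustains peak; assume 30% utilization
sustained = peak_flops * utilization

gpu_days = {total: total / sustained / 86400 for total in (1e19, 1e20)}
for total, days in gpu_days.items():
    print(f"{total:.0e} FLOPs -> {days:.1f} GPU-days")
```

Under these assumptions a single full training run spans days to weeks on one GPU, which is why multi-GPU training and framework support matter so much in practice.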