Small, Medium, Large: Finding the Right Company Fit

Rachel Gardner, 2018 Summer Intern, Stanford University Computer Science, Class of 2020

In looking for a job, there is a constant question: big company or small company? Rather than answer this question, I chose “all of the above” and interned at a medium-sized company (Silicon Labs), a large company (NVIDIA), and now a small company (BabbleLabs), one after the other. The first and most obvious difference is that I always have to explain what BabbleLabs does, as “I work at a deep learning startup” usually draws a fair amount of interest (along with a few knowing smiles). In case you were also wondering, BabbleLabs is a speech processing company that uses advanced neural networks in the cloud and on devices. It’s less than a year old, but has already launched its first product (five months after raising its first $4M).

With such a small company (about nine people in BabbleLabs’ San Jose office), I was immediately treated as a full-time employee, with everything that entails. In the often unstructured environment of a startup, I found that my past work at larger companies gave me the experience to impose my own structure: setting goals for the internship, calling meetings to discuss milestones, etc. It was clear that the decisions of the founding team were similarly influenced by experience with more established companies, both in terms of how to do things and in terms of what to avoid. Because of the small size of the company, I had the opportunity ...

Continue Reading

Expected Surprises | Part IV: Better = Cheaper

Part I: Restoring the Past
Part II: Rebuilding Speech 
Part III: What’s Missing?
Part IV: Better = Cheaper

Part IV: Better = Cheaper

Just in the last week, we realized another important way to leverage the remarkable speech enhancement progress of BabbleLabs. Normally, people think of better speech enhancement as delivering just that — an improved experience. But improvements in quality can sometimes be transmogrified into reductions in cost. This turns out to be true for both transmitting and storing speech. Communications systems have employed speech coding for decades to deliver adequate speech quality over narrow bandwidth. However, as the encoding becomes more aggressive — squeezing speech into the fewest possible bits per second — the quality of the speech suffers. On top of that, the most ambitious speech coding methods attempt to model the characteristics of human speech production.

Modern speech codecs assume a "source-filter" model of speech production, and typically use two excitation components: white Gaussian noise for unvoiced phonemes and a periodic pulse train for voiced speech sounds. They use linear predictive coding (LPC) for the filter that represents the resonances of the vocal tract. These concise models work pretty well in the absence of noise, but non-speech noise doesn’t encode well in these models, so noisy speech is often distorted by these speech coding methods, especially at lower bit rates.
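To make the source-filter idea concrete, here is a minimal Python sketch (not BabbleLabs code; the frame length, filter order, and synthetic test frame are purely illustrative) that estimates an all-pole LPC filter for a single frame and then resynthesizes the frame by driving that filter with either a pulse-train or white-noise excitation:

```python
# Minimal source-filter / LPC sketch (illustrative only, not a real codec).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coeffs(frame, order=10):
    """Return LPC coefficients a[1..order] so that A(z) = 1 + a1*z^-1 + ..."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    # Autocorrelation (Levinson-style) normal equations: Toeplitz(r) * a = -r[1:]
    return solve_toeplitz(r[:order], -r[1:order + 1])

def synthesize(frame, order=10, voiced=True, pitch_period=80):
    """Rebuild the frame by exciting the all-pole 'vocal tract' filter 1/A(z)."""
    a = lpc_coeffs(frame, order)
    if voiced:
        excitation = np.zeros(len(frame))
        excitation[::pitch_period] = 1.0           # periodic pulse train
    else:
        excitation = np.random.randn(len(frame))   # white Gaussian noise
    excitation *= np.std(frame) / (np.std(excitation) + 1e-12)  # rough gain match
    return lfilter([1.0], np.concatenate(([1.0], a)), excitation)

# Example: a synthetic "voiced" frame at 8 kHz (100 Hz pitch -> period of 80 samples)
sr, f0 = 8000, 100
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)
frame += 0.01 * np.random.randn(len(t))            # a little noise for conditioning
rebuilt = synthesize(frame, voiced=True, pitch_period=sr // f0)
```

The sketch also shows why noise is a problem for such codecs: anything in the frame that is not well described by "pulse train or noise through an all-pole filter" simply has nowhere to go in the model.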

Combining state-of-the-art speech codecs with state-of-the-art speech enhancement addresses these limitations quite well. We can use BabbleLabs Clear to remove noise, so that speech codecs (e.g., ...

Continue Reading

Expected Surprises | Part III: What’s Missing?

Part I: Restoring the Past
Part II: Rebuilding Speech 
Part III: What’s Missing?
Part IV: Better = Cheaper

Part III: What’s Missing?

I have long wondered why we see a proliferation of video cameras without sound capture. Sound is fundamentally cheaper to capture than video, and there are many potential benefits to capturing the sounds of public spaces, with or without video:

Traffic monitoring
Assessment of road conditions — the sound of tires on pavement directly depends on the amount of water or snow on the road surface
Estimation of weather — wind, rain and lightning have distinctive sounds
Localization of loud vehicles, drones and aircraft
Detection and localization of explosions, gunshots, and other anomalies

In fact, with the wide adoption of deep learning methods, we have even more powerful tools for extracting this kind of information from the cacophony of sounds found in public. And since microphones are much less expensive than cameras, and audio recordings require far less bandwidth than video, a mesh of microphones in public spaces could be a cost-effective source of priceless insights to make our environment safer, healthier, and more efficient. The same potential exists for recording in the home, on the factory floor, in transportation systems, in offices, hospitals, and other complex environments.
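As an illustration of what those deep learning tools can look like, here is a hypothetical sketch (the class labels, model shape, and file name are ours for illustration, not a description of any deployed system) of classifying acoustic events from log-mel spectrogram features:

```python
# Hypothetical acoustic-event classifier sketch: log-mel features + a tiny CNN.
import numpy as np
import librosa
import torch
import torch.nn as nn

CLASSES = ["traffic", "rain", "wind", "siren", "gunshot"]  # illustrative labels

def log_mel(path, sr=16000, n_mels=64):
    """Load audio and compute a log-mel spectrogram of shape (n_mels, frames)."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                         hop_length=256, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

class SoundEventNet(nn.Module):
    """Small CNN over log-mel input; global pooling handles variable clip length."""
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_mels, frames)
        return self.classifier(self.features(x).flatten(1))

# Usage (untrained weights, so predictions are meaningless until trained;
# "street_clip.wav" is a placeholder file name):
# feats = torch.tensor(log_mel("street_clip.wav")).float().unsqueeze(0).unsqueeze(0)
# probs = SoundEventNet()(feats).softmax(dim=-1)
# print(dict(zip(CLASSES, probs.squeeze().tolist())))
```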

So why don’t we do it? Principally because it is illegal!

In most jurisdictions around the world, it is forbidden to record a private conversation — even one in a public space — unless some or all participants in the conversation consent to recording. ...

Continue Reading