If you are a business professional like me, I bet you spend at least a couple of hours a day, or even more, in meetings of some sort – with your colleagues, partners, or customers. Looking back at my own career, across 20+ years of work and study, I’ve probably spent 10+ years just in meetings. What started as in-person meetings evolved into audio conferencing. In just over a decade, we went from dedicated video conferencing and telepresence rooms to a distributed workforce accustomed to video calls from any device, anywhere. That is remarkable technological progress, bolstered by cloud computing, mobile devices, advances in audio-video codecs, and improvements in hardware chipsets and equipment design.
Organizations – small and large enterprises, education and even healthcare institutions – spend millions of dollars on expensive audio-visual equipment and the acoustic design of conference rooms to provide the best possible experience to their employees and customers. Yet audio quality is often impaired by distracting background noise: keyboard typing, HVAC systems, or café noise from remote participants. And when the audio quality is bad, the conference suffers. The whole team’s productivity suffers.
At BabbleLabs, we love speech. We love noise even more. We want to clearly distinguish speech from noise.
Speech reveals important aspects of a conversation – emotions, intent, cultural background, and more. With our industry expertise in deep learning, speech science, and embedded systems, we trained our novel deep neural networks with a ...