AudioShake Debuts Multi-Speaker Separation

Natural human conversation is dynamic, with shifts in turn-taking, overlapping speech, emotion, and silence. But that very dynamism can create all kinds of problems for audio workflows. In film, TV, and podcasting, overlapping speech can make it difficult to edit individual speakers or isolate their voices for dubbing.

In transcription and captioning, speaker overlap and background noise can dramatically lower captioning quality. And in generative voice AI, multi-speaker speech output is typically a single track, meaning the user is stuck with what they’ve generated, with no ability to edit.

