Sub-700ms AI video conversational latency 🤯🤯🤯
First, so as not to bury the lede: this is @heytavus's new digital twin API delivering ~670ms latency end-to-end ... from a client application, to the cloud, and back to the client.
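For context, "end-to-end" here means wall-clock time measured from the client's side of the round trip. A minimal sketch of how you'd measure that yourself (the `send_request` callable is a hypothetical stand-in for a real API call, not Tavus's actual SDK):

```python
import time

def measure_round_trip(send_request) -> float:
    """Time a client -> cloud -> client round trip, in milliseconds."""
    start = time.perf_counter()
    send_request()  # hypothetical blocking call to the conversational API
    return (time.perf_counter() - start) * 1000.0

# Stand-in for a real request; a production client would stream
# audio/video and measure to first response frame instead.
latency_ms = measure_round_trip(lambda: time.sleep(0.01))
print(f"end-to-end latency: {latency_ms:.0f}ms")
```

In practice you'd measure to the *first* returned frame or audio chunk, since conversational latency is about how quickly the twin starts responding, not when it finishes.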
That's very, very, very fast. Somewhere