Transcribe in real-time

The quickest way to try transcribing for free is by creating a Speechmatics account and using our Real-Time Demo in your browser.

This page will show you how to use the Speechmatics Real-Time SaaS WebSocket API to transcribe your voice in real-time by speaking into your microphone.

You can also learn about on-prem deployments by following our guides.

Set up

  1. Create an account on the Speechmatics Portal.
  2. Navigate to the Manage Access page in the Speechmatics Portal.
  3. Enter a name for your API key and store it somewhere safe.
info

Enterprise customers should contact support@speechmatics.com to obtain their API keys.

Real-Time Python example

The examples below use the official Speechmatics Python library and CLI to help you get started. You can, of course, integrate with Speechmatics in the programming language of your choice by referring to the Real-Time API reference.

The Speechmatics Python library and CLI can be installed using pip:

pip3 install speechmatics-python
Transcribe a file in real-time using the Speechmatics Python Library. Just copy in your API key and file name to get started!

speechmatics config set --auth-token $API_KEY --generate-temp-token
speechmatics rt transcribe example.wav
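Under the hood, the CLI opens a WebSocket connection to the Real-Time API and begins the session with a `StartRecognition` message. As a minimal sketch using only the standard library, the snippet below builds that message; the message and field names follow the Real-Time API reference, while the specific `audio_format` values shown are illustrative:

```python
import json

# Build the StartRecognition message that opens a Real-Time session.
# max_delay and enable_partials are covered under "Transcript outputs" below.
start_recognition = {
    "message": "StartRecognition",
    "audio_format": {
        "type": "raw",
        "encoding": "pcm_f32le",   # illustrative: 32-bit float PCM
        "sample_rate": 16000,      # illustrative sample rate
    },
    "transcription_config": {
        "language": "en",
        "max_delay": 4,            # seconds before a Final is forced out
        "enable_partials": True,
    },
}

# The client sends this as a JSON text frame over the WebSocket,
# then streams binary audio frames.
payload = json.dumps(start_recognition)
print(payload)
```

After the server acknowledges the session, the client streams audio and receives transcript messages back as JSON, as described in the next section.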

Transcript outputs

The output format from the Speech API is JSON. Two types of transcript are provided: Final transcripts and Partial transcripts. Which one you consume depends on your use case and its latency and accuracy requirements.
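As a sketch of what these JSON messages look like, the snippet below parses a hand-written example. The `AddTranscript` / `AddPartialTranscript` message names and the `metadata.transcript` field follow the Real-Time API reference; the exact message body here is illustrative:

```python
import json

# Illustrative transcript message, shaped like the Real-Time API output.
raw = """{
  "message": "AddTranscript",
  "metadata": {"transcript": "Hello world.", "start_time": 0.0, "end_time": 1.2},
  "results": []
}"""

msg = json.loads(raw)
is_final = msg["message"] == "AddTranscript"   # Partials arrive as "AddPartialTranscript"
text = msg["metadata"]["transcript"]
print(f"{'FINAL' if is_final else 'PARTIAL'}: {text}")
```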

Final transcripts

Final transcripts are sentences or phrases delivered at irregular intervals. Once output, a Final transcript is never updated. The timing of the output is determined automatically by the Speechmatics ASR engine and is affected by pauses in speech and other parameters, resulting in a latency between audio input and transcript output. This latency can be adjusted using the max_delay property in transcription_config when starting the recognition session. Final transcripts are more accurate than Partial transcripts, and larger values of max_delay increase the accuracy further.
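The latency/accuracy trade-off can be sketched as two hypothetical transcription_config payloads for the StartRecognition message (the max_delay values here are illustrative, not recommendations):

```python
# max_delay is in seconds: smaller values force Finals out sooner,
# giving the engine less context; larger values delay Finals but
# let the engine refine them against more audio.
low_latency = {"language": "en", "max_delay": 2}
high_accuracy = {"language": "en", "max_delay": 8}
```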

Partial transcripts

A Partial transcript, or Partial, is a transcript that can be updated at a later point in time. By default, only Final transcripts are produced; Partials must be explicitly enabled using the enable_partials property in transcription_config for the session. After a Partial transcript is first output, the Speechmatics ASR engine can use additional audio data and context to update it. Partials are therefore available at very low latency, but with lower initial accuracy. They typically provide a latency (the time between audio input and initial output) of less than one second. Partials can be used in conjunction with Final transcripts to provide low-latency transcripts that are corrected over time.
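One common pattern is to show the latest Partial as provisional text after the accumulated Finals, replacing it whenever a newer message arrives. A minimal sketch (the message names follow the Real-Time API; the message shapes are simplified for illustration):

```python
class LiveTranscript:
    """Combine Final and Partial messages into one display string.

    Finals are appended permanently; the latest Partial is shown after
    them and replaced whenever a newer Partial or a Final arrives.
    """

    def __init__(self):
        self.finals = []
        self.partial = ""

    def handle(self, msg):
        text = msg["metadata"]["transcript"]
        if msg["message"] == "AddTranscript":          # Final: never updated
            self.finals.append(text)
            self.partial = ""                          # superseded by the Final
        elif msg["message"] == "AddPartialTranscript": # Partial: provisional
            self.partial = text

    def display(self):
        return " ".join(self.finals + ([self.partial] if self.partial else []))


live = LiveTranscript()
live.handle({"message": "AddPartialTranscript", "metadata": {"transcript": "hel"}})
live.handle({"message": "AddPartialTranscript", "metadata": {"transcript": "hello wor"}})
live.handle({"message": "AddTranscript", "metadata": {"transcript": "Hello world."}})
print(live.display())  # the Final replaces the Partials it refined
```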