Using the Container
Once the Docker image has been pulled into a local environment, it can be started using the Docker run command. More details about operating and managing the container are available in the Docker API documentation.
This page covers starting the container for both deployment types:
- Batch transcription
- Real-Time transcription
The supported methods for passing an audio file into the batch container are:
- STDIN: streams the audio file into the container through the standard command line entry point
- File Location: reads the audio file from a location mapped into the container
Example 1: passing a file using the `cat` command to the English (en) container
cat ~/$AUDIO_FILE | docker run -i \
-e LICENSE_TOKEN=eyJhbGciOiJ... \
batch-asr-transcriber-en:9.2.0
Example 2: pulling an audio file from a mapped directory into the container
docker run -i -v ~/$AUDIO_FILE:/input.audio \
-e LICENSE_TOKEN=eyJhbGciOiJ... \
batch-asr-transcriber-en:9.2.0
NOTE: the audio file must be mapped into the container with :/input.audio
The Docker run options used are:
Name | Description |
---|---|
--env, -e | Set environment variables |
--interactive, -i | Keep STDIN open even if not attached |
--volume, -v | Bind mount a volume |
See Docker docs for a full list of the available options.
Both methods produce the same transcription output. The transcription is returned on STDOUT in JSON format. Here's an example:
{
"format": "2.8",
"metadata": {
"created_at": "2020-06-30T15:43:50.871Z",
"type": "transcription",
"language_pack_info": {
"adapted": false,
"itn": true,
"language_description": "English",
"word_delimiter": " ",
"writing_direction": "left-to-right"
},
"transcription_config": {
"language": "en",
"diarization": "none",
"additional_vocab": [
{
"content": "Met Office"
},
{
"content": "Fitzroy"
},
{
"content": "Forties"
}
]
}
},
"results": [
{
"alternatives": [
{
"confidence": 1.0,
"content": "Are",
"language": "en",
"speaker": "UU"
}
],
"end_time": 3.61,
"start_time": 3.49,
"type": "word"
},
{
"alternatives": [
{
"confidence": 1.0,
"content": "on",
"language": "en",
"speaker": "UU"
}
],
"end_time": 3.73,
"start_time": 3.61,
"type": "word"
},
{
"alternatives": [
{
"confidence": 1.0,
"content": "the",
"language": "en",
"speaker": "UU"
}
],
"end_time": 3.79,
"start_time": 3.73,
"type": "word"
},
{
"alternatives": [
{
"confidence": 1.0,
"content": "rise",
"language": "en",
"speaker": "UU"
}
],
"end_time": 4.27,
"start_time": 3.79,
"type": "word"
}
]
}
Intermediate files
The intermediate files created during transcription are stored in /home/smuser/work. This is the case whether the container is run as a root or non-root user.
Determining success
The exit code of the container will determine if the transcription was successful. There are two exit code possibilities:
- Exit Code == 0 : The transcription was a success; the output will contain the transcript in the JSON format shown above
- Exit Code != 0 : The transcription failed; the output will contain a stack trace and other diagnostic information. Include this output in any communication with Speechmatics support to help diagnose and resolve the problem
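For example, a minimal wrapper (a sketch only; $AUDIO_FILE, $TOKEN_VALUE and transcript.json are placeholders) could capture STDOUT to a file and branch on the container's exit code:
cat ~/$AUDIO_FILE | docker run -i \
  -e LICENSE_TOKEN=$TOKEN_VALUE \
  batch-asr-transcriber-en:9.2.0 > transcript.json
if [ $? -eq 0 ]; then
  echo "Transcription succeeded; transcript written to transcript.json"
else
  echo "Transcription failed; keep the container output for Speechmatics support" >&2
fi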
Modifying the Image
Building an Image
Using STDIN to pass files in and obtain the transcription may not be sufficient for all use cases. If required for your specific workflow, it is possible to build a new Docker image that uses the Speechmatics image as a layer. To include the Speechmatics Docker image inside another image, reference the pulled image in the Dockerfile for the new application.
Requirements for a custom image
To ensure the Speechmatics Docker image works as expected inside the custom image, please consider the following:
- Any audio that needs to be transcribed must be copied to a file called /input.audio inside the running container
- To initiate transcription, call the application pipeline. The pipeline will start the transcription service and use /input.audio as the audio source
- When running pipeline, the working directory must be set to /opt/orchestrator, using either the Dockerfile WORKDIR directive, the cd command or similar means
- Once pipeline finishes transcribing, ensure you move the transcription data outside the container
Dockerfile
To add a Speechmatics Docker image into a custom one, the Dockerfile must be modified to include the full image name of the locally available image.
Example: Adding Global English (en) with tag 9.2.0 to the Dockerfile
FROM batch-asr-transcriber-en:9.2.0
ADD download_audio.sh /usr/local/bin/download_audio.sh
RUN chmod +x /usr/local/bin/download_audio.sh
CMD ["/usr/local/bin/download_audio.sh"]
Once the above image is built and a container instantiated from it, a script called download_audio.sh will be executed (this could do something like pulling a file from a webserver and copying it to /input.audio before starting the pipeline application). This is a very basic Dockerfile that demonstrates one way of orchestrating the Speechmatics Docker Image.
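As a rough sketch only (the download URL is a placeholder, curl is assumed to be available in the image, and the exact location of pipeline may need adjusting for your image), such a script could look like this:
#!/bin/bash
set -e
# Fetch the audio from a web server (placeholder URL) and place it where the transcriber expects it
curl -sSf -o /input.audio "https://example.com/audio/recording.wav"
# pipeline (assumed to live in /opt/orchestrator) must be run with /opt/orchestrator as the working directory
cd /opt/orchestrator
exec ./pipeline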
NOTE: For support purposes, it is assumed the Docker image provided by Speechmatics has not been modified. If you experience issues, Speechmatics support will require you to replicate them with the unmodified Docker image, e.g. batch-asr-transcriber-en:9.2.0
Starting the Real-Time Container
The Real-Time Container is started in the same way, with its ports exposed to the host so that clients can connect:
docker run -p 9000:9000 -p 8001:8001 -e LICENSE_TOKEN='example' rt-asr-transcriber-en:2.2.0
The Docker run options used are:
Name | Description |
---|---|
--publish, -p | Expose ports on the container so that they are accessible from the host |
--env, -e | Set the value of an environment variable |
See Docker docs for a full list of the available options.
Input Modes
The supported method for passing audio to a Real-Time Container is to use a WebSocket. A session is set up with configuration parameters passed in using a StartRecognition message, and thereafter audio is sent to the container in binary chunks, with transcripts being returned in AddTranscript messages.
In the AddTranscript message, individual result segments are returned, corresponding to audio segments defined by pauses (and other latency measurements).
Output
The results list in the V2 output format is sorted by increasing start_time, with a supplementary rule to sort by decreasing end_time. See below for an example:
{
"message": "AddTranscript",
"format": "2.8",
"metadata": {
"transcript": "full tell radar",
"start_time": 0.11,
"end_time": 1.07
},
"results": [
{
"type": "word",
"start_time": 0.11,
"end_time": 0.4,
"alternatives": [{ "content": "full", "confidence": 0.7 }]
},
{
"type": "word",
"start_time": 0.41,
"end_time": 0.62,
"alternatives": [{ "content": "tell", "confidence": 0.6 }]
},
{
"type": "word",
"start_time": 0.65,
"end_time": 1.07,
"alternatives": [{ "content": "radar", "confidence": 1.0 }]
}
]
}
Transcription duration information
The container will output a log message after every transcription session to indicate the duration of speech transcribed during that session. This duration only includes speech, and not any silence or background noise which was present in the audio. It may be useful to parse these log messages if you are asked to report usage back to us, or simply for your own records.
The format of the log messages produced should match the following example:
2020-04-13 22:48:05.312 INFO sentryserver Transcribed 52 seconds of speech
Consider using the following regular expression to extract just the seconds part from the line if you are parsing it:
^.+ .+ INFO sentryserver Transcribed (\d+) seconds of speech$
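For example, assuming the container was started with a name of your choosing (rt-transcriber is a placeholder here), the values could be extracted from the container logs with standard tools:
docker logs rt-transcriber 2>&1 \
  | sed -nE 's/^.+ .+ INFO sentryserver Transcribed ([0-9]+) seconds of speech$/\1/p'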
Running a Container in Read-Only Mode
Users may wish to run the container in read-only mode. This may be necessary due to their regulatory environment, or a requirement not to write any media file to disk. An example of how to do this is below.
docker run -it --read-only \
-p 9000:9000 \
--tmpfs /tmp \
-e LICENSE_TOKEN=$TOKEN_VALUE \
rt-asr-transcriber-en:2.2.0
The container still requires a temporary directory with write permissions. Users can provide a directory (e.g. /tmp) by using the --tmpfs Docker argument. A tmpfs mount is temporary and only persisted in the host memory. When the container stops, the tmpfs mount is removed and files written there won't be persisted.
If customers want to use the shared custom dictionary cache feature, they must also specify the location of the cache and mount it as a volume:
docker run -it --read-only \
-p 9000:9000 \
--tmpfs /tmp \
-v /cachelocation:/cache \
-e LICENSE_TOKEN=$TOKEN_VALUE \
-e SM_CUSTOM_DICTIONARY_CACHE_TYPE=shared \
rt-asr-transcriber-en:2.2.0
Running a Container as a non-root user
A Real-Time container can be run as a non-root user with no impact to feature functionality. This may be required if a hosting environment or a company's internal regulations specify that a container must be run as a named user.
Users may run the container as a non-root user by passing --user $USERNUMBER:$GROUPID to the docker run command. The user number and group ID must be non-zero numerical values from 1 up to 65535.
An example is below:
docker run -it --user 100:100 \
-p 9000:9000 \
-e LICENSE_TOKEN=$TOKEN_VALUE \
rt-asr-transcriber-en:2.2.0
How to use a Shared Custom Dictionary Cache
For more information on how the Custom Dictionary works, please see the Speech API Guide.
The Speechmatics Real-Time Container includes a cache mechanism for custom dictionaries to improve set-up performance for repeated use. By using this cache mechanism, transcription will start more quickly when repeatedly using the same custom dictionaries. You will see performance benefits on re-using the same custom dictionary from the second time onwards.
It is not a requirement to use the shared cache to use the Custom Dictionary.
The cache volume is safe to use from multiple containers concurrently if the operating system and its filesystem support file locking operations. The cache can store multiple custom dictionaries in any language used for transcription. It can support multiple custom dictionaries in the same language.
If a custom dictionary is small enough to be stored within the cache volume, this will take place automatically if the shared cache is specified.
For more information about how the shared cache storage management works, please see Maintaining the Shared Cache.
We highly recommend you ensure any location you use for the shared cache has enough space for the number of custom dictionaries you plan to allocate there. How to allocate custom dictionaries to the shared cache is documented below.
How to set up the Shared Cache
The shared cache is enabled by setting the following values when running transcription:
- Cache Location: You must volume map the directory location you plan to use as the shared cache to /cache when submitting a job
- SM_CUSTOM_DICTIONARY_CACHE_TYPE: (mandatory if using the shared cache) This environment variable must be set to shared
- SM_CUSTOM_DICTIONARY_CACHE_ENTRY_MAX_SIZE: (optional if using the shared cache) This determines the maximum size, in bytes, of any single custom dictionary that can be stored within the shared cache
  - E.g. a SM_CUSTOM_DICTIONARY_CACHE_ENTRY_MAX_SIZE value of 10000000 would cap the stored size of any single custom dictionary at 10 MB
  - For reference, a custom dictionary wordlist with 1000 words produces a cache entry of around 200 kB, or 200000 bytes
  - A value of -1 will allow every custom dictionary to be stored within the shared cache. This is the default value
  - A custom dictionary cache entry larger than SM_CUSTOM_DICTIONARY_CACHE_ENTRY_MAX_SIZE will still be used in transcription, but will not be cached
Maintaining the Shared Cache
If you specify the shared cache to be used and your custom dictionary is within the permitted size, the Speechmatics Real-Time Container will always try to cache the custom dictionary. If a custom dictionary cannot fit in the shared cache because of other custom dictionaries already cached there, older custom dictionaries will be removed from the cache to free up as much space as necessary for the new one. This removal is carried out in order of least recently used.
Therefore, you must ensure your cache allocation is large enough to handle the number of custom dictionaries you plan to store. We recommend a relatively large cache (e.g. 50 MB) to avoid this situation if you are processing multiple custom dictionaries using the batch container. If you don't allocate sufficient storage, one or more existing custom dictionaries may be deleted when you try to store a new one.
It is recommended to use a Docker volume backed by a dedicated filesystem of limited size. If a user decides to use a volume that shares a filesystem with the host, it is the user's responsibility to purge the cache if necessary.
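One way to do this (a sketch; the volume name and 50m limit are arbitrary examples, and tmpfs-backed volumes live in host memory, so the cache will not survive a host restart) is to create a size-limited volume:
docker volume create --driver local \
  --opt type=tmpfs \
  --opt device=tmpfs \
  --opt o=size=50m \
  speechmatics-cache-limited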
Creating the Shared Cache
In the example below, a local Docker volume is created for the shared cache and transcription is then run against it. The cache will allow custom dictionaries of up to 5 MB to be stored.
docker volume create speechmatics-cache
docker run --rm -d \
-p 9000:9000 \
-e LICENSE_TOKEN='example' \
-e SM_CUSTOM_DICTIONARY_CACHE_TYPE=shared \
-e SM_CUSTOM_DICTIONARY_CACHE_ENTRY_MAX_SIZE=5000000 \
-v speechmatics-cache:/cache \
rt-asr-transcriber-en:2.2.0
speechmatics transcribe --additional-vocab gnocchi --url ws://localhost:9000/v2 --ssl-mode=none test.mp3
Viewing the Shared Cache
If everything is set up correctly and the cache has been used for the first time, a single entry should be present in the cache.
The following example shows how to check what Custom Dictionaries are stored within the cache. This will show the language, the sampling rate, and the checksum value of the cached dictionary entries.
ls $(docker inspect -f "{{.Mountpoint}}" speechmatics-cache)/custom_dictionary
en,16kHz,bef53e5bcca838a39c3707f1134bda6a09ff87aaa09203617528774734455edd
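To check how much free space remains on the filesystem backing the cache volume (relevant to the sizing recommendation above), something like the following could be used:
df -h $(docker inspect -f "{{.Mountpoint}}" speechmatics-cache)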
Reducing the Shared Cache Size
Cache size can be reduced by removing some or all cache entries.
rm -rf $(docker inspect -f "{{.Mountpoint}}" speechmatics-cache)/custom_dictionary/*
Before manually purging the cache, ensure that no containers have the volume mounted, otherwise an error during transcription might occur. Consider creating a new docker volume as a temporary cache while performing purging maintenance on the cache.
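As a quick check before purging (using the volume name from the earlier example), you can list any running containers that still have the cache volume mounted:
docker ps --filter volume=speechmatics-cache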