Product Plugins: Transcription - Azure Speech SDK

Transcription

Transcription is a feature of the ShareDo platform that allows quick and easy speech-to-text conversion across a number of ShareDo functions. For more information about how you can use the Transcription functionality, see the Transcription article.

Provider

Azure Speech is one of the available providers for the Transcription functionality in the ShareDo platform. It operates on a BYOL (Bring Your Own License) basis. We recommend you check out the Azure Speech website and trial the service before committing. Once you're happy and have an agreement, tier, contract or package in place, you will be able to use your credentials to set up Azure Speech as a provider in ShareDo.

You can find more about Azure Speech on the Microsoft Learn site. You can register for an account and get credentials from the Azure Portal.

Installation

To have Azure Speech installed in your ShareDo environment, you must be on ShareDo version 7.9.0 or later.

Reach out to your CSM, who will install the latest version of the plugin for you: first on a non-production environment for you to test, and then on production.

Configuration

The configuration of a transcription plugin involves two steps:

  • Configuring the Global Feature.
  • Configuring a Linked Service for authentication.

Global Feature Config

  1. Navigate to Modeller > Global Features.
  2. Search for the Transcription feature.
  3. Ensure the checkbox is enabled, then click the green cog to open the configuration blade.
  4. On this screen, a drop-down lets you choose the provider (if you have more than one provider installed); choose Azure Speech Service.
  5. Next, the Global Feature checks whether there is a matching Linked Service and whether it is configured correctly. The status message will be red, amber, or green.
    1. Red - No Linked Service found.
    2. Amber - Linked Service exists, but some fields are missing or incorrect.
    3. Green - Everything looks good and is working as expected.
  6. You can configure the Linked Service directly from the button on the Global Feature blade, but we'll come back to that shortly.
  7. For now, you need to select your Azure Region and the Speech Recognition Language.
    Please consult your IT Administrator, or whoever will be providing your API Key, as they will be able to let you know which Azure region to select.
    In our case, we created our API Key in the uksouth Azure Region and are operating speech recognition in English, specifically the en-GB locale (the ISO 639-1 language code en combined with the GB region code).
  8. Don't forget to Save and Close the Global Feature.
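If you are unsure which Azure Region to pick, you can check the region/key pairing yourself before configuring ShareDo. The sketch below is illustrative only: it calls the standard Azure Cognitive Services token endpoint, which only accepts a key in the region it was created in, so an HTTP 200 from a candidate region confirms the right choice. The uksouth region and the helper names are our own assumptions.

```python
# Illustrative sketch: confirm which Azure Region a Speech API key belongs to.
# The token endpoint only issues a token for keys created in that region.
from urllib import request, error

def token_url(region: str) -> str:
    # Standard Cognitive Services token-issuance endpoint for a region
    return f"https://{region}.api.cognitive.microsoft.com/sts/v1.0/issueToken"

def key_matches_region(region: str, api_key: str) -> bool:
    # POST with the key in the Ocp-Apim-Subscription-Key header;
    # a 200 response means the key was created in this region.
    req = request.Request(token_url(region), method="POST",
                          headers={"Ocp-Apim-Subscription-Key": api_key})
    try:
        with request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except error.HTTPError:
        return False  # 401/403 etc. - wrong key or wrong region

print(token_url("uksouth"))
```

For example, `key_matches_region("uksouth", "<your key>")` returning True tells you to select uksouth in the Global Feature configuration (this requires network access and a valid key).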

Linked Service Config

Next, we need to configure the Linked Service. If you are still on the Transcription Global Feature Blade, you can click Configure it. Otherwise, you can navigate to Admin > Integrations > Manage Linked Services.

  1. Look for a service called Azure Speech Service.
  2. If it doesn't exist, create it by clicking the green Add New icon in the top right-hand corner.
  3. Then, choose Shared secret from the list of available service providers.
  4. You will need to populate the following fields:
Field                              Value
System-name                        azurespeechservice
Name                               [No Specific Value] Azure Speech Service
Icon                               [No Specific Value] We use 'fa-microphone'
Description                        [Optional]
Secret                             {Your own API key}
Allow fallback to system secret    Yes
API Base URL                       https://{azure-region}.api.cognitive.microsoft.com
HTTP Header to send token in       Ocp-Apim-Subscription-Key

  • Fields in bold are mandatory.
  • Fields marked [No Specific Value] can be set as you please.
  • Secret must be populated with your own API Key.
  • The API Base URL depends on the Azure region your API Key was created in; replace {azure-region} accordingly (uksouth in our example).
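As a quick self-check before saving, the field values above can be captured and validated in a small script. This is an illustrative sketch only: the dictionary keys are our own invention (not a ShareDo API), and the pattern simply checks that the base URL embeds an Azure region in the expected shape.

```python
# Illustrative checklist of the Linked Service values; key names are
# hypothetical and chosen for readability, not taken from ShareDo.
import re

linked_service = {
    "systemName": "azurespeechservice",
    "name": "Azure Speech Service",
    "icon": "fa-microphone",
    "secret": "{Your own API key}",   # placeholder - never commit real keys
    "allowFallbackToSystemSecret": True,
    "apiBaseUrl": "https://uksouth.api.cognitive.microsoft.com",
    "tokenHeader": "Ocp-Apim-Subscription-Key",
}

# The base URL must embed the region your key was created in:
pattern = r"^https://[a-z0-9]+\.api\.cognitive\.microsoft\.com$"
assert re.match(pattern, linked_service["apiBaseUrl"]), "check {azure-region}"

region = linked_service["apiBaseUrl"].split("//")[1].split(".")[0]
print("Base URL region:", region)
```

If the assertion fails, you have most likely left the {azure-region} placeholder in place or mistyped the host name.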

Once you're done, hit Save and Close. Then, navigate back to the User side of ShareDo, and you can test it out in any Rich Text Field.