Tutorial

Building a Speech-to-Text App Using Speech Framework in iOS 10


At WWDC 2016, Apple introduced the Speech framework, a useful API for speech recognition. In fact, the Speech framework is the same one Siri uses for speech recognition. There are a handful of speech recognition frameworks available today, but they are either very expensive or simply not as good. In this tutorial, I will show you how to create a Siri-like speech-to-text app using the Speech framework.

Designing the App UI

Prerequisite: You need Xcode 8 beta and an iOS device running the iOS 10 beta.

Let’s start by creating a new iOS Single View Application project with the name SpeechToTextDemo. Next, go to your Main.storyboard and add a UILabel, a UITextView, and a UIButton.

Your storyboard should look something like this:

speechkit-demo-1

Next, define outlet variables for the UITextView and the UIButton in ViewController.swift. In this demo, I set the name of the UITextView to “textView” and the name of the UIButton to “microphoneButton”. Also, create an empty action method that is triggered when the microphone button is tapped:
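If you are writing these yourself, the declarations might look something like this (the names match the ones used throughout the tutorial; the action method is intentionally left empty for now):

@IBOutlet weak var textView: UITextView!
@IBOutlet weak var microphoneButton: UIButton!

@IBAction func microphoneTapped(_ sender: AnyObject) {

}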

If you don’t want to start from scratch, you can download the starter project and continue to follow the tutorial.

Using Speech Framework

To use the Speech framework, you have to first import it and adopt the SFSpeechRecognizerDelegate protocol. So let’s import the framework, and add its protocol to the ViewController class. Now your ViewController.swift should look like this:
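As a rough sketch, assuming the outlets and empty action method from the previous step, the class declaration might look like this:

import UIKit
import Speech

class ViewController: UIViewController, SFSpeechRecognizerDelegate {

    @IBOutlet weak var textView: UITextView!
    @IBOutlet weak var microphoneButton: UIButton!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func microphoneTapped(_ sender: AnyObject) {

    }
}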

User Authorization

Before using the Speech framework for speech recognition, you have to first ask for the user’s permission, because the recognition doesn’t happen locally on the iOS device; all the voice data is transmitted to Apple’s servers for processing. Therefore, it is mandatory to get the user’s authorization.

Let’s authorize the speech recognizer in the viewDidLoad method. The user must allow the app to use the input audio and speech recognition. First, declare a speechRecognizer variable:
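A declaration along these lines should work (the tutorial assumes the en-US locale; change it if you need another language):

private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))  // handles speech recognition for US English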

And update the viewDidLoad method like this:
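One possible version of the method, assuming the microphoneButton outlet and the speechRecognizer declared above:

override func viewDidLoad() {
    super.viewDidLoad()

    // Disable the microphone button until the speech recognizer is ready
    microphoneButton.isEnabled = false

    // Make this view controller the speech recognizer's delegate
    speechRecognizer?.delegate = self

    // Ask the user for permission to perform speech recognition
    SFSpeechRecognizer.requestAuthorization { (authStatus) in

        var isButtonEnabled = false

        switch authStatus {
        case .authorized:
            isButtonEnabled = true

        case .denied:
            isButtonEnabled = false
            print("User denied access to speech recognition")

        case .restricted:
            isButtonEnabled = false
            print("Speech recognition restricted on this device")

        case .notDetermined:
            isButtonEnabled = false
            print("Speech recognition not yet authorized")
        }

        // UI updates must happen on the main queue
        OperationQueue.main.addOperation {
            self.microphoneButton.isEnabled = isButtonEnabled
        }
    }
}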

  1. First, we create an SFSpeechRecognizer instance with a locale identifier of en-US so the speech recognizer knows what language the user is speaking in. This is the object that handles speech recognition.

  2. By default, we disable the microphone button until the speech recognizer is activated.

  3. Then, set the speech recognizer delegate to self, which in this case is our ViewController.

  4. After that, we must request authorization for speech recognition by calling SFSpeechRecognizer.requestAuthorization.

  5. Finally, check the authorization status. If it’s authorized, enable the microphone button. If not, print the error message and disable the microphone button.

Now you might think that by running the app you would see an authorization alert, but you would be mistaken. If you run the app, it will crash. But why, you may ask?

Providing the Authorization Messages

Apple requires all authorization requests to include a custom message from the app. In the case of speech recognition, we must ask the user to authorize two things:

  1. Microphone usage.
  2. Speech Recognition.

To customize the messages, you must supply them through the info.plist file.

Let’s open the info.plist file’s source code. First, right-click on info.plist, then choose Open As > Source Code. Finally, copy the following XML snippet and insert it before the </dict> tag.
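The entries could look something like this (the description strings are only examples; word them however you like):

<key>NSMicrophoneUsageDescription</key>
<string>Your microphone will be used to record your speech when you press the Start Recording button.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition will be used to determine which words you speak into this device's microphone.</string>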

You have now added the two keys to info.plist:

  • NSMicrophoneUsageDescription – the custom message for authorization of audio input. Note that the audio input authorization prompt will only appear when the user taps the microphone button.
  • NSSpeechRecognitionUsageDescription – the custom message for speech recognition authorization.

Feel free to change the values of these entries. Now hit the Run button; you should be able to compile and run the app without any errors.

Speech Framework Authorization
Note: If you don’t see the Input Audio authorization later when the project is complete, it is because you are running the app on a simulator. The iOS simulator does not have access to your Mac’s microphone.

Handling Speech Recognition

Now that we have implemented the user authorization, let’s move onto the implementation of speech recognition. We start by defining the following objects in your ViewController:
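As a sketch, the three properties might be declared like this (the types come straight from the Speech and AVFoundation frameworks):

private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
private var recognitionTask: SFSpeechRecognitionTask?
private let audioEngine = AVAudioEngine()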

  1. This object handles the speech recognition requests. It provides an audio input to the speech recognizer.

  2. The recognition task, which gives you the result of the recognition request. Having this object is handy because you can use it to cancel or stop the task.

  3. This is your audio engine. It is responsible for providing your audio input.

Next, let’s create a new function called startRecording().
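Here is a sketch of what the function might look like with the Swift 3 / iOS 10 SDK; the line references in the walkthrough below correspond to this listing:

func startRecording() {

    if recognitionTask != nil {  // cancel the previous task if it's running
        recognitionTask?.cancel()
        recognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }

    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()  // passes audio data to Apple's servers

    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }

    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }

    recognitionRequest.shouldReportPartialResults = true  // report partial results as the user speaks

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in

        var isFinal = false

        if result != nil {

            self.textView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)

            self.recognitionRequest = nil
            self.recognitionTask = nil

            self.microphoneButton.isEnabled = true
        }
    })

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }

    audioEngine.prepare()  // prepare and start the audio engine

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }

    textView.text = "Say something, I'm listening!"
}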

This function is called when the Start Recording button is tapped. Its main job is to start speech recognition and begin listening to the microphone. Let’s go through the above code line by line:

  1. Line 3-6 – Check if recognitionTask is running. If so, cancel the task and the recognition.
  2. Line 8-15 – Create an AVAudioSession to prepare for the audio recording. Here we set the category of the session to recording, the mode to measurement, and activate it. Note that setting these properties may throw an error, so you must wrap the calls in a do-catch block.

  3. Line 17 – Instantiate the recognitionRequest. Here we create the SFSpeechAudioBufferRecognitionRequest object. Later, we use it to pass our audio data to Apple’s servers.

  4. Line 19-21 – Check if the audioEngine (your device) has an audio input for recording. If not, we report a fatal error.

  5. Line 23-25 – Check if the recognitionRequest object is instantiated and is not nil.

  6. Line 27 – Tell recognitionRequest to report partial results of speech recognition as the user speaks.

  7. Line 29 – Start the recognition by calling the recognitionTask(with:resultHandler:) method of our speechRecognizer. This method takes a completion handler, which is called every time the recognition engine has received input, has refined its current recognition, or has been canceled or stopped, and it ultimately returns a final transcription.

  8. Line 31 – Define a boolean to determine if the recognition is final.

  9. Line 35 – If the result isn’t nil, set the textView.text property to our result’s best transcription. Then if the result is the final result, set isFinal to true.

  10. Line 39-47 – If there is an error or the result is final, stop the audioEngine (audio input) and end the recognitionRequest and recognitionTask. At the same time, we re-enable the Start Recording button.

  11. Line 50-53 – Add an audio input to the recognitionRequest. Note that it is ok to add the audio input after starting the recognitionTask. The Speech Framework will start recognizing as soon as an audio input has been added.

  12. Line 55 – Prepare and start the audioEngine.

Triggering Speech Recognition

We need to make sure that speech recognition is available when creating a speech recognition task, so we have to add a delegate method to ViewController. If speech recognition becomes unavailable or its status changes, the microphoneButton.isEnabled property should be updated accordingly. For this scenario, we implement the speechRecognizer(_:availabilityDidChange:) method of the SFSpeechRecognizerDelegate protocol. Use the implementation as seen below.
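A straightforward implementation, assuming the microphoneButton outlet from earlier:

func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
    // Enable the record button only while speech recognition is available
    if available {
        microphoneButton.isEnabled = true
    } else {
        microphoneButton.isEnabled = false
    }
}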

This method will be called when the availability changes. If speech recognition is available, the record button will also be enabled.

The last thing we have to do is update the action method microphoneTapped(sender:):
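It might be implemented along these lines:

@IBAction func microphoneTapped(_ sender: AnyObject) {
    if audioEngine.isRunning {
        // Stop recording and let the recognizer finish with the audio it already received
        audioEngine.stop()
        recognitionRequest?.endAudio()
        microphoneButton.isEnabled = false
        microphoneButton.setTitle("Start Recording", for: .normal)
    } else {
        startRecording()
        microphoneButton.setTitle("Stop Recording", for: .normal)
    }
}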

In this function, we must check whether our audioEngine is running. If it is running, the app should stop the audioEngine, terminate the audio input to our recognitionRequest, disable our microphoneButton, and set the button’s title to “Start Recording”.

If the audioEngine is not running, the app should call startRecording() and set the title of the button to “Stop Recording”.

Great! You’re ready to test the app. Deploy the app to an iOS 10 device, and hit the “Start Recording” button. Go ahead and say something!

Speech to text app demo

Notes:

  1. Apple limits recognition per device. The limit is not known, but you can contact Apple for more information.

  2. Apple limits recognition per app.

  3. If you routinely hit these limits, make sure to contact Apple; they can probably resolve the issue.

  4. Speech recognition uses a lot of power and data.

  5. Speech recognition only lasts about a minute at a time.

Summing Up

In this tutorial, you learned how to take advantage of the incredible new speech APIs Apple has opened up to developers to recognize speech and transcribe it into text. The Speech framework uses the same speech recognition framework as Siri does. It is a relatively small API. However, it is powerful and empowers developers to create amazing things like getting the transcript of an audio file.

I recommend watching the WWDC 2016 session 509 for further information. I hope you enjoyed this article and had fun exploring this brand new API.

For your reference, you can access the full project on GitHub.
