Monday, June 21 2021

# How to do speech recognition? – Towards Data Science

Speech recognition is the task of detecting spoken words. There are many techniques for doing speech recognition. In this post, we will go through the background required for speech recognition and use a basic technique to build a speech recognition model. The code is available on GitHub. For the techniques mentioned in this post, check out this Jupyter Notebook.

#### Some background on audio processing

Let's take a step back and understand what audio actually is. We listen to music on our computers and phones, usually in mp3 format. But the .mp3 file is not the audio itself; it is a way of representing audio on our computers. We do not open .mp3 files and read them directly (the way we read .txt files in a notepad). We use applications that understand what an .mp3 file is and how to play it. These mp3 files encode (represent) the audio.

Audio is represented as waves. Generally, these waves have 2 axes: time on the x-axis and amplitude on the y-axis. So at every moment t, we have an amplitude value.

You can listen to a simple sine wave here. Great! Now we just need to understand how to use these audio files in our code to perform the recognition.

#### Using audio files

We will use the Waveform Audio File Format, or .wav files. So, how do we read these .wav files? Enter librosa – a Python package that lets us read .wav files. What do we get after reading them? An array of numbers. This is the output I got after reading a sound file 1 second long.

```
array([ 0.0007143 ,  0.00551732,  0.01469251, ..., -0.00261393, -0.00326245, -0.00220675], dtype=float32)
```

What do these numbers mean? Remember that I told you audio is represented as a wave with two axes. These values represent the y-axis of that wave, that is, the amplitude. So how is the x-axis, time, represented? By the length of the array! For 1 second of audio you might expect a length of 1000 (one value per millisecond), but the length of this array is actually 22050. Where does that come from?

#### Sampling rate

Consider a 5-second audio clip. If it is analog, it has an amplitude value at every instant of time. Suppose we stored one value per picosecond: that is 5e+12, or 5,000,000,000,000 values. Now consider storing that on a computer. A float takes 4 bytes in C, so that is 5e+12 * 4 bytes — about 18 terabytes of data for a single 5-second audio clip!

We do not want to use 18 TB just to store a 5-second clip. So we convert the signal into a discrete form: we record samples (that is, amplitude values) at fixed time steps. For 5 seconds of audio, we could record a sample every second — only 5 values (samples)! This brings us to the sampling rate.

Formally, the sampling rate is the number of samples collected per second. These collected samples are spaced at equal intervals over time. For the previous example, the sampling rate is 1 or 1 sample per second. You may have noticed that there is a lot of information loss. This is a compromise in the conversion from continuous (analog) to discrete (digital). The sampling rate should be as high as possible to reduce the loss of information.
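The conversion from a continuous wave to discrete samples can be sketched with NumPy (the 8 kHz rate and 440 Hz tone below are arbitrary choices for illustration, not values from the post):

```python
import numpy as np

sr = 8000                      # sampling rate: 8000 samples per second
duration = 5.0                 # seconds of audio
f = 440.0                      # frequency of the tone in Hz

# Record amplitude values at equally spaced time steps.
t = np.arange(int(sr * duration)) / sr   # sample times in seconds
samples = np.sin(2 * np.pi * f * t)      # the discrete (digital) signal

print(len(samples))            # 40000 values instead of "one per picosecond"
```

A higher `sr` means more values per second: less information loss, but more storage.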

So why did we get an array of length 22050? Librosa uses a default sampling rate of 22050 if none is specified. Why 22050? It is related to the range of human hearing: humans hear frequencies from roughly 20 Hz to 20 kHz. By the Nyquist theorem, a sampling rate of 44100 (44.1 kHz) — the most common rate, used for CDs — can represent frequencies up to 22050 Hz, just above that 20 kHz limit; librosa's default of 22050 is half of that.
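The relationship between array length, sampling rate, and duration explains the 22050-long array. A quick check (the loading call is sketched in a comment, since it needs an actual file; the path is a placeholder):

```python
import numpy as np

# Loading with librosa would look like:
#   import librosa
#   samples, sr = librosa.load("speech.wav")   # default sr=22050
# Here we fake a 1-second clip at librosa's default rate instead.
sr = 22050                                # samples per second
samples = np.zeros(sr, dtype=np.float32)  # stand-in for 1 s of audio

duration = len(samples) / sr              # array length / sampling rate
print(duration)                           # 1.0 second
```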

Also, note that we obtained a 1D array and not a 2D array. This is because the .wav file I used was mono audio, not stereo. What is the difference? Mono audio has only one channel, while stereo has 2 (or more). What is a channel? In simple terms, it is an independent stream of audio samples. Consider recording with two microphones placed apart: each microphone produces its own stream of samples, giving a 2-channel (stereo) recording — one row of samples per channel, hence a 2D array.

Generally, we convert stereo audio to mono audio before using it in audio processing. Again, librosa helps us do it: we simply pass the parameter mono=True when loading the .wav file, and it converts any stereo audio to mono for us.
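Under the hood, collapsing stereo to mono amounts to averaging the channels; a simplified sketch with a toy array (the sample values are made up):

```python
import numpy as np

# A fake 2-channel (stereo) signal: one row of samples per channel.
left  = np.array([0.1, 0.2, 0.3], dtype=np.float32)
right = np.array([0.3, 0.2, 0.1], dtype=np.float32)
stereo = np.stack([left, right])          # shape (2, n_samples)

# Average the channels to get a single mono stream.
mono = stereo.mean(axis=0)                # shape (n_samples,)
```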

#### Features for speech recognition

We could use the time-domain signal above directly as features. But it requires a lot of storage, because the sampling rate must be quite high. Another way to represent these audio signals is in the frequency domain, using the Fourier transform. Stated simply, the Fourier transform is a tool that lets us convert our signal from the time domain to the frequency domain. A signal in the frequency domain requires much less space to store. From Wikipedia,

In mathematics, a Fourier series is a way to represent a function as the sum of simple sine waves. More formally, it decomposes any periodic function or periodic signal into the sum of a (possibly infinite) set of simple oscillating functions, i.e. sines and cosines.

In simple terms, any audio signal can be represented as a sum of sine and cosine waves.

In the figure above, the time-domain signal is represented as the sum of 3 sine waves. How does this reduce storage? Consider how a sine wave is represented mathematically: A·sin(2πft + φ). It is fully described by just three numbers — its amplitude A, frequency f, and phase φ.

Since the signal is the sum of 3 sine waves, we only need the (amplitude, frequency, phase) values of those 3 waves to represent it.
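The decomposition can be demonstrated with NumPy's FFT: build a signal from two known sine waves, move it to the frequency domain, and recover the components (the sampling rate and the 5 Hz / 12 Hz frequencies are arbitrary choices for illustration):

```python
import numpy as np

sr = 100                                   # 100 samples per second
t = np.arange(sr) / sr                     # 1 second of sample times

# A signal built as the sum of two sine waves: 5 Hz and 12 Hz.
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The (real) FFT converts the signal into the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sr)

# The two strongest frequency bins recover the original components.
top2 = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
print(top2)                                # [5.0, 12.0]
```

Instead of 100 time-domain samples, a handful of frequency-domain values describe the same signal.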

#### Mel-frequency cepstral coefficients (MFCC)

Our voice/sound depends on the shape of our vocal tract, including the tongue, teeth, etc. If we can determine this shape precisely, we can recognize the word/character being said. MFCCs are a representation of the short-term power spectrum of a sound, which, in simple terms, captures the shape of the vocal tract. You can read more about MFCCs here.

#### Spectrograms

Spectrograms are another way of representing the audio signal. A spectrogram conveys three-dimensional information in 2 dimensions: time on the x-axis, frequency on the y-axis, and the amplitude of a particular frequency at a particular time represented by the color intensity at that point.
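A spectrogram is essentially a short-time Fourier transform laid out as an image. A minimal NumPy sketch (frame length, hop size, and the 50 Hz test tone are arbitrary illustration choices):

```python
import numpy as np

sr = 1000
t = np.arange(2 * sr) / sr                       # 2 seconds of audio
signal = np.sin(2 * np.pi * 50 * t)              # a 50 Hz tone

# Slice the signal into short windowed frames and FFT each frame:
# rows = frequency bins, columns = time steps, values = magnitude.
frame_len, hop = 200, 100
frames = [signal[i:i + frame_len] * np.hanning(frame_len)
          for i in range(0, len(signal) - frame_len + 1, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T

print(spectrogram.shape)   # (frequency_bins, time_steps) = (101, 19)
```

Plotting this array as an image (e.g. with matplotlib's `imshow`) gives the familiar spectrogram picture, with a bright horizontal line at the 50 Hz bin.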

#### Overview of the approach

For the .wav files, I used a subset of the training data from the Kaggle TensorFlow Speech Recognition Challenge. Google Colaboratory is used for training; it provides free GPU use for up to 12 hours. It is not very fast, but good enough for this project.

The audio files are sampled at a rate of 16,000 Hz. Spectrograms are used to recognize the speech commands: I wrote a small script to convert the .wav files into spectrograms, and the spectrogram images are fed into a convolutional neural network. Transfer learning is performed on a ResNet34 pretrained on ImageNet. PyTorch is used to code this project.

#### Stochastic gradient descent with restarts (SGDR)

SGDR uses cosine annealing to anneal the learning rate while training the model. The learning rate is reduced at every iteration (mini-batch) of gradient descent, and at the end of a cycle it is reset to the initial learning rate. This helps achieve better generalization.
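The schedule can be written in a few lines of plain Python (the rates and cycle length below are illustrative defaults, not the values used in the post):

```python
import math

def sgdr_lr(iteration, lr_max=0.1, lr_min=0.001, cycle_len=100):
    """Cosine-annealed learning rate with warm restarts.

    The rate decays from lr_max to lr_min over one cycle, then is
    reset ("restarted") to lr_max at the start of the next cycle.
    """
    t = iteration % cycle_len        # position inside the current cycle
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / cycle_len))

lr_start   = sgdr_lr(0)      # full learning rate at the start of a cycle
lr_end     = sgdr_lr(99)     # close to lr_min near the end of the cycle
lr_restart = sgdr_lr(100)    # restart: back to lr_max
```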

The idea is: if the model is at a local minimum where a slight change in the parameters changes the loss a lot, it is not a good local minimum. By resetting the learning rate, we allow the model to search for better local minima in the parameter space.

In the image above, a cycle consists of 100 iterations, and the learning rate is reset after each cycle. Within a cycle we gradually decrease the learning rate, which lets the model settle into a local minimum. By resetting the learning rate at the end of a cycle, we check whether that local minimum is good or bad: if it is good, the model will settle into the same minimum at the end of the next cycle; if it is bad, the model will converge to a different one. We can also increase the cycle length over time, which allows the model to dive deeper into a local minimum, reducing the loss.

#### Snapshot Ensembling

This is a technique used together with SGDR. The basic idea of ensembling is to train more than one model for a task and average their predictions. Different models tend to make different errors on the same input, so when one model gives a wrong prediction, another may give the correct one.

In SGDR, we build the ensemble with the help of the cycles. Each local minimum has a different loss value and makes different predictions on the data. As we run SGDR, we jump from one local minimum to another, looking for the best one at the end — but the models at the other local minima can be useful too. So we checkpoint (snapshot) the model parameters at the end of each cycle, and at prediction time we feed the input to each saved model and average their predictions.
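Averaging the snapshot predictions can be sketched with toy numbers (the class probabilities below are made up for illustration):

```python
import numpy as np

# Fake class-probability predictions from 3 snapshots saved at the
# end of 3 SGDR cycles, for one input and 4 classes.
snapshot_preds = np.array([
    [0.6, 0.2, 0.1, 0.1],    # snapshot 1: picks class 0
    [0.2, 0.5, 0.2, 0.1],    # snapshot 2: picks class 1 (a mistake)
    [0.7, 0.1, 0.1, 0.1],    # snapshot 3: picks class 0
])

# The ensemble prediction is simply the average over snapshots.
ensemble = snapshot_preds.mean(axis=0)
print(ensemble.argmax())     # 0 -> the ensemble recovers the right class
```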

#### Settings changed to reduce training time

Training is done on Google Colab, which provides a Tesla K80 GPU — good enough for this task. One iteration of gradient descent takes about 1.5-2 seconds on this GPU, yet it takes about 80 minutes to train for a single epoch! This is because, by default, it is not possible to use more than 1 worker in PyTorch data loaders on Colab — if you try, PyTorch raises an error and training stops abruptly.

But why does it take 80 minutes? Because preparing the next batch is done on the CPU, while only gradient descent and the weight updates are done on the GPU. When a weight update completes, the GPU sits idle, waiting for the next batch. So the CPU is almost always busy while the GPU is idle.

When the num_workers parameter is specified in the data loader, PyTorch uses multiprocessing to generate batches in parallel. This removes the bottleneck and ensures that the GPU is used efficiently.
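A minimal sketch of a loader with parallel workers (the toy tensors stand in for the spectrogram images; the batch size and worker count are arbitrary):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A toy dataset standing in for the spectrogram images.
data = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
dataset = TensorDataset(data, labels)

# num_workers > 0 spawns worker processes that prepare batches in
# parallel, so the GPU is not left waiting on the CPU.
loader = DataLoader(dataset, batch_size=16, num_workers=2)

for batch, target in loader:
    print(batch.shape)       # torch.Size([16, 3, 32, 32])
    break
```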

How do we fix this on Google Colab? Colab runs on a Linux system, and most Linux systems have a temporary partition called /dev/shm. Processes use this partition as shared memory. It is virtual memory: it does not reside on disk but in RAM. PyTorch uses this partition to stage batches for the GPU.

Google Colab, by default, gives this partition a size of 64 MB, which is far too small for a reasonable number of workers. If we try to use num_workers anyway, at some point during training the partition overflows and PyTorch raises an error. The solution is to increase the size of this partition. After increasing it, we can use multiple workers to load data. But how many workers should we use?
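On a Linux box where you have root access, resizing the partition is a single remount (whether this works inside a given Colab session depends on the privileges it grants; the 2G size is an arbitrary example, not the value used in the post):

```shell
# Check the current size of the shared-memory partition.
df -h /dev/shm

# Remount it with a larger size (requires root).
sudo mount -o remount,size=2G /dev/shm
```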

It may seem that using as many workers as possible is best. I ran some experiments with different sizes of /dev/shm and different values of num_workers. Here are the results.
