At its core, AudioSense is an Automatic Speech Recognition (ASR) system designed to transcribe audio input into text accurately and efficiently. It combines advanced algorithms with computational linguistics, and it goes beyond conventional ASR capabilities by offering specialized support for conversational Urdu.

Problem Statement

Automatic Speech Recognition (ASR), or Speech to Text (STT), focuses on transcribing human speech into text. This involves developing algorithms and models for analyzing audio signals, identifying speech patterns, and converting speech to text. The main challenges in STT include variations in speech patterns, accents, dialects, background noise, and interference. STT has found widespread use in virtual assistants, voice-enabled devices, call-center quality assurance, and transcription for other applications. Systems capable of transcribing low-quality, noisy audio are desired in practical situations, and this remains an open problem.

Proposed Solution

To address speech recognition in low-quality audio, we have used our proprietary data to develop a speech-to-text system for the Urdu language. It takes a WAV audio file as input and produces text as output. The proposed speech-to-text solution is robust to noise, which is reduced during pre-processing, and the system can be extended to other languages.

Technical Details

Developing an ASR system requires expertise in signal processing, machine learning, and natural language processing. Our STT system comprises the following stages:

Noise removal is a crucial step in ASR: it improves recognition accuracy by eliminating distortions from the speech signal. Our solution minimizes noise efficiently in most day-to-day scenarios and performs exceptionally well on the background noise typical of call centers.
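The exact pre-processing pipeline is proprietary, but the minimal spectral-subtraction sketch below illustrates the idea, under the assumption that the leading half-second of the recording contains only background noise:

```python
# Minimal spectral-subtraction sketch for noise suppression.
# Assumption: the first ~0.5 s of the clip is background noise only;
# the actual AudioSense pre-processing algorithm is proprietary.
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

def denoise(path_in, path_out, noise_seconds=0.5):
    rate, audio = wavfile.read(path_in)
    audio = audio.astype(np.float32)

    # Short-time Fourier transform of the whole signal.
    f, t, spec = stft(audio, fs=rate, nperseg=512)
    mag, phase = np.abs(spec), np.angle(spec)

    # Estimate the noise spectrum from the leading noise-only frames
    # (hop size is nperseg // 2 with the default overlap).
    noise_frames = int(noise_seconds * rate / (512 // 2))
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)

    # Subtract the noise estimate; floor at zero to avoid negative magnitudes.
    clean_mag = np.maximum(mag - noise_mag, 0.0)

    # Rebuild the waveform using the original phase.
    _, clean = istft(clean_mag * np.exp(1j * phase), fs=rate, nperseg=512)
    wavfile.write(path_out, rate, clean.astype(np.int16))
```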

Our system uses signal-processing techniques to extract features from speech, including Mel-Frequency Cepstral Coefficients (MFCCs) and other time-frequency representations.
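For illustration (this is not the exact AudioSense front end, whose configuration is proprietary), MFCCs can be extracted with the librosa library; sample.wav is a placeholder path:

```python
# Extracting MFCC features with librosa -- an illustrative sketch.
import librosa

# Load a 16 kHz mono WAV file ("sample.wav" is a placeholder path).
y, sr = librosa.load("sample.wav", sr=16000)

# 13 MFCCs per frame is a common baseline for ASR front ends.
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfccs.shape)  # (13, number_of_frames)
```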

Acoustic models map acoustic features to phonemes or other speech units. We employ HMMs, DNNs, and other ML techniques to develop our acoustic model. In the hybrid HMM-DNN approach, additional hidden layers capture complex relationships, and input features are spliced from a wider time window for better context. To achieve optimal results, we have created custom acoustic model parameters.
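The sketch below shows the general shape of such a hybrid network in PyTorch, mapping spliced feature frames to phoneme posteriors; the layer sizes, splice width, and phoneme inventory are illustrative assumptions rather than our production parameters:

```python
# Illustrative HMM-DNN style acoustic network: spliced feature frames in,
# per-frame phoneme posteriors out. All sizes here are assumptions.
import torch
import torch.nn as nn

class AcousticDNN(nn.Module):
    def __init__(self, n_feats=13, splice=5, n_phones=48, hidden=512):
        super().__init__()
        # Input: current frame plus `splice` frames of left and right
        # context, giving the wider time window mentioned above.
        in_dim = n_feats * (2 * splice + 1)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_phones),
        )

    def forward(self, x):
        # Log-posteriors over phoneme states for each spliced frame.
        return torch.log_softmax(self.net(x), dim=-1)

model = AcousticDNN()
frames = torch.randn(8, 13 * 11)  # batch of 8 spliced frames
print(model(frames).shape)        # torch.Size([8, 48])
```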

ASR/STT systems utilize statistical properties of language to predict the likelihood of word sequences by analyzing text corpora. Models can use n-grams, RNNs, or other ML techniques; n-grams remain a popular choice, with the log probability of each n-gram stored in the language model file (commonly the ARPA format).
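As a concrete toy example, the snippet below estimates unsmoothed bigram log probabilities from a tiny corpus; real language models add smoothing and are stored in standard formats such as ARPA:

```python
# Toy bigram language model: estimate log probabilities of word pairs
# from a text corpus. Smoothing is deliberately omitted for brevity.
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_logprob(w1, w2):
    # log P(w2 | w1) = log(count(w1, w2) / count(w1))
    return math.log(bigrams[(w1, w2)] / unigrams[w1])

print(bigram_logprob("the", "cat"))  # log(2/3)
```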

ASR/STT systems use dynamic programming and beam search strategies to find the most likely transcription. Beam search prunes the search space: it keeps the top k candidate words for the first position, then computes conditional probabilities for each subsequent word, again retaining only the top k partial hypotheses at each step.
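The following minimal sketch decodes from per-position word probabilities; it is purely illustrative, since a production decoder also combines acoustic and language model scores:

```python
# Minimal beam search over per-position word probabilities.
import math

def beam_search(step_probs, k=2):
    """step_probs: list of {word: probability} dicts, one per position."""
    beams = [([], 0.0)]  # (word sequence, cumulative log probability)
    for probs in step_probs:
        candidates = [
            (seq + [w], score + math.log(p))
            for seq, score in beams
            for w, p in probs.items()
        ]
        # Keep only the k highest-scoring partial transcriptions.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

steps = [{"I": 0.6, "eye": 0.4}, {"see": 0.7, "sea": 0.3}]
print(beam_search(steps, k=2)[0])  # best hypothesis: ['I', 'see']
```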

ASR/STT systems require labeled speech data, obtained through manual transcription, to train the acoustic and language models. Our system has been trained on 300 hours of custom data. Data collection is therefore a critical step: recordings must be pre-processed and text-based annotations created. This is followed by grapheme-to-phoneme (G2P) conversion, which builds a phonetic representation of the data using phonetic conversion rules and statistical analysis. Finally, different model types are trained, including but not limited to monophone, triphone, SGMM2, and neural-network-based models.
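As one illustration of the G2P step, the sketch below uses a dictionary lookup with a rule-based fallback; the lexicon entries and romanized phone symbols are hypothetical, and a production system would rely on curated lexicons and statistical models:

```python
# Sketch of a dictionary-backed grapheme-to-phoneme (G2P) step.
# The entries and romanized Urdu phone symbols are made up for illustration.
LEXICON = {
    "salaam": ["s", "a", "l", "aa", "m"],            # hypothetical entry
    "shukriya": ["sh", "u", "k", "r", "i", "y", "a"],
}

FALLBACK_RULES = {"sh": "sh", "aa": "aa"}  # digraphs matched first

def g2p(word):
    if word in LEXICON:
        return LEXICON[word]
    # Rule-based fallback: greedy longest match over the spelling.
    phones, i = [], 0
    while i < len(word):
        if word[i : i + 2] in FALLBACK_RULES:
            phones.append(FALLBACK_RULES[word[i : i + 2]])
            i += 2
        else:
            phones.append(word[i])
            i += 1
    return phones

print(g2p("salaam"))  # ['s', 'a', 'l', 'aa', 'm'] via the lexicon
print(g2p("shab"))    # ['sh', 'a', 'b'] via the fallback rules
```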

The ASR system was tested on 14K audio files using the Word Error Rate (WER) metric, which provides a consistent way to compare system performance over time. The WER is approximately 20% and is calculated using the following equation.

Word Error Rate = (S + D + I) / N

S = Substitutions, D = Deletions, I = Insertions, N = Total number of words
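For reference, the metric can be computed with a standard Levenshtein word alignment, where substitutions, deletions, and insertions fall out of the edit-distance recurrence:

```python
# Word Error Rate via Levenshtein alignment of reference and hypothesis
# word sequences: WER = (S + D + I) / N.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j].
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat down"))  # 1 insertion / 3 words ≈ 0.33
```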

Uniqueness

  • The ASR recognizes the variability of Urdu speech patterns, resulting in excellent performance
  • The system was trained on 300 hours of annotated data, compared with the Common Voice dataset for Urdu, which offers only 46 hours of validated training data
  • A pre-processing stage that removes background noise improves speech recognition performance
  • Our ASR is context-aware, having been trained on specific use cases, which improves performance on both domain-specific and general speech


Product Features

  • Capability: Converts speech to text in Urdu.
  • Accuracy: Our Urdu ASR system achieves a word error rate of less than 20% on low-quality audio, making it competitive with some of the best ASRs in the world.
  • Noise Reduction: Our noise reduction algorithm suppresses call-center background noise, increasing the ASR's accuracy.
  • Customization: The ASR system can be quickly fine-tuned to specific use cases using customer data.
  • Ease of Deployment: The system can be easily deployed on either local or cloud infrastructure using our Docker container.
  • Efficiency: The ASR converts speech to text in near real time.
  • Robustness: Recognizes English words embedded in Urdu speech.

Let's Visualize What AudioSense is About!

Ready to enhance your speech recognition accuracy?

Contact us today and discover how AudioSense can transform your speech into text, even in noisy environments.