Integrating Azure Speech Recognition with Blazor and .NET 8


Nafkha-Med Amine

Posted on January 10, 2024


Introduction

Welcome to an in-depth guide on integrating Azure Speech-to-Text into your Blazor applications. This article aims to provide a comprehensive walkthrough, allowing you to harness the power of speech recognition technology seamlessly within the dynamic Blazor framework.

Prerequisites

Before you begin, make sure you have:

  • Azure subscription with Speech resource
  • Visual Studio or Visual Studio Code installed
  • Basic knowledge of Blazor and C#

Step 1: Set Up Azure Resources

1.1 Create an Azure Speech Resource:

[Screenshots: creating a Speech resource in the Azure Portal]

1.2 Retrieve Subscription Key and Region:

  • In the Azure Portal, open your Speech resource and go to the Keys and Endpoint blade. Copy one of the two keys and note the region (for example, westeurope) — you will need both in the code below.


Step 2: Create a .NET Blazor App

2.1 Create a New Blazor WebAssembly or Server App:

dotnet new blazorwasm -n YourBlazorWasmApp
cd YourBlazorWasmApp

Note: the Microsoft.CognitiveServices.Speech package relies on native components, so the recognition code below should run server-side (Blazor Server, or the backend of a hosted WebAssembly app); it will not run inside the browser sandbox of a standalone WebAssembly app.

2.2 Install the Azure Speech SDK:

Install-Package Microsoft.CognitiveServices.Speech

or, with the .NET CLI:

dotnet add package Microsoft.CognitiveServices.Speech

Step 3: Implement Speech-to-Text and Text-to-Speech

3.1 Configure Speech Service in Program.cs:

  • Register the Speech service configuration (key and region) once at application startup, so that components can consume it without hardcoding credentials.
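One way to do this is to register a shared SpeechConfig at startup. The following is a sketch, assuming a Blazor Server style Program.cs and hypothetical configuration keys Speech:Key and Speech:Region (adjust the names to your project):

```csharp
// Program.cs — sketch; "Speech:Key" / "Speech:Region" are illustrative names.
using Microsoft.CognitiveServices.Speech;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddRazorPages();
builder.Services.AddServerSideBlazor();

// Register one SpeechConfig for the whole app so components never
// hardcode the subscription key or region.
builder.Services.AddSingleton(sp =>
{
    var configuration = sp.GetRequiredService<IConfiguration>();
    var speechConfig = SpeechConfig.FromSubscription(
        configuration["Speech:Key"]!,
        configuration["Speech:Region"]!);
    speechConfig.SpeechRecognitionLanguage = "en-US";
    return speechConfig;
});

var app = builder.Build();
app.MapBlazorHub();
app.MapFallbackToPage("/_Host");
app.Run();
```

A component can then receive the shared configuration with @inject SpeechConfig SpeechConfig instead of calling SpeechConfig.FromSubscription inline.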

3.2 Create Speech-to-Text and Text-to-Speech Components:

@using Microsoft.CognitiveServices.Speech
@using Microsoft.CognitiveServices.Speech.Audio
@using Radzen.Blazor

<RadzenRow>
    <RadzenColumn Size="6">
        <RadzenSpeechToTextButton Change="OnSpeechCaptured" />
        <RadzenTextArea @bind-Value=@Inputvalue Style="margin-top:15px" />
    </RadzenColumn>
</RadzenRow>

@code {
    string? Inputvalue;

    // async Task (not async void) so Blazor re-renders when recognition completes.
    async Task OnSpeechCaptured(string text)
    {
        var config = SpeechConfig.FromSubscription("YourSpeechSubscriptionKey", "YourSpeechRegion");
        config.SpeechRecognitionLanguage = "en-US";

        // Captures audio from the default microphone of the machine running this code.
        using var audioConfig = AudioConfig.FromDefaultMicrophoneInput();
        using var speechRecognizer = new SpeechRecognizer(config, audioConfig);
        var result = await speechRecognizer.RecognizeOnceAsync();
        OutputSpeechRecognitionResult(result);
    }

    public void OutputSpeechRecognitionResult(SpeechRecognitionResult speechRecognitionResult)
    {
        switch (speechRecognitionResult.Reason)
        {
            case ResultReason.RecognizedSpeech:
                Inputvalue = speechRecognitionResult.Text;
                break;
            case ResultReason.NoMatch:
                Inputvalue = "NOMATCH: Speech could not be recognized.";
                break;
            case ResultReason.Canceled:
                var cancellation = CancellationDetails.FromResult(speechRecognitionResult);
                Inputvalue = $"CANCELED: Reason={cancellation.Reason}";

                if (cancellation.Reason == CancellationReason.Error)
                {
                    Console.WriteLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                    Console.WriteLine($"CANCELED: ErrorDetails={cancellation.ErrorDetails}");
                    Console.WriteLine($"CANCELED: Did you set the speech resource key and region values?");
                }
                break;
        }
    }
}


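The component above covers recognition; for the text-to-speech half, a companion component can hand its text to SpeechSynthesizer and play it through the default speaker. This is a sketch along the same lines (the same key and region placeholders apply, and the Radzen markup mirrors the recognition component):

```razor
@using Microsoft.CognitiveServices.Speech
@using Radzen.Blazor

<RadzenRow>
    <RadzenColumn Size="6">
        <RadzenTextArea @bind-Value=@TextToSpeak />
        <RadzenButton Text="Speak" Click="OnSpeakClicked" Style="margin-top:15px" />
    </RadzenColumn>
</RadzenRow>

@code {
    string? TextToSpeak;

    async Task OnSpeakClicked()
    {
        var config = SpeechConfig.FromSubscription("YourSpeechSubscriptionKey", "YourSpeechRegion");
        config.SpeechSynthesisLanguage = "en-US";

        // Plays through the default speaker of the machine running the code.
        using var synthesizer = new SpeechSynthesizer(config);
        var result = await synthesizer.SpeakTextAsync(TextToSpeak ?? string.Empty);

        if (result.Reason == ResultReason.Canceled)
        {
            var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
            Console.WriteLine($"CANCELED: Reason={cancellation.Reason}, Details={cancellation.ErrorDetails}");
        }
    }
}
```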

3.3 Handle Configuration:
In the code above, you will notice two placeholders: "YourSpeechSubscriptionKey" and "YourSpeechRegion". Replace them with the key and region you copied from your Azure Speech resource, and avoid committing real keys to source control.
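During development, the key and region can live in the .NET user-secrets store instead of source code; the Speech:Key and Speech:Region names below are illustrative and must match whatever your configuration code reads:

```shell
dotnet user-secrets init
dotnet user-secrets set "Speech:Key" "<your-subscription-key>"
dotnet user-secrets set "Speech:Region" "<your-region>"
```

These values then surface through IConfiguration exactly like appsettings.json entries.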

Step 4: Run and Test

4.1 Run the Blazor App:

dotnet run

4.2 Test Speech-to-Text and Text-to-Speech:

