Ensure compatibility with multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the main features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used to perform the transcription:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that need immediate processing of audio data.

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to let developers build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

For more information, see the official AssemblyAI blog.

Image source: Shutterstock.