RNNTBundle¶
- class torchaudio.pipelines.RNNTBundle[source]¶
A dataclass that bundles components for performing automatic speech recognition (ASR, speech-to-text) inference with an RNN-T model.
More specifically, this class provides methods that generate the feature-extraction pipeline, the decoder wrapping the specified RNN-T model, and the output token post-processor, which together constitute a complete end-to-end ASR inference pipeline that produces a text sequence from a raw waveform.
It supports non-streaming (full-context) inference as well as streaming inference.
Users should not instantiate objects of this class directly; rather, they should use the pre-existing instances in the module that represent pretrained models, e.g. torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH.
- Example
>>> import torchaudio
>>> from torchaudio.pipelines import EMFORMER_RNNT_BASE_LIBRISPEECH
>>> import torch
>>>
>>> # Non-streaming inference.
>>> # Build feature extractor, decoder with RNN-T model, and token processor.
>>> feature_extractor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_feature_extractor()
100%|███████████████████████████████| 3.81k/3.81k [00:00<00:00, 4.22MB/s]
>>> decoder = EMFORMER_RNNT_BASE_LIBRISPEECH.get_decoder()
Downloading: "https://download.pytorch.org/torchaudio/models/emformer_rnnt_base_librispeech.pt"
100%|███████████████████████████████| 293M/293M [00:07<00:00, 42.1MB/s]
>>> token_processor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_token_processor()
100%|███████████████████████████████| 295k/295k [00:00<00:00, 25.4MB/s]
>>>
>>> # Instantiate LibriSpeech dataset; retrieve waveform for first sample.
>>> dataset = torchaudio.datasets.LIBRISPEECH("/home/librispeech", url="test-clean")
>>> waveform = next(iter(dataset))[0].squeeze()
>>>
>>> with torch.no_grad():
>>>     # Produce mel-scale spectrogram features.
>>>     features, length = feature_extractor(waveform)
>>>
>>>     # Generate top-10 hypotheses.
>>>     hypotheses = decoder(features, length, 10)
>>>
>>> # For top hypothesis, convert predicted tokens to text.
>>> text = token_processor(hypotheses[0][0])
>>> print(text)
he hoped there would be stew for dinner turnips and carrots and bruised potatoes and fat mutton pieces to [...]
>>>
>>>
>>> # Streaming inference.
>>> hop_length = EMFORMER_RNNT_BASE_LIBRISPEECH.hop_length
>>> num_samples_segment = EMFORMER_RNNT_BASE_LIBRISPEECH.segment_length * hop_length
>>> num_samples_segment_right_context = (
>>>     num_samples_segment + EMFORMER_RNNT_BASE_LIBRISPEECH.right_context_length * hop_length
>>> )
>>>
>>> # Build streaming inference feature extractor.
>>> streaming_feature_extractor = EMFORMER_RNNT_BASE_LIBRISPEECH.get_streaming_feature_extractor()
>>>
>>> # Process same waveform as before, this time sequentially across overlapping segments
>>> # to simulate streaming inference. Note the usage of ``streaming_feature_extractor`` and ``decoder.infer``.
>>> state, hypothesis = None, None
>>> for idx in range(0, len(waveform), num_samples_segment):
>>>     segment = waveform[idx: idx + num_samples_segment_right_context]
>>>     segment = torch.nn.functional.pad(segment, (0, num_samples_segment_right_context - len(segment)))
>>>     with torch.no_grad():
>>>         features, length = streaming_feature_extractor(segment)
>>>         hypotheses, state = decoder.infer(features, length, 10, state=state, hypothesis=hypothesis)
>>>     hypothesis = hypotheses[0]
>>>     transcript = token_processor(hypothesis[0])
>>>     if transcript:
>>>         print(transcript, end=" ", flush=True)
he hoped there would be stew for dinner turn ips and car rots and bru 'd oes and fat mut ton pieces to [...]
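The streaming loop above advances through the waveform in hops of num_samples_segment while reading num_samples_segment_right_context samples at each step, zero-padding the final chunk to a fixed length. The chunking pattern in isolation, as a minimal pure-Python sketch (the sample counts and the 6000-sample stand-in waveform are illustrative assumptions, not values taken from the bundle):

```python
# Illustrative chunking pattern used by the streaming example: advance by
# `num_samples_segment` samples but read `num_samples_with_context` samples,
# zero-padding the tail chunk. The numbers below are assumptions for the sketch.
num_samples_segment = 2560
num_samples_with_context = 3200

waveform = list(range(6000))  # stand-in for a 6000-sample waveform

chunks = []
for idx in range(0, len(waveform), num_samples_segment):
    chunk = waveform[idx: idx + num_samples_with_context]
    # Pad the final, shorter chunk with zeros so every chunk has equal length.
    chunk = chunk + [0] * (num_samples_with_context - len(chunk))
    chunks.append(chunk)

print(len(chunks))       # 3 steps for 6000 samples at a 2560-sample hop
print(len(chunks[-1]))   # 3200: every chunk, after padding, has equal length
```

Consecutive chunks overlap by the right-context samples, which gives the streaming model a short look-ahead at each step without waiting for the full utterance.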
- Tutorials using RNNTBundle
Properties¶
hop_length¶
n_fft¶
n_mels¶
right_context_length¶
sample_rate¶
segment_length¶
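The properties above jointly determine how many raw audio samples each streaming step consumes, as used in the streaming example. A minimal arithmetic sketch; a concrete bundle such as EMFORMER_RNNT_BASE_LIBRISPEECH exposes the real values as properties, and the numbers below are assumptions for this sketch only:

```python
# Illustrative property values; real bundles expose these as properties
# (e.g. bundle.sample_rate). The numbers below are assumptions for the sketch.
sample_rate = 16000          # Hz
hop_length = 160             # samples between successive feature frames
segment_length = 16          # feature frames per streaming segment
right_context_length = 4     # look-ahead frames appended to each segment

# Samples consumed per streaming step (as in the streaming example).
num_samples_segment = segment_length * hop_length
num_samples_with_context = num_samples_segment + right_context_length * hop_length

print(num_samples_segment)                # 2560
print(num_samples_with_context)           # 3200
print(num_samples_segment / sample_rate)  # 0.16 -> seconds of audio per step
```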
Methods¶
get_decoder¶
- RNNTBundle.get_decoder() → RNNTBeamSearch[source]¶
Constructs the RNN-T decoder.
- Returns:
RNNTBeamSearch
get_feature_extractor¶
- RNNTBundle.get_feature_extractor() → FeatureExtractor[source]¶
Constructs the feature extractor for non-streaming (full-context) ASR.
- Returns:
FeatureExtractor
get_streaming_feature_extractor¶
- RNNTBundle.get_streaming_feature_extractor() → FeatureExtractor[source]¶
Constructs the feature extractor for streaming (real-time) ASR.
- Returns:
FeatureExtractor
get_token_processor¶
- RNNTBundle.get_token_processor() → TokenProcessor[source]¶
Constructs the token processor.
- Returns:
TokenProcessor
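The token processor maps the decoder's predicted token IDs back to text, as shown by the token_processor(hypotheses[0][0]) call in the example. A toy sketch of that post-processing step, using a hypothetical vocabulary rather than the bundle's actual subword model:

```python
# Toy illustration of token post-processing. The real TokenProcessor wraps the
# subword model shipped with the bundle; the vocabulary here is hypothetical.
toy_vocab = {0: "<blank>", 1: "\u2581he", 2: "\u2581hoped", 3: "\u2581there", 4: "\u2581would"}

def toy_token_processor(token_ids):
    # Drop blank tokens, join the subword pieces, turn the word-boundary
    # marker (U+2581) into spaces, and strip the leading space.
    pieces = [toy_vocab[i] for i in token_ids if toy_vocab[i] != "<blank>"]
    return "".join(pieces).replace("\u2581", " ").strip()

print(toy_token_processor([1, 0, 2, 3, 0, 4]))  # he hoped there would
```

The mid-word fragments in the streaming transcript above ("turn ips", "car rots") arise because each segment is post-processed independently, so subword pieces that straddle a segment boundary are joined with a space instead of merged.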