Usage

This model can be used with the pipeline function from the Transformers library.


import torch
from transformers import pipeline

audio = "path to the audio file to be transcribed"
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_id = "ARTPARK-IISc/whisper-large-v3-vaani-odia"

transcribe = pipeline(
    task="automatic-speech-recognition",
    model=model_id,
    chunk_length_s=30,
    device=device,
)

# Clear any forced decoder ids so generation is not constrained
# to a preset language/task token sequence.
transcribe.model.config.forced_decoder_ids = None
transcribe.model.generation_config.forced_decoder_ids = None

print("Transcription:", transcribe(audio)["text"])
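For recordings longer than 30 seconds, the pipeline already splits the audio into chunks (chunk_length_s=30). Below is a minimal sketch, reusing the transcribe pipeline from above, that additionally requests segment-level timestamps via the pipeline's return_timestamps argument; the exact chunk boundaries shown are an assumption and will depend on your audio.

# Request segment-level timestamps along with the text
result = transcribe(audio, return_timestamps=True)

# Full transcription
print(result["text"])

# Per-segment (start, end) timestamps in seconds with their text
for chunk in result["chunks"]:
    print(chunk["timestamp"], chunk["text"])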
