kotoba-whisper-v2.0-mlx
This repository contains kotoba-whisper-v2.0 converted to the mlx-whisper format for running on Apple Silicon.
Because kotoba-whisper-v2.0 is derived from distil-large-v3, this model is significantly faster than mlx-community/whisper-large-v3-mlx while losing little accuracy on Japanese transcription.
Usage
```sh
pip install mlx-whisper
```

```python
import mlx_whisper

result = mlx_whisper.transcribe(speech_file, path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx")
```
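As a slightly fuller sketch (the `audio.wav` path is illustrative, not part of this repository): `mlx_whisper.transcribe` returns a dict mirroring OpenAI Whisper's output, with the full transcription under `"text"` and timed segments under `"segments"`.

```python
import mlx_whisper

# Transcribe a local audio file (path is illustrative).
# The converted weights are fetched from the Hugging Face Hub on first use.
result = mlx_whisper.transcribe(
    "audio.wav",
    path_or_hf_repo="kaiinui/kotoba-whisper-v2.0-mlx",
)

# Full transcription text.
print(result["text"])

# Per-segment timestamps and text.
for segment in result["segments"]:
    print(f'{segment["start"]:.2f}s - {segment["end"]:.2f}s: {segment["text"]}')
```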
Related Links
- kotoba-whisper-v2.0 (the original model)
- mlx-whisper
Model tree for kaiinui/kotoba-whisper-v2.0-mlx
- Base model: kotoba-tech/kotoba-whisper-v2.0