vLLM support

#3
by amphora - opened

Hi, thanks for releasing the model.
I was wondering if there is any vLLM support planned, or any other way to run batch inference with the model (other than HF).

Thanks in advance :)
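For reference, if vLLM does support this model's architecture, offline batch inference would look roughly like the sketch below. This is a minimal, hedged example using vLLM's standard offline API; `org/model-name` is a placeholder, not the actual model id from this repo.

```python
# Minimal vLLM offline batch inference sketch.
# Assumes vLLM supports this model's architecture;
# "org/model-name" is a placeholder model id.
from vllm import LLM, SamplingParams

prompts = [
    "Explain what vLLM is in one sentence.",
    "Write a haiku about batch inference.",
]

# Sampling settings; tune to taste.
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# Load the model once; vLLM batches the prompts internally.
llm = LLM(model="org/model-name")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt)
    print(output.outputs[0].text)
```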
