Dataset Viewer
Auto-converted to Parquet
Columns
reference: string (6 distinct values)
base_prediction: string (6 distinct values)
base_wer: float64 (min 0.17, max 4.24)
finetuned_prediction: string (6 distinct values)
finetuned_wer: float64 (min 0.17, max 0.36)
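Since the table is auto-converted to Parquet, it can be pulled straight from the Hub with the `datasets` library. A minimal loading sketch, assuming a hypothetical repo id and a train split (neither is shown on this page):

```python
# Minimal loading sketch. "user/whisper-wer-eval" and the "train" split are
# placeholders; the actual Hub repo id is not shown on this page.
from datasets import load_dataset

ds = load_dataset("user/whisper-wer-eval", split="train")
for row in ds:
    print(f"base WER {row['base_wer']:.3f} -> fine-tuned WER {row['finetuned_wer']:.3f}")
```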
Row 1
reference: mmlu is an immediate means of testing performance claude 2 0 is the latest claude model from anthropic notux 8x7b v1 is a fine tune of mixtral wizardlm 70b is a fine tune of llama 70b
base_prediction: mmlu is a medium means of testing performance cloud 2 0 is the latest cloud model from a throbic notebook set by 7bv1 is a fine tune of mixed trial wizard alam 70b is a fine tune of lama 70b
base_wer: 0.416667
finetuned_prediction: mmlu is a medic means of testing performance cloud 2 0 is the latest cloud model from anthropic not except by 7b v1 is a fine tune of mixed rall wizard lm70b is a fine tune of lama 70b
finetuned_wer: 0.361111
Row 2
reference: it's actually wizardlm 70b v1 0 spin self play fine tuning that improves llms decilm 7b is a fast model with 7 billion parameters arena elo is a means of measuring performance the fastest openai model is gpt 4 turbo
base_prediction: it's actually wizard lm70b v1 0 spin self play fine tuning that improves the lm70b is a fast model and with 7 billion parameters arena ilo is a means of measuring performance the fastest open ai model is gpt for turn
base_wer: 0.275
finetuned_prediction: it's actually wizard lm70b v1 0 spin self play fine tuning that improves lm's the slm 7b is a fast model and with 7 billion parameters arena elo is a means of measuring performance the fastest open ai model is gpt 4
finetuned_wer: 0.225
Row 3
reference: openchat is a fine tune then basically it's a fine tune of the mistral 7b model tricksy is an approach for fast inference using sparsity microsoft have launched phi 2 mt bench is a metric for performance eval
base_prediction: open chat is a fine tune then basically it's a fine tune of the mi stral 7b model tricksie is an approach for fast inference using sparcity microsoft have launched fi2 mt bench is a metric for performance evaluation
base_wer: 0.236842
finetuned_prediction: open chat is a fine tune then basically it's a fine tune of the mestral 7b model tricksie is an approach for fast inference using sparcity microsoft have launched fi2 mtbunch is a metric for performance evaluation
finetuned_wer: 0.263158
Row 4
reference: mistral medium is a larger mixture of experts claude 1 is an earlier version of claude from anthropic mixtral 8x7b instruct v0 1 is the mixture of experts with 7b models tulu 2 dpo 70b is a fine tune of the 70b model gemini pro is google's best model
base_prediction: mestral medium is a larger mixture of experts claude 1 is an earlier version of claude from anthropic mestral 8 by 7 being in stroke to v0 1 is the mixture of experts with 7 beam models 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
base_wer: 4.244898
finetuned_prediction: mestral medium is a larger mixture of experts claude 1 is an earlier version of claude from anthropic mixed raleigh by 7b and struck to v0 1 is the mixture of experts with 7b models 2lu2dpo is a fine tuner 7db model gemini pro is google's best model
finetuned_wer: 0.326531
Row 5
reference: solar 10 7b is the fastest rather sorry it's the it's a strong it's a version of mistral 7b with extra layers claude 2 1 is the latest model from anthropic mixtral 8x7b is a mixture of experts lightning attention is another version of attention that improves inference speed
base_prediction: the solar 10 7b is the fastest it's a strong version of mestraus 7b with extra layers cloud 2 1 is the latest model from anthropic mestraus 8 by 7b is a mixture of experts lightning attention is another version of attention that improves inference speed
base_wer: 0.265306
finetuned_prediction: solar 10 7b is the fastest it's a strong it's a version of me strauss 7b with extra layers cloud 2 1 is the latest model from anthropic mixed route 8 by 7b is a mixture of experts lightning attention is another version of attention that improves inference speed
finetuned_wer: 0.244898
Row 6
reference: and yi 34b chat is a fine tune of actually i'm not sure what that is but it's i think it's a trained a pre trained model of llama style
base_prediction: and ye34b chat is a fine tune of actually i'm not sure what that is but it's i think it's a trained pre trained model of lama
base_wer: 0.166667
finetuned_prediction: and ye34b chat is a fine tune of actually i'm not sure what that is but it's i think it's a trained pre trained model of lama
finetuned_wer: 0.166667
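The WER columns follow the standard definition, WER = (S + D + I) / N: substitutions, deletions, and insertions in the prediction, divided by the number of reference words. A minimal sketch that reproduces the last row's score using the jiwer package (an assumption; the page does not say which scorer produced these columns):

```python
# Recompute WER for Row 6. jiwer is an assumed scorer; the page does not
# state how the base_wer / finetuned_wer columns were actually produced.
import jiwer

reference = ("and yi 34b chat is a fine tune of actually i'm not sure what that is "
             "but it's i think it's a trained a pre trained model of llama style")
prediction = ("and ye34b chat is a fine tune of actually i'm not sure what that is "
              "but it's i think it's a trained pre trained model of lama")

# 5 errors over 30 reference words -> 0.166667, matching finetuned_wer above.
print(jiwer.wer(reference, prediction))
```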