etemiz posted an update 22 days ago
Uh oh! Things are looking uglier by the day.

LLM builders in general are not doing a great job of making human-aligned models.

I don't want to say this is a proxy for p(doom)... but it could be if we are not careful.

The most probable cause is recklessly training LLMs on the outputs of other LLMs, not caring about dataset curation, and not asking "what is beneficial for humans?"...

This fine-tune would score 56 and place 1st on the leaderboard, but I didn't add it; I only include full trainings (or further tunings by the same company) in the leaderboard:

https://huggingface.co/CWClabs/CWC-Mistral-Nemo-12B-V2-q4_k_m
