Uh oh! Things are looking uglier by the day.
LLM builders in general are not doing a great job of making human-aligned models.
I don't want to say this is a proxy for p(doom)... But it could be if we are not careful.
The most probable cause is recklessly training LLMs on the outputs of other LLMs, not caring about dataset curation, and never asking 'what is beneficial for humans?'...