Treating _ as * (tokenizer error?)

#10
by EmilPi - opened

I tested two versions of AWQ quants

Code Review Summary 
Major Issues: 
     Syntax Errors: **eq** should be __eq__, **init** should be __init__, **str** should be __str__, and **name** should be __name__
...

I use recommended generation config.

...
--temp 0.7 --top-k 20 --min-p 0.0 --top-p 0.8 --repeat-penalty 1.05 \
...

set in both llama-server and Open WebUI.

I just checked with the original model weights. It still confuses the _ and * tokens.

Hello, is there any way to fix this, or is a fix in progress? This makes the model unusable for general Python usage.

Are you still having this problem?
I'm using it with vLLM with FP8 quantization and I'm not seeing this error. I asked the model if it can differentiate between '__' and '**', and it said yes, it can.
Therefore the problem is probably not with the model.

I tested with the original fp16 weights (this repo) and the AWQ weights mentioned above. I don't expect anything to have changed in the 15 days since I first noticed this; only the chat template / tokenizer config was updated 11 days ago, which looks unrelated.

@stev236 the test should be like this:

---QUERY---
Review this Python code:

class A:
    def __init__(self, a):
        self.a = a
    def echo(self):
        print(self.a)
if __name__ == '__main__':
    a = A(1)
    a.echo()

---END QUERY---
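To check a response for this corruption independently of any chat UI, one can scan the raw completion text for dunder names that came back wrapped in bold markers. This is a minimal sketch (the helper name and the dunder list are my own, not part of any official test harness):

```python
import re

# Dunder names we expect to survive intact in a Python code review.
DUNDERS = {"init", "eq", "str", "name", "main"}

def garbled_dunders(text):
    """Return dunder names that appear as Markdown bold (**init**)
    instead of double underscores (__init__)."""
    return sorted(d for d in DUNDERS
                  if re.search(r"\*\*%s\*\*" % d, text))

print(garbled_dunders("**init** should be __init__"))  # ['init']
print(garbled_dunders("__init__ looks correct"))       # []
```

Running this against the raw (un-rendered) API response, rather than the UI output, shows whether the corruption comes from the model or from the front end.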

I think I know the cause of the issue:

(screenshot attached)

This is an Open WebUI issue, not a problem with the Qwen model. Sorry for creating this thread.
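This is consistent with how Markdown works: `__text__` and `**text**` are both strong emphasis, so any UI that renders model output as Markdown will display `__init__` as bold "init", making the underscores appear lost. A toy sketch of that emphasis pass (a simplified regex, not Open WebUI's actual renderer, which also enforces word boundaries and escaping):

```python
import re

def render_strong(md):
    """Toy Markdown emphasis pass: both __x__ and **x**
    render as <strong>x</strong>."""
    md = re.sub(r"__(.+?)__", r"<strong>\1</strong>", md)
    md = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", md)
    return md

print(render_strong("__init__"))  # <strong>init</strong>
print(render_strong("**init**"))  # <strong>init</strong>
```

Since both inputs render identically, copying the displayed text (or reading it in the UI) cannot tell `__init__` apart from `**init**`; the raw API response is the only reliable place to check.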

EmilPi changed discussion status to closed
