Runtime error

Exit code: 1. Reason:

Using device: cpu
Loading LLaVA model and processor...
config.json: 100%|██████████| 950/950 [00:00<00:00, 8.34MB/s]
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Traceback (most recent call last):
  File "/app/app.py", line 26, in <module>
    llava_model = CustomLlavaForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3669, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_8bit.py", line 81, in validate_environment
    validate_bnb_backend_availability(raise_exception=True)
  File "/usr/local/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 557, in validate_bnb_backend_availability
    return _validate_bnb_multi_backend_availability(raise_exception)
  File "/usr/local/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 498, in _validate_bnb_multi_backend_availability
    available_devices.discard("cpu")  # Only Intel CPU is supported by BNB at the moment
AttributeError: 'frozenset' object has no attribute 'discard'
