---
license: apache-2.0
datasets:
- flwrlabs/code-alpaca-20k
language:
- en
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: peft
tags:
- text-generation-inference
- code
---

## Model Details

This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.

The adapter and benchmark results have been submitted to the [FlowerTune LLM Code Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/code/).

Please refer to the following GitHub project for details on how to reproduce the training and evaluation steps:
[FlowerTune-LLM-Labs](https://github.com/ethicalabs-ai/FlowerTune-LLM-Labs/blob/main/workspace/models/README.md)

## Evaluation Results (Pass@1 score)

- **HumanEval**: 64.63 %
- **MBPP**: 54.8 %
- **MultiPL-E (C++)**: 60.87 %
- **MultiPL-E (JS)**: 61.49 %
- **Average**: 60.45 %

The evaluation was conducted on an NVIDIA A40 (48 GB).
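
## Usage

A minimal sketch of loading the adapter on top of the Qwen/Qwen3-4B base model with 🤗 Transformers and PEFT. The `adapter_id` below is a placeholder; replace it with this adapter's actual Hub repository ID.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "Qwen/Qwen3-4B"                 # base model listed in the metadata above
adapter_id = "<your-username>/<this-adapter>"   # placeholder: replace with this adapter's repo ID

# Load the base model and tokenizer, then attach the PEFT adapter.
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For generation settings and prompt formatting used during evaluation, see the FlowerTune-LLM-Labs repository linked above.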