Update README.md

author={Nouar AlDahoul and Yasir Zaki}
}
```

## Governance & Responsible Use

The **FaceScanPaliGemma** model processes highly sensitive biometric data (facial attributes). Deployment of this model must follow **strict governance frameworks** to ensure responsible and ethical use.

### ✅ Permitted Uses

- Academic research, benchmarking, and reproducibility studies.
- Educational projects exploring bias, fairness, and multimodal AI.
- Development of fairness-aware systems with proper safeguards.

### ❌ Prohibited Uses

- **Surveillance or mass monitoring** of individuals or groups.
- **Identity verification or authentication** without explicit and informed consent.
- **Applications that discriminate against or marginalize** individuals or communities.
- Use on **scraped datasets or facial images** collected without consent.

### ⚠️ Law Enforcement Use

- Direct use in **law enforcement contexts is not recommended** due to high societal risks.
- Risks include **bias amplification**, **wrongful identification**, and **privacy violations**.
- If ever considered, deployment must be:
  - Governed by **strict legal frameworks** (e.g., EU AI Act, GDPR, CCPA).
  - Subject to **independent auditing, transparency, and accountability**.
  - Limited to **proportional, necessary, and rights-respecting use cases**.

### Governance Principles

1. **Access & Control** – Limit deployment to contexts with clear oversight and accountability.
2. **Transparency** – Always disclose when and how the model is used.
3. **Bias & Fairness Auditing** – Evaluate performance across demographic groups before deployment (see the sketch after this list).
4. **Privacy Protection** – Respect GDPR, CCPA, and local regulations; never process data without consent.
5. **Accountability** – Establish internal review boards or ethics committees for production use.
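
As a concrete starting point for principle 3, here is a minimal sketch of a per-group accuracy audit, assuming predictions and demographic group labels have already been collected offline with consent. The file name `audit_results.csv` and the column names `group`, `label`, and `prediction` are illustrative, not part of this repository.

```python
import pandas as pd

# Hypothetical evaluation export: one row per image, with a
# demographic group label, the ground-truth attribute, and the
# model's prediction. File and column names are illustrative.
df = pd.read_csv("audit_results.csv")  # columns: group, label, prediction

# Accuracy per demographic group, worst first.
df["correct"] = df["label"] == df["prediction"]
per_group = df.groupby("group")["correct"].mean().sort_values()
overall = df["correct"].mean()
print(per_group)

# Flag groups that fall well below overall accuracy. The 5-point
# gap used here is an arbitrary example threshold, not a standard.
for group, acc in per_group.items():
    if acc < overall - 0.05:
        print(f"WARNING: {group}: {acc:.3f} vs. overall {overall:.3f}")
```

Any gap surfaced this way warrants investigation before deployment.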

### Community Reporting

We encourage the community to report issues, biases, or misuse of this model through the **Hugging Face Hub discussion forum**.