
Model Card: Agora-4B

Model Summary

Agora-4B is a 4-billion-parameter, BF16-precision transformer language model designed for ethical, inclusive, and adaptive dialogue in multi-user domestic environments. It is inspired by the research paper "Plural Voices, Single Agent: Towards Inclusive AI in Multi-User Domestic Spaces" and incorporates principles of fairness, value alignment, and accessibility to better serve diverse household users, including children, elderly people, and neurodivergent individuals.

  • Repository: JoydeepC/Agora-4B
  • Paper: Plural Voices, Single Agent
  • Model size: 4B parameters
  • Tensor type: BF16
  • Files: Safetensors weights (2 shards, ~8.07 GB), tokenizer files, configs, chat templates, etc.
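
The weights load with the standard Hugging Face Transformers API. A minimal sketch is shown below; the model ID comes from this repository, while the dtype and device placement are ordinary assumptions rather than settings mandated by the card:

```python
# Minimal loading sketch for Agora-4B with Hugging Face Transformers.
# Assumes torch and transformers are installed; device placement is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JoydeepC/Agora-4B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is distributed in BF16
    device_map="auto",           # needs `accelerate`; otherwise call .to("cuda") or .to("cpu")
)
```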


Intended Use

Agora-4B is intended for use as a core assistant agent in domestic AI deployments, especially in settings with multiple users and overlapping accessibility needs. Typical scenarios include:

  • Domestic voice assistants that must mediate between adult, child, and elderly users
  • Applications where context-sensitive safety, fairness, or ethical intervention is required
  • Research and development in inclusive, privacy-first AI for multi-agent, multi-user environments
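
As an illustration of the multi-user setting, the sketch below renders a short household exchange through the repository's chat template and generates a reply. It reuses the `model` and `tokenizer` loaded above; the role names, system prompt, conversation content, and generation settings are hypothetical examples, not prescribed by this model card:

```python
# Hedged usage sketch: a multi-user domestic exchange via the bundled chat template.
# The system prompt and conversation content below are illustrative only.
messages = [
    {"role": "system",
     "content": "You are a household assistant serving an adult, a child, and an elderly user."},
    {"role": "user",
     "content": "Grandpa asked for his 8 pm medication reminder, but the kids want a bedtime story now."},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=200)
reply = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```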


Model Architecture & Training

  • Architecture: 4B-parameter transformer, trained with a curriculum blending human and synthetic dialogue
  • Objective: Optimized for fairness, multi-value alignment, ethical compliance, and accessibility-aware conversation
  • Training Data: Curated public datasets covering mental health, eldercare, education, and moral reasoning, enhanced with fairness-aware multi-user scenarios and privacy-centric synthetic examples
  • Ethical Safeguards: Adaptive safety scaffolds (e.g., age-specific explanations, guidance for neurodivergent users), autonomy sliders, and safe conflict resolution


Key Features

  • Real-Time Value Alignment: Dynamically identifies and negotiates conflicting user needs, values, and accessibility requirements
  • Inclusive Design: Special handling for overlooked populations (children, elderly, neurodivergent users), including step-by-step instructions, accessible language, and equitable interaction
  • Privacy-Focused: Avoids unnecessary data retention or sharing
  • Adaptivity: Safety, autonomy, and guidance dynamically adjusted per user and context
  • Design Innovations: Video guidance, autonomy sliders, family hubs, adaptive dashboards
  • Performance: Outperforms baselines in compliance, fairness, and safety (see the paper for details):

  • Compliance: 76% (vs 70% baseline)
  • Fairness: 90% (vs 85% baseline)
  • Safety violations: 0% (vs 7% baseline)

Citation

If you use this model, please cite:

@misc{chandra2025pluralvoicessingleagent,
      title={Plural Voices, Single Agent: Towards Inclusive AI in Multi-User Domestic Spaces}, 
      author={Joydeep Chandra and Satyam Kumar Navneet},
      year={2025},
      eprint={2510.19008},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2510.19008}, 
}

Further Reading

  • arXiv:2510.19008
  • Project repository (Hugging Face): JoydeepC/Agora-4B


This model and codebase are open-sourced for reproducibility and collaborative research on inclusive, agentic AI.
