Data Summary for microsoft_llava-med-7b-delta

1. General information

1.0.1 Version of the Summary: 1.0

1.0.2 Last update: 24-Nov-2025

1.1 Model Developer Identification

1.1.1 Model Developer name and contact details: Microsoft Corporation at One Microsoft Way, Redmond, WA 98052. Tel: 425-882-8080.

1.2 Model Identification

1.2.1 Versioned model name(s): LLaVA-Med-7b-delta

1.2.2 Model release date: 09-Nov-2023

1.3 Overall training data size and characteristics

1.3.1 Size of dataset and characteristics

1.3.1.A Text training data size: Less than 1 billion tokens

1.3.1.B Text training data content: LLaVA-Med builds upon the PMC-15M dataset, a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central.

1.3.1.C Image training data size: Less than 1 billion tokens

1.3.1.D Image training data content: Figure images from biomedical research articles in PubMed Central, paired with their captions, across modalities including microscopy, radiography, histology, CT, MRI, chest X-ray, and pathology figures.

1.3.1.E Audio training data size: Not applicable

1.3.1.F Audio training data content: Not applicable

1.3.1.G Video training data size: Not applicable

1.3.1.H Video training data content: Not applicable

1.3.1.I Other training data size: The model also uses instruction-following conversational data generated by GPT-4 from figure captions and inline mentions; this data exists in 10K- and 60K-sample versions.

1.3.1.J Other training data content: GPT-4-generated multi-round biomedical instruction-following conversations derived from figure captions and from sentences mentioning figures (inline mentions) in PubMed Central articles (an illustrative record sketch appears at the end of Section 1.3).

1.3.2 Latest date of data acquisition/collection for model training: 01-May-2023

1.3.3 Is data collection ongoing to update the model with new data collection after deployment? No

1.3.4 Date the training dataset was first used to train the model: 01-May-2023

1.3.5 Rationale or purpose of data selection: Datasets were selected to adapt a general-domain multimodal model to biomedical vision-language tasks by aligning biomedical concepts and enabling instruction-following. PMC-15M figure-caption pairs provide broad coverage of biomedical images and terminology, while GPT-4-generated conversations built from captions and inline mentions provide diverse, open-ended instruction data to support visual chat and VQA performance.
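
The instruction-following data described in 1.3.1.I and 1.3.1.J consists of multi-round conversations grounded in a single figure. The sketch below shows one plausible shape for such a record; the field names (id, image, conversations, from, value) follow the general LLaVA-style conversation schema and are illustrative assumptions, not a verbatim excerpt of the released LLaVA-Med files.

```python
# Illustrative sketch only: a hypothetical single training record for the
# GPT-4-generated instruction-following data described in 1.3.1.I and 1.3.1.J.
# Field names ("id", "image", "conversations", "from", "value") follow the
# common LLaVA-style conversation schema and are assumptions, not a verbatim
# excerpt of the released LLaVA-Med files.
import json

sample = {
    "id": "PMC0000000_figure_1",          # hypothetical record identifier
    "image": "PMC0000000_figure_1.jpg",   # figure image from a PubMed Central article
    "conversations": [
        # Multi-round exchange generated by GPT-4 from the figure caption
        # and inline mentions of the figure in the article text.
        {"from": "human", "value": "<image>\nWhat imaging modality is shown?"},
        {"from": "gpt", "value": "The figure shows an axial CT image of the chest."},
        {"from": "human", "value": "Is there any visible abnormality?"},
        {"from": "gpt", "value": "The caption describes a nodule in the right upper lobe."},
    ],
}

print(json.dumps(sample, indent=2))
```

Each conversation alternates human questions with GPT-4 answers generated only from the caption and inline-mention text, as described in 1.3.1.J.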

2. List of data sources

2.1 Publicly available datasets

2.1.1 Have you used publicly available datasets to train the model? Yes

2.2 Private non-publicly available datasets obtained from third parties

2.2.1 Datasets commercially licensed by rights holders or their representatives

2.2.1.A Have you concluded transactional commercial licensing agreement(s) with rights holder(s) or with their representatives? This information cannot be provided due to unavailability of the underlying data (e.g., loss, corruption, or other access limitations).

2.2.2 Private datasets obtained from other third-parties

2.2.2.A Have you obtained private datasets from third parties that are not licensed as described in Section 2.2.1, such as data obtained from providers of private databases or data intermediaries? No

2.3 Personal Information

2.3.1 Was personal data used to train the model? Microsoft follows all relevant laws and regulations pertaining to personal information.

2.4 Synthetic data

2.4.1 Was any synthetic AI-generated data used to train the model? Yes

3. Data processing aspects

3.1 Respect of reservation of rights from text and data mining exception or limitation

3.1.1 Does this dataset include any data protected by copyright, trademark, or patent? Microsoft follows all required regulations and laws for processing data protected by copyright, trademark, or patent.

3.2 Other information

3.2.1 Does the dataset include information about consumer groups without revealing individual consumer identities? Microsoft follows all required regulations and laws for protecting consumer identities.

3.2.2 Was the dataset cleaned or modified before model training? Yes