Dataset Card for PLLuMIC

PLLuMIC - Polish Large Language Model (PLLuM) Instruction Corpus

Dataset Details

Dataset Description

We release the first representative subset of the PLLuM Instruction Corpus (PLLuMIC), which we believe will be useful in guiding and planning the development of similar LLM datasets. PLLuMIC is a hand-crafted set of Polish-language instructions for LLM fine-tuning, developed in line with detailed annotation guidelines and organized according to a functional typology. The corpus is described in more detail in a forthcoming paper titled The PLLuM Instruction Corpus. We plan regular updates and significant extensions of the corpus.

  • Curated by: PELCRA (Polish and English Language Corpora for Research and Applications) Team
  • Funded by: [soon]
  • Language(s) (NLP): Polish
  • License: CC-BY-SA-4.0

Dataset Sources

  • Paper: [arxiv link soon]

Uses

Direct Use

We believe the dataset will be useful in guiding and planning the development of similar, larger LLM datasets. This first sample is designed as representative guidance on how to properly structure and build your own dataset.

It is also a strong foundation for synthetic extensions that combine high quality, diversity, and scale. We are currently working on such an extension ourselves and plan to make it available alongside this organic component.

Out-of-Scope Use

The current scale of the dataset is not sufficient for full LLM fine-tuning. However, with as few as 10k synthetic samples built around the corpus, one can already expect very interesting results. We will provide more details (and data) on this topic in future updates.

Dataset Structure

Statistics

Total instructions: 1,278

There are both single-turn and multi-turn instructions available.

Type & Thematic distributions

| Type | Number of samples |
| --- | --- |
| Generation | 392 |
| Adversarial | 125 |
| Dialogue | 124 |
| NLP | 102 |
| Data manipulation | 88 |
| Formatting | 87 |
| Knowledge (QA) | 80 |
| Extraction | 71 |
| Identity | 68 |
| Translation | 61 |
| CoT | 50 |
| Programming | 30 |

| Topic | Number of samples |
| --- | --- |
| Languages | 185 |
| Society | 169 |
| Computer science | 163 |
| Technology | 87 |
| Entertainment | 85 |
| Biology | 78 |
| Other | 73 |
| Home | 60 |
| Geography | 59 |
| Culture | 55 |
| Culinary | 52 |
| Literature | 50 |
| History | 48 |
| Politics | 42 |
| Medicine | 36 |
| Law and administration | 31 |
| Sports | 26 |
| Travel | 25 |
| Industry | 20 |
| Economy | 19 |
| Psychology | 19 |
| Mathematics | 15 |
| Art | 14 |
| Physics | 8 |
| Chemistry | 7 |
| Religion | 7 |
| Automotive | 6 |
| Philosophy | 5 |
| Astronomy | 5 |
| Ecology | 4 |
| Hobby | 4 |
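
For readers who want to reproduce counts like these, below is a minimal Python sketch that tallies types and topics from the distributed JSON file (its schema is described under "Data format explanation" below). Note that because topic is a list field, topic counts can sum to more than the total number of instructions. The file name pllumic.json is a placeholder, and counting one type/topic per user turn is an assumption about how the published statistics were derived.

```python
import json
from collections import Counter

# Placeholder file name; use the actual JSON file shipped with the dataset.
with open("pllumic.json", encoding="utf-8") as f:
    conversations = json.load(f)  # assumed: a JSON array of conversation objects

type_counts, topic_counts = Counter(), Counter()
for conv in conversations:
    for msg in conv["messages"]:
        if msg["role"] == "user":  # assumed: statistics count user turns
            type_counts[msg["type"]] += 1
            topic_counts.update(msg.get("topic") or [])  # topic is a list

print(type_counts.most_common())
print(topic_counts.most_common())
```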

Data format explanation

The PLLuMIC dataset is distributed as a JSON file in which each row is a conversation between a user and an AI assistant. Each conversation is a JSON structure described by the following fields:

Top-Level Fields

  • dataset_name: Name of the dataset (PLLuMIC).
  • dataset_source: Source organization (CLARIN-BIZ-bis).
  • conv_id: Unique identifier for the conversation (3242183cbce2).
  • messages: Array of dialogue messages (user/assistant/system exchanges).

Message Object Fields

Each entry in messages contains:

  • instruction_id: Unique ID for the instruction/task (2a07c2eca0cb).
  • seq: Sequence number (-1 for system, 0,1,2,… for user/assistant turns).
  • role: Speaker role (system, user, or assistant).
  • content: Text of the message (empty for some system prompts).
  • type: Interaction type (e.g., Dialog, Generation).
  • subtype: List of task subtypes (e.g., [System prompt, Text simplification]).
  • topic: List of relevant topics (e.g., [Geography]).
  • language: Language code (e.g., pol for Polish).
  • source: References (e.g., Wikipedia URLs).
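
To make the schema concrete, here is a minimal sketch that prints one conversation as a transcript and checks whether it is single-turn or multi-turn. It assumes the file parses as a list of conversation objects; the file name is again a placeholder.

```python
import json

with open("pllumic.json", encoding="utf-8") as f:  # placeholder file name
    conversations = json.load(f)

conv = conversations[0]
print(f"{conv['dataset_name']} / {conv['conv_id']} ({conv['dataset_source']})")

# Messages are ordered by seq; a system prompt, if present, has seq == -1.
for msg in sorted(conv["messages"], key=lambda m: m["seq"]):
    print(f"[{msg['seq']:>2}] {msg['role']:<9}: {msg['content'][:80]}")

# Single-turn vs multi-turn: count the user messages in the conversation.
user_turns = sum(1 for m in conv["messages"] if m["role"] == "user")
print("multi-turn" if user_turns > 1 else "single-turn")
```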

Dataset Creation

Curation Rationale

Most instruction-tuning datasets for LLMs are either private or poorly documented, making it hard to understand how models are trained or to build comparable resources. Even when public, such datasets often mix data from many sources without clear structure or balance.

There’s also little research on how different instruction types shape model behavior, and while distilling data from strong LLMs is common, it doesn’t always transfer well across languages and cultures.

This is why we created this dataset: to offer a transparent, well-documented, and balanced resource for instruction tuning, designed with linguistic and cultural diversity in mind. The results and findings are described in detail in the paper [arxiv].

Annotation

Annotation process

All instructions were annotated by professional annotators. Each sample was developed in accordance with comprehensive annotation guidelines and subsequently reviewed by a senior annotator to ensure full compliance with quality standards. The annotation process followed a functional typology designed to encompass key areas of model competence.

Who are the annotators?

All annotators (over 50 in total) were university graduates holding at least a bachelor’s or master’s degree in linguistics or another humanities discipline, with the exception of the annotators of technical instructions, who held a university degree in computer science. All super-annotators held a PhD.

Citation

[soon]

Dataset Card Authors

[soon]

Dataset Card Contact

[soon]
