Commit 4125450 by chuuhtetnaing (verified) · Parent(s): 51eec4a

Update README.md
  - split: train
    path: data/train-*
---

# Dhamma Article Dataset (Last Crawl Date: 18/04/2025)

This dataset contains Dhamma articles scraped from the Dhamma Ransi website.

## Dataset Description

### Overview
The dataset consists of Dhamma articles collected from the Dhamma Ransi website, providing a resource for Myanmar-language NLP tasks and Buddhist text analysis.

### Data Collection Methodology
The data was scraped responsibly:
- The website's robots.txt guidelines were followed
- A 3-second delay was inserted between requests to minimize server load
- Collection was paced to avoid stressing the website
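The crawling policy above can be sketched in code. This is a minimal illustration only, using a hypothetical robots.txt and hypothetical URLs (not the real site's rules); the actual crawler is not part of this repository:

```python
import time
from urllib import robotparser

# Hypothetical robots.txt rules, for illustration only
ROBOTS_LINES = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_LINES)

def can_crawl(url, agent="*"):
    """Check a URL against the parsed robots.txt rules."""
    return rp.can_fetch(agent, url)

def crawl(urls, delay=3):
    """Visit only permitted URLs, pausing `delay` seconds between requests."""
    fetched = []
    for url in urls:
        if can_crawl(url):
            fetched.append(url)  # a real crawler would download the page here
            time.sleep(delay)    # pause between requests to minimize server load
    return fetched
```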
### Data Structure
The dataset contains the following fields:

| Column | Description |
|--------|-------------|
| `url` | Source URL from which the article was scraped |
| `title` | Title of the Dhamma article |
| `body` | Full text content of the article |

## Usage Examples

### Loading the Dataset
```python
from datasets import load_dataset

# Load the dataset from Hugging Face
ds = load_dataset("chuuhtetnaing/dhamma-article-dataset")

# Access the first example
first_article = ds['train'][0]
print(f"Title: {first_article['title']}")
print(f"Body: {first_article['body']}")
```

### Using for NLP Tasks
```python
# Example of tokenizing the dataset for downstream NLP tasks
from transformers import AutoTokenizer

# Replace "your-tokenizer-model" with the checkpoint you intend to use
tokenizer = AutoTokenizer.from_pretrained("your-tokenizer-model")

def tokenize_function(examples):
    return tokenizer(examples["body"], padding="max_length", truncation=True, max_length=512)

tokenized_ds = ds.map(tokenize_function, batched=True)
```

## Dataset Purpose
This dataset is designed for multiple Myanmar-language NLP tasks, including:
- Masked Language Modeling (MLM)
- Next Sentence Prediction (NSP)
- Other self-supervised pretraining tasks
- General Myanmar language model development

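As a sketch of how the `body` text could feed an MLM objective, here is a minimal, library-free token-masking routine; the `mask_tokens` helper and the 15% masking rate are illustrative assumptions, not part of the dataset:

```python
import random

def mask_tokens(tokens, mask_token="[MASK]", prob=0.15, seed=0):
    """Randomly replace roughly `prob` of tokens with `mask_token`.

    Returns (inputs, labels): labels hold the original token at masked
    positions and None elsewhere (positions ignored by the MLM loss).
    """
    rng = random.Random(seed)  # seeded for reproducibility
    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < prob:
            inputs.append(mask_token)  # the model must reconstruct this token
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)        # no loss computed at this position
    return inputs, labels
```

In practice you would tokenize `body` with your chosen tokenizer first and use a ready-made collator such as `transformers.DataCollatorForLanguageModeling` rather than hand-rolled masking.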
## Limitations
- This dataset contains copyrighted material and should be used for research purposes only.
- The articles vary in length and format.
- The dataset reflects the content available on the source website as of the last crawl date.

## Message from Dataset Creator
This dataset was created for research purposes, to advance natural language processing and machine learning for the Myanmar language, with a particular focus on Buddhist texts and Dhamma teachings. We respectfully request that users of this dataset:
1. Use this collection for academic, research, and educational purposes only
2. Acknowledge the original authors and publishers whose works appear in this corpus
3. Respect copyright and intellectual property rights
4. Consider supporting the Dhamma Ransi organization by visiting their website
5. Contribute to the development of digital resources for Myanmar language preservation and Buddhist text analysis

The creation of this dataset is intended to bridge the gap in language resources for Myanmar NLP research while respecting the importance of supporting Dhamma teachings and Buddhist scholarship. We kindly ask all researchers to use this data responsibly.

## Acknowledgments
This dataset is derived from the [Dhamma Ransi website](https://www.dhammaransi.com/index.php/new.html).