hf-transformers-bot committed on
Commit 2d2d7d7 · verified · 1 Parent(s): 0cdca46

Upload 2025-10-24/runs/90-18767965192/ci_results_run_pipelines_torch_gpu/test_results_diff.json with huggingface_hub

2025-10-24/runs/90-18767965192/ci_results_run_pipelines_torch_gpu/test_results_diff.json ADDED
@@ -0,0 +1,116 @@
+ === Diff for job: multi-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-68f9a229-2b28029d4947bc9b553b2ed8;ed0838e8-66f6-4118-98f5-7206bd01515f)
+ - Diff is 625739 characters long. Set self.maxDiff to None to see it.
+ - [[-0.08846613764762878, -0.0840778797864914, -0.08044[234423 chars]266]]
+ - [[-0.08846619725227356, -0.08407788723707199, -0.0804[234406 chars]266]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-68faf2e4-135f1d6627f8e2310a327813;deba0521-d661-4179-8746-0d4c75fb09bb)
+ + Diff is 627257 characters long. Set self.maxDiff to None to see it.
+ + [[-0.08846612274646759, -0.08407790213823318, -0.0804[234505 chars]816]]
+ + [[-0.08846615254878998, -0.08407791703939438, -0.0804[234520 chars]488]]
+
+ === Diff for job: single-gpu_run_examples_gpu_test_reports ===
+ --- Absent in current run:
+ - 0%| | 0/10 [00:01<?, ?it/s]
+ - 10%|β–ˆ | 5/50 [00:01<00:17, 2.50it/s]
+ - 10/23/2025 02:53:58 - INFO - __main__ - Distributed environment: NO
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/23/2025 02:53:59 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ - 10/23/2025 02:53:59 - INFO - transformers.configuration_utils - Model config T5Config {
+ - 10/23/2025 02:53:59 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ - 10/23/2025 02:53:59 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ - 10/23/2025 02:53:59 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ - 10/23/2025 02:53:59 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ - 10/23/2025 02:54:00 - INFO - __main__ - Sample 1 of the training set: {'input_ids': [148, 54, 217, 8285, 13, 3068, 221, 7721, 3, 208, 22358, 30, 12296, 13, 8, 1430, 44, 1630, 10, 1755, 272, 4209, 30, 1856, 30, 9938, 555, 11, 8, 9938, 3349, 475, 5, 28089, 11, 1244, 5845, 6, 21, 677, 6, 43, 708, 12, 8147, 550, 45, 8, 3, 60, 5772, 257, 2901, 68, 8, 2630, 3516, 21, 3068, 221, 7721, 2675, 19, 24, 70, 596, 103, 59, 320, 20081, 3919, 13, 692, 8, 337, 5, 27, 214, 8, 1589, 3431, 7, 43, 530, 91, 13, 3169, 274, 578, 435, 1452, 16, 3, 9, 1126, 1419, 68, 48, 97, 6, 227, 8915, 95, 163, 192, 979, 45, 70, 166, 4169, 1031, 6, 378, 320, 310, 18337, 21, 8, 163, 420, 18, 89, 2242, 372, 406, 3, 9, 1369, 5, 486, 709, 80, 3282, 13, 70, 16562, 1330, 12, 36, 1044, 18, 77, 89, 2176, 1054, 6, 28, 921, 44, 8, 1886, 1829, 8032, 21, 1452, 3, 18, 11, 59, 131, 250, 79, 43, 1513, 128, 1508, 12, 2871, 11, 28325, 26, 128, 11855, 1480, 1766, 5, 290, 19, 3, 9, 2841, 1829, 81, 8, 286, 28, 8, 2743, 1955, 1290, 10070, 11, 112, 1508, 2508, 81, 149, 79, 43, 2767, 223, 2239, 7, 437, 336, 774, 6, 116, 79, 225, 36, 4549, 21, 136, 773, 13, 13233, 24, 228, 483, 378, 300, 5, 1029, 8, 1067, 6, 479, 44, 8, 194, 79, 577, 11, 70, 2136, 13, 6933, 6, 34, 19, 614, 12, 217, 125, 24, 13233, 429, 36, 42, 125, 228, 4431, 120, 483, 365, 1290, 10070, 552, 8, 1762, 2025, 2034, 9540, 5, 156, 79, 54, 129, 80, 1369, 365, 70, 6782, 258, 79, 56, 129, 3, 9, 720, 13, 7750, 223, 68, 6, 8, 1200, 48, 1369, 924, 661, 1550, 30, 6, 8, 72, 17291, 485, 132, 56, 36, 5, 3159, 577, 1549, 19, 59, 3510, 30, 48, 1407, 3068, 221, 7721, 2369, 336, 774, 30, 3, 9, 306, 365, 3084, 432, 986, 63, 565, 6, 28, 3, 9, 661, 13, 131, 80, 9589, 16, 70, 336, 850, 1031, 3, 7, 17348, 70, 1455, 5, 86, 8, 628, 13, 874, 767, 6, 66, 13, 24, 3410, 11, 15290, 1330, 12, 43, 118, 3, 7, 15318, 91, 13, 8, 1886, 6, 3, 3565, 135, 3762, 578, 8, 337, 563, 13, 1508, 113, 6, 59, 78, 307, 977, 6, 2299, 3555, 5, 466, 19, 59, 66, 323, 12, 1290, 10070, 6, 68, 3, 88, 65, 12, 240, 
128, 3263, 21, 34, 5, 27, 183, 780, 12, 217, 3, 9, 4802, 869, 13, 577, 45, 3068, 221, 7721, 437, 3, 88, 808, 1567, 44, 8, 414, 13, 1718, 5, 466, 19, 16, 4656, 12, 432, 986, 63, 565, 31, 7, 97, 38, 2743, 6, 116, 79, 130, 3, 60, 4099, 2810, 11, 1256, 12, 3853, 11, 6, 44, 8, 414, 13, 112, 28388, 44, 8, 12750, 13, 2892, 6, 92, 1944, 28, 3, 9, 1730, 116, 79, 877, 1039, 5, 4395, 8, 6242, 6, 1290, 10070, 65, 59, 2139, 2448, 231, 893, 5, 290, 47, 150, 174, 21, 376, 12, 36, 78, 158, 7, 7, 603, 3040, 116, 3, 88, 764, 91, 227, 8, 511, 467, 13, 8, 774, 11, 2162, 79, 133, 36, 16, 3, 9, 3, 60, 5772, 257, 2870, 6, 84, 410, 59, 1299, 91, 8, 269, 1569, 12, 112, 1508, 42, 8, 2675, 5, 366, 3, 88, 808, 1567, 6, 3, 88, 141, 700, 708, 91, 57, 271, 31820, 1427, 1465, 3, 18, 2508, 81, 3068, 221, 7721, 2852, 3, 9, 1886, 24, 3842, 2369, 16, 8, 420, 985, 13, 8, 6552, 3815, 3, 18, 68, 112, 4454, 877, 323, 6321, 182, 1224, 5, 27, 214, 25, 54, 9409, 24, 3, 88, 65, 118, 9193, 269, 6, 250, 3068, 221, 7721, 33, 230, 3, 25764, 8, 2328, 6, 68, 34, 3679, 132, 47, 3, 9, 3126, 147, 45, 135, 966, 38, 1116, 38, 8, 774, 141, 708, 5, 94, 1330, 12, 36, 3, 9, 495, 24, 3, 99, 25, 1190, 446, 49, 7484, 3, 16196, 32, 15, 6, 25, 1190, 3068, 221, 7721, 5, 978, 7475, 1518, 95, 168, 16, 4993, 12, 336, 774, 6, 68, 8, 880, 13, 70, 372, 33, 59, 692, 631, 16, 3211, 5, 328, 130, 3, 60, 9333, 30, 3, 16196, 32, 15, 336, 774, 396, 6, 68, 717, 410, 6591, 16, 3, 18, 16, 70, 166, 4169, 5533, 1031, 13, 1230, 10892, 6, 874, 1508, 435, 8, 3134, 5, 100, 97, 300, 6, 163, 3, 16196, 32, 15, 11, 8643, 4049, 71, 152, 2831, 17, 43, 5799, 16, 8, 337, 1059, 5, 94, 19, 352, 12, 36, 3, 9, 3805, 4393, 21, 135, 12, 1049, 95, 45, 8, 1102, 79, 33, 230, 16, 6161, 6, 68, 79, 14621, 174, 3, 9, 1369, 11, 1224, 5, 27, 278, 31, 17, 217, 34, 1107, 44, 234, 12, 22358, 30, 1856, 6, 713, 5, 531, 79, 237, 320, 3919, 13, 3609, 91, 21, 3, 9, 3314, 581, 8, 9982, 687, 7, 6, 8, 194, 430, 8335, 372, 4551, 7, 115, 13245, 410, 44, 24106, 12750, 336, 1851, 58, 
465, 5, 156, 25, 4393, 12, 143, 6209, 11, 2604, 1766, 6, 38, 3068, 221, 7721, 103, 6, 24, 10762, 72, 1666, 30, 39, 13613, 250, 25, 214, 3, 99, 25, 28325, 258, 25, 33, 16, 600, 3169, 5, 275, 8, 1589, 3431, 7, 43, 982, 44, 8, 223, 38, 168, 3, 18, 70, 163, 1349, 4228, 16, 586, 6407, 365, 1290, 10070, 47, 581, 3815, 555, 596, 180, 13296, 7, 7165, 4463, 16, 8, 262, 10765, 3802, 5, 94, 405, 59, 3005, 221, 168, 581, 46, 22358, 596, 24, 33, 3, 9, 23980, 72, 145, 192, 1766, 3, 9, 467, 48, 774, 5, 94, 19, 614, 12, 253, 136, 1465, 7, 45, 3068, 221, 7721, 31, 7, 1419, 68, 44, 709, 79, 43, 59, 118, 1340, 3, 9, 26, 22722, 44, 8, 2007, 3, 18, 780, 5, 3, 14967, 79, 1369, 1116, 6, 24, 228, 1837, 5, 27, 317, 3455, 195, 33, 92, 16, 21, 3, 9, 182, 3429, 774, 68, 116, 27, 320, 44, 8, 119, 192, 2323, 2017, 756, 135, 6, 7254, 32, 11, 18041, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [101, 33, 1776, 3, 9, 2893, 13, 8, 194, 190, 8, 6552, 3815, 
774, 11, 128, 2323, 44, 8, 2007, 13, 8, 953, 1727, 12, 36, 5074, 378, 300, 227, 492, 3, 9, 1282, 456, 5, 1]}.
+ - 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ - 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ - 10/23/2025 02:54:00 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ - 10/23/2025 02:54:00 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ - 10/23/2025 02:54:00 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ - 10/23/2025 02:54:00 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ - 10/23/2025 02:54:01 - INFO - __main__ - Gradient Accumulation steps = 1
+ - 10/23/2025 02:54:01 - INFO - __main__ - Instantaneous batch size per device = 2
+ - 10/23/2025 02:54:01 - INFO - __main__ - Num Epochs = 10
+ - 10/23/2025 02:54:01 - INFO - __main__ - Num examples = 10
+ - 10/23/2025 02:54:01 - INFO - __main__ - Total optimization steps = 50
+ - 10/23/2025 02:54:01 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ - 10/23/2025 02:54:01 - INFO - __main__ - ***** Running training *****
+ - 10/23/2025 02:54:01 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ - 2%|▏ | 1/50 [00:01<01:18, 1.60s/it]
+ - 8%|β–Š | 4/50 [00:01<00:15, 2.97it/s]Traceback (most recent call last):
+ - Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 316.53 examples/s]
+ - Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 499.84 examples/s]
+ - RuntimeError: DataLoader worker (pid 548) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ - RuntimeError: DataLoader worker (pid(s) 548) exited unexpectedly
+ - RuntimeError: unable to write to file </torch_549_37395250_1>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_549_3866372777_2>: No space left on device (28)
+ - RuntimeError: unable to write to file </torch_550_3470294363_0>: No space left on device (28)
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmpd2il6wdy', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ - subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmp9tvewysj', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+ +++ Appeared in current run:
+ + 0%| | 0/10 [00:00<?, ?it/s]
+ + 10%|β–ˆ | 5/50 [00:01<00:10, 4.35it/s]Traceback (most recent call last):
+ + 10%|β–ˆ | 5/50 [00:02<00:18, 2.50it/s]
+ + 10/24/2025 02:52:48 - INFO - __main__ - Distributed environment: NO
+ + 10/24/2025 02:52:49 - INFO - httpx - HTTP Request: HEAD https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/json/json.py "HTTP/1.1 200 OK"
+ + 10/24/2025 02:52:50 - INFO - __main__ - Sample 6 of the training set: {'input_ids': [907, 3, 12655, 472, 1079, 3009, 48, 471, 28, 3, 9, 903, 12, 4461, 12, 9065, 18, 1201, 18, 1490, 16634, 81, 8, 613, 68, 48, 1295, 47, 12967, 57, 8, 2788, 7, 1476, 5, 37, 332, 10878, 26, 867, 1886, 3, 18, 2007, 13, 8, 6552, 2009, 3, 18, 33, 3945, 12, 3601, 26868, 26842, 9, 1635, 9, 6, 113, 646, 336, 847, 5, 8545, 10715, 348, 808, 8, 166, 372, 21, 1856, 31, 7, 1453, 12, 2733, 3142, 100, 17, 109, 5, 37, 18939, 49, 4477, 43, 751, 163, 728, 48, 774, 11, 6377, 95, 8, 953, 28, 874, 979, 45, 335, 1031, 5, 18263, 5961, 5316, 1288, 10477, 16634, 6, 113, 5821, 5659, 1815, 2754, 44, 3038, 23770, 52, 6983, 1061, 16, 7218, 2237, 472, 1079, 3009, 12, 12580, 3802, 1269, 16, 112, 166, 774, 16, 1567, 5, 216, 65, 92, 10774, 192, 25694, 420, 18, 7, 2407, 13084, 21, 8, 22343, 596, 11, 3150, 3030, 16, 112, 29686, 5, 1], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'labels': [472, 1079, 3009, 7930, 22091, 16634, 19, 150, 1200, 365, 4587, 21, 8, 6393, 221, 15, 907, 2743, 31, 7, 613, 6, 9938, 8288, 65, 2525, 5, 1]}.
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: GET https://huggingface.co/api/models/google-t5/t5-small/tree/main/additional_chat_templates?recursive=false&expand=false "HTTP/1.1 404 Not Found"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json "HTTP/1.1 200 OK"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json "HTTP/1.1 200 OK"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/api/resolve-cache/models/google-t5/t5-small/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json "HTTP/1.1 200 OK"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/custom_generate/generate.py "HTTP/1.1 404 Not Found"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/generation_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/24/2025 02:52:50 - INFO - httpx - HTTP Request: HEAD https://huggingface.co/google-t5/t5-small/resolve/main/tokenizer_config.json "HTTP/1.1 307 Temporary Redirect"
+ + 10/24/2025 02:52:50 - INFO - transformers.configuration_utils - Model config T5Config {
+ + 10/24/2025 02:52:50 - INFO - transformers.configuration_utils - loading configuration file config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/config.json
+ + 10/24/2025 02:52:50 - INFO - transformers.dynamic_module_utils - Could not locate the custom_generate/generate.py inside google-t5/t5-small.
+ + 10/24/2025 02:52:50 - INFO - transformers.generation.configuration_utils - Generate config GenerationConfig {
+ + 10/24/2025 02:52:50 - INFO - transformers.generation.configuration_utils - loading configuration file generation_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/generation_config.json
+ + 10/24/2025 02:52:50 - INFO - transformers.modeling_utils - loading weights file model.safetensors from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/model.safetensors
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file added_tokens.json from cache at None
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file chat_template.jinja from cache at None
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file special_tokens_map.json from cache at None
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file spiece.model from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/spiece.model
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file tokenizer.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer.json
+ + 10/24/2025 02:52:50 - INFO - transformers.tokenization_utils_base - loading file tokenizer_config.json from cache at /mnt/cache/hub/models--google-t5--t5-small/snapshots/df1b051c49625cf57a3d0d8d3863ed4d13564fe4/tokenizer_config.json
+ + 10/24/2025 02:52:51 - INFO - __main__ - Gradient Accumulation steps = 1
+ + 10/24/2025 02:52:51 - INFO - __main__ - Instantaneous batch size per device = 2
+ + 10/24/2025 02:52:51 - INFO - __main__ - Num Epochs = 10
+ + 10/24/2025 02:52:51 - INFO - __main__ - Num examples = 10
+ + 10/24/2025 02:52:51 - INFO - __main__ - Total optimization steps = 50
+ + 10/24/2025 02:52:51 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 2
+ + 10/24/2025 02:52:51 - INFO - __main__ - ***** Running training *****
+ + 10/24/2025 02:52:51 - WARNING - evaluate.loading - Using the latest cached version of the module from /mnt/cache/modules/evaluate_modules/metrics/evaluate-metric--rouge/b01e0accf3bd6dd24839b769a5fda24e14995071570870922c71970b3a6ed886 (last modified on Fri Sep 19 09:54:15 2025) since it couldn't be found locally at evaluate-metric--rouge, or remotely on the Hugging Face Hub.
+ + 2%|▏ | 1/50 [00:01<01:11, 1.46s/it]
+ + 6%|β–Œ | 3/50 [00:01<00:19, 2.38it/s]
+ + Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 1009.12 examples/s]
+ + Running tokenizer on dataset: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 551.86 examples/s]
+ + RuntimeError: DataLoader worker (pid 549) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
+ + RuntimeError: DataLoader worker (pid(s) 549, 550) exited unexpectedly
+ + RuntimeError: unable to write to file </torch_552_1857270501_1>: No space left on device (28)
+ + RuntimeError: unable to write to file </torch_552_408436889_0>: No space left on device (28)
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/object-detection/run_object_detection_no_trainer.py', '--model_name_or_path', 'qubvel-hf/detr-resnet-50-finetuned-10k-cppe5', '--dataset_name', 'qubvel-hf/cppe-5-sample', '--output_dir', '/tmp/tmpiflnm43v', '--max_train_steps=10', '--num_warmup_steps=2', '--learning_rate=1e-6', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch']' returned non-zero exit status 1.
+ + subprocess.CalledProcessError: Command '['/opt/venv/bin/python', '/transformers/examples/pytorch/summarization/run_summarization_no_trainer.py', '--model_name_or_path', 'google-t5/t5-small', '--train_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--validation_file', 'tests/fixtures/tests_samples/xsum/sample.json', '--output_dir', '/tmp/tmpfyb741m2', '--max_train_steps=50', '--num_warmup_steps=8', '--learning_rate=2e-4', '--per_device_train_batch_size=2', '--per_device_eval_batch_size=1', '--checkpointing_steps', 'epoch', '--with_tracking']' returned non-zero exit status 1.
+
+ === Diff for job: single-gpu_run_pipelines_torch_gpu_test_reports ===
+ --- Absent in current run:
+ - 401 Client Error. (Request ID: Root=1-68f9a102-40783be764e683583b44aa6c;d9bcf364-9894-4b08-a2d9-3468b07e346f)
+ - Diff is 628277 characters long. Set self.maxDiff to None to see it.
+ - [[-0.08846623450517654, -0.08407793939113617, -0.0804[234456 chars]458]]
+ - [[-0.08846624940633774, -0.08407793194055557, -0.0804[234462 chars]266]]
+ +++ Appeared in current run:
+ + 401 Client Error. (Request ID: Root=1-68faf39a-499ba51d146cf928390038bc;740e63ce-9efd-4ecd-9bee-f282de6174d0)
+ + Diff is 626383 characters long. Set self.maxDiff to None to see it.
+ + [[-0.08846615999937057, -0.0840778723359108, -0.08044[234432 chars]488]]
+ + [[-0.08846618235111237, -0.08407790958881378, -0.0804[234448 chars]518]]