ronniross committed
Commit 2e9d95f · verified · 1 Parent(s): 8b8153a

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -232,7 +232,7 @@ The full energy footprint of a deployed, resource-equilibrated AI model includes
 
 The initial Superior Model Training, the massive training run of the largest possible "Teacher" model, often conducted in highly secure, isolated (air-gapped) data centers.
 
-The "superior model" is then used to generate a vast amount of high-quality synthetic data—the "content"—which serves as the training dataset for the smaller model.
+The "superior model" is then used to generate a vast amount of high-quality synthetic data, the "content", which serves as the training dataset for the smaller model.
 
 This is known as inference at scale on the teacher model. While inference is less power-intensive than training, performing it for billions of data points to create a distillation dataset adds substantial, often unquantified, operational energy usage.
 
@@ -248,7 +248,7 @@ Failures in training and auxiliary system processes, caused by issues like packa
 
 ### 5.8 Technical benchmarks
 
-Many niche models possess unique value but are discarded because they fail to top general technical benchmarks. Researchers often evaluate dozens of models rapidly; if a model does not impress immediately—sometimes due merely to faulty inference code rather than the model itself—it is permanently set aside. This premature abandonment represents a significant sunk cost, rendering the substantial water consumption and carbon emissions expended during training completely wasted.
+Many niche models possess unique value but are discarded because they fail to top general technical benchmarks. Researchers often evaluate dozens of models rapidly; if a model does not impress immediately, sometimes due merely to faulty inference code rather than the model itself, it is permanently set aside. This premature abandonment represents a significant sunk cost, rendering the substantial water consumption and carbon emissions expended during training completely wasted.
 
 This turns the entire training process into an environmental tragedy, wasting the vast amounts of energy and water used to create a tool that no one will ever use.
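
The distillation pipeline the changed lines describe (a teacher model run at scale to produce a student's training set) can be summarized in a short sketch. This is a minimal illustration under assumed names, not the repository's actual code: the `Teacher` class and its `generate` method are hypothetical stand-ins for a real large-model inference API.

```python
# Minimal, self-contained sketch of the distillation data pipeline the
# commit text describes: a frozen "teacher" model is queried once per
# prompt, and the resulting (prompt, completion) pairs become the
# smaller "student" model's training set. Teacher is a hypothetical
# stand-in, not a real library API.

class Teacher:
    """Placeholder for a large, frozen teacher model."""

    def generate(self, prompt: str, max_tokens: int = 64) -> str:
        # A real system would run an expensive forward pass here; the
        # echo keeps the sketch runnable without any model weights.
        return f"synthetic completion for: {prompt}"[:max_tokens]


def build_distillation_dataset(teacher: Teacher, prompts: list[str]) -> list[dict]:
    # One teacher inference call per prompt. Repeated for billions of
    # prompts, this loop is the "inference at scale" step whose energy
    # cost the README says often goes unquantified.
    return [{"prompt": p, "completion": teacher.generate(p)} for p in prompts]


if __name__ == "__main__":
    dataset = build_distillation_dataset(Teacher(), ["What is model distillation?"])
    print(dataset[0])
```

Run over billions of prompts, each `generate` call contributes to the operational energy cost the README flags: at an assumed, purely illustrative 0.3 Wh per generation, one billion teacher calls would amount to roughly 300 MWh before the student's own training even begins.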