ashwinpatti committed
Commit f01285f · verified · 1 Parent(s): 9fb8992

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
README.md ADDED
@@ -0,0 +1,632 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:56
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: How many runs does Andre Russell typically score before losing
    a wicket based on his performance statistics?
  sentences:
  - also significantly less costly as a bowling option.Let us now take a closer look
    at some of the titans of the game to see if there is more than meets the eye.Thanks
    for reading Three slips and a gully! Subscribe for free to receive new posts and
    support my work.SubscribeAndre Russell, since 2019, has struck 2,005 runs at a
    SR of 180 and average of 27.5. Pretty decent numbers, given his entry points and
    what is often required of him. These numbers translate to him giving 27 runs off
    every 15 balls he faces before losing a wicket. More than decent.If we further
    split these numbers by the bowling kind (right-arm or left-arm pace), we can unearth
    deltas in this seemingly one-sided matchup to discover his worst performing matchups.
    Against
  - The lines and lengths are trying to tell us something
  - the first-innings totals have been successfully chased down, with each season
    averaging between ~45-60% of successful chases, the highest being in 2021 where
    61.7% of the chases resulted in success.While the proportion of matches won chasing
    have largely stayed the same, the distribution of targets set and chased have
    varied dramatically between 2024 and the 5 seasons preceding it. Between 2019
    and 2023, almost 62% of the targets were set at below 180 runs, with ~42% of them
    being between 150 and 180 runs. Scores between 170-180 are what’s typically considered
    to be at par for most grounds across India, and the spread of targets have shown
    just that.The number of targets less than 180 runs and between 150 & 180 runs
    fell to 44% and 30%
- source_sentence: What batting strategies do Virat Kohli employ when facing SLAs
    and OBs based on his strike rates against them?
  sentences:
  - batters by bowling line-length combinations they’re the most conservative against.Thanks
    for reading Three slips and a gully! This post is public so feel free to share
    it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean
    patch right now, he is potentially the only cricketer that will go down as an
    all-time great because of his brilliance in only one format, the 20 over game.
    He, like most Indian batters, struggles a bit against SLA, but still fares better
    than most of his contemporaries. He’s conservative against the straight-on SLAOs,
    bowled at the stumps from a good length. As the bowler drifts his line away from
    the stumps, he finds himself to have more room, and his striking ability improves
    as the ball gets
  - matchups. Against left-arm medium and right-arm fast, Russell averages 20 RpW
    striking at less than 160. Focusing on right-arm fast, against which he’s gotten
    out 19 times for 390 runs at a SR of 157. One might look at this and choose to
    default to right-arm fast against the giant, but it’s pertinent to look at the
    lines and lengths he’s fallen victim to, to understand how this match-up can be
    used against him in the most effective manner.The success % indicates the proportion
    of balls bowled at a given line-length that yielded a wicket. As you can see,
    for all line-length combinations for which at least 10 balls were bowled, Russell’s
    found himself to be out of answers for balls pitched outside the off stump bowled
    short. For all other
  - right-arm off-break all too well, etc. Data around batter-specific matchups is
    now readily available. For example, Rishabh Pant finds it hard to score against
    right-arm express quicks (averaging 19 striking at 130), Virat Kohli is extremely
    cautious batting against SLAs and OBs, striking at 110 and 111 against them respectively.Some
    batters may not dominate every bowling style, but they consistently perform decently
    and deliver sizeable returns against most types of bowlers. To understand how
    to effectively challenge these players, we can analyze specific combinations of
    line and length that bowlers use against them. By delving deeper into these patterns,
    we can identify the precise deliveries that are most effective in restricting
    their
- source_sentence: How do the striking and dismissal rates of the sampled batters
    compare between the Powerplay and death overs?
  sentences:
  - good length outside off-stump, compared to 149 for deliveries of a similar length
    but targeting the stumps. Additionally, he loses his wicket at almost the same
    rate relative to the runs scored in both scenarios. While not an overwhelmingly
    effective matchup, this is a strategy that teams should consider using against
    him.Some line-length combination matchups are easier to unearth, with just a little
    bit of digging. Heinrich Klaasen is one of the greatest T20 bats in the world
    right now. The man has an unmatched ability against spin, one of the most lethal
    hitters in the death overs, and fares well against pace bowling of all kinds as
    well (1,538 runs at a SR of 154 and an average of 29.5 RpW). For the 933 balls
    against pace that we have
  - and determine how they can be limited based on the line-length combinations that
    trouble them the most.Our hypothesis on the importance of precision in line-length
    combinations is further validated when we evaluate bowlers based on the proportion
    of effectively defensive deliveries they bowl. The data clearly indicate that
    a higher percentage of deliveries pitched on a good length outside the off-stump
    strongly correlates with a bowler’s economy rate. This trend holds consistently
    across both spin and pace bowlers, with only a few expected outliers.This analysis
    considers bowlers who have bowled over 1,000 deliveries between 2019 and October
    2024, with available line-length data. The dataset includes 40 spinners and 74
    pacers, evaluated
  - pace up the innings in a 20-over game. For this, I’ll take a sample of 25 batters
    (the highest run-scorers in the powerplay since 2019) and observe how their striking
    and dismissal rate changes from the Powerplay (overs 1-6) and death (overs 16-20).Several
    things jump out the minute you look at this graph. Batters like Finn Allen and
    Will Jacks are, unsurprisingly, at the top-left corner, striking really quickly
    in the Powerplay while being dispensable with their wicket. A very high proportion
    of the 25 batters are concentrated in the area with the average ranging from 25-35
    and the SR between 120 and 160. Faf bests Kohli in both the average RpD and the
    SR while Warner is much of an accumulator.KL Rahul would have stood out as an
    obvious
- source_sentence: What is the batter's strike rate and average against leg-break
    bowling with a minimum of 500 runs scored?
  sentences:
  - we will not be considering on-the-stump yorkers for either spinners or pacers.The
    similarities and differences here are equally intriguing. Good-length deliveries,
    regardless of the type, offer comparable chances of success for both spin and
    pace bowlers. Deliveries pitched between good length and short, drifting down
    the leg side, are the least effective for both styles, although they are nearly
    twice as successful for pacers compared to spinners. On the other hand, a good-length
    delivery wide outside off-stump is slightly more effective for spinners and also
    proves to be less expensive. Conversely, short-pitched deliveries on the stumps
    are twice as likely to result in a wicket for pacers compared to spinners and
    are also significantly
  - pace up the innings in a 20-over game. For this, I’ll take a sample of 25 batters
    (the highest run-scorers in the powerplay since 2019) and observe how their striking
    and dismissal rate changes from the Powerplay (overs 1-6) and death (overs 16-20).Several
    things jump out the minute you look at this graph. Batters like Finn Allen and
    Will Jacks are, unsurprisingly, at the top-left corner, striking really quickly
    in the Powerplay while being dispensable with their wicket. A very high proportion
    of the 25 batters are concentrated in the area with the average ranging from 25-35
    and the SR between 120 and 160. Faf bests Kohli in both the average RpD and the
    SR while Warner is much of an accumulator.KL Rahul would have stood out as an
    obvious
  - as the ball gets wider or fuller.On the other hand, his numbers against leg-break
    bowlers paint a prettier picture. He strikes at 150 at an average of 46 RpW. For
    all batters with a minimum of 500 runs against leg-break bowling, only Nicolas
    Pooran has scored runs more quickly and at a higher average than him.While the
    ball lined up on the stumps pitched at a good length from a SLAO bowler sets his
    striking ability back, he’s more proactive against a similarly pitched delivery
    coming from a leg-break bowler (52 avg, 148 SR). It will be cruel to call it a
    weakness, but he is relatively tamer against balls that are pitched outside the
    off-stump on a good length by a leg-spinnerHe strikes at 121 against balls pitched
    on a good length outside
- source_sentence: How has the approach to run chases in the IPL changed from 2019
    to 2024?
  sentences:
  - 'restricting their scoring, taking their wickets more efficiently, or achieving
    both objectives simultaneously. The success percentage of the most commonly used
    line-length combinations in T20 matches across various phases of an innings is
    shown above. This percentage indicates how often each line-length combination
    results in a wicket. Unsurprisingly, the yorker on the stumps has the highest
    success rate, almost twice that of the short ball drifting down the leg side,
    at 2nd. However, simply reviewing these combinations doesn’t provide much insight.
    It’s more useful to plot these success percentages against the cost of each line-length
    combination for both spin and pace bowlers.Side note: For any upcoming analysis,
    we will not be'
  - Three slips and a gullySubscribeSign inShare this postThree slips and a gullyWhat
    makes a successful run chase in the IPLCopy linkFacebookEmailNotesMoreWhat makes
    a successful run chase in the IPLA look at the way teams have been chasing targets
    in the IPL since 2019, and how 2024 was just a tad bit different in the way teams
    approach run chases.Divyansh PeswaniJan 09, 20254Share this postThree slips and
    a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMore1ShareT20
    batting has two sides to it; the calculations of putting up a first-innings total
    that could be considered above par for the given conditions, and the complexities
    of structuring the second innings chase across the innings to bag a win safely
  - batters by bowling line-length combinations they’re the most conservative against.Thanks
    for reading Three slips and a gully! This post is public so feel free to share
    it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean
    patch right now, he is potentially the only cricketer that will go down as an
    all-time great because of his brilliance in only one format, the 20 over game.
    He, like most Indian batters, struggles a bit against SLA, but still fares better
    than most of his contemporaries. He’s conservative against the straight-on SLAOs,
    bowled at the stumps from a good length. As the bowler drifts his line away from
    the stumps, he finds himself to have more room, and his striking ability improves
    as the ball gets
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.6785714285714286
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.8571428571428571
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.6785714285714286
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.2857142857142857
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.6785714285714286
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.8571428571428571
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.846521481990734
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.7958333333333333
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.7958333333333333
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
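
For illustration only (not part of the released card), the three modules above correspond roughly to the following plain `transformers` sketch: run the BERT backbone, take the `[CLS]` token embedding (CLS pooling), then L2-normalize. The repo id is taken from the Usage section below.

```python
# Hedged sketch of what the Transformer -> Pooling -> Normalize pipeline does.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo_id = "ashwinpatti/finetuned_arctic_naive_ft-legal-ft-v0"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

batch = tokenizer(["an example passage"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)

embedding = output.last_hidden_state[:, 0]      # (1) CLS-token pooling
embedding = F.normalize(embedding, p=2, dim=1)  # (2) Normalize module
print(embedding.shape)  # torch.Size([1, 1024])
```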

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ashwinpatti/finetuned_arctic_naive_ft-legal-ft-v0")
# Run inference
sentences = [
    'How has the approach to run chases in the IPL changed from 2019 to 2024?',
    'Three slips and a gullySubscribeSign inShare this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMoreWhat makes a successful run chase in the IPLA look at the way teams have been chasing targets in the IPL since 2019, and how 2024 was just a tad bit different in the way teams approach run chases.Divyansh PeswaniJan 09, 20254Share this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMore1ShareT20 batting has two sides to it; the calculations of putting up a first-innings total that could be considered above par for the given conditions, and the complexities of structuring the second innings chase across the innings to bag a win safely',
    'batters by bowling line-length combinations they’re the most conservative against.Thanks for reading Three slips and a gully! This post is public so feel free to share it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean patch right now, he is potentially the only cricketer that will go down as an all-time great because of his brilliance in only one format, the 20 over game. He, like most Indian batters, struggles a bit against SLA, but still fares better than most of his contemporaries. He’s conservative against the straight-on SLAOs, bowled at the stumps from a good length. As the bowler drifts his line away from the stumps, he finds himself to have more room, and his striking ability improves as the ball gets',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
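
One note for retrieval use: the `config_sentence_transformers.json` added in this same commit defines a `query` prompt ("Represent this sentence for searching relevant passages: "). A minimal, illustrative sketch of applying it when embedding queries versus passages (not part of the original card):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ashwinpatti/finetuned_arctic_naive_ft-legal-ft-v0")

queries = ["How has the approach to run chases in the IPL changed from 2019 to 2024?"]
passages = ["T20 batting has two sides to it; ..."]  # passage truncated for brevity

# prompt_name="query" prepends the query prompt defined in config_sentence_transformers.json
query_embeddings = model.encode(queries, prompt_name="query")
passage_embeddings = model.encode(passages)

scores = model.similarity(query_embeddings, passage_embeddings)
print(scores)
```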

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.6786     |
| cosine_accuracy@3   | 0.8571     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.6786     |
| cosine_precision@3  | 0.2857     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.6786     |
| cosine_recall@3     | 0.8571     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.8465** |
| cosine_mrr@10       | 0.7958     |
| cosine_map@100      | 0.7958     |
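
Reproducing these numbers requires the held-out query/passage pairs, which are not shipped with this commit. A hedged sketch of how the evaluator is typically invoked, with placeholder data standing in for the real evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ashwinpatti/finetuned_arctic_naive_ft-legal-ft-v0")

# Placeholder data: the actual evaluation queries and corpus are not included here.
queries = {"q1": "How has the approach to run chases in the IPL changed from 2019 to 2024?"}
corpus = {"d1": "T20 batting has two sides to it; the calculations of putting up a first-innings total ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="ir-eval")
results = evaluator(model)
print(results)  # includes keys such as ir-eval_cosine_ndcg@10
```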

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 56 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 56 samples:
  |         | sentence_0                                                                          | sentence_1                                                                            |
  |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                                |
  | details | <ul><li>min: 12 tokens</li><li>mean: 22.36 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 149.39 tokens</li><li>max: 187 tokens</li></ul> |
* Samples:
  | sentence_0                                                                                                               | sentence_1                                                                                                                                                                                                                           |
  |:-------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What do the lines and lengths represent in the context provided?</code>                                           | <code>The lines and lengths are trying to tell us something</code>                                                                                                                                                                  |
  | <code>How might the lines and lengths convey a message or meaning?</code>                                               | <code>The lines and lengths are trying to tell us something</code>                                                                                                                                                                  |
  | <code>What is the main focus of the analysis regarding line-length combinations used against different batters?</code>  | <code>Three slips and a gullySubscribeSign inShare this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMoreThe lines and lengths are trying to tell us somethingTaking a closer at line-length combinations used against different batters to see if there's more than what meets the eyeDivyansh PeswaniFeb 02, 202510Share this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMore2ShareMatchups across all forms of cricket are predominant. They take different forms, and are incorporated within gameday strategy differently, but the thought process behind a bowling line-up is to bowl deliveries least suitable to a batter’s playing style.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
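
Because the model was trained with MatryoshkaLoss over the dimensions listed above, embeddings can be truncated to any of those sizes with only a modest expected quality drop. A minimal, illustrative sketch using the standard Sentence Transformers `truncate_dim` argument (not something specific to this card):

```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
# (any of the trained Matryoshka dims — 768, 512, 256, 128, 64 — should work).
model = SentenceTransformer(
    "ashwinpatti/finetuned_arctic_naive_ft-legal-ft-v0",
    truncate_dim=256,
)

embeddings = model.encode(
    ["How has the approach to run chases in the IPL changed from 2019 to 2024?"]
)
print(embeddings.shape)  # (1, 256)
```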

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>
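
For readers who want to approximate this run, here is a hedged sketch of the training loop implied by the hyperparameters above. The dataset construction is schematic: only the column names and the dataset size (56 pairs) are known from this card, so a single placeholder pair stands in for the real data.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Schematic stand-in for the 56 (sentence_0, sentence_1) training pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How has the approach to run chases in the IPL changed from 2019 to 2024?"],
    "sentence_1": ["T20 batting has two sides to it; ..."],
})

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="finetuned_arctic_naive_ft",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    # The card also used eval_strategy="steps" with an IR evaluator; omitted here for brevity.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```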

### Training Logs
| Epoch  | Step | cosine_ndcg@10 |
|:------:|:----:|:--------------:|
| 1.0    | 6    | 0.7848         |
| 2.0    | 12   | 0.8365         |
| 3.0    | 18   | 0.8539         |
| 4.0    | 24   | 0.8539         |
| 5.0    | 30   | 0.8680         |
| 6.0    | 36   | 0.8655         |
| 7.0    | 42   | 0.8727         |
| 8.0    | 48   | 0.8727         |
| 8.3333 | 50   | 0.8727         |
| 9.0    | 54   | 0.8727         |
| 10.0   | 60   | 0.8727         |
| 1.0    | 6    | 0.8738         |
| 2.0    | 12   | 0.8550         |
| 3.0    | 18   | 0.8550         |
| 4.0    | 24   | 0.8440         |
| 5.0    | 30   | 0.8465         |
| 6.0    | 36   | 0.8465         |
| 7.0    | 42   | 0.8465         |
| 8.0    | 48   | 0.8465         |
| 8.3333 | 50   | 0.8465         |
| 9.0    | 54   | 0.8465         |
| 10.0   | 60   | 0.8465         |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.3",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0e442001256e9cdcb04ebad99398f51eadb0a3922f0c7decce28fc9ef4faaf17
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff