ashwinpatti committed
Commit 1ebcfbc · verified · 1 Parent(s): 24677e4

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+     "word_embedding_dimension": 1024,
+     "pooling_mode_cls_token": true,
+     "pooling_mode_mean_tokens": false,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false,
+     "pooling_mode_weightedmean_tokens": false,
+     "pooling_mode_lasttoken": false,
+     "include_prompt": true
+ }
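
With only `pooling_mode_cls_token` enabled, the sentence embedding is simply the final hidden state of the `[CLS]` token. A minimal sketch of the equivalent module construction in sentence-transformers, illustrative only, with values mirroring the JSON above:

```python
from sentence_transformers.models import Pooling

# Sketch: the Pooling module described by 1_Pooling/config.json.
# Only CLS-token pooling is enabled, so the 1024-dim sentence embedding is the
# [CLS] hidden state; include_prompt=True keeps prompt tokens visible to pooling.
pooling = Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,  # mean pooling is the library default, so disable it
    include_prompt=True,
)
print(pooling.get_pooling_mode_str())  # -> "cls"
```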
README.md ADDED
@@ -0,0 +1,709 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:56
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: Snowflake/snowflake-arctic-embed-l
+ widget:
+ - source_sentence: How many runs does Andre Russell typically score before losing
+     a wicket based on his performance statistics?
+   sentences:
+   - also significantly less costly as a bowling option.Let us now take a closer look
+     at some of the titans of the game to see if there is more than meets the eye.Thanks
+     for reading Three slips and a gully! Subscribe for free to receive new posts and
+     support my work.SubscribeAndre Russell, since 2019, has struck 2,005 runs at a
+     SR of 180 and average of 27.5. Pretty decent numbers, given his entry points and
+     what is often required of him. These numbers translate to him giving 27 runs off
+     every 15 balls he faces before losing a wicket. More than decent.If we further
+     split these numbers by the bowling kind (right-arm or left-arm pace), we can unearth
+     deltas in this seemingly one-sided matchup to discover his worst performing matchups.
+     Against
+   - The lines and lengths are trying to tell us something
+   - the first-innings totals have been successfully chased down, with each season
+     averaging between ~45-60% of successful chases, the highest being in 2021 where
+     61.7% of the chases resulted in success.While the proportion of matches won chasing
+     have largely stayed the same, the distribution of targets set and chased have
+     varied dramatically between 2024 and the 5 seasons preceding it. Between 2019
+     and 2023, almost 62% of the targets were set at below 180 runs, with ~42% of them
+     being between 150 and 180 runs. Scores between 170-180 are what’s typically considered
+     to be at par for most grounds across India, and the spread of targets have shown
+     just that.The number of targets less than 180 runs and between 150 & 180 runs
+     fell to 44% and 30%
+ - source_sentence: What batting strategies do Virat Kohli employ when facing SLAs
+     and OBs based on his strike rates against them?
+   sentences:
+   - batters by bowling line-length combinations they’re the most conservative against.Thanks
+     for reading Three slips and a gully! This post is public so feel free to share
+     it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean
+     patch right now, he is potentially the only cricketer that will go down as an
+     all-time great because of his brilliance in only one format, the 20 over game.
+     He, like most Indian batters, struggles a bit against SLA, but still fares better
+     than most of his contemporaries. He’s conservative against the straight-on SLAOs,
+     bowled at the stumps from a good length. As the bowler drifts his line away from
+     the stumps, he finds himself to have more room, and his striking ability improves
+     as the ball gets
+   - matchups. Against left-arm medium and right-arm fast, Russell averages 20 RpW
+     striking at less than 160. Focusing on right-arm fast, against which he’s gotten
+     out 19 times for 390 runs at a SR of 157. One might look at this and choose to
+     default to right-arm fast against the giant, but it’s pertinent to look at the
+     lines and lengths he’s fallen victim to, to understand how this match-up can be
+     used against him in the most effective manner.The success % indicates the proportion
+     of balls bowled at a given line-length that yielded a wicket. As you can see,
+     for all line-length combinations for which at least 10 balls were bowled, Russell’s
+     found himself to be out of answers for balls pitched outside the off stump bowled
+     short. For all other
+   - right-arm off-break all too well, etc. Data around batter-specific matchups is
+     now readily available. For example, Rishabh Pant finds it hard to score against
+     right-arm express quicks (averaging 19 striking at 130), Virat Kohli is extremely
+     cautious batting against SLAs and OBs, striking at 110 and 111 against them respectively.Some
+     batters may not dominate every bowling style, but they consistently perform decently
+     and deliver sizeable returns against most types of bowlers. To understand how
+     to effectively challenge these players, we can analyze specific combinations of
+     line and length that bowlers use against them. By delving deeper into these patterns,
+     we can identify the precise deliveries that are most effective in restricting
+     their
+ - source_sentence: How do the striking and dismissal rates of the sampled batters
+     compare between the Powerplay and death overs?
+   sentences:
+   - good length outside off-stump, compared to 149 for deliveries of a similar length
+     but targeting the stumps. Additionally, he loses his wicket at almost the same
+     rate relative to the runs scored in both scenarios. While not an overwhelmingly
+     effective matchup, this is a strategy that teams should consider using against
+     him.Some line-length combination matchups are easier to unearth, with just a little
+     bit of digging. Heinrich Klaasen is one of the greatest T20 bats in the world
+     right now. The man has an unmatched ability against spin, one of the most lethal
+     hitters in the death overs, and fares well against pace bowling of all kinds as
+     well (1,538 runs at a SR of 154 and an average of 29.5 RpW). For the 933 balls
+     against pace that we have
+   - and determine how they can be limited based on the line-length combinations that
+     trouble them the most.Our hypothesis on the importance of precision in line-length
+     combinations is further validated when we evaluate bowlers based on the proportion
+     of effectively defensive deliveries they bowl. The data clearly indicate that
+     a higher percentage of deliveries pitched on a good length outside the off-stump
+     strongly correlates with a bowler’s economy rate. This trend holds consistently
+     across both spin and pace bowlers, with only a few expected outliers.This analysis
+     considers bowlers who have bowled over 1,000 deliveries between 2019 and October
+     2024, with available line-length data. The dataset includes 40 spinners and 74
+     pacers, evaluated
+   - pace up the innings in a 20-over game. For this, I’ll take a sample of 25 batters
+     (the highest run-scorers in the powerplay since 2019) and observe how their striking
+     and dismissal rate changes from the Powerplay (overs 1-6) and death (overs 16-20).Several
+     things jump out the minute you look at this graph. Batters like Finn Allen and
+     Will Jacks are, unsurprisingly, at the top-left corner, striking really quickly
+     in the Powerplay while being dispensable with their wicket. A very high proportion
+     of the 25 batters are concentrated in the area with the average ranging from 25-35
+     and the SR between 120 and 160. Faf bests Kohli in both the average RpD and the
+     SR while Warner is much of an accumulator.KL Rahul would have stood out as an
+     obvious
+ - source_sentence: What is the batter's strike rate and average against leg-break
+     bowling with a minimum of 500 runs scored?
+   sentences:
+   - we will not be considering on-the-stump yorkers for either spinners or pacers.The
+     similarities and differences here are equally intriguing. Good-length deliveries,
+     regardless of the type, offer comparable chances of success for both spin and
+     pace bowlers. Deliveries pitched between good length and short, drifting down
+     the leg side, are the least effective for both styles, although they are nearly
+     twice as successful for pacers compared to spinners. On the other hand, a good-length
+     delivery wide outside off-stump is slightly more effective for spinners and also
+     proves to be less expensive. Conversely, short-pitched deliveries on the stumps
+     are twice as likely to result in a wicket for pacers compared to spinners and
+     are also significantly
+   - pace up the innings in a 20-over game. For this, I’ll take a sample of 25 batters
+     (the highest run-scorers in the powerplay since 2019) and observe how their striking
+     and dismissal rate changes from the Powerplay (overs 1-6) and death (overs 16-20).Several
+     things jump out the minute you look at this graph. Batters like Finn Allen and
+     Will Jacks are, unsurprisingly, at the top-left corner, striking really quickly
+     in the Powerplay while being dispensable with their wicket. A very high proportion
+     of the 25 batters are concentrated in the area with the average ranging from 25-35
+     and the SR between 120 and 160. Faf bests Kohli in both the average RpD and the
+     SR while Warner is much of an accumulator.KL Rahul would have stood out as an
+     obvious
+   - as the ball gets wider or fuller.On the other hand, his numbers against leg-break
+     bowlers paint a prettier picture. He strikes at 150 at an average of 46 RpW. For
+     all batters with a minimum of 500 runs against leg-break bowling, only Nicolas
+     Pooran has scored runs more quickly and at a higher average than him.While the
+     ball lined up on the stumps pitched at a good length from a SLAO bowler sets his
+     striking ability back, he’s more proactive against a similarly pitched delivery
+     coming from a leg-break bowler (52 avg, 148 SR). It will be cruel to call it a
+     weakness, but he is relatively tamer against balls that are pitched outside the
+     off-stump on a good length by a leg-spinnerHe strikes at 121 against balls pitched
+     on a good length outside
+ - source_sentence: How has the approach to run chases in the IPL changed from 2019
+     to 2024?
+   sentences:
+   - 'restricting their scoring, taking their wickets more efficiently, or achieving
+     both objectives simultaneously. The success percentage of the most commonly used
+     line-length combinations in T20 matches across various phases of an innings is
+     shown above. This percentage indicates how often each line-length combination
+     results in a wicket. Unsurprisingly, the yorker on the stumps has the highest
+     success rate, almost twice that of the short ball drifting down the leg side,
+     at 2nd. However, simply reviewing these combinations doesn’t provide much insight.
+     It’s more useful to plot these success percentages against the cost of each line-length
+     combination for both spin and pace bowlers.Side note: For any upcoming analysis,
+     we will not be'
+   - Three slips and a gullySubscribeSign inShare this postThree slips and a gullyWhat
+     makes a successful run chase in the IPLCopy linkFacebookEmailNotesMoreWhat makes
+     a successful run chase in the IPLA look at the way teams have been chasing targets
+     in the IPL since 2019, and how 2024 was just a tad bit different in the way teams
+     approach run chases.Divyansh PeswaniJan 09, 20254Share this postThree slips and
+     a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMore1ShareT20
+     batting has two sides to it; the calculations of putting up a first-innings total
+     that could be considered above par for the given conditions, and the complexities
+     of structuring the second innings chase across the innings to bag a win safely
+   - batters by bowling line-length combinations they’re the most conservative against.Thanks
+     for reading Three slips and a gully! This post is public so feel free to share
+     it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean
+     patch right now, he is potentially the only cricketer that will go down as an
+     all-time great because of his brilliance in only one format, the 20 over game.
+     He, like most Indian batters, struggles a bit against SLA, but still fares better
+     than most of his contemporaries. He’s conservative against the straight-on SLAOs,
+     bowled at the stumps from a good length. As the bowler drifts his line away from
+     the stumps, he finds himself to have more room, and his striking ability improves
+     as the ball gets
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.6785714285714286
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.8571428571428571
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 1.0
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.6785714285714286
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.2857142857142857
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.20000000000000004
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.10000000000000002
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.6785714285714286
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.8571428571428571
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 1.0
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.846521481990734
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.7958333333333333
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.7958333333333333
+       name: Cosine Map@100
+     - type: cosine_accuracy@1
+       value: 0.4807692307692308
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.75
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.8461538461538461
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 1.0
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.4807692307692308
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.25
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.1692307692307692
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.09999999999999996
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.4807692307692308
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.75
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.8461538461538461
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 1.0
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.7193365478907754
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.6310515873015873
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.6310515873015875
+       name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+   (2): Normalize()
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0")
+ # Run inference
+ sentences = [
+     'How has the approach to run chases in the IPL changed from 2019 to 2024?',
+     'Three slips and a gullySubscribeSign inShare this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMoreWhat makes a successful run chase in the IPLA look at the way teams have been chasing targets in the IPL since 2019, and how 2024 was just a tad bit different in the way teams approach run chases.Divyansh PeswaniJan 09, 20254Share this postThree slips and a gullyWhat makes a successful run chase in the IPLCopy linkFacebookEmailNotesMore1ShareT20 batting has two sides to it; the calculations of putting up a first-innings total that could be considered above par for the given conditions, and the complexities of structuring the second innings chase across the innings to bag a win safely',
+     'batters by bowling line-length combinations they’re the most conservative against.Thanks for reading Three slips and a gully! This post is public so feel free to share it.ShareSuryakumar Yadav is an absolute beast in T20 cricket. Although in a lean patch right now, he is potentially the only cricketer that will go down as an all-time great because of his brilliance in only one format, the 20 over game. He, like most Indian batters, struggles a bit against SLA, but still fares better than most of his contemporaries. He’s conservative against the straight-on SLAOs, bowled at the stumps from a good length. As the bowler drifts his line away from the stumps, he finds himself to have more room, and his striking ability improves as the ball gets',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
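
For retrieval, the checkpoint also ships a stored query prompt (see `config_sentence_transformers.json` further down in this commit). A hedged sketch of query/passage encoding with that prompt; the query and passages here are made-up examples:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0")

# Hypothetical query and passages, purely for illustration.
query = "How do teams approach run chases in the IPL?"
passages = [
    "A look at the way teams have been chasing targets in the IPL since 2019.",
    "Suryakumar Yadav is an absolute beast in T20 cricket.",
]

# prompt_name="query" prepends the stored prompt
# "Represent this sentence for searching relevant passages: " to the query only.
query_emb = model.encode([query], prompt_name="query")
passage_emb = model.encode(passages)

print(model.similarity(query_emb, passage_emb))  # shape [1, 2]; higher = more relevant
```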
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.6786     |
+ | cosine_accuracy@3   | 0.8571     |
+ | cosine_accuracy@5   | 1.0        |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.6786     |
+ | cosine_precision@3  | 0.2857     |
+ | cosine_precision@5  | 0.2        |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.6786     |
+ | cosine_recall@3     | 0.8571     |
+ | cosine_recall@5     | 1.0        |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.8465** |
+ | cosine_mrr@10       | 0.7958     |
+ | cosine_map@100      | 0.7958     |
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.4808     |
+ | cosine_accuracy@3   | 0.75       |
+ | cosine_accuracy@5   | 0.8462     |
+ | cosine_accuracy@10  | 1.0        |
+ | cosine_precision@1  | 0.4808     |
+ | cosine_precision@3  | 0.25       |
+ | cosine_precision@5  | 0.1692     |
+ | cosine_precision@10 | 0.1        |
+ | cosine_recall@1     | 0.4808     |
+ | cosine_recall@3     | 0.75       |
+ | cosine_recall@5     | 0.8462     |
+ | cosine_recall@10    | 1.0        |
+ | **cosine_ndcg@10**  | **0.7193** |
+ | cosine_mrr@10       | 0.6311     |
+ | cosine_map@100      | 0.6311     |
+
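
Both tables are produced by the library's `InformationRetrievalEvaluator`. A minimal sketch of how such an evaluation is wired up; the queries, corpus, and relevance mapping below are placeholder values, not the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0")

# Placeholder data: query id -> text, doc id -> text, query id -> relevant doc ids.
queries = {"q1": "How has the approach to run chases in the IPL changed from 2019 to 2024?"}
corpus = {"d1": "A look at the way teams have been chasing targets in the IPL since 2019."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
print(evaluator(model))  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```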
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 56 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 56 samples:
+   |         | sentence_0                                                                         | sentence_1                                                                           |
+   |:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
+   | type    | string                                                                             | string                                                                               |
+   | details | <ul><li>min: 10 tokens</li><li>mean: 18.35 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 159.24 tokens</li><li>max: 187 tokens</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 |
+   |:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+   | <code>What is important in cricket matchups?</code> | <code>Three slips and a gullySubscribeSign inShare this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMoreThe lines and lengths are trying to tell us somethingTaking a closer at line-length combinations used against different batters to see if there's more than what meets the eyeDivyansh PeswaniFeb 02, 202510Share this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMore2ShareMatchups across all forms of cricket are predominant. They take different forms, and are incorporated within gameday strategy differently, but the thought process behind a bowling line-up is to bowl deliveries least suitable to a batter’s playing style.</code> |
+   | <code>Who is Divyansh Peswani?</code> | <code>Three slips and a gullySubscribeSign inShare this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMoreThe lines and lengths are trying to tell us somethingTaking a closer at line-length combinations used against different batters to see if there's more than what meets the eyeDivyansh PeswaniFeb 02, 202510Share this postThree slips and a gullyThe lines and lengths are trying to tell us somethingCopy linkFacebookEmailNotesMore2ShareMatchups across all forms of cricket are predominant. They take different forms, and are incorporated within gameday strategy differently, but the thought process behind a bowling line-up is to bowl deliveries least suitable to a batter’s playing style.</code> |
+   | <code>Can you explain how OBs affect players like Virat Kohli in cricket?</code> | <code>right-arm off-break all too well, etc. Data around batter-specific matchups is now readily available. For example, Rishabh Pant finds it hard to score against right-arm express quicks (averaging 19 striking at 130), Virat Kohli is extremely cautious batting against SLAs and OBs, striking at 110 and 111 against them respectively.Some batters may not dominate every bowling style, but they consistently perform decently and deliver sizeable returns against most types of bowlers. To understand how to effectively challenge these players, we can analyze specific combinations of line and length that bowlers use against them. By delving deeper into these patterns, we can identify the precise deliveries that are most effective in restricting their</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           512,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
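
A sketch of how this loss configuration maps onto sentence-transformers objects, assuming the base model is loaded as shown; the dimensions and uniform weights mirror the JSON above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Inner loss: in-batch negatives over (sentence_0, sentence_1) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: apply the inner loss at several truncated embedding sizes so the
# embeddings stay useful when shortened; all truncation levels weighted equally.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```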
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `num_train_epochs`: 10
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 10
+ - `per_device_eval_batch_size`: 10
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
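
For reference, a hedged sketch of how the non-default hyperparameters above would be expressed with the sentence-transformers trainer API; the output directory, datasets, and loss variable are placeholders:

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Mirrors only the non-default hyperparameters listed above; everything else keeps
# the library defaults. "output/" is a placeholder path.
args = SentenceTransformerTrainingArguments(
    output_dir="output/",
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=10,
    multi_dataset_batch_sampler="round_robin",
)

# trainer = SentenceTransformerTrainer(model=model, args=args,
#                                      train_dataset=train_dataset, loss=loss)
# trainer.train()
```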
+ ### Training Logs
+ | Epoch  | Step | cosine_ndcg@10 |
+ |:------:|:----:|:--------------:|
+ | 1.0    | 6    | 0.7848         |
+ | 2.0    | 12   | 0.8365         |
+ | 3.0    | 18   | 0.8539         |
+ | 4.0    | 24   | 0.8539         |
+ | 5.0    | 30   | 0.8680         |
+ | 6.0    | 36   | 0.8655         |
+ | 7.0    | 42   | 0.8727         |
+ | 8.0    | 48   | 0.8727         |
+ | 8.3333 | 50   | 0.8727         |
+ | 9.0    | 54   | 0.8727         |
+ | 10.0   | 60   | 0.8727         |
+ | 1.0    | 6    | 0.8738         |
+ | 2.0    | 12   | 0.8550         |
+ | 3.0    | 18   | 0.8550         |
+ | 4.0    | 24   | 0.8440         |
+ | 5.0    | 30   | 0.8465         |
+ | 6.0    | 36   | 0.8465         |
+ | 7.0    | 42   | 0.8465         |
+ | 8.0    | 48   | 0.8465         |
+ | 8.3333 | 50   | 0.8465         |
+ | 9.0    | 54   | 0.8465         |
+ | 10.0   | 60   | 0.8465         |
+ | 1.0    | 4    | 0.7031         |
+ | 2.0    | 8    | 0.7123         |
+ | 3.0    | 12   | 0.7160         |
+ | 4.0    | 16   | 0.7133         |
+ | 5.0    | 20   | 0.7157         |
+ | 6.0    | 24   | 0.7189         |
+ | 7.0    | 28   | 0.7193         |
+ | 8.0    | 32   | 0.7193         |
+ | 9.0    | 36   | 0.7193         |
+ | 10.0   | 40   | 0.7193         |
+
+
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.3.1
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dca0239390aa8dd8835061b9467f330400299afc74636080658fa09e47360fb
+ size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
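
modules.json spells out the inference pipeline: a BERT encoder, CLS pooling (1_Pooling), then L2 normalization (2_Normalize). A rough transformers-only equivalent of that pipeline, sketched here since the README's "Direct Usage (Transformers)" section is left empty; the query prefix comes from config_sentence_transformers.json and the example texts are invented:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "ashwinpatti/finetuned_arctic_kg_ft-legal-ft-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.eval()

query_prefix = "Represent this sentence for searching relevant passages: "
texts = [
    query_prefix + "How has the approach to run chases in the IPL changed?",
    "A look at the way teams have been chasing targets in the IPL since 2019.",
]

with torch.no_grad():
    batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    hidden = model(**batch).last_hidden_state
    cls_embeddings = hidden[:, 0]                    # step 1: CLS pooling (1_Pooling)
    embeddings = F.normalize(cls_embeddings, dim=1)  # step 2: L2 normalize (2_Normalize)

print(embeddings @ embeddings.T)  # cosine similarities, since vectors are unit length
```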
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff