---
license: apache-2.0
---

[Training detail](https://github.com/DylanJoo/SCOPE/blob/main/slurm/bert/train.bert.sh)

- Training hardware: 2 AMD MI200
- Batch size: 64
- Samples per query: 512 (32 × 2 × 8)
- Data: msmarco-passage 491K
- Learning steps: 25K with 2.5K warm-up
- Learning rate: 1e-5
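The hyperparameters above can be collected into a small sanity-check sketch. The authoritative configuration is the linked `train.bert.sh` script; the dict keys below are illustrative assumptions, used only to show that the numbers are mutually consistent (the 32 × 2 × 8 breakdown and the warm-up fraction).

```python
# Illustrative summary of the training hyperparameters listed above.
# Key names are assumptions; the real settings live in train.bert.sh.
config = {
    "batch_size": 64,
    "samples_per_query": 32 * 2 * 8,  # = 512, as stated in the list
    "dataset_size": 491_000,          # msmarco-passage 491K
    "learning_steps": 25_000,
    "warmup_steps": 2_500,
    "learning_rate": 1e-5,
}

# Warm-up covers the first 10% of the learning steps.
warmup_ratio = config["warmup_steps"] / config["learning_steps"]

print(config["samples_per_query"])  # 512
print(warmup_ratio)                 # 0.1
```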