- Position of Uncertainty: A Cross-Linguistic Study of Positional Bias in Large Language Models (arXiv:2505.16134, published May 22, 2025)
- Risk-Averse Reinforcement Learning with Itakura-Saito Loss (arXiv:2505.16925, published May 22, 2025)
- Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models (arXiv:2502.15799, published Feb 18, 2025)
- GIFT-SW: Gaussian noise Injected Fine-Tuning of Salient Weights for LLMs (arXiv:2408.15300, published Aug 27, 2024)