TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models
Abstract
TabTune is a unified library that standardizes the workflow for tabular foundation models, supporting adaptation strategies from zero-shot inference to parameter-efficient fine-tuning alongside evaluation of performance, calibration, and fairness.
Tabular foundation models represent a growing paradigm in structured data learning, extending the benefits of large-scale pretraining to tabular domains. However, their adoption remains limited due to heterogeneous preprocessing pipelines, fragmented APIs, inconsistent fine-tuning procedures, and the absence of standardized evaluation for deployment-oriented metrics such as calibration and fairness. We present TabTune, a unified library that standardizes the complete workflow for tabular foundation models through a single interface. TabTune provides consistent access to seven state-of-the-art models supporting multiple adaptation strategies, including zero-shot inference, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). The framework automates model-aware preprocessing, manages architectural heterogeneity internally, and integrates evaluation modules for performance, calibration, and fairness. Designed for extensibility and reproducibility, TabTune enables consistent benchmarking of adaptation strategies for tabular foundation models. The library is open source and available at https://github.com/Lexsi-Labs/TabTune.
Community
TabTune is a powerful and flexible Python library designed to simplify the training and fine-tuning of modern foundation models on tabular data. It provides a high-level, scikit-learn-compatible API that abstracts away the complexities of data preprocessing, model-specific training loops, and benchmarking, letting you focus on delivering results.
Whether you are a practitioner aiming for production-grade pipelines or a researcher exploring advanced architectures, TabTune streamlines your workflow for tabular deep learning.
GitHub repo: https://github.com/Lexsi-Labs/TabTune
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Orion-MSP: Multi-Scale Sparse Attention for Tabular In-Context Learning (2025)
- Data Efficient Adaptation in Large Language Models via Continuous Low-Rank Fine-Tuning (2025)
- Limited Reference, Reliable Generation: A Two-Component Framework for Tabular Data Generation in Low-Data Regimes (2025)
- Resource-Efficient Fine-Tuning of LLaMA-3.2-3B for Medical Chain-of-Thought Reasoning (2025)
- flowengineR: A Modular and Extensible Framework for Fair and Reproducible Workflow Design in R (2025)
- MeTA-LoRA: Data-Efficient Multi-Task Fine-Tuning for Large Language Models (2025)
- Optimizing Fine-Tuning through Advanced Initialization Strategies for Low-Rank Adaptation (2025)