AI & ML interests

Omni Lingual Models

Recent Activity

huu-ontocord updated a dataset about 1 month ago: aurora-m/aurora-m2
huu-ontocord published a dataset about 1 month ago: aurora-m/aurora-m2
huu-ontocord updated a dataset about 1 month ago: mixture-vitae/MixtureVitae-2TT

Locutusque posted an update 15 days ago
🚀 AutoXLA - Accelerating Large Models on TPU
AutoXLA is an experimental library that automates the distribution, optimization, and quantization of large language models for TPUs using PyTorch/XLA. It extends the Hugging Face Transformers interface with TPU-aware features such as automatic sharding, custom attention kernels, and quantization-aware loading, making large-scale deployment and training both simpler and faster.
With quantization and Splash Attention kernels, AutoXLA achieves up to 4× speedups over standard Flash Attention implementations, significantly improving throughput for both inference and training workloads.
Whether you’re experimenting with distributed setups (FSDP, 2D, or 3D sharding) or optimizing memory via LanguageModelQuantizer, AutoXLA is built to make scaling LLMs on TPU seamless.
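To build intuition for what weight quantization buys here, the sketch below implements symmetric int8 quantization in plain Python. This is a toy illustration of the general technique, not AutoXLA's actual LanguageModelQuantizer API (which I have not reproduced here):

```python
# Toy symmetric int8 weight quantization: floats are mapped to the
# integer range [-127, 127], cutting storage from 4 bytes (float32)
# to 1 byte per weight. Illustrative only, not AutoXLA's API.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of float weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid div-by-zero
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize_int8(q, scale)
# each restored value is within one quantization step of the original
```

Real quantization-aware loading applies this per tensor (or per channel) as weights stream in, so the full-precision model never has to fit in memory at once.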
⚠️ Note: This is an experimental repository. Expect rough edges! Please report bugs or unexpected behavior through GitHub issues.
🔗 GitHub Repository: https://github.com/Locutusque/AutoXLA

Locutusque posted an update 3 months ago
🌲🍄 LLM Forest Orchestra: Turning Hidden States into Music

Hello everyone! I'm excited to introduce a new Space I've been developing called LLM Forest Orchestra. This project converts the hidden states and attention patterns of transformer models into layered MIDI compositions. The concept draws inspiration from mushrooms and mycelial networks in forests. Fungi create underground connections linking plants and trees, establishing what some call a "wood-wide web" where signals and nutrients travel. Researchers have discovered that these exchanges form patterns resembling rhythms and pulses. When translated appropriately, these patterns can become music.

Transformers operate through remarkably similar principles: tokens share signals via hidden states and attention heads. This Space transforms those invisible information flows into notes, chords, and rhythms, treating the model as a digital forest orchestra.
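In spirit, the mapping works like the toy sketch below: normalize a vector of latent activations and index into a musical scale. The pentatonic scale and min-max normalization here are my simplifications for illustration, not the Space's exact code:

```python
# Map latent activations to MIDI notes by normalizing to [0, 1]
# and indexing into a scale. A simplified sketch of the idea,
# not the Space's implementation.

C_MAJOR_PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers (C4..A4)

def latents_to_notes(latents, scale=C_MAJOR_PENTATONIC):
    """Quantize each activation onto a degree of the given scale."""
    lo, hi = min(latents), max(latents)
    span = (hi - lo) or 1.0  # constant input maps to the root note
    notes = []
    for v in latents:
        idx = int((v - lo) / span * (len(scale) - 1))
        notes.append(scale[idx])
    return notes

mock_latents = [-0.8, 0.1, 0.9, -0.2, 0.5]  # stands in for real hidden states
notes = latents_to_notes(mock_latents)
```

The "Mock latents" mode below does essentially this with simulated tensors, which is why it runs fine on CPU.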

🎛 Features

* Two compute modes:
- Full model operates on a Hugging Face model (defaulting to unsloth/Qwen3-14B-Base).
- Mock latents provides a CPU-friendly option that simulates tensors for immediate experimentation.
* Musical controls: You can adjust scale selection, tempo grid, velocity range, instrument/role presets, and seed randomization.
* Output: The system generates .mid files compatible with DAWs and remixing workflows.
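For readers curious what a generated .mid file actually contains, here is a minimal, dependency-free sketch of a single-track (format 0) MIDI file builder. It is a simplified illustration of the file format, not the Space's implementation:

```python
import struct

def var_len(n):
    """Encode an integer as a MIDI variable-length quantity."""
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)  # continuation bit on all but last byte
        n >>= 7
    return bytes(reversed(out))

def notes_to_midi(notes, ticks_per_beat=480, velocity=96):
    """Build a format-0 MIDI file playing each note for one beat."""
    track = b""
    for note in notes:
        track += var_len(0) + bytes([0x90, note, velocity])         # note on, ch 0
        track += var_len(ticks_per_beat) + bytes([0x80, note, 0])   # note off
    track += var_len(0) + b"\xff\x2f\x00"                           # end of track
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    return header + b"MTrk" + struct.pack(">I", len(track)) + track

# write the bytes to disk and any DAW can open the result
with open("melody.mid", "wb") as f:
    f.write(notes_to_midi([60, 64, 67]))
```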

🌌 Why?

Neural networks already resemble unusual musical instruments: signals flow through them, patterns emerge organically, and careful observation reveals hidden melodies. This is analogous to the forest's secret orchestra of mushrooms and trees.

👉 Try it

Try the Space here: Locutusque/LLM-Forest-Orchestra. I'm excited to hear the sounds you can generate. Please share your created MIDIs or remixes in the comments. Let's explore how this hidden forest of transformers can sound together. 🌳🎶