AI & ML interests

Open science and open source

Recent Activity

Sri-Vigneshwar-DJ posted an update 5 days ago
Introducing Hawky-AI H1 4B PM: The First Open-Source LLM for Performance Marketing 🎯

Hey HF Community! 👋

Just released the first LLM fine-tuned specifically for Performance Marketing.
What is it?
A Gemma 3 4B model distilled from Claude Opus 4.5, giving it expert-level marketing knowledge.
Covers:
📱 Meta Ads (campaign structure, bidding, scaling, creative fatigue)
🔍 Google Ads (Quality Score, Performance Max, lead gen)
📊 Measurement (ROAS vs MER, incrementality, LTV:CAC)
🎨 Creative Strategy (hook rates, A/B testing, funnel creative)
Why we built it:
Generic LLMs say "optimize your targeting", which isn't helpful. This model gives specific frameworks like "frequency at 4.5 + CTR drop = creative fatigue, here's the fix..."
Technical:

Base: Gemma 3 4B
Method: QLoRA (r=64)
Teacher: Claude Opus 4.5
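
For anyone curious about the setup, here's a rough QLoRA sketch with transformers + peft. Only the base (Gemma 3 4B) and the rank (r=64) come from the specs above; the checkpoint id, quantization settings, and other hyperparameters are illustrative assumptions, not our exact training config.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "google/gemma-3-4b-it"  # assumed base checkpoint; swap in the exact Gemma 3 4B variant used

# 4-bit NF4 quantization so LoRA adapters train on top of a frozen quantized base (QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config, device_map="auto")

lora_config = LoraConfig(
    r=64,                                                      # rank from the post
    lora_alpha=128,                                            # illustrative
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # illustrative
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```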

🔗 Model: Sri-Vigneshwar-DJ/hawky-ai-H1-4b-PM
Built by Hawky.ai

Try it and let us know what you think! 🚀
Sri-Vigneshwar-DJ posted an update 8 days ago
🦅 Introducing Hawky AI H1 Mini 4B: A Domain-Specific Model for Performance Marketing

Hey HuggingFace community! 👋

We're excited to share our first open-source release: **Hawky AI H1 Mini 4B Experimental** - a Gemma 3 4B model fine-tuned specifically for Meta advertising and performance marketing strategy.

🎯 Why We Built This

At [Hawky.ai](https://hawky.ai), we build AI-powered creative intelligence tools for performance marketers. We work with major agencies (WPP, Madison, GroupM) and brands (TVS Motors, Tanishq, Bajaj Finserv) on campaign optimization.

We wanted to explore: Can a small, domain-specific model provide expert-level guidance on performance marketing?

Specifically, we focused on Meta's Andromeda algorithm - the AI system that now powers ad delivery across Facebook and Instagram. Understanding Andromeda is crucial for modern media buying, but the knowledge is scattered and constantly evolving.

🧠 What Makes This Different

Chain-of-Thought Reasoning
The model doesn't just answer - it **thinks through problems** step by step.

Sri-Vigneshwar-DJ/hawky-ai-h1-mini-4b-experimental
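
If you want to poke at it, here's a minimal inference sketch with transformers (this assumes the repo ships a standard text-generation checkpoint with a chat template; the prompt is just an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sri-Vigneshwar-DJ/hawky-ai-h1-mini-4b-experimental"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "My Meta campaign's CTR dropped 30% at frequency 4.2. What should I check first?"}]

# Apply the model's chat template, generate, and decode only the new tokens
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
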
Sri-Vigneshwar-DJ posted an update 12 days ago
Domain-specific reasoning is crucial when working with big-budget campaigns on Meta. That's why we've launched an experimental Chain-of-Thought (CoT) reasoning model for critical thinking, tailored to campaign structuring and optimization under Meta's Andromeda algorithm.

Sri-Vigneshwar-DJ/hawky-ai-h1-mini-1b-experimental
Sri-Vigneshwar-DJ posted an update 14 days ago
The recent update to Meta's ad algorithm is very difficult to crack, and even the latest models struggle to keep up with it. To address this, we've created a small experimental dataset for fine-tuning models to better tackle Meta's Andromeda algorithm: Sri-Vigneshwar-DJ/hawky-ai-andromeda-dataset
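
A quick sketch for pulling the dataset down with the datasets library before fine-tuning (the split name and schema are assumptions; check the dataset card for the actual fields):

```python
from datasets import load_dataset

# Load the Andromeda fine-tuning dataset from the Hub (split name is an assumption)
ds = load_dataset("Sri-Vigneshwar-DJ/hawky-ai-andromeda-dataset", split="train")

print(ds)     # row count and column names
print(ds[0])  # inspect a single example before wiring up a fine-tuning pipeline
```
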
Sri-Vigneshwar-DJ posted an update 18 days ago
multimodalart posted an update 3 months ago
Want to iterate on a Hugging Face Space with an LLM?

Now you can easily convert any entire HF repo (Model, Dataset, or Space) to a text file and feed it to a language model!

multimodalart/repo2txt
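
The Space handles the conversion for you; if you'd rather script the same idea locally, here's a rough sketch with huggingface_hub (not the Space's actual implementation, and the file-extension filter is an arbitrary choice):

```python
from pathlib import Path
from huggingface_hub import snapshot_download

# Download a repo; use repo_type="model", "dataset", or "space" as appropriate
local_dir = snapshot_download("multimodalart/repo2txt", repo_type="space")

# Flatten text-like files into one blob you can paste into an LLM's context
chunks = []
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file() and path.suffix in {".py", ".md", ".txt", ".json", ".yaml"}:
        chunks.append(f"===== {path.relative_to(local_dir)} =====\n{path.read_text(errors='ignore')}")

Path("repo.txt").write_text("\n\n".join(chunks))
```
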
Sri-Vigneshwar-DJ posted an update 3 months ago
Do you think domain-specific embedding fine-tunes are needed?
I've been working with embeddings for marketing use cases and noticed something: most embedding models don't capture marketing concepts very well, because they're trained for general-purpose use.
The Issue I'm Seeing
When I search marketing content with general embeddings:

"organic growth" returns farming articles
"conversion funnel" matches industrial equipment
"brand lift" doesn't connect to campaign effectiveness
Marketing jargon like CAC, ROAS, and CTR isn't properly understood

My Question
Do you think domain-specific embeddings are needed for marketing?
Some thoughts:

Marketing has its own vocabulary and concept relationships
General models trained on Wikipedia/web crawl miss these nuances
But is fine-tuning worth the effort vs just using more retrieval tricks?

Quick Example
I fine-tuned all-mpnet-base-v2 on ~1000 marketing concept pairs and saw 15-20% better retrieval accuracy (rough sketch of the setup at the end of this post). But I'm curious:

Has anyone else tried this for marketing or other domains?
When do you think domain-specific embeddings are actually necessary vs overkill?
Are there better approaches I'm missing?

https://huggingface.co/blog/Sri-Vigneshwar-DJ/why-your-marketing-rag-system-needs-domain-specifi
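
For reference, here's a minimal sketch of the kind of fine-tune I mean, using sentence-transformers with in-batch negatives (MultipleNegativesRankingLoss). The pairs below are made-up placeholders and the hyperparameters are illustrative, not my exact settings:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-mpnet-base-v2")

# Illustrative (concept, definition) pairs; the real set had ~1000 of these
pairs = [
    ("organic growth", "unpaid user acquisition through content, SEO, and referrals"),
    ("conversion funnel", "stages a prospect moves through from awareness to purchase"),
    ("brand lift", "measured increase in brand perception attributable to a campaign"),
]
train_examples = [InputExample(texts=[concept, definition]) for concept, definition in pairs]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Other pairs in the batch act as negatives for each (concept, definition) pair
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3, warmup_steps=100)
model.save("mpnet-marketing-embeddings")
```
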
Sri-Vigneshwar-DJ posted an update 3 months ago
🚀 Exciting News! We've released a Performance Marketing Expert Dataset from Hawky.ai [www.hawky.ai]

This dataset empowers AI models with cutting-edge strategies for Meta, Google Ads, and TikTok campaigns. It includes:
1. Multi-platform strategies for e-commerce, DTC, B2B, and more
2. Creative optimization and audience targeting insights
3. ROI-driven recommendations based on 2025 best practices

Sri-Vigneshwar-DJ/Performance-Marketing-Data
Sri-Vigneshwar-DJ posted an update 4 months ago
🚀 Qwen3-Omni for Marketing: A Game-Changer

Just wanted to share something exciting I've been exploring: Qwen3-Omni and how it's transforming marketing workflows.

What makes it special? At Hawky.ai, we recently started experimenting with Qwen3-Omni for analysis and optimization.

Unlike traditional tools that look at text, images, or audio separately, Qwen3-Omni analyzes everything together. It handles 119 languages, processes 40-minute audio sequences, and understands both images and videos, all at once.

The cool part? It's 2-3x faster than similar models thanks to its MoE architecture.

Real applications I'm seeing:
Ad Analysis: It scores video ads by combining visual elements, audio tone, and text, giving 25% better CTR predictions than single-mode tools.
Campaign Localization: Drop in one ad, get 10 localized versions with native voiceovers in under a minute. Perfect for testing across markets.

Market Research: Feed it competitor content, podcasts, or UGC videos. It extracts actionable insights like "3-second hooks boost retention by 15%" and saves about 70% of analysis time.

Quality Checks: Automatically catches lip-sync errors and audio-visual mismatches.

Full technical breakdown: https://huggingface.co/blog/Sri-Vigneshwar-DJ/hawky-aiqwen3-omni-advanced-architecture-and-marke

Has anyone else been experimenting with multimodal models for marketing? Would love to hear what you're building!

#MultimodalAI #MarTech #OpenSource
multimodalart posted an update 7 months ago
Self-Forcing - a real-time video distilled model from Wan 2.1 by @adobe is out, and they open-sourced it

I've built a live real-time demo on Spaces 📹💨

multimodalart/self-forcing
lianghsun posted an update 10 months ago

With the arrival of Twinkle April (Twinkle AI's annual open-source celebration held every April), our community is excited to unveil its very first project:

📊 Twinkle Eval (https://github.com/ai-twinkle/Eval), a next-generation evaluation tool led by our contributor @tedslin.

Unlike traditional evaluation tools like iKala's ievals (https://github.com/ikala-ai/ievals), which can only evaluate language models (LMs) one sample at a time, Twinkle Eval is designed with Large Reasoning Models (LRMs) in mind. As reasoning time increases with more complex models, traditional tools become increasingly inefficient 😲; for example, evaluating LRMs on the ikala/tmmluplus benchmark could run for half a day without finishing.

One question we were especially curious about:
Does shuffling multiple-choice answer order impact model accuracy? 🤔
→ See: "Changing Answer Order Can Decrease MMLU Accuracy" (arXiv:2406.19470v1)

To address these challenges, Twinkle Eval brings three key innovations to the table:

1️⃣ Parallelized evaluation of samples
2️⃣ Multi-round testing for stability
3️⃣ Randomized answer order to test robustness (see the sketch below)
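
To make 3️⃣ concrete, here's a minimal, framework-agnostic sketch of shuffling multiple-choice options before scoring; it's an illustration of the idea, not Twinkle Eval's actual code:

```python
import random

def shuffle_choices(question: str, choices: list[str], answer_idx: int, seed: int = 0):
    """Shuffle the options; return the formatted prompt and the correct option's new index."""
    rng = random.Random(seed)
    order = list(range(len(choices)))
    rng.shuffle(order)
    lines = [f"({chr(ord('A') + i)}) {choices[j]}" for i, j in enumerate(order)]
    return question + "\n" + "\n".join(lines), order.index(answer_idx)

prompt, gold = shuffle_choices(
    "Which planet is known as the Red Planet?",
    ["Venus", "Mars", "Jupiter", "Mercury"],
    answer_idx=1,  # Mars
    seed=42,
)
print(prompt)
print("Correct option:", chr(ord("A") + gold))
```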

After running experiments, we observed that Twinkle Eval can speed up evaluation by up to 15× 🚀🚀. Interestingly, most models scored slightly lower under the 2️⃣ and 3️⃣ test settings than their claimed performance, suggesting further benchmarking is needed.

This framework also comes with additional tunable parameters and detailed logging of LM behavior per question, perfect for those who want to dive deeper. 😆

If you find Twinkle Eval useful, please ⭐ the project and help spread the word 🤗
umarigan posted an update 12 months ago
**Extracting Reasoning Prompts with DeepSeek-R1: A Step Towards Better AI Reasoning**

Hi everyone! 👋

I'm excited to share a small but impactful project I've been working on, where I extracted **reasoning prompts** using the **DeepSeek-R1 model**. Reasoning prompts are a powerful way to understand how AI models arrive at their answers, and they can be used to train smaller, more efficient models to generate reasoning. Let me walk you through the process and explain why this is important.

---

#### **The Code: Extracting Reasoning Prompts**

Here's the code I used to extract reasoning prompts from the openaccess-ai-collective/oo-gpt4-filtered dataset:

import os
import time

from datasets import load_dataset
from openai import OpenAI
from tqdm import tqdm

# DeepSeek's API is OpenAI-compatible; the DEEPSEEK_API_KEY env var name is just a convention
client = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"], base_url="https://api.deepseek.com")

# Source dataset of system prompts and questions
ds = load_dataset("openaccess-ai-collective/oo-gpt4-filtered", split="train")

reasoning_data = []

for example in tqdm(ds, desc="answering"):
    try:
        response = client.chat.completions.create(
            model="deepseek-reasoner",  # DeepSeek-R1; exposes reasoning_content alongside the answer
            messages=[
                {"role": "system", "content": example["system_prompt"]},
                {"role": "user", "content": example["question"]},
            ],
            stream=False,
            max_tokens=4096,
            temperature=0.7,
        )

        # Final answer plus the model's chain-of-thought reasoning trace
        answer = response.choices[0].message.content
        reasoning = response.choices[0].message.reasoning_content

        reasoning_example = {
            "id": example["id"],
            "question": example["question"],
            "answer": answer,
            "reasoning": reasoning,
        }

        reasoning_data.append(reasoning_example)
    except Exception as e:
        print(f"Error processing example: {e}")
        time.sleep(3)  # Back off for 3 seconds before continuing
        continue  # Skip the current example and move on to the next one
data: umarigan/deepseek-r1-reasoning-prompts
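
Getting from the reasoning_data list above to that Hub dataset is a short step with the datasets library (a sketch; the repo id is the one linked above):

```python
from datasets import Dataset

# Convert the collected records into a dataset and publish it to the Hub
Dataset.from_list(reasoning_data).push_to_hub("umarigan/deepseek-r1-reasoning-prompts")
```
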
lianghsun posted an update about 1 year ago
🖖 Let me introduce the work I've done over the past three months: Llama-3.2-Taiwan-3B and Llama-3.2-Taiwan-3B-Instruct, now open-sourced on 🤗 Hugging Face.

๐—น๐—ถ๐—ฎ๐—ป๐—ด๐—ต๐˜€๐˜‚๐—ป/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐—ง๐—ฎ๐—ถ๐˜„๐—ฎ๐—ป-๐Ÿฏ๐—•: This model is built on top of ๐—บ๐—ฒ๐˜๐—ฎ-๐—น๐—น๐—ฎ๐—บ๐—ฎ/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐Ÿฏ๐—• with continual pretraining. The training dataset consists of a mixture of Traditional Chinese and multilingual texts in specific proportions, including 20B tokens of Traditional Chinese text.

๐—น๐—ถ๐—ฎ๐—ป๐—ด๐—ต๐˜€๐˜‚๐—ป/๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿฎ-๐—ง๐—ฎ๐—ถ๐˜„๐—ฎ๐—ป-๐Ÿฏ๐—•-๐—œ๐—ป๐˜€๐˜๐—ฟ๐˜‚๐—ฐ๐˜: This is a fine-tuned conversational model based on the foundation model.

This Llama-3.2-Taiwan open-source project is currently a one-person effort (yes, I did everything from text preparation onward; so exhausting!). If you're interested, feel free to join the Discord server for discussions.

BENCHMARKING

The evaluation was conducted using ikala/tmmluplus, though the README page does not yet reflect the latest results. The performance is close to the previous versions, indicating that further improvements might require adding more specialized knowledge to the datasets.

A CALL FOR SUPPORT

If anyone is willing to provide compute resources, it would be greatly appreciated to help this project continue and grow. 💪

---
๐Ÿ”๏ธ Foundation model: lianghsun/Llama-3.2-Taiwan-3B
๐Ÿค– Instruction model: lianghsun/Llama-3.2-Taiwan-3B-Instruct
โšก GGUF: lianghsun/Llama-3.2-Taiwan-3B-Instruct-GGUF
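
A minimal sketch for trying the instruct model with the transformers pipeline (the Traditional Chinese prompt is just an example):

```python
from transformers import pipeline

# Chat-style generation with the instruction-tuned checkpoint
pipe = pipeline("text-generation", model="lianghsun/Llama-3.2-Taiwan-3B-Instruct", device_map="auto")

messages = [{"role": "user", "content": "請用繁體中文介紹台灣的夜市文化。"}]
result = pipe(messages, max_new_tokens=256)

# The pipeline returns the whole chat; the last message is the model's reply
print(result[0]["generated_text"][-1]["content"])
```
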
Sri-Vigneshwar-DJ posted an update about 1 year ago
Check out phi-4 from Microsoft, dropped a day ago... If you ❤️ the Phi series, here is the GGUF: Sri-Vigneshwar-DJ/phi-4-GGUF. phi-4 is a highly efficient 14B open LLM that beats much larger models at math and reasoning; check out the evaluations on the Open LLM Leaderboard.

Technical paper: https://arxiv.org/pdf/2412.08905; the data synthesis approach is interesting.
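
A quick local-inference sketch with llama-cpp-python; the quant filename pattern is an assumption, so match it to whatever GGUF files are actually in the repo:

```python
from llama_cpp import Llama

# Downloads a quantized file straight from the Hub repo (filename glob is an assumption)
llm = Llama.from_pretrained(
    repo_id="Sri-Vigneshwar-DJ/phi-4-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the sum of two odd integers is even."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
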
Sri-Vigneshwar-DJ posted an update about 1 year ago
Just sharing a thought: I started using DeepSeek V3 a lot, and an idea struck me about agents "orchestrating during inference" on a test-time compute model like DeepSeek V3 or the O1 series.

Agents (instructions + function calls + memory) execute during inference, and based on the output, a decision is made to scale the reasoning time or perform other tasks.
Sri-Vigneshwar-DJ posted an update about 1 year ago
Combining smolagents with Anthropic's best practices simplifies building powerful AI agents:

1. Code-Based Agents: Write actions as Python code, reducing steps by 30% (see the sketch at the end of this post).
2. Prompt Chaining: Break tasks into sequential subtasks with validation gates.
3. Routing: Classify inputs and direct them to specialized handlers.
4. Fallback: Handle tasks even if classification fails.

https://huggingface.co/blog/Sri-Vigneshwar-DJ/building-effective-agents-with-anthropics-best-pra
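
For point 1, a minimal code-agent sketch with smolagents; the model choice and task are illustrative, and the routing/fallback logic from points 3-4 would sit on top of this:

```python
from smolagents import CodeAgent, HfApiModel

# Code-based agent: the LLM writes Python actions instead of JSON tool calls
model = HfApiModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")  # illustrative model choice
agent = CodeAgent(tools=[], model=model, add_base_tools=True)

agent.run("A campaign spent $12,400 and generated $38,700 in revenue. Compute the ROAS and say whether it clears a 3x target.")
```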