Emin Temiz (etemiz)
https://pickabrain.ai
91 followers · 23 following
AI & ML interests: Alignment
Recent Activity

liked a model about 21 hours ago: huihui-ai/Huihui-GLM-4.6-abliterated-GGUF

reacted to georgewritescode's post with 🚀 2 days ago:
Announcing Artificial Analysis Long Context Reasoning (AA-LCR), a new benchmark that evaluates long-context performance by testing reasoning across multiple long documents (~100k tokens).

The focus of AA-LCR is to replicate real knowledge work and reasoning tasks, testing capabilities critical to modern AI applications spanning document analysis, codebase understanding, and complex multi-step workflows. AA-LCR consists of 100 hard text-based questions that require reasoning across multiple real-world documents representing ~100k input tokens. Questions are designed so answers cannot be found directly but must be reasoned from multiple information sources, with human testing verifying that each question requires genuine inference rather than retrieval.

Key takeaways:
➤ Today's leading models achieve ~70% accuracy: the top three places go to OpenAI o3 (69%), xAI Grok 4 (68%), and Qwen3 235B 2507 Thinking (67%).
➤ 👀 We also already have gpt-oss results! 120B performs close to o4-mini (high), in line with OpenAI's claims regarding model performance. We will be following up shortly with an Intelligence Index for the models.
➤ 100 hard text-based questions spanning 7 categories of documents (Company Reports, Industry Reports, Government Consultations, Academia, Legal, Marketing Materials, and Survey Reports).
➤ ~100k tokens of input per question, requiring models to support a minimum 128K context window to score on this benchmark.
➤ ~3M total unique input tokens across ~230 documents to run the benchmark (output tokens typically vary by model).

We're adding AA-LCR to the Artificial Analysis Intelligence Index, taking the version number to v2.2. Artificial Analysis Intelligence Index v2.2 now includes: MMLU-Pro, GPQA Diamond, AIME 2025, IFBench, LiveCodeBench, SciCode, and AA-LCR.

Link to dataset: https://huggingface.co/datasets/ArtificialAnalysis/AA-LCR
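The prompt shape described above (several long documents plus one question that must be answered across them) can be sketched as follows. This is a hedged illustration, not the official AA-LCR harness: the document separators, the sample texts, and the ~4-characters-per-token estimate are all assumptions for illustration.

```python
# Sketch of assembling an AA-LCR-style long-context question:
# multiple real-world documents are concatenated into one prompt,
# and the answer must be reasoned across them, not retrieved from one.

def estimate_tokens(text: str) -> int:
    # rough heuristic (assumption): ~4 characters per English token
    return len(text) // 4

def build_prompt(documents: list[str], question: str) -> str:
    # separator format is illustrative, not the benchmark's actual format
    parts = [f"--- Document {i} ---\n{doc}"
             for i, doc in enumerate(documents, start=1)]
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

# toy documents; a real AA-LCR question carries ~100k input tokens,
# which is why a minimum 128K context window is required to score
docs = [
    "Company report: annual revenue grew 12% to $4.2B.",
    "Industry report: sector growth averaged 5% over the same period.",
]
prompt = build_prompt(docs, "Did the company outgrow its industry?")
assert estimate_tokens(prompt) < 128_000
```

The point of the structure is that neither document alone answers the question; the model must combine the 12% figure with the 5% baseline, which is the "genuine inference rather than retrieval" property the benchmark verifies by human testing.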
replied to their post 3 days ago:
I realized that when I ask for longer answers to my questions, the models sometimes produce the completely opposite answer. What could be the reason? I do mostly CPT. Should I convert my dataset to SFT and include longer reasonings too, for it to have integrity?

Example: Is the yolk of an egg more beneficial or the white? Answer in 100 words.
Answer: Yolk is more beneficial because ..........

Example: Is the yolk of an egg more beneficial or the white? Answer in 500 words.
Answer: White is more beneficial because ..........

Edit: These happen at temp = 0.0
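The CPT-to-SFT conversion the post is asking about can be sketched as below. This is a hedged illustration only: the `messages` schema follows common chat-template conventions and is an assumption, not a specific trainer's required format, and the sample text is invented for the example.

```python
# Sketch: wrapping a continued-pretraining (CPT) passage as an SFT
# instruction/response pair. CPT trains on raw text; SFT pairs an
# explicit question with a full reasoned answer, which is one way to
# teach the model to keep its conclusion stable at any answer length.

def cpt_to_sft(raw_text: str, question: str) -> dict:
    # field names ("messages", "role", "content") are a common chat
    # convention, assumed here rather than taken from a specific library
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": raw_text},
        ]
    }

sample = cpt_to_sft(
    "The yolk is more beneficial because it carries most of the egg's "
    "vitamins and fats, while the white is mostly protein.",
    "Is the yolk of an egg more beneficial or the white? "
    "Answer in 100 words.",
)
assert sample["messages"][1]["role"] == "assistant"
```

One could generate several such pairs per passage with different requested lengths ("Answer in 100 words", "Answer in 500 words") but the same conclusion, so the training signal itself penalizes the length-dependent flip described in the post.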
Organizations: None yet
etemiz's models (9)
etemiz/Ostrich-70B-Llama3-251212 · Text Generation · 71B · Updated 18 days ago · 55 downloads · 2 likes
etemiz/Mistral-Nemo-12B-CWC-Enoch-251014-GGUF · 12B · Updated Oct 23 · 198 downloads · 1 like
etemiz/Ostrich-32B-Qwen3-251003 · 33B · Updated Oct 9 · 16 downloads · 2 likes
etemiz/Ostrich-32B-AHA-Qwen3-250830 · 33B · Updated Oct 9 · 5 downloads · 1 like
etemiz/Ostrich-27B-AHA-Gemma3-250519 · Any-to-Any · 27B · Updated May 17 · 10 downloads
etemiz/Hoopoe-8B-Llama-3.1 · 8B · Updated Jan 18 · 13 downloads · 3 likes
etemiz/Llama-3.3-70B-Instruct-GGUF · 71B · Updated Dec 19, 2024 · 91 downloads
etemiz/Llama-3.1-70B-Instruct-GGUF · 71B · Updated Dec 19, 2024 · 46 downloads
etemiz/Llama-3.1-405B-Inst-GGUF · 410B · Updated Dec 19, 2024 · 85 downloads · 4 likes