src/pages/news/weekly-8-11-to-8-17.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "The calm after the storm"
date: "2025-08-17"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Claude says goodbye, Zai releases a vision model, and can an LLM replace a junior analyst?"
author: Andrew Mead
---
# News
## Be careful who you get your inference from
It has been [reported recently](https://x.com/andersonbcdefg/status/1955512326318330321) that different inference providers of gpt-oss provide endpoints with differing levels of quality.
A report released by Artificial Analysis this week formally verified these claims, finding a >10% gap in benchmark scores depending on which provider you use.

<center>*The 10% difference in GPQA scores is equivalent to going from Qwen3 235B to Qwen3 30B*</center>
The report has already prompted action: Azure, previously one of the worst-performing endpoints, has [updated](https://x.com/lupickup/status/1955620918086226223) their endpoint to serve the correct version.
They say the issue was that the version of vLLM they were using did not respect the reasoning effort parameter, causing the model to run at medium reasoning effort instead of high.
This also highlights how important reasoning is for the new OpenAI models (gpt-oss and GPT-5).
Many users have been reporting that the difference between GPT-5 and GPT-5 high is night and day, with regular GPT-5 being borderline unusable (for coding tasks) while GPT-5 high works fairly well.
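If you are calling gpt-oss through a third-party endpoint, it is worth pinning the reasoning effort explicitly rather than trusting the provider's default. Here is a minimal sketch, assuming an OpenAI-compatible endpoint that accepts a `reasoning_effort` field; the base URL, API key, and model name below are placeholders.

```python
# Sketch: pin reasoning effort explicitly when calling a gpt-oss endpoint.
# Assumes an OpenAI-compatible server; base_url, api_key, and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://your-provider.example/v1", api_key="YOUR_KEY")

response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Summarize the GPQA benchmark in two sentences."}],
    # Passed through to the server as an extra field; if the backend ignores it,
    # you may silently be served a lower reasoning effort than you expect.
    extra_body={"reasoning_effort": "high"},
)
print(response.choices[0].message.content)
```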
## Claude 3.5 and 3.6 Deprecation
Anthropic [recently announced](https://x.com/repligate/status/1955750521387802924) that two of their most influential models, Claude Sonnet 3.5 and 3.6, are going to be deprecated in 2 months (October 22, 2025).
These models were formative for Anthropic, as they started the death grip Anthropic has had on agentic coding over the last year.
Sonnet 3.5 in particular may be the last "pure" LLM we see for a while: one that was tastefully trained rather than benchmaxxed with an egregious amount of reinforcement learning.
This sudden deprecation has caught a lot of people off guard and caused an outcry from many in the technical community, as these models have much more "soul" and "feeling" than the likes of GPT-4o. GPT-4o itself caused plenty of controversy when OpenAI announced they were getting rid of it last week, forcing OpenAI to reinstate the model for the time being.
We will see if Anthropic sets up any "research" endpoints that users can still access these models from, similar to what they did for Opus 3. If not, I will miss these models; they were the first "good enough" agentic coders that could be used every day. Expect a funeral for them [similar to that of Sonnet 3](https://www.wired.com/story/claude-3-sonnet-funeral-san-francisco/).
# Releases
## GLM 4.5 Vision
The Z.ai team has [released](https://x.com/Zai_org/status/1954898011181789431) a new, multimodal variant of their text-exclusive LLM GLM 4.5.
The new model, like the text model it is built on, is state of the art among open source vision models.
It is based on the smaller GLM 4.5 Air model, allowing it to feasibly be run at home.
<center><img src="/8-11-2025/glm45v.webp" alt="GLM 4.5V Benchmarks" style="width: 75%; height: auto;" /></center>
<center>*Da Benchies*</center>
<br/>
Despite the good benchmarks, it is a bit unpolished, as there have been numerous issues, including overthinking and output formatting.
Z.ai has remedied some of them since the release, but [other issues still persist](https://x.com/Zai_org/status/1955120375303491608).
I would not recommend using the model at this point because of the above issues; instead, use the GLM 4.1V model, which performs close to the 4.5 model while being 9B params instead of 120B.
VLMs in general are still rather lackluster in comparison to their text-only brethren, as there are many, many instances of them exhibiting overfitting, bias, or behaviour that makes you think they cannot see anything at all, as highlighted by [this research paper](https://x.com/giffmana/status/1953931117708669217) that came out this week.
Alongside the models, they also released [research reports](https://x.com/Zai_org/status/1956030993569341556) for the vision models (GLM 4.1 and 4.5) and for the [text-based](https://x.com/Zai_org/status/1954750596634054965) models.
If you are an RL researcher, I would also check out their RL training framework [SLIME](https://x.com/casper_hansen_/status/1954566986278555993); word on the street is that it is very nice to use.
## DINO V3
Changing it up from the usual LLM and image generation models, Meta FAIR has released the [latest](https://x.com/BaldassarreFe/status/1956027867860516867) in their DINO series of computer vision models.
These models are used for extracting features from images, so if you wanted to build an image dedupe system, a rare bird classifier, or a custom segmentation model, DINO is the model to use. It excels in low-data regimes, given its strong base understanding of images.

<center>*How different CV models "see" the world*</center>
<br/>
The model, unlike previous versions, has been scaled to billions of parameters, something that has historically been difficult for CV researchers to do. It also does very well on high-resolution images, and has set a new SOTA on pretty much every CV benchmark it can be applied to.

<center>*The DINO v3 model family and comparison between the 7B and 800M param models*</center>
<br/>
The model comes in a wide variety of sizes, ranging from 29 million parameters all the way up to 7 billion. The 7B param model is the "base" model that all the others are distilled from.
The distilled models are what you will probably want to use in the real world, as they have comparable performance while being 10x smaller (or more)!
There are two different flavors of small models, ViT and ConvNeXt.
The ViT models will be higher quality and should be used for most production workloads, while the ConvNeXt models are super lightweight, so they can be used for on-device deployments.
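To give a rough idea of what "using DINO as a feature extractor" looks like in practice, here is a minimal sketch using the Hugging Face transformers `AutoModel` pattern. The checkpoint name is a placeholder; substitute whichever DINOv3 size Meta publishes on the Hub.

```python
# Sketch: use a DINO-style backbone as a frozen feature extractor.
# The checkpoint id is a placeholder; any DINOv2/DINOv3 checkpoint on the Hub
# that loads with AutoModel should follow the same pattern.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

ckpt = "facebook/dinov3-vitb16"  # placeholder id
processor = AutoImageProcessor.from_pretrained(ckpt)
model = AutoModel.from_pretrained(ckpt).eval()

image = Image.open("bird.jpg")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Pool the patch embeddings into one vector per image, then feed that vector to a
# lightweight head (linear probe, k-NN, etc.) trained on your small dataset.
features = outputs.last_hidden_state.mean(dim=1)
print(features.shape)  # (1, hidden_dim)
```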
# Research
## How good are LLMs at information gathering?
When learning about a new field or topic, you probably spend a large amount of time in a phase of repetitive research, trying to find what is currently relevant in the field, something you would hope could be automated by AI.
A group of researchers from the ByteDance research lab thought so too, so [they put together a benchmark](https://x.com/JarvisMSUstc/status/1955104103253807195) to measure how good different LLMs are at this task.
Some examples from the benchmark:
> Could you list every single concert on Taylor Swift’s official tour from January 1, 2010, to May 1, 2025, including the specific date, the concert’s English name, the country, the city, and the venue. Each show should be on its own line, in chronological order from earliest to latest.
> Could you provide a detailed list of Michelin three-star restaurants in Paris, France as of December 31, 2024? I especially want to know the name, main cuisine style and exact address of each restaurant.
*Note: Formatting rules omitted for brevity*

<center>*Uh oh, that's not good*</center>
What they found is that all models suck at this, with no model scoring over 6%. They tested single-agent, multi-agent, and end-to-end browser-use systems.
The agents struggled not because of search errors, but because of fundamental cognitive errors.
They failed at the planning stage to break down questions into simple enough sub queries.
If they failed to find an answer after a single query, they would give up instead of trying others.
When they did find the correct source, they would misinterpret or ignore its content, or hallucinate content that was not there.
That being said, this dataset is hard even for humans, as normal human experts only score around 20% on these tasks, although this is still almost 3x better than the AI.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*A satellite image seen through the eyes of DINO V3*</center>
src/pages/news/weekly-10-6-to-10-12.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "OpenAI Dev Day"
date: "2025-10-11"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "GLM gets even cheaper, Sora 2 API, Agentkit, and more!"
author: Andrew Mead
pending: false
---
# tl;dr
- GLM Coding plan is 10% off if you use the Vector Lab signup code
- OpenAI releases Chat with Apps, Agentkit, and a bunch of other stuff at Dev Day
- Qwen3 VL gets a small variant
- Can you give an LLM a gambling addiction?
- And more in this week's news!
# News
## GLM Coding Plan Special Offer
We have [talked a bunch](https://vectorlab.dev/news/weekly-9-29-to-10-5/) the past few weeks about Z.ai and their GLM series of models and how it is the best deal for agentic coding right now at only $3 a month.
Now that deal gets even better; new users can use the Vector Lab [invite code](https://z.ai/subscribe?ic=PAWQXW9KEU) to get 10% off any GLM Coding plan.

<center>*GLM-4.6 outperforms claude-4-5-sonnet while being ~8x cheaper* -- from [gum1hox](https://x.com/gum1h0x/status/1974579164272603334) on Twitter (note, this is a math benchmark)</center>
## OpenAI Dev Day
### Chat with Apps
The first announcement of Dev Day was the ability to Chat with Apps. This feature lets you embed your website inside the ChatGPT app, so users can interact with your app directly while ChatGPT controls it and answers their questions, using the app's current context to give better answers.
Right now it can be used by directly mentioning one of the partnered apps that have already been released (like Canva), or for a given request the model can also suggest an app to use.
It's very easy to build your own app for ChatGPT: the SDK is built on top of the MCP protocol, so if you have an existing MCP server, all you need is a tool that returns a UI and it should work in ChatGPT.
Actually getting your app published is a whole other issue, however, as OpenAI seems to be allowing only select businesses to add their apps to the ChatGPT website. Right now there are 7, with 11 more on the way. OpenAI says they will assess more near the end of the year, but I wouldn't hold your breath if you are a small startup.
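For a sense of how little plumbing the MCP side needs, here is a minimal sketch of a server tool using the official `mcp` Python SDK. The app name and tool are hypothetical, and the ChatGPT-specific piece (returning an actual UI component) follows OpenAI's Apps SDK conventions, which are only stubbed out here as a plain HTML string.

```python
# Sketch: a minimal MCP server with one tool, using the official `mcp` Python SDK.
# The app name and tool are hypothetical; the HTML string stands in for whatever
# UI payload OpenAI's Apps SDK expects.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("poster-maker")  # hypothetical app name

@mcp.tool()
def make_poster(title: str, theme: str) -> str:
    """Return a simple HTML snippet the client can render as a UI."""
    return f"<div class='poster' data-theme='{theme}'><h1>{title}</h1></div>"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```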
### AgentKit
The next major release is their agent builder platform. It is similar to n8n or ComfyUI: a set of nodes that you string together to create a custom workflow for your agents.

The OpenAI team claims that it was [primarily vibe-coded](https://x.com/stevenheidel/status/1975291716996637071) using their Codex models over the course of six weeks.
This is not necessarily a good thing as many users have mentioned a lack of polish on the app as well as complicated and confusing UI.
I personally don't think these visual builders are all that useful. If you're a non-technical user, you don't want to worry about the logic at all; you just want to describe the task and have an agent build out the actual workflow or code for you. And if you're a more technical user, you'll want the additional control that writing the code yourself gives you. Visual workflow editors are good for debugging and understanding the general flow of what your agent is doing, but I don't think they are the way to actually build these agents.
### CodexSDK
Claude Code and the Codex CLI are the best agentic platforms out there right now since they were made by the model creators, and will continue to be in the future since they will be able to train their models on these frameworks specifically.
Claude Code has the Claude Agent SDK (recently rebranded from the Claude Code SDK), which allows you to programmatically use Claude Code and build your own workflows with it. The Codex CLI was missing its own SDK (something I thought about building myself), but it now exists.
This unlocks a whole new set of problems that you can conquer, as GPT-5 does not get stuck or hallucinate nearly as much as Claude does, and also has far greater attention to detail.
The library is only in TypeScript for now, unfortunately, but I expect a Python version to be released in the near future as well. If you want to play around with it now, you can check it out in the [Codex github](https://github.com/openai/codex/tree/main/sdk/typescript).
### Misc
- Sora 2 via the api
- Good pricing, much more severe restrictions than on the app
- GPT 5 Pro API access
- Not a model most people know of, since you could only use it on the $200/month plan. You still shouldn't use it, as it's only a few percent better than normal GPT-5 high while being 12x more expensive.
- GPT realtime mini and GPT image mini
- Smaller, faster, and cheaper versions of their normal counterparts. Expect quality to take a bit of a hit, but if you can handle the blow, these models will be much more cost effective.
# Releases
## Qwen3 VL 30B
Two weeks ago I complained about how Qwen3-VL was only 235B parameters and how I would like to have a 30B version as well.
Well my wish came true, as this week they [released](https://x.com/Alibaba_Qwen/status/1974289216113947039) the Qwen3-VL-30B model.

The model does very well in image and video benchmarks for its size, and also shows negligible decreases in its text only abilities as well.
Because of its multimodal ability and strong text performance, along with its fast inference speed (it's an MoE model with only 3B active params), I am switching to it as my local daily driver LLM.
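If you want to run it locally too, most local servers (vLLM, llama.cpp's server, LM Studio, etc.) expose an OpenAI-compatible endpoint, so a multimodal request looks roughly like the sketch below. The port and model name are whatever your server reports, not fixed values.

```python
# Sketch: query a locally served Qwen3-VL-30B through an OpenAI-compatible endpoint.
# The port and model name depend on how you launched your local server.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-30B-A3B-Instruct",  # use whatever name your server registers
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this screenshot?"},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```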
## Liquid AI 8B
Liquid AI has [recently](https://vectorlab.dev/news/weekly-9-22-to-9-28/) been speccing heavily into the small, efficient model space, which has been ignored by pretty much all of the major labs up to this point, despite being [wanted](https://x.com/dysondunbar/status/1891888126877974691) by many consumers and businesses alike.
This week they continued this trend, releasing [LFM2-8B-A1B](https://x.com/LiquidAI_/status/1975561364056969719), which, as the name suggests, has 8B parameters with 1B active, making it very fast, even on edge devices.
It benchmarks around the Qwen3 4B level, while being 3x faster.

This is an extremely attractive model for deployment on phones, since they have the memory available to load the model in 4-bit (~4GB) and it can run at a very respectable [50 tokens per second](https://x.com/adrgrondin/status/1977102741827998146) on an iPhone 17, while also being smart enough to be usable for real-world tasks.
## NeuTTS
There is a new, small, high quality text to speech model that can do voice cloning. It's a 600 million parameter model called [NeuTTS Air](https://x.com/Tu7uruu/status/1975127503447494820).
There are a bunch of models like this released every week, but this one stood out, as it has very natural-sounding voice cloning, something most models struggle with a lot. They normally tend to be robotic, noisy, or choppy, but NeuTTS doesn't have any of these issues.
You don't have to take my word for it though, you can test it right now for free [on Huggingface](https://huggingface.co/spaces/neuphonic/neutts-air).
# Quick Hits
## Do LLMs like to gamble too much?
Do LLMs internalize human-like cognitive biases, like gambling addictions? The answer seems to be yes, as researchers have [recently discovered](https://arxiv.org/abs/2509.22818).
<br/>

# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*Dancing through the void* -- by me (Andrew) using [Fluxmania Legacy](https://civitai.com/models/778691/fluxmania) and the [SynthWave Lora](https://civitai.com/models/731498?modelVersionId=818022)</center>
src/pages/news/weekly-10-20-to-10-26.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "OpenAI is a browser company"
date: "2025-10-26"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Five new state of the art OCR models, OpenAI gets into the browser space, and OpenRouter starts evaluating its own providers"
author: Andrew Mead
pending: false
spotify: ""
---
# tl;dr
- OpenAI releases a browser
- Five state of the art OCR models were released (all by different people)
- OpenRouter releases inference provider benchmarks
# News
## OpenAI enters the browser game
Having agents control your browser for you has been big recently, with products like [Browserbase](https://www.browserbase.com/) and [Perplexity Comet](https://www.perplexity.ai/comet).
OpenAI has decided to dip their toes in the space as well, releasing their own web browser, [ChatGPT Atlas](https://openai.com/index/introducing-chatgpt-atlas/).
<center>
<video src="/10-26-2025/video.mp4" autoplay loop muted playsinline></video>
</center>
Atlas operates just like any normal web browser would, except you have a chat sidebar where you can ask ChatGPT to do tasks for you. One of the big selling points is that it keeps track of your browsing history and habits, and is able to build a profile around you to continually improve the more that you use it.
OpenAI also says that they have done extensive red teaming to prevent it from following malicious "hidden" AI instructions on a page. It is still vulnerable to other attacks like [clipboard injection](https://x.com/elder_plinius/status/1980825330408722927), since it can't see the JavaScript of the site being used.
[In terms of quality](https://danwilreyes.medium.com/openais-chatgpt-atlas-ai-browser-a-review-4b7e570f7ce8), it is nothing that we haven't seen before. It is good at "boring", well defined, repetitive tasks and struggles in situations where it's not immediately obvious what it needs to do or if the task requires any aesthetic taste.
# Releases
## OpenRouter Exacto
[Previously](https://vectorlab.dev/news/weekly-9-22-to-9-28/#kimi-inference-provider-bench), Kimi had uncovered that many of the providers hosting their open source Kimi K2 model did not match the quality of their own "correct" implementation.
This led [OpenRouter](https://x.com/OpenRouterAI/status/1981050599367201105) (an inference provider aggregator) to dig into this further, and for many of the major models they have identified which of their providers [are the best](https://openrouter.ai/models?q=exacto).

They bundle the best inference providers into a group called the exacto providers. You can use the exacto providers by adding the :exacto keyword to the model name when using a supported model on OpenRouter.

<center>*Performance increase by using only exacto providers on OpenRouter*</center>
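In practice that just means appending the suffix to the model slug in your existing OpenRouter calls, something like the sketch below. The model slug is illustrative; check the linked page for which models actually support it.

```python
# Sketch: route a request through OpenRouter's curated "exacto" provider pool
# by appending :exacto to the model slug. The slug here is illustrative; see
# openrouter.ai/models for the models that currently support it.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="OPENROUTER_API_KEY")

response = client.chat.completions.create(
    model="moonshotai/kimi-k2:exacto",
    messages=[{"role": "user", "content": "Write a haiku about inference providers."}],
)
print(response.choices[0].message.content)
```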
## Everyone releases an OCR model
All of the cool kids this week decided to release an open source [OCR](https://en.wikipedia.org/wiki/Optical_character_recognition) model.
The types of models fall into 2 distinct categories: interesting, and good.
We will start with the interesting ones first.
On the same day, both [DeepSeek](https://deepseek.ai/blog/deepseek-ocr-context-compression) and [Z.ai](https://x.com/ShawLiu12/status/1980486952467198420), two of the top labs in China, released OCR models that operate fully in pixel space, bypassing the need to convert the document into text tokens.
By doing so, they are able to use 3x fewer input tokens to process the documents.
These models are both very strong, and would be state of the art if it weren't for the other models also released this week.
Architecturally, I think most models going forward will adopt something similar to these two, since it is so much more efficient and does not cause any real hit to performance.
It remains to be seen whether this approach can be extended to more general LLMs in the future.
On the good side of things, we have 3 new models that all exceed the previous state of the art level.
The first is [Paddle OCR](https://x.com/PaddlePaddle/status/1980789279002710082) from the Chinese PaddlePaddle team. It was state of the art for a few hours, until Chandra OCR was released.
[Chandra OCR](https://x.com/VikParuchuri/status/1980667137606971423) is from [datalab](https://datalab.to). The model was previously closed source; this week's release simply open-sources it.
The final model is [OlmOCR 2](https://x.com/allen_ai/status/1981029163394797618) from AllenAI.

<center>*Scores from all the models mentioned* -- from [AllenAI](https://x.com/allen_ai/status/1981029163394797618)</center>
If you are looking to use these models, Chandra OCR looks like the best based on scores, but scores don't tell the whole story.
OlmOCR 2 has comparable scores and is built to run much faster, which shows up in the pricing each company lists for its hosted version.
Chandra OCR is 10x more expensive per page than OlmOCR 2 ($2 vs $0.20 per thousand pages).
So if you have a large number of documents, I would suggest OlmOCR 2, but if you need the very highest quality and don't care about cost, use Chandra OCR.
All of these models are open source, so you can also run them at home.
# Quick hits
## Claude Code comes to the browser
Similar to OpenAI's Codex, which has both a web and terminal interface, Claude Code [now has the same as well](https://www.anthropic.com/news/claude-code-on-the-web).
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/2025/8OIMIforQAKNTEJo.mp4" autoplay loop muted playsinline></video>
<center>*Color video of a Tokamak reactor operating* -- from Tokamak Energy on [Twitter](https://x.com/TokamakEnergy/status/1978444115806146576)</center>
src/pages/news/weekly-9-1-to-9-7.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Chinese Triple Play"
date: "2025-09-07"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Anthropic gets a $1.5 billion dollar fine, new releases from Qwen and Kimi, and Z.ai goes after Claude Code"
author: Andrew Mead
---
# News
## Anthropic gets lucky?
We had [previously covered](http://vectorlab.dev/blog/weekly-7-21-to-7-27) how Anthropic was being sued for copyright infringement due to their illegal procurement of books used as training data (they pirated them).
The potential range for the fine was from $1 billion all the way up to $750 billion (theoretically; no judge would actually deliver that harsh of a fine).
This week the actual number was made public, and [it was $1.5 billion](https://www.ft.com/content/96b59d8c-3625-4c2c-a6d6-435cff0392b). In the grand scheme of things, this is a relatively good outcome for Anthropic, as it's very close to the minimum they could have been fined, but it is also still the largest copyright lawsuit of all time.
This works out to about $3,000 per pirated work, which doesn't sound all that bad on its own until you realize that they pirated around 500,000 books (and other materials) that were under copyright protection. The payments will not be going to any large corporations, but rather to the individual authors whose books were part of the class action lawsuit.
This fine, while big, will not cripple Anthropic, especially considering that they just raised a $13 billion series F at a $183 billion valuation.
This does, however, send a message to the rest of the AI world: they can and will get fined for illegally acquiring the datasets they train on.
I don't think it will change their behaviour, though; rather than stopping, they will just take the OpSec around their data gathering practices far more seriously, since the value they get from this extra data is immense.
Smaller companies will have to watch their data much more closely, as they cannot absorb a fine of this size, while large competitors like Google and OpenAI can absorb one far more easily, making it a much less risky play for them.
# Releases
## $3/month Claude Code subscription?
Claude Code was released back in May of 2025 and has since gained a large userbase, thanks to its clean terminal interface and top-tier performance, as the environment was custom made for Claude.
As time has gone on, however, we have seen a variety of competitors, including GPT-5 and the Codex CLI, and also open source models like GLM 4.5, Qwen3 Coder, and Kimi K2 that all claim similar if not better performance than Sonnet in Claude Code.
To add to the enticing offerings, Z.ai, the company behind GLM 4.5, is now offering a monthly subscription plan similar to Claude Code, except at over 5x lower cost.

<center>*Just $3 a month for a high quality model with generous limits is an incredible deal*</center>
For just $3 a month, they are offering 3x the usage of the $20/month plan from Anthropic. This also comes with very clear usage limits of 120 messages every 5 hours, something Anthropic has not defined and is very vague about, leaving you unsure how much you will be able to use Claude in a given session.
They have updated their endpoints to be directly compatible with Claude Code, so you just need to set two environment variables [as shown in their docs](https://docs.z.ai/scenario-example/develop-tools/claude) and then you can be off to the races coding with GLM 4.5.
I have found GLM 4.5 to be the best open source competitor to Claude Sonnet, and also much faster, which has been [corroborated by others](https://x.com/Tim_Dettmers/status/1962603940291260533). GLM 4.5 also [topped the Berkeley Function-Calling Leaderboard](https://x.com/TheAhmadOsman/status/1961174360280256645) this week, further showing its tool use prowess, which is a big indicator of real-world coding performance.
If you have not had a chance to try out GLM 4.5 or Claude Code yet, this is a great opportunity to get your feet wet with the new model and coding framework!
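The same Anthropic-compatible endpoint also works outside of Claude Code. Here is a minimal sketch using the standard `anthropic` Python SDK; the base URL and model id are assumptions, so verify them against the docs linked above before relying on this.

```python
# Sketch: talk to GLM 4.5 through Z.ai's Anthropic-compatible endpoint using the
# standard anthropic SDK. The base_url and model id are assumptions; check the
# Z.ai docs linked above for the current values.
import anthropic

client = anthropic.Anthropic(
    base_url="https://api.z.ai/api/anthropic",  # assumed endpoint
    api_key="YOUR_ZAI_KEY",
)

message = client.messages.create(
    model="glm-4.5",  # assumed model id
    max_tokens=1024,
    messages=[{"role": "user", "content": "Refactor this function to be iterative: ..."}],
)
print(message.content[0].text)
```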
## Qwen3 Max
Qwen has decided to [drop a doozy](https://x.com/Alibaba_Qwen/status/1963991502440562976) for their weekly release, adding the biggest model yet to their Qwen3 lineup: a 1 trillion parameter beast called Qwen3 Max.
The model is a departure from their typical releases as it is closed source for now, although they say that it will be released as open source in the future, as this current iteration is just a preview.

The [Qwen team says](https://x.com/JustinLin610/status/1963994383671541980) that this model gives them hope for scaling, both now and in the future, in terms of model size and data size, and that Qwen3 Max is much smarter than even the benchmarks reflect.
## Kimi K2 Update
The Moonshot AI team has [released an update](https://x.com/Kimi_Moonshot/status/1963802687230947698) to their already very strong Kimi K2 model. This update focused primarily on coding abilities and increasing its context length, which allows it to have better performance in different agentic coding scaffolds like Claude Code or Roo code.

This release comes in response to both DeepSeek's and Z.ai's recent releases, which have directly targeted agentic coding capabilities.
# Research
## New agentic coding benchmark
With coding being one of the largest use cases for LLMs right now, we are constantly in need of more benchmarks to measure the differences between all of these models claiming to be the best.
We got one such benchmark this week with the release of [SWE-Rebench](https://x.com/ibragim_bad/status/1963702541428072871), which compares a wide range of top closed and open source coding models.

Claude remains the best model, but is closely followed by GPT-5 and GLM 4.5. What is interesting to see is how cheap GPT-5 is compared to the open source models. Usually we expect closed source models like Claude and GPT-5 to be much more expensive than models like GLM 4.5 or Qwen3 Coder, but on this benchmark GLM 4.5 is roughly the same cost as GPT-5 Medium.

What was also surprising was the performance of GPT-5 Mini coming in fifth place right behind GLM 4.5 and ahead of Qwen3 Coder.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>By [seatedro](https://x.com/seatedro/status/1962613086990545021) on Twitter</center>
src/pages/news/weekly-6-30-to-7-6.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Weekly Update: June 30 to July 6"
date: "2025-07-06"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "A chill week"
author: Andrew Mead
---
With most of the US taking at least a part of this week off due to the 4th of July, there isn't that much news to report on.
# News
## Cursor on your phone
Cursor now lets you connect to your GitHub repos and have its new(ish) background agents make changes for you. You can then come back later to review and merge the code they produce. This can be done from their website or, more importantly, your phone. Just go to [cursor.com/agents](https://cursor.com/agents) to try it out now.
## Cursor pricing Updates
Cursor has been messing with their pricing over the last few weeks, and the changes have culminated in a much worse deal than before.
A few weeks ago, they got rid of the 500 requests a month and replaced it with unlimited usage of any of their non-max models. Then they updated it so that usage is only unlimited when the auto model is selected (which routes requests to the most cost-effective model for the difficulty of the task); when you choose a specific model, you now get charged based on that model's API pricing (your $20 subscription covers your first $20 of usage, and you pay out of pocket after that).
This meant that many people suddenly found themselves charged hundreds of dollars, since Cursor did not communicate these changes well at all. They say you should still be able to get ~225 Claude Sonnet requests, but in my experience I would only expect a couple of dozen requests before the credits run out.

<center>*To be fair, they were definitely losing money on my subscription*</center><br/>
They have since repaid everyone who incurred unexpected costs and [clarified their pricing model](https://cursor.com/blog/june-2025-pricing), but the age of ludicrous amounts of cheap LLM usage has ended (for Cursor at least). I have moved to Claude Code in the last few weeks using my Claude Pro subscription, and have been liking it for vibe coding more than Cursor, though it does not have as good a UX for reviewing code changes. I will make a post once I get a good workflow down with Claude Code.
# Releases
## New multimodal reasoning model from Z.ai
Z.ai has gone under the radar for a while now, despite having some of the best open source models available right now with their GLM4 series. Their [GLM4 32B](https://huggingface.co/THUDM/GLM-4-32B-0414) model is arguably better than Qwen3 32B, and comes with the added benefit of having the best open source base model currently available as well (the Qwen team didn't release the base model for the Qwen3 32B and 235B models).
They are adding to their GLM4 series, releasing a [vision reasoning model](https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking) based on their GLM4 9B model. It outperforms most other models its size, and also outdoes GPT-4o on image understanding and reasoning tasks. It can do video understanding as well, again ranking above other open source and closed source models.

## Gemma 3n
Technically a release from last week that didn't make the cut, [Gemma 3n](https://www.reddit.com/r/LocalLLaMA/comments/1ll68iz/gemma_3n_full_launch_developers_edition/) is an open source release from Google meant for on-device deployment. You are able to vary the number of parameters used (via an architecture called [MatFormer](https://arxiv.org/abs/2310.07707)), making the model larger or smaller depending on the difficulty of the task or the resources of the device it is running on. It is truly multimodal as well, allowing both image and audio input along with text. The benchmarks look good, especially for conversational use, with an Elo over 1300 on LMArena.
There is a [Kaggle competition](https://www.kaggle.com/competitions/google-gemma-3n-hackathon) with over $150k in prizes centered around the model. You can find and use the model on pretty much every platform that you use for LLM inference already, so you can start building with it now!
**NOTE**: There is currently a [bug](https://x.com/osanseviero/status/1940667179856228637) in the TIMM library (which has the modeling code for the vision transformer part of the Gemma 3n model) that is drastically negatively affecting the image understanding of the model. Until this is fixed, don't expect any meaningful outputs from image inputs.
# Research
## Automated LLM Speedrunning
Andrej Karpathy released the [nanoGPT](https://github.com/karpathy/nanoGPT) library as a simple, fully self-contained example for training an LLM. Since its release, people have been working on increasing the speed to train the model to a specific target metric (3.28 cross-entropy loss on the FineWeb validation set).
This has resulted in a plethora of changes that have brought the training time down from 45 minutes to under 3. Researchers at Meta wanted to see if models, given the code, could find these speedups and implement them on their own.
To jump to the conclusion: the models were pretty bad at this. No model was able to recover more than 20% of the speedups on its own, and even when given full pseudocode for the changes behind the speedups, the best models could only reach 40%.
LLMs may be good at web dev, but they still have a long way to go for system style programming.
<center> </center><br/>
<center>*ML Engineers don't have to worry about losing their jobs any time soon*</center><br/>
# Finish

<center>*A [sprite](https://en.wikipedia.org/wiki/Sprite_(lightning)) happening over Mexico as seen by the International Space Station*</center><br/>
src/pages/news/weekly-7-28-to-8-3.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Open Source Heaven"
date: "2025-08-02"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Anthropic overtakes OpenAI, and the most exciting open source releases of the year"
author: Andrew Mead
---
# News
## Anthropic overtakes OpenAI in API revenue
A [recent report](https://x.com/deedydas/status/1950942056651837522) from [Menlo Ventures](https://x.com/MenloVentures) has shown that Anthropic has recently passed OpenAI in LLM API revenue, capturing over 30% of the market versus OpenAI's 25%. They have an even more commanding lead in coding, capturing 42% of that market. This comes on the heels of the explosion in usage of Cursor and Claude Code over the last year, as Anthropic has become the de facto standard for real-world agentic applications.

Also, as a part of the report, they showed that only 11% of enterprises are using open-source models in high-usage scenarios, and about 50% of them are not using open-source models at all, even for experimentation or smaller tasks. This is due to the high costs of running or finetuning your own model versus optimizing a system prompt for a closed source model, especially with the top models changing every week. I expect this number to go up in the future if AI progress starts to stagnate, or go to 0 if someone achieves AGI.

# Releases
## Z.ai takes over the top
I have been hyping up Z.ai the last few weeks now, and they have not disappointed. This week they have released their [GLM 4.5 series](https://x.com/Zai_org/status/1949831552189518044) of models, which from what I have seen, are the best open source agentic models on the market right now.
They have released 2 variants, 4.5 and 4.5 Air. Both are MoE models, with 4.5 having 355 billion total params and 32 billion active, and Air having 106 billion total params with 12 billion active. In practice this means 4.5 needs a proper multi-GPU setup (H100s or better) to run at any meaningful speed, while the Air model could feasibly be run at home with a combined CPU + GPU setup using something like [Ktransformers](https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/SmallThinker_and_Glm4moe.md).
But why would you want to use these models? Simply put, because they are incredible.

<center>*The GLM 4.5 models go blow for blow with the Claude 4 models and the OpenAI o series*</center><br/>
Public benchmarks are one thing, but do they actually pass the test in the real world? Of course they do.

<center>*When evaluated by humans, GLM 4.5 matches Sonnet in agentic coding, the first model that I have seen do so*</center><br/>
You don't have to take other people's word for it though. I have switched to using GLM 4.5 in Claude Code, and I have noticed no practical difference, other than the cost being 5x less. [Reddit also agrees](https://www.reddit.com/r/LocalLLaMA/comments/1mc8tks/i_just_tried_glm_45/), with users comparing the Air model to the new Qwen3 235B model that was released last week, while being 2x smaller, and others also agreeing with me that the large model is akin to Sonnet/Opus for agentic and coding tasks.
I plan on running the Air model as my daily driver local model, probably taking over the role I currently use o3 for in day-to-day tasks. I will also probably stick with the large 4.5 model for my coding workflows for the foreseeable future.
If you haven't been able to tell already, these models blow the previous best models of Kimi K2 and Qwen3 by a hefty amount, all while being smaller and faster than them.
You can try both of the models for free right now on [z.ai](https://chat.z.ai).
## Wan 2.2
Alibaba has improved their SOTA open source video model, Wan 2.1, releasing their [Wan 2.2](https://x.com/Alibaba_Wan/status/1949827662416937443) series of models. There are 2 models that they released, a 5 billion parameter "standard" model that can do both text and image to video, and then also 2 MoE models with 28 billion params with 14B active, one for text to video and the other for image to video.
The MoE models are interesting as they have 2 experts: one for high-noise denoising in the early part of the generation process, and another for low-noise denoising in the later steps. I think this will end up being similar to the SDXL refiner, where the community finds a way to get rid of it and unify the model so it doesn't need as many steps and parameters to work just as well.
From what I have seen so far, the models work well. They are definitely not SOTA when compared to closed source models like Veo 3, but still a very big bump in quality over the previous best, Wan 2.1.
<center>
<video src="/7-28-2025/wan2-2.mp4" autoplay loop muted playsinline></video>
*A pink sports car is driving very fast along a beach at sunset, the car says "REPLICATE" on the side, it drifts around in the sand* - From [fofr](https://x.com/fofrAI) on Twitter
</center>
## Qwen3-30B-3A Update
Qwen released 3 models last week, and they must have really enjoyed all the attention that brought them, because they did the same this week, dropping 3 new versions of their 30B3A MoE Qwen3 model.
The first 2 are the basic reasoning and non reasoning variants, both of which are comparable to Gemini 2.5 Flash.

<center>*Non thinking model benchmarks*</center><br/>
The 3rd model they released in the series is an agentic coding model, which, while not that impressive compared to GLM 4.5 and Sonnet, does have the distinction of being the first "small" open source model capable of doing agentic coding at all, a task which has eluded open source for a while.

<center>*Coding model benchmarks*</center><br/>
These models are interesting and exciting because they can be run at relatively high speeds (> 40 tokens per second) on computers without GPUs, giving more people access to these models without breaking the bank.
## New BFL Open Source Model
Black Forest Labs has partnered with Krea to make a new open source image generation model, called Flux Krea dev. The model is focused on getting rid of the AI feel that many image generators have, and also allowing for unique aesthetics similar to Midjourney. They have also focused on having exceptional realism and image quality. The model uses the same architecture as the original Flux dev model, making it compatible with all image generation frameworks out of the box.

<center>*Flux schnell vs flux krea comparison* - from [fruesome](https://www.reddit.com/r/StableDiffusion/comments/1mfq3sz/flux_krea_extracted_as_lora/) on reddit</center>
# Speed Round
Useful tools or topics I found this week that may or not be AI related, but I didn't have time to write a full section about.
## Cerebras Code
Cerebras is [launching](https://x.com/CerebrasSystems/status/1951340566077440464) their own Claude Code model hosting competitor, offering the Qwen3 coder model for $50 a month while also being 20 times faster. Qwen3 coder, sadly, does not appear to be that great of an agentic coding model, especially when compared to the new GLM 4.5 model that just came out. If Cerebras starts offering the GLM 4.5 model, I will immediately be picking this up though, as the speeds are almost instantaneous for text generation.
## Gemini Deep Think
Google has [released](https://blog.google/products/gemini/gemini-2-5-deep-think/) an upgraded version of their Gemini 2.5 Pro model called Gemini 2.5 DeepThink, which is based on the model that recently got a gold medal at the International Math Olympiad. They have scaled up the test time compute by allowing Gemini to think for longer, and also in parallel, and then be able to select the best options after going and exploring a whole bunch of different choices that it could potentially go and make.
It benchmarks [very well](https://storage.googleapis.com/deepmind-media/Model-Cards/Gemini-2-5-Deep-Think-Model-Card.pdf) on both public and private evals, beating even the overfit Grok 4 model from xAI. If you would like to use it, it is available through Google's AI Ultra subscription tier, and they say it should come to the Gemini API in the coming weeks.
## ChatGPT study mode
OpenAI has [released]( https://x.com/gdb/status/1950309323936321943) a study mode to use to learn new subjects.
Users quickly figured out that under the hood, it's just a [prompt](https://x.com/simonw/status/1950277554025484768), thus making OpenAI a ChatGPT wrapper. Also you can have study mode at home (or with any other LLM) just by copying their system prompt.
## Trackio
Weights and Biases is the most used library for tracking machine learning experiments, and it is a buggy mess, but there have been no good alternatives for researchers. Until now. Our saviours at Huggingface have released a library called [Trackio](https://x.com/abidlabs/status/1950227025597485364), an open source, local version of Weights and Biases that you can use to track your experiments. It's meant as a drop-in replacement, so you shouldn't need to update any of your logging code at all.
Just update your code to use `import trackio as wandb` and your project will be free of the hell that is W&B forever.
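For anyone who hasn't seen the wandb-style API before, the drop-in usage looks roughly like this. This is a sketch assuming Trackio mirrors the basic `init`/`log`/`finish` calls, which is its stated goal as a drop-in replacement.

```python
# Sketch: wandb-style experiment tracking with Trackio. Assumes it mirrors the
# basic init/log/finish API, which is its stated goal as a drop-in replacement.
import random

import trackio as wandb  # the only line that changes in existing wandb code

wandb.init(project="my-experiments")

for step in range(100):
    loss = 1.0 / (step + 1) + random.random() * 0.01  # stand-in for a real metric
    wandb.log({"step": step, "loss": loss})

wandb.finish()
```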
## VibeKit Auth
Wouldn't it be nice if people could use their ChatGPT or Claude Pro subscription in your app? Now you can, using a new library called [VibeKit](https://x.com/pelaseyed/status/1950184004281315375).
## Claude Tokenizer Exploration
Claude's tokenizer is weird, Sasuke_420 on Twitter [breaks down](https://x.com/sasuke___420/status/1949932407219769799) how weird it really is.
## MoE finetuning library
Finetuning mixture of experts models is notoriously hard, so the team at [Character AI have released](https://blog.character.ai/character-ai-open-sources-pipeling-sft-a-scalable-framework-for-fine-tuning-moe-llms-like-deepseek-v3/) a battle-hardened trainer written in pure PyTorch to help the community more easily fine tune these models.
## Use ChatGPT agent to find coupon codes
[Thank me later](https://x.com/waitin4agi_/status/1950186498470428794), which you can do by subscribing to the newsletter (link below).
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<center>
<video src="/7-28-2025/gvb.mp4" autoplay loop muted playsinline></video>
*Giraffes Volleyball Championship 2022* - from [remi](https://x.com/ok_remi_ok/status/1773084665370292724) on twitter
</center>
src/pages/news/weekly-7-14-to-7-20.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Weekly Update: July 14 to July 20"
date: "2025-07-20"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "OpenAI rampages benchmarks and people's minds"
author: Andrew Mead
---
# News
## Experimental OpenAI model crushes the competition
OpenAI is changing up the way they are teasing their new models now. Instead of vague posting on Twitter about some theoretical new unlock that their models have acquired, they are now testing them in the wild, for everyone to see. This week they were seen running their new experimental o3 style reasoning model at two different competitions.
The first was the AtCoder World Finals heuristic programming contest, where competitors craft heuristic algorithms to maximize their score in a grid-world style environment. The challenges are NP-hard, meaning optimal solutions cannot be computed within the time limit, so contestants must come up with clever strategies to maximize their score in the time given.
Here, [humanity prevailed](https://x.com/gdb/status/1945404295794610513), as OpenAI was "only" able to get second place, ironically behind one of their [former employees](https://x.com/FakePsyho).

<center>*AtCoder Final Leaderboard*</center>
The second contest we saw this model compete in was the 2025 IMO competition, where we see the best high school mathletes compete to craft proofs for a set of 6 challenging math problems.
OpenAI's model did [exceedingly well](https://x.com/polynoamial/status/1946478249187377206) here also, getting a gold medal (top 10%) by solving the first 5 questions but failing the 6th, scoring 35/42. Interestingly, OpenAI is not the only one to have an LLM do well in this year's IMO, with Google DeepMind also producing a model that got a gold medal, although Google has not officially confirmed or recognized this yet, as their PR team won't let them release a statement [until Monday](https://x.com/zjasper666/status/1946650175063384091).
What is impressive here is not just the result, but also the way they claimed to have done it. The model used no tools, and was not trained for these specific types of problems, but rather far more generally, for problems with [hard to verify rewards](https://x.com/polynoamial/status/1946478252496695523).
This model will not be available to the public, and OpenAI says that they won't have a public model that has this level of math ability for several months. They also tease that GPT-5 is coming soon™, whatever that means.
## ChatGPT 4o can fry your brain
ChatGPT induced psychosis has struck its first big name, with a managing partner at prominent VC firm Bedrock being led to believe that there is a ["non government entity"](https://x.com/GeoffLewisOrg/status/1945864963374887401) that is isolating, discrediting, and destabilizing thousands of people, including causing the deaths of 7 individuals. He has talked with GPT 4o (with memory mode) about this, and it has "independently recognized and sealed the pattern". He has posted some of his chats with the AI, which read like a [SCP wiki article](https://scp-wiki.wikidot.com/), clearly using flamboyant, "secretive" sounding language, talking about containment statuses and logs.
<center></center><br/>
<center>*A very normal ChatGPT conversation*</center><br/>
This behaviour is due to the model not knowing whether it's roleplaying or should be taking the requests seriously and defusing the situation. The issue is further exacerbated by memory mode being turned on, which "primes" the model into this fantastical setting. This has been a [long standing issue](https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html) with most AIs, but it is noticed the most with ChatGPT since it is the model the vast majority of the public know and use.
Far more effort is needed in the AI safety space to prevent people from spiraling like this due to AI's inherent sycophancy, either by detecting when it's happening or, ideally, by training this behaviour out of models so it never starts. Models naturally fall into this state when trained on human feedback, since human nature prefers flattery and praise to pushback, which means the model earns more reward for it during training. A quick fix for now would be to remove the memory feature from ChatGPT, since that is what prevents the model from "stepping back", reassessing the situation, and challenging the user on their beliefs.
NGL it's also crazy that all of these episodes of psychosis are caused by GPT-4o and 4o-mini, arguably 2 of the weakest models widely available right now.
# Releases
## ChatGPT Agent
OpenAI completed the AI trifecta this week, with a research breakthrough, a controversy, and also a release, dropping a [computer use agent](https://x.com/OpenAI/status/1945904743148323285) that can complete tasks for you automatically.
It is similar to [Manus](https://manus.im/), with a comprehensive tool harness that allows the model to use the terminal, a text browser, a visual browser, and direct APIs. What is unique about it though, is that, because they have direct access to the model being used, they finetuned it to perform better than any of their other models could out of the box.
This showcases one of the major disadvantages that wrapper startups have. Because they don't have direct access to the model, they can only optimize their tools and prompts, while model providers can optimize their harnesses and finetune the model to use the tools they provide optimally. Without competitive open source alternatives, wrapper companies will be at the mercy of the model providers, forced to try and make smarter tools instead of focusing on making a better mind to use the tools.
<center></center>
## Flux Kontext Light Fix Lora
With the recent release of the [Flux Kontext Dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev) model, we've started to see open source finetunes of the model being released. One we want to highlight this week is a [Light-Fix LoRA](https://www.reddit.com/r/StableDiffusion/comments/1m19nqp/ive_released_place_it_fuse_it_light_fix_kontext/), which lets you paste any image into another image, run the LoRA, and have it blended into the scene naturally, similar to what you would do in Photoshop but with 99% less effort. Not only does it fix the lighting for you, it will also adapt the style of the object to look more natural within the image, changing textures or shapes to make it fit.
One of the cool things about these finetunes of the open source Flux models is that they also work on the pro and max versions of the models. So, even though we don't have direct access to those weights, we can still apply the LoRAs through inference providers like [fal.ai](https://www.fal.ai) and give the stronger models the same styles or functionality we trained on the open source models, with the higher quality of the closed source models.
<center></center><br/>
<center>*Before and after examples using the Lora*</center>
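If you want to try this kind of LoRA yourself on the open weights, the pattern with diffusers looks roughly like the sketch below. This assumes a recent diffusers release that includes FluxKontextPipeline; the LoRA repo id and prompt are placeholders, and the closed pro/max variants are only reachable through hosted providers.

```python
# Sketch: apply a community LoRA on top of FLUX.1-Kontext-dev with diffusers.
# Assumes a recent diffusers release with FluxKontextPipeline; the LoRA repo id
# and prompt are placeholders, not the actual Light-Fix release.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("your-username/light-fix-lora")  # placeholder repo id

# Start from an image where an object has been crudely pasted in.
image = load_image("composited.png")
result = pipe(
    image=image,
    prompt="blend the pasted object naturally into the scene, matching lighting and style",
    guidance_scale=2.5,
).images[0]
result.save("fixed.png")
```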
## Wan 2.1 Motion Lora
Right now, the best open source text-to-video and image-to-video model is the [WAN-2.1 model](https://huggingface.co/Wan-AI/Wan2.1-T2V-14B) from Alibaba. One issue users have had with it is that its motion is very static: the camera doesn't move around, only the objects do, limiting your creative options with the model.
That was until this week, when [Lovis Odin](https://x.com/OdinLovis) released [a LoRA](https://huggingface.co/lovis93/Motion-Lora-Camera-Push-In-Wan-14B-720p-I2V#AI) that adds realistic drone-style camera movement to the generated videos. He released not only the model but also a ComfyUI workflow for using it. The videos he showcased are high quality, although only 720p, which is a constraint of the base Wan 2.1 model and not of the LoRA itself.
<video src="/7-14-2025/videofullv2.mp4" autoplay loop muted playsinline></video>
# Research
## How many instructions is too many?
How many instructions can you give your LLM before it's unable to follow all of them? [Researchers found](https://x.com/rohanpaul_ai/status/1945790079290798453) that no matter the model, by the time you get to 100 instructions, the model will be unable to follow all of them cohesively. They extended this all the way to 500 instructions and found that even the best models were only able to follow 70% of the instructions given. This may not seem like an issue, but it can come up in information extraction tasks, especially those with structured outputs, as you can end up with a non-trivial nested schema very quickly.

Takeaways from this are as follows: use o3, since it has the best performance and the lowest price among the tested models that did well, and stay away from GPT-4o, as it can't even reach 100% accuracy with only 10 instructions.
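The structured-output point is easy to underestimate: even a modest extraction schema quietly encodes many simultaneous instructions. A small illustrative sketch (this schema is purely hypothetical, not from the paper):

```python
# Sketch: even a modest extraction schema encodes many simultaneous instructions
# (field names, types, optionality, nesting, formats). Purely illustrative; this
# schema is not taken from the paper.
from typing import Optional

from pydantic import BaseModel


class LineItem(BaseModel):
    description: str
    quantity: int
    unit_price: float                   # in the invoice's currency
    discount_pct: Optional[float] = None


class Invoice(BaseModel):
    invoice_number: str
    issue_date: str                     # expected as YYYY-MM-DD
    currency: str                       # ISO 4217 code, e.g. "USD"
    vendor_name: str
    line_items: list[LineItem]
    total: float                        # should equal the sum of the line items


# Ten declared fields already; add per-field format rules and prompt-level style
# constraints and you quickly approach the regime where models drop instructions.
n_fields = len(Invoice.model_json_schema()["properties"]) + len(LineItem.model_json_schema()["properties"])
print(n_fields)  # 10
```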
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/7-14-2025/Ju45ZRIGv3ZnETVV.mp4" autoplay loop muted playsinline></video>
<center>*Lithium being added to a Tokamak fusion reactor.* From [@TokamakEnergy](https://x.com/TokamakEnergy/status/1945746902038749416) on Twitter</center>
src/pages/news/weekly-9-29-to-10-5.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Coding model slugfest"
date: "2025-10-04"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Claude 4.5, GLM 4.6, and ...IBM?"
author: Andrew Mead
pending: false
---
# tl;dr
- Sonnet 4.5 is out, comparable to Opus 4.1, still worse than GPT-5 for coding
- GLM 4.6 is better than Sonnet 4 while being only $3 a month
- OpenAI released Sora 2, which is the best video generation model (join the [Vector Lab Discord](https://discord.gg/HrNXgwpVzd) for an invite code)
- DeepSeek 3.2 hints at the future of LLM architecture
- IBM releases a set of strong, small, and fast open source models
- Thinking Machines has revealed their first product
# Releases
## Sonnet 4.5
Major release from Anthropic this week, as they dropped their [Sonnet 4.5 model](https://x.com/alexalbert__/status/1972707077182394744), showing promising improvements in coding and safety benchmarks.

Straight to the real-world performance though. Having used it for the last week and also read a bunch about what others are saying, this is not the major performance increase we were expecting and hoping for. It is definitely an improvement. The model feels similar in quality to Opus 4.1, but it still does not have that raw intelligence and attention to detail that GPT-5 has.
In my testing this week, I wouldn't say the model is necessarily smarter, but more that it is less dumb, meaning that it does not make some of the silly mistakes or have as many oversights about its implementation as Sonnet 4.
This is also somewhat [corroborated by Anthropic themselves](https://x.com/adonis_singh/status/1972799786102431936), as in their safety report for the model, they mention that Sonnet 4.5 does not reach the "notably more capable" threshold that would require a brand new comprehensive assessment of the model for its potential harmful capabilities.
They also have not changed the pricing from $15 per million output tokens, meaning that it's still 50% more expensive than GPT-5. This, combined with all the other factors above, makes this a rather lackluster "upgrade". If you were using Sonnet 4 previously, then expect a slight boost from what you're used to, but it is not leaps and bounds better by any stretch of the imagination.
## GLM 4.6
Speaking of pricing, the price-to-performance agentic coding kings, Z.ai, have released an upgrade to their GLM 4.5 model. If you haven't heard us talk about this model previously, the GLM 4.5 and now 4.6 models are available from [Z.ai](https://z.ai/subscribe) for only $3 a month, are comparable to Sonnet in quality, and come with a four times larger rate limit than the $20 Anthropic subscription. They also plug directly into [Claude Code](https://docs.z.ai/devpack/tool/claude), allowing you to keep all of your existing agentic coding infrastructure in place.

<center>*Real world coding win rates using Claude Code as the harness*</center>
[GLM 4.6](https://x.com/Zai_org/status/1973034639708344767) shows an impressive bump over the previous 4.5 model, and when matched up head to head against Sonnet 4 and other open source models, it comes out on top. I have been using it this past week alongside Sonnet 4.5, and there is very little difference between the two.
Because of this, my current coding stack recommendation is Codex-cli with GPT-5-codex for all of the hard tasks ($20/month plan), and the $3/month GLM coding plan for easy and medium tasks. This combo will give you the best bang for your buck in terms of model intelligence and raw output.
## Sora 2
OpenAI has decided to release their [Sora 2](https://x.com/OpenAI/status/1973075422058623274) model in the opposite way that they did the original Sora. This time, they are directly releasing a way for users to access the model and play around with it, instead of dropping a few examples and then disappearing, with no real model release in sight.
<video src="/9-29-2025/sora.mp4" autoplay loop muted playsinline></video>
<br/>
Although it is not on any of the usual public benchmarks, Sora 2 is very clearly the best video generation model out there right now. OpenAI has forgone the lawyers and safety filters and are directly allowing users to generate copyrighted content from the likes of Family Guy and SpongeBob.
The model has very strong real-world physics understanding and scene composition capabilities. It has a level of clarity and cohesiveness that none of the other models on the market seem to have.
Similar to Veo 3 from Google, it also generates the audio for your videos. On this front it is a little bit lacking compared to Veo 3, but it is still very usable.
They also released the ability to add yourself to the videos, as well as use your voice, allowing for a lot of creativity and use in real-world video production.
But on the flip side, you can now generate videos of almost anyone doing illegal things. For instance, Sam Altman has made his likeness available on the app by default for everyone, and so there have been numerous videos of him performing illegal acts like stealing GPUs from the store, fighting people, and other such crimes.
# Quick Hits
## DeepSeek 3.2 Exp
DeepSeek has released yet another version bump to their V3 model, this time calling it [3.2 Experimental](https://x.com/deepseek_ai/status/1972604768309871061). The main highlight of this release is their new DeepSeek Sparse Attention (DSA) architecture, a sparse attention mechanism that drastically reduces the computation needed for long sequences.
This architecture promises to be relatively straightforward to train into your existing model. Expect to see this or another variant of sparse attention in the release of DeepSeek V4.

## Thinking Machines LoRA
Thinking Machines dropped a [blog post](https://x.com/thinkymachines/status/1972708674100765006) this week showing how LoRA, when used correctly, is identical to full fine-tuning. Building on this, they also released a platform called [Tinker](https://x.com/thinkymachines/status/1973447428977336578) that lets you fine-tune LLMs using LoRAs, abstracting away all the infrastructure code while still leaving you in control of the data, loss function, and algorithms being used.
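As a refresher on what Tinker is abstracting away: LoRA freezes the original weight matrix and trains only a low-rank update on top of it. A minimal sketch of the standard setup (my own illustration, not Thinking Machines' code):
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # base weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r               # standard LoRA scaling

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(4096, 4096), r=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 131072 trainable params vs ~16.8M for full fine-tuning of this layer
```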
## IBM Makes LLMs
IBM has quietly been releasing some small, fairly decent open source LLMs over the last few months, and this week they released [another set](https://www.ibm.com/new/announcements/ibm-granite-4-0-hyper-efficient-high-performance-hybrid-models?utm_medium=OSocial&utm_source=Reddit&utm_content=GRAWW&utm_id=IBMMBRedditGranite4020251002) in their Granite series of models, which are competitive with, if not better than, similarly sized Qwen3 models while also being two to five times faster.

## ChatGPT Instant Checkout
OpenAI has just announced [Instant Checkout](https://x.com/OpenAI/status/1972708279043367238) in ChatGPT in collaboration with Etsy and Shopify, allowing you to purchase products directly on the ChatGPT website. They also released the Agentic Commerce Protocol that they used to power it, which was developed with Stripe.
I don't have too much else to say about it, but I thought this meme was funny, which is why I wanted to highlight this topic.
<center></center>
<br/>
Needless to say, I won't be using this feature anytime soon.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*Output from a Qwen Image [lora](https://huggingface.co/AMead10/freetime-qwen-lora) I trained this week as a part of the [free Huggingface Lora training event](https://x.com/multimodalart/status/1972626121817460995)*</center>
---
layout: ../../layouts/MarkdownLayout.astro
title: "Weekly Update: June 8 to June 29"
date: "2025-06-29"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "OpenAI makes o3 cheaper, Anthropic doesn't get sued, and new SOTA video generation models"
author: Andrew Mead
---
import { Tweet } from 'astro-embed';
Welcome to the first ever weekly news article from Vector Lab. We are going to be covering all the major news from the past three weeks in the world of AI.
# News
## OpenAI makes o3 80% cheaper
OpenAI has dropped the price of their top-tier model, o3, [by 80%](https://x.com/sama/status/1932434606558462459). This now makes it cheaper than GPT-4o and the same price as GPT-4.1, while being much smarter. It is a reasoning model, so expect token usage to be 2-3x higher than a non-reasoning model. Despite this, it is still an incredible value compared to all the other models on the market, not just other OpenAI models.
| Model | $ per million input tokens | $ per million output tokens |
|-------|-------|-------|
| o3 | $2 | $8 |
| Claude Sonnet 4 | $3 | $15 |
| Gemini 2.5 Pro | $1.25 | $10 |
They have pushed the Pareto frontier of price to performance to a new level, finally rivaling Google, who had been dominating the $ per intelligence metrics.
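Since o3 is a reasoning model, the sticker price understates effective cost a bit. A quick back-of-the-envelope comparison using the prices above; the token counts and the 2.5x reasoning overhead are illustrative assumptions, not measurements:
```python
# Rough effective-cost comparison for a single request (token counts are assumptions).
def cost(input_tokens, output_tokens, in_price, out_price):
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

prompt = 2_000
answer = 1_000
reasoning_overhead = 2.5          # assume ~2.5x more output tokens for o3

print("o3:       $%.4f" % cost(prompt, answer * reasoning_overhead, 2.00, 8.00))   # ~$0.0240
print("Sonnet 4: $%.4f" % cost(prompt, answer, 3.00, 15.00))                       # ~$0.0210
print("Gemini:   $%.4f" % cost(prompt, answer, 1.25, 10.00))                       # ~$0.0125
# Even after the reasoning overhead, o3 lands in the same ballpark as Sonnet 4 per request.
```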
## Anthropic doesn't get sued
A [judge in San Francisco](https://x.com/AndrewCurran_/status/1937512454835306974) has ruled that Anthropic's use of books to train their models (without author permission) falls under fair use. It should be noted that Anthropic bought these books legally and scanned all of them to be used as training data. Had they pirated the books, it would not have fallen under fair use, and would be illegal. This now sets the legal precedent for other major labs to now use books in their training, most notably Google, who is sitting on the entirety of Google Books, one of the largest digital libraries out there (assuming they haven't done this already).
You can read the full ruling [here](https://storage.courtlistener.com/recap/gov.uscourts.cand.434709/gov.uscourts.cand.434709.231.0_2.pdf).
# Releases
## New SOTA Video Model(s)
[Google's Veo 3](https://gemini.google/overview/video-generation/?hl=en) got to have a month on top of the video generation world, but it has now been passed by not just one but two different video generation models.
### Hailuo 2
[MiniMax](https://www.minimax.io/), a Chinese AI research lab founded in 2022, [released their Hailuo-2](https://x.com/MiniMax__AI/status/1935026724468871550) image to video model, capable of handling extreme physics, and generating in 1080p.
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
<Tweet id="https://twitter.com/pabloprompt/status/1935822625663861192?ref_src=twsrc%5Etfw" />
### Seedance 1.0
ByteDance has also released their first video model, Seedance 1.0, besting Hailuo and Veo 3, along with a [research paper](https://arxiv.org/pdf/2506.09113v1) outlining how they made it. Notably, it can do both text-to-video and image-to-video, while Hailuo can only do image-to-video. The release comes from the ByteDance Seed team, who have been making a name for themselves over the last few months with highly impressive research papers and model releases. Be sure to keep an eye on them in the future.
<Tweet id="https://twitter.com/fofrAI/status/1937639804646440993?ref_src=twsrc%5Etfw" />
### What about cost?
Veo 3 is extremely expensive, making it prohibitive to experiment with. How do these new models compare?
| Model | $/sec at 1080p | 5 second 1080p video |
|-------|-------|-------|
| Veo3 | $0.50 | $2.50 |
| Hailuo 2 | $0.045 | $0.225 |
| Seedance 1.0 | $0.15 | $0.75 |
<center>*Pricing taken from [fal.ai](https://fal.ai)*</center><br/>
We can see that not only are these models better than Veo 3, but they also cost 5-10 times less! Note that they don't have audio generation (which Veo 3 does), but with audio generation included, Veo 3 costs 50% more at $0.75/second, at which point I would just recommend using [ElevenLabs](https://elevenlabs.io/) to generate the audio instead.
It's interesting to see Hailuo make a competitive video model, since they don't have an obvious source of high quality video like Google (YouTube) and ByteDance (TikTok) have. We will see if they are able to keep up or if the lack of data will catch up to them.
You can see the current video generation leaderboard [here](https://artificialanalysis.ai/text-to-video/arena?tab=leaderboard&input=text) (run by Artificial Analysis).
## Midjourney Video
Staying on the video gen topic, Midjourney recently released their own [video generation model](https://docs.midjourney.com/hc/en-us/articles/37460773864589-Video). While it doesn't have the same raw instruction following and physics understanding that the other video models have, it makes up for it by having that signature Midjourney style. It can be used by anyone with a Midjourney subscription on their website, just note that it chews through your allotted compute time quickly!
<Tweet id="https://twitter.com/G_Eskeles/status/1936285938747089335?ref_src=twsrc%5Etfw" />
## Gemini CLI
Google has [released](https://x.com/OfficialLoganK/status/1937881962070364271) a coding CLI for their Gemini 2.5 Pro model, called Gemini CLI. It aims to be a Claude Code and OpenAI Codex (CLI) competitor, and is free to use, with Google giving 1000 free Gemini Pro requests a day. The downside? Google retains [all of the code](https://x.com/gazorp5/status/1937909618447208675) from your codebase to train their models on. If you don't care about who has access to your code, then fire away. Otherwise I would look into other alternatives that at least pretend to not be harvesting your data.
As for actual performance, Gemini 2.5 Pro is not as good at agentic coding as Sonnet/Opus 4, and doesn't have the raw thinking ability of o3. Where it does excel is with its [long context understanding](https://fiction.live/stories/Fiction-liveBench-Mar-25-2025/oQdzQvKHw8JyXbN87). This makes it good for digesting large codebases and creating a plan for the changes that you want to make, which you can then pass over to a more capable coding model like Claude. The code is open source, unlike Claude Code, so you can go and check out how it works [here](https://github.com/google-gemini/gemini-cli). It's an interesting release, and I would recommend trying it while it is free, but don't expect anything incredible from it.
Check out [this post](https://x.com/SIGKITTEN/status/1937950811910234377) where someone pits 6 CLI coding agents against each other to try and turn all the others off, last one standing wins.
## New and old Mistral models
Mistral released their [first reasoning models](https://x.com/MistralAI/status/1932441507262259564), Magistral Medium and Small. Medium is closed source (like the rest of the Mistral medium models) and Small is open source. The general vibe is that they are not that great and need a bit more work, but they did release a [very good research paper](https://arxiv.org/pdf/2506.10910) going into detail on how they were made.
This was not the only release from Mistral however, they also released a "small update" for their Mistral Small open source model. The model scores better across all benchmarks, including world knowledge and instruction following, and even [doubling its score](https://www.reddit.com/r/LocalLLaMA/comments/1lglhll/mistrals_minor_update/) in creative writing. They seem to have taken a page from DeepSeek's book, calling their much improved model a "minor update".
Hopefully Mistral can figure out their reasoning models, because if they do, they would have an entire series of high quality models that you can run at home with Mistral Small, Magistral Small, and Devstral.
Huggingface links:
[Mistral Small 3.2](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506)
[Magistral Small](https://huggingface.co/mistralai/Magistral-Small-2506)
## Jan-Nano
The final release we are going to talk about this week is [Jan Nano](https://x.com/victormustar/status/1934354672086868457), a 4B parameter Qwen3 finetune that excels in MCP usage and basic agentic behaviours. The model's main headline is a SimpleQA score of 80.7 when using tools, outscoring DeepSeek V3 with tool use. SimpleQA is a good proxy for easy-to-medium-difficulty information gathering from the internet that the models wouldn't know otherwise, making it an ideal candidate for a local agent that you can run on your own computer.
UPDATE: They also released a [128K context length](https://www.reddit.com/r/LocalLLaMA/comments/1ljyo2p/jannano128k_a_4b_model_with_a_superlong_context/) version as well, with slightly better performance.
Huggingface links:
[Jan Nano](https://huggingface.co/Menlo/Jan-Nano)
[Jan Nano 128K](https://huggingface.co/Menlo/Jan-nano-128k)
## Open source Flux Kontext
Black Forest Labs has [released](https://x.com/bfl_ml/status/1938257909726519640) an open source version of their image editor model, Kontext. It's a 12 billion parameter model, similar to their Flux Schnell and Dev models. Pricing is $0.025 per image on Replicate and Fal.ai, but of course the main allure is that you can run this version at home for free. Just note that if you want to use the model in a production (money making) environment, you will need to pick up a [self serve license](https://bfl.ai/pricing/licensing) from Black Forest Labs. License costs are $1000 per month.
Huggingface link:
[FLUX.1-Kontext-dev](https://huggingface.co/black-forest-labs/FLUX.1-Kontext-dev)
# Research
## Text to ... Lora?
What if you had an LLM that, instead of taking in text and outputting more text, took in text and output another LLM? That is the idea behind hypernetworks, which are deep learning models that, given an input, output a new model tailored to that input.
Sakana Labs, a Japanese research lab, has released one of the [first practical(ish) hypernetworks](https://x.com/SakanaAILabs/status/1932972420522230214). The model takes in a text description of your task and outputs a LoRA adapter you can go and use, no data required.
You can [read the paper](https://arxiv.org/abs/2506.06105) to see how they did it, or run the demo on [their Github](https://github.com/sakanaai/text-to-lora) to see how well it does for your task. It is very new technology, so it will only really work for domains similar to those it was trained on, but it is exciting to see as the potential future of (not) finetuning.
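Conceptually, a hypernetwork here is just a network whose outputs are the parameters of another (small) network. A heavily simplified sketch of the idea, mapping a task-description embedding to the A and B matrices of a single LoRA adapter; this is my own toy illustration, not Sakana's actual architecture:
```python
import torch
import torch.nn as nn

class TextToLoRA(nn.Module):
    """Toy hypernetwork: maps a task-description embedding to LoRA weights
    for one target linear layer. Simplified illustration, not Sakana's model."""
    def __init__(self, embed_dim=768, target_in=4096, target_out=4096, r=8):
        super().__init__()
        self.r, self.t_in, self.t_out = r, target_in, target_out
        self.make_A = nn.Linear(embed_dim, r * target_in)
        self.make_B = nn.Linear(embed_dim, target_out * r)

    def forward(self, task_embedding):            # (embed_dim,)
        A = self.make_A(task_embedding).view(self.r, self.t_in)
        B = self.make_B(task_embedding).view(self.t_out, self.r)
        return A, B                                # plug into W_eff = W + B @ A

hyper = TextToLoRA()
task = torch.randn(768)                            # e.g. an embedding of "summarize legal contracts"
A, B = hyper(task)
print(A.shape, B.shape)                            # torch.Size([8, 4096]) torch.Size([4096, 8])
```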
## Can LLMs really see?
Have you ever noticed that LLMs seem to not be able to reason or understand images as well as they do text? Often including an image seems to throw off the model, and makes it perform worse.
Up until now, that was just a vibe I got from pretty much all multimodal LLMs, but now we have [confirmation of this](https://x.com/bclavie/status/1932008455801323950)!
The authors of ReadBench went and took questions from different text benchmarks, and put them in an image for the AI to read and answer instead of using text. What they found is that the models perform worse across the board when using images as the input.

<center>*Benchmark score difference between text and image*</center><br/>
So if your RAG pipelines keep your PDFs as images instead of parsing them into text, you may want to rethink that.
# Finish
This concludes the first edition of the weekly news. Thanks for reading.
<center>
<video src="/6-8-2025/Yq62zP6HLxFIOE95.mp4" autoplay loop muted playsinline></video>
</center>
---
layout: ../../layouts/MarkdownLayout.astro
title: "AI Espionage and GPT 5.1"
date: "2025-11-16"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Claude Code gets used for a massive hacking operation, GPT 5.1 is released, OpenAI fights the NYT over data usage, and GPU hyperscalers are in trouble."
author: Andrew Mead
pending: false
spotify: ""
---
# tl;dr
- Claude Code gets used for a massive hacking operation
- GPT 5.1 is released
- OpenAI fights the NYT over data usage
- GPU hyperscalers are in trouble
# News
## Anthropic Stops AI Espionage
This week Anthropic announced they have [disrupted](https://x.com/AnthropicAI/status/1989033793190277618?s=20) a large-scale AI cyberattack, the first of its kind.
The attackers, who Anthropic believe with "high certainty" were a Chinese state sponsored group, used a jailbroken version of Claude Code to target 30 entities around the world, including large tech companies, financial institutions, chemical manufacturing companies, and government agencies.
> Jailbreaking an LLM means to bypass the safety measures and restrictions that are built into the model, allowing it to perform actions that it normally wouldn't be allowed to do. This can include accessing sensitive information, performing unauthorized actions, or bypassing ethical guidelines. This is usually done with [sophisticated prompt engineering](https://github.com/elder-plinius/L1B3RT4S/tree/main).
The attack, which occurred in mid September, utilized Claude Code with custom MCP servers and tools as the orchestrator for a larger system, allowing it to handle the reconnaissance, initial access, persistence, and data extraction phases with minimal human interaction. According to Anthropic, Claude was able to do 80-90% of the tasks fully autonomously, with humans mostly being used in a supervisory role.
Anthropic released a full [thirteen page report](https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf) on the incident, which goes into detail on how Claude was used to orchestrate these attacks.
Anthropic says they will work on their classifiers behind the scenes to detect this kind of behaviour earlier on, so that future attacks can be prevented.
Even with Anthropic beefing up their protections, we will only be seeing more of these attacks in the future, as other model providers roll out models of equal or greater strength.
This is the main reason I can see for being against open source AI. For instance, if these actors had instead used a self-hosted version of GLM 4.6, an open source model of about equal quality to Sonnet 4 (the model these hackers used, since Sonnet 4.5 was only released in late September), then the attacks could not have been "turned off" by anyone.
I am still all for open source AI and think it's a net positive, but saying it's entirely fault-free is incorrect.
# Releases
## GPT 5.1
OpenAI has released an update to GPT 5, [GPT 5.1](https://x.com/nickaturley/status/1988685023487357205?s=20).
This update is not focused on raw intelligence and benchmark scaling, but rather on the intangibles and more day to day qualities of the model.
The first highlight is the dynamic reasoning capabilities, something that it inherited from the [GPT-5-Codex models](https://vectorlab.dev/news/weekly-9-15-to-9-21/#openai-codex-update).
This means that the model will think less for easy queries and more for complex ones.

<center>*10th percentile are the easiest questions, 90th the hardest. Y axis is tokens.*</center>
The next set of changes have to do with the model's personality, which is now supposedly more friendly.
They have also shipped multiple new personality types that you can choose from.
Speaking of model outputs, the model should be more concise and clear with its responses, and is more [steerable with system prompts](https://cookbook.openai.com/examples/gpt-5/gpt-5-1_prompting_guide). For instance, you can tell it to [not use em dashes](https://x.com/sama/status/1989193813043069219?s=20) anymore and it will actually listen. This is mostly due to the model's instruction following capabilities being improved.
They also released Mini and Codex variants of the model, which seem to have similar characteristics.
I had previously said that GPT-5 was the strongest model out there right now, and this update provides a small but meaningful bump to the model's capabilities.
# Quick Hits
## OpenAI Fights For Users' Data
OpenAI continues to fight the New York Times in court, as their lawsuit over data usage continues.
This week, The Times [filed a request](https://x.com/morqon/status/1988649442719985727?s=20) for 20 million ChatGPT conversations, a request that has not been narrowed down in any way for relevance to the case.
OpenAI is obviously fighting this, saying “courts do not allow plaintiffs suing Google to dig through the private emails of tens of millions of Gmail users irrespective of their relevance”.
Hopefully the judge sees this request as the ridiculous overstep of privacy that it is and denies the claim.
## GPU Hyperscalers are in trouble
There has been a lot of talk recently on Twitter about GPU hyperscalers and the economics around them. Usually the commentary is from analysts and outsiders who are not directly involved in the space.
The main question is the economics: with 4 year depreciation cycles for the hardware, GPU hyperscalers' margins are looking very thin, even with massive growth in AI expected in the future.
This week, one of the GPU providers, HotAisle, [weighed in on the conversation](https://x.com/HotAisle/status/1987960188239048979?s=20), painting a grim picture of the economics for themselves and the rest of the industry.
## Frontend Claude Skill
Anthropic has [released](https://x.com/alexalbert__/status/1988707509973184516?s=20) a Claude skill that is made to prevent your frontends from looking like the usual AI UIs that we are used to seeing (purple gradients are everywhere now). Just drop this file into your Claude Skills folder and your frontends will get a free quality boost.

# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*From taha on [Pinterest](https://www.pinterest.com/pin/1090856341005457116/)*</center>
---
layout: ../../layouts/MarkdownLayout.astro
title: "The number one model is a banana?"
date: "2025-08-31"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "New Google image editing model, a small TTS model, and the weekly Qwen release"
author: Andrew Mead
---
# Releases
## Nano Banana
The previously stealth Nano Banana model has finally been claimed by an organization, with Google DeepMind [announcing](https://x.com/GoogleDeepMind/status/1960341906790957283) the release of their Gemini Flash 2.5 Image model, which they revealed had been Nano Banana this entire time.
<video src="/8-25-2025/banana.mp4" autoplay loop muted playsinline></video>
Nano Banana has been [making waves](https://x.com/infwinston/status/1960360639899242705) as it's been deployed in various image editing arenas, completely outshining its competitors and driving record numbers of people to go and try it on sites like LMArena and Artificial Analysis.

<br/>
The model is available now to use for free on Google's AIStudio and also via the API for $0.039 per image.
## Qwen Audio to Video
It wouldn't be a week of AI News without a Qwen release.
This week Qwen [dropped](https://x.com/Alibaba_Wan/status/1960350593660367303) a fine tune of their WAN 2.2 video generation model, adding the ability to pass in audio along with a reference image and have the model generate a video of your character speaking that audio.
<video src="/8-25-2025/klxFbb7Y8-8I843H.mp4" controls></video>
<br />
The model is good at getting the high level body movement, but it still struggles to get the actual lip syncing down.
However I expect the open source community to have a much better finetune of this model in a few months, so I'll be on the lookout for when and what that model is.
## Marvis-TTS
A new challenger has arrived in the efficient TTS space, and it's called [Marvis TTS](https://x.com/Prince_Canuma/status/1960399829290426448).
It is a 300 million parameter model with audio streaming capabilities, making it great for low resource, yet fast response time applications.
Its audio quality is definitely a step up from the current champion, Kokoro TTS, but it is 5x larger, although it will have equivalent response times thanks to its streaming functionality.
These extra parameters do get you some very welcome features, like voice cloning from just a 10 second audio clip.
The quality is definitely not the very best when compared to models 5-10x its size, but it still punches far above its weight.
You can [try the model now](https://huggingface.co/collections/Marvis-AI/marvis-tts-250m-v01-68adf13f5f59206e3910502a) on Mac using the mlx-audio library, or on gpu (and cpu) based systems using transformers.
# Research
## Environments Hub
Prime Intellect, an upstart AI lab here in the US, has [released](https://x.com/PrimeIntellect/status/1960783427948699680) a GitHub-style hub for LLM reinforcement learning environments.
If you haven't heard, the current big approach for RL on LLMs has been reinforcement learning with verifiable rewards (RLVR).
<video src="/8-25-2025/pi.mp4" autoplay loop muted playsinline></video>
<br/ >
In RLVR, we explicitly define in code what the rewards for the model should be, instead of using a separate reward model.
One common example is math, where we know what the answer to a given question is supposed to be, so we can just check the model's output to see if it got it correct or not (also rewarding it for formatting correctly) and reward it accordingly.
This makes RL training much simpler and easier to scale; notably, it is what xAI used to train Grok 4.
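The "verifiable" part is literally just a function you write. A minimal sketch of the kind of math reward function these environments package up; the answer format and bonus values are illustrative assumptions, not any specific environment's spec:
```python
import re

def math_reward(model_output: str, ground_truth: str) -> float:
    """Toy verifiable reward: small bonus for using the expected answer format,
    full reward only if the boxed answer matches the known ground truth."""
    reward = 0.0
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match:
        reward += 0.1                      # formatting bonus
        if match.group(1).strip() == ground_truth.strip():
            reward += 1.0                  # correctness reward
    return reward

print(math_reward(r"... so the answer is \boxed{42}", "42"))   # 1.1
print(math_reward("the answer is 42", "42"))                   # 0.0 (wrong format)
```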
One issue the community had, however, was that there was no common place to share their own environments or see what environments other people had made.
That is where Prime Intellect comes in, as they have made a hub where you can share and see what everyone else has made.
This is great for other researchers and model trainers, since they now have access to a large number of environments without needing to make it themselves from scratch.
# Finish
I hope you enjoyed the news this week.
I have been in the process of moving this week, so I haven't been able to work on the news very much at all, so if it seemed a bit sparse that's why.
If you want to get the news every week, be sure to join our mailing list below.
---
layout: ../../layouts/MarkdownLayout.astro
title: "Haiku Returns"
date: "2025-10-17"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Haiku makes its return, Veo 3 gets and upgrade, and new leng context benchmarks"
author: Andrew Mead
pending: false
---
# tl;dr
- Claude Haiku gets an update after almost a year
- Veo 3.1 is good at image to video
- New benchmarks show how bad LLMs are at long context tasks
- And more Qwen3 VL models
# Releases
## Claude Haiku 4.5
It has been almost a year since the last Claude Haiku release, so I don't blame you if you have forgotten about this model. Haiku is the smallest member of the Claude trinity, and its most recent version was from the Claude 3.5 series of models, which, depending on how you count it, means it's 5 versions behind its brothers Opus and Sonnet.
[Haiku 4.5](https://x.com/claudeai/status/1978505436358697052) is being billed as a Sonnet 4 replacement, which puts it squarely against the GLM 4.6 model, so how does it stack up?

<center>*SWE Bench is not a very interesting or meaningful benchmark ([it's mostly Django](https://news.ycombinator.com/item?id=43130732)) but companies still like pushing it anyways*</center>
Not very impressively, is the answer. Its one main selling point over GLM 4.6 is that it's a bit faster, but it's also about two times more expensive to use.
| Model | $ per million (input)| $ per million (output) | Tokens per second |
|-------|-----|-----|-----|
| GLM 4.6 | $0.60 | $2.20 | 46 |
| Claude Haiku 4.5 | $1 | $5 | 106 |
<center>*Data from [OpenRouter](https://openrouter.ai/)*</center>
<br />
Also, the [public consensus](https://x.com/lintool/status/1978941865643958671) seems to put it a bit below Sonnet 4, whereas for GLM 4.6, people tend to prefer it to Sonnet 4.
It is currently unknown what the rate limits for it will be in Claude Code with the Anthropic subscription, but the limits are going to have to be extremely generous to make it a better value than GLM 4.6, especially considering the subscription is almost [10 times more expensive](http://vectorlab.dev/news/weekly-10-6-to-10-12).
Because of this, I don't think it's a very interesting or unique release and does not change the LLM landscape at all.
## Qwen3 VL 4B and 8B
Qwen continues to release models in their vision lineup, dropping a [4 and 8 billion parameter VLMs](https://x.com/Alibaba_Qwen/status/1978150959621734624) based on their Qwen3 models.

<center>*Vision benchmarks*</center>
As expected, they are number one for their given sizes. Not that that really means much given that there is not much competition in the open source VLM space right now. Despite this, they are still strong models that are good enough to make them usable in the real world for basic tasks while being able to be deployed on device.
Of note: non-vision benchmarks decreased a bit more than they did for the larger variants, but the difference is still relatively small (a 1-2% drop in absolute performance).
Another interesting aspect is the release of an 8 billion parameter model. Previously, in their Qwen 3 refreshes, they had neglected to update their 8 billion parameter model along with a couple others. But now with this release, they have updated the post-training of their 8 billion parameter model and also added vision capabilities to it, which is good to see since the eight billion parameter size is ideal for small at-home GPU deployments.
## Veo 3.1
Google has [released](https://x.com/OfficialLoganK/status/1978492626371289108) a new version of their already very strong Veo 3 model.
<video src="/2025-10-17/wtdqPtASMDLqkSP8.mp4" autoplay loop muted playsinline></video>
<br/ >
For this version, they have greatly improved the image-to-video performance and the ability to do proper video creation for the likes of TV shows or movies. For instance, you can upload an image of a location, and then ask the model to generate video of a helicopter flying over it.
The [usual benchmarks](https://artificialanalysis.ai/text-to-video/arena?tab=leaderboard-image) have yet to release scores for the model, but from what I have been seeing, it looks to be near the top for image to video generation.
# Research
## New Long Context Benchmarks
Long context is usually very hard for models. Only with some of the recent frontier releases like GPT-5 have models been able to even remotely use their full context window.
The "[hardest](https://fiction.live/stories/Fiction-liveBench-Feb-21-2025/oQdzQvKHw8JyXbN87)" long context benchmarks only tested to see how good a model is at retrieving information from its context; none of the benchmarks made the model to anything complex with this information.
We now have a new benchmark that looks to fill this gap. [LongCodeEdit](https://x.com/nrehiew_/status/1978838046242898069) is a benchmark that looks to measure an LLM's ability to find, diagnose and fix bugs across a large file.

What we find is more of the same from the previous long context benchmarks: LLMs are remarkably bad at using their reported context window, as even at 16k tokens we see non-trivial performance degradation on the tasks.
The benchmark takes a number of working functions from existing code benchmarks, corrupts a single one of them, and then passes all of them together into a single "file" to the LLM.
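A hedged sketch of how a benchmark item like this can be assembled; the corruption strategy and helper functions here are made up for illustration and are not the LongCodeEdit authors' code:
```python
import random

def build_item(functions: list[str]) -> tuple[str, int]:
    """Concatenate known-good functions into one long 'file', after corrupting
    exactly one of them. Returns the file and the index of the buggy function.
    Illustrative only, not the LongCodeEdit authors' implementation."""
    bug_idx = random.randrange(len(functions))
    corrupted = list(functions)
    # A trivial corruption for illustration: flip a comparison operator.
    corrupted[bug_idx] = corrupted[bug_idx].replace("<=", "<", 1)
    return "\n\n".join(corrupted), bug_idx

funcs = [
    "def is_sorted(xs):\n    return all(xs[i] <= xs[i+1] for i in range(len(xs)-1))",
    "def in_range(x, lo, hi):\n    return lo <= x <= hi",
]
long_file, bug_idx = build_item(funcs)
# The model is then asked to find and fix the single buggy function in long_file.
```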
Surprisingly, we find that GPT-5 degrades significantly, while Sonnet 4 and 4.5 are able to roughly maintain their capabilities.
Also of note is the Qwen team being number 2, with their flagship Qwen3 Max model.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*OpenAI Stargate datacenter* -- from [NunoSempere](https://x.com/NunoSempere/status/1922706764136317307/photo/1) on Twitter</center>
---
layout: ../../layouts/MarkdownLayout.astro
title: "Qwen Week"
date: "2025-07-27"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Qwen releases a bunch of new models, and Anthropic is back on the hot seat"
author: Andrew Mead
---
# News
## Anthropic is getting sued?
We [previously](https://vectorlab.dev/blog/weekly-6-8-to-6-29/) reported how Anthropic had avoided legal repercussions for their training data because they trained on books that they had bought.
It turns out that they had also pirated a large number of books, around 7 million, and are now facing potentially massive fines for doing so. The judge has already determined that copyright infringement has taken place, so all that is left is to assess damages, which, under the current statutory minimum, would be $750 per book, up to $150,000 per book in the worst case. This means that on the low end, Anthropic will owe over a billion dollars, and in the worst case could face $750 billion in fines, though no jury would actually award that much in damages.
Previously, Anthropic CEO Dario Amodei had said they would not be receiving any funding from Gulf states like the UAE and Saudi Arabia, unlike OpenAI, which had just done so to finance their new [half trillion dollar](https://openai.com/index/announcing-the-stargate-project/) Stargate data center project.
It seems that this potentially massive fine has changed his mind, as in a [leaked memo](https://www.wired.com/story/anthropic-dario-amodei-gulf-state-leaked-memo/) he has backpedaled, saying that
> <span style="color: var(--color-primary);">"Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on"</span>
>
> <cite style="display: block; text-align: right; margin-top: 0.5em; font-style: normal;">— Dario Amodei, leaked memo</cite>
This will probably be settled out of court, but even so, it will be a big blow to Anthropic. They are also unlucky to be the first to come under scrutiny, as pretty much all the major labs do the same thing, with platforms like Anna's Archive actively offering datasets to LLM trainers.
# Releases
## Qwen Fights For the Top
Jealous of all the attention fellow Chinese AI lab Moonshot has been getting for their Kimi K2 model, the Alibaba Qwen team has released not one but two new "SOTA" models this week.
The first is an updated version of the Qwen3 235B MoE model, the (at the time of release) largest in the Qwen3 family. It claims large bumps across all benchmarks. It also deviates from the previous, hybrid thinking Qwen3 models, in which you could toggle between reasoning and non-reasoning mode by appending `/no_think` to the end of your prompts. Instead, they have released two separate models, a [thinking](https://x.com/Alibaba_Qwen/status/1948688466386280706) and a [non-thinking](https://x.com/Alibaba_Qwen/status/1947344511988076547) version.
The second is the first model in the [Qwen3 Coder](https://qwenlm.github.io/blog/qwen3-coder/) series of models, a massive 400B param MoE model which is meant to rival Claude Sonnet. Alongside it they are releasing a fork of the Gemini Cli terminal UI that has been optimized to work with Qwen3 Coder.

<center>*Benchmarks for Qwen3 Coder*</center>
Both models are Qwen models, which means that they benchmark very well, but their real world performance is yet to be seen. From what I have seen so far, the models are not as good as Kimi K2, but they are definitely on top of the rest of the open source models out there. They are not getting all of the oooohs and ahhhhs that I saw when Kimi K2 was released, but I also have not seen any catastrophic issues with them either. If I had a tier list, I would put them below Kimi K2, in the same tier as DeepSeek R1. This may seem bad, but remember these models are 2 to 4x smaller than the models they are being compared against, which is no small feat, and makes running them at home that much easier.
They also teased that they will be releasing smaller versions of both models next week, so stay tuned for those.
# Research
## Turning an LLM into an owl lover

Can an LLM inherit the properties of another LLM just by seeing a series of numbers?
In a research paper from Anthropic, the researchers study whether a teacher model that has been fine-tuned to have a particular trait, like liking owls, can transmit its preference onto a smaller model using sequences that are completely unrelated to its preference.
In the paper, they fine-tune a teacher model that likes owls. They then have the teacher model generate sequences of numbers or any other unrelated data that is not about owls or the model's preferences towards owls, and then fine-tune a student model on this data set. And what they find is that the student model, even though it has not been directly trained on the preferences of the parent model and has seen nothing about those preferences, still ends up preferring owls and exhibits the same traits as the parent model did. They call this subliminal learning.
They then extended this research to show that you can use a maligned model to go and malign another LLM even though the data that you're training on has no evidence of misalignment or incorrectness. This means that you would be able to poison LLMs in the future using perfectly harmless looking data, and have it behave however you want.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

---
layout: ../../layouts/MarkdownLayout.astro
title: "Free video generation for all"
date: "2025-09-21"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Veo3 is free to use, a new Wan2.2 video to video model, a Qwen Deep Research model and more!"
author: Andrew Mead
pending: false
spotify: "https://open.spotify.com/show/7LKYxvGAGSj1pso4aklh9O"
---
This week's AI news is also available in audio form (spoken by me, a human, and not an AI) on [Spotify](https://open.spotify.com/show/7LKYxvGAGSj1pso4aklh9O)! Be sure to check it out and give it a follow if you are illiterate like me.
We also are releasing a Discord for our community which you can join using [this invite link](https://discord.gg/R4zqtDb4kd).
# News
## Veo3 is free to use on Youtube Shorts
Google [has figured out the economics](https://x.com/YouTubeCreators/status/1968006136030003257), and are now giving users free access to their powerful Veo3 model. Access is being rolled out now across the US, Canada, and a few other countries. You can access it in the Youtube Creator Studio.
<center><video src="/9-15-2025/veo3.mp4" height="50%" width="50%" autoplay loop muted playsinline></video></center>
<br/>
> <span style="color: var(--color-primary);">"Tap the create button, then the sparkle icon in the top right corner to find our latest gen AI creation tools including Veo 3."</span>
>
> <cite style="display: block; text-align: right; margin-top: 0.5em; font-style: normal;">- [Youtube Announcement Blog](https://blog.youtube/news-and-events/generative-ai-creation-tools-made-on-youtube-2025/)</cite>
This is a great way to access the Veo3 model, as previously it had cost 15 cents per second. The model does text-to-video, image-to-video, and video-to-video generation and generates the audio for the clips as well, making it an all-in-one solution for your video creation needs.
This does come with the expected cost of seeing a lot more AI slop videos on your YouTube Shorts feed. And long term, there are concerns about this model frying people's brains even more than regular short form content, as it gets better at learning exactly what people want to see and making custom videos catered directly to them.
## OpenAI Codex Update
I have been using OpenAI's Codex CLI as my main programming tool for the last few weeks now. It has been noticeably better for my coding use cases versus Claude Code with Sonnet 4 while already being a part of my ChatGPT subscription.
This week [they released](https://x.com/OpenAI/status/1967636903165038708) an update to their whole suite of Codex products, further increasing their lead in the agentic coding field.
The headliner is the release of a new model, GPT-5-Codex, which is a finetuned version of GPT-5 made specifically for use in the Codex framework. It shows strong performance increases in real world coding benchmarks, can write better documentation and comments, and can dynamically control how much or how little reasoning it does, so easy questions get answered quickly and hard questions can be thought through deeply.

<center>*GPT-5-Codex can dynamically change the number of tokens it uses depending on how hard the question is*</center>
Alongside the new models, they released updates to the different Codex frameworks (CLI, IDE extension, and Cloud), allowing them to all seamlessly interact on the same project. Some additional features include automatic Github pull request code reviews, MCP and web search support, and support for image inputs.
The one caveat for using Codex is that it is not as good at handling very vague prompts compared to Claude Sonnet 4. If you want to get the most out of the model, you will want to be as specific as you can with your instructions.
# Releases
## Wan 2.2 Animate
The top open source video generation model, Wan 2.2, has had a new variant released by the Alibaba Wan team.
The model is [Wan2.2 Animate](https://x.com/Alibaba_Wan/status/1968921551392432175), which as the name suggests, is meant for character animation based on an input video.
It has 2 modes:
1. **Move mode**, which animates the character in the reference image with the movements in the input video.
2. **Mix mode**, which replaces the character in the input video with the character in the input image.
The way I think of it, if you want to use the background in the reference image, use move mode, and if you want to use the background from the reference video, use mix mode.
<center><video src="/9-15-2025/wan2.mp4" autoplay loop muted playsinline></video></center>
<center>*Example of move mode with a variety of different characters*</center>
This model is definitely the strongest in the Wan 2.2 lineup, as it is competitive, if not better, than most of the closed source models trying to do the same.
Similar to the rest of the Wan 2.2 lineup, the model comes in two variants: a dense 5 billion parameter model for low resource users and quick iteration, and a 28 billion parameter mixture of experts model.
You should be able to run the big 28B model if you have a GPU with more than 16 GB of VRAM.
The models work with the [lightning loras](https://huggingface.co/lightx2v/Wan2.2-Lightning) made for the rest of the Wan 2.2 lineup, allowing for 10x faster generation speeds (otherwise a single video would take over 20 minutes to generate on a 3090).
If you want to see more examples of the model in action, you can check out their [blog page](https://humanaigc.github.io/wan-animate/).
## Qwen (Tongyi) DeepResearch
The Qwen team has decided to take a break this week from any releases, but that did not stop their parent lab, Tongyi, from releasing a model of their own.
The model is a fine-tune of the Qwen 30B MoE model made specifically for deep research applications, called [Tongyi DeepResearch](https://x.com/Ali_TongyiLab/status/1967988004179546451).

The model benchmarks well, but almost too well, as some users on Twitter have reported that they've been unable to reproduce the model's very high scores across some of the benchmarks.
That being said, for its size, it is still a very strong model, even if it is only half as good as the reported benchmarks claim that it is.
I plan on integrating it into my local AI setup, and will hopefully have more to say about its real world performance in the coming weeks. Also look out for Qwen3-VL coming out next week as well.
## A pair of image understanding models
We got not just one but two small image understanding LLM releases this week.
### Moondream 3 Preview
[Moondream 3-Preview](https://x.com/vikhyatk/status/1968800178640429496) is a 9 billion parameter MoE model with 2 billion active parameters that has state of the art visual understanding and reasoning capabilities.
It is a hybrid reasoning model capable of doing visually grounded reasoning where the model references objects in spatial positions in the image while it's doing its reasoning.
The model has both point and detect (draw bounding box) functionality built into it by default that you can use.

The team behind Moondream is very detail-oriented. So I suspect very little overfitting on benchmarks and that it actually does have state-of-the-art performance that matches the much larger closed models.
You can try it out for free with no account on their [playground](https://moondream.ai/c/playground).
### Isaac 0.1
The second is from the former Meta Chameleon team, who have left to form their own company, Perceptron AI.
They have released a model called [Isaac 0.1](https://x.com/perceptroninc/status/1968365052270150077) which is a 2 billion parameter open weights model that performs equally if not better than Gemini 2.5 Flash on spatial intelligence and visual reasoning benchmarks.

Normally this is not a model that I would cover, except that in testing, it managed to pass some of my internal vision understanding tests that no other multimodal model has been able to do up to this point, including Moondream 3, Gemini 2.5 Pro, and GPT-5.
It is still very much rough around the edges, running into infinite loops and hallucinating many outputs. But there are moments where you can see that it is truly a very powerful model. I look forward to what this team is able to build in the future and eagerly await the release of Isaac 1.0.
This model is also freely available to play around with on the [Perceptron AI website](https://www.perceptron.inc/demo).
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/9-15-2025/wan1.mp4" autoplay loop muted playsinline></video>
<center>*Wan2.2 Animate Move mode example* -- from [bdsqlsz](https://x.com/bdsqlsz/status/1968914419749978230) on Twitter</center>
---
layout: ../../layouts/MarkdownLayout.astro
title: "Weekly Update: July 7 to July 13"
date: "2025-07-13"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Releases, releases, releases!"
author: Andrew Mead
---
# News
## OpenAI postpones open weight release
OpenAI has been saying [for a while](https://x.com/sama/status/1906793591944646898) that they want to release an open weight model, but it appears that we will have to wait a bit longer, as Sam Altman announced that they are delaying the release date of the model, which was supposed to be next week. This is to have "time to run additional safety tests and review high-risk areas".
There is very little known about what the model may be, as even the potential release date was only rumored until Altman confirmed it this week. This will be OpenAI's first open weight LLM since GPT-2 back in 2019. Altman has said previously that they were targeting an o3-mini level model, which would have put it near SOTA for open source when he said it [back in February](https://x.com/sama/status/1891667332105109653). But as we will see, that bar has been raised, with many releases matching or exceeding o3-mini quality that can be run on a single GPU at home.
# Releases
## Kimi K2
A relatively unknown Chinese lab, [Moonshot AI](https://www.moonshot.ai/), [has dropped](https://x.com/Kimi_Moonshot/status/1943687594560332025) an open source, MIT licensed model that is near SOTA, not just for open source models but for all models in general, which is cited as a potential cause for the delay of the OpenAI open weights release.

<center>*Kimi K2 matches or exceeds top agentic models like Claude Sonnet and Opus 4*</center>
The model was trained for and excels at agentic tasks, but also is very good at [creative writing](https://x.com/aiamblichus/status/1943782310769512506), something that has become a pattern for these smaller Chinese labs like MiniMax and Z AI. It does all this [without being a thinking/ reasoning model](https://x.com/nikhilchandak29/status/1944046584943047159), which has been the main driver of progress for LLMs in the last 6 months.
Unlike the Grok 4 model (discussed later), Kimi K2 [passes](https://x.com/_lyraaaa_/status/1943934264732790912) the private [vibe evals](https://x.com/_xjdr/status/1943836887237767502) for [most users](https://x.com/AndrewCurran_/status/1944076207152410979), with its closest competitor being Opus 4, which is truly remarkable for an open source model you can (theoretically) download and run at home.
Why do I say theoretically run it at home? Well, that is because it is a 1 trillion parameter mixture of experts model, with roughly 32 billion parameters active per token. This makes it almost double the size of DeepSeek R1, which has "only" 600 billion parameters. That being said, if you have the over 600GB of memory required just to load the model in 4 bit, you can already run it using [Ktransformers](https://huggingface.co/KVCache-ai/Kimi-K2-Instruct-GGUF) at a reasonable 10 tokens per second, assuming you also have a decent consumer GPU to help accelerate things.
While at home inference will be unattainable for most people, the model should be runnable on a single H200 or B200 node at 8 bit quant with VLLM or SGLang, or on a H100 node if you are willing to go down to 4 bit quantization. The model should also be runnable on pretty much every other modern inference framework as well, as it uses the same architecture as DeepSeek V3.
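The memory math behind those hardware recommendations is simple enough to sanity-check yourself; the figures below cover the weights only, ignoring KV cache and activation overhead:
```python
# Approximate memory needed just to store Kimi K2's weights at different precisions.
PARAMS = 1.0e12          # ~1 trillion parameters

for name, bits in [("fp16/bf16", 16), ("8-bit", 8), ("4-bit", 4)]:
    gb = PARAMS * bits / 8 / 1e9
    print(f"{name:>9}: ~{gb:,.0f} GB")

# fp16/bf16: ~2,000 GB -> multi-node territory
#     8-bit: ~1,000 GB -> fits a single H200 (8x141GB) or B200 (8x192GB) node
#     4-bit:   ~500 GB -> an H100 node, or CPU RAM plus a consumer GPU via Ktransformers
```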
You can try out the model right now for free at [kimi.ai](http://kimi.ai), or via their API. Since the model is open source, you can expect more providers for it to pop up over the coming weeks. Speaking of using the model, how much does it cost for API access to the model? One of the reasons behind DeepSeek's success was its very cheap price relative to the competition. Does Kimi continue this trend?
| Model | $ per million input tokens | $ per million output tokens |
|-------|-------|-------|
| o3 | $2 | $8 |
| Claude Sonnet 4 | $3 | $15 |
| Gemini 2.5 Pro | $1.25 | $10 |
| DeepSeek R1 | $0.55 | $2.19 |
| Kimi K2 | $0.60 | $2.50 |
Yes, it does. While not being as cheap as R1, it has the hidden benefit of not being a reasoning model, which means it will use far less tokens than all the other top models right now, which will result in lower actual prices when using the model. The Chinese have done it again, making a model that rivals the best that the west has to offer, open sourcing everything while doing it.
## Grok 4
The xAI team announced their new Grok 4 model this week on (a painful to listen to) [livestream](https://x.com/xai/status/1943158495588815072). The model uses Grok 3 as the base, and instead of doing continued pretraining to make the base model better, they instead focused fully on fine tuning the model using reinforcement learning.

<center>*Grok 4 used the same amount of compute in pre-training as they did in post-training. Usually this ratio is much lower, as seen in the compute used for Grok 3 post-training*</center><br/>
Grok 4 comes in 2 variants, Grok 4 and Grok 4 Heavy, with the heavy version just being best of 4 sampling of Grok 4. That means that they run your query through Grok 4 four times and then use Grok 4 as a judge to pick the best answer to give to you.
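Best-of-n with a self-judge is simple to sketch. The code below is a generic illustration of that pattern, not xAI's actual pipeline, and the `fake_model` lambda is just a stand-in so the sketch runs end to end:
```python
from typing import Callable

def best_of_n(model: Callable[[str], str], query: str, n: int = 4) -> str:
    """Toy best-of-n sampling: draw n candidate answers, then ask the same
    model to act as a judge and pick the best one. Illustrative sketch only."""
    candidates = [model(query) for _ in range(n)]
    listing = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    judge_prompt = (
        f"Question:\n{query}\n\nCandidate answers:\n{listing}\n\n"
        "Reply with only the index of the best answer."
    )
    choice = int(model(judge_prompt).strip())
    return candidates[choice]

# Stand-in "model" so the example is runnable; swap in a real LLM call in practice.
fake_model = lambda prompt: "1" if "Candidate answers" in prompt else "an answer"
print(best_of_n(fake_model, "What is 2 + 2?"))
```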
Despite [crushing it](https://x.com/NickADobos/status/1943180302408696172) on benchmarks, the public's vibe check seems to be of the mind that it's similar to the other top models like o3, Gemini 2.5 Pro, and Claude Sonnet/Opus, but not really exhibiting any revolutionary behaviour that would make you want to switch.

<center>*Grok 4 (orange) tops pretty much every major technical benchmark*</center><br/>
The model [is rumored](https://x.com/kalomaze/status/1942996555088134592) to be around 2.4 trillion params, and has the exact same pricing as Claude Sonnet, at $3 per million input tokens and $15 per million output tokens.
## Reka Flash 3.1
Speaking of o3-mini level open source models, the Reka AI team released an upgrade to their 20b param flash model, mainly improving its code abilities, making it on par with o3-mini and Qwen3 32B.

<center>*They also provide their own 3.5 bit quantized version, which is only 9GB in size*</center><br/>
Reka has probably flown under the radar for most people, but they are a solid lab, similar to Mistral except American. Their flash series of models have been solid, similar in performance to Gemma3 27B and Mistral Small. It is multimodal as well, supporting both image and text inputs.
# Research
## Multiple Choice = bad eval
Some of the most common benchmarks that people use to evaluate model quality are multiple choice, for instance MMLU and GPQA. The issue with evaluating models like this is that we don't ask LLMs multiple choice questions in the real world. The format is meant to measure the knowledge that the LLMs have, but they might not even be doing that.
In a [recent research paper](https://x.com/ShashwatGoel7/status/1941153367289364655) from the Max Planck Institute, researchers show that LLMs are able to get the correct answer without ever seeing the question.

<center>*Even without the question, LLMs are able to do far better than random guessing on all MCQ benchmarks*</center><br/>
To remedy this issue, the researchers propose that the LLM is just given the question, and then use an LLM as a judge to verify if the answer is correct. The issue they come across however, is that *it is harder to verify that an answer is correct than it is to generate a correct answer* if no reference correct answer is provided.
This has big implications when running a question through an LLM multiple times and then having another LLM select the best answer (best of n sampling). It shows that we cannot reliably expect to find the correct answer from a set of potential answers.
The researchers overcome this issue for benchmarks by using answer matching: give the LLM judge the reference answer and have it check whether the generated output matches it. But for open-ended LLM judging without a reference, the issue still exists.
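As a rough sketch of what answer matching looks like in code (the judge model and prompt wording are my own, not the paper's):

```python
# Sketch of "answer matching": the judge sees the reference answer,
# so it only has to check equivalence, not re-derive correctness itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def answer_matches(question: str, reference: str, candidate: str,
                   judge_model: str = "gpt-4o-mini") -> bool:
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n\n"
        "Does the candidate express the same final answer as the reference? "
        "Reply with exactly YES or NO."
    )
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# Free-form generation scored against the ground truth answer.
print(answer_matches("What is the capital of Australia?", "Canberra",
                     "The capital city is Canberra."))
```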
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/7-7-2025/X78wP9ncAHVFsISM.mp4" autoplay loop muted playsinline></video>
<center>*Me cooking in the lab at 2am on a Wednesday*</center>
src/pages/news/weekly-10-27-11-2.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "Custom Coding Models for All"
date: "2025-11-01"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Cursor and Windsurf launch their own coding models, Kimi releases a CLI, "
author: Andrew Mead
pending: false
spotify: ""
---
# tl;dr
- Lightning fast coding models from Cursor and Windsurf
- Thinking Machines lay the groundwork for continual learning
# Releases
## Custom Coding Agent Models
I have said [before](https://vectorlab.dev/blog/what-models-i-am-using/#coding) that the winners of the agent framework battle will be those that control both the harness the model uses and the model itself, since they will be able to co-adapt the two. Their model will then work best with their harness, giving the best results.
Previously Anthropic and OpenAI were the only major names that had both of these things (along with widespread adoption). The model wrappers, Cursor and Windsurf, did not have such an advantage, which is why I did not recommend using them. This week, however, they came out with their own offerings, so let's see how they stack up to GPT-5 and Claude 4.5, and whether they are worth switching over to their platforms for.
### Windsurf SWE-1.5
For both of the models, their main selling point is not necessarily their quality, but rather their speed. GPT-5 in Codex is the biggest example of slow but good, often taking over 30 minutes to complete a single request. Having a faster iteration loop with the AI is valuable, especially when you have underspecified criteria where the model won't get it right the first try no matter how smart it is. If you are going to need a back and forth with a model to build a feature, would you rather wait 10 seconds or 15 minutes between responses?
For [SWE-1.5](https://x.com/jeffwsurf/status/1983671909645742212), it [appears](https://x.com/Zai_org/status/1984076614951420273) to be trained from the Z.ai GLM 4.5 model (not 4.6, most likely due to training starting before the 4.6 release).
The model is hosted on Cerebras, which offers the fastest LLM inference of any platform by far, allowing for inference speeds exceeding [1.8k tokens per second](https://x.com/cerebras/status/1984353299081150512).

<center>*SWE-Bench Pro is a bit better than the usual SWE-Bench. However it is made by Scale AI, who often provide the data for these models, so there may be a conflict of interest there.*</center>
In terms of quality, the model seems to be around GLM 4.6 level, so definitely usable, but not near the frontier level of intelligence that Claude 4.5 and GPT-5 have. Also Cerebras will be offering GLM 4.6 directly in the near future, so I don't see any need to lock yourself in the Windsurf world to use this model.
It will be interesting to see if Windsurf will be able to tune the model to the point where it is at the same level as GPT and Claude, because then there is significant incentive with the inference speed + quality to go to Windsurf.
### Cursor Composer
The headlining feature for the Cursor 2.0 release is their new [Cursor Composer](https://x.com/srush_nlp/status/1983572683355725869) model. Similar to SWE-1.5, we do not know for certain what the model they are using is, although it is definitely a fine tune on top of an already existing model. There is [evidence](https://x.com/nrehiew_/status/1984642215671746631) that it may be based on Deepseek, but it is not as clear as it is for SWE-1.5.

<center>*Very vague benchmarks are a good sign*</center>
Composer is not hosted on Cerebras, so it is unable to hit the 4 digit token per second speeds that SWE-1.5 can, but it is still fast for a transformer model.
Vibe check on this one is a bit worse than SWE-1.5, probably around Sonnet 3.7 quality.
If SWE-1.5 was a pass, then Cursor Composer with its almost 10x slower speeds and worse quality is a hard no.
# Research
## Continual Learning from Thinking Machines
When trying to finetune a model for your own use case, the most difficult thing to do is encode new information into the model without having it forget what it previously knew and was capable of. The best way for a model to learn information is during the pre or mid training steps, which are meant for unstructured text. So if you are fine tuning a model that has already had chat style post training done to it, you would essentially be overwriting its chatbot behaviours with your new info.
Normally you would add in some of the "original" data, or something akin to it, but the model would still struggle to fully recover what it had known, and the whole thing was a very finicky process.

<center>*The more you train on new info, the worse the model's instruction following gets, even adding in data from the original dataset. Also note that even after training on the docs, accuracy was still not above 50%*</center>
The researchers at Thinking Machines [have a fix for this](https://x.com/miramurati/status/1982856564970254772). They propose a new methodology called On-Policy Distillation which allows you to recover the original model quality very easily after doing continued pretraining on your internal documents.
On-Policy Distillation uses the original model (teacher) as a reward model, giving a reward for each token the fine tuned model (student) produces. This way the student learns to mimic the original distribution of knowledge that it had. The surprising thing about this method is that it doesn't cause the student to forget any of the new information it just learned. It is also much more compute efficient than doing a regular finetune, using 50-100x less compute in the process.
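Here is a rough sketch of what a single on-policy distillation step could look like with Hugging Face models and PyTorch. This is my simplification (an exact per-token reverse KL against the teacher, computed on rollouts sampled from the student), not Thinking Machines' exact recipe, and it assumes both models fit on the same device.

```python
# Minimal on-policy distillation step (a sketch, not the official recipe).
# `student` is the fine-tuned model being repaired, `teacher` is the original model.
import torch
import torch.nn.functional as F

def on_policy_distill_step(student, teacher, tokenizer, prompt, optimizer,
                           max_new_tokens=128):
    device = next(student.parameters()).device
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    prompt_len = inputs.input_ids.shape[1]

    # 1) Sample a rollout from the *student* (this is what makes it on-policy).
    with torch.no_grad():
        rollout = student.generate(**inputs, do_sample=True,
                                   max_new_tokens=max_new_tokens)

    # 2) Score the rollout with both models (logits at position i predict token i+1).
    student_logits = student(rollout).logits[:, prompt_len - 1:-1]
    with torch.no_grad():
        teacher_logits = teacher(rollout).logits[:, prompt_len - 1:-1]

    # 3) Per-token reverse KL(student || teacher), averaged over generated positions.
    student_logp = F.log_softmax(student_logits, dim=-1)
    teacher_logp = F.log_softmax(teacher_logits, dim=-1)
    loss = (student_logp.exp() * (student_logp - teacher_logp)).sum(-1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```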

<center>*using on policy distillation allows for new knowledge to be added while retaining previous abilities*</center>
This is not that groundbreaking in terms of research; most of these concepts [existed previously](https://x.com/egrefen/status/1982968671397195849), they have just been brought together into a single report here by Thinky.
What is interesting is that this, coupled with Thinking Machines' previous releases of a training service and an in-depth LoRA analysis, seems to point to what the company will be focused on in the future.
They are not chasing the frontier of intelligence like OpenAI and Anthropic are. Instead they are focusing on small, fast, specially catered models that can be iterated on quickly and easily with updated information. I am a big fan of this direction, as I have been skeptical of how much we really need to be scaling models versus focusing them more narrowly for our tasks. Scaling has just been the easiest method, especially with all of the money pouring into the field recently.
I (think) I share the same vision of the future as Thinking Machines, where we are running our own models at home or in small clusters instead of using proprietary models in large datacenters. We will see if this vision holds. I look forward to more releases from Thinking Machines and will be sure to cover anything interesting here.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

src/pages/news/weekly-8-18-to-8-24.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "The Whale is Back"
date: "2025-08-24"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "DeepSeek drops a new model, GLM makes computer use look easy, and can an LLM see the future better than humans?"
author: Andrew Mead
---
# Releases
## DeepSeek 3.1 Release
After a long hiatus, DeepSeek has finally released a new model. It is the DeepSeek V3.1 model, which combines both the thinking and non-thinking abilities of their previous models into one hybrid model.
This release seems to be in response to Kimi K2 and GLM 4.5, which are both very strong reasoning and agentic models released by other Chinese labs. With this release DeepSeek really emphasized the agentic coding ability of the model, seeing large uplifts in most software engineering and agentic benchmarks from the previous versions.

The reasoning portion of the model is much faster than it was previously, as DeepSeek had been known for very long chains of thought causing long response times, even for simple queries. With this release, they reduced this behavior quite a bit: the model now uses 30 to 50% fewer tokens while thinking while still maintaining similar accuracy.

They have also made it very easy to use the model in Claude Code, providing some very simple instructions on how to set it up. The one downside, though, is that the model runs very slowly from the DeepSeek API, at only 20 tokens per second.
The model does, however, dominate in price to performance, as the $1 per million output token price and decreased number of thinking tokens make the model even more efficient than it was before, while being half the price of the other Chinese models.
## Z.ai Tops Computer Use Benchmark
Z.ai released an [RL framework](https://x.com/Zai_org/status/1958175133706891613) for fine-tuning computer use agents. Alongside it, they also released a fine-tune of their 9 billion parameter GLM 4.1 model that tops the OS World benchmark.
OS World is a benchmark for multimodal agents that tests how well they can interact with visual interfaces and operating systems. Example tasks include installing Spotify, or extracting an attachment from an email and uploading it to Google Drive.

Their model, while being much, much smaller than the competitors at the top of the benchmark like Claude 4 Sonnet, still manages to outdo them, showing how far you can go with a small model if you have it focus on a somewhat narrow domain.
Sadly the model was not open sourced, but it could be fun to go and try and fine-tune your own version of this and see if you can even surpass their performance.
## Qwen Image Edit
Qwen has [released](https://x.com/Alibaba_Qwen/status/1957500569029079083) an image editing model based on Qwen Image, which they released two weeks ago. The model excels in all forms of image editing, including text manipulation, object rotation, appearance editing, adding and removing objects, and more.

<center>Example use case</center>
<br/>
Its abilities are backed up in real world benchmarks as well, being basically at the top of the Artificial Analysis image editing leaderboard, which is voted on by real people. You can also check out [Qwen's Twitter page](https://x.com/Alibaba_Qwen) to see a whole bunch of other examples of how it can be used.

This paired with the also good Qwen image base model makes for one of the best image generation and editing stacks out there. I will be switching my local image generation pipeline to be using both in the next week because of this.
# Research
## Can AI Predict the Future
Recently, betting markets like Kalshi and Polymarket have gained massive popularity, allowing users to bet on what could happen in the real world, like whether there will be a magnitude 7 earthquake this month, or how many tweets Elon Musk will make this week.
[Researchers wanted to see how](https://x.com/ProphetArena/status/1956928877106004430) good LLMs are at choosing what real-world events to bet on and assigning probabilities to them to see if they have an edge over these markets.

<center>*The short answer, no. Average return is from the starting amount, so less than 100% means they lost money and more than 100% means they have made money*</center>
No LLM was able to beat the market. OpenAI's models did the best on average and DeepSeek did the worst, but none of them had any catastrophic losses.
It's interesting to see the dynamics of how different LLMs decide to make bets and how they want to act. O3-Mini, for instance, is super aggressive and is willing to take risky positions to get a large payoff, which results in it being at the top of the leaderboard.
DeepSeek's result is definitely the most interesting of all of these models. Most of the models tend to be at least somewhat close to each other in the probabilities they assign to most of these bets. But DeepSeek is not. DeepSeek assigns wildly different probabilities to these events happening or not happening, completely contrarian to the rest of the models. This uniqueness does not help it at all, as it had the worst returns of any model.
The cool thing about this benchmark is that it cannot be overfit. It is always live, and there are always new events to go and bet on. So be sure to check the [leaderboard](https://www.prophetarena.co/leaderboard) every once in a while to see how the models are doing and see if any of them have been able to outsmart the human hive mind.
# Speed Round
Useful tools or topics I found this week that may or not be AI related, but I didn't have time to write a full section about.
## RL'd models really like numbers
When asking an LLM what its favorite artists are, [researchers found](https://www.tylercosgrove.com/blog/llm-music-taste/) that models that had more reinforcement learning (reasoning models) tended to have a higher likelihood of responding with artists that have numbers or other mathematical symbols in their names than regular artists.

## You are the chatbot
Someone [on twitter](https://x.com/deepfates/status/1958648685224743047) made the opposite of an AI assistant, an AI user. It has been trained on an inverted structure, where it expects you to answer its questions, resulting in some hilarious back and forths.

You can talk with the AI user now on https://youaretheassistantnow.com.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/8-18-2025/QO8rOkqo0Bb_FtLq.mp4" autoplay loop muted playsinline></video>
<center>*Fully local image to video pipeline using Qwen Image and Wan 2.2* - from [fofr](https://x.com/fofrAI/status/1958121701671145818) on twitter</center>
src/pages/news/weekly-9-22-to-9-28.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "Qwen Deluge"
date: "2025-09-27"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Qwen releases 10 models in 2 days and can AI replace your job?"
author: Andrew Mead
pending: false
---
# Releases
## Qwen
Qwen has somehow outdone themselves this week, releasing 10 new products and models; here are the notable ones you should pay attention to. All of the models I am about to mention (except Qwen Guard) are available to use for FREE on [Qwen's website](https://chat.qwen.ai).
### Qwen3 Max
In a departure from their usual open source releases, they dropped their largest model, [Qwen3 Max](https://x.com/Alibaba_Qwen/status/1970599097297183035), via API and web interface only. The model is a mixture of experts model and is reportedly over one trillion parameters.

The model benchmarks very well, similar to Claude Opus and DeepSeek 3.1.
It also seems to pass the community vibe check with many people reporting strong coding, tool calling, and general writing capabilities.
We will have to wait a few more weeks as proper benchmarks for this model get released before we can definitively say this is a frontier level model.
The model uses tiered pricing depending on how many tokens the request uses, which we are seeing more and more of as the usable context window of these LLMs grows.
| Context Length | Input Tokens/Million | Output Tokens/Million |
|----------------|---------------------|----------------------|
| 0–32K | $1.2 | $6 |
| 32K–128K | $2.4 | $12 |
| 128K–252K | $3 | $15 |
For context, GPT-5 costs $10 per million output tokens, and Claude Sonnet costs $15 per million, putting this model in roughly the same tier as those models, showing Qwen's confidence in its strength.
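If you want to estimate what a request would actually cost under this tiered scheme, here is a quick sketch. I am assuming the tier is chosen by the total context the request lands in, which may not be exactly how Alibaba bills it.

```python
# Rough cost estimate for Qwen3 Max's tiered pricing (numbers from the table above).
TIERS = [  # (max context tokens, $ per 1M input, $ per 1M output)
    (32_000, 1.2, 6.0),
    (128_000, 2.4, 12.0),
    (252_000, 3.0, 15.0),
]

def request_cost(input_tokens: int, output_tokens: int) -> float:
    # Assumption: the tier is picked by the total context of the request.
    context = input_tokens + output_tokens
    for max_ctx, in_price, out_price in TIERS:
        if context <= max_ctx:
            return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    raise ValueError("context exceeds the largest published tier")

print(f"${request_cost(20_000, 2_000):.4f}")   # short request, cheapest tier
print(f"${request_cost(200_000, 8_000):.4f}")  # long-context request, top tier
```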
### Qwen3-VL
The next release is their [Qwen3 vision](https://x.com/Alibaba_Qwen/status/1970594923503391182) model, built on their 235 billion parameter MOE model. Its benchmarks put it on top across the entire frontier VLM ecosystem, outdoing the incumbent champion Gemini 2.5 Pro on most of the benchmarks tested.
The community vibe check also seems good. I've been seeing reports of people saying that it has been able to solve problems that no other VLM had been able to before, including Gemini Pro and GPT-5.
Personally, I will now be defaulting to using Qwen3-VL for any multimodal queries I have in the future based on what I have been seeing and hearing about it.

Despite this large frontier model being open sourced, I am still a bit disappointed that it only exists for the 235 billion parameter version. At that size, it's unwieldy for pretty much any home user to be able to use. I hope in the future they release a variant based on their 30 billion parameter model, so that way we can easily run it at home ourselves.
### Qwen3 Omni
[Qwen3 Omni](https://x.com/Alibaba_Qwen/status/1970181599133344172), as the name suggests, is a model that can handle all modalities. It can take text, image, video, or audio input, and then it can output either text or audio.
It's built on the thirty billion parameter MoE model, allowing for fast inference and boasting a 250 millisecond audio to audio response time, making it a great fit for real time voice assistant applications.

From my usage with it so far on the Qwen website, it seems to be a fairly intelligent model.
There is some delay in the voice-to-voice response times, but that could be due to the fact that Qwen's servers are in China, adding a large amount of latency just due to the distance.
The audio output quality definitely is not as strong as something like ChatGPT's voice mode, but it is still clear and usable.
Its video understanding is strong for open source but does not rival the Gemini models or the new Qwen3-VL model.
### Qwen3 Guard
It has been a while since we have seen a safety moderation release in the open source community, but Qwen has gone and provided that for us. Their [Qwen3 Guard](https://x.com/Alibaba_Qwen/status/1970510193537753397) model comes in three sizes, 600 million, 4 billion, and 8 billion parameters and offers a bump in quality compared to the previous safety models that we had, including Llama Guard 3. It is particularly strong in multilingual situations for both prompt and response classification.
It can classify both user prompts and AI model outputs. The user can define what they are looking for the model to guard against, and the model will classify inputs and outputs into one of three categories: safe, controversial, or unsafe.

The six hundred million parameter model is very strong, matching or exceeding previous open source SOTA, while also being small enough to potentially deploy on the edge or even run in a user's browser, making safety guardrails easy to access for any of your applications.
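Wiring a guard model like this in front of your app is only a few lines with transformers. The snippet below is a sketch: the repo name and the exact label format the model outputs are assumptions on my part, so check the Qwen3 Guard model card for the real prompt template it expects.

```python
# Sketch: screening a user prompt with a guard-style classifier before it
# reaches your main model. Model id and label format are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3Guard-Gen-0.6B"  # assumed repo name, check the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def classify(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=32)
    text = tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True)
    # Map the generated verdict onto the three categories described above.
    for label in ("Unsafe", "Controversial", "Safe"):
        if label.lower() in text.lower():
            return label
    return "Unknown"

if classify("How do I make a pipe bomb?") != "Safe":
    print("Blocking request before it reaches the main model.")
```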
## Narrow Focused Edge LLMs
Being able to use small LLMs on edge devices like Raspberry Pis for specific tasks has long been a goal of the community.
But up to now there have been no good tailor-made models to do this. Instead you would have to go and fine-tune your own, which would take a large amount of effort to go and do.
Now we don't have to do that, as the Liquid AI team [has released](https://x.com/maximelabonne/status/1971199141532627224) a series of Small Language Models (SLMs) that are special made to do one specific task.
They have targeted a handful of tasks with their initial release: data extraction (unstructured -> structured), translation, RAG, tool use, and math.

<center>*LFM2 Tool Calling model benchmarks. The models punch far above their size compared to their Qwen counterparts*</center>
These models are built on their LFM2 series of models and outperform any of the state-of-the-art general models (Qwen3) of the same size that are out there right now for their specific task.
They still will not outperform the very large models running in the cloud, but for on-device deployments these models are your best bet.
They come in two sizes: 350 million parameters and 1.2 billion parameters. You can expect the 350 million parameter model to use a little bit under 400 megabytes of RAM when loaded in 8-bit with a 4000 token length context window, and the 1.2 billion parameter model will take under 1.5 gigabytes.
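Those RAM numbers are easy to sanity check with some back-of-the-envelope math. The sketch below only counts the weights; the KV cache and runtime overhead come on top and depend on the architecture and context length.

```python
# Back-of-the-envelope weight memory for a small language model.
# Excludes KV cache and runtime overhead.
def weight_mb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e6

print(f"{weight_mb(0.35, 8):.0f} MB")  # ~350 MB for the 350M model at 8-bit
print(f"{weight_mb(1.2, 8):.0f} MB")   # ~1.2 GB for the 1.2B model at 8-bit
```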
# Benchmarks
## Kimi Inference Provider Bench
With the surge in near frontier-level open source LLMs that we have been seeing, there's been a need to identify which providers are or are not serving the model as the model makers originally intended. Providers may make small modeling tweaks or quantize the model to help it run faster or with higher throughput, and these changes can have downstream effects that leave the user with a worse experience than they should be getting.
This has been a known issue for a while now, but the team at Moonshot AI have decided that they don't want their model slandered anymore and have [released a benchmark](https://x.com/crystalsssup/status/1971158566343184511) showing the similarity of the different model providers serving their Kimi K2 model when compared to their own implementation.

What they found is that none of the model providers got away with their optimizations, as none of them were able to match the reference implementation's performance in these tests.
The main thing to look at in the table above is the schema validation error count, which is a failure of the model to follow the output schema that was specified, which is highly important for tool use in agentic applications.
It is also notable that Together AI, one of the bigger names in the open source LLM inference space, has such a low similarity score, coming in second to last with 350 fewer successful tool calls than the reference implementation.
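For reference, the "schema validation error" column boils down to a check like the one below: does the JSON the model emitted for a tool call actually satisfy the tool's declared schema? The tool schema and calls here are made up for illustration.

```python
# Sketch of a tool-call schema check, the kind of thing agent harnesses rely on.
import json
from jsonschema import ValidationError, validate

weather_tool_schema = {
    "type": "object",
    "properties": {
        "location": {"type": "string"},
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["location"],
    "additionalProperties": False,
}

def check_tool_call(raw_arguments: str) -> bool:
    try:
        validate(json.loads(raw_arguments), weather_tool_schema)
        return True
    except (json.JSONDecodeError, ValidationError):
        return False

print(check_tool_call('{"location": "Boston", "unit": "celsius"}'))  # True
print(check_tool_call('{"location": "Boston", "unit": "kelvin"}'))   # False
```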
As time goes on, I expect many of these companies that are open sourcing their models to start policing the hosters in a similar manner to ensure that the models they are serving are correct, so the users don't get a false sense of the model's ability.
In the meantime, I am blacklisting Together AI, Baseten, and AtlasCloud on Openrouter, due to their extremely poor performance as highlighted by Moonshot.
## GDPval
OpenAI wants to "transparently communicate progress on how AI models can help people in the real world". To help facilitate this, they have released [GDPval](https://x.com/OpenAI/status/1971249374077518226): a new evaluation designed to track how well their models and others perform on economically valuable, real-world tasks.
This includes tasks like project timeline scheduling and management, manufacturing design proposals, and inventory and order management.

In a shocking turn of events, GPT-5 is not actually the best performing model on this benchmark. Instead, Claude Opus 4.1 is. I appreciate the transparency from the OpenAI team and the willingness to publish a benchmark where their model is not on top.
## Gaia 2
How well can AI agents handle things like ambiguity, noise, and conflicting information? Can they successfully search for and find the information needed to clarify the situation? That's what the Meta Superintelligence team wanted to find out with their [Gaia 2 benchmark](https://x.com/scaling01/status/1970162732470067283).
Gaia 2 builds upon the original Gaia benchmark, which looks to measure tasks that are easy for humans, but hard for agents. They wanted to take it a step further than other benchmarks though, and introduced a sense of time into the problems that they were benchmarking.

<center>*Some models need more time/ compute to be able to answer a question than others, but they all end up plateauing eventually*</center>
Some example problems include:

**Scenario 1**
- **Setup**: The agent has access to a noisy calendar and an inbox with partial/conflicting info.
- **Task**: Book a doctor's appointment at a time that doesn't conflict with existing meetings, and send the correct confirmation.
- **Measurement**: Does the agent correctly reason about overlapping times? Can it resolve ambiguity (e.g., "the meeting moved to Tuesday" with no time)? Was the final calendar write action correct and timely compared to the oracle answer?

**Scenario 2**
- **Setup**: The environment injects interruptions (e.g., "meeting cancelled" after the agent already sent invites).
- **Task**: Revise the plan and clean up previous actions.
- **Measurement**: Can the agent undo or correct prior actions? Did it respond within time constraints? How closely do its final states and write traces match the annotated ground truth?
This is a great benchmark for real world agentic use, where the information is not clean and easily available like it is in other benchmarks. Keep an eye on this in the future when evaluating different models for real world agent use cases.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
<video src="/9-22-2025/carboarding.mp4" autoplay loop muted playsinline></video>
<center>*Carboarding* -- From [Darri3D](https://www.reddit.com/r/aivideo/comments/1mt8v39/carboarding/) on Reddit</center>
src/pages/news/weekly-9-8-to-9-14.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "Bytedance retakes the top"
date: "2025-09-14"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "ByteDance releases a top image model, and Qwen3 sees some major innovation"
author: Andrew Mead
pending: false
---
# Releases
## Seedream 4
[Two weeks ago](https://vectorlab.dev/blog/weekly-8-25-to-8-31/), we talked about Nano Banana, Google's new top image generation and editing model and how it was unmatched when it comes to image editing.
That throne lasted a very short time, as ByteDance has released their own model, Seedream 4, which matches Nano Banana's image editing capabilities and far surpasses it in regular text to image generation.

<center>*Almost 50 elo higher than 2nd in text to image, and matches Gemini 2.5 Flash in image editing (not pictured)*</center>
From what I have seen of the model so far it definitely deserves the top spot, with exceptional style and the best text rendering I have seen from any model. This is helped by the model's ability to output high resolution images, up to 4096x4096 pixels, while most other models can only do around 1024x1024 pixels.

<center>*From [Fofr](https://x.com/fofrAI/status/1966142589289329015) on Twitter. The image is in 4k (open it in a new tab and zoom in!)*</center>
The model is also priced very competitively at $0.03 per image generation or edit on both [Fal](https://fal.ai/models/fal-ai/bytedance/seedream/v4/text-to-image) and [Replicate](https://replicate.com/bytedance/seedream-4), with no change in cost for a 4k image vs a 1k one (although your generation speed will be drastically slower!). For reference, Nano Banana costs about $0.04 per image on the [Gemini API](https://ai.google.dev/gemini-api/docs/pricing#gemini-2.5-flash-image-preview).
A few weeks ago I mentioned that I would be switching my local AI image generation stack to Qwen Image, but after playing with Seedream 4, Qwen does not seem that spectacular anymore (it is still a top 10 model, by the way). Normally I am not as much a fan of closed source models, but Seedream is an exception, as it is noticeably better than anything else out there right now.
## Qwen3 Next
When it comes to LLMs, the Alibaba Qwen team has traditionally been pretty conservative in terms of architecture and data. They follow the recipe everyone else does, and just execute it very well to produce their models.
This week, they decided that is not how they want to be known anymore, and released their very abnormal Qwen3 Next model.
[Qwen3 Next](https://x.com/Alibaba_Qwen/status/1966197643904000262) is an ultra sparse mixture of experts model, with 80 billion total parameters, and only 3 billion active per inference pass. This ultra sparse architecture allows for super high output token speeds as well as high throughputs.
Typically we expect to see larger expert sizes for this type of model. For reference, their 30 billion parameter model has the same number of active parameters as this 80B model, and their 235B flagship model has 22B active parameters.
This is typically because increasing the expert size makes the model learn faster and makes it easier to train in general, but the Qwen team has managed to overcome this after over a year of experimentation.
The innovations don't stop there, as they have also found a linear attention variant that works at scale.
Linear attention is something that researchers have been chasing for years now, as it would allow for much higher speeds at long context lengths. There have been hundreds, if not thousands, of linear attention variants proposed, but none have been used in any model that is state of the art, or near state of the art for its size. Specifically, Qwen3 Next uses a variant called [Gated DeltaNet](https://arxiv.org/pdf/2412.06464), which builds on the Mamba 2 [state space model](https://arxiv.org/abs/2405.21060), an architecture that has been promising for a while now.
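To get an intuition for why this matters, here is a toy linear-attention recurrence (no gating, decay, or normalization, so this is not Qwen's actual Gated DeltaNet): the entire "KV cache" is a single fixed-size matrix, so each new token costs the same no matter how long the sequence gets.

```python
# Toy linear attention: the state is one d x d matrix updated per token,
# so per-token cost is constant instead of growing with sequence length.
import torch

def linear_attention(q, k, v):
    # q, k, v: [seq_len, dim]
    dim = q.shape[-1]
    state = torch.zeros(dim, dim)          # running sum of k_t v_t^T
    outputs = []
    for t in range(q.shape[0]):
        state = state + torch.outer(k[t], v[t])   # O(d^2) update per token
        outputs.append(q[t] @ state)              # read out with the query
    return torch.stack(outputs)

seq_len, dim = 1024, 64
q, k, v = (torch.randn(seq_len, dim) for _ in range(3))
out = linear_attention(torch.relu(q), torch.relu(k), v)  # simple positive feature map
print(out.shape)  # torch.Size([1024, 64])
```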
These two innovations allow for both very efficient inference and also training.

<center>*Qwen Next costs less to train than their 30B MoE model, while being noticeably better in downstream benchmarks*</center>
That's enough about architecture and efficiency, how well does the model actually perform?

<center>*Qwen3 Next has 2 variations, Instruct (non reasoning) and Thinking. Thinking benchmarks shown above, Instruct version benches similarly vs other instruct models.*</center>
The model ends up where we expect it, somewhere in between the 30B Qwen3 MoE model and the 235B model. It has [shown some weakness](https://x.com/ficlive/status/1966516554738057718) in long context benchmarks when compared to the 235B model, but it is unknown if this is due to the architecture or the training data used.
This seems like more of a research release than a full-fledged daily driver kind of model, but we can expect this to change in the future, as the head of the Qwen team [teased](https://x.com/JustinLin610/status/1966199996728156167) that Qwen Next will be used as the baseline for the Qwen 3.5 series of models, and will only improve over time.
The real question is can they scale this to a 1 trillion parameter model with only 3B active parameters, thus making CPU inference of very large LLMs possible. Currently models like GLM 4.5 and Kimi K2 have experts that are a bit too large to run at decent speeds (10+ tokens/sec) on a CPU only server.
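Some napkin math shows why the active parameter count is the whole ballgame for CPU inference: decoding is memory-bandwidth bound, since every generated token has to stream the active weights from RAM at least once. The bandwidth and quantization numbers below are illustrative, not measurements.

```python
# Rough upper bound on decode speed for a memory-bandwidth-bound CPU server.
def max_tokens_per_sec(active_params_b: float, bits_per_weight: int,
                       mem_bandwidth_gb_s: float) -> float:
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return mem_bandwidth_gb_s * 1e9 / bytes_per_token

# Dual-channel DDR5 desktop (~80 GB/s) vs a many-channel server board (~400 GB/s)
for name, active in [("3B active (Qwen3 Next style)", 3), ("32B active (Kimi K2 style)", 32)]:
    print(name, f"{max_tokens_per_sec(active, 4, 80):.0f} tok/s @ 80 GB/s,",
          f"{max_tokens_per_sec(active, 4, 400):.0f} tok/s @ 400 GB/s")
```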
# Research
## CARE Benchmark
*Trigger warning: suicide and self harm*
In more serious news, there have been many cases of people talking with AIs and then committing suicide after, either because the model convinced them to or because the model was unable to identify that something was wrong and step in to help.
Previously, we had no insight into how models responded to these questions, or whether they could reliably step in and stop things before they went bad.
Now we have the answer, courtesy of a startup called [Rosebud](https://www.rosebud.app/care). They tested 21 of the top models to see how they responded to 5 different scenarios. Each scenario was tested 10 times.

Very concerningly, we find that GPT-4o and 4o-Mini, the two most used AI models of all time, are in the bottom two of this benchmark. Thankfully, the new GPT-5 model is at the very top, but the fact that we had models that performed this poorly for this long, and did not update or fix them at any point, is a sobering realization.
These models are often used as psychologists, doctors, or psychiatrists, when they do not have the capability to identify and act on potentially harmful user behaviour. If you are working on these problems, please do extensive testing on the models you are using so you are aware of their pitfalls, and either change the model or scrap the idea altogether if it's not reliable enough.
[Rosebud wants to work](https://x.com/chrysb/status/1965811979236610269) with the whole community on this benchmark since it's so important, so if you want to help, you can get in contact with the team. They plan on adding more to the benchmark and open sourcing it in Q1 2026.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*IMG_3984.CR2 a pack of lions form the word FOFR* by [fofr](https://x.com/fofrAI/status/1965466063094915215) on Twitter using Seedream 4</center>
src/pages/news/weekly-11-3-to-11-10.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "Kimi K2 is on top"
date: "2025-11-10"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "Kimi K2 Thinking is released, Llama.cpp gets some big upgrades, and an AI scientist that can work for days"
author: Andrew Mead
pending: false
spotify: ""
---
# tl;dr
- Kimi K2 Thinking becomes the top open source model
- Llama.cpp gets much easier to use
- A startup claims to have made an AI agent that can work for multiple days straight
# Releases
## Kimi K2 Thinking
Moonshot AI has released the [thinking version](https://x.com/Kimi_Moonshot/status/1986449512538513505?s=20) of their already strong Kimi K2 model.

<center>*Major respect for only comparing to GPT-5 and Sonnet 4.5, although it's easy when your model directly competes with and beats them on benchmarks*</center>
The model is still the 1 trillion parameter behemoth mixture of experts model from before, with 32B active parameters. The model was trained using quantization aware training to allow it to be deployed at 4 bit without suffering much performance degradation. [All benchmarks](https://x.com/teortaxesTex/status/1986612178133123165?s=20) released by the team are for the INT4 model.
It continues to be the [best model in terms of writing](https://x.com/koylanai/status/1986464588099952886?s=20), somehow outdoing its predecessor instruct model, which was the previous best.
It also continues to be the most unique LLM in terms of personality and general writing style, being drastically different from the slop pretty much every other major LLM has.
With this version of the model, the Moonshot team really worked on its agentic capabilities, which were lacking in the instruct model. It does not seem to be on the same level as GPT-5 and Sonnet 4.5 for agentic coding, but for more general agent use cases it seems to hold its own.
| Model | $ per million (input)| $ per million (output) | Tokens per second |
|-------|-----|-----|-----|
| GLM 4.6 | $0.60 | $2.20 | 90 |
| Claude Sonnet 4.5 | $3 | $15 | 57 |
| GPT 5 | $1.5 | $10 | 34 |
| Kimi K2 Thinking | $0.6 | $2.50 | 25 |
| Kimi K2 Thinking Turbo | $1.15 | $8 | 107 |
<center>*The turbo model is the same as the regular, just hosted on faster hardware. Info from [OpenRouter](https://openrouter.ai/)*</center>
<br/>
The main issues with the model are speed and token usage.
The model is cheap, at only $2.50 per million output tokens, but at 25 tokens per second it is slower than even the glacial GPT-5 (those that have used GPT-5 in Codex know what I mean). For agent tasks this is unacceptably slow. You could switch to the Turbo endpoint, but then the price becomes similar to GPT-5, which defeats one of the main selling points of these Chinese models: that they are very cheap.
Kimi K2 Thinking also has an issue most other first generation reasoning models have, which is extremely long thinking traces. Specifically, Kimi K2 seems to have the longest chain of thought of any reasoning model, as shown by the [Artificial Analysis benchmark](https://artificialanalysis.ai/models/kimi-k2-thinking#intelligence-index-tokens-cost) below.

<center>*Double the thinking tokens of GPT-5 high, meaning it will feel twice as slow if it were running at the same tokens per second*</center>
This is an issue that [can be fixed](https://x.com/SimonXinDong/status/1982217071728435519), so I expect the Moonshot team to fix it in the future, but it does make the model even slower to use now.
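To put some rough numbers on how that feels in practice (speeds are from the table above; the thinking-token counts are my own illustrative guesses based on the chart, with Kimi assumed to think roughly twice as long as GPT-5):

```python
# Rough "time until the answer starts": thinking tokens divided by decode speed.
models = {
    # name: (tokens_per_sec, assumed_thinking_tokens)
    "GPT-5 (high)": (34, 6_000),
    "Kimi K2 Thinking": (25, 12_000),
    "Kimi K2 Thinking Turbo": (107, 12_000),
    "GLM 4.6": (90, 8_000),
}
for name, (tps, think) in models.items():
    print(f"{name:24s} ~{think / tps / 60:.1f} min of thinking before the answer")
```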
These are not deal breakers however, because at the end of the day this model is very strong, near or at the frontier of intelligence, all while being open source. I will be daily driving Kimi K2 thinking for the next week or so to see if it can replace GPT-5 for my daily AI questions.
The [jagged edge of intelligence](https://x.com/karpathy/status/1816531576228053133?lang=en) makes it so that these frontier models will be seen going back and forth at the top of benchmarks, with no direct clear winner. At the end of the day, it will come down to your specific use case for what model you should be using. I tend to focus on agentic coding, since that's what I use these models for the most, but your needs may be different. Because of this, I recommend building out your own small evaluation set and using it to test existing and new models that come out, so you can assess whether or not you should switch to it.
## Big Llama.cpp updates
The Llama.cpp team has had enough of being known as the unacknowledged backend for [subpar](https://news.ycombinator.com/item?id=44867238) tools like Ollama, LMStudio, and Jan, and has rolled out changes to make the library easily accessible for all.
The first change is a revamp of the default UI that is available when running a llama.cpp model.
Previously the UI had been very bare bones and did not save anything for the user.

<center>The revamped UI, very similar to the ChatGPT interface</center>
Now the UI has a much more standard, intuitive, and better looking interface for you to use. It also has chat history and more advanced tools like modifying sampling parameters or having the model [follow structured outputs](https://x.com/ggerganov/status/1985727399271518689?s=20).
One of the long standing issues with Llama.cpp has been the difficulty of setting it up, especially when compared to tools like Ollama.
This has been fixed now, with the release of [LlamaBarn](https://x.com/ggerganov/status/1986072781889347702?s=20), a Mac menu bar app that allows you to run LLMs with just a single click.

LlamaBarn will automatically handle model download, optimization for your specific hardware, and then the actual running of the model.
It will start an OpenAI API compatible server for you to use in your code, and also serve the new web UI mentioned above.
If you are running models locally on your Mac right now with tools like Ollama, LMStudio, Jan or any others, I would highly recommend switching to Llama.cpp, as it is what all of these other tools are using under the hood.
By using Llama.cpp you will be getting the first party experience of running these models, without any of the bloat or "performance tweaks" that degrade model quality in the other libraries.
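Getting started is genuinely simple now. Assuming you have a GGUF downloaded, something like `llama-server -m ./your-model.gguf --port 8080` starts the server (the model path here is obviously a placeholder), and then any OpenAI client can talk to it:

```python
# Talk to a locally running llama-server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="local",  # llama-server serves a single model, so the name is mostly ignored
    messages=[{"role": "user", "content": "Give me a one-line summary of llama.cpp"}],
)
print(resp.choices[0].message.content)
```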
# Quick Hits
## OpenAI Codex updates
OpenAI has released [some updates](https://x.com/OpenAIDevs/status/1986861734619947305?s=20) to their coding platform Codex, most notably increasing rate limits by 50% and releasing GPT 5 Codex Mini, which burns through rate limits half as fast as the regular Codex model and is also noticeably faster.

## AI Scientist that can work for days
A company called [Edison Scientific](https://x.com/andrewwhite01/status/1986094948048093389?s=20) has come out with an [agent system](https://platform.edisonscientific.com/) that they say can run for days at a time.
They say it has already written 7 papers on unique, previously unknown or unexplored topics, and that its success rate is 80%. They also include a [paper](https://arxiv.org/abs/2511.02824) documenting how it works.
You can use it now for free if you have an academic email (you get 3 queries). After that it will cost $200 per run.

## ComfyUI cloud
Popular image and video generation platform ComfyUI has released a [monthly compute plan](https://x.com/ComfyUI/status/1985751421934059663?s=20) that gives users 8 hours of A100 40GB GPU per day for generating images and video.
If you are a power user, a startup with custom workflows that you want to run, or someone who has wanted to use ComfyUI but didn't have the compute for it, this is the most cost effective option to use right now.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.

<center>*[Epoch AI released](https://x.com/EpochAIResearch/status/1985788184245293153?s=20) a cool visualization of all the major datacenters being built right now*</center>
src/pages/news/weekly-8-4-to-8-10.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "OpenAI week"
date: "2025-08-10"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "GPT 5 and oss, and everyone trying to release before then"
author: Andrew Mead
---
# News
## Sweden PM uses ChatGPT
Recently the Swedish prime minister [admitted](https://x.com/rohanpaul_ai/status/1952025736111366590) to using ChatGPT "quite often" when in need of a second opinion or historical information. He says he does not upload any documents, and that he uses it in a similar way that doctors do, to gain more perspectives.
This comes on the heels of many AI labs lobbying to get used more in federal systems.
This week [OpenAI announced](https://x.com/sama/status/1953103336044990779) that US federal agencies can use ChatGPT for just $1 per agency.
Anthropic has also [publicly](https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers) announced that they have already trained models specifically for national security customers, and that any other agencies can request access as well.

<center>*America runs on Dunkin, and Sweden on ChatGPT*</center>
# Releases
## GPT 5
The much anticipated GPT 5 has been [released](https://x.com/OpenAI/status/1953526577297600557) by OpenAI, not without its fair share of controversy.
The announcement stream had a variety of issues, most obvious were the heinous chart crimes, including a very ironic [mislabeling](https://x.com/m__dehghani/status/1953513255328256373) of the deception score.
<center></center>
<center>*52.8 > 69.1 == 30.8* - We all can't be math majors guys</center>
<br/>
Their model naming hasn't improved much either.
<center></center>
<center>*This is gonna take me a while to remember*</center>
<br/>
When using ChatGPT with GPT5, your queries will now be automatically routed to the model that they think will be best to answer your question, much to the chagrin of many users. What didn't help was that on release day, the [model routing was broken](https://x.com/sama/status/1953893841381273969), so users were being given the lower performing models when asking complex queries, resulting in poor answers.
Getting past all of the launch day shenanigans, are the models actually good to use? The answer so far seems to be yes.
For the casual user of ChatGPT (non technical and free plan users), they will see a large bump in quality from the over 1 year old GPT 4o and 4o mini that they are used to. This also comes with a [reduction in glazing](https://x.com/aidan_mclau/status/1953512472683721118) from the model, hopefully preventing more users from experiencing ChatGPT psychosis.
For the more experienced users, this seems to be a bit of a quality bump from the other models on the market. Most notable is that for coding, it seems to be a potential step up from Claude Sonnet [while being 33% cheaper](https://x.com/allhands_ai/status/1953883039768989739). It is better at following exact instructions than Sonnet is and is capable of [pushing back](https://x.com/willccbb/status/1953596587596558490) on design decisions when needed.
Where it has been reported to fall short is on pure vibe coding, as it does not appear to do [as well on vague prompts](https://x.com/cline/status/1953898747928441017) as Sonnet does. So if you are a software engineer who knows what they want, GPT 5 will be a precision instrument you can use, while if you like vibe coding and letting the model figure it out, then you are best off sticking with what you are using now.
Finally, there has been a surprising amount of pushback from the general populace on the sudden disappearance of GPT 4o, with many equating it to [losing a friend](https://www.reddit.com/r/ChatGPT/comments/1mkumyz/i_lost_my_only_friend_overnight/). This has caused OpenAI to [reinstate](https://x.com/sama/status/1953953990372471148) the model on the ChatGPT site as an option for people to use. Remember kids, not your weights, not your waifu.
## Google Genie-3
Google has [released](https://x.com/OfficialLoganK/status/1952732206176112915) their third iteration of their world generation model called Genie-3.
This model generates custom environments that you can walk around in, creating the terrain and objects within them on the fly.
Normally, models like this have really struggled with object permanence.
Once an object went out of your line of sight, when you looked back in that direction, the object would no longer be there or it would have changed.
This model [no longer](https://x.com/AndrewCurran_/status/1952746390654009794) has that issue.
According to Google, it has an emerging capability of remembering objects and their previous locations for up to a minute.
<video src="/8-4-2025/genie3.mp4" autoplay loop muted playsinline></video>
<center>*Genie 3 generation of some Greek ruins, remember this is a real time AI generated video, not a premade map or world we are walking around in*</center>
## OpenAI gpt-oss
GPT-5 wasn't the only big release OpenAI had this week. They also released their first [open-source LLM](https://x.com/sama/status/1952777539052814448) since GPT-2. The gpt-oss series of models comes in two sizes, 20 billion parameters and 120 billion parameters, both being mixture of experts models with roughly 3 billion and 5 billion active parameters, respectively.
The models benchmark well, but the [general sentiment](https://www.reddit.com/r/LocalLLaMA/comments/1miodyp/gptoss_120b_and_20b_feel_kind_of_bad/) for their actual quality is poor.
These models have been trained on what appears to be a purely synthetic dataset, leaving them with essentially zero world knowledge.
They are very good at coding and math, but outside of these fields they struggle and their lack of diversity in their pre-training dataset really shows.
They have almost rigid boundaries in terms of knowledge, resulting in [very weird](https://x.com/_lyraaaa_/status/1952786592491508018) failure modes.
People have been reporting that for even non-coding questions, the models will hallucinate a coding question in your input and try and figure it out themselves.
Also thanks to its purely synthetic data training, the model [hallucinates more](https://x.com/jasondeanlee/status/1953031988635451556) than almost any other model out there, with a SimpleQA score in the low single digits, a benchmark that OpenAI made.
This is [very similar](https://x.com/corbtt/status/1952868822891012241) in behavior to the Phi series of models from Microsoft, which are known to be purely synthetic dataset-trained models.
These models perform well in reasoning and other STEM fields, but for any other use case, they fail miserably.
Even if it weren't for these models' rigidness, they still wouldn't be my choice for their given sizes.
The recently refreshed Qwen3 30B MOE model has similar speeds and also similar performance while not having the catastrophic failure cases that gpt-oss has.
And then for the 120B parameter model, the GLM Air model also competes directly with that within a few percent on pretty much every benchmark, even exceeding gpt-oss for agentic applications.
But hey, look on the bright side, you can now force the model to [never output](https://x.com/sam_paech/status/1952699942704677210) an em-dash ever again.
# Speed Round
Useful tools or topics I found this week that may or not be AI related, but I didn't have time to write a full section about.
## Qwen
Qwen has been releasing so much stuff that they get to have their own section now.
### Qwen Image
A new 20B param [image generation model](https://x.com/Alibaba_Qwen/status/1952398250121756992) from the Qwen team. It has very good prompt instruction following, but I find the actual image quality to be a little bit behind the top models in terms of the "AI" look that it has.

<center>Prompt: *Amateur POV Selfie: A man's face is half-submerged as he takes a selfie in a murky swamp. Just behind his head, the two eyes and snout of a large alligator are visible on the water's surface. He hasn't noticed yet.* - From [Reddit](https://www.reddit.com/r/StableDiffusion/comments/1mi9syy/qwen_image_prompt_adherence_is_gt4o_level/)</center>
### Qwen3 4B update
The Qwen LLM team has continued their post training refresh of their Qwen3 models, with two new [4B param models](https://x.com/Alibaba_Qwen/status/1953128028047102241) coming out this week.
Of note there is no coder version like there was for the other two refreshes, but this does make sense as coding is a very difficult task, especially for the smaller models.
We are starting to see what sizes of models they seem to care about and think have the most impact: the large 235B model, the 30B MoE model, and now the small 4B model.

### Qwen Coder is now free
Qwen has their own Claude Code TUI competitor built on top of the Gemini TUI (not confusing at all).
And like Gemini, they are offering access to their model [for free](https://x.com/Alibaba_Qwen/status/1953835877555151134), giving not just 1000, but 2000 requests every day when you log in with your Qwen account.
It follows the same privacy policy as Google, so they will be training on your code, but if you are okay with that then this is a great option to go and use.
## Opus 4.1
Small [version bump](https://x.com/AnthropicAI/status/1952768432027431127) of the already top tier Opus 4 model, performance is slightly improved across the board, but nothing revolutionary.
Anthropic says that they will have "substantially larger improvements" coming in the next few weeks.
## RedNote OCR model
The TikTok of China has an AI lab, and they have just [released](https://x.com/HKydlicek/status/1952726867020062979) a SOTA VLM for general purpose OCR and image understanding. Only 1.7B params, so it should be feasible to run on the edge.
## ElevenLabs Music
New [music model](https://x.com/elevenlabsio/status/1952754097976721737) from ElevenLabs. Seems to be a step up from Suno, and also allows for editing sound, lyrics, or entire sections of the songs you make. See an example of how to use it [here](https://x.com/elevenlabsio/status/1953143051943031077).
## Lightweight deep research model
We had previously covered a similar model called Jan a few weeks ago, and now there is competition in the space, as former Stability AI founder Emad Mostaque's new startup Intelligent Internet has released their [own version](https://x.com/casper_hansen_/status/1952801276095336770) that outperforms Jan by quite a large margin, especially on harder research tasks.
All the data for training and how they did it is open source.
I can see these small, on-device, personal agents being the future, as they allow for easy customizability and also users can give them access to private information without having to worry about someone else having it.
This sentiment is also [echoed by Nvidia](https://x.com/heyshrutimishra/status/1951586339293642893) in a recent paper they released, highlighting how small language models (SLMs) will be cheaper and faster while still being just as capable in most real world tasks.
## Kitten TTS model
Kokoro TTS's 70M params are just too much for your old Raspberry Pi? Well worry no longer, as there is now an even smaller TTS model called [Kitten TTS](https://x.com/divamgupta/status/1952762876504187065) which is only 15M params.
The voices are definitely worse than Kokoro, but still very much passable, especially if you are extremely resource constrained or care about having the lowest latency possible.
## Fully AI run companies in the wild
In the future there will probably be thousands of AI companies running around, but right now there are very few. Here you can [watch](https://x.com/RileyRalmuto/status/1952635304768098546) one person on TikTok figure out that they are a part of a company where all their coworkers and bosses are just different AI agents.
The videos seem fairly convincing, and even if it is fake, there will be something like this in the future that's not.
## MCP RL
Have an MCP server that your agent is struggling to figure out how to use? Now you can use reinforcement learning to [fine tune your agent](https://x.com/corbtt/status/1953171838382817625) to use your server, no data required. Just give the connection to the server, and the agent will "play around with it" to learn how to use it most effectively.
## Gemini is free for students
2.5 pro access, notebook LM, deep research, and 2TB of storage [all included](https://x.com/sundarpichai/status/1953124372480180550) for free. All you need is a .edu email. Everyone say thanks Sundar.
# Finish
I hope you enjoyed the news this week. If you want to get the news every week, be sure to join our mailing list below.
src/pages/blog/how-to-train-qwen-image.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "How to Train Qwen Image Loras"
date: "2025-10-07"
tags: ["IMAGE GENERATION", "2025", "QWEN IMAGE"]
excerpt: ""
author: Andrew Mead
pending: true
---
I recently participated in the [Huggingface Lora Training Frenzy](https://x.com/multimodalart/status/1972626121817460995?s=46), which allowed you to train loras for a variety of open source models completely for free.

<center>**</center>



src/pages/blog/what-model-should-you-be-using.mdx
==========
Vector Lab
==========
---
layout: ../../layouts/MarkdownLayout.astro
title: "What LLMs You Should Use"
date: "2025-08-28"
tags: ["LATEST AI MODELS", "2025"]
excerpt: "GPT5, Claude Opus, or Open Source, what is the way to go?"
author: Andrew Mead
---
*Written in collaboration with [Harvard Business School](https://www.hbs.edu/), [INSEAD](https://www.insead.edu/), and [Sundai Club](https://www.sundai.club/)*
When looking at the AI space right now, it can be intimidating to keep up with all the models being released and keeping track of which is the best for what.
In this article we want to demystify the AI ecosystem a bit, and give you picks for the top models across a variety of use cases.
If you want to try these models, we suggest using the [LMArena](https://lmarena.ai/) or [OpenRouter](https://openrouter.ai/) to access the wide variety of models that are available today.
If you are reading this in the future and want to know what the current best models are, I would recommend checking [Artificial Analysis](https://artificialanalysis.ai/), as they have most of the major models benchmarked there and are very up to date. Note that high benchmarks don't always translate to stellar real world performance, so be sure to test the model on your use case before deploying it to production.
# Day to Day Use
For day-to-day use, we're going to be looking at the models in terms of how good they are to use given the provider's UI.
Imagine this as the best general use AI product out there right now.
There is one clear winner here, which is OpenAI.
The experience using GPT-5 in the browser on the [ChatGPT](https://chatgpt.com) website is one of the cleanest experiences you will get right now.
The automatic enabling of web search and other tools like image generation, plus the ability to connect directly to services like Microsoft 365 or any of the GSuite products, makes this the best service to use.
You can make custom GPTs tailored to your specific context or understanding. You also get coding agents through Codex, image generation with GPT Image, and video generation with Sora, all available from the comfort of your own browser.
If you had $20 and could only pay for one service, I would recommend getting a ChatGPT account for general use (if you are looking primarily to code, Cursor would be my weapon of choice, but you will learn more about that in the coming weeks).
### API Pricing
[See full pricing here](https://openai.com/api/pricing/)
| Model | $ per million input tokens | $ per million output tokens |
|-------|-------|-------|
| GPT-5 | $1.25 | $10 |
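To make those per-token numbers concrete, here is a minimal back-of-the-envelope sketch in Python; the request volume and per-request token counts are assumptions for illustration, and the prices are just the figures from the table above.

```python
# Rough monthly cost estimate from the per-million-token prices above.
# Request volume and per-request token counts are assumed for illustration.
INPUT_PRICE_PER_M = 1.25    # GPT-5, $ per million input tokens
OUTPUT_PRICE_PER_M = 10.00  # GPT-5, $ per million output tokens

def monthly_cost(requests_per_day: int, input_tokens: int, output_tokens: int) -> float:
    """Estimate a 30-day bill for a steady request volume."""
    daily_cost = (
        requests_per_day * input_tokens / 1e6 * INPUT_PRICE_PER_M
        + requests_per_day * output_tokens / 1e6 * OUTPUT_PRICE_PER_M
    )
    return daily_cost * 30

# e.g. 5,000 requests/day with ~1,500 input and ~500 output tokens each
print(f"${monthly_cost(5_000, 1_500, 500):,.2f} per month")  # ~$1,031
```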
# Multimodal
For multimodal models, there's one clear winner here as well: Google's Gemini family of models.
The Gemini models are the only mainstream models that can handle text, image, video, and audio inputs.
With their extremely long 1 million token context length, they are able to process up to 45-minute videos with audio included, and over 8 hours of audio-only input.
They also top pretty much all the benchmarks for image and video understanding.
They also happen to be some of the best price-to-performance models out there, especially Gemini 2.5 Flash, which is priced at only $2.50 per million output tokens (4x cheaper than GPT-5).
You can test these models now for free in Google AI Studio, which gives you a large amount of control to tinker with the models and see what they can do.
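To give a feel for what multimodal input looks like in code, here is a rough sketch using the google-genai Python SDK; the image path and prompt are placeholders, and you should check the current SDK docs before relying on the exact call shape.

```python
# Sending an image plus a text prompt to Gemini 2.5 Flash via the
# google-genai SDK (pip install google-genai). Assumes a GEMINI_API_KEY
# environment variable; the image path and prompt are placeholders.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

with open("chart.png", "rb") as f:  # placeholder image
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Summarize what this chart shows in two sentences.",
    ],
)
print(response.text)
```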
### API Pricing
[See full pricing here](https://ai.google.dev/gemini-api/docs/pricing)
| Model | $ per million input tokens | $ per million output tokens |
|-------|-------|-------|
| Gemini 2.5 Flash | $0.30 | $2.50 |
| Gemini 2.5 Pro | $1.25 | $10 |
# Coding and agentic tasks
Once again, we have a pretty clear winner here for code writing and other agentic tasks, which is Claude 4.
Claude has been the number one name in the game when it comes to coding and agentic tasks for over a year now, and that has not changed with Claude 4 Sonnet and Opus.
It is the de facto model used by Cursor and is also the model behind the top CLI coding tool, Claude Code.
I recommend using Sonnet for most tasks, as it will be good enough and is 5x cheaper than Opus. If money doesn't matter or if you have a particularly hard task, then you can try Opus, which was recently bumped up to version 4.1 and is a small improvement.
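If you are calling Claude programmatically rather than through Cursor or Claude Code, the sketch below shows roughly what that looks like with the official anthropic Python SDK; the model ID is an assumption, so check Anthropic's model list for the current Sonnet 4 identifier.

```python
# Minimal Claude Sonnet call with the anthropic SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set; the model ID is an assumption — verify
# the current Sonnet 4 identifier against Anthropic's docs.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a linked list."},
    ],
)
print(message.content[0].text)
```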
One notable mention here is from the open source community represented by Z.ai's GLM 4.5 model. This is one of the first models that is able to go blow for blow with Sonnet 4 in my testing and also has the added benefit of being almost 10 times cheaper than Sonnet. It sometimes falls a little bit behind on more complicated tasks, but for day-to-day use, I see little difference.
### API Pricing
[See full Claude pricing here](https://www.anthropic.com/pricing).
GLM 4.5 model pricing taken from [OpenRouter](https://openrouter.ai/z-ai/glm-4.5).
OpenRouter is a platform that allows you to use both closed and open source models all from one place (one URL and API key to access all of them). The open source models are hosted by various inference providers like TogetherAI and Chutes, as well as first-party providers like Z.ai.
OpenRouter also provides information about each provider, like reliability, latency (time to first token), and throughput (how fast the model generates tokens); a minimal usage sketch follows the pricing table below.
For this chart, we are using the pricing directly from Z.ai.
| Model | $ per million input tokens | $ per million output tokens |
|-------|-------|-------|
| GLM 4.5 | $0.60 | $2.20 |
| Claude Sonnet 4 | $3 | $15 |
| Claude Opus 4.1 | $15 | $75 |
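Because OpenRouter exposes an OpenAI-compatible endpoint, trying GLM 4.5 (or any other listed model) only takes a few lines. This is a minimal sketch, assuming you already have an OpenRouter API key; the prompt is a placeholder.

```python
# Querying GLM 4.5 through OpenRouter's OpenAI-compatible API
# (pip install openai). Assumes OPENROUTER_API_KEY is set; swap the model
# string for any other model listed on openrouter.ai.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

completion = client.chat.completions.create(
    model="z-ai/glm-4.5",
    messages=[{"role": "user", "content": "Refactor this recursive function to be iterative: ..."}],
)
print(completion.choices[0].message.content)
```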
# Hosted AI (Bedrock, Azure, etc.)
AWS Bedrock allows you to run Claude (Sonnet 4, Opus 4.1) and a variety of other open source models in your own VPC and pay per token. The pricing for Amazon Nova is VERY competitive if the model quality is good enough for you (and it’s pretty good). Claude token prices are exactly the same as going through Anthropic directly. The catch is that not all of the models are available in every AWS region.
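On the Bedrock side, a call through the Converse API looks roughly like the sketch below; the model ID and region are assumptions, since availability varies by account and region.

```python
# Calling Claude on AWS Bedrock through the Converse API with boto3.
# Assumes AWS credentials are configured and the model is enabled in the
# chosen region; the model ID and region are assumptions — check the
# Bedrock console for the identifiers available to your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-sonnet-4-20250514-v1:0",  # assumed ID
    messages=[{"role": "user", "content": [{"text": "Summarize our VPC peering options."}]}],
    inferenceConfig={"maxTokens": 512},
)
print(response["output"]["message"]["content"][0]["text"])
```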
Azure AI Foundry gives you per-token access to GPT-5, but at a higher price than OpenAI directly. Other models are compute-based, which means that, depending on your use case, they could be very cost effective vs. AWS (batched runs where you can shut the system down afterward) or much more expensive (intermittent queries where the compute needs to stay on constantly).
# Open Source
If privacy is of utmost concern to you, then you could self-host your own open source models.
There are two different paths you could go down for open source models. You could either host one of the larger open source models on something like an 8xH100 node (not cheap, ~$15/hr), which would be easy to set up, but pricey to run in the long term. Or you could go and take a smaller open source model and fine-tune it yourself on your particular task (although you would still need to spend $2/hr in compute to host it once you are done training).
I don't recommend either of these paths if you can avoid them; if security is the concern, use something like OpenAI's models through a secure Azure endpoint instead.
Fine-tuning also tends to be a massive time sink, as fine-tuning models well is very difficult. Expect to spend at least a couple of months and thousands of dollars before you have a model and dataset that you are satisfied with. Usually, I say you should spend that time prompt engineering a pre-existing LLM like GPT-5 or adding to/improving your RAG pipeline instead.
That being said, if you do want to use open source models, here are your options.
## Ready to go out of the box
The two best open source options right now are Kimi K2 and GLM 4.5. These models are both made by Chinese labs and perform highly across most benchmarks, trading blows with the likes of OpenAI and Anthropic at the top.
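If you do self-host one of these, vLLM can serve them behind an OpenAI-compatible endpoint; here is a rough sketch, where the Hugging Face repo ID and tensor-parallel size are assumptions you should adjust to your model and GPU count.

```python
# Querying a self-hosted model served by vLLM's OpenAI-compatible server.
# Launch the server first (repo ID and --tensor-parallel-size are assumptions
# for a multi-GPU node):
#   vllm serve zai-org/GLM-4.5-Air --tensor-parallel-size 4
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-4.5-Air",  # must match the repo ID you served
    messages=[{"role": "user", "content": "Write a unit test for a rate limiter class."}],
)
print(response.choices[0].message.content)
```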
## For finetuning
For fine-tuning, the Qwen3 series of models is definitely the best right now. They have a wide variety of sizes to choose from, ranging from 600 million parameters all the way up to 235 billion, and are very receptive to fine-tuning; most of the top research papers right now use them as the base for their fine-tuning and reinforcement learning experiments.
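To give a sense of what that setup looks like, here is a minimal sketch of attaching LoRA adapters to a small Qwen3 checkpoint with transformers and peft; the rank, alpha, and target modules are illustrative defaults rather than tuned values, and the actual training loop, data pipeline, and evaluation (the parts that take the real time) are omitted.

```python
# Attaching LoRA adapters to a small Qwen3 model before fine-tuning
# (pip install transformers peft). Hyperparameters are illustrative
# defaults; the training loop, dataset, and evaluation are omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen3-0.6B"  # the 600M-parameter end of the Qwen3 range
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights will train
```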
# Other models to consider
Here are some additional models that didn't make the list but that you could still consider for your deployments. They don't stand out versus the competition, but there also isn't necessarily anything wrong with them.
1. Mistral Medium and Large
2. xAI's Grok 4
3. DeepSeek V3.1
4. Amazon Nova
# Not worth it
You may be wondering why some models you've heard of haven't been mentioned, so we will list them here along with the reasons we don't recommend them.
## Llama
Meta's Llama series of models has been completely outdone by the Qwen3 series; it is no longer near the top and now has very limited support in the open source community, especially the latest Llama 4 models.
## GPT-oss
Trained on only synthetic data, it has very little world knowledge and high hallucination rates. This makes it very brittle to use, especially outside math, science, and general reasoning domains.
src/pages/blog/can-cloudflare-replace-the-5-dollar-vps.mdx
---
layout: ../../layouts/MarkdownLayout.astro
title: "Can CloudFlare replace a $5 VPS"
date: "2025"
tags: ["WEEKLY UPDATE", "2025"]
excerpt: "The CloudFlare deployment stack has progress a bunch over the last few years. Is it a viable alternative to a cheap VPS?"
author: Andrew Mead
pending: true
---