AI Soft Skills: The New Differentiator for Language Models
Raw intelligence is no longer the sole marker of a language model's utility. Cutting-edge models are reaching parity in problem-solving abilities. As a result, "AI soft skills" have emerged as a critical way to evaluate them.
These soft skills refer to a model's effectiveness beyond IQ-like benchmarks. They show how well it performs in agentic workflows where it must use tools, follow instructions over multiple steps, and integrate into complex tasks.
For example, a model might excel at answering a trivia question. But how well does it handle a multi-step coding project where it needs to read files, run code, observe errors, and adjust accordingly? This article explores why such qualities are increasingly important and compares how leading models (Google's Gemini 2.5 Pro, Anthropic's Claude 3 Sonnet, and OpenAI's GPT-4) stack up on these "soft" capabilities in real-world use cases.
What Are "AI Soft Skills"?
By AI soft skills, I mean the traits of an AI assistant that make it effective and usable in practical workflows, beyond just getting correct answers. In human terms, someone may have high intellectual aptitude but still struggle with practical tasks or teamwork due to lacking soft skills. Similarly, an AI might score high on academic benchmarks yet falter when asked to perform iterative, tool-assisted tasks or to adapt to user preferences.
Key AI Soft Skills
Iterative Reasoning and Tool Use: The ability to break down complex tasks into steps, use external tools or APIs (e.g. running code, searching the web, reading documentation), verify intermediate results, and refine the approach. Rather than just dumping an answer, the model can plan, execute, and adjust like a human problem-solver.
Instruction Following and Adaptability: How carefully the model adheres to the user's instructions and context, especially as the instructions evolve. A model with good soft skills will handle clarifications or changes in direction gracefully in real time, without forgetting earlier requirements. It stays on track and adapts to feedback or new commands reliably.
Usability in Workflow: This covers the user experience of working with the model. It includes the intuitiveness of its interactions and practical features like context window size, response style, and integration with interfaces. A model that can remember large contexts, respond in the needed format, avoid unnecessary verbosity, and generally feel like a helpful partner can be said to have strong usability soft skills.
These qualities contribute to an LLM's effectiveness when it's deployed as an agent. In such contexts, the model is not just answering one-off questions. Instead, it's embedded in a process (such as coding in an IDE, acting as a research assistant, or driving a conversation). In these cases, the difference between success and failure often comes down to these soft skills.
As one AI engineer put it, achieving "GPT-4 level" raw performance is becoming common. What really counts is how the model works with you.
Why Soft Skills Matter in Agentic Workflows
The Limits of Static Q&A
Traditional benchmarks for AI models have focused on static question-answer performance. These include solving math problems, answering knowledge queries, or passing exams. But agentic tasks are dynamic and open-ended.
Consider a coding assistant asked to implement a new feature. It may need to understand the existing codebase, write new code, run it, then debug any errors that arise. This requires iterative reasoning: a loop of planning, execution, observing outcomes, and refining the plan.
Early prompting techniques like Chain-of-Thought (CoT) tried to elicit this behavior by having the model "think step-by-step" in a single prompt. However, CoT alone has limitations: it doesn't truly allow multi-step interaction or the use of external tools, and if an early reasoning step is wrong, the model might not correct it within that single prompt.
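To make that limitation concrete, here is a minimal sketch of single-prompt CoT. The ask_llm() helper is a hypothetical stand-in for whatever completion API you use; the key point is that all reasoning happens inside one call, with no tools and no chance to recover if an intermediate step goes wrong.

```python
# Minimal chain-of-thought prompt; ask_llm() is a hypothetical helper around
# any completion API. Everything happens in a single shot.
question = "A train travels 120 km in 1.5 hours. What is its average speed?"

cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then give the final answer on its own line."
)

# No tool call, no verification: if the model's second reasoning step is wrong,
# nothing outside the model will catch it.
answer = ask_llm(cot_prompt)
print(answer)
```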
The Shift to Agent Architectures
To overcome this, modern AI systems are embracing explicit agent architectures. Research on reasoning models shows a trend toward building LLMs that can self-correct, verify intermediate results, and use tools beyond just the text prompt.
In other words, newer models treat multi-step problem solving as a core capability, not a "prompting trick." Google's Gemini 2.5 Pro is a case in point. It was described as a "thinking model" that reasons through its thoughts before responding. Unlike earlier models where complex reasoning was a bolt-on process, Gemini 2.5 has this iterative thinking integrated into its architecture.
The Importance of Tool Use
Tool use is another crucial aspect. A truly agentic AI should know when to call an external tool. For example, it might invoke a calculator for a tough math problem or run a web search for up-to-date info. This is better than relying on its internal guesses.
Many LLM platforms have enabled such features (OpenAI's plugins, frameworks like LangChain, etc.), but the model's ability to use them effectively is part of its soft skills.
Notably, Anthropic even built direct tool-use capabilities into Claude. In late 2024 they added a "computer-use" mode that lets Claude perform actions on a computer (e.g. navigating a filesystem or executing commands) much like a human would. This kind of built-in agentic ability underscores how important tool use has become for real-world tasks.
Following Instructions and Adapting
Instruction following and adaptability are equally vital in these workflows. If an AI agent writes code but ignores the specific requirements given, it's not very helpful. Likewise, if the user says "actually, use a different approach" mid-way, the model should pivot accordingly without throwing away prior context.
Here we see meaningful differences between models. Anthropic's Claude series has been praised for its careful adherence to complex instructions and nuanced requests. Developers observed that Claude 3 (Sonnet) follows intricate instructions more carefully than some alternatives. It often gets tasks right on the first try.
By contrast, models in the GPT-4 family, while extremely powerful, have at times been noted to go on tangents. They may produce more verbose output than needed, requiring the user to rein them in. Such differences can significantly impact an iterative workflow where each step's quality and compliance affect the next.
Practical Usability Features
Finally, usability considerations like context window size or response formatting can make or break agent-based use. If the model can only remember a small amount of context, it might fail as soon as the task gets complex.
This is an area where newer models have pushed boundaries. Claude 3 launched with a 200k token context window in 2024, and Gemini 2.5 Pro now boasts an industry-leading 1 million token context window. These huge contexts allow an AI agent to "remember" an entire codebase or a large research document collection while working. This is invaluable for complex tasks.
As a concrete example, Gemini 2.5 can ingest and understand entire code repositories at once. It leverages that context to make more informed decisions when coding. A developer using Gemini notes that this is "perhaps its greatest strength" in software projects.
In contrast, GPT-4 (at least the original 2023 version) was limited to 32k tokens. It often had to work with truncated context or rely on the user to summarize information. When multiple top models all have high raw intelligence, these practical differences become the deciding factors in real-world utility.
Key Soft Skills in Action
Let's delve a bit more into the core soft skills. We'll see how they manifest in practice, especially in scenarios like coding where many of these aspects are stress-tested.
Iterative Reasoning and Tool Use
The Power of Thinking in Loops
Iterative reasoning is essentially the ability to think in loops. A model with this skill will approach a complex query by planning a sequence of steps. It will execute them one by one (using tools if needed). Then it will evaluate the result of each step to inform the next. This is the backbone of any agentic AI. Without it, the model either gives up or gives a single-shot answer that likely misses the mark on complex tasks.
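As a rough illustration (not any vendor's actual agent framework), the loop looks something like this. Both ask_llm() and the tool functions are hypothetical placeholders:

```python
import json

# Hypothetical tools; in a real agent these would be a sandboxed code runner,
# a search API, a file reader, and so on.
TOOLS = {
    "run_python": lambda code: run_in_sandbox(code),   # hypothetical sandbox
    "web_search": lambda query: search_web(query),     # hypothetical search
}

def run_agent(task: str, max_steps: int = 8) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Ask the model to plan the next step as JSON: a tool call or a final answer.
        step = json.loads(ask_llm(
            "\n".join(history)
            + '\nReply with JSON: {"tool": ..., "input": ...} or {"answer": ...}'
        ))
        if "answer" in step:
            return step["answer"]  # the model decided it is done
        # Execute the chosen tool and feed the observation back into the loop,
        # so the next plan is informed by what actually happened.
        observation = TOOLS[step["tool"]](step["input"])
        history.append(f"Observation from {step['tool']}: {observation}")
    return "Stopped after max_steps without a final answer."
```

The important part is the feedback edge: each observation goes back into the prompt, which is exactly what single-shot chain-of-thought lacks.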
Modern LLMs have increasingly demonstrated built-in support for this kind of reasoning. Google's Gemini 2.5 Pro, for example, was designed explicitly as a "thinking model" with an internal chain-of-thought. Instead of relying on the prompt to coax out stepwise reasoning, it inherently reasons through problems before outputting an answer.
In benchmarks that involve complex reasoning without external tools, Gemini 2.5 Pro has been a top performer. This reflects its strength in structured problem solving. It achieved state-of-the-art results on the Humanity's Last Exam challenge (a difficult multi-subject reasoning test) among models that don't use tools. This suggests it can handle very nuanced questions by internally working through the solution path.
Combining Reasoning with Tools
However, real-world tasks often benefit from tool use in addition to internal reasoning. A math question is easier with a calculator. A coding issue might require running the code to see what happens.
Some "hard reasoning" benchmarks now explicitly allow or even require tool use. Being good at those is a soft skill indicator. The latest models from OpenAI, Anthropic, and others have started to integrate with tools. OpenAI's GPT-4 can call APIs or plugins when properly set up. Anthropic's Claude can use its computer-use tool to perform operations.
These capabilities ensure the model isn't just guessing. It can retrieve real data or test hypotheses. As one overview of reasoning LLMs explains, a "Tool-Using AI" doesn't just guess the next word. It actively queries external resources to get accurate answers. In practice, this might mean an AI agent that, when faced with a question about latest market trends, actually performs a web search and then summarizes the findings for you.
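As one concrete example, here is roughly how tool use looks with the OpenAI Python SDK: the model is offered a calculator tool and can choose to call it rather than guess at the arithmetic. Exact parameter names vary by SDK version, so treat this as a sketch rather than copy-paste-ready code.

```python
from openai import OpenAI

client = OpenAI()

# Describe a calculator the model is allowed to call instead of guessing.
tools = [{
    "type": "function",
    "function": {
        "name": "calculator",
        "description": "Evaluate a basic arithmetic expression.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is 37 * 481?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # The model chose the tool; the application now runs it and returns the result.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```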
Real-World Coding Differences
The value of iterative, tool-aided reasoning is especially visible in coding. Imagine an AI agent trying to fix a bug. It writes a fix, compiles or runs the code (a tool action), sees the error output, and then adjusts its approach. This loop may repeat several times.
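A stripped-down version of that loop might look like the following, with ask_llm() again standing in as a hypothetical wrapper around whichever model is doing the fixing:

```python
import pathlib
import subprocess

def fix_until_green(path: str, max_attempts: int = 5) -> bool:
    """Run a script, feed any error back to the model, and retry with its patch."""
    script = pathlib.Path(path)
    for _ in range(max_attempts):
        result = subprocess.run(
            ["python", str(script)], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # the script ran cleanly, so stop iterating
        # The observed error output is the crucial input for the next attempt.
        patched = ask_llm(
            f"This script fails with:\n{result.stderr}\n\n"
            f"Current source:\n{script.read_text()}\n\n"
            "Return the full corrected file and nothing else."
        )
        script.write_text(patched)
    return False
```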
Here, differences between models become apparent. Anecdotal reports from developers testing models side-by-side indicate that some models handle this loop much more gracefully.
For instance, users found GPT-4 (and its faster variant GPT-4o) sometimes struggled in an iterative coding scenario. It would produce a solution that didn't work. Even after multiple attempts and user prompts, it could remain stuck in the same mistakes.
By contrast, Claude 3 was often able to identify and correct the issue on the first attempt. One programmer noted that GPT-4 repeatedly returned a flawed solution for a coding task ("always stuck"). Meanwhile, "Claude, first try" produced working code that solved the problem.
In another case, GPT-4 produced a Go program that kept crashing. Claude not only spotted the serious problems in GPT-4's output, it fixed them and delivered a working solution. These stories highlight how effective iterative reasoning and troubleshooting can set a model apart.
It's not that GPT-4 isn't intelligent – it often understands the problem. But if it lacks the soft skill of rigorous self-correction, a less "smart" model can outperform it in practice by methodically iterating to the right answer.
Instruction Following and Adaptability
Different Models, Different Personalities
Instruction following may sound straightforward, but it is a nuanced skill for AI. It's not just about obeying commands. It's about correctly interpreting what the user intends (even if it's subtly stated), and adjusting to clarifications or new instructions over time. Good instruction following is crucial for an AI that collaborates with a user on a task. As the user's needs evolve, the AI must evolve its behavior accordingly.
Different LLMs have different "personalities" when it comes to following instructions. Anthropic's Claude has been explicitly trained with a "constitutional AI" approach to be helpful and harmless. This often makes it very eager to please and follow user guidance.
In practice, developers have observed that Claude 3.5/3.7 models excel at understanding nuanced requests. For example, when tasked with a complicated multi-part coding instruction, Claude would carefully execute each part in order. Other models might skip a detail or do things out of order.
One user, after heavy use of both, remarked that Claude 3.5 Sonnet consistently follows complex instructions more carefully than GPT-4. The difference was so pronounced in their projects that this user's team actually started preferring Claude over GPT-4 for their application. They cited not only its instruction adherence but also cost efficiency (Claude was roughly one-third the cost of GPT-4 in that scenario).
Handling Changes Mid-Course
Adaptability also means handling iterative instructions. If you tell the model "Actually, now do the same thing but optimize for speed," a good AI assistant will pivot smoothly.
This has been noted as a strength of Claude as well. It can be "steered" easily in a desired direction with a bit of guidance. Users felt they could nudge Claude to what they wanted with less effort. Some other models needed more prompts or would occasionally go off on a tangent that had to be corrected.
On the flip side, OpenAI's models (the GPT-4 family) are sometimes too eager to be exhaustive, which can lead to verbosity. For instance, GPT-4 often defaults to very detailed explanations and may produce more output than needed, even when the user only asked for a simple result.
If a user says "just give me the code, no explanation," GPT-4 will comply. But if not explicitly told, it might add extra commentary "to be helpful." Some developers reported that one GPT-4-based system "insists on giving me full code almost every time" along with overly verbose answers, requiring repeated instructions like "please be brief."
In comparison, Claude was found to understand what you want and write clean code without needing as much back-and-forth. This kind of understanding – knowing when the user wants full detail and when they prefer a short answer – is part of the adaptability aspect of instruction following.
The Balance Between Compliance and Speed
It's worth noting that model improvements can change these dynamics. OpenAI's introduction of system messages and style settings in GPT-4 (and newer GPT-4 variants) was intended to give users more control over instruction following. For example, you can set a system message that the assistant should always be concise.
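With the OpenAI Python SDK, that kind of standing constraint is a one-line system message. This is a sketch; model names and defaults change over time.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # A standing style constraint that applies to every subsequent turn.
        {"role": "system",
         "content": "You are a coding assistant. Reply with code only, no explanations."},
        {"role": "user",
         "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```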
And indeed, newer tuned versions like "GPT-4 Turbo" or "GPT-4o (Omni)" have become much faster and somewhat more user-configurable. Yet, as some users pointed out, the rush to optimize speed in GPT-4o came at a slight cost to reliability in following instructions and tool use compared to the original GPT-4.
This underscores a trade-off: the most rigorously compliant model might be a bit slower or resource-intensive. Each provider is trying to find the right balance. From a user's perspective, the best model is one that does exactly what you intended with minimal friction. Achieving that is an art unto itself.
Usability and Workflow Integration
The Power of Context Length
Usability is a broad category. In essence it's about how well the AI fits into the user's workflow. This includes tangible features like context length and supported modalities. It also includes subjective impressions like the intuitiveness of interactions or how much the AI "holds your hand" through a process.
One concrete aspect is the context window, as mentioned earlier. If you're using an AI as a research assistant, you might want to feed it several long articles and have it synthesize them. If the model can only handle, say, 8000 tokens (~6,000 words) at a time, you're forced to break the task into chunks yourself. This is cumbersome.
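Here is what that manual chunking burden looks like in practice, using the tiktoken tokenizer to check whether a document fits an assumed 8k-token window (the limit is purely illustrative, echoing the figure above):

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
CONTEXT_LIMIT = 8_000  # illustrative small window

def chunk_for_context(text: str, limit: int = CONTEXT_LIMIT) -> list[str]:
    tokens = enc.encode(text)
    if len(tokens) <= limit:
        return [text]  # fits in one call, no manual splitting needed
    # Otherwise the burden shifts to the user: split, summarize, and stitch
    # the pieces back together by hand.
    return [enc.decode(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]
```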
Models like Claude 3 and Gemini 2.5 dramatically expanded context limits (to 200k and 1M tokens respectively). In practice, this lets users load whole datasets or codebases for the model to consider.
For example, Claude 3 can ingest hundreds of pages of text (like a whole book or a huge Slack chat history). It can then answer questions about it or continue a conversation with all that history in mind. Likewise, Gemini's 1M-token context means a developer could give it all the source code of a large application and ask it to refactor a component. The model can refer to any part of the codebase directly without needing the user to open files one by one.
This kind of seamless context handling greatly improves the user experience. It shifts the burden of managing information from the user to the AI. As one developer review put it, having that massive context means Gemini can "comprehend entire codebases at once," which is a game-changer for project-wide reasoning.
Style and Presentation Matter
Another facet of usability is how the model presents information and interacts. Here we can consider things like output clarity, formatting, and even creativity vs. directness. Different models have different "styles."
Anthropic's Claude has been noted for often producing very organized and well-formatted answers. For instance, it might automatically format code nicely, or present a step-by-step solution in a clean list. It also has a tendency to be extremely polite and positive (sometimes to a fault, as users humorously note Claude might praise a mediocre idea enthusiastically).
In coding applications, Claude's style translates into what some call more "aesthetic" results. A comparison by one developer found that Claude 3.7 was essentially the "king of aesthetics" when it came to front-end code generation. It tended to produce UI code that was clean and visually polished, possibly adding nice touches by itself.
Google's Gemini 2.5, on the other hand, was a bit more utilitarian. In a head-to-head challenge to create a 3D solar system visualization, Gemini's output was more interactive and feature-rich (better controls, more freedom of movement) but "less polished" in visual flair. In other words, Gemini focused on functionality, Claude on presentation. This reflects different usability philosophies. Depending on a user's needs, one approach may be preferable over the other.
Interface Design and Integration
Usability also extends to how the AI is integrated into tools. For instance, Cursor IDE and Claude Code (CLI) are two different paradigms for AI coding assistants.
Cursor (with models like GPT-4 or Gemini under the hood) embeds the AI in an IDE interface. It can make changes to files directly and the user sees those changes live.
Claude Code, by contrast, runs as a command-line assistant. It asks the user for goals and then proposes file edits or terminal commands. It waits for user approval. Both aim to achieve a similar end (AI-assisted coding), but the workflow feels different.
In a detailed comparison of the two, users noted some UX pros/cons. Cursor's IDE integration made it feel seamless to apply changes. But the interface could become confusing with many diff tabs and approval buttons popping up, and the agent sometimes paused to await a user click in ways that weren't obvious.
Claude Code's CLI approach kept everything in one terminal window. It simply asked yes/no questions as it went through the plan. This straightforward approach avoided interface clutter. You just see text. But it's also less visual (you're not automatically seeing the code edits until you open the files yourself).
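The pattern itself is simple enough to sketch in a few lines. This is the general ask-before-acting shape, not Claude Code's actual implementation; propose_next_command() is a hypothetical call into the model.

```python
import subprocess

def run_plan(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        command = propose_next_command(goal)  # hypothetical model call, e.g. "pytest -x"
        if command is None:
            print("Plan complete.")
            return
        # Nothing executes until the user explicitly approves it.
        if input(f"Run `{command}`? (yes/no) ").strip().lower() != "yes":
            print("Skipped.")
            continue
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        print(result.stdout or result.stderr)
```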
The key takeaway is that the human-AI interaction design matters. Even if the underlying models were equally smart, one setup might feel more "intuitive" to a given user. Some developers prefer a chatty, guiding assistant that explains what it's doing (Claude tends to give more in-depth explanations of its reasoning). Others prefer a faster, minimal style where the AI just does the thing with less commentary (which Cursor's agent was optimized for).
In practical terms, these usability differences mean that choosing the "best" model or agent often depends on the task context and personal/team preferences. As one AI engineer pointed out, some projects benefit from a reasoning-intensive assistant. Others are better served by a rapid prototyping tool. It's not always a one-size-fits-all. Understanding the soft skill profile of each model allows matching the right AI to the job at hand.
How Leading Models Stack Up on Soft Skills
Let's compare our three exemplars – GPT-4, Claude 3 (Sonnet), and Gemini 2.5 Pro – in light of these soft skills, particularly in agent-based use cases like coding.
GPT-4 (OpenAI)
When it debuted (2023), GPT-4 set a high bar for raw intelligence. It excelled at tasks from legal exams to creative writing. It was also among the first to support plug-in tool use (e.g. web browsing, code execution via plugins) in a mainstream way.
In terms of soft skills, GPT-4 is generally a reliable workhorse. It can follow instructions well and shows strong reasoning, but users have encountered a few quirks. One is verbosity or over-correctness. GPT-4 is tuned to be helpful and safe, which sometimes leads to it giving exhaustive answers where a brief one would do. It may add disclaimers unprompted.
Another issue is that the original GPT-4 model could be relatively slow and had limited context (8k to 32k tokens) compared to newer rivals. OpenAI addressed speed with the GPT-4 Turbo and GPT-4o (Omni) releases, which cut response times drastically and brought latency down to near real-time, conversational levels.
The trade-off, as noted earlier, was that some users felt the faster versions weren't as nuanced in following complex instructions or handling tools as the original GPT-4. In agentic tasks like code generation, GPT-4 is very capable. But several reports indicate it might require a couple more cycles of prompting and fixing compared to Claude to reach a flawless solution.
Its integration into many products (ChatGPT, Bing, plugins) means it has wide tool support. But the effective use of those tools depends on careful prompt engineering and the model's judgement.
GPT-4 remains a top-tier model. But in this new landscape it's no longer an unrivaled champion at everything. Some softer aspects of "AI EQ" are now areas where others take the lead.
Claude 3 Sonnet (Anthropic)
Claude has rapidly gained a reputation as the empathetic, detail-oriented assistant. With the latest 3rd-generation models, many users see it as more attuned to following user intent and maintaining a coherent narrative over long sessions.
Its massive 200k token context window is a standout feature. This enables extremely long conversations or feeding in extensive materials for analysis. In agentic workflows (like using the Claude API in a loop), Claude often shines because it doesn't easily lose track of the goal.
For coding, as we discussed, it has shown an ability to handle complex instructions and multi-step fixes gracefully. It often requires fewer back-and-forth turns to get things right. It also produces outputs that are easy to read and well-structured. This is a boon when you're reviewing its suggested code or written analysis.
Another plus is cost. As of its 2024 release, Claude 3's pricing was significantly lower per token than OpenAI's flagship. This matters when you're looping an agent over thousands of tokens of context.
On the softer side, Claude's conversational style is very friendly and positive. It's less likely to refuse reasonable requests (thanks to its constitutional AI safety training that avoids hard rule-based refusals). One could say Claude's "personality" is an asset for keeping the user engaged and comfortable. This is not trivial in longer collaborations.
The main downsides noted have been occasional minor hallucinations (no model is immune to making things up) and some rate limits on usage. For instance, Anthropic's free tiers or early access versions had limits that some found constraining.
But purely on the soft skill metrics, Claude 3 is arguably leading in instruction-following and context management today. It might not top every academic benchmark (its training data cuts off earlier than Gemini's, for example, and it slightly trails on certain math/problem benchmarks). But when it comes to "actually getting the job done" in real scenarios, it's a go-to choice for many developers.
In a telling comment, one user said they now use GPT-4 only until it "gets stuck." Then they hand the problem to Claude, which usually finds the issue straight away.
Gemini 2.5 Pro (Google DeepMind)
The new entrant, Gemini, represents Google's effort to leapfrog in both intelligence and utility. By all accounts, Gemini 2.5 Pro is extremely powerful in raw reasoning and knowledge. It leads numerous benchmarks, beating out GPT-4.5 and Claude on math, science, and code tests according to early reports.
It's explicitly designed to handle complex, multi-modal inputs: text, images, audio, and even video. This multimodality means in theory you can have an agent that not only uses text-based tools but can also interpret an image (e.g., read a chart or diagram) as part of its reasoning. This is a very useful skill in data analysis or design tasks.
From a soft skills perspective, Gemini's strengths include its integrated reasoning (no need for external CoT prompting, it's doing it under the hood) and that huge context window enabling it to juggle a lot of information.
Developers who have tried it in coding scenarios report that it's at least on par with Claude in capability, and in some cases better. For example, Gemini has been praised for exceptional code understanding. It can load an entire project and then make architectural suggestions that show a holistic grasp of the software.
It also has a knack for producing highly interactive outputs. Recall the 3D visualization example where it created more dynamic controls in the app than Claude did. This suggests a bias towards completeness and functionality. This can be great when you need an agent to not just follow orders but maybe even take initiative to enhance a solution.
On the flip side, being brand new, there are areas Gemini is still catching up in real-world use. In the SWE-bench Verified test – which simulates solving real GitHub issues – Gemini 2.5 scored a bit lower than Claude 3.7 (63.8% vs 70.3%). This indicates that Claude's more fine-tuned problem-solving on existing code issues gave it an edge there.
It's a reminder that pure intelligence (Gemini tops many synthetic benchmarks) doesn't automatically translate to solving every practical task better. Usability-wise, Gemini is only as accessible as the products that deploy it. Currently it's available via Google's Vertex AI and select apps, but it's not yet as ubiquitous as GPT or Claude.
Its cost is also reportedly high (it's Google's "most expensive model yet" in terms of usage cost). This might limit how freely one can loop it in an agent. That said, Google is clearly positioning Gemini as a model that can do it all – reason, use tools, understand multimodal data. As it matures, we can expect its soft skills to be refined with user feedback.
Early users have been impressed with its raw power. But they also note that it is less polished in responses compared to Claude. This may simply be a matter of different training focus (or something that future updates will address).
Beyond Coding: Soft Skills in Other Domains
While I've used coding assistant scenarios to illustrate AI soft skills, the same principles apply across many domains:
Research Agents
An AI that helps a user gather and synthesize information (for example, a personal research assistant that reads papers or searches the web) must use tools (search engines, OCR for PDFs, etc.) and iterate (refine queries, follow citations) effectively. Its value lies in how well it can understand the user's research goal and adapt its search strategy.
If Model A knows slightly more facts offhand than Model B, that advantage evaporates if Model B can simply look those facts up. The better research agent is the one that diligently follows leads, checks multiple sources, and asks the user for clarification when the query is ambiguous. These are classic soft skills.
We see early signs of this in products like Bing Chat (which uses GPT-4 to search the web) and in Claude's own web browsing beta. The model must decide what to search for and how to integrate the results into the conversation.
A research agent with good soft skills will cite sources correctly, avoid just copying text without context, and keep track of what information has been confirmed. As an analogy, it's the difference between an employee who just blurts out facts vs. one who carefully researches and presents findings with references. The latter is obviously more useful when accuracy matters.
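A minimal sketch of that behavior, with web_search() and ask_llm() as hypothetical stand-ins for a search API and a model call, might track sources alongside every extracted note:

```python
def research(question: str, rounds: int = 3) -> str:
    notes: list[dict] = []
    query = question
    for _ in range(rounds):
        for result in web_search(query)[:3]:  # hypothetical search API
            # Keep the URL next to every snippet so the answer can cite it later.
            notes.append({"url": result["url"], "snippet": result["snippet"]})
        query = ask_llm(  # hypothetical model call
            f"Question: {question}\nNotes so far: {notes}\n"
            "Suggest one follow-up search query, or reply DONE."
        )
        if query.strip() == "DONE":
            break
    return ask_llm(
        "Answer the question using only these notes, citing the URL for each "
        f"claim you rely on:\n{notes}\n\nQuestion: {question}"
    )
```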
Creative Assistants
In creative tasks (writing a story, composing an email, designing a graphic with AI help), iterative refinement is the norm. Rarely does the first output perfectly match the user's vision.
A model might draft a poem or a marketing blurb, then the user says "I like it but make it more upbeat and shorten this part." A creative AI with strong soft skills will incorporate those edits gracefully. It will preserve the parts the user liked and change the tone as requested, all while maintaining consistency.
If instead each edit request confuses the model and it starts contradicting itself or forgetting earlier constraints (say, the story's character names or the brand guidelines provided), the collaboration breaks down. So memory (context) and instruction fidelity are key.
Additionally, creative tasks often don't have a single "correct" outcome. So the AI's ability to respond to subjective feedback is important. Models like Claude are often praised for their conversational adaptability, which is beneficial in a creative setting. They'll cheerfully rewrite something in five different styles until the user is happy, without losing patience or consistency.
GPT-4 is also strong at creative work, but some users note it can be a bit stubborn about format at times (for instance, it might stick to a certain storytelling approach unless explicitly told otherwise). This is again a subtle soft skill: openness to iterative guidance.
As more models reach a high level of generative creativity, users will gravitate towards the ones that make the process of creation smoother, not just those that occasionally produce a brilliant paragraph.
Data Analysis and Agents for Automation
There's a class of uses where an AI acts more like a decision-making agent. These include AI that looks at your spreadsheets or logs and gives insights. They also include agents that execute tasks like sorting emails, scheduling, or controlling IoT devices via natural language.
Here, correctness is critical (you don't want an AI automating your emails incorrectly!). But so is the interpretability of the AI's actions. A data agent with good soft skills will clearly explain its reasoning ("I flagged this transaction as outlier because it's 3σ away from the mean of last quarter's sales..."). It will seek confirmation for major actions ("Shall I go ahead and delete those records? (yes/no)" much like Claude Code does in CLI).
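A toy version of that behavior (just numpy and a confirmation prompt; the 3σ rule mirrors the quoted explanation) shows how little it takes to make an agent's actions legible:

```python
import numpy as np

def flag_outliers(sales: np.ndarray) -> None:
    mean, std = sales.mean(), sales.std()
    flagged = []
    for i, value in enumerate(sales):
        if abs(value - mean) > 3 * std:
            flagged.append(i)
            # Explain the decision in terms the user could verify by hand.
            print(f"Flagged row {i}: {value:.2f} is more than 3 sigma from "
                  f"last quarter's mean of {mean:.2f}.")
    if not flagged:
        print("No outliers found.")
        return
    # Destructive steps only run after explicit confirmation.
    if input("Delete the flagged records? (yes/no) ").strip().lower() == "yes":
        print(f"Deleting rows {flagged}...")
    else:
        print("No changes made.")
```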
Essentially, the AI needs to work with the human-in-the-loop, not just output a result. This collaborative stance can be thought of as a soft skill. From the model standpoint, it involves a mix of instruction following (listening to the user's goals), reasoning (figuring out the right action), and dialogue (presenting the action and maybe asking for approval).
If any one of those is weak, the automation agent becomes either unsafe or unpleasant to use. For instance, if it doesn't ask for confirmation when it should, it might do something destructive. If it cannot explain why it did something, the user won't trust it next time. Hence, designers of such agents often choose models known for their clarity and reliability over those that might be a notch smarter but unpredictable in behavior.
In all these cases, as LLMs become components in larger systems (often taking on autonomous subtasks), the emphasis shifts from what they know to how they act. This is precisely why AI soft skills are gaining attention.
It's analogous to workplace skills: a genius who can't cooperate or communicate is less effective in a team than someone slightly less brilliant but who excels in collaboration and adaptability.
The New Evaluation Paradigm: Utility over Brains
A Shift in Focus
The rise of AI soft skills reflects a broader shift in how we evaluate AI systems. When only a few models (like GPT-3, GPT-4) existed at the cutting edge, the focus was on their sheer capabilities and "intelligence."
Now that we have many models that can all generate coherent text, solve coding problems, and pass exams, the conversation has moved to practical utility. In 2025, asking "Which model is smartest?" is less useful than asking "Which model is best for this particular task workflow?".
This shift is evident in how benchmarks and user evaluations are evolving. Alongside traditional tests (MMLU, HellaSwag, math benchmarks, etc.), we now see benchmarks for things like GitHub issue resolution, continuous coding (LiveCodeBench), long-document QA, and tool use evaluation.
For example, the SWE-Bench (Software Engineering Benchmark) judges models on real-world coding task completion, not puzzles. There, a model like Claude 3 can outperform a seemingly more intelligent model because of its better approach to real-world problem solving.
Similarly, user forums and blogs are now filled not just with one-shot challenge comparisons, but with stories of experiences over days or weeks of use. It's telling that one Redditor's highly upvoted comparison of Claude vs GPT-4 was framed as a "programmer's perspective on AI assistants". It focused on consistent day-to-day performance, not just one-off questions.
In it, they highlight things like error rates, how many iterations it took to get code working, and how the models handle long conversations – all soft skill-centric concerns. The consensus from such community insights is that small quality-of-life differences translate to big productivity differences when using these AIs extensively.
The Importance of Ecosystem Integration
Another important point is that as models achieve "high intelligence parity," the context of use becomes a differentiator. OpenAI, Google, and Anthropic are each integrating their models into different ecosystems (product suites, APIs, interfaces) which enhance the soft skills in various ways.
For instance, if Model X has slightly weaker inherent tool-use ability but is integrated into an IDE that compensates for it with good UI, a user might still prefer X for coding. Meanwhile, Model Y might be inherently great at tool use and reasoning but offered only via a limited interface. This makes it hard for users to leverage its full potential.
So, evaluation now has to consider the whole package (model + interface + support). The best outcomes seem to occur