Model launch · April 24, 2026 · 9 min read
The Trillion-Parameter Open-Weight Wave Hits LeemerChat: DeepSeek V4, MiMo 2.5, Kimi K2.6, and a Limited Ling Promo
We refreshed our partner lineup around Chinese-led trillion-scale open models—DeepSeek V4 Pro and Flash, Xiaomi MiMo 2.5 Pro and 2.5, Moonshot Kimi K2.6, plus a time-bound free route for InclusionAI Ling 2.6 1T. Here is why we bundled them, how they differ, and how to benchmark them on real work instead of leaderboard screenshots alone.
LeemerChat exists so you can run serious models on serious work without duct-taping a dozen dashboards together. This release doubles down on that promise: we have refreshed the partner lineup around a cluster of Chinese-led, trillion-scale open-weight systems that now sit at the global frontier for coding, agents, and long-context reasoning. They are available inside the same selector you already use, with free daily access on the partner tier so you can compare them on your prompts, your repos, and your latency budget—not only on benchmark cards.
DeepSeek V4 Pro and V4 Flash

DeepSeek V4 Pro is a Mixture-of-Experts design with roughly 1.6T total parameters and 49B active parameters, built for advanced reasoning, software engineering, and long-horizon agents, with a 1M-token context window. It shares architecture with V4 Flash but trades more compute per token for depth: hybrid attention keeps long documents tractable, and configurable reasoning modes let you bias toward fast answers or slower, more thorough passes. V4 Flash lands at 284B parameters (13B active) with the same 1M context target, tuned for throughput-heavy chat, IDE copilots, and batched agent loops where cost and responsiveness matter as much as peak benchmark scores. Together they cover the two questions every team asks: how good can we be, and how fast can we ship.
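The Pro/Flash split maps naturally onto a routing rule. A minimal sketch of one such rule, assuming a hypothetical selector; the model IDs and the `reasoning_mode` field are illustrative placeholders, not LeemerChat's actual API:

```python
def pick_deepseek_route(prompt_tokens: int, deep_reasoning: bool) -> dict:
    """Choose between hypothetical V4 Pro and V4 Flash routes.

    Heuristic: reserve the larger Pro model (1.6T total / 49B active)
    for deep-reasoning or very long-context work; send everything else
    to Flash (284B total / 13B active) for throughput and cost.
    Route IDs and the reasoning_mode key are placeholder names.
    """
    if deep_reasoning or prompt_tokens > 200_000:
        return {"model": "deepseek/v4-pro", "reasoning_mode": "thorough"}
    return {"model": "deepseek/v4-flash", "reasoning_mode": "fast"}
```

The threshold is yours to tune; the point is that the two SKUs answer different questions, so the router can be a one-line heuristic rather than a benchmark table.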
Xiaomi MiMo 2.5 Pro and MiMo 2.5

Xiaomi's MiMo 2.5 Pro is the flagship entry: it is aimed at general agentic capability, complex software engineering, and tasks that stretch to thousands of tool calls—the sort of work that used to occupy a human expert for days. MiMo 2.5 keeps a native omnimodal stack—stronger image and video perception than the prior MiMo-V2-Omni generation—with Pro-grade agent behavior at roughly half the inference cost of the flagship tier. Both models use a 1M-token window, which makes them natural fits for retrieval-light workflows where the model must hold an entire specification, a full thread, or a large slice of a repository in working memory.
Moonshot Kimi K2.6
Kimi K2.6 is Moonshot's next multimodal trillion-class model, oriented toward long-horizon coding, UI generation from prompts and visuals, and orchestration across many sub-agents. It is the spiritual successor to the older K2 and K2 Thinking slots in our catalog: fewer discrete "reasoning-only" SKUs, more emphasis on end-to-end delivery—code, documents, and lightweight product surfaces in one run. We keep Kimi K2.5 in place for teams standardized on that stack; K2.6 is the forward path for new projects that want the latest Moonshot agentic tooling.
InclusionAI Ling 2.6 1T (limited partner promo)
We partnered with InclusionAI to ship inclusionai/ling-2.6-1t:free as a time-bound, free-tier route. Ling 2.6 1T is positioned as an instant instruct flagship: a "fast thinking" trillion-class model that targets roughly a quarter of the cost of comparable tiers while still competing on math, coding, and SWE-bench-style evaluations. In the product UI we surface the OpenRouter provider mark beside Ling so it is easy to spot next to permanent partner entries. The promotional window ends April 30, 2026; after that date the route may rotate or leave the free tier, so treat it as a scheduled flight, not a guaranteed permanent default.
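If you want to exercise the promo route programmatically while it lasts, the request shape is a standard chat-completions payload. A sketch, assuming an OpenAI-compatible endpoint; only the route ID `inclusionai/ling-2.6-1t:free` comes from this post, and everything else here is a placeholder:

```python
import json

def build_ling_request(prompt: str) -> dict:
    """Assemble a chat-completions payload for the time-bound free route.

    The ":free" suffix on the route ID is what selects the promotional
    tier; after April 30, 2026 this ID may stop resolving, so treat
    failures on this route as expected rather than exceptional.
    """
    return {
        "model": "inclusionai/ling-2.6-1t:free",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_ling_request("Summarize this failing test output.")
body = json.dumps(payload)  # ready to POST to an OpenAI-compatible endpoint
```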
Why this wave matters
For most developers, the important story is not nationalism but competition: when multiple labs ship trillion-parameter-class open or open-weight stacks with transparent-ish recipes and aggressive price-performance, closed APIs have to run faster to earn their margin. LeemerChat is not picking a single winner—we are widening the bench. The new ordering in our partner list elevates DeepSeek and Xiaomi next to the other frontier vendors so coding-heavy models surface earlier in the selector, and our internal coding tiers now treat those IDs as first-class recommendations alongside Western flagships.
Benchmarks help orient you, but your repository always wins the argument. Run the same refactor, the same failing test, and the same product brief through V4 Pro, MiMo 2.5 Pro, Kimi K2.6, and Ling while the promo is live. Track latency, edit distance, and how often you reach for a second model to clean up the first pass—that is the only scoreboard that matters for shipping.
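That scoreboard is easy to automate. A minimal harness sketch, assuming you supply your own `call_model` function (a hypothetical hook that sends a prompt to a given route and returns the response text); latency comes from a monotonic clock, and "how much cleanup the first pass needs" is approximated with difflib's similarity ratio against a reference answer:

```python
import difflib
import time
from typing import Callable

def benchmark(models: list[str],
              prompt: str,
              reference: str,
              call_model: Callable[[str, str], str]) -> dict[str, dict]:
    """Run one prompt through several routes and score each response.

    Returns per-model latency in seconds and similarity to a reference
    answer (1.0 = identical); lower similarity means more second-pass
    cleanup work. `call_model(model_id, prompt) -> str` is user-supplied.
    """
    results: dict[str, dict] = {}
    for model in models:
        start = time.monotonic()
        answer = call_model(model, prompt)
        latency = time.monotonic() - start
        similarity = difflib.SequenceMatcher(None, reference, answer).ratio()
        results[model] = {"latency_s": latency, "similarity": similarity}
    return results
```

Swap in the real candidates from this post as the `models` list and your own refactor or failing test as the prompt; the numbers that come out are the ones that matter for shipping.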