New Model · April 9, 2026 · Free on LeemerChat · Zhipu Z.AI / GLM-5.1

GLM-5.1 Is Now Free on LeemerChat — The Model That Works for 8+ Hours Without Stopping

Z.AI's GLM-5.1 is the first model purpose-built for long-horizon autonomous coding. It can plan, execute, debug, and self-improve on a single engineering task for more than 8 hours — delivering complete, production-quality results without human input. It beats or matches GPT-5.4 and Claude Opus 4.6 on several key benchmarks. And it's free on LeemerChat.

  • Benchmark leader: 84% on SWE-bench Verified, best-in-class for real software engineering
  • Autonomous sessions: 8h+ of continuous work without human intervention, planning, executing, and self-improving
  • Context window: 131K, handling full codebases, long-running agent loops, and large project contexts


What is GLM-5.1?

Z.AI's next-generation agentic coding model

GLM-5.1 is Z.AI's (Zhipu AI's) most capable model yet, and it represents a genuine architectural shift in how frontier models approach coding tasks. While earlier models — including GLM-5 — were optimized for minute-long interactions with human guidance at each step, GLM-5.1 was specifically designed for long-horizon autonomous operation.

The defining characteristic: GLM-5.1 can take a single engineering goal, decompose it into a full plan, execute that plan across tools and environments, evaluate its own results, debug failures, and iterate — all without a human in the loop — for more than 8 consecutive hours. The output at the end isn't a rough draft or a demo. It's engineering-grade work.
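The loop described above (plan, execute, evaluate, debug, iterate) can be sketched in a few lines. This is a minimal illustration, not Z.AI's implementation: the `plan_fn`, `execute_fn`, and `evaluate_fn` callbacks are hypothetical stand-ins for the model's planning, tool execution, and self-evaluation steps.

```python
def run_autonomous_session(goal, plan_fn, execute_fn, evaluate_fn, max_iterations=100):
    """Decompose a goal into steps, execute each one, and re-plan on failure.

    plan_fn(goal)            -> list of steps (hypothetical planner)
    execute_fn(step)         -> outcome (hypothetical tool/environment call)
    evaluate_fn(step, outcome) -> (ok, feedback)  (hypothetical self-evaluation)
    """
    plan = plan_fn(goal)            # decompose the goal into concrete steps
    results = []
    for _ in range(max_iterations):
        if not plan:                # nothing left to do: the goal is met
            return results
        step = plan.pop(0)
        outcome = execute_fn(step)  # run the step against tools/environment
        ok, feedback = evaluate_fn(step, outcome)
        if ok:
            results.append(outcome)
        else:
            # Debug-and-iterate: fold the failure feedback into a revised plan,
            # then continue with whatever steps remain.
            plan = plan_fn(goal + " (previous attempt failed: " + feedback + ")") + plan
    return results
```

The `max_iterations` bound is the only concession to safety here; in a real 8-hour session the loop would be bounded by wall-clock time and budget rather than a fixed step count.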

On the benchmark side, the improvements are significant. GLM-5.1 hits 84.0% on SWE-bench Verified (up from GLM-5's 77.8%), outperforms GPT-5.4 on SWE-bench Multilingual, and leads the field on Terminal-Bench 2.0 — the benchmark most directly tied to real agent work in shell environments.

Z.AI describes GLM-5.1's improvement in coding capability as a “major leap,” with particularly significant gains in long-horizon tasks. We think the framing is accurate.

What GLM-5.1 Can Do That Others Can't

8+ Hour Autonomous Sessions

GLM-5.1 is the first model built to work independently for more than 8 consecutive hours on a single task — no human check-ins needed. It maintains context, adapts to failures, and keeps progressing.

Autonomous Planning & Execution

Rather than waiting for step-by-step instructions, GLM-5.1 decomposes the engineering goal, writes its own plan, executes it across multiple tools, and revises when results don't match expectations.

Self-Improvement Loops

GLM-5.1 doesn't just execute — it evaluates its own outputs, diagnoses failures, and iterates. Engineering-grade results emerge from this tight feedback loop, not from a single pass.

Major Coding Leap Over GLM-5

On SWE-bench Verified, GLM-5.1 scores 84.0% — a significant jump from GLM-5's 77.8%. The gains are especially pronounced on long-horizon and multilingual engineering tasks.

Terminal-Native Agent

GLM-5.1 leads on Terminal-Bench 2.0 (65.2%), where tasks involve real shell environments, file systems, and toolchain operations. It's built to run inside real engineering stacks.

Production-Grade Outputs

The goal of GLM-5.1 isn't to demo well on toy benchmarks — it's to deliver complete, production-quality code, tests, documentation, and debugging at the end of a long autonomous session.

Benchmark Comparison

GLM-5.1 vs GPT-5.4, Claude Opus 4.6, Gemini 3.1 Pro, and GLM-5. GLM-5.1 posts the best score in every row.

Benchmark (description)                              | GLM-5.1 (Zhipu) | GPT-5.4 (OpenAI) | Opus 4.6 (Anthropic) | Gemini 3.1 Pro (Google) | GLM-5 (Zhipu)
SWE-bench Verified (real-world software engineering) | 84.0%           | 80.0%            | 80.9%                | 78.5%                   | 77.8%
SWE-bench Multilingual (cross-language engineering)  | 78.8%           | 72.0%            | 77.5%                | 68.2%                   | 73.3%
Terminal-Bench 2.0 (agentic terminal workflows)      | 65.2%           | 58.4%            | 61.8%                | 54.0%                   | 60.7%
LiveCodeBench v6 (real-world competitive coding)     | 73.5%           | 70.2%            | 71.4%                | 68.9%                   | 67.8%
Long-Horizon Agent Tasks (8+ hour autonomous runs)   | 91.4%           | 72.0%            | 75.3%                | 68.1%                   | —
AIME 2026 I (mathematics competition)                | 94.8%           | 93.3%            | 90.6%                | 92.7%                   | —

Benchmarks sourced from Z.AI technical reports and third-party evaluations. Long-Horizon Agent Tasks metric is specific to the GLM-5.1 evaluation suite.

SWE-bench Verified — At a Glance

  • GLM-5.1 (Zhipu): 84.0%
  • Claude Opus 4.6 (Anthropic): 80.9%
  • GPT-5.4 (OpenAI): 80.0%
  • Gemini 3.1 Pro (Google): 78.5%
  • GLM-5 (Zhipu): 77.8%

Why We Made GLM-5.1 Free

Frontier coding models shouldn't require a premium subscription to access. Here's why we chose to make GLM-5.1 available to all users.

Frontier access shouldn't gate-keep building

LeemerChat was built on the idea that capable AI should be accessible to everyone — builders, students, and researchers everywhere. Making GLM-5.1 free is the most direct expression of that mission. You shouldn't need a corporate budget to work with one of the world's most advanced coding models.

Our partner program makes the math work

We absorb inference costs through our partner model program, which lets us offer select frontier models to free users without cutting corners. GLM-5.1 is cost-viable for the free tier — so it's free. Simple.

Long-horizon tasks need real tools

Giving free users a crippled model defeats the purpose. If GLM-5.1's core value is sustained autonomous coding, it needs to work at that level. We don't cut capability — we apply sensible rate limits to ensure everyone gets a quality experience.

Rate limited for free users — here's why

GLM-5.1 is free on LeemerChat, but free users are currently limited to approximately 5–10 GLM-5.1 messages per day. This is a deliberate capacity decision — GLM-5.1 is a compute-intensive model, and we want to ensure that both free and Pro users get a consistently smooth experience. Free users still get access to one of the most powerful coding models in the world. Pro users get priority access with higher throughput and no daily cap on GLM-5.1.
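A daily cap like the one described can be sketched as a small per-user counter keyed by calendar day. This is an illustrative sketch only; the limit value, storage, and class name here are assumptions, not LeemerChat's actual implementation.

```python
from datetime import date

class DailyMessageLimiter:
    """Illustrative per-user daily message cap (hypothetical, in-memory)."""

    def __init__(self, daily_limit=10):
        self.daily_limit = daily_limit
        self._counts = {}  # (user_id, date) -> messages used that day

    def try_consume(self, user_id, today=None):
        """Record one message and return True if the user is under today's cap."""
        key = (user_id, today or date.today())
        used = self._counts.get(key, 0)
        if used >= self.daily_limit:
            return False  # over the cap: wait for the next day or upgrade
        self._counts[key] = used + 1
        return True
```

Keying by `(user_id, date)` makes the counter reset naturally at midnight; a production version would use shared storage with expiring keys rather than a process-local dict.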

LeemerChat Pro

Unlock GLM-5.1 without limits

Pro subscribers get priority access to GLM-5.1 with higher daily throughput, plus unrestricted access to all other premium models: GLM-5 (744B flagship), GPT-5.4 (1M context), Claude Sonnet 4.5, MiniMax M2.7, and more. Plus 50 daily free messages across our full free frontier tier.

  • Priority GLM-5.1 access — no daily cap
  • Full access to GLM-5, GPT-5.4, Kimi K2 Thinking & more
  • 50 daily free messages on our free frontier tier
  • Higher throughput across all models
  • File uploads, deep research, and advanced tools
Launch Offer

Claim your first month for $1

We start from a base price of $1 USD and localize the displayed price with a live exchange-rate conversion. If you're in Ireland, you'll see euros; if you're in India, rupees.


Exchange rate data via ExchangeRate-API. Offer display adapts at runtime based on detected location.
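The localization flow amounts to mapping the visitor's country to a currency, converting the USD base price, and falling back to USD when either lookup fails. A minimal sketch follows; the country map and rates are hardcoded sample values for illustration, whereas the live rates come from an exchange-rate service (ExchangeRate-API in our case).

```python
BASE_PRICE_USD = 1.00

# Partial, illustrative lookup tables; not the full production maps.
COUNTRY_CURRENCY = {"IE": "EUR", "IN": "INR", "US": "USD"}
SAMPLE_RATES = {"USD": 1.0, "EUR": 0.92, "INR": 83.2}  # hypothetical USD rates

def localized_price(country_code):
    """Convert the USD base price to the visitor's currency, falling back to USD."""
    currency = COUNTRY_CURRENCY.get(country_code, "USD")
    rate = SAMPLE_RATES.get(currency, 1.0)  # unknown currency: fall back to USD
    return currency, round(BASE_PRICE_USD * rate, 2)
```

So a visitor from Ireland would see roughly €0.92 at the sample rate above, and anyone whose region can't be resolved simply sees the $1 base price.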

The Free Frontier Keeps Growing

GLM-5.1 joins a free tier that already includes some of the most capable models in the world. These aren't fallback options — they're real frontier models available as part of your 50 daily free messages.

  • GLM-5.1 (Z.AI), new: agentic coding, 8h+ sessions, 131K context
  • GLM-5V Turbo (Z.AI), free: native multimodal agent, 202K context
  • MiMo-V2-Pro (Xiaomi), free: 1M context, top-tier agentic performance
  • MiniMax M2.7 (MiniMax), free: 56.2% SWE-Pro, 200K context

Read our April 2026 frontier models launch post

Covers MiMo-V2-Pro, GLM-5V Turbo, Gemma 4 31B IT, MiniMax M2.7, GLM-5, and GPT-5.4 — and why we moved them to the free tier.

What GLM-5.1 Unlocks in LeemerChat

  • Delegate a full-stack feature to GLM-5.1 and come back to working code
  • Multi-file refactors and dependency migrations without hand-holding
  • Autonomous debugging sessions across complex, multi-layer stacks
  • Real shell environments: GLM-5.1 operates natively in terminal-heavy workflows
  • Cross-language engineering — leading on SWE-bench Multilingual
  • Plan generation → execution → iteration, all in one coherent agent session
Available now · GLM-5.1 by Z.AI

The 8-Hour Coding Model. Free. Right Now.

GLM-5.1 sets a new bar for what autonomous coding looks like. No other model at this capability level is available for free. Try it in LeemerChat today — no credit card, no setup, no catch.

Related Posts

February 11, 2026

GLM-5 Is Live: Frontier Open-Source Scale for Complex Engineering and Agentic Work

GLM-5 launches on LeemerChat with major upgrades in scale, training data, and RL infrastructure. Built for long-horizon agentic systems, coding reliability, and complex reasoning under production constraints.

Read more
April 4, 2026

We Just Moved a Chunk of the Frontier Into the Free Tier

Three frontier-grade models go free — Xiaomi MiMo-V2-Pro (1M context), Z.AI GLM-5V-Turbo (native multimodal agent), and Google Gemma 4 31B IT (89.2% AIME 2026). Premium gets sharper with GPT-5.4, GLM-5, and MiniMax M2.7. Plus: why we cleaned up the lineup and what frontier actually means now.

Read more
February 12, 2026

MiniMax M2.5 Is Live: SOTA Productivity Model for Real-World Office Work

MiniMax M2.5 launches on LeemerChat with breakthrough performance in Word, Excel, and PowerPoint generation. Scoring 80.2% on SWE-Bench Verified and 76.3% on BrowseComp, M2.5 extends M2.1's coding expertise into general office productivity.

Read more
January 30, 2026

Kimi K2.5: Moonshot AI’s Frontier Multimodal Model, Now Live on LeemerChat

Kimi K2.5 brings state-of-the-art visual coding, 262K context, and self-directed agent swarms. We’re Ireland’s first AI platform to launch it — and it’s live free on LeemerChat.

Read more
