On April 16, Alibaba officially open-sourced Qwen3.6-35B-A3B, the medium-sized model in its Qwen3.6 series. With only 3 billion activated parameters, the model is lightweight and efficient, yet it delivers outstanding performance in agent programming, significantly surpassing its predecessor Qwen3.5-35B-A3B and rivaling dense models such as Qwen3.5-27B and Gemma4-31B. The new model also supports both multimodal thinking and non-thinking modes, positioning it as one of the most versatile open-source models currently available.

Qwen3.6-35B-A3B is representative of the "efficient and lightweight" class of open-source models. It adopts a Mixture of Experts (MoE) architecture with 35 billion total parameters, of which only 3 billion are activated during inference, enabling high-performance output at lower computational cost.

The model is particularly strong in agent programming tasks. On authoritative benchmarks such as Terminal-Bench2.0 (terminal programming), NL2Repo (long-horizon programming), and QwenClawBench (real-world agent capability evaluation), Qwen3.6-35B-A3B significantly outperforms its predecessor Qwen3.5-35B-A3B, as well as comparable open-source models such as Gemma4-26B-A4B and Gemma4-31B.

Qwen3.6-35B-A3B also integrates deeply with mainstream agent frameworks such as OpenClaw, Qwen Code, and Claude Code, allowing its programming capabilities and native multimodal abilities to better empower agents to complete longer and more complex tasks.
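The "only 3 billion of 35 billion parameters activated" figure comes from how MoE inference works: a small gating network scores all experts per token, and only the top-k experts' weights participate in the forward pass. A minimal sketch of top-k gating, assuming a hypothetical expert count and k (the source does not specify Qwen3.6's actual routing configuration):

```python
# Illustrative top-k MoE routing. Expert count and k are hypothetical;
# only the "small activated fraction" idea reflects the article.
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(gate_logits, k=2):
    """Select the top-k experts for one token from router logits,
    returning (expert_index, renormalized_weight) pairs."""
    probs = softmax(gate_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in top)  # renormalize over selected experts
    return [(i, probs[i] / total) for i in top]

random.seed(0)
num_experts = 64  # hypothetical
logits = [random.gauss(0.0, 1.0) for _ in range(num_experts)]
selected = route_token(logits, k=2)
# Only 2 of 64 experts run for this token; the rest of the expert
# parameters stay untouched, which is why activated parameter count
# is far below total parameter count.
print(selected)
```

The compute saving follows directly: per token, the FFN cost scales with k activated experts rather than with the full expert pool.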