Significant changes are unfolding within the team responsible for Qwen, Alibaba’s open-weight AI model series. These developments carry direct relevance for platforms like OpenClaw, which rely on a robust ecosystem of efficient, locally-runnable models to power autonomous agents and plugin integrations.
On March 4th, 2026, Junyang Lin, the lead researcher behind Qwen, announced his resignation via a social media post stating, “me stepping down. bye my beloved qwen.” Lin had been instrumental in releasing Qwen’s open-weight models since 2024. Reports suggest his departure may be linked to an internal reorganization at Alibaba that placed a new researcher from Google’s Gemini team in charge of Qwen, though this detail remains unconfirmed.
An article from 36kr.com, a credible Chinese technology media outlet established in 2010, provided further context. Translated quotes indicate that at approximately 1:00 PM Beijing time on March 4th, Alibaba Group CEO Wu Yongming addressed Qwen employees at an emergency all-hands meeting. The meeting came roughly twelve hours after Lin's announcement, posted at 12:11 AM Beijing time. Lin was described as a key figure in promoting Alibaba's open-source AI models and one of the company's youngest P10 employees.
Multiple Qwen members told 36Kr, "Given far fewer resources than competitors, Junyang's leadership was one of the core factors in achieving today's results." Around 2 PM, Lin posted to his WeChat Moments, "Brothers of Qwen, continue as originally planned, no problem," without confirming whether he would return. The situation remains fluid; the CEO's direct involvement suggests Alibaba recognizes the significance of these resignations and may attempt to retain some of the departing talent.
Several other key Qwen members also resigned:

- Binyuan Hui, who led Qwen code development and the Qwen-Coder series models, was responsible for the entire agent training process from pre-training to post-training, and had recently been involved in robotics research;
- Bowen Yu, who led Qwen post-training research and development of the Qwen-Instruct series models; and
- Kaixin Li, a core contributor to Qwen 3.5/VL/Coder.

Many young researchers also resigned on the same day.
These departures are particularly notable given the recent release of the Qwen 3.5 model family, which has performed exceptionally well. The family began with Qwen3.5-397B-A17B on February 17th (an 807GB download), followed by smaller versions at 122B, 35B, 27B, 9B, 4B, 2B, and 0.8B parameters. Positive feedback has emerged for the 27B and 35B models on coding tasks that fit on 32GB/64GB Mac systems. The 9B, 4B, and 2B models have shown notable effectiveness for their sizes, with the 2B model weighing in at just 4.57GB (or as little as 1.27GB quantized) while offering full reasoning and multi-modal vision capabilities.
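Those download sizes follow roughly from parameter count times bits per weight. A minimal back-of-the-envelope sketch (the ~2.3B effective parameter count and the ~4.4 bits/weight quantization figure are assumptions inferred from the reported file sizes, not published specs):

```python
def estimated_weight_size_gb(params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a checkpoint: parameters x bits per weight, in decimal GB."""
    return params * bits_per_weight / 8 / 1e9

# ~2.3B parameters at bf16 (16 bits/weight) lands near the reported 4.57GB download.
print(estimated_weight_size_gb(2.3e9, 16))   # ~4.6 GB

# The same weights at ~4.4 bits/weight (a typical 4-bit quant plus overhead)
# come out near the reported 1.27GB quantized size.
print(estimated_weight_size_gb(2.3e9, 4.4))  # ~1.27 GB

# The flagship: 397B parameters at 16 bits/weight is ~794 GB, in the
# neighborhood of the reported 807GB (the gap is metadata and rounding).
print(estimated_weight_size_gb(397e9, 16))   # ~794 GB
```

The same arithmetic explains why the 27B/35B models target 32GB/64GB Macs: at ~4 to 8 bits per weight they occupy a fraction of unified memory, leaving room for the KV cache and the rest of the system.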
For the OpenClaw ecosystem, which emphasizes local-first AI assistants and efficient model deployment, the Qwen team’s track record in producing high-quality smaller models is highly valuable. Their ability to achieve strong results with constrained resources aligns with OpenClaw’s goals of enabling powerful agent automation on consumer hardware. The potential disbanding of this team could impact the availability of future open-weight models that integrate seamlessly with OpenClaw’s plugin architecture and MCP server protocols.
If core Qwen team members start new projects or join other research labs, their future work could significantly influence the local AI assistant landscape. OpenClaw users and developers should monitor these developments, as they may affect model selection, fine-tuning strategies, and the broader open-source ecosystem that supports autonomous agent workflows.
The Qwen 3.5 models, especially the smaller variants, are tools that could enhance OpenClaw's capabilities in areas like code generation, vision tasks, and reasoning, all critical for building sophisticated local AI assistants. Continued access to such models is vital for maintaining a competitive and innovative plugin ecosystem.


