Kimi K2.5 | Open Visual Agentic Model for Real Work Kimi K2.5 turns natural language and design references into polished, interactive layouts with refined visual taste. You can select any section and refine it through interactive editing, letting ideas evolve directly within the live layout.
Moonshot AI Hello, and welcome to explore Moonshot AI ("the dark side of the moon"), which seeks the optimal way to convert energy into intelligence. Kimi Kimi is an AI assistant powered by Moonshot's self-developed large language model, supporting online search, deep thinking, multimodal reasoning, and ultra-long text conversations.
Kimi K2.5 - Kimi API Platform As a leading coding model in China, Kimi K2.5 builds upon its full-stack development and tooling-ecosystem strengths, further enhancing frontend code quality and design expressiveness.
Kimi-K2.5 - Kimi You can access Kimi K2.5 through the Kimi web app, the Kimi mobile app, the official Moonshot developer API, Kimi Code, and Together AI's hosted endpoint. The right choice depends on whether you want direct chat, structured work outputs, developer integration, or coding-agent workflows.
Moonshot AI Kimi Open Platform The Kimi Open Platform supports flexible API calls, so you can easily integrate advanced features and give your applications a competitive edge.
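As a concrete sketch of what an API call might look like: the Kimi Open Platform exposes an OpenAI-compatible chat-completions interface, but the base URL and the model identifier (`kimi-k2.5` below) are assumptions here and should be checked against the official platform documentation.

```python
import json

# Assumed OpenAI-compatible endpoint for the Kimi Open Platform;
# verify the current base URL in the official docs.
API_BASE = "https://api.moonshot.cn/v1"

def build_chat_request(prompt: str, model: str = "kimi-k2.5") -> dict:
    """Build the JSON payload for POST {API_BASE}/chat/completions.

    The model name is a placeholder assumption, not a confirmed identifier.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are Kimi, a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.6,
    }

payload = build_chat_request("Summarize this design brief in three bullet points.")
print(json.dumps(payload, ensure_ascii=False, indent=2))
```

Because the interface is OpenAI-compatible, existing OpenAI SDK clients can typically be pointed at the platform by overriding only the base URL and API key.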
Kimi (chatbot) - AI Wiki Kimi is an artificial intelligence chatbot developed by Moonshot AI, a Chinese AI startup based in Beijing. First released to the public in November 2023, Kimi distinguished itself from competitors through its industry-leading long context window, which initially supported 128,000 tokens and was later expanded to handle over 2 million Chinese characters.
Kimi Claw | 24/7 AI Assistant with Long-term Memory Automation Kimi Claw is a 24/7 AI assistant with long-term memory automation, built on OpenClaw inside the Kimi ecosystem. With Kimi Claw, users can deploy quickly, keep long-term memory in the cloud, and run proactive workflows across research, analysis, and daily operations.
[2602.02276] Kimi K2.5: Visual Agentic Intelligence - arXiv.org We introduce Kimi K2.5, an open-source multimodal agentic model designed to advance general agentic intelligence. K2.5 emphasizes the joint optimization of text and vision so that the two modalities enhance each other. This includes a series of techniques such as joint text-vision pre-training, zero-vision SFT, and joint text-vision reinforcement learning. Building on this multimodal foundation …
Kimi-K2.5 · Models Kimi K2.5 is an open-source, native multimodal agentic model built through continual pretraining on approximately 15 trillion mixed visual and text tokens atop Kimi-K2-Base. It seamlessly integrates vision and language understanding with advanced agentic capabilities, instant and thinking modes, and both conversational and agentic paradigms.