DeepSeek-V3-0324 Update - Comprehensive Upgrades Across All Capabilities

March 25, 2025

Tags: DeepSeek, AI Update, V3-0324, Open Source

DeepSeek-V3 Model Update: Comprehensive Capability Upgrades

DeepSeek-V3 has received a minor version upgrade and is now designated DeepSeek-V3-0324. Users can try the updated model by logging into the official website, app, or mini program and turning off deep thinking mode in the conversation interface. The API endpoints and usage methods remain unchanged.

For tasks that don't require complex reasoning, the DeepSeek team recommends using the new V3 model to immediately enjoy a more fluid experience with comprehensive improvements in conversation quality.
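Because the model name and API interface are unchanged, existing integrations pick up the updated model automatically. Below is a minimal sketch, assuming the OpenAI-compatible endpoint at api.deepseek.com and the `deepseek-chat` model name; the API key is a placeholder.

```python
# Minimal sketch: calling the updated V3 model through the unchanged API.
# Assumes the OpenAI-compatible endpoint and the "deepseek-chat" model name,
# which continues to route to the latest V3 checkpoint.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",   # placeholder; supply your own key
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",             # same model name as before the 0324 update
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the main changes in DeepSeek-V3-0324."},
    ],
    stream=False,
)

print(response.choices[0].message.content)
```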

Overview of Enhanced Capabilities

Improved Reasoning Performance

The new V3 model incorporates reinforcement learning techniques from the DeepSeek-R1 training pipeline, significantly improving performance on reasoning tasks. It now scores above GPT-4.5 on math- and coding-related evaluation benchmarks.

[Figure: Reasoning performance comparison]

Enhanced Frontend Development Capabilities

For HTML and other frontend development tasks, the new V3 model generates more usable code and produces more aesthetically pleasing, design-conscious visual results.

[Figure: Enhanced frontend development capabilities]

Upgraded Chinese Writing

In Chinese writing tasks, the new V3 model has been further optimized on top of R1's writing capabilities, with particular emphasis on improving content quality for medium- and long-form text creation.

[Figure: Upgraded Chinese writing]

Optimized Chinese Search Capabilities

In web search scenarios, the new V3 model delivers more detailed and accurate results with clearer, more attractive formatting, particularly for report-generation prompts.

[Figure: Chinese writing and search improvements]

Additionally, the new V3 model has achieved notable capability improvements in tool calling, role-playing, and general Q&A conversation.
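For tool calling, a minimal sketch is shown below, assuming the OpenAI-compatible function-calling format exposed by the DeepSeek API; the get_weather tool and its schema are hypothetical placeholders used only for illustration.

```python
# Sketch of tool calling via the OpenAI-compatible DeepSeek API.
# The get_weather tool and its schema are illustrative placeholders.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",   # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# If the model decides to call the tool, the call name and arguments are returned here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```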

Open Source Model

DeepSeek-V3-0324 uses the same base model as the previous DeepSeek-V3, with improvements limited to the post-training method. For private deployment, you only need to update the checkpoint and tokenizer_config.json (which contains tool-call-related changes). The model has approximately 660B parameters, and the open-source version offers a 128K context length (the web version, app, and API provide 64K).

To download the V3-0324 model weights, please refer to the DeepSeek-V3-0324 repository on Hugging Face.
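As a minimal sketch, the weights can be fetched with huggingface_hub, assuming the checkpoint is published under the deepseek-ai/DeepSeek-V3-0324 repo id; an existing V3 deployment then only needs to swap in the new checkpoint and the updated tokenizer_config.json.

```python
# Sketch: downloading the open-source V3-0324 weights for private deployment.
# Assumes the checkpoint is published as deepseek-ai/DeepSeek-V3-0324 on Hugging Face.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3-0324",
    local_dir="./DeepSeek-V3-0324",  # target directory; the full ~660B-parameter checkpoint is very large
)
print(f"Weights and updated tokenizer_config.json downloaded to {local_dir}")
```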

Consistent with DeepSeek-R1, the DeepSeek team's open-source repository (including model weights) is uniformly licensed under the MIT License, allowing users to utilize model outputs and train other models through methods like model distillation.

Conclusion

The DeepSeek-V3-0324 update represents a significant step forward in the evolution of the DeepSeek team's flagship model. By incorporating learnings from the R1 development process and focusing on key performance areas, the DeepSeek team has delivered a more versatile, capable, and efficient AI system that excels across a wide range of tasks.

Whether you're developing applications, creating content, or leveraging the DeepSeek team's API for custom solutions, DeepSeek-V3-0324 offers the perfect blend of power and accessibility, all while maintaining the commitment to open source.

"The DeepSeek-V3-0324 update reflects the DeepSeek team's ongoing dedication to pushing the boundaries of what's possible with open-source AI, delivering performance that competes with—and often exceeds—proprietary alternatives."

Ready to experience the latest improvements in DeepSeek-V3-0324?

Try DeepSeek Chat Online →