Nvidia CEO Jensen Huang discussed the future of AI compute, highlighting three scaling laws:
1. Pre-training: scaling continues through multimodal data and data generated from reasoning.
2. Post-training: reinforcement learning techniques that may ultimately require more computation than pre-training itself.
3. Inference/reasoning: "test-time compute," or "long thinking," which could require 100-1,000x more compute than today's one-shot inference.
Huang emphasized that Blackwell's architecture is designed for all three of these scaling laws and delivers significant performance improvements:
- For training: many times faster than the prior generation
- For reasoning AI models: 25x higher throughput, tens of times faster overall
He noted that Blackwell's versatility allows data centers to be configured for pre-training, post-training, or scaled-out inference as needed, driving consolidation onto a unified architecture.
Disclaimer: This earnings call summary is generated by AI and is for informational purposes only. Due to technical limitations, inaccuracies may exist. It does not constitute investment advice or commitments.