To The Moon
美股判官
US and HK stock consulting!
Posts · 133
Following · 0
Followers · 0
美股判官
·
2023-05-10
@空军大队长
Is the era of Nvidia's dominance over? ChatGPT detonates the Google-Microsoft chip war, and Amazon also enters the game
ChatGPT has set off a free-for-all in the chip industry: Google, Microsoft, and Amazon are all entering the chip war, and Nvidia may no longer be the sole dominant player. Since ChatGPT went viral, the AI battle between the two giants Google and Microsoft has spread to a new front: server chips. Today, AI…
Views 1.51K
美股判官
·
2020-04-25
$Tilray Inc.(TLRY)$
$Tesla(TSLA)$
$Boeing(BA)$
$Amazon(AMZN)$
$Zoom(ZM)$
$Apple(AAPL)$
Are u OK?
Views 3.92K · Comments 3 · Likes 1
美股判官
·
2020-03-24
$GLD 20200331 161.0 CALL(GLD)$
Thank you, Fed
Views 1.03K
美股判官
·
2019-12-30
$NIO(NIO)$
4.35
Views 976
{"i18n":{"language":"en_US"},"isCurrentUser":false,"userPageInfo":{"id":"3433023691829222","uuid":"3433023691829222","gmtCreate":1487123528912,"gmtModify":1623032239666,"name":"美股判官","pinyin":"mgpgmeigupanguan","introduction":"","introductionEn":null,"signature":"美股港股咨询!","avatar":"https://static.tigerbbs.com/42554bc1fcf5ab0f4dec98e8a1d32c23","hat":null,"hatId":null,"hatName":null,"vip":1,"status":2,"fanSize":1465,"headSize":15,"tweetSize":133,"questionSize":0,"limitLevel":900,"accountStatus":4,"level":{"id":4,"name":"文化虎","nameTw":"文化虎","represent":"学有所成","factor":"发布30条非转发主帖,其中3条优质帖","iconColor":"8867FB","bgColor":"BDC5FF"},"themeCounts":0,"badgeCounts":0,"badges":[],"moderator":false,"superModerator":false,"manageSymbols":null,"badgeLevel":null,"boolIsFan":false,"boolIsHead":false,"favoriteSize":1,"symbols":null,"coverImage":null,"realNameVerified":"success","userBadges":[{"badgeId":"1026c425416b44e0aac28c11a0848493-4","templateUuid":"1026c425416b44e0aac28c11a0848493","name":"Tiger Star","description":"Join the tiger community for 2000 days","bigImgUrl":"https://static.tigerbbs.com/dddf24b906c7011de2617d4fb3f76987","smallImgUrl":"https://static.tigerbbs.com/53d58ad32c97254c6f74db8b97e6ec49","grayImgUrl":"https://static.tigerbbs.com/6304700d92ad91c7a33e2e92ec32ecc1","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2022.08.14","exceedPercentage":null,"individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1001},{"badgeId":"44212b71d0be4ec88898348dbe882e03-3","templateUuid":"44212b71d0be4ec88898348dbe882e03","name":"President Tiger","description":"The transaction amount of the securities account reaches 
$1,000,000","bigImgUrl":"https://static.tigerbbs.com/fbeac6bb240db7da8b972e5183d050ba","smallImgUrl":"https://static.tigerbbs.com/436cdf80292b99f0a992e78750ac4e3a","grayImgUrl":"https://static.tigerbbs.com/506a259a7b456f037592c3b23c779599","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2022.04.29","exceedPercentage":"93.14%","individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1101},{"badgeId":"7a9f168ff73447fe856ed6c938b61789-1","templateUuid":"7a9f168ff73447fe856ed6c938b61789","name":"Knowledgeable Investor","description":"Traded more than 10 stocks","bigImgUrl":"https://static.tigerbbs.com/e74cc24115c4fbae6154ec1b1041bf47","smallImgUrl":"https://static.tigerbbs.com/d48265cbfd97c57f9048db29f22227b0","grayImgUrl":"https://static.tigerbbs.com/76c6d6898b073c77e1c537ebe9ac1c57","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2021.12.21","exceedPercentage":null,"individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1102},{"badgeId":"a83d7582f45846ffbccbce770ce65d84-1","templateUuid":"a83d7582f45846ffbccbce770ce65d84","name":"Real Trader","description":"Completed a 
transaction","bigImgUrl":"https://static.tigerbbs.com/2e08a1cc2087a1de93402c2c290fa65b","smallImgUrl":"https://static.tigerbbs.com/4504a6397ce1137932d56e5f4ce27166","grayImgUrl":"https://static.tigerbbs.com/4b22c79415b4cd6e3d8ebc4a0fa32604","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2021.12.21","exceedPercentage":null,"individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1100},{"badgeId":"972123088c9646f7b6091ae0662215be-3","templateUuid":"972123088c9646f7b6091ae0662215be","name":"Legendary Trader","description":"Total number of securities or futures transactions reached 300","bigImgUrl":"https://static.tigerbbs.com/656db16598a0b8f21429e10d6c1cb033","smallImgUrl":"https://static.tigerbbs.com/03f10910d4dd9234f9b5702a3342193a","grayImgUrl":"https://static.tigerbbs.com/0c767e35268feb729d50d3fa9a386c5a","redirectLinkEnabled":0,"redirectLink":null,"hasAllocated":1,"isWearing":0,"stamp":null,"stampPosition":0,"hasStamp":0,"allocationCount":1,"allocatedDate":"2021.12.21","exceedPercentage":"93.57%","individualDisplayEnabled":0,"backgroundColor":null,"fontColor":null,"individualDisplaySort":0,"categoryType":1100}],"userBadgeCount":5,"currentWearingBadge":null,"individualDisplayBadges":null,"crmLevel":2,"crmLevelSwitch":0,"location":null,"starInvestorFollowerNum":0,"starInvestorFlag":false,"starInvestorOrderShareNum":0,"subscribeStarInvestorNum":0,"ror":null,"winRationPercentage":null,"showRor":false,"investmentPhilosophy":null,"starInvestorSubscribeFlag":false},"page":1,"watchlist":null,"tweetList":[{"id":656682686,"gmtCreate":1683672533847,"gmtModify":1683682205514,"author":{"id":"3433023691829222","authorId":"3433023691829222","name":"美股判官","avatar":"https://static.tigerbbs.com/42554bc1fcf5ab0f4dec98e8a1d32c23","crmLevel":2,"crmLevelSwitch":0,"followedFlag":false,"authorIdStr":"3433023691829222","idStr":"3
433023691829222"},"themes":[],"htmlText":"<a href=\"https://laohu8.com/U/74125836878304\">@空军大队长 </a>","listText":"<a href=\"https://laohu8.com/U/74125836878304\">@空军大队长 </a>","text":"@空军大队长","images":[],"top":1,"highlighted":1,"essential":1,"paper":1,"likeSize":0,"commentSize":0,"repostSize":0,"link":"https://ttm.financial/post/656682686","repostId":"2334722223","repostType":2,"repost":{"id":"2334722223","kind":"news","pubTimestamp":1683641719,"share":"https://ttm.financial/m/news/2334722223?lang=en_US&edition=fundamental","pubTime":"2023-05-09 22:15","market":"us","language":"zh","title":"Is the era of Nvidia's dominance over? ChatGPT detonates the Google-Microsoft chip war, and Amazon also enters the game","url":"https://stock-news.laohu8.com/highlight/detail?id=2334722223","media":"新智元","summary":"ChatGPT引爆了芯片界「百家争鸣」,谷歌、微软、亚马逊纷纷入局芯片大战,英伟达恐怕不再一家独大。ChatGPT爆火之后,谷歌和微软两巨头的AI大战战火,已经烧到了新的领域——服务器芯片。如今,AI","content":"<p><html><head></head><body>ChatGPT has triggered a contention among a hundred schools of thought in the chip industry. Google, Microsoft, and Amazon have entered the chip war one after another. Nvidia may no longer be the dominant player. 
After ChatGPT exploded,<a href=\"https://laohu8.com/S/GOOG\">Google</a>And<a href=\"https://laohu8.com/S/MSFT\">Microsoft</a>The AI war between the two giants has reached a new field-server chips.</p><p>Nowadays, AI and cloud computing have become battlegrounds, and chips have also become the key to reducing costs and winning commercial customers.</p><p>Originally,<a href=\"https://laohu8.com/S/AMZN\">Amazon</a>Big manufacturers such as Microsoft, and Google are all famous for their software, and now they are spending billions of dollars on chip development and production.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7e49f3bef3efc25793e385cfb981184\" alt=\"各大科技巨头研发的AI芯片\" title=\"各大科技巨头研发的AI芯片\" tg-width=\"1080\" tg-height=\"607\"/><span>AI chips developed by major technology giants</span></p><p><strong>ChatGPT explodes, major manufacturers start chip competition</strong></p><p>According to reports from foreign media The Information and other sources, these three major manufacturers have now launched or plan to release 8 servers and AI chips for internal product development, cloud server leasing, or both.</p><p>\"If you can make silicon optimized for AI, there will be a huge victory ahead of you,\" said Glenn O 'Donnell, director of research firm Forrester.</p><p>Will these great efforts definitely be rewarded?</p><p>The answer is, not necessarily.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/92c58bb9a500fa1734275375f5acd50f\" alt=\"\" title=\"\" tg-width=\"500\" tg-height=\"224\"/></p><p><a href=\"https://laohu8.com/S/INTC\">Intel</a>, AMD and<a href=\"https://laohu8.com/S/NVDA\">Nvidia</a>Economies of scale can be benefited, but this is far from the case for Big Tech companies.</p><p>They also face many thorny challenges, such as hiring chip designers and convincing developers to build applications using their custom chips.</p><p>However, major manufacturers have made remarkable progress in this 
field.</p><p>According to published performance data, Amazon's Graviton server chips, as well as AI-specific chips released by Amazon and Google, are already comparable to traditional chip manufacturers in performance.</p><p>There are two main types of chips developed by Amazon, Microsoft and Google for their data centers: standard computing chips and specialized chips used to train and run machine learning models. It is the latter that powers large language models such as ChatGPT.</p><p>Previously,<a href=\"https://laohu8.com/S/AAPL\">Apple</a>Successfully developed chips for iPhone, iPad and Mac, improving the processing of some AI tasks. These big manufacturers may be the inspiration they learned from Apple.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/18da5c3fe29b36b77e2a0c06f1a66858\" alt=\"\" title=\"\" tg-width=\"728\" tg-height=\"382\"/></p><p>Among the three major manufacturers, Amazon is the only cloud service provider that provides two chips in servers. The acquisition of Israeli chip designer Annapurna Labs in 2015 laid the foundation for these efforts.</p><p>Google launched a chip for AI workloads in 2015 and is developing a standard server chip to improve server performance in Google Cloud.</p><p>In contrast, Microsoft's chip research and development started late, starting in 2019, and recently, Microsoft has accelerated the timeline of launching AI chips specially designed for LLM.</p><p>The popularity of ChatGPT has ignited the excitement of users all over the world for AI. This has further promoted the strategic transformation of the three major manufacturers.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/ca3cd473a03b6e5160c3681d1b2eff89\" alt=\"\" title=\"\" tg-width=\"800\" tg-height=\"534\"/></p><p>ChatGPT runs on Microsoft's Azure cloud and uses tens of thousands of Nvidia A100s. 
Whether it is ChatGPT or other OpenAI software integrated into Bing and various programs, it requires so much computing power that Microsoft has allocated server hardware to the internal team developing AI.</p><p>At Amazon, Chief Financial Officer Brian Olsavsky told investors on an earnings call last week that Amazon plans to shift spending from its retail business to AWS, in part by investing in the infrastructure needed to support ChatGPT.</p><p>At Google, the engineering team responsible for manufacturing the tensor processing unit has moved to Google Cloud. It is reported that cloud organizations can now develop roadmaps for TPUs and the software running on them, hoping to let cloud customers rent more TPU-powered servers.</p><p><strong>Google: TPU V4 specially tuned for AI</strong></p><p>As early as 2020, Google deployed the most powerful AI chip at the time-TPU v4-on its own data center.</p><p>However, it was not until April 4 this year that Google announced the technical details of this AI supercomputer for the first time.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/0690ee25b0f2aaecc65aeb54a8db428b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"593\"/></p><p>Compared with TPU v3, the performance of TPU v4 is 2.1 times higher, and after integrating 4096 chips, the performance of supercomputing is improved by 10 times.</p><p>At the same time, Google also claims that its own chips are faster and more energy-efficient than Nvidia A100. For systems of comparable size, TPU v4 can provide 1.7 times better performance than Nvidia A100, while also improving energy efficiency by 1.9 times.</p><p>For similarly sized systems, TPU v4 is 1.15 times faster than A100 on BERT and approximately 4.3 times faster than IPU. 
For ResNet, TPU v4 is 1.67 times and approximately 4.5 times faster, respectively.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/101223a8a12806cfc2fa2a826f048f98\" alt=\"\" title=\"\" tg-width=\"780\" tg-height=\"452\"/></p><p>Separately, Google has hinted that it is working on a new TPU to compete with the Nvidia H100. Google researcher Jouppi said in an interview with Reuters that Google has a \"production line for the chips of the future.\"</p><p><strong>Microsoft: Secret Weapon Athena</strong></p><p>In any case, Microsoft is still eager to try in this chip dispute.</p><p>Earlier, news broke that a team of 300 people secretly formed by Microsoft began to develop a customized chip called \"Athena\" in 2019.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/ff32948451fc5fb506c50218077142e3\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"608\"/></p><p>According to the original plan, \"Athena\" would use<a href=\"https://laohu8.com/S/TSM\">TSMC</a>Built with the 5nm process, it is expected to reduce the cost of each chip by 1/3.</p><p>If it can be installed on a large scale next year, Microsoft's internal and OpenAI teams can use \"Athena\" to complete model training and reasoning at the same time.</p><p>In this way, the shortage of special computers can be greatly alleviated.</p><p>Bloomberg reported last week that Microsoft's chip division had partnered with AMD to develop Athena chips, which also caused AMD's stock price to rise 6.5% on Thursday.</p><p>But a person familiar with the matter said that AMD is not involved, but is developing its own GPU to compete with Nvidia, and AMD has been discussing the design of the chip with Microsoft because Microsoft is expected to buy the GPU.</p><p><strong>Amazon: Already grabbed a position</strong></p><p>In the chip race with Microsoft and Google, Amazon seems to have taken a lead.</p><p>In the past decade, Amazon has maintained its competitive advantage over Microsoft 
and Google in cloud computing services by providing more advanced technology and lower prices.</p><p>In the next ten years, Amazon is also expected to continue to maintain an advantage in the competition through its own internally developed server chip, Graviton.</p><p>As the latest generation of processors, AWS Graviton3 improves computational performance by up to 25% and floating point performance by up to 2x over the previous generation. And supports DDR5 memory, which increases the bandwidth by 50% compared to DDR4 memory.</p><p>For machine learning workloads, AWS Graviton3 delivers up to 3x better performance than the previous generation and supports bfloat16.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/0d19af72c9e6355e76551f19aa45cb48\" alt=\"\" title=\"\" tg-width=\"1024\" tg-height=\"471\"/></p><p>Cloud services based on Graviton 3 chips are very popular in some regions, even reaching a state of short supply.</p><p>Another advantage of Amazon is that it is currently the only cloud provider that provides standard computing chips (Graviton) and AI-specific chips (Inferentia and Trainium) in its servers.</p><p>As early as 2019, Amazon launched its own AI inference chip-Inferentia.</p><p>It allows customers to run large-scale machine learning inference applications such as image recognition, speech recognition, natural language processing, personalization and fraud detection in the cloud at low cost.</p><p>The latest Inferentia 2 has improved computing performance by 3 times, expanded total accelerator memory by 4 times, increased throughput by 4 times, and reduced latency to 1/10.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/10067987011c905d7adb326f7e2f0f2c\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"614\"/></p><p>After the launch of the original Inferentia, Amazon released its customized chip designed mainly for AI training-Trainium.</p><p>It is optimized for deep learning training workloads, 
including image classification, semantic search, translation, speech recognition, natural language processing, and recommendation engines, among others.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/f3ba57317936b14e104f278577aac690\" alt=\"\" title=\"\" tg-width=\"865\" tg-height=\"382\"/></p><p>In some cases, chip customization can not only reduce costs by an order of magnitude and energy consumption to 1/10, but these customized solutions can provide customers with better services with lower latency.</p><p><strong>Shaking Nvidia's monopoly is not that easy</strong></p><p>But so far, most AI loads are still running on GPUs, and Nvidia produces most of these chips.</p><p>According to previous reports, Nvidia's independent GPU market share reaches 80%, and its high-end GPU market share is as high as 90%.</p><p>In the past 20 years, 80.6% of the world's cloud computing and data centers running AI were driven by Nvidia GPUs. In 21 years, Nvidia stated that about 70% of the world's top 500 supercomputers are driven by its own chips.</p><p>Now, even the Microsoft data center running ChatGPT uses tens of thousands of Nvidia A100 GPUs.</p><p>For a long time, whether it is ChatGPT, which has become the top stream, or models such as Bard and Stable Diffusion, the computing power is provided by Nvidia A100, a chip worth approximately US $10,000 each.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/c33f78dfa5cc8233ff32625a5924fe86\" alt=\"\" title=\"\" tg-width=\"740\" tg-height=\"416\"/></p><p>Not only that, A100 has now become the \"main force\" of artificial intelligence professionals. 
The 2022 Artificial Intelligence Status Report also lists companies that use the A100 supercomputer.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/9d220e98d16ada92a6fd5d48e05fb5cf\" alt=\"\" title=\"\" tg-width=\"920\" tg-height=\"582\"/></p><p>Obviously, Nvidia has monopolized global computing power, and with its own chips, it has dominated the world.</p><p>According to practitioners, compared with general-purpose chips, application-specific integrated circuit (ASIC) chips that Amazon, Google and Microsoft have been developing can perform machine learning tasks faster and consume less power.</p><p>When comparing GPUs and ASICs, Director O 'Donnell used this comparison: \"Driving normally, you can use a Prius, but if you have to use four-wheel drive on the mountain, it will be more appropriate to use a Jeep Wrangler.\"</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/90829da8a6f5a2d8163bf3716f607e95\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"608\"/></p><p>However, despite all the efforts, Amazon, Google and Microsoft all face the challenge-how to convince developers to use these AI chips?</p><p>Now, Nvidia's GPU is dominant, and developers have long been familiar with its proprietary programming language CUDA, which is used to make GPU-driven applications.</p><p>If they switch to custom chips from Amazon, Google or Microsoft, they need to learn a completely new software language. Would they be willing?</p><p></body></html></p>","source":"lsy1569730104218","collect":0,"html":"<!DOCTYPE html>\n<html>\n<head>\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=utf-8\" />\n<meta name=\"viewport\" content=\"width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no\"/>\n<meta name=\"format-detection\" content=\"telephone=no,email=no,address=no\" />\n<title>Is the era of Nvidia's dominance over? 
ChatGPT detonates the Google-Microsoft chip war, and Amazon also enters the game</title>\n<style type=\"text/css\">\na,abbr,acronym,address,applet,article,aside,audio,b,big,blockquote,body,canvas,caption,center,cite,code,dd,del,details,dfn,div,dl,dt,\nem,embed,fieldset,figcaption,figure,footer,form,h1,h2,h3,h4,h5,h6,header,hgroup,html,i,iframe,img,ins,kbd,label,legend,li,mark,menu,nav,\nobject,ol,output,p,pre,q,ruby,s,samp,section,small,span,strike,strong,sub,summary,sup,table,tbody,td,tfoot,th,thead,time,tr,tt,u,ul,var,video{ font:inherit;margin:0;padding:0;vertical-align:baseline;border:0 }\nbody{ font-size:16px; line-height:1.5; color:#999; background:transparent; }\n.wrapper{ overflow:hidden;word-break:break-all;padding:10px; }\nh1,h2{ font-weight:normal; line-height:1.35; margin-bottom:.6em; }\nh3,h4,h5,h6{ line-height:1.35; margin-bottom:1em; }\nh1{ font-size:24px; }\nh2{ font-size:20px; }\nh3{ font-size:18px; }\nh4{ font-size:16px; }\nh5{ font-size:14px; }\nh6{ font-size:12px; }\np,ul,ol,blockquote,dl,table{ margin:1.2em 0; }\nul,ol{ margin-left:2em; }\nul{ list-style:disc; }\nol{ list-style:decimal; }\nli,li p{ margin:10px 0;}\nimg{ max-width:100%;display:block;margin:0 auto 1em; }\nblockquote{ color:#B5B2B1; border-left:3px solid #aaa; padding:1em; }\nstrong,b{font-weight:bold;}\nem,i{font-style:italic;}\ntable{ width:100%;border-collapse:collapse;border-spacing:1px;margin:1em 0;font-size:.9em; }\nth,td{ padding:5px;text-align:left;border:1px solid #aaa; }\nth{ font-weight:bold;background:#5d5d5d; }\n.symbol-link{font-weight:bold;}\n/* header{ border-bottom:1px solid #494756; } */\n.title{ margin:0 0 8px;line-height:1.3;color:#ddd; }\n.meta {color:#5e5c6d;font-size:13px;margin:0 0 .5em; }\na{text-decoration:none; color:#2a4b87;}\n.meta .head { display: inline-block; overflow: hidden}\n.head .h-thumb { width: 30px; height: 30px; margin: 0; padding: 0; border-radius: 50%; float: left;}\n.head .h-content { margin: 0; padding: 0 0 0 9px; float: left;}\n.head 
.h-name {font-size: 13px; color: #eee; margin: 0;}\n.head .h-time {font-size: 12.5px; color: #7E829C; margin: 0;}\n.small {font-size: 12.5px; display: inline-block; transform: scale(0.9); -webkit-transform: scale(0.9); transform-origin: left; -webkit-transform-origin: left;}\n.smaller {font-size: 12.5px; display: inline-block; transform: scale(0.8); -webkit-transform: scale(0.8); transform-origin: left; -webkit-transform-origin: left;}\n.bt-text {font-size: 12px;margin: 1.5em 0 0 0}\n.bt-text p {margin: 0}\n</style>\n</head>\n<body>\n<div class=\"wrapper\">\n<header>\n<h2 class=\"title\">\nIs the era of Nvidia's dominance over? ChatGPT detonates the Google-Microsoft chip war, and Amazon also enters the game\n</h2>\n<h4 class=\"meta\">\n<p class=\"head\">\n<strong class=\"h-name small\">新智元</strong><span class=\"h-time small\">2023-05-09 22:15</span>\n</p>\n</h4>\n</header>\n<article>\n<p><html><head></head><body>ChatGPT has triggered a contention among a hundred schools of thought in the chip industry. Google, Microsoft, and Amazon have entered the chip war one after another. Nvidia may no longer be the dominant player. 
After ChatGPT exploded,<a href=\"https://laohu8.com/S/GOOG\">Google</a>And<a href=\"https://laohu8.com/S/MSFT\">Microsoft</a>The AI war between the two giants has reached a new field-server chips.</p><p>Nowadays, AI and cloud computing have become battlegrounds, and chips have also become the key to reducing costs and winning commercial customers.</p><p>Originally,<a href=\"https://laohu8.com/S/AMZN\">Amazon</a>Big manufacturers such as Microsoft, and Google are all famous for their software, and now they are spending billions of dollars on chip development and production.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7e49f3bef3efc25793e385cfb981184\" alt=\"各大科技巨头研发的AI芯片\" title=\"各大科技巨头研发的AI芯片\" tg-width=\"1080\" tg-height=\"607\"/><span>AI chips developed by major technology giants</span></p><p><strong>ChatGPT explodes, major manufacturers start chip competition</strong></p><p>According to reports from foreign media The Information and other sources, these three major manufacturers have now launched or plan to release 8 servers and AI chips for internal product development, cloud server leasing, or both.</p><p>\"If you can make silicon optimized for AI, there will be a huge victory ahead of you,\" said Glenn O 'Donnell, director of research firm Forrester.</p><p>Will these great efforts definitely be rewarded?</p><p>The answer is, not necessarily.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/92c58bb9a500fa1734275375f5acd50f\" alt=\"\" title=\"\" tg-width=\"500\" tg-height=\"224\"/></p><p><a href=\"https://laohu8.com/S/INTC\">Intel</a>, AMD and<a href=\"https://laohu8.com/S/NVDA\">Nvidia</a>Economies of scale can be benefited, but this is far from the case for Big Tech companies.</p><p>They also face many thorny challenges, such as hiring chip designers and convincing developers to build applications using their custom chips.</p><p>However, major manufacturers have made remarkable progress in this 
field.</p><p>According to published performance data, Amazon's Graviton server chips, as well as AI-specific chips released by Amazon and Google, are already comparable to traditional chip manufacturers in performance.</p><p>There are two main types of chips developed by Amazon, Microsoft and Google for their data centers: standard computing chips and specialized chips used to train and run machine learning models. It is the latter that powers large language models such as ChatGPT.</p><p>Previously,<a href=\"https://laohu8.com/S/AAPL\">Apple</a>Successfully developed chips for iPhone, iPad and Mac, improving the processing of some AI tasks. These big manufacturers may be the inspiration they learned from Apple.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/18da5c3fe29b36b77e2a0c06f1a66858\" alt=\"\" title=\"\" tg-width=\"728\" tg-height=\"382\"/></p><p>Among the three major manufacturers, Amazon is the only cloud service provider that provides two chips in servers. The acquisition of Israeli chip designer Annapurna Labs in 2015 laid the foundation for these efforts.</p><p>Google launched a chip for AI workloads in 2015 and is developing a standard server chip to improve server performance in Google Cloud.</p><p>In contrast, Microsoft's chip research and development started late, starting in 2019, and recently, Microsoft has accelerated the timeline of launching AI chips specially designed for LLM.</p><p>The popularity of ChatGPT has ignited the excitement of users all over the world for AI. This has further promoted the strategic transformation of the three major manufacturers.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/ca3cd473a03b6e5160c3681d1b2eff89\" alt=\"\" title=\"\" tg-width=\"800\" tg-height=\"534\"/></p><p>ChatGPT runs on Microsoft's Azure cloud and uses tens of thousands of Nvidia A100s. 
Whether it is ChatGPT or other OpenAI software integrated into Bing and various programs, it requires so much computing power that Microsoft has allocated server hardware to the internal team developing AI.</p><p>At Amazon, Chief Financial Officer Brian Olsavsky told investors on an earnings call last week that Amazon plans to shift spending from its retail business to AWS, in part by investing in the infrastructure needed to support ChatGPT.</p><p>At Google, the engineering team responsible for manufacturing the tensor processing unit has moved to Google Cloud. It is reported that cloud organizations can now develop roadmaps for TPUs and the software running on them, hoping to let cloud customers rent more TPU-powered servers.</p><p><strong>Google: TPU V4 specially tuned for AI</strong></p><p>As early as 2020, Google deployed the most powerful AI chip at the time-TPU v4-on its own data center.</p><p>However, it was not until April 4 this year that Google announced the technical details of this AI supercomputer for the first time.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/0690ee25b0f2aaecc65aeb54a8db428b\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"593\"/></p><p>Compared with TPU v3, the performance of TPU v4 is 2.1 times higher, and after integrating 4096 chips, the performance of supercomputing is improved by 10 times.</p><p>At the same time, Google also claims that its own chips are faster and more energy-efficient than Nvidia A100. For systems of comparable size, TPU v4 can provide 1.7 times better performance than Nvidia A100, while also improving energy efficiency by 1.9 times.</p><p>For similarly sized systems, TPU v4 is 1.15 times faster than A100 on BERT and approximately 4.3 times faster than IPU. 
For ResNet, TPU v4 is 1.67 times and approximately 4.5 times faster, respectively.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/101223a8a12806cfc2fa2a826f048f98\" alt=\"\" title=\"\" tg-width=\"780\" tg-height=\"452\"/></p><p>Separately, Google has hinted that it is working on a new TPU to compete with the Nvidia H100. Google researcher Jouppi said in an interview with Reuters that Google has a \"production line for the chips of the future.\"</p><p><strong>Microsoft: Secret Weapon Athena</strong></p><p>In any case, Microsoft is still eager to try in this chip dispute.</p><p>Earlier, news broke that a team of 300 people secretly formed by Microsoft began to develop a customized chip called \"Athena\" in 2019.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/ff32948451fc5fb506c50218077142e3\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"608\"/></p><p>According to the original plan, \"Athena\" would use<a href=\"https://laohu8.com/S/TSM\">TSMC</a>Built with the 5nm process, it is expected to reduce the cost of each chip by 1/3.</p><p>If it can be installed on a large scale next year, Microsoft's internal and OpenAI teams can use \"Athena\" to complete model training and reasoning at the same time.</p><p>In this way, the shortage of special computers can be greatly alleviated.</p><p>Bloomberg reported last week that Microsoft's chip division had partnered with AMD to develop Athena chips, which also caused AMD's stock price to rise 6.5% on Thursday.</p><p>But a person familiar with the matter said that AMD is not involved, but is developing its own GPU to compete with Nvidia, and AMD has been discussing the design of the chip with Microsoft because Microsoft is expected to buy the GPU.</p><p><strong>Amazon: Already grabbed a position</strong></p><p>In the chip race with Microsoft and Google, Amazon seems to have taken a lead.</p><p>In the past decade, Amazon has maintained its competitive advantage over Microsoft 
and Google in cloud computing services by providing more advanced technology and lower prices.</p><p>In the next ten years, Amazon is also expected to continue to maintain an advantage in the competition through its own internally developed server chip, Graviton.</p><p>As the latest generation of processors, AWS Graviton3 improves computational performance by up to 25% and floating point performance by up to 2x over the previous generation. And supports DDR5 memory, which increases the bandwidth by 50% compared to DDR4 memory.</p><p>For machine learning workloads, AWS Graviton3 delivers up to 3x better performance than the previous generation and supports bfloat16.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/0d19af72c9e6355e76551f19aa45cb48\" alt=\"\" title=\"\" tg-width=\"1024\" tg-height=\"471\"/></p><p>Cloud services based on Graviton 3 chips are very popular in some regions, even reaching a state of short supply.</p><p>Another advantage of Amazon is that it is currently the only cloud provider that provides standard computing chips (Graviton) and AI-specific chips (Inferentia and Trainium) in its servers.</p><p>As early as 2019, Amazon launched its own AI inference chip-Inferentia.</p><p>It allows customers to run large-scale machine learning inference applications such as image recognition, speech recognition, natural language processing, personalization and fraud detection in the cloud at low cost.</p><p>The latest Inferentia 2 has improved computing performance by 3 times, expanded total accelerator memory by 4 times, increased throughput by 4 times, and reduced latency to 1/10.</p><p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/10067987011c905d7adb326f7e2f0f2c\" alt=\"\" title=\"\" tg-width=\"1080\" tg-height=\"614\"/></p><p>After the launch of the original Inferentia, Amazon released its customized chip designed mainly for AI training-Trainium.</p><p>It is optimized for deep learning training workloads, 
including image classification, semantic search, translation, speech recognition, natural language processing, and recommendation engines.

In some cases, custom silicon can cut costs by an order of magnitude and reduce energy consumption to one tenth, while also letting providers serve customers at lower latency.

Shaking Nvidia's monopoly is not that easy

So far, however, most AI workloads still run on GPUs, and Nvidia produces most of those chips.

According to earlier reports, Nvidia holds 80% of the discrete GPU market, and as much as 90% of the high-end segment.

In 2020, 80.6% of the cloud computing and data centers running AI worldwide were powered by Nvidia GPUs. In 2021, Nvidia said that about 70% of the world's top 500 supercomputers ran on its chips.

Today, even the Microsoft data centers running ChatGPT use tens of thousands of Nvidia A100 GPUs.

For a long time, the computing power behind ChatGPT, now the biggest hit in AI, as well as models such as Bard and Stable Diffusion, has come from the Nvidia A100, a chip worth approximately US$10,000 apiece.

Beyond that, the A100 has become the workhorse of artificial intelligence professionals.
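The scale of those figures is easy to underestimate. A minimal back-of-envelope sketch, assuming the round numbers quoted above ("tens of thousands" of A100s at roughly US$10,000 each; both are public estimates, not disclosed figures):

```python
# Back-of-envelope cost of a ChatGPT-scale A100 cluster, using the
# article's round figures. Both inputs are rough public estimates,
# not disclosed numbers.
a100_unit_price_usd = 10_000   # "approximately US$10,000" per chip
gpu_count = 10_000             # "tens of thousands" -> lower bound

hardware_cost = a100_unit_price_usd * gpu_count
print(f"GPU hardware alone: ${hardware_cost:,}")  # GPU hardware alone: $100,000,000
```

Even at this lower bound, the GPU bill alone reaches nine figures before counting networking, power, or operations.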
The State of AI Report 2022 likewise lists companies that use A100 supercomputers.

Clearly, Nvidia has cornered global computing power and, on the strength of its chips, dominates the field.

According to industry practitioners, the application-specific integrated circuits (ASICs) that Amazon, Google, and Microsoft have been developing can run machine learning tasks faster than general-purpose chips while consuming less power.

Comparing GPUs with ASICs, Forrester director Glenn O'Donnell offered this analogy: "For everyday driving you can use a Prius, but if you need four-wheel drive in the mountains, a Jeep Wrangler is the better fit."

However, despite all these efforts, Amazon, Google, and Microsoft still face the same challenge: how do they convince developers to use these AI chips?

Today, Nvidia's GPUs are dominant, and developers have long been familiar with CUDA, its proprietary programming platform for building GPU-driven applications.

If they switch to custom chips from Amazon, Google, or Microsoft, they will need to learn an entirely new software stack. 
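The switching cost described above can be sketched in miniature. Everything below is a hypothetical illustration, not a real vendor API: the point is simply that each accelerator family needs its own tuned kernel for every operation, so adopting a new chip means filling in a missing code path, and its performance tuning, op by op.

```python
# Conceptual sketch of accelerator lock-in. The backend names and
# kernel table are hypothetical stand-ins, not real vendor APIs.
def matmul_reference(a, b):
    """Portable fallback: plain-Python matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# A framework ships one tuned kernel per backend it supports.
KERNELS = {
    "cuda_gpu": matmul_reference,  # in reality: a hand-tuned CUDA kernel
    "custom_asic": None,           # new chip: kernel not written yet
}

def matmul(a, b, backend):
    kernel = KERNELS.get(backend)
    if kernel is None:
        # The real cost of switching: every missing kernel must be
        # ported, tuned, and validated before the chip is usable.
        raise NotImplementedError(f"no {backend!r} kernel yet")
    return kernel(a, b)

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]], "cuda_gpu"))  # [[19, 22], [43, 50]]
```

Multiply this gap across hundreds of operations, plus a decade of CUDA libraries, tooling, and developer habit, and the moat the article describes becomes concrete.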
Would they be willing?

Source: 新智元 (https://mp.weixin.qq.com/s/mDDh98MDwqq31IGc3xpj5A)