High-bandwidth memory

HBM3E

The industry's fastest, highest-capacity high-bandwidth memory (HBM) to advance generative AI innovation


Samples now available for Micron HBM3E 12-high 36GB cube

Today's generative AI models require ever-growing volumes of data as they scale to deliver better results and address new opportunities. Micron's 1β memory technology leadership and packaging advancements ensure the most efficient flow of data in and out of the GPU. Micron's 8-high and 12-high HBM3E memory further fuels AI innovation with 30% lower power consumption than competing products.

HBM3E 12-high 36GB cube

Micron begins volume production of industry-leading HBM3E to accelerate AI growth

Micron's HBM3E consumes 30% less power than competing products, helping reduce data center operating costs. The 8-high 24GB solution will be part of NVIDIA H200 Tensor Core GPUs, which will begin shipping in the second calendar quarter of 2024.

Micron's HBM3E, now in volume production. Fueling the AI revolution.

Advancing the rate of AI innovation

Generative AI

Generative AI opens a world of new forms of creativity and expression, like the image above, by using large language models (LLMs) for training and inference. Compute and memory resource utilization affects time to deploy and response time. Micron HBM3E provides higher memory capacity that improves performance and reduces CPU offload for faster training and more responsive queries when inferencing LLMs such as ChatGPT™.

3D illustration of an astronaut made of crystals

Deep learning

AI brings new possibilities to business, IT, engineering, science, medicine and more. As larger AI models are deployed to accelerate deep learning, maintaining compute and memory efficiency is important to address performance, costs and power to ensure benefits for all. Micron HBM3E boosts memory performance with a focus on energy efficiency and improved performance per watt, reducing the time to train LLMs such as GPT-4.

Light blue, pink and yellow streams of data combining into various datasets on the right

High-performance computing

Scientists, researchers, and engineers are challenged to discover solutions for climate modeling, curing cancer and renewable and sustainable energy resources. High-performance computing (HPC) speeds time to discovery by executing highly complex algorithms and advanced simulations that use large datasets. Micron HBM3E provides higher memory capacity and improves performance by reducing the need to distribute data across multiple nodes, accelerating the pace of innovation.

An artist in a dimly lit room creating a colorful abstract design on a tablet with a stylus

HBM3E built for AI and supercomputing with industry-leading process technology

Micron extends its industry-leading performance across our data center product portfolio with HBM3E, delivering faster data rates, improved thermal response and 50% higher monolithic die density within the same package footprint as the previous generation.

Micron's HBM3 Gen3


HBM3E provides the memory bandwidth to fuel AI compute cores

With advanced CMOS innovations and industry-leading 1β process technology, Micron HBM3E provides memory bandwidth that exceeds 1.2 TB/s.1

Blurred image of female wearing AI goggles

HBM3E unlocks the world of generative AI

With 50% more memory capacity2 per 8-high, 24GB cube, HBM3E enables training at higher precision and accuracy.

Splash of water on neon background

HBM3E delivers increased performance per watt for AI and HPC workloads

Micron designed an energy-efficient data path that reduces thermal impedance and enables greater than 2.5x improvement in performance/watt3 compared to the previous generation.

Generative AI illustration of a modern high-tech server room in purple neon colors

HBM3E pioneers training of multimodal, multitrillion-parameter AI models

With increased memory bandwidth that improves system-level performance, HBM3E reduces training time by more than 30%4 and allows >50% more queries per day.5,6

A woman using a laptop to chat with an AI chatbot, asking for the answers she wants

Micron HBM3E: The foundation for unlocking unprecedented compute possibilities

Micron HBM3E is the fastest, highest-capacity high-bandwidth memory to advance AI innovation — an 8-high, 24GB cube that delivers over 1.2 TB/s bandwidth and superior power efficiency. Micron is your trusted partner for memory and storage innovation. 

African American data engineer holding a laptop while working with a supercomputer in a server room lit by blue light

Frequently asked questions

Micron's HBM3E 8-high 24GB and HBM3E 12-high 36GB deliver industry-leading performance with bandwidth greater than 1.2 TB/s and 30% lower power consumption than any competing product on the market.

Micron HBM3E 8-high 24GB will begin shipping in NVIDIA H200 Tensor Core GPUs in the second calendar quarter of 2024. Micron HBM3E 12-high 36GB samples are available now.

Micron's HBM3E 8-high and 12-high modules deliver industry-leading pin speed greater than 9.2 Gb/s and can support backward-compatible data rates of first-generation HBM2 devices.

Micron's HBM3E 8-high and 12-high solutions deliver industry-leading bandwidth of more than 1.2 TB/s per placement. HBM3E has 1024 I/O pins, and at a pin speed greater than 9.2 Gb/s it achieves a rate higher than 1.2 TB/s.
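The bandwidth figure follows directly from the interface geometry: total bandwidth is pin count times per-pin data rate, divided by 8 to convert bits to bytes. A minimal sketch of that arithmetic, using the figures quoted on this page (the helper names are illustrative, not Micron API):

```python
def stack_bandwidth_gbs(io_pins: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth of an HBM stack in GB/s: pins x Gb/s-per-pin / 8 bits-per-byte."""
    return io_pins * pin_speed_gbps / 8


def pin_speed_for_target(io_pins: int, target_gbs: float) -> float:
    """Per-pin data rate (Gb/s) needed to hit a target bandwidth in GB/s."""
    return target_gbs * 8 / io_pins


# 1024 I/O pins at a 9.2 Gb/s pin speed:
print(stack_bandwidth_gbs(1024, 9.2))    # 1177.6 GB/s
# Pin speed needed to reach the quoted 1.2 TB/s (1200 GB/s):
print(pin_speed_for_target(1024, 1200))  # 9.375 Gb/s
```

The second call shows why the page says pin speed "greater than" 9.2 Gb/s: at exactly 9.2 Gb/s the interface lands just under 1.2 TB/s, so the 1.2+ TB/s figure implies a rate of at least 9.375 Gb/s.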

Micron’s industry-leading HBM3E 8-high provides 24GB capacity per placement. The recently announced Micron HBM3E 12-high cube will provide an eye-opening 36GB of capacity.


HBM2 offers 8 independent channels running at 3.6 Gb/s per pin, providing up to 410 GB/s of bandwidth in 4GB, 8GB and 16GB capacities. HBM3E offers 16 independent channels and 32 pseudo channels. Micron's HBM3E delivers pin speed greater than 9.2 Gb/s at an industry-leading bandwidth of more than 1.2 TB/s per placement. Micron's HBM3E provides 24GB of capacity with an 8-high stack and 36GB with a 12-high stack. Micron's HBM3E delivers 30% lower power consumption than competitors.
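The channel counts above can be reconciled with the 1024 I/O pins mentioned earlier: both generations keep the same total interface width and differ in how it is subdivided and clocked. A small sketch, assuming the standard JEDEC per-channel widths (128 bits for HBM2, 64 bits for HBM3-class devices), which are not stated on this page:

```python
def bus_width_bits(channels: int, bits_per_channel: int) -> int:
    """Total data-interface width: independent channels x bits per channel."""
    return channels * bits_per_channel


# HBM2: 8 independent channels of 128 bits each.
hbm2_width = bus_width_bits(8, 128)

# HBM3E: 16 independent channels of 64 bits each; each channel is further
# split into two 32-bit pseudo channels, giving the 32 pseudo channels
# mentioned in the answer above.
hbm3e_width = bus_width_bits(16, 64)
pseudo_channels = 16 * 2

# Both generations keep the same 1024-bit data interface; the generational
# bandwidth gain comes from the much higher per-pin data rate.
print(hbm2_width, hbm3e_width, pseudo_channels)  # 1024 1024 32
```

The takeaway is that HBM3E's bandwidth jump comes from faster, narrower channels rather than a wider bus, which also improves concurrency for many small accesses.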

Please see our Product Brief.

Featured resources

1.  Data rate testing estimates based on shmoo plot of pin speed performed in a manufacturing test environment.
2.  50% more capacity for same stack height.
3.  Power and performance estimates based on simulation results of workload use cases.
4.  Based on internal Micron model referencing an ACM Publication, as compared to the current shipping platform (H100).
5.  Based on internal Micron model referencing Bernstein’s research report, NVIDIA (NVDA): A bottoms-up approach to sizing the ChatGPT opportunity, February 27, 2023, as compared to the current shipping platform (H100).
6.  Based on system measurements of a commercially available H100 platform and linear extrapolation.


Customer support

Need to get a hold of us? Contact our support team and get contact information for our various locations.


Sales

Contact our sales support team directly by completing the Sales Support form on our Contact Us page.

Order a Micron sample

Your online source for placing and tracking orders for Micron memory samples.

Downloads & technical documentation

Dive deeper into product features or functionality and get design guidance.