Advanced Micro Devices, Inc. (NASDAQ:AMD) BofA Securities 2024 Global Technology Conference Call June 5, 2024 11:40 AM ET
Company Participants
Jean Hu – Executive Vice President & Chief Financial Officer
Conference Call Participants
Vivek Arya – Bank of America Securities
Vivek Arya
I’m Vivek Arya. I cover semiconductors and semi-cap equipment at Bank of America Securities. And I’m absolutely delighted and honored to have Jean Hu, Executive Vice President and CFO of Advanced Micro Devices, joining us this morning.
I’ll start with a few of my prepared questions, but if you have anything you would like to bring up, please feel free to raise your hand, wait for a mic, and we’ll be sure to get you in.
With that, very warm welcome to you, Jean. Really appreciate you joining us this morning.
Jean Hu
Yeah, thank you so much for having us, and thank you all for joining us this morning.
Vivek Arya
Yeah, we had to move to a bigger room, right?
Jean Hu
Yeah.
Vivek Arya
Lot of interest in AMD.
Question-and-Answer Session
Q – Vivek Arya
So, Jean, maybe, a very eventful week as we were just talking, Computex this week, Lisa gave the keynote session. I think it’ll be really great if you could just kind of help us get up to speed on what are the main announcements from Computex? What gets you the most excited about the opportunities for AMD?
Jean Hu
Yeah. Thank you, Vivek. Let me start by sharing some key highlights from this week’s exciting announcements by AMD at Computex. You’re right, Lisa Su gave the keynote, and she unveiled AMD’s leadership CPU, GPU and NPU architectures, which are going to power end-to-end AI infrastructure. As you probably know, AMD is the only company that has end-to-end solutions, from CPU to GPU to NPU, to address the significant AI demand from the data center to end-user devices like gaming and the AI PC, and even to the edge with our embedded business. So, it is an exciting time, I would say.
If you look at the announcements, the first thing is, we announced our AI PC Ryzen AI 300 Series processors for [indiscernible] notebooks. This is literally one single chip that includes AMD’s Zen 5 CPU cores, the latest GPU cores, and the most powerful NPU core. It’s an incredibly innovative product, with up to 50 TOPS — Microsoft’s AI PC requirement is actually 40 TOPS. So, we do have leadership performance against our competition. On the AI PC side, we also announced the Ryzen 9000 desktop processors. Same thing — two times the AI performance. So, it’s quite exciting. And both products will be available on shelves in July, where people can buy them.
And then, on the data center side, AMD is the only company that has both the product portfolio and the technology to provide CPUs and GPUs to the data center. On the CPU side, we previewed Turin, our Gen 5 EPYC CPU processor. This generation will have up to 192 cores per CPU. It will extend our leadership in performance, power efficiency and TCO across all server platforms. So, it is exciting, and it will be available in the second half. After the very successful Gen 4 family, which continues to gain market share, this is another leadership product that will drive future share gains.
And then, of course, on the GPU side, Lisa unveiled our expanded GPU roadmap, which is now on an annual cadence. We’ll introduce a new product family every year, matching our competitors. As you know, we launched MI300 late last year. It quickly became the fastest-ramping revenue product in AMD’s history. And if you look at MI300 today, it provides leading inference performance and is very competitive on the training side, too. And then later this year, we’ll have MI325, which will expand the memory capacity to 288 gigabytes of HBM3E memory, which, again, when you compare it to [Technical Difficulty] memory capacity, is quite significant. The way to think about it is, one server with eight GPUs can actually support a trillion-parameter large language model. That’s the advantage of very large memory capacity and bandwidth. So, we are really excited about the product.
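As a back-of-the-envelope check of the trillion-parameter claim above: eight GPUs at 288 GB each gives roughly 2.3 TB of HBM, while a trillion parameters stored in FP16 occupy about 2 TB. The sketch below assumes FP16 weights (2 bytes per parameter) and ignores activation and KV-cache memory — an illustrative calculation, not an AMD sizing guide.

```python
# Hedged sketch: does a trillion-parameter model fit in one
# 8-GPU server's HBM, assuming FP16 (2 bytes/parameter) weights?

GPUS_PER_SERVER = 8
HBM_PER_GPU_GB = 288                       # MI325-class HBM3E capacity cited above

total_hbm_gb = GPUS_PER_SERVER * HBM_PER_GPU_GB        # 2304 GB
weights_gb = 1_000_000_000_000 * 2 / 1e9               # 2000 GB of FP16 weights

print(f"Total HBM: {total_hbm_gb} GB, FP16 weights: {weights_gb:.0f} GB")
print("Fits on one server:", weights_gb <= total_hbm_gb)   # True
```

The headroom shrinks once activations and KV cache are counted, which is why the capacity lead matters for inference.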
In 2025, we’ll have MI350, which is based on AMD’s CDNA 4 architecture and a 3-nanometer process node, and, again, supports 288 gigabytes of HBM3E memory. Generation-over-generation, this product improves performance by 35 times. That’s actually very similar to our competition’s generation-over-generation performance improvement, and this product will compete with the Blackwell B200. And when you look at the memory capacity and bandwidth, the memory capacity is actually 1.5 times that of the Blackwell B200. So, we continue to lead on memory capacity and inference performance. That’s really exciting.
Of course, then in 2026, we’ll introduce MI400 alongside the competition’s Rubin GPU. The way to think about MI400 is that it’s going to be based on another new architecture, which we call CDNA Next. We’ll continue to extend the performance not only on the inference side, but also to support larger-cluster training.
So, it is exciting when you look at our portfolio and the product announcements from this week. We are very excited about the large end-to-end AI opportunity and our portfolio to address it.
Vivek Arya
Excellent. Thank you, Jean. Thank you for the overview. So, let’s start with everyone’s two favorite words, A and I. On MI300, you raised the forecast for this year from over $3.5 billion to over $4 billion. Is that a supply-constrained number? Let’s say, if you get enough supply in memory and [core watts] (ph) and so forth, can that number be $5 billion or $6 billion? What is dictating that number to be $4 billion and not higher this year?
Jean Hu
Yeah. As I said earlier, we literally launched MI300X last December, right? We have ramped MI300X past $1 billion in less than two quarters. And when you think about it [Technical Difficulty] and today — we talked about this on the last earnings call — we have more than 100 customers that we are engaging with, either in the development stage or in the deployment stage. So, we updated the $4 billion-plus number at the last earnings call. It’s really based on the engagement, the pace, the design wins and the backlog that we have with our customers.
And our supply chain team has done an excellent job. As you know, the supply chain was quite tight. Even in the first half of this year, we continued to face a very tight supply chain situation. So, our job is to really continue to push, working with the customers through the different processes. The ramping process can be complex, right? There are so many different models, different workloads, different customers. So, you work with them, go through the initial POC stage, then [Technical Difficulty] production, then deployment. Different customers are at different stages of that process. So that’s what we are working on.
We feel like the progress we are making actually exceeds our expectations, because we have made tremendous progress with the ROCm software, so we can help customers bring up their production much more quickly. And we do say that we have more than $4 billion of supply this year, and you should expect us to update you as we make more progress going forward.
Vivek Arya
Got it. Does the launch of the 325X in Q4 provide upside potential also?
Jean Hu
As you know, when you launch a product, it typically takes some time to ramp up, right? So, we’ll launch it in Q4, but meaningful revenue will come next year.
Vivek Arya
Okay. And then finally, from a supply perspective, are you getting adequate support, especially from the memory companies? Because there are often reports about supply constraints, or about product not being qualified, issues, et cetera. Are you satisfied with the memory supply that you have?
Jean Hu
Yeah. We are [Technical Difficulty] with all three memory suppliers. As you can see from the way we ramped MI300X, it was a very significant, very fast ramp, and we got very good support from our suppliers. I think capacity is still tight, but our team is working with them closely to ensure we have enough supply to support our customers.
Vivek Arya
Got it. So, it’s more a supply question rather than a technical or qualification…
Jean Hu
Oh, absolutely. Yeah. If you look at the successful ramp of our MI300X in less than two quarters…
Vivek Arya
That’s very impressive.
Jean Hu
Yeah.
Vivek Arya
Got it. Makes sense. Actually, let me phrase the question this way. Most hyperscalers pretty much have a good sense of what they will deploy over the next several quarters, because they have to get the land, the power and all those physical conditions ready before they start getting all the electronics and compute, networking and so forth. So, are those decisions still dynamic as we go toward the end of the year? Where I’m coming from is, just because your competitor has a very large product introduction in Q4, does that crowd out your opportunity near term in any way? Or is that not how decisions are made — are they already kind of set as to which product will go where?
Jean Hu
Yeah, thanks for the question. You’re absolutely right. Their planning is not just quarters out, right? When you talk about land, power, data center space — when you think about how long it takes to line those things up, it’s actually multiyear. As we always say on our earnings calls, we have been working with our customers, hyperscale and enterprise customers, on a multiyear roadmap. So, the kinds of decisions they are making and the CapEx they’re spending have got to be multiyear. You’ve got to plan out not only this generation but future generations.
The way to think about it is, as you know, they have to invest significant resources and we also have to invest significant resources. So, that’s how we work with the customers. I don’t think that if NVIDIA introduces some new product, it will change our opportunity, the trajectory for AMD. It’s a very large market. We know it went from nothing to $40 billion, $50 billion, now [$200 billion] (ph). And going forward, it’s going to grow very significantly. So, as a strong contender in this market, we do think the trajectory and the pace of our progress will continue to improve.
Vivek Arya
Got it. How do we think, Lisa, about — sorry, Jean — about the strategic positioning? On one side, you have a competitor that has been in the market for a long time and has all the software and developer support and scale and whatnot. On the other side, you have a lot of custom chips, which, just due to their nature, claim to be optimized and cost effective. So, how is AMD carving a niche for itself and making sure that it is sustainable over time?
Jean Hu
Yeah, Vivek, that’s a great question. Maybe let me just take a step back to look at AMD’s history, right? I’m new at AMD. But when you look at AMD’s history, since Lisa and Mark Papermaster joined the company, they have had a strategy to drive high-performance compute on both the CPU and GPU platforms. Then later on, we added the NPU and adaptive FPGAs.
But over the journey of the last decade, AMD, on the GPU side, has been just like NVIDIA. We actually share the same legacy, starting from gaming graphics and then getting into the high-performance computing — HPC — market. Even the software is the same story: AMD’s is ROCm. It was not [talked a lot] (ph) about, because it was literally just in the HPC market. But we started that software investment quite a while ago.
So, I do think it’s very important to understand that in the GPU market, there are only two players who share that same legacy. And our team has as deep an understanding of GPUs as NVIDIA. That’s why, when you look at the ROCm software, we have been able to make tremendous progress in a very short period of time. It’s because we actually understand how the hardware works with the software, how we can tune it to make sure the GPUs run really efficiently.
So, from that perspective, I think we are a new entrant to the AI GPU market. But if you look at the progress we are making, the competitive positioning of our products and how much progress we have made on software to catch up, I think we can be very competitive. And it’s such a large market. From our perspective, we talk about more than $4 billion expected for this year, and we’ll continue to make progress. But if you look at it from the company’s perspective, it’s such a big market that we can absolutely make continued progress to address the opportunities here.
And about ASICs, Vivek — we all know that in the semiconductor market, there are always ASICs, especially when a market is mature, right? Because when the functionality is very fixed, the ASIC is cheaper. So, it’s no surprise how we think about the AI market. We talk about $400 billion, and we do think some portion of that will be an ASIC opportunity in 2027, 2028. For us right now, we’re really focused on the merchant opportunities. But if you look at AMD, we have been doing gaming semi-custom solutions for a long, long time.
Vivek Arya
[indiscernible]
Jean Hu
Exactly. It’s about what the customer needs. If a customer needs us to do something, we absolutely will do it. But right now, the merchant market is the largest market because the models change so quickly. I think it’s going to be hard for ASICs, especially when two suppliers, both NVIDIA and AMD, have an annual cadence and each year bring a new product to address customers’ needs. The key question is when the functionality gets fixed enough that they can use an ASIC.
Vivek Arya
Got it. The last question on that: you mentioned the $400 billion addressable opportunity out in ’27, ’28. Does AMD still feel confident and comfortable with that kind of addressable opportunity? And then, more important than that, what are your market share aspirations as part of that?
Jean Hu
Yeah. As you know — you were there — when Lisa talked about the $400 billion opportunity, it was a huge surprise to everyone. But since then, if you look at how the market has been evolving, absolutely, in 2023 and in 2024 this year, it looks exactly as we projected from a market opportunity perspective. So, we do think there are more and more proof points, not only from CapEx spending but from productivity improvement — people are getting a return on investment to justify the market opportunity. Of course, as you know, there are a lot of assumptions. It’s about [framing] (ph) the trajectory of the market rather than precisely whether it’s $300 billion or $400 billion, or whether it’s ’27 or ’28. But directionally, we feel strongly it’s there. It’s consistent with what we said.
I think right now, we are quite small. We are the new entrant. But we do have a set of competitive products we feel pretty good about. And the reason we are accelerating our roadmap is that we see demand continuing to exceed our expectations. We see customers needing two suppliers for this very, very large market.
Vivek Arya
Got it. Now, on the data center server CPU side, could you give us your perspective on both the AI workloads and the non-AI workloads? Because, again, there is a perception that all these AI CapEx dollars are cannibalizing a lot of the non-AI and traditional server CPU demand. And you get to see both sides. So, it would be really useful to get your perspective on that.
Jean Hu
Yeah, thank you for the great question. It is actually really interesting to look at the AI workload across hyperscale, cloud and enterprise data centers. Our view, AMD’s view, is that different workloads need different compute engines, even for AI. If you look at AI inference, a lot of inference was done on servers in the past. Of course, GPUs have a tremendous advantage in large language models, for both training and inference. But at the end of the day, what the customer really wants is TCO — what’s the cheapest way they can do their job, managing their workloads and managing their applications.
So, when you think about all the different applications across different data centers globally, their foundational workloads will continue to run on CPUs. Those kinds of workloads are actually much more efficient that way and [get the best] (ph) TCO — things like ERP systems, or, as Lisa talked about, Instagram, Facebook, chatbots and all the different things that can run efficiently on CPUs. Even some of the inference, from what we’re hearing from enterprise customers, they can run on CPUs. But for the large language models, definitely, we think the GPU is much more the right compute for those models, training and inference.
So, we do have a broad product portfolio, and we can address customer needs. Today, when we go to market, especially with enterprise customers, we show them both, and they choose what’s best for them.
Vivek Arya
Got it. So, just a more near-term question. I think Q1 server CPUs were sort of in line with normal seasonality, right — down high-single digits, low-double digits and so forth. How are you thinking about the rest of the year? Do you think seasonality is the right way to model it? Or, given the new product introductions, could that help drive some above-seasonal trend in the back half?
Jean Hu
Yeah. We actually have made tremendous progress with our CPU market share. At the end of Q1, our revenue market share reached 33%. We did guide Q2 to strong year-over-year double-digit growth, and we do expect the server business to be up sequentially in Q2. We think in the second half, there are some tailwinds that will help us continue to drive the server business to grow faster than in the first half.
I think what you mentioned is that we are going to launch Turin in the second half. That definitely will help us, because it will have leadership performance and continue to drive TCO for our customers. But the major ramp is actually going to be in 2025. In the second half, what we are seeing is that even though demand in the cloud market continues to be mixed, our Gen 4 family of products continues to gain a lot of traction, because for first-party workloads, we have quite significant market share. And now, on third-party workloads, we’re really making progress because of the TCO benefit.
And then secondly, in the enterprise market, one of the things I pay a lot of attention to as a CFO is that with our Gen 4 family of products, we can actually provide the same compute with 40% fewer servers. What that means is not only that your CapEx can be cut by nearly half, but your operating cost can also be reduced by 40%. So, for any CFO, and of course CIOs, this is huge. And what we have done is enhance our go-to-market approach over the last 18 months. You need to have feet on the street; you need to talk to each enterprise customer, their CIO, their CFO, about the TCO benefit. That work is now showing its benefit. That’s really important, because you have to convert one large company at a time. And what we are seeing is momentum in converting large enterprise customers to capture the TCO benefit. So, we do think our Gen 4 product family is going to get more traction in the second half and drive more market share gains. And hopefully, the enterprise replacement market will be better, too.
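The consolidation math above can be sketched in a few lines. This is an illustrative calculation only — the 100-server fleet and the normalized per-server costs are assumptions for the example, not AMD figures; the 40% server reduction is the number cited in the discussion.

```python
# Hedged sketch of the server-consolidation TCO argument:
# same total compute delivered with 40% fewer servers.

OLD_SERVERS = 100                      # hypothetical existing fleet
REDUCTION = 0.40                       # 40% fewer servers (figure cited above)

new_servers = round(OLD_SERVERS * (1 - REDUCTION))    # 60 servers

# With per-server operating cost roughly constant, opex scales
# with server count, so the opex saving matches the reduction.
opex_saving = 1 - new_servers / OLD_SERVERS

print(f"Servers: {OLD_SERVERS} -> {new_servers}")
print(f"Operating cost saving: {opex_saving:.0%}")    # 40%
```

The CapEx side depends on per-server pricing of the new generation, which is why the speaker frames it as "nearly half" rather than exactly 40%.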
Vivek Arya
Got it. Makes sense. Then, moving quickly to PCs — a lot of announcements on the AI PC, right? You mentioned that Lisa announced the AMD Strix Point product with 50 TOPS. There seems to be this race on TOPS. I think Intel Lunar Lake is like 45 or so, or 48; I think Qualcomm is 45 or so. How important is just chasing this kind of TOPS performance? First of all, when do AI PCs become tangible? And secondly, is it really just this TOPS performance that will be the differentiator between these different product offerings?
Jean Hu
Yeah. It’s a great question. When you think about the AI PC, AMD was actually the first to introduce the Ryzen 7000 Series [Technical Difficulty] first generation of AI PC. And then we had the second generation, but there were not many applications. So, even though we had AI PCs and sold millions of units, fundamentally, the AI PC requires a lot of AI applications. At the Microsoft Build event, they talked about so many different applications coming in the second half — that’s the most important thing.
I think when you think about the AI PC, we introduced not only the CPU and GPU but also the NPU. The TOPS are important because the NPU, which offloads and runs all the AI applications, really needs the performance to handle whatever applications are out there. So, we need to see which applications can fundamentally improve work productivity and content creation. What we are very excited about is that if those applications are introduced, it matters how many TOPS you have, how you can handle graphics with the GPU, and how you can handle the CPU side — you do need all three, right? For the PC to continue to be a productivity tool for enterprises and a content creation experience for consumers, you need all three. I think that’s why we feel strongly AMD is best positioned — because we have the best CPU, the best GPU and also the best NPU.
Vivek Arya
Got it. ARM recently said that they expect ARM-based PCs to be 50% of the market over the next five years. I imagine you would not agree with that perspective.
Jean Hu
No.
Vivek Arya
Right. I thought so. And by the way, it’s not just the total PC market — they actually said Windows-based, because they already have Apple, right, which is 10%, 12%. So, effectively, they’re saying they’ll be over half the market. So, what can the PC side do? Because we have seen Microsoft loudly support Qualcomm, with that kind of exclusivity until the end of the year, et cetera. So, how big of a threat is ARM? And let’s say it becomes bigger — can AMD pivot to ARM-based architectures also?
Jean Hu
Yeah. No, this is a great question. If you think about ARM PCs, they have been around for a long time, right? This is not new. One of the things, when you really think about the customers, enterprise or consumer: in the end, do they care if it’s x86 or ARM? It’s all about the performance and the power — the battery life. In the end, that’s what people fundamentally care about; economics dictate that. So, from that perspective, I do think x86 has been getting more and more competitive, too.
And if you talk to our CTO, Mark Papermaster, he will say that, fundamentally, at the architecture level, there is not much difference in performance between ARM and x86. It’s just that they operate in different ecosystem environments. We have been operating in the x86 ecosystem for the last 15, 20 years, and all your software, everything, is built on that ecosystem. For ARM, absolutely, their use cases and applications today are probably easier for an ARM PC to handle. But the reason x86 gets burdened a little bit is that you have to address backward compatibility everywhere.
So, we do think x86 will continue to get more and more competitive in providing the performance and battery life that people want. And from that perspective, do people really ask what’s inside the PC? I do think we are very well positioned, and ARM PC share has remained very low for a long, long time. It has been in that range. The ecosystem is very important.
Vivek Arya
Right. Makes sense. And since you are the CFO, I thought I would ask a financial question in the last 45 seconds, on gross margins. This annual cadence of launching products helps your top-line growth and share, but do you think it can work against your gross margin ambitions, which over the long term are to get to over 57%?
Jean Hu
No, we don’t think so. We are making significant progress with our gross margin. If you look at last year, 2023, we were at 50%. In Q1, we ended up at 52.3%, and we guided to 53% for Q2. The second half will get better. In general, our data center gross margin is better, right? So, as we ramp our data center business, we’ll continue to shift the mix, and that will help our gross margin in the long term.
Vivek Arya
Even on MI, you think gross margin can become accretive despite the faster…
Jean Hu
Yeah, absolutely. When you look at the data center GPU, over time, it will be accretive to the corporate average.
Vivek Arya
Terrific. I have three more pages of questions, Jean, but we are out of our time. Thank you so much.
Jean Hu
Yeah, thank you so much, and thank you, everyone.