Advanced Micro Devices, Inc. (AMD) Management Presents at UBS Annual Technology Conference (Transcript)

Last updated: 2023/11/28 at 8:20 PM

Advanced Micro Devices, Inc. (NASDAQ:AMD) UBS Annual Technology Conference Transcript November 28, 2023 12:55 PM ET

Executives

Forrest Norrod – Executive Vice President and General Manager, Data Center Group

Analysts

Tim Arcuri – UBS

Tim Arcuri

Great. We’re going to get started. Thank you. I’m Tim Arcuri. I’m the Semiconductor Analyst here at UBS. Very pleased to have AMD and we have Forrest Norrod, who’s EVP and GM of the Data Center Group. Thank you, Forrest.

Forrest Norrod

Thanks, Tim. Good to be here. Good to see you again.

Question-and-Answer Session

Q – Tim Arcuri

Great. Well, I just wanted to start off and talk about what is on most people’s minds, which is AI. You’re hosting your AI Day next week in…

Forrest Norrod

That’s right.

Tim Arcuri

… San Jose.

Forrest Norrod

That’s right.

Tim Arcuri

I’m not asking you to front run what you’re going to say next week, but I’m just sort of wondering what things we could expect you to highlight. What are the things driving that business? Why should people be excited about your AI position and your AI opportunity?

Forrest Norrod

Sure. Well, next week we're going to set the next milestone on our journey to be a strong contender in the AI space. We're going to formally launch the MI300 in a couple of different variants, and do so while we report out on progress, maybe most importantly, on the development of the software ecosystem to support MI300 and its use in AI. We'll also be there with quite a number of customers and partners who will relate their experience with, and embrace of, MI300.

So we expect it to mostly be about MI300 and our position now as a very credible alternative for generative AI and large-scale AI systems. We'll also talk a little bit about the rest of our pervasive AI strategy, including what we're doing on the PC side. We've already shipped millions of PC processors with AI accelerators built into them, and we're going to be introducing our next generation of that very, very soon.

And we’ll lay out a vision and roadmap for that as well, and I think, maybe that’s something people aren’t expecting. But we do think that will also be very important in terms of the way that people interact with AI and it augments their work and life experiences. It won’t simply be through the cloud, but it will also be on the PC as well.

Tim Arcuri

Great. And maybe with respect to the ultimate market opportunity for MI300X, Lisa did offer this new $2 billion bogey for next year. But certainly, the numbers we keep hearing are that you're roughly 10% unit-wise of what NVIDIA is, which would suggest numbers that are way, way bigger than that. So can you talk about the TAM, or at least how to think about it? Because if you look at their numbers, you can get very excited about what the TAM is for you.

Forrest Norrod

Yeah. Certainly the market is moving very rapidly, and we've seen explosive growth this year. The promise of AI has really caught everyone's attention. And the early signposts on the level of productivity enhancements that people are seeing with some of these POCs and initial deployments of generative AI are incredible, and by the way, we're seeing the same.

And so we think that, in the long run, the TAM of the AI market is very difficult to predict precisely. But you can easily convince yourself it's very, very large if you really can deliver 20%, 30%, 80% productivity enhancements for some white-collar jobs that have been resistant to any substantive productivity enhancement in the past.

That justifies an enormous incremental investment in IT to allow companies to seize those productivity enhancements and we certainly do see indications that in many fields, you are going to see very material productivity enhancements.

And so what it is long-term is a little bit hard to say. I've seen some numbers up into the trillions, which I'm not claiming by any stretch of the imagination; I'm just using that to illustrate that it's a little too early to tell how fast this develops.

That said, we do think the interest level over the next 24 to 36 months is extraordinarily high. We do think it's going to continue to grow materially in 2024 over 2023, and we think it's going to grow sequentially in each quarter.

And so, as we think about it, we've made one comment about the size: we certainly expect to exceed $2 billion in revenue for MI300 next year, and we're making sure, of course, that we have supply for more than that.

Tim Arcuri

And can you talk about what the use case or use cases are for the customers that are adopting it? Is it mostly inference?

Forrest Norrod

It's really a mix. MI300X has a distinct advantage on inference over our competitors' solutions by virtue of its memory bandwidth as well as its memory capacity. You can do a lot more inferencing on each GPU: you can fit more, and larger, models and need fewer GPUs to deliver the same results, which translates into very material TCO advantages.

On the training side, we're also seeing that we have a very competitive part. I don't think we've got quite the same level of advantage there that we do on inference, but we think MI300X is a very credible training part, and that's what our customers are telling us as well. And so we really think we're going to see large-scale deployments in both inference and training.
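The capacity side of that argument lends itself to quick back-of-envelope arithmetic. The sketch below is illustrative only; the model size and the HBM capacities are hypothetical assumptions, not figures from this conversation.

```python
import math

def gpus_to_hold(params_billions: float, bytes_per_param: int, hbm_gb: int) -> int:
    """Minimum GPUs needed just to hold a model's weights in HBM.

    Ignores activations, KV cache, and parallelism overhead; illustrative only.
    """
    weights_gb = params_billions * bytes_per_param  # 1e9 params x bytes/param = GB
    return math.ceil(weights_gb / hbm_gb)

# Hypothetical comparison: a 140B-parameter model in fp16 (2 bytes per weight)
# on a 192 GB HBM accelerator versus an 80 GB one.
print(gpus_to_hold(140, 2, 192))  # -> 2
print(gpus_to_hold(140, 2, 80))   # -> 4
```

Halving the GPU count needed to serve the same model is the kind of system-level effect that shows up directly in TCO, which is the point being made about memory capacity.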

Tim Arcuri

We'll see them both. Okay. And maybe we can pivot to CPU, because that's a big piece that we probably don't talk about enough. We're shifting to a market where server units might not go up very much, but core counts are going up a lot.

Forrest Norrod

Right.

Tim Arcuri

And so can you talk about that dynamic: how much are core counts going up, and what about pricing per core? We had seen pricing per core come down a lot, but now it seems to be flattening out a little bit. So could we begin to see some growth in a market that people are skeptical can grow?

Forrest Norrod

Yeah. We do think that one of the dominant factors in this market is exactly the one you just articulated, and it's something we've really been driving: pushing up the core counts within the CPU, the core counts of high-performance cores, and let me come back to that in just a second.

And that's been driving the ASP trends you've already seen in the CPU market. ASPs have been going up pretty dramatically on the server CPU side, and the average core count has really been going up over the last couple of years, principally driven by AMD. We were the first to 32 cores and to 64 cores; we have 96 cores and 128 cores now, where our competitor really has 56 cores, though they may claim 60. And we don't see that slowing down.

In the next generation, we think core counts go up again substantially. That translates into much more performance per system, and we do expect that to drive a smaller number of systems, but with aggregate performance continuing to grow.

We're always going to give customers more each generation in terms of performance per dollar. But I do think one way to think about pricing going forward is relatively flat per core, with substantial performance-per-core advantages each generation translating into TCO advantages at the system level.
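That pricing dynamic can be made concrete with a toy calculation; all of the numbers below are hypothetical illustrations, not AMD's actual pricing or product core counts.

```python
def sockets_needed(demand_cores: int, cores_per_socket: int) -> int:
    """Ceiling division: sockets required to cover a fixed fleet-wide core demand."""
    return -(-demand_cores // cores_per_socket)

PRICE_PER_CORE = 100      # hypothetical flat dollars per core
DEMAND_CORES = 100_000    # hypothetical fixed aggregate core requirement

# With per-core pricing flat, socket ASPs rise with core count while the
# number of systems needed to meet the same demand falls.
for cores in (64, 96, 128):
    asp = cores * PRICE_PER_CORE
    print(f"{cores} cores/socket: ASP ${asp:,}, {sockets_needed(DEMAND_CORES, cores)} sockets")
```

This is the mechanism behind "server units might not go up, but dollars do": flat per-core pricing plus rising core counts lifts ASPs even as unit counts consolidate.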

Tim Arcuri

So we're accustomed to talking about your share from a unit perspective, but it sounds like we should be measuring your share much more from a dollar point of view?

Forrest Norrod

Yeah. I mean, at the end of the day, I actually don't care that much about unit share. There's a long tail of very low-calorie, very low-value components out there. We're much more interested in a revenue share perspective. We're at 30% revenue share as we sit here today, and that's more weighted towards cloud than enterprise, but we do see enterprise starting to grow substantially. Our long-term ambition, just to be clear, is to continue to grow our revenue share, and we aspire in the fullness of time to be the market leader from a revenue share perspective.

Tim Arcuri

And people just have this mental model that if you have a performance advantage and your competitor catches up, the share should ultimately be 50-50. So your share is 30%; sure, you can get to 50%. But when you really deconstruct the source of your share, it's actually pretty narrow. You have significant share at a couple of U.S. cloud vendors, but you don't have that much share at a couple of other large cloud vendors, nor in enterprise. So is it fair to say that, while your share is 30% from a dollar perspective, it's actually a bit narrow when you deconstruct it, and what you're seeing now is your share broadening out?

Forrest Norrod

Yeah. I think we've made tremendous progress, first, with the hyperscalers, the cloud guys, and quite candidly that's where we first focused. If you think about an enterprise customer, there are many factors that go into selecting the CPU, and the Intel guys have been buying me lunch once a quarter for the last 20 years. I don't mean to be funny, but there are a number of factors that make adopting new technology a slow thing for many enterprises, and we knew that.

For the cloud guys, however, the data center is their factory. Having the best TCO, the best efficiency, and the highest performance drops directly to their bottom line. So we knew that was the first place to go, and I think we've been very successful there; our aggregate share at the North American hyperscalers is certainly well above 50%.

In the enterprise, it’s in the mid-teens right now. But when I look at the leading indicators, particularly as we’ve put more and more focus on enterprise over the last couple of years and as our partners, HPE, Dell, Lenovo, Supermicro, et cetera, have broadened the portfolio of AMD solutions, we’re seeing great leading indicators of share gain in enterprise as well.

So that's a key focus for us going forward: hold and continue to grow in the cloud, but really double down on enterprise share growth. And I'm very confident, particularly when I see the early engagement we have with a large swath of customers, that we're going to see substantial share gains in the next few years.

Tim Arcuri

And how dependent is that picture on process leadership? I think some people mentally ascribe your share gains almost entirely to having access to an advanced process node, but it goes way beyond that. So even if your competitor caught up from a process point of view, you still feel confident that you can keep executing on those gains?

Forrest Norrod

Yeah. Absolutely. I mean, AMD has a tremendous history of design innovation and design excellence. And by the way, if you look back at, say, the Opteron days 20 years ago, AMD had a process deficit, certainly at least a node behind Intel, and yet was able to deliver substantially higher-performance parts on the basis of superior design.

I think we've got both assets today. We've got excellent design, and we've been driving innovation particularly around things like chiplets, packaging, and advanced integration. You see Intel now progressing along that same road, but we've been there for many years and continue to push forward.

So when I look at the future roadmap, I'm very confident in it. We always assume that Intel is going to, from this moment forward, execute perfectly and do everything they say they're going to do. That's how we plan. And on that basis, with what they've articulated on the process side and what we know of their product development plans, I'm very confident that we're going to maintain both performance and power efficiency leadership for the foreseeable future.

Tim Arcuri

Great. Just to that point about chiplets, a question came in that I think is a good one: what advantage does your experience with chiplets offer you, particularly in AI?

Forrest Norrod

Well, I think our principal competitor, NVIDIA, has been focused on delivering monolithic solutions for some time. If you take a look at Hopper, for example, it's one large monolithic die on a CoWoS substrate surrounded by the HBM. That's as far as you can go. To add additional capability, you're either dependent on the transistor density increase you can get from process, or you have to embrace chiplets.

And so we think we're substantially ahead of NVIDIA on the chiplet learning curve. We've been on this journey for many years, and MI300 is sort of the ultimate expression of advanced chiplet technology. It's 12 or 13 different chiplets, depending on the configuration, interconnected via 3D stacking on a very large CoWoS substrate. We've got a lot of experience doing this, and I think NVIDIA is about to have to go through that journey to continue driving performance, and we expect them to do so.

Tim Arcuri

As it relates to the longer-term TAM for AI, can we talk about the restrictions in China? It's not as much of a factor for you, given that you're earlier in your ramp. But how do you think about the impact these restrictions have on the TAM over the longer term?

Forrest Norrod

Again, it's a little bit hard to say. One thing we do expect is to be able to offer China products that are fully compliant with all U.S. export regulations, and we expect NVIDIA to do so as well. But clearly and candidly, there will certainly be other domestic producers or designers that will try very hard to design parts as well. I think the promise of the capabilities of generative AI is so large, and China is such a large market, that they will embrace generative AI. They will work with us. They will work with NVIDIA. They will work with other sources. So perhaps it's slightly delayed from what it might otherwise have been, but I do expect it to continue to develop.

Tim Arcuri

And it also seems, if you actually read the document, that there are some on-ramps the government is providing, where you could take a chip that from a performance perspective is otherwise banned, include some cryptographic aspects that let you track how it's being used, and the government might work with you to allow you to ship it.

Forrest Norrod

Yeah. I don't want to get into that level of detail in a discussion of roadmap options, but certainly we are focused on two things: always being fully compliant with the regulatory regime, and then, within that set of guidelines, being very aggressive in supporting our customers worldwide.

Tim Arcuri

Great. Can we just talk about your x86 franchise? We did actually talk about that before, but there seems to be this view that the x86 TAM is just going to keep getting cannibalized by the GPU, which for you is fine in part, because you also make GPUs. So can you talk about the interplay between GPU, or any other custom accelerator for that matter, and…

Forrest Norrod

Yeah.

Tim Arcuri

… CPU?

Forrest Norrod

I guess, our view is that certainly this year and to some degree next year, there is some cannibalization of the general compute TAM by AI. Nobody walked into 2023 expecting to see such a large imperative to embrace generative AI and so we did see cannibalization of some of the TAM over to AI systems, no question about it.

But in the long run, we don't view this as an either-or; we view AI as an additive workload. The need for the existing workloads generally does not go away. Generally speaking, you're still going to need the same systems of record. You're going to need your transaction processing. You're going to need your web front ends. You're going to need the vast majority of the existing infrastructure that powers the business, and for the existing workloads, that need remains.

What AI promises is an incremental, differentiated set of capabilities built on top of that set of workloads, taking the data, in many cases from those systems, to generate insights and actions that we think will be incredibly valuable. But we don't generally see the existing workloads going away. We think this is going to be additive over the arc of time.

In the short term, there's going to be some cannibalization. Although, from our perspective, that may even be helpful, because it places a premium on getting the most out of your compute infrastructure and CapEx dollars. And so the performance, TCO, and power efficiency leadership that we have, and that we think we will maintain for the foreseeable future, is adding further weight to consideration and adoption of AMD CPU technologies.

Tim Arcuri

And just as a follow-up to that, with everyone doing more custom chips, does this limit your ex-AI TAM? I mean, everyone's talking about doing ARM PCs. How do you view the competitive landscape, not only with what's going on on the merchant side with ARM, but also on the custom side?

Forrest Norrod

Yeah. So the interest in custom chips is actually not new; it's been going on for some period of time. We actually have a custom chip business within AMD that's been very successful. But when we think about AI in particular, GPUs are very powerful and very flexible, generally programmable devices that have broad applicability.

In the early innings, and we think we're still in the early innings, as algorithms continue to develop, be refined, and change, that general-purpose GPU is very, very valuable; we believe that, and not just we, we're hearing it from our customers as well. Will there over time be some stabilization of the algorithms, and some places where you see custom chips or purpose-built ASICs with lower programmability servicing part of the market? Probably.

But we think GPUs have quite a way to run, and we're also working very closely with a number of large hyperscalers so that, as they build out an overall silicon strategy that encompasses semi-custom as well as custom components, we can offer them solutions as well.

Tim Arcuri

Great. When I asked you the first question about what you're going to talk about next week, you mentioned software, and that's viewed by the investment community as an area where you're significantly behind. However, you bought Mipsology, and you bought Nod.ai last quarter. Can you talk about your journey in software, some of your efforts, and why the investment community may be too skeptical of the progress you're making?

Forrest Norrod

Yeah. Software is absolutely the key element here. I would say there are three key pillars: there's software, there's the base compute in the GPU, and then there's networking. They're all absolutely essential.

We've been very focused on the software journey for the last three or four years, not just the last couple of years. And Nod.ai and Mipsology are great teams and great additions to AMD.

But the central thrust of our software efforts has really been close partnership with a number of large hyperscalers. We've had very large software teams working to ensure that the ecosystem of AI frameworks, compilers, libraries, and models is fully supportive of AMD, and increasingly optimized on AMD as well as on NVIDIA.

I mean, look, we realize we're entering this market as the second player. NVIDIA's got a long head start. So when you're facing a market like that, it's critical that you think about how to minimize the friction for consideration and for adoption.

And so we set out multiple years ago on that strategy: primarily focus on the frameworks, focus on the open-source community, and work with like-minded partners, some of the largest customers for this technology, who desperately wanted more competition to drive better TCO, but also more innovation and more openness in the ecosystem. So we've been on this journey for a long time.

So with things like PyTorch 2.0, when it came out earlier this year, day-zero optimization support was fully there for NVIDIA and fully there for AMD, and those were the only two players with day-zero support. Similarly with OpenAI's Triton and JAX; if you look across the full spectrum of the ecosystem, we'll talk a lot more about this next week.

Our perspective is partner with others, focus on open-source, focus on where people are developing, which is around the frameworks, around the open-source compilers, et cetera, and make sure that we absolutely minimize the friction for someone to adopt AMD.

Tim Arcuri

And how much of an impediment is the lack of a proprietary high-speed networking solution, as NVIDIA has with InfiniBand? How much of an impediment is that to you growing the business?

Forrest Norrod

Yeah. And we'll talk more about this next week as well. Look, networking is critically important for these generative AI systems, particularly on the training side. You have to orchestrate across tens of thousands, and shortly hundreds of thousands, of GPUs to train these large models, and so the performance of the backside network is critically important.

But likewise, our customers are telling us in no uncertain terms that they want openness; they want Ethernet. And so we have embarked over the last several years on evolving Ethernet to address any deficiencies it has vis-à-vis InfiniBand, and actually to solve some of the scale issues that even InfiniBand has.

So we were the principal force behind the Ultra Ethernet Consortium, which pretty much everybody in the industry has now joined to evolve Ethernet forward. We'll talk next week about the journey: where we are today, where many of the largest models are already trained on Ethernet backside networks, and how we and the rest of the industry are going to drive that forward.

Tim Arcuri

Do you think that it’s not going to be a…?

Forrest Norrod

I don’t think it’s going to be a long-term impediment, no.

Tim Arcuri

And can you talk at all about whether, if Intel does catch up in process, you can envision a scenario where you would engage with Intel from a foundry perspective? Is that in your mental roadmap? Sure, you're a major customer of TSMC, but if Intel is going to catch up and have a viable foundry business, even though they compete with you, if it was structured properly, would you consider engaging?

Forrest Norrod

I think we would certainly listen. But TSMC has been a fantastic partner for us, and by the way, it's not just about process; it's about the full ecosystem around it. You can have a foundry with a great process that can be competitive, but if you don't have the rest of the ecosystem out there, the right analog or other IP that you might need, the right design tool support, the rest of the value chain that you really need on the foundry side, it's a tough row to hoe. So I would say never say never, but it's a little bit difficult to see how we would embrace them quickly.

Tim Arcuri

Yeah.

Forrest Norrod

And I think that's okay in the interim, because we've got such a strong partnership with TSMC and they are so capable.

Tim Arcuri

And just as a last question: op margins in your business were down about 1,000 basis points year-over-year last quarter on fairly comparable revenue. I mean, I think there are a lot of investments you've made that are yet to be monetized, needless to say. But should we still think about your business as being a 30% op margin business?

Forrest Norrod

Yeah. Absolutely. Our ambition for the Data Center business is absolutely 30%-plus op margin over time. We are in a heavy investment cycle right now to make sure that we're well positioned to ride the wave of growth we think is coming in AI and other places. And so, yeah, it's really an OpEx-driven phenomenon, getting ready for the revenue ramp that we think will come.

Tim Arcuri

Great. Well, we’re out of time, but thank you, Forrest. Really appreciate it.

Forrest Norrod

Thanks a lot, Tim. Appreciate it.
