Enterprise Infrastructure - OEM Trends for AI Infrastructure
AI Market Positions
Channel partners said the current environment, in which customers have been experimenting with cloud AI workloads, has created a window of opportunity for OEMs to improve solutions through strategic industry partnerships. Sources expect OEM solutions to be coveted since they can help fill the skill gap that poses a barrier to on-premises implementations. However, partners said OEMs’ ability to capture the growing demand for AI infrastructure varies considerably depending on the desired solution. A U.S. source said, “If you’re building a true AI and doing LLM [Large Language Model], you’ll probably have an inference layer and another layer. They’re different hardware configurations that require different storage performance and capacities.”
Enterprise-technology channel sources said the current landscape has many of their customers exploring how to improve business processes with AI, though it remains unclear what the final configuration will be for each customer. A U.S. source said, “I would say our customers are starting to think about AI and how they might be able to use it within their environments. … I wouldn’t say that’s impacting sales just yet.” Another said, “From what I see, customers are excited about AI, particularly [about] Dell [Technologies Inc.] with their AMD [Advanced Micro Devices Inc.], Nvidia [Corp.] and Ampere [Computing LLC] partnerships, but they aren’t exactly sure what to do with it yet.”
Quickly training AI requires GPU-heavy server equipment, low-latency networking equipment that can transport large amounts of data with little error, and high-performance, low-latency storage environments that scale up considerably. Industry partnerships with Nvidia, AMD or other GPU/AI-focused companies, as well as development of complementary solutions for OEM products, were characterized as critical to having a strong play for onsite AI workloads.
Partners said cloud environments are a good fit to test and develop AI workloads because cloud solutions could be easily modified or decommissioned. A U.S. source said, “Public cloud is what you leverage to do these things today — or you build a GPU cluster to do it. Even a small one will work. [OpenAI Inc.’s] ChatGPT 4.0 runs on [Nvidia] 8100’s. It’s not a rack of equipment, it’s eight graphics cards. You don’t need gobs of hardware to do these things. You need the right hardware, and it’s very easy to rent that stuff from the public cloud.” Partners also said, once pathways become clearer, it could be cost-prohibitive and a security risk to continue to use the cloud for AI workloads, providing future opportunities for onsite infrastructure.
Despite the opportunities in the sector, partners believe there may be a talent gap when it comes to providing AI infrastructure, which gives an advantage to companies that can offer effective supported solutions. A U.S. source said, “All of the infrastructure vendors — [Hewlett Packard Enterprise Co.] HPE, Cisco [Systems Inc.] and especially the full-stack vendors like [International Business Machines Corp.] IBM, HPE and Dell — they’re very focused on providing that AI backend. It’s a lucrative backend, but the skills required to do it are specific and very scarce.”
Storage: High-performance, low-latency, large and scalable solutions designed to optimize throughput to GPUs are a must for AI storage. Partners also noted multicloud capability is important, since many AI workloads originate in cloud environments or require edge solutions. The storage market for AI has been partially driven by partnerships with AI GPU leaders, according to sources. Because NetApp Inc. has a strong cloud portfolio, large-scale, low-latency storage solutions and AI-focused partners, sources said they have had some success selling NetApp for AI environments. Other storage solutions — particularly from VAST Data Inc. and WekaIO Inc. — were frequently named as well positioned for AI workloads for similar reasons. Pure Storage Inc. was also mentioned as having a strong product for onsite AI architecture, though partners said it lacks cohesion with multicloud solutions. A U.S. source said, “When customers start asking the cloud question, AI is always kind of in that discussion in some form or another. … I’ve seen NetApp bring that up.”
Server: Partners said GPU processing is the primary requirement for AI-focused servers, and noted Dell has developed strong server solutions for AI workloads (such as the XE9680) because of its Nvidia partnership; however, they believe Dell’s current offering trails competing products such as Nvidia’s native solution. HPE’s Cray Exascale solution was characterized as well developed. Sources believe HPE’s and Dell’s market share in servers is an advantage the companies can leverage for AI-workload-related sales. A U.S. source said, “[Dell’s] partnership with Nvidia is phenomenal in that space. I think that, within a couple of years, they’ll be top dog between their competitors.”
Networking: Partners said AI infrastructure requires low-latency solutions without dropped-packet issues; thus, they do not believe Cisco’s networking technology is well suited for AI workloads. Higher-performing networking equipment is available from competitors, and sources said Cisco needs to develop its AI partner network with infrastructure market leaders in the space. However, partners also noted Cisco recently released the G200 and G202 Silicon One chips, which they believe will be well suited for AI environments (though none has perceived an impact yet).
HCI: Although AI integration is not driving HCI sales, partners said HCI has latent appeal for distributed AI workloads, though it is best suited for onsite edge implementations. Nutanix Inc. was largely absent from AI discussions, but partners noted Nutanix’s potential for developing solutions geared around AI. Accordingly, Nutanix’s GPT-in-a-Box solution, which was announced shortly after feedback was gathered, aligned with partners’ reported expectations.
ADDITIONAL QUOTES
General
“I don’t think there’s the talent pool out there that understands how to take advantage of AI in a business setting to create an avalanche of opportunities. [AI is] supposed to solve problems, but you have to know what can be solved and how to apply it. I don’t know that there’s people out there that can do that.” North America
“AI isn’t even something that customers are convinced that they need.” North America
“We ask [customers], ‘Is your data ready for AI to take advantage of it?’ What I’m seeing most is hyperscale cloud. I was skeptical of HPE’s announcement of supercomputers for AI.” North America
“The topic of AI is on everyone’s lips, and everyone is thinking about what to do with it. AI and machine learning have been around for a few years.” EMEA
On Storage
“[NetApp is] not quite there yet [for AI], but they’re working with Nvidia on creating platforms that are more compatible with GPUs to inset themselves in that arena.” North America
“I don’t think anyone has an advantage [in storage]. I think NetApp could have an advantage if they [focus on development].” North America
“In the AI space, we see a lot of VAST and WekaIO. I see VAST [in media and AI], NetApp and Pure with their partnerships with Nvidia [helping them sell AI solutions, but] you have to be careful how you set up NetApp — what you do and how you configure things.” United States
“If you really need a lot of performance, that’s where WekaIO comes into play. They can really scream.” North America
“We pair with Pure when we hear about an AI project. [Pure’s] platform performance and storage architecture are better suited to AI workloads [than NetApp’s]. They have platforms that address large datasets in flash. Their APIs [Application Programmable Interfaces] are first class on the Pure platform, whereas for NetApp it’s an afterthought. Pure’s overall architecture is better suited for AI.” North America
“NetApp is doing OK with AI. I think they could do better. WekaIO is doing a better job specific to AI. NetApp isn’t too far behind.” North America
“[NetApp isn’t] as well positioned as WekaIO and VAST because those projects demand high throughput, and that’s where NetApp can start to struggle.” North America
“The [Komprise Inc.]s of the world are up-ticking. They help manage the data movement and help figure … out where the data is. NetApp doesn’t necessarily match when it comes to AI. In fact, they may be getting passed when you look at things like the Nvidia announcement with VAST.” North America
“I don’t see NetApp having an advantage or disadvantage over other vendors in terms of AI projects. You need massive storage for AI — NetApp has that — and then you need performance to support the compute — and NetApp has that as well.” EMEA
Server
“The momentum around AI with Dell just started building [after Dell World].” North America
“[Dell has] a lot of machines that are tackling [AI] well. With their relationship with Nvidia, they’re looking towards future gains in that space. … They’re working slowly and rigorously to make better AI and ML products with the hope that, with enough time and investment, they’ll be able to turn a good profit and maybe take market share from other AI competitors like Microsoft [Corp.]” North America
“[We lost a Cisco-with-Pure-Storage deal to] Nvidia partnering with Dell. … We lost that deal to Dell, who put forth the Nvidia platform with EMC storage. From a storage perspective, we have a solution. But from compute, since Nvidia is selling their own AI platform, … I don’t think [Cisco has] a strong relationship with Nvidia.” North America
“[Dell isn’t] trying to compete on supercomputing. Dell has done very well with marketing AI at the edge. Their partnership with Nvidia is a definite plus.” North America
“When that XE box came out, they gave a turnkey, but Dell is still playing catch-up from Nvidia. It’s an improvement from trying to do a hodge-podge [solution].” North America
“AI is a revolutionary tool, but it will take time to really be integrated effectively for Dell.” EMEA
“Everybody gets really excited about HPE and likes to talk about Exascale because of the Cray. … The top platform [on top500.org] is based on Cray. It’s a mix of Nvidia and AMD.” North America
“[Dell doesn’t] have a dedicated AI framework yet, like HPE. … They may simply sell servers that will be dedicated to AI applications.” EMEA
Networking
“[Cisco has] some stuff in the market and on the website [for AI], but it’s unproven and too new to really speak with any authority. … I haven’t seen too much from them in that space.” North America
“I have not seen any kind of requests or questions with AI and Cisco. I have had discussions about what Juniper [Networks Inc.] has on the Mist side.” North America
“I don’t really consider Cisco a player in the AI space.” North America
“[Cisco is] doing a lot right now with [Amazon.com Inc.’s] AWS to help incentivize AI development. I don’t think it’s going very well, but it’s too early to tell.” North America
“In terms of real AI projects, it’s one area where Cisco could have done a better job. They are lagging. Vendors like Lenovo [Group Ltd. (992 HK)], Dell are doing a better job of partnering up with AI-related organizations or vendors, like Nvidia, to create specific solutions.” North America
“I think hyperscalers are going to benefit the most. This whole AI piece on the custom chipsets versus Cisco mounting products that have AI capabilities … I don’t see [Cisco] moving the needle.” North America
“I’ve had use cases in the past for Cisco AR [augmented reality], but not AI.” EMEA
“It doesn’t look as if AI is front and center of Cisco’s technology.” EMEA
HCI
“Nutanix, by its nature, is converged storage and traditional compute — not GPU compute. Nutanix certainly has a GPU side to it, but I don’t think that that’s the platform that Nutanix is optimized for. That converged stack just isn’t the platform for the level of computation that AI requires.” North America
“Nutanix isn’t really set up for [AI workloads]. [Nutanix with] AI might be an interesting solution for a server on the edge.” North America
“Nvidia has gone with other partners to support in their AI mission, and Nutanix is going to be left behind if they don’t make some major changes.” North America
“I have not seen AI-on-HCI use cases at this point.” North America
“What Nutanix can do is provide an edge AI case for on-prem services. I don’t think they’ll provide it, but a platform to run it. Isolated or localized AI solutions, which [Nutanix can do], provide a great source to run on. Evaluating Microsoft and [its] Copilot, how do I evaluate that data and lock that in?” North America
“I have to believe that [Nutanix] will eventually come out with some type of AI-specific architecture. But I’m not sure what that would look like.” North America
“In my opinion, hyperconvergence is not made to fire big data and AI, and yet [a large financial institution] does it — which uses Nutanix to have a central point of management.” EMEA