As the technology industry and the channel get to grips with the issues and the opportunities around artificial intelligence, VMware launched an industry ecosystem to help meet the challenge at this week’s VMware Explore customer and partner event in Las Vegas.
The company launched its concept of “Private AI”, “an architectural approach that balances the business gains from AI with the practical privacy and compliance needs of an organisation”.
To make Private AI a reality for enterprises, and “fuel a new wave of AI-enabled applications”, VMware announced the launch of the VMware Private AI Foundation with NVIDIA, extending the companies’ existing strategic partnership to ready enterprises that run VMware’s cloud infrastructure for the “next era of generative AI”.
“The remarkable potential of generative AI cannot be unlocked unless enterprises are able to maintain the privacy of their data and minimise IP risk while training, customising, and serving their AI models,” said Raghu Raghuram, CEO of VMware. “With VMware Private AI, we are empowering our customers to tap into their trusted data so they can build and run AI models quickly and more securely in their multi-cloud environment.”
VMware Private AI Foundation with NVIDIA, comprising a set of integrated AI tools, will empower enterprises to run proven models trained on their private data in a “cost-efficient manner”, and will enable these models to be deployed in data centres, on leading public clouds, and at the edge.
The offering will be ready for VMware partners and customers to test and use “early next year”. “Enterprises everywhere are racing to integrate generative AI into their businesses,” said Jensen Huang, CEO of NVIDIA. “Our expanded collaboration with VMware will offer hundreds of thousands of customers - across financial services, healthcare, manufacturing and more - the full-stack software and computing they need to unlock the potential of generative AI using custom applications built with their own data.”
NVIDIA, of course, will be offering its own GPUs and software as part of the alliance to help enterprises do just that. The idea is to support generative AI from customisation to deployment through the creation of “AI clouds”. “It should be easy peasy”, teased Huang in the conference keynote launching the effort. “That’s our unique selling point”.
The “turnkey offering” will provide customers and partners with the accelerated computing infrastructure and cloud infrastructure software they need to customise models and run generative AI applications, including intelligent chatbots, assistants, search and summarisation.
VMware’s ecosystem effort will initially be supported by Dell Technologies, HPE and Lenovo, along with other vendors. And VMware says it is working with global systems integrators, such as Wipro and HCL, to help customers realise the benefits of Private AI by building and delivering solutions that combine VMware Cloud with AI partner ecosystem solutions.
VMware also announced a new VMware AI Ready programme, which will connect ISVs with tools and resources needed to validate and certify their products on VMware Private AI Reference Architecture. This programme is expected to be live by the end of 2023.
In addition, VMware has introduced Intelligent Assist, a family of generative AI-based solutions trained on VMware’s proprietary data to simplify and automate all aspects of enterprise IT in a multi-cloud era. The Intelligent Assist features will be “seamless extensions” of the investments enterprises have made in VMware Cross-Cloud Services, and will be built upon VMware Private AI.
VMware products with Intelligent Assist include VMware Tanzu, Workspace ONE, and NSX+. All these are in Tech Preview mode.
Kit Colbert, VMware CTO, told press and analysts: “We are not getting into the ‘AI business’. We are providing the infrastructure to make sure it works well.”
Colbert also acknowledged the difficulties AI poses: it often gets information wrong or presents it out of context, can show bias, and can be a security risk in some circumstances. He said: “The question is how we corral and restrain AI for better results. We need best practices to make it work as best we can, but it may take three to four years to work it out.”
He said one best-practice solution could be not to rely on a single large language model (LLM), but to get another LLM to check the results before using them. “As that type of system occurs you will start to see standardisation in the industry.
“Private AI reference architectures are needed, and not any one vendor can solve the problems. Alignment and integration is required and a greater focus on AIOps may well be needed.”