This was the central point of discussion in a power panel of industry analysts convened by theCUBE, SiliconANGLE Media’s livestreaming studio. While supercloud has been defined as an architecture that resides above and across hyperscale infrastructure, the term also points toward important developments in the ever-shifting cloud landscape.
“I think it really speaks to that undeniable trend of moving toward an abstraction layer to deal with the chaos of what we consider managing multiple private and public clouds,” said Maribel Lopez (pictured, center), founder and principal analyst at Lopez Research. “I see legitimate momentum with enterprise IT buyers that are trying to deal with the fact that they have multiple clouds now. Where we’re moving is trying to define the specific attributes and frameworks of that, to make it so that it could be consistent across clouds. Maybe that’s what the supercloud is.”
Lopez spoke with Dave Vellante, industry analyst for theCUBE, in advance of today’s Supercloud 2 event. She was joined on the panel by Sanjeev Mohan (pictured, right), principal at SanjMo, and Keith Townsend (pictured, left), founder of The CTO Advisor LLC, and they discussed key developments for customers in cloud architectures and data management.
The notion of an abstraction layer built on top of other platforms stems from architectures built by companies such as Snowflake, Databricks, MongoDB and Red Hat that tap the underlying services and primitives of cloud providers to deliver additional value.
“Traditionally, vendors have provided the platforms for us,” Townsend said. “Supercloud is a framework or idea, kind of a visionary goal to get to a point that we can have a platform. We’re seeing this trend that there’s a desire for a platform that provides the capabilities of a supercloud.”
Yet customers are confronted with issues surrounding cross-platform functionality. Snowflake, for example, is building supercloud capabilities that let customers pursue digital transformations on its platform, but that advantage can erode when workloads must span additional cloud platforms.
“It all works great as long as you are in one platform,” Mohan said. “But if your primary goal is to choose the most cost-effective service irrespective of which cloud it sits in, then things start falling sideways. How do I move my workload from one platform to another platform? That tooling does not exist.”
Limitations such as these are leading some organizations to reassess whether to continue investing resources in public cloud infrastructure or bring specific workloads on-premises, according to Lopez.
“Many people wildly overspent in the big public cloud, and there is a balancing that’s going on,” she stated. “I’m going to put the workloads that have a certain set of characteristics that require cloud in the cloud. If I have enough capability on-premises and enough IT resources to manage certain things on-site, then I’m going to do that. It’s not binary; that’s why we went to hybrid.”
Does adoption of a hybrid model also indicate repatriation, the movement of workloads out of the cloud and back into an on-premises data center?
“I think we have quiet repatriation; what I’m seeing is a rebalancing of workloads,” Townsend said. “Do I really need to pay AWS for this instance of SAP that’s on 24 hours a day versus just having it on-prem, moving it back to my data center? Private cloud technologies have moved far enough along that I can simply move this workload back. I’m not calling it repatriation; I’m calling it rightsizing for the operating model that I have.”
The operating model for cloud customers today may be a costly one. A study by Andreessen Horowitz found that some companies exceeded their committed cloud spend forecasts by a factor of two. Gartner Inc. has predicted that public cloud end-user spending will reach nearly $600 billion in 2023, a year-over-year increase of 20%.
“The bigger problem is that people don’t know where the cost is,” Mohan said. “Data observability is one of the places that has seen a lot of traction because of cost. Data observability, when it first came into existence, was all about data quality. Then it was all about data pipeline reliability, and now the number one killer use case is FinOps.”
While the analysts provided different perspectives on the role of supercloud and the evolving IT model, there was unanimity around the central role of data in the enterprise.
“Some people think supercloud is about multicloud tooling, and some people think it’s about a whole new architectural stack,” Lopez said. “Cloud is about how to make the most of your data.”