The next BriefingsDirect Voice of the Innovator discussion focuses on the growing complexity around multicloud management and how greater accountability is needed to improve business impacts from all-too-common haphazard cloud adoption.
Stay with us to learn how new tools, processes, and methods are bringing insights and actionable analysis that help regain control over the increasing challenges from hybrid cloud and multicloud sprawl.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
Here to explore a more pragmatic path to modern IT deployment management is Harsh Singh, Director of Product Management for Hybrid Cloud Products and Solutions at Hewlett Packard Enterprise (HPE). The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.
Here are some excerpts:
Gardner: What is driving the need for multicloud at all? Why are people choosing multiple clouds and deployments?
Singh: That’s a very interesting question, especially today. However, you have to step back and think about why people went to the cloud in the first place – and what were the drivers – to understand how sprawl expanded to a multicloud environment.
Initially, when people began moving to public cloud services, the idea was speed, agility, and quick access to resources. IT was seen as standing in the way of getting on-premises resources. People said, “Let me get the work going and let me deploy things faster.”
And they were able to quickly launch applications, which increased their velocity and time-to-market. Cloud helped them get there very fast. However, you now have choices among multicloud environments: various public clouds, plus private cloud environments where people can do similar things on-premises. There came a time when people realized, “Oh, certain applications fit in certain places better than others.”
From cloud sprawl to cloud smart
For example, if I want to run a serverless environment, I might want to run in one cloud provider versus another. But if I want to run more machine learning (ML), artificial intelligence (AI) kinds of functionality, I might want to run that somewhere else. And if I have a big data requirement, with a lot of data to crunch, I might want to run that on-premises.
So you now have more choices to make. People are thinking about where’s the best place to run their applications. And that’s where multicloud comes in. However, this doesn’t come for free, right?
How to Determine Ideal Workload Placement
As you add more cloud environments and different tools, it leads to what we call tool sprawl. You now have people tying all of these tools together trying to figure out the cost of these different environments. Are they in compliance with the various norms we have within our organization? Now it becomes very complex very fast. It becomes a management problem in terms of, “How do I manage all of these environments together?”
Gardner: It’s become too much of a good thing. There are very good reasons to do cloud, hybrid cloud, and multicloud. But there hasn’t been a rationalization about how to go about it in an organizational way that’s in the best interest of the overall business. It seems like a rethinking of how we go about deploying IT in general needs to be part of it.
Singh: Absolutely right. I see three pillars that need to be addressed in terms of looking at this complexity and managing it well. Those are people, process, and technology. Technology exists, but unfortunately, unless you have the right skill set in the people — and the right processes in place — it’s going to be the Wild West. Everything is just going to be crazy. In the end you falter, never achieving what you really want to achieve.
I look at people, process, and technology as the three pillars for addressing this tool sprawl, and getting them right is absolutely necessary for any company as it traverses its multicloud journey.
Gardner: This is a long-term, thorny problem. And it’s probably going to get worse before it gets better.
Singh: I do see it getting worse, but I also see a lot of people beginning to address these problems. Vendors, including we at HPE, are looking at this problem. We are trying to get ahead of it before a lot of enterprises crash and burn. We have experience with our customers, and we have engaged with them to help them on this journey.
It is going to get worse and people are going to realize that they need professional help. It requires that we work with these customers very closely and take them along based on what we have experienced together.
Gardner: Are you taking the approach that the solution for hybrid cloud management and multicloud management can be done in the same way? Or are they fundamentally different?
Singh: Fundamentally, it’s the same problem set. You must deploy the applications to the right places that are right for your business — whether it’s multicloud or hybrid cloud. Sometimes the terminology blurs. But at the end of the day, you have to manage multiple environments.
You may be connecting private clouds or off-premises hybrid clouds, and maybe several different clouds. The problem will be the same — you have multiple tools and multiple environments, and the people need training and the processes need to be in place for them to operate properly.
Gardner: What makes me optimistic about the solution is there might be a fourth leg on that stool. People, process, and technology, yes, but I think there is also economics. One of the things that really motivates a business to change is when money is being lost and the business people think there is a way to resolve that.
The economics issue — about cost overruns and a lack of discipline around procurement — is both a part of the problem and the solution.
Economics elevates visibility
Singh: I am laughing right now because I have talked to so many customers about this. A CIO from an entertainment media company, for example, recently told me she had a problem. They had a cloud-first strategy, but they didn’t look at the economics piece of it. She didn’t realize, she told me, where their virtual machines (VMs) and workloads were running.
“At the end of the month, I’m seeing hundreds of thousands of dollars in bills. I am being surprised by all of this stuff,” she said. “I don’t even know whether they are in compliance. The overhead of these costs — I don’t know how to get a handle on it.”
So this is a real problem that customers are facing. I have heard this again and again: They don’t have visibility into the environment. They don’t know what’s being utilized. Sometimes they are underutilized, sometimes they are overutilized. And they don’t know what they are going to end up paying at the end of the day.
A common example is that, in a public cloud, people will launch a very large number of VMs because that’s what they are used to doing. But they consume maybe 10 to 20 percent of that. What they don’t realize is that they are paying the whole bill regardless. More visibility is going to become key to getting a handle on the economics of these things.
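As a rough illustration of that visibility point, the sketch below uses plain Python with made-up VM names, costs, and utilization figures (not any real cloud provider API) to flag machines whose owners are paying full price for mostly idle capacity:

```python
# Hypothetical right-sizing report. The VM data is invented for illustration;
# a real report would pull utilization and cost from each provider's
# monitoring and billing APIs.

vms = [
    {"name": "web-01", "cloud": "aws",     "monthly_cost": 280.0, "avg_cpu_pct": 12},
    {"name": "etl-02", "cloud": "gcp",     "monthly_cost": 540.0, "avg_cpu_pct": 71},
    {"name": "dev-03", "cloud": "on-prem", "monthly_cost": 150.0, "avg_cpu_pct": 8},
]

UNDERUSED_THRESHOLD = 20  # percent; an arbitrary cutoff for this sketch

def right_sizing_report(vms):
    """Print the VMs that are paid for in full but largely idle."""
    for vm in vms:
        if vm["avg_cpu_pct"] < UNDERUSED_THRESHOLD:
            wasted = vm["monthly_cost"] * (1 - vm["avg_cpu_pct"] / 100)
            print(f"{vm['name']} ({vm['cloud']}): ~${wasted:.0f}/month paid for idle capacity")

right_sizing_report(vms)
```

Even a simple roll-up like this makes the gap between what is provisioned and what is actually used visible, which is the precondition for the cost conversation described above.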
Gardner: We have seen these kinds of problems before in general business procurement. Many times it’s the Wild West, but then they bring it under control. Then they can negotiate better rates as they combine services and look for redundancies. But you can’t do that until you know what you’re using and what it costs.
So, is the first step getting an inventory of where your cloud deployments are, what the true costs are, and then start to rationalize them?
Guardrails reduce risk, increase innovation
Singh: Absolutely right. That’s where you start, and at HPE we have services to do that. The first thing is to understand where you are. Get a baseline of what is on-premises, what is off-premises, and which applications are required to run where. What’s the footprint that I require in these different places? What is the overall cost I’m incurring, and where do I want to be? Answering those questions is the first step to getting a mixed environment you can control — and getting away from the Wild West.
Put in the compliance guardrails so that IT can get ahead of, and avoid, the problems we are seeing today.
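One minimal way to picture that baseline, assuming nothing about HPE’s own tooling, is a simple inventory record per workload that captures where it runs, what it costs, and whether it sits inside the agreed guardrails. The field names below are illustrative, not an HPE schema:

```python
from dataclasses import dataclass

# Hypothetical inventory record used to establish a baseline across environments.
@dataclass
class WorkloadRecord:
    name: str
    location: str          # e.g. "on-prem", "aws", "gcp"
    monthly_cost: float
    compliant: bool        # within the organization's guardrails?

inventory = [
    WorkloadRecord("billing-db", "on-prem", 2100.0, True),
    WorkloadRecord("ml-training", "gcp", 3400.0, True),
    WorkloadRecord("legacy-batch", "aws", 900.0, False),
]

baseline_cost = sum(w.monthly_cost for w in inventory)
out_of_policy = [w.name for w in inventory if not w.compliant]
print(f"Baseline spend: ${baseline_cost:,.0f}/month; out of policy: {out_of_policy}")
```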
Gardner: As a counterpoint, I don’t think that IT wants to be perceived as the big bad killjoy that comes to the data scientists and says, “You can’t get those clusters to support the data environment that you want.” So how do you balance that need for governance, security, and cost control with not stifling innovation and allowing creative freedom?
How to Transform the Traditional Datacenter
Singh: That’s a very good question. When we started building out our managed cloud solutions, a key criterion was to provide the guardrails yet not stifle innovation for the line of business managers and developers. The way you do that is that you don’t become the man in the middle. The idea is you allow the line of businesses and developers to access the resources they need. However, you put guardrails around which resources they can access, how much they can access, and you provide visibility into the budgets. You still let them access the direct APIs of the different multicloud environments.
You don’t say, “Hey, you have to put in a request to us to do these things.” You have to be more behind-the-scenes, hidden from view. At the same time, you need to provide those budgets and those controls. Then they can perform their tasks at the speed they want and access the resources that they need — but within the guardrails, compliance, and business requirements that IT has.
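A sketch of what such a guardrail might look like in practice, under the assumption of a simple per-team budget and an allow-list of instance sizes (the policy table and function are hypothetical and do not reflect an actual HPE or cloud-provider API):

```python
# Hypothetical guardrail check performed before a developer's request is
# passed straight through to the cloud provider's own API.

POLICY = {
    "data-science": {"monthly_budget": 10_000, "allowed_sizes": {"m5.large", "m5.xlarge"}},
    "web-team":     {"monthly_budget": 4_000,  "allowed_sizes": {"t3.medium"}},
}

def within_guardrails(team: str, size: str, projected_monthly_cost: float, spent_so_far: float) -> bool:
    """Allow the request only if the instance size is permitted and the budget holds."""
    policy = POLICY.get(team)
    if policy is None:
        return False
    if size not in policy["allowed_sizes"]:
        return False
    return spent_so_far + projected_monthly_cost <= policy["monthly_budget"]

print(within_guardrails("data-science", "m5.large", 450.0, 8_800.0))  # True: within budget and allowed size
print(within_guardrails("web-team", "m5.xlarge", 450.0, 1_000.0))     # False: size not on the allow-list
```

The point of the design is that developers keep direct access to the provider APIs; the check simply sits alongside that access and enforces budget and compliance limits.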
Gardner: Now that HPE has been on the vanguard of creating the tools and methods to get the necessary insights, make the measurements, and recognize the need for balance between control and innovation — have you noticed changes in organizational patterns? Are there now centers of cloud excellence or cloud-management bureaus? Does there need to be a counterpart to the tools in the form of management structure changes as well?
Automate, yet hold hands, too
Singh: This is the process and the people parts that you want to address. How do you align your organizations, and what are the things that you need to do there? Some of our customers are beginning to make those changes, but organizations are difficult to change as they get on this journey. Some of them are early; some of them are at a much later stage. A lot of customers, frankly, are still in the early phases of multicloud and hybrid cloud. We are working with them to make sure they understand the changes they’ll need to make in order to function properly in this new world.
Gardner: Unfortunately, these new requirements come at a time when cloud management skills — understanding DataOps, ITOps, and CloudOps — are hard to find and harder to keep. So one of the things I’m seeing is the adoption of automation around guidance, strategy, and analysis. The systems start to do more for you. Tell me how automation is coming to bear on some of these problems, and perhaps mitigating the skills shortage.
Singh: The tools can only do so much. So you automate. You make sure the infrastructure is automated. You make sure your access to public cloud — or any other cloud environment — is automated.
That can mitigate some of the problems, but I still see a need for hand-holding from time to time in terms of the process and people. That will still be required. Automation will help tie in storage, network, and compute, and you can put all of that together. This [composability] reduces the need for, and dependency on, some of the process and people. Automation mitigates the physical labor and the need for someone to take days to do it. However, you need that expertise to understand what needs to be done. And this is where HPE is helping.
You might have heard about our HPE GreenLake managed cloud services offerings. We are moving toward an as-a-service model for a lot of our software and tooling. We are using the automation to help customers fill the expertise gap. We can offer more of a managed service by using automation tools underneath it to make our tasks easier. At the end of the day, the customer only sees an outcome or an experience — versus worrying about the details of how these things work.
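As a loose illustration of that kind of behind-the-scenes automation, here is a minimal sketch, assuming an entirely hypothetical provision() function and request format (not an HPE GreenLake or HPE OneView interface), of how compute, storage, and network can be expressed as one declarative request instead of three manual tasks:

```python
# Hypothetical composed-resource request; the dictionary shape and the
# provision() function are invented for illustration only.

request = {
    "compute": {"vcpus": 16, "memory_gb": 64},
    "storage": {"volumes": [{"size_gb": 500, "tier": "ssd"}]},
    "network": {"vlan": 120, "firewall_profile": "internal-only"},
}

def provision(request: dict) -> None:
    """Stand-in for the automation layer that drives API-addressable infrastructure."""
    for layer, spec in request.items():
        print(f"configuring {layer}: {spec}")

provision(request)
```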
Gardner: Let’s get back to the problem of multicloud management. Why can’t you just use the tools that the cloud providers themselves provide? Maybe you might have deployments across multiple clouds, but why can’t you use the tools from one to manage more? Why do we need a neutral third-party position for this?
Singh: Take a hypothetical case: I have deployments in Amazon Web Services (AWS) and I have deployments in Google Cloud Platform (GCP). And to make things more complicated, I have some workloads on premises as well. How would I go about tying these things together?
Now, if I go to AWS, they are very, very opinionated about AWS services. They have no interest in looking at bills coming out of GCP or Microsoft Azure. They are focused on their services and what they are delivering. The reality, however, is that customers are using these different environments for different things.
The multiple public cloud providers don’t have an interest in managing other clouds or to look at other environments. So third parties come in to tie everything together, and no one customer is locked into one environment.
If they go to AWS, for example, they can only look at billing, services, and performance metrics of that one provider. And they do a very good job. Each of these cloud providers does a very good job of exposing their own services and providing visibility into them. But they don’t tie it across multiple environments. And especially if you throw the on-premises piece into the mix, it’s very difficult to look at and compare costs across these multiple environments.
Gardner: When we talk about on-premises, we are not just talking about the difference between your data center and a cloud provider’s data center. We are also talking about the difference between a traditional IT environment and the IT management tools that came out of it. How has HPE crossed the chasm between traditional IT management — with its automation and composability types of benefits — and the higher-level multicloud management?
Tying worlds together
Singh: It’s a struggle to tie these worlds together, from my experience, and I have been doing this for some time. I have seen customers spend months, and sometimes years, putting together a solution from various vendors, tying them together, deploying something on-premises, and also trying to tie that to an off-premises environment.
At HPE, we fundamentally changed how on-premises and off-premises environments are managed by introducing our own software-as-a-service (SaaS) management environment, which customers do not have to manage themselves. That SaaS portal connects to on-premises environments. Because we have native, programmable, API-driven infrastructure, we were able to connect that. And being able to drive it from the cloud itself made it very easy to hook up to other cloud providers like AWS, Azure, and GCP. This capability ties the two worlds together. As you build out the tools, the key is automation on the infrastructure piece, and how you can connect and manage everything from a centralized portal that ties it all together with a click.
Through this common portal, people can onboard their multicloud environments, get visibility into their costs, get visibility into compliance — look at whether they are HIPAA compliant or not, PCI compliant or not — and get access to resources that allow them to begin to manage these environments.
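A rough sketch of the kind of compliance visibility described here, using an invented list of onboarded accounts and control names rather than any real portal API:

```python
# Hypothetical compliance roll-up across onboarded environments.
# The account data and required-control names are illustrative only.

accounts = [
    {"name": "prod-aws",   "controls": {"hipaa", "pci"}},
    {"name": "dev-gcp",    "controls": {"hipaa"}},
    {"name": "on-prem-dc", "controls": {"hipaa", "pci"}},
]

REQUIRED = {"hipaa", "pci"}

for account in accounts:
    missing = REQUIRED - account["controls"]
    status = "compliant" if not missing else f"missing {sorted(missing)}"
    print(f"{account['name']}: {status}")
```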
How to Better Manage Hybrid and Multicloud Economics
For example, onboarding into any public cloud is very, very complex. Setting up a private cloud is very complex. But today, with the software that we are building, and some of our customers are using, we can set up a private cloud environment for people within hours. All you have to do is connect with our tools, like HPE OneView and other things that we have built for the infrastructure and automation pieces. You then tie that together to a public cloud-facing tenant portal and onboard that with a few clicks. We can connect with their public cloud accounts and give them visibility into their complete environment.
And then we can bring in cost analytics. We have consumption analytics as part of our HPE GreenLake offering, which allows us to look at cost for on-premises as well as off-premises resources. You can get a dashboard that shows you what you are consuming and where.
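As a simple illustration of that kind of roll-up (the figures and environment names are invented, and this does not reproduce HPE GreenLake’s actual consumption analytics), costs from on-premises metering and public cloud bills can be aggregated into one view:

```python
from collections import defaultdict

# Hypothetical consumption records pulled from on-prem metering and cloud bills.
records = [
    {"environment": "on-prem", "service": "vm",      "cost": 5200.0},
    {"environment": "aws",     "service": "vm",      "cost": 3100.0},
    {"environment": "aws",     "service": "storage", "cost": 800.0},
    {"environment": "gcp",     "service": "ml",      "cost": 2600.0},
]

totals = defaultdict(float)
for record in records:
    totals[record["environment"]] += record["cost"]

for environment, cost in sorted(totals.items()):
    print(f"{environment}: ${cost:,.0f}/month")
```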
Gardner: That level of management and the capability to be distributed across all these different deployment models strikes me as a gift that could keep on giving. Once you have accomplished this and get control over your costs, you are next able to rationalize what cloud providers to use for which types of workloads. It strikes me that you can then also use that same management and insight to start to actually move things around based on a dynamic or even algorithmic basis. You can get cost optimization on the fly. You can react to market forces and dynamics in terms of demand on your servers or on your virtual machines anywhere.
Are you going to be able to accelerate the capability for people to move their fungible workloads across different clouds, both hybrid and multicloud?
Optimizing for the future
Singh: Yes, absolutely right. There is more complexity in terms of moving workloads here and there, because there are data proximity requirements and various other constraints. But the optimization piece is absolutely something we can do on the fly, especially if you start throwing AI into the mix.
You will be learning over time what needs to be deployed where, and where your data gravity might be, and where you need applications closer to the data. Sometimes it’s here, sometimes it’s there. You might have edge environments that you might want to manage from this common portal, too. All that can be brought together.
And then with those insights, you can make optimization decisions: “Hey, this application is best deployed in this location for these reasons.” You can even automate that. You can make that policy-driven.
Think about it this way — you are a person who wants to deploy something. You request a resource, and that gets deployed for you based on the algorithm that has already decided where the optimal place to put it is. All of that works behind the scenes without you having to really think about it. That’s the world we are headed to.
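A minimal sketch of such a policy-driven placement decision, assuming invented scoring weights and environment attributes rather than any real placement engine:

```python
# Hypothetical placement scoring: each candidate environment is scored on cost
# and proximity to the workload's data, with compliance as a hard filter.
# All values are made up for illustration.

environments = {
    "on-prem": {"cost_score": 0.6, "data_proximity": 0.9, "compliant": True},
    "aws":     {"cost_score": 0.8, "data_proximity": 0.4, "compliant": True},
    "gcp":     {"cost_score": 0.7, "data_proximity": 0.5, "compliant": False},
}

def best_placement(environments: dict) -> str:
    """Pick the compliant environment with the best combined score."""
    candidates = {name: attrs for name, attrs in environments.items() if attrs["compliant"]}
    return max(candidates, key=lambda name: candidates[name]["cost_score"] + candidates[name]["data_proximity"])

print(best_placement(environments))  # "on-prem" in this made-up example
```

In a policy-driven setup, a request for resources would be routed through a decision like this automatically, so the requester never has to think about where the workload lands.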
Gardner: We have talked about some really interesting subjects at a high level, even some thought leadership involved. But are there any concrete examples that illustrate how companies are already starting to do this? What kinds of benefits do they get?
Singh: I won’t name the company, but there was a business in the UK that was able to deploy VMs within minutes in their on-premises environment, as well as gain cost benefits from their AWS deployments.
We were able to go in, connect to their VMware environment, in this case, and allow them to deploy VMs. We were up and running in two hours. Then they could optimize for their developers to deploy VMs and request resources in that environment. They saved 40 percent in operational efficiency. So now they were mostly cost optimized, their IT team was less pressured to go and launch VMs for their developers, and they gained direct self-service access through which they could go and deploy VMs and other resources on-premises.
At the same time, IT had the visibility into what was being deployed in the public cloud environments. They could then optimize those environments for the size of the VMs and assets they were running there and gain some cost advantages there as well.
How to Solve Cost and Utilization Challenges of Hybrid Cloud
Gardner: For organizations that recognize they have a sprawl problem when it comes to cloud, that their costs are not being optimized, but that they are still needing to go about this at sort of a crawl, walk, run level — what should they be doing to put themselves in an advantageous position to be able to take advantage of these tools?
Are there any precursor activities that companies should be thinking about to get control over their clouds, and then be able to better leverage these tools when the time comes?
Watch your clouds
Singh: Start with visibility. You need an inventory of what you are doing. And then you need to ask the question, “Why?” What benefit are you getting from these different environments? Ask that question, and then begin to optimize. I am sure there are very good reasons for using multicloud environments, and many customers do. I have seen many customers use it, and for the right reasons.
However, there are other people who have struggled because there was no governance and guardrails around this. There were no processes in place. They truly got into a sprawled environment, and they didn’t know what they didn’t know.
So first and foremost, get an idea of what you want to do and where you are today — get a baseline. And then understand the impact: what are the levers on cost? What are the drivers of efficiency? Make sure you understand the people and process — more than the technology, because the technology does exist, but you need to make sure that your people and processes are aligned.
And then lastly, call me. My phone is open. I am happy to have a talk with any customer that wants to have a talk.
How to Achieve Composability Across Your Datacenter
Gardner: On that note of the personal approach, people who are passionate in an organization around things like efficiency and cost control are looking for innovation. Where do you see the innovation taking place for cloud management? Is it the IT Ops people, the finance people, maybe procurement? Where is the innovative thinking around cloud sprawl manifesting itself?
Singh: All three are good places for innovation. I see IT Ops at the center of the innovation. They are the ones who will be effecting change.
Finance and procurement could benefit from these changes, and they could be drivers of the requirements. They are going to be saying, “I need to do this differently because it doesn’t work for me.” And the innovation also comes from developers and line-of-business managers who have been doing this for a while and who understand what they really need.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
You may also be interested in:
- Using AI to solve data and IT complexity — and thereby better enable AI
- How Automation and Intelligence Blend with Design Innovation to Enhance the Experience of Modern IT
- How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure
- How Ferrara Candy depends on automated IT intelligence to support rapid business growth
- How real-time data streaming and integration set the stage for AI-driven DataOps
- How the composable approach to IT aligns automation and intelligence to overcome mounting complexity
- How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’
- How HPC supports ‘continuous integration of new ideas’ for optimizing Formula 1 car design