Companies large and small are changing in order to innovate faster, provide better customer experiences, and achieve greater cost efficiencies. British philosopher Alan Watts has a suggestion for dealing with this type of disruption: “The only way to make sense out of change is to plunge into it, move with it, and join the dance.”
Sounds simple, right? It is not.
Many businesses are dancing straight into the arms of the public cloud because it enables them to meet time-to-market deadlines by scaling quickly and easily. Yet others find that certain workloads are not suited to this kind of tango due to cost, performance, compliance, security, or complexity concerns. And a growing number of enterprises are looking for a mix of IT deployments to achieve the best results. To adjust quickly to changing business needs, IT wants the flexibility to place some applications in the public cloud and others in a private cloud on-premises – sort of like choosing to enjoy both hip-hop and ballet.
Transforming the traditional
As organizations try to select the best deployment options, they are finding that cloud is no longer a destination; instead, it is a new way of doing business that focuses on speed, scalability, simplicity, and economics. This business model allows cloud architects to distribute workloads across a mix of on-premises and public clouds. No matter where IT places a workload, everyone in the enterprise expects fast service delivery, operational simplicity, and optimized costs.
If this scenario sounds too good to be true, it actually is…for the moment.
IT is struggling to achieve this type of cloud transformation due to a number of constraints typically found in data centers. Most people acknowledge that much of today’s data center infrastructure is slow, complex, and manual, which means that IT can’t properly deliver the services needed for a modern, cloud-based deployment model. Yet, the challenge is actually much bigger – it involves legacy thinking, which can be harder to change than technology.
Out with the old way of thinking … in with the new
In the past, many development teams routinely used a waterfall model for project management: project leaders defined the project at the start, and it then moved through a series of sequential phases over its lifecycle. This model has its roots in engineering, where a physical design was a critical part of the project and any change to that design was costly. Changes occurred infrequently and all at once. IT operations was comfortable with this process, because the old way of thinking held that reducing the frequency of change also reduces risk.
Modern developers have discovered that the opposite can be true. If something goes wrong with a massive change, it could very well bring down the entire company. Therefore, the new way of thinking is to implement small changes much more frequently. That way, if something fails, it is a small failure – and the team can quickly change course without causing major problems.
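To make that intuition concrete, here is a minimal back-of-the-envelope sketch. Every probability and impact figure in it is a hypothetical assumption chosen for illustration, not data from any real deployment; the point is only to show how many small changes can carry less expected impact than one large one.

```python
# Hypothetical back-of-the-envelope comparison of release strategies.
# All probabilities and impact figures are illustrative assumptions only.

big_release_failure_prob = 0.25    # assumed chance a large, infrequent release fails
big_release_impact_hours = 48      # assumed recovery time when a large release fails

small_release_failure_prob = 0.05  # assumed chance any single small release fails
small_release_impact_hours = 1     # assumed recovery time for a small, isolated failure
small_releases_per_quarter = 13    # assumed weekly releases over one quarter

# Expected downtime per quarter under each approach
expected_big = big_release_failure_prob * big_release_impact_hours
expected_small = (small_releases_per_quarter
                  * small_release_failure_prob
                  * small_release_impact_hours)

print(f"Expected downtime, one big release per quarter: {expected_big:.1f} hours")
print(f"Expected downtime, many small releases:         {expected_small:.1f} hours")
```

Under different assumptions the exact numbers change, but the shape of the argument stays the same: smaller changes mean smaller, more recoverable failures.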
A transformed data center needs a new mindset that embraces agile principles, similar to how application developers work – delivering and accepting project changes in short-duration phases called sprints. During each sprint, continuous change is encouraged, creating a more agile and flexible environment. And failure is allowed, because that is when learning – and adjustment – occurs.
Another big change involves capital spending and total cost of ownership. The old thinking relied on inflexible consumption models that forced the organization to pay for everything up front. Again, IT believed this model was less risky because the costs were known in advance and could be planned for accurately.
Yet this model can be riskier precisely because it is not agile: IT could not scale up infrastructure for a short period to meet a critical need and then dial it back down once that need passed. Today's new way of thinking about IT infrastructure involves a flexible, as-a-service consumption model in which customers pay only for what they use, when they use it.
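As a rough, purely illustrative sketch, the difference between buying peak capacity up front and paying only for what is actually consumed might look like the comparison below. The prices and utilization figures are assumptions made up for this example, not HPE or any vendor's pricing.

```python
# Hypothetical cost comparison: upfront peak-capacity purchase vs. pay-per-use.
# Every figure here is an illustrative assumption, not vendor pricing.

peak_capacity_units = 100        # assumed capacity needed during the busiest month
average_utilization = 0.35       # assumed average utilization across the year
upfront_cost_per_unit = 1200     # assumed purchase cost per unit of capacity
usage_cost_per_unit_month = 150  # assumed as-a-service cost per unit per month

# Upfront model: buy enough for peak demand, whether or not it is used
upfront_total = peak_capacity_units * upfront_cost_per_unit

# Consumption model: pay monthly only for the capacity actually consumed
consumption_total = (peak_capacity_units * average_utilization
                     * usage_cost_per_unit_month * 12)

print(f"Upfront purchase for peak capacity: ${upfront_total:,.0f}")
print(f"Pay-per-use at average utilization: ${consumption_total:,.0f}")
```

With different assumptions the comparison can of course flip; the real advantage of the consumption model is that spend tracks actual demand rather than a projected peak.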
Creating a perfectly choreographed experience across your enterprise
Hewlett Packard Enterprise (HPE) is working to solve your legacy-thinking challenges in the data center and in the public cloud. Cloud Technology Partners (CTP), a Hewlett Packard Enterprise company, can help your team make the mindset changes your business needs to succeed in a digital transformation and identify the steps to take toward a truly hybrid model.
HPE is also creating a perfectly choreographed series of solutions that will quickly modernize your data center and public cloud infrastructure footprint. With the help of HPE’s industry experts and innovative infrastructure, you can quickly turn your legacy data center into a hybrid cloud experience that combines modern technologies and software-defined infrastructure such as composable infrastructure, hyperconvergence, infrastructure management, and multi-cloud management.
A new hybrid cloud operating model built for speed, agility, and cost optimization is upon us. Make sure you have the right partner to “plunge into it, move with it, and join the dance.”
Advisory services at Cloud Technology Partners can help you understand how to take advantage of today's modern multi-cloud technology. To learn more about how composable infrastructure can power your digital transformation, click here. Visit HPE OneView and HPE OneSphere to learn how to simplify and automate infrastructure and multi-cloud management.
About Gary Thome
Gary Thome is the Vice President and Chief Technologist for the Software-Defined and Cloud Group at Hewlett Packard Enterprise (HPE). He is responsible for the technical and architectural directions of converged data center products and technologies.
To read more articles from Gary, check out the HPE Shifting to Software-Defined blog.