Jonathan Seelig is co-founder and Executive Chairman of Ridge. He was previously a co-founder of Akamai.
The public cloud has revolutionized computing by enabling enterprises to interact with resources and data in a cloud-native way: remotely, on demand, and pay as you go. Many popular and ubiquitous applications depend on the public cloud. And yet, most IT computing takes place outside of the public cloud, in on-prem or third-party data centers. IT spending figures make this clear: cloud IaaS and PaaS amounted to roughly a $200 billion per-year industry in 2022, while the overall IT market is measured in trillions.
So, will the trillions of dollars of IT spending all shift to the large, public cloud? I don’t believe so. For many potential users, the public cloud has “coverage gaps” because computation resources reside in a limited number of large data farms. This geographic constraint affects the cloud in three primary areas: performance (latency and throughput), data sovereignty (residency and regulations) and commercial terms (competition and pricing). Aware of these limitations, many enterprises hesitate to migrate on-premises IT functions to the cloud.
This cloud Catch-22—the public cloud has spawned a new generation of applications for which it may not be flexible enough to provide adequate services—presents a large business opportunity for the cloud alternatives: local data centers and, in particular, IT integrators. By taking advantage of advances in application development (virtualization and containerization), IT integrators can create cloud resources in local data centers at the network edge, close to end users and exactly where enterprises want them.
Let’s take a look at five reasons why I believe that for many application owners, their next cloud may be one that they (and their IT integrator) architect themselves.
1. Because Modern Applications Will Require It
Application performance is critical for emerging applications requiring ultra-fast response times, such as AR/VR, telemedicine and autonomous vehicles. These applications depend on the ability to process large amounts of data in real time, which is a problem when the cloud is located many miles away. It’s just a matter of physics.
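To make the physics concrete, here is a back-of-the-envelope sketch of the latency floor distance alone imposes. The fiber propagation speed is a standard approximation; the distance and the AR/VR budget figure are illustrative assumptions, not numbers from the article.

```python
# Light in optical fiber travels at roughly 200,000 km/s (about 2/3 of c,
# due to the fiber's refractive index) -- a hard lower bound on latency.
FIBER_SPEED_KM_S = 200_000  # approximate propagation speed in fiber

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip time to a data center distance_km away,
    ignoring routing detours, queuing and processing delays."""
    return 2 * distance_km / FIBER_SPEED_KM_S * 1000

# A data center 1,500 km away adds at least 15 ms of round trip before any
# processing happens -- a large share of the ~20 ms motion-to-photon budget
# often cited for responsive AR/VR. (1,500 km is a hypothetical distance.)
print(f"{min_round_trip_ms(1500):.1f} ms")
```

No amount of engineering removes this floor; only moving the compute closer does, which is the case for edge-located cloud resources.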
2. Because The Market Is Growing
In my experience, IT providers and integrators want to maintain and grow their business by hosting customers’ applications wherever they need to be. But building and running a network of large data centers is prohibitively costly. Converting existing capacity into a cloud platform, however, enables them to add cloud coverage without investing in new infrastructure.
3. Because Of The CFOs
Cloud spend is a huge part of many businesses’ budgets, in some cases the largest line item after payroll. As with all purchasing, vendor lock-in should be avoided. Open protocols for app development enable businesses to architect applications so that they aren’t locked into one cloud. With that leverage, businesses can get the performance they need at lower cost.
4. Because Even The Large Clouds Know It’s Coming
Recognizing the need for additional granular distribution, the big cloud players are beginning to offer solutions (e.g., AWS Outposts, Google Anthos) to extend the cloud to any location. However, these solutions depend on proprietary hardware (which can be expensive) and require end customers to be responsible for ongoing operations (monitoring, repairing, etc.).
5. Most Of All … Because They Can
Standards-based software has democratized the building blocks of cloud computing so that applications can be deployed in specific locations without needing to purchase services from the hyperscalers. Data centers are able to provide cloud-native platform as a service (PaaS) offerings, such as managed Kubernetes, containers and object storage.
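One reason those building blocks are portable is that a workload is described once, in the open Kubernetes API schema, and can then be handed to any conformant cluster, whether it runs in a hyperscaler region or a local data center. The sketch below builds such a manifest in Python; the application name and image are placeholders I've chosen for illustration.

```python
import json

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal, provider-agnostic Kubernetes Deployment manifest.

    Because the document follows the open apps/v1 Deployment schema,
    the same manifest can be applied to a managed Kubernetes cluster
    in any data center.
    """
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name,
                         "image": image,
                         "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

# "example-app" and the registry path are hypothetical placeholders.
manifest = deployment_manifest("example-app", "registry.example.com/example-app:1.0")
print(json.dumps(manifest, indent=2))
```

Nothing in the manifest names a provider; the choice of where it runs becomes a deployment-time decision rather than a development-time commitment.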
Preparing For The Move
While many subscribers to the large public cloud are perfectly happy—and rightfully so, as AWS, Google and Microsoft offer amazing services—I believe that migration to localized clouds is inevitable for a growing number of businesses, either as a stand-alone cloud or through multicloud/hybrid cloud architectures. In preparation, there are steps application owners should consider. The first is simply recognizing that a cloud alternative exists.
Second, I believe app owners should avoid using proprietary development tools. Instead, consider becoming familiar with open, community-driven standards, such as Kubernetes, as part of your cloud environment. In that way, you can create applications that are agnostic to any data center’s underlying physical resources and that use modern APIs to programmatically run applications in any location. The Kubernetes community maintains APIs and conformance standards whose purpose is to achieve interoperability. Although vendors would like you to use their proprietary tools, the simplest way to ensure flexibility is to build within the standard and deploy within the standard.
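In application code, "build within the standard" usually means coding against an interface rather than a vendor SDK, so the backing service can be swapped per deployment. The sketch below is a minimal, hypothetical illustration of that pattern for object storage; the class and key names are my own, not from the article.

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Minimal storage interface the application codes against,
    keeping the choice of provider a deployment-time decision."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in backend for local testing; a real deployment would wrap
    whatever S3-compatible service the chosen data center exposes."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: ObjectStore, report: bytes) -> None:
    # Application logic sees only the interface, never a vendor SDK.
    store.put("reports/latest", report)

store = InMemoryStore()
archive_report(store, b"q3 numbers")
print(store.get("reports/latest"))
```

Migrating such an application between a hyperscaler and a local cloud means writing one new backend class, not rewriting the application.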
There can be hurdles to overcome. Just this past month, we have seen the public cloud hyperscalers battling over whether proprietary software and licensing restrictions are suffocating free cloud competition by creating vendor lock-in. Such constraints strip a business of the flexibility to choose any cloud provider when migrating from on-premises computing to the cloud. The FTC has opened an inquiry into cloud market competition.
Nevertheless, the direction is clear. Businesses are finding they can enjoy the benefits of public clouds while retaining the advantages of locality. Although the large public cloud is here to stay, smaller cloud providers and IT integrators are discovering that they can be major players in cloud computing by providing customers with full cloud-native services in any location.
Forbes Business Council is the foremost growth and networking organization for business owners and leaders.