
Siddharth Kushwaha

EVERYTHING ABOUT CLOUD NATIVE COMPUTING



The term “cloud-native” gets thrown around a lot, especially by cloud providers. Not only that, but it even has its own foundation: the Cloud Native Computing Foundation (CNCF), launched in 2015 by the Linux Foundation.

‘Cloud-native’ defined

In general usage, “cloud-native” is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. “Cloud-native” is about how applications are created and deployed, not where, though in practice it usually implies the public cloud rather than an on-premises datacenter.
The CNCF defines “cloud-native” a little more narrowly: applications built on an open source software stack that are containerized, with each part of the app packaged in its own container; dynamically orchestrated, so each part is actively scheduled and managed to optimize resource utilization; and microservices-oriented, to increase the overall agility and maintainability of applications.
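To make that definition concrete, here is a minimal sketch of one such microservice in Go; the “inventory” service name, its endpoints, and the default port are assumptions made purely for illustration. A service like this would be built into its own container image and then scheduled and scaled by an orchestrator.

```go
// A minimal sketch of a single microservice, assuming a hypothetical
// "inventory" service that exposes one JSON endpoint and a health check.
// In a cloud-native deployment this binary would be packaged into its own
// container image and scheduled by an orchestrator.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	// The listen address comes from the environment, so the orchestrator,
	// not the code, decides where the service runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	mux := http.NewServeMux()

	// Liveness/readiness endpoint for the orchestrator to probe.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// The service's single business endpoint.
	mux.HandleFunc("/items", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode([]string{"widget", "gadget"})
	})

	log.Printf("inventory service listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, mux))
}
```

Because the service owns only one narrow piece of functionality, many copies of it can be scheduled, replaced, or updated independently of the rest of the application.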

Differences between cloud-native and on-premises applications

Cloud-native application development requires a very different architecture than traditional enterprise applications.

Languages

On-premises apps written to run on company servers tend to be written in traditional languages such as C/C++, C# or another Visual Studio language (when deployed on a Windows Server platform), and enterprise Java. And if the app runs on a mainframe, it is likely written in COBOL.
Cloud-native apps are more likely to be written in a web-centric language, which means HTML, CSS, Java, JavaScript, .NET, Go, Node.js, PHP, Python, and Ruby.

Updatability

Cloud-native apps are always current, up to date, and available.
On-premises apps need periodic updates, usually delivered by the vendor on a subscription basis, and require downtime while each update is installed.

Elasticity

Cloud-native apps take advantage of the elasticity of the cloud by using increased resources during a usage spike. If your cloud-based e-commerce app experiences a spike in usage, you can set it to use extra compute resources until the spike subsides and then turn those resources off. A cloud-native app can adjust to the increased resources and scale as needed.
An on-premises app can’t scale dynamically.
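Elastic scaling means the platform starts and stops instances of the app as load changes, so each instance needs to start quickly and shut down cleanly when it is removed. Below is a minimal Go sketch of the shutdown half of that, assuming the platform signals instance removal with SIGTERM; the 10-second drain window is an arbitrary choice for illustration.

```go
// A sketch of graceful shutdown, assuming the platform sends SIGTERM when it
// removes an instance during scale-in. The 10-second drain window is an
// arbitrary illustrative value.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	// Run the server in the background so main can wait for a signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Block until the orchestrator asks this instance to stop.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Finish in-flight requests, then exit so the platform can remove this
	// instance (or replace it) without dropping traffic.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```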

Multitenancy

A cloud-native app has no problem working in a virtualized space and sharing resources with other apps.
Many on-premises apps either don’t work well in a virtual environment or don’t work at all and require a nonvirtualized space.

Connected resources

An on-premises app tends to be fairly rigid in its connections to surrounding resources such as networks, security, permissions, and storage. Many of these connections need to be hard-coded, and they break if anything is moved or changed.
“Network and storage are completely different in the cloud. When you hear the term ‘re-platforming,’ that is typically the work to accommodate the changes in networking, storage, and even database technologies to allow the app to run in the cloud,” says Deloitte’s Kavis.
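In practice, a cloud-native app avoids those hard-coded connections by reading its network, storage, and database settings from the environment at startup, so the platform can rewire them without a code change. A minimal Go sketch, assuming hypothetical DATABASE_URL and STORAGE_ENDPOINT variable names:

```go
// A sketch of externalized connection settings, assuming hypothetical
// DATABASE_URL and STORAGE_ENDPOINT environment variables. Nothing about the
// network or storage layout is baked into the binary, so replatforming the
// app only means changing its environment.
package main

import (
	"log"
	"os"
)

// mustEnv fails fast at startup if a required setting is missing, which is
// easier to detect than a broken hard-coded connection discovered later.
func mustEnv(key string) string {
	v := os.Getenv(key)
	if v == "" {
		log.Fatalf("missing required environment variable %s", key)
	}
	return v
}

func main() {
	dbURL := mustEnv("DATABASE_URL")
	storage := mustEnv("STORAGE_ENDPOINT")

	log.Printf("connecting to database at %s", dbURL)
	log.Printf("using object storage at %s", storage)
	// ... the rest of the app would open connections using these values.
}
```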

Downtime

There is greater redundancy in the cloud than there is on-premises, so if a cloud provider suffers an outage in one region, another region can pick up the slack.
On-premises apps might have failover ready, but there’s a good chance that if the server goes down, the app goes down with it.
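One simple way an application can lean on that regional redundancy from the client side is to try equivalent endpoints in more than one region and use whichever answers. The Go sketch below illustrates the idea; the endpoint URLs are placeholders, not real services.

```go
// A sketch of simple regional failover, assuming a list of equivalent
// endpoints in different regions. The URLs are placeholders.
package main

import (
	"fmt"
	"net/http"
	"time"
)

var endpoints = []string{
	"https://eu-west.example.com/api/health",
	"https://us-east.example.com/api/health",
}

// fetchWithFailover tries each region in order and returns the first
// successful response, so an outage in one region is absorbed by another.
func fetchWithFailover(client *http.Client) (*http.Response, error) {
	var lastErr error
	for _, url := range endpoints {
		resp, err := client.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil
		}
		if err == nil {
			resp.Body.Close()
			err = fmt.Errorf("endpoint %s returned %d", url, resp.StatusCode)
		}
		lastErr = err
	}
	return nil, fmt.Errorf("all regions failed: %w", lastErr)
}

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := fetchWithFailover(client)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("served from:", resp.Request.URL.Host)
}
```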

Automation

So much of the cloud is automated, and that includes app management. “The benefits of cloud-native delivery, especially speed and agility, significantly rely on a substrate of reliable, proven, and audited known-good processes that are executed repeatedly as needed by automation and orchestration tools rather than through manual intervention,” says Splunk’s Mann. Engineers should look to automate virtually anything they do more than once to enable repeatability, self-service, agility, scalability, and audit and control.
On-premises apps have to be managed manually.

Modular design

On-premises apps tend to be monolithic in design. They offload some work to libraries, to be sure, but in the end it’s one big app with a whole lot of subroutines. Cloud-native apps are much more modular, with much of their functionality broken down into microservices. This allows individual modules to be shut off when not needed, and updates to be rolled out to a single module rather than the whole app.
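The sketch below shows what that modularity looks like in code: what would have been an in-process subroutine in the monolith becomes a call to a separately deployed service. The “pricing” service, its PRICING_URL variable, and the /quote endpoint are hypothetical; in a real system the address would come from configuration or service discovery.

```go
// A sketch of how a monolith's internal function call becomes a call to a
// separately deployed module. The "pricing" service name, PRICING_URL
// variable, and /quote endpoint are hypothetical.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

type Quote struct {
	SKU   string  `json:"sku"`
	Price float64 `json:"price"`
}

// getQuote replaces what would have been an in-process subroutine in a
// monolith with an HTTP call to the independently updatable pricing module.
func getQuote(baseURL, sku string) (*Quote, error) {
	resp, err := http.Get(fmt.Sprintf("%s/quote?sku=%s", baseURL, sku))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var q Quote
	if err := json.NewDecoder(resp.Body).Decode(&q); err != nil {
		return nil, err
	}
	return &q, nil
}

func main() {
	// e.g. http://pricing:8080; if unset, the call below simply fails,
	// which the caller must be prepared to handle.
	pricingURL := os.Getenv("PRICING_URL")
	q, err := getQuote(pricingURL, "widget-42")
	if err != nil {
		fmt.Println("pricing service unavailable:", err)
		return
	}
	fmt.Printf("%s costs %.2f\n", q.SKU, q.Price)
}
```

The trade-off is that every module boundary becomes a network boundary, so callers have to handle failures that a simple function call never produced.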

Statelessness

The loosely coupled nature of the cloud means apps are not tied to the infrastructure, which means they can be stateless. A cloud-native app stores its state in a database or some other external entity, so instances can come and go while the app still tracks where it is in its unit of work. “This is the essence of loosely coupled. Not being tied to infrastructure allows an app to run in a highly distributed manner and still maintain its state independent of the elastic nature of the underlying infrastructure,” Kavis says.
Most on-premises apps are stateful, meaning they store the state of the app on the infrastructure the code runs on. Because of this, adding server resources can break the app.
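To make the contrast concrete, the Go sketch below handles the same counter both ways: the stateful version keeps the count in the process's own memory, so it is lost or diverges as instances come and go, while the stateless version delegates to an external store behind an interface. The StateStore interface and its in-memory stand-in are illustrative assumptions; a real deployment would back it with a database or cache service.

```go
// A sketch contrasting in-process state with externalized state. The
// StateStore implementation (a database, Redis, etc.) is deliberately left
// abstract; only an in-memory stand-in is shown so the example compiles.
package main

import (
	"fmt"
	"sync"
)

// Stateful: the count lives inside this process, so two instances behind a
// load balancer each see a different, partial count.
type inProcessCounter struct {
	mu sync.Mutex
	n  int
}

func (c *inProcessCounter) Increment() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
	return c.n
}

// Stateless: the app holds no state of its own and delegates to an external
// store shared by every instance.
type StateStore interface {
	Increment(key string) (int, error)
}

func handleVisit(store StateStore) (int, error) {
	// Any instance can serve this request; the store keeps the truth.
	return store.Increment("visits")
}

// memoryStore is a stand-in used only to make this sketch runnable; a real
// deployment would back StateStore with a database or cache service.
type memoryStore struct{ counts map[string]int }

func (m *memoryStore) Increment(key string) (int, error) {
	m.counts[key]++
	return m.counts[key], nil
}

func main() {
	local := &inProcessCounter{}
	fmt.Println("in-process count:", local.Increment())

	shared := &memoryStore{counts: map[string]int{}}
	n, _ := handleVisit(shared)
	fmt.Println("externalized count:", n)
}
```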

The challenges of cloud-native computing

One of the big mistakes customers make is trying to lift and shift their old on-premises apps to the cloud, Mann says. “Attempting to take existing applications—especially monolithic legacy applications—and move them onto a cloud infrastructure will not take advantage of essential cloud-native features.”
Instead, you should look to do new things in new ways, either by putting new cloud-native applications into new cloud infrastructure or by breaking up existing monoliths to refactor them using cloud-native principles from the ground up.
You also need to dispense with your old developer methods. The waterfall model certainly won’t do, and even agile development might not be enough. So, you must adopt new cloud-native approaches like minimum viable product (MVP) development, multivariate testing, rapid iteration, and working closely across organizational boundaries in a devops model.
There are many aspects to being cloud-native, including infrastructure services, automation/orchestration, virtualization and containerization, microservices architecture, and observability. All of these mean a new way of doing things, which means breaking old habits as you learn the new ways. So do it at a measured pace.
As cloud computing was starting to hit its stride six or seven years ago, one of the important questions people were struggling with was: "What do my apps have to look like if I want to run them in a public, private, or hybrid cloud?"
There were a number of takes at answering this question at the time.
One popular metaphor came from a presentation by Bill Baker, then at Microsoft. He contrasted traditional application “pets” with cloud app “cattle.” In the former case, you name your pets and nurse them back to health if they get sick. In the latter case, you give them numbers and, if something happens to one of them, you eat hamburger and get a new one. The metaphor was imperfect, as well as being perhaps a bit culturally insensitive, but it did capture an essential distinction between long-lived unique instances on the one hand and large numbers of essentially disposable instances on the other.
There were other attempts to codify the distinction. “Twelve-factor apps” is an explicit methodology for building software-as-a-service apps. Looking at the question from more of a business angle, the industry analyst firm Gartner used Mode 1 and Mode 2 to distinguish classic IT (which focuses on attributes like stability, robustness, cost-effectiveness, and vertical scale) from cloud-native IT (which emphasizes adaptability and agility).
These remain useful perspectives. Many modern, dynamic, and scale-out workloads run as virtual machines in both public clouds and private clouds like OpenStack. They're clearly developed and operated according to a different philosophy than traditional scale-up, long-running apps on a "Big Iron" server.
However, cloud-native has come to usually mean something more specific, especially in the context of application architecture and design patterns. It's the intersection of containerized infrastructure and applications composed using fine-grained API-driven services, aka, microservices. The combination has been fortuitous. Companies like Netflix were promoting the microservices idea as a way to make effective use of cloud computing. Containers, first as implemented through early platform-as-a-service offerings and then as part of a broader, standardized ecosystem, came along as a great way to package up, deploy, and manage those microservices.


“One of the things we've learned is that if you can't get it to market more quickly, there is no doubt that the market will have changed and no matter how well you've engineered it or built it or deployed it or trained your folks, it's not going to be quite right because it's just a little too late.”
James McGlennon
Executive VP and CIO, Liberty Mutual Insurance Group
