What is Next Architecture?
Growing adoption of microservices, cloud, containers, and orchestration signals a paradigm shift that we're calling Next Architecture.
Every company has a digital presence—a digital expression of itself that encompasses everything from its public-facing websites to its deepest back-end databases; a digital presence whose reach extends from the hippest social media platforms to the ancient MUMPS database on life support in its shrinking on-premises data center. A company’s digital presence touches all of its business operations and business processes: finance, sales, marketing, HR, operations, and, of course, IT itself.
Here’s the thing: virtually every company is dissatisfied with its digital presence. Software projects are rarely completed on time and on budget, and, once delivered, almost never suit the needs of the business. The most obvious explanation for this is that software is designed and built with primary reference to where the business was—not where it needs to be. The needs and expectations of customers change. The lay of the market changes. New, agile competitors emerge, unburdened by legacy systems and thinking. Rules and regulations change. The structure of the business itself changes. People come and go. Products are spun up or spun down.
IT has had an especially fraught relationship with change. Prior to the dot-com bust, it wasn’t unusual for IT to dictate the scheduling of system roll-outs, upgrades, and maintenance, along with the pace of new software development. Since then, IT has been under enormous pressure to align its priorities and scheduling with those of the business. We see tangible evidence of this in the growth of cloud, the widespread adoption of virtualization and automation technologies, the ascent of agile programming and project management methodologies, and in a slew of other business-oriented innovations.
Ask one of your business customers, however, and you’ll hear that software still takes too long to build and is too hard to change. Your digital presence is not responsive enough, adaptive enough, or cheap enough. Scaling to meet demand remains difficult, as downtime during periods of peak demand keeps reminding you. For all your progress, you still have trouble delivering new products and lines of business quickly enough. Adapting your digital presence to exploit new business opportunities, new devices, or competitive threats takes so long as to appear impossible.
The good news is we’re learning how to build systems that are more fluid and adaptable. We’re doing that by using the cloud, so we can scale capacity with demand; by automating software infrastructure, so we can deploy new servers and services rapidly and confidently; by decomposing monolithic applications into systems of loosely coupled services that can be extended to support new products, new kinds of users, and new kinds of applications. Above all, we’re moving toward an architecture of data flows that allows us to respond dynamically to changes in demand, to build new applications by recombining services, and to upgrade our services without rebuilding the whole infrastructure.
We’re moving to what we at O’Reilly call Next Architecture. In a sense, we’re already there.
You can’t wish your IT inheritance away
Think of “digital presence” as a different, more continuous and holistic frame for thinking about software development and software architecture—to see your existing IT systems less as an irksome inheritance and more as a bridge to a Next Architecture.
For many organizations, the digital presence resembles an archaeological site of systems, networks, applications, programming languages, databases, tools, and processes, preserving the artifacts and cruft of the priorities, trends, fads, and anachronistic conventional wisdom of different epochs.
These assets and practices aren’t just inseparable from your business; in an essential sense, they—no less than your physical assets—are your business. We are moving away from the notion of application software conceived as the “digital twin” of the business to thinking about digital presence as interpenetrated with business processes, moving beyond supporting or mirroring to an increasingly leading role—i.e., constituting or instantiating the business.
Next Architecture: Right here, right now
There is a well-worn trope that describes software upgrades as akin to changing the tires on a rapidly moving car. To help explain Next Architecture, consider software architecture as a locus of continuous transformation: a car designed to have its tires changed while in motion. A primary tenet of Next Architecture is that software architecture should be adaptable, agile, resilient, and tolerant of change.
The core components and practices that make up Next Architecture—cloud computing, container virtualization and orchestration, microservices, serverless or functions-as-a-service (FaaS) computing—are not only extant but in relatively wide usage, as we find when analyzing behavior on the O’Reilly online learning platform. Moreover, what ties the Next Architecture concept together—decomposition—may be difficult, but it is well understood, widely practiced, and (mostly) uncontroversial.
Let’s briefly explore what decomposition is and why it’s important, as well as describe the technologies that provide the nuts-and-bolts scaffolding for Next Architecture. We plan on publishing a companion article covering these and other issues at greater length. What we’re interested in looking at is how, or to what degree, existing technologies and practices permit Next Architecture to be applied today.
The cornerstone of Next Architecture is the concept of decomposition—i.e., the idea of breaking things into small, loosely coupled components. To get an idea of how this works, think of the hundreds or thousands of pieces in a Lego set. Most pieces are generic, a few perform specific functions, but they can all be connected via a common interface. The Lego kit can be used to build what the set specifies or to create any form or object the designer/architect can imagine. The pieces are reusable, repurposable, and reconfigurable.
That’s the logic of decomposition in a nutshell. The most popular implementation of this concept is microservices—although serverless (otherwise known as functions-as-a-service, or FaaS) architecture is an even more aggressive take on decomposition. With the availability of FaaS-oriented offerings such as AWS Lambda, Google Cloud Functions, IBM Bluemix OpenWhisk, and Azure Functions, serverless computing is a viable (and increasingly compelling) complement to microservices.
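To make the FaaS idea concrete, the unit of deployment in serverless computing is typically nothing more than a handler function. The sketch below follows the general shape of an AWS Lambda Python handler; the event field and greeting logic are hypothetical, for illustration only:

```python
import json

def handler(event, context):
    """A hypothetical FaaS handler: one small, stateless function.

    The platform spawns an ephemeral container to run it on demand
    and tears the container down once the function is idle.
    """
    # "name" is a hypothetical field on the incoming event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }
```

The entire deployable artifact is the function itself; everything else—provisioning, scaling, teardown—is the platform's problem.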
Why is decomposition a, or the, foundational principle of Next Architecture? For the simple reason that it makes it easier to rapidly reconfigure an application or system. For example, adding a new feature to or changing the behavior of an existing application becomes a matter of building one or more new services—or of exploiting already deployed services. This is integral to the flexibility requirement that is at the heart of Next Architecture. It’s what makes it possible to add new capabilities or services without impacting an entire application. The Lego metaphor is apt: decomposition gives you the freedom to customize components or services that you can knit together into a bigger app (or platform) for customers, partners, internal employees, and other consumers.
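The Lego logic can be sketched in a few lines of code. In this hypothetical example (the service names and behaviors are invented for illustration), each "service" is a small, function-specific unit behind a common interface, and a new application is just a new composition of existing pieces:

```python
# Each "service" is a small, single-purpose unit with a common
# interface: it takes a request dict in and returns a dict out.
def authenticate(request):
    # Hypothetical check; a real service would verify credentials.
    return {**request, "user": request.get("token", "anonymous")}

def price_quote(request):
    # Hypothetical pricing logic, isolated from everything else.
    return {**request, "quote": len(request.get("sku", "")) * 10}

def audit_log(request):
    # Stand-in for a discrete event-logging service.
    return {**request, "logged": True}

def compose(*services):
    """Knit services together into a bigger app, Lego-style."""
    def app(request):
        for service in services:
            request = service(request)
        return request
    return app

# Two different applications built from the same reusable pieces:
checkout = compose(authenticate, price_quote, audit_log)
browse = compose(authenticate, audit_log)
```

Adding a feature means writing one more small unit, or recombining the ones already deployed; nothing else has to change.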
Decomposition depends on the ability of an organization to quickly build, test, deploy, and scale software artifacts, whether in the form of microservices, FaaS code, or via some as-yet-undetermined future innovation. But organizations have very different expectations with respect to scalability today than they did in the past. Organizations expect to be able to scale resources (and the software that runs on them) elastically, such that they can spawn or terminate instances of applications, systems, or services as needed. Today, for all practical purposes, this entails the use of three commodity technologies: cloud, containers, and orchestration. The trends we at O’Reilly see regarding the adoption of these technologies provide further evidence that organizations have already embraced many of the key Next Architecture concepts. It’s helpful, however, to keep in mind that each of these technologies is a means of addressing some requirement that is integral to Next Architecture. In some cases—such as how or in which context containers are used, or how or by which means they are managed—the specific details of an enabling technology implementation could change. A technology could be superseded or, at least, augmented. Kubernetes has emerged as the dominant means of orchestrating containers, for example; in the Kubernetes-less FaaS cloud, however, platform-specific services provide orchestration-like capabilities.
To accommodate the scaling required to spin up or down services on demand, the distributed architecture model of the cloud is required. Next Architecture is about building systems that can be more fluid and adaptable. The cloud enables capacity to scale with demand. By decoupling compute, storage, and networking connectivity from one another, cloud infrastructure likewise permits granular capacity scaling. What this means in the context of Next Architecture is that you can put things where you want to put them, quickly gain resources, quickly give up those resources when you don’t need them, and “right size” what you’re doing. Every single part of this is responsive.
Containers are used to facilitate the build phase of Next Architecture. Containers provide a lightweight way to achieve the kind of modularity valorized by decomposition and the cloud. Container technology like Docker makes it easy to automate the deployment of the microservices that are the products of decomposition: all the pieces needed for development and deployment are consolidated into a single tidy package, providing easy access to the development, deployment, and run resources needed to spin up and run software anywhere, portably, on any platform. The emergent serverless computing models make use of containers, too. But whereas container virtualization is platform- or service-agnostic, serverless computing is, for the most part, dominated by platform-as-a-service (PaaS) offerings. In the serverless model, code executes in ephemeral containers that are spawned as needed and terminated (or killed) when they’re no longer needed. In the conventional container model, a management daemon or service (usually Kubernetes) is used to orchestrate the starting and stopping of (as well as the scheduling of dependencies between and among) containers; in a serverless scheme, usually some variant of a cloud-specific service (e.g., AWS Step Functions) is used to provide similar capabilities.
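To make the "single tidy package" concrete, here is a minimal, hypothetical Dockerfile for packaging one decomposed service; the file names and service are illustrative, not prescriptive:

```dockerfile
# Hypothetical packaging for a single Python microservice.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY quote_service.py .
# One container, one service: the unit of deployment
# matches the unit of decomposition.
CMD ["python", "quote_service.py"]
```

The resulting image runs the same way on a laptop, an on-premises cluster, or any cloud, which is precisely the portability the build phase depends on.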
For the deployment phase of Next Architecture, orchestration tools are used to coordinate how the components and services work together. The data we’ve seen regarding interest in or uptake of Kubernetes is especially suggestive. After all, Kubernetes isn’t a tool you take for a casual spin—if you’re trying it out, you’re likely implementing it, not just kicking the tires. In the context of Next Architecture, this is further evidence that companies are starting to build these kinds of distributed systems. This makes sense. Kubernetes is geared toward the large-scale deployment needed for decomposed systems—it can manage not just tens of things or hundreds of things, but tens of thousands of things. Still more evidence comes via the availability of FaaS orchestration services, such as AWS Step Functions, Azure Logic Apps, or Sequences in IBM’s Bluemix OpenWhisk. The demand for orchestration capabilities in the serverless cloud suggests that people are exploiting FaaS for more than one-off uses. In other words, some users are creating the kinds of complex and interdependent flows that are characteristic of applications. Another related sign is a spike in interest in Knative, a Kubernetes-based platform optimized for serverless computing. In speaker proposals for the 2019 O’Reilly Open Source Software Conference, the term “Kubernetes” fell just outside the top 10—at No. 11. The term “serverless,” for the record, cracked the top 25, at No. 21.
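In Kubernetes terms, scaling a decomposed service from tens of instances to tens of thousands is a matter of editing one declared number. This sketch uses the standard Deployment resource; the service name and image are hypothetical:

```yaml
# Hypothetical Deployment: Kubernetes keeps the declared number
# of replicas of this one microservice running, rescheduling
# instances as cluster nodes come and go.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-service
spec:
  replicas: 50        # scale up or down by changing one line
  selector:
    matchLabels:
      app: quote-service
  template:
    metadata:
      labels:
        app: quote-service
    spec:
      containers:
        - name: quote-service
          image: registry.example.com/quote-service:1.4.2
```

The declarative style is the point: you state how many instances should exist, and the orchestrator continuously reconciles reality against that statement.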
We’re already there
Today, many organizations are already thinking and developing software in consonance with Next Architecture’s foundational priorities and principles, even if they are not consciously (or conscientiously) “doing” Next Architecture. For example, in embracing the logic of continuity as distinct from that of discontinuity—e.g., continuity in software development, deployment and/or delivery, integration; continuity between roles or personae, with developers taking on ops-related roles, and vice versa—they’re laying a foundation for Next Architecture. In adopting software development practices and methods that emphasize decomposition, they’re producing software that is highly tolerant of change. In decomposing core infrastructure services into function-specific services—such that a monolithic security/access control service is decomposed into the discrete authentication, authorization, event-logging, etc., functions that constitute it—they’re designing flexible, adaptable infrastructure software, too. In designing function-specific microservices that are optimized to address one or more customer-focused metrics or priorities—e.g., services that meet thresholds with respect to response time, availability, reliability, performance, etc.—they’re addressing both the essential purpose (supporting and enabling essential business processes) and the overriding goal (customer satisfaction) of software architecture. And in emphasizing these and similar priorities, organizations are not only employing techniques and practices associated with microservices architecture, they’re designing and developing in consonance with (if not explicitly in the style of) serverless architecture, too.
There’s more. In incorporating disruption, perturbation, failure, and sometimes even catastrophe into software development and operations, as with the Chaos Engineering approach pioneered (and formally instantiated) by Netflix, organizations have embraced and co-opted the uncertainty and unpredictability that are the most unwelcome aspects of change. In building tools to simulate failure, downtime, data loss, etc., they’re proactively identifying previously unknown or unanticipated network and/or cascading effects. And in practicing the previously unthinkable—namely, testing code in production—they’re developing a capacity to respond to and contain ever more esoteric, improbable, or wholly chaotic modes of failure.
In developing their decomposed microservices in accordance with standardized patterns—be they high-level patterns for decomposing tasks (e.g., decomposition based on business capabilities, subdomain, etc.) or highly granular patterns, such as those associated with observability—they’re likewise laying the groundwork for serverless computing. These and other practices permit flexibility and encourage adaptability. A microservices-based application can more easily be refactored to run in a serverless context than can a traditional monolithic application. It’s a matter of changing the tires, so to speak; worst case, it might entail replacing the motor, transmission, and drivetrain, improving the braking, or optimizing the fuel injection system. In any case, you wouldn’t have to do all of this at the same time. The concept of accommodating change on an incremental basis is core to the adaptability requirement of Next Architecture. It’s a lot like changing your oil and rotating your tires as needed, replacing your brake pads as necessary, and servicing your motor or transmission proactively.
In Next Architecture, each of these things is possible and self-serviceable. In conventional software architecture, you’d be buying a new car. And you’d have to pay someone to tow away your old one, too.
Most of Next Architecture’s foundational principles are well known, well understood, and, for the most part, widely practiced. The logic of decomposition, for example, descends from the practice of separating an application’s programming or business logic from its user interface elements—or, rather, its presentation layer as such. This problem first bubbled to the surface in the 1970s when minicomputers began to share space with mainframes, and really came to the fore in the 1980s, thanks to the explosion of PCs and the shift to client-server computing. We’ve known for at least 40 years that separating a program’s logic bits from its presentation bits makes it a lot easier to port from one model or context to another (or to retrofit for new uses). The idea of separating programming and business logic from data access is likewise uncontroversial. The logic of decomposition (if not the idea itself) is implicit in these practices. And the logic of decomposition has been an explicit theme in software development for two decades or more: e.g., it was integral to service-oriented architecture. These and other ideas have been selected for and refined over the course of 50 years of software development. At this point, we’ve got them down to a science.
Next Architecture is open for business, too
Some of the business benefits of Next Architecture might seem obvious. For example, a software architecture that is flexible and tolerant of change is, by definition, cheaper to build and maintain. Notionally, Next Architecture gives organizations a means to manage and control costs at a granular level, too. If you deploy only the components and services that are actually being used, you pay only for the human, IT, utility, and other resources they actually consume. You have the capacity to respond to dynamic conditions by spinning up only the resources required to handle any given load at any given time. You likewise gain development efficiencies, starting with increased speed of development. Developing in small bits is much faster and requires smaller teams than developing in a monolithic system, where unanticipated (or previously unknown) dependencies and side effects can cause delays.
We need to qualify “maintaining” software in the context of Next Architecture. If an application’s functions are abstracted at a granular enough level (as with a microservices-based or serverless architecture), it can be easier—and cheaper—to rewrite function-specific services than to maintain them. Given limitless time and money, almost any software or software architecture could be adapted to change, although not easily; lacking both, most organizations instead allocate substantial portions of their IT budgets to maintaining legacy applications. Next Architecture heralds a different, better approach.
Reducing or eliminating costs is important. If nothing else, money that we don’t spend on application maintenance is money we can use to promote other (value-creating) activities. But we aren’t doing Next Architecture simply to cut costs; cost savings are a byproduct (or spandrel, if you like) of Next Architecture. The primary reason an organization would plan to build in consonance with Next Architecture is to position itself to thrive in the next decade and beyond. Adaptability and maximizing potential, not complacency, are the defining habits of highly successful organizations. To adapt, we’ve had to invent a new way of thinking about and building software. Even if it isn’t quite fully realized, critical pieces of it are—and we can see its other bits coming into focus, too.
Next Architecture is likewise premised on the understanding that software (and software architecture) is not merely a digital twin of your business—i.e., its virtual complement or mirror—but is, in an essential sense, your business. This shift in thinking is embodied in the frame of digital presence.
Again, you already have a digital presence—whether you think of it that way or not. Your digital presence has its being in the miscibility of your IT systems and resources with your business operations. These systems do not simply “complement” or “enable” core business operations and their constitutive processes. Increasingly, and even for companies with huge material presences in the physical world, their systems are inseparable from core business operations and processes. Sometimes, new business processes and operations will emerge solely out of your digital presence. In some of these cases, their instantiation or realization in the physical world will be optional—perhaps even moot.
There are already indications that the concepts, technologies, and practices that underpin Next Architecture will see strong uptake in precisely these miscible or “digital-first” scenarios. For example, the subsidiary of a car maker building a cloud service for connecting cars—in anticipation of expected disruption in the automobile industry—told us they’d cut development cycles for some applications from a year to just a few weeks thanks to their embrace of a microservices architecture.
Let’s briefly consider a few of the specific business benefits of using the concepts, technologies, and practices of Next Architecture to build software:
- Feature agility — You can quickly develop features and add them to your digital presence.
- Feature flexibility — You can use a simpler process to configure different feature sets as well as more easily customize them to serve different user populations.
- Feature testing — You can test the popularity, usefulness, and usability of new services by rapidly making them available to random samples of likely users.
- Scaling — You can quickly and easily add new servers to support popular and/or intermittently busy services and features. Similarly, you can quickly/easily exploit low-priced commodity compute capacity (e.g., spot capacity) as it becomes available to address the same use case.
- Resource efficiency, retention, and recruiting — Next Architecture gives you the flexibility to develop software in whatever language and on whatever platform makes sense—up to and including COBOL. Notionally, flexibility of this kind should boost retention as well as increase the pool of software engineers willing and able to work on your digital presence projects.
One important programming-related note: in Next Architecture, it’s important to pick the right language, tools, and methods for the task. If a team of programmers likes working in Rust—and if Rust is a good fit for a proposed service’s or feature’s requirements—let them code in Rust. There’s an obvious caveat to this, of course: you wouldn’t want to build a highly parallelized, performance-dependent microservice or function in a language like Ruby, after all. Or, to take another example, if you’re a large enough financial institution, you likely still have at least one IBM mainframe—and you’re likely still running CICS, DB2 on the mainframe, etc. If you want to incorporate certain features or functions of your CICS applications into a microservice architecture, COBOL is an option—along with Java, C, C++, Python, and any language that compiles on Linux, for that matter.
Next Architecture enforces no hard-and-fast requirements with respect to which languages/platforms should or shouldn’t be used. A software engineer has the freedom to select the tools that best support the requirements she needs to address: e.g., she might choose to work in Go not only because she likes it, but because the language and its ecosystem of tools are flexible and powerful. As an added bonus, Go compiles extremely quickly, permitting her team to accelerate its build cycles. This lets them quickly improve existing features—or deliver new ones—as needed.
Next Architecture challenges
While the data shows many organizations directionally adopting Next Architecture, these organizations face challenges that span training and hiring, culture, distributed data integrity, new cost regimes, migration, managing complexity, and decomposition itself.
The tools that make up Next Architecture are new enough—and in the case of Kubernetes, evolving quickly enough—that architects and software developers are compelled to keep up with changes and related tools in the Kubernetes ecosystem (e.g., Istio for implementing the service mesh that binds and monitors microservices).
From our own surveys, we know that most organizations plan on retraining staff to cover the technologies and tools that make up Next Architecture. And that doesn’t include the learning best gleaned from experience (whether gained internally or gathered from others) on how to make decomposition work in your context—i.e., how to successfully architect and design services that coordinate well, that maintain data integrity across distributed compute and storage resources, that are self-contained enough to limit side effects, that scale, that don’t become too complex to manage, and so on.
Managing complex, distributed systems consisting of thousands of services requires a more sophisticated and nuanced approach to monitoring and managing alerts. Tracking alerts from thousands of distributed services may stretch the bounds of human cognition. We expect many organizations will require machine learning-based models to augment the human interface that monitors operations.
For organizations addressing the move from legacy monolith architectures, the migration to Next Architecture requires a close look at what makes sense, both from a technology and cost perspective, to cleave off as microservices that can be deployed in the cloud. In a recent survey, about 45% of organizations responding identified migrating monoliths as a significant challenge—trailing only lack of skills on the list of top challenges.
While we mention the benefits of serverless as a next step in Next Architecture, covering the details of serverless is beyond the scope of this report. We can say that serverless presents issues around implementation, latency, and cost management that the community of vendors, software architects, and software developers continue to address.
Next Architecture is consonant with the way organizations want and need to run—a process for staying relevant by making adaptability primary, creating the space for efficiency, scale, and creativity.
Next Architecture harnesses containers, service orchestration (via Kubernetes, Swarm, or similar platforms), and other commodity technologies in what could nominally be described as a microservice architecture. Its foundational principle, decomposition, describes a method of breaking large tasks into highly granular, function-specific units. These units (which are typically, but not invariably, instantiated as microservices) can be combined like Lego bricks to form highly complex assemblages.
Next Architecture’s overarching goals are two-fold: first, it aims to deliver an improved overall service experience for customers; second, it aims to produce software architecture that is more flexible and resilient—in other words, that has a capacity for adaptability. With regard to this second goal, Next Architecture is not coupled to any specific architectural regime. Today, adopters are using AWS Lambda, Azure Functions, and other FaaS services to complement their microservice-based development efforts with serverless capabilities. These and other architectural regimes (e.g., service mesh) work by decomposing the kinds of tasks that, in a monolithic design, used to be performed by a single large application. As small, function-specific Lego-like bricks, they’re easier to design, build, improve, and maintain. Ideally, in fact, they aren’t maintained: if a microservice is basic enough, it can easily be rewritten.
Next Architecture is scalable and resilient, too. Another advantage of decomposition is that the function-specific units it produces can be spawned or terminated as needed, and at a scale—to the tune of tens or hundreds of thousands, even millions of instances—that would otherwise be impracticable, if not unimaginable. These and other characteristics lend themselves to superior cost efficiencies. Decomposition also has other benefits. For example, you can build your microservices (or functions, or granular features) using the best or most appropriate tool set. If productivity is what matters most, by all means, build the service in the development environment you and your staff are most comfortable with. However, if performance is paramount, build in a development environment that supports faster runtimes and more efficient processing.
As with any change, there will be costs. Next Architecture will require new skills specific to Docker, Kubernetes, and other enabling technologies. It will produce greater, not lesser, complexity, particularly from the perspective of IT operations. It will require software and processes suitable for distributed data management—ACID-like transactional integrity is an especially difficult problem in this context—as well as autonomic features that, unlike most of the other technologies we’ve mentioned, are still very primitive. Yet once the skills are honed and the complexity tamed, Next Architecture will do what good software architecture should: get out of the way. We’re used to software architecture being conspicuous precisely because it’s irksome or infuriating: it’s brittle and inflexible, it forces us to do things a certain way. In Next Architecture, by contrast, inconspicuous is a feature, not a bug.
Nascent or no, Next Architecture is a paradigm shift that’s underway. Expect to see a new layer of expectations around what software and software architecture can deliver—both from the customer’s perspective and from the perspectives of the people charged with deploying new products and services. Already, user expectations are shifting in this direction. The principles and priorities of Next Architecture align with how users want and expect their products and services to behave.
The bar has been raised. Soon, everyone will be forced to adjust their standards accordingly. And, rest assured, if you don’t adjust, someone you compete with will.