Adam was a seasoned developer at Initech Ltd., well-respected in the company and a veteran of many projects in his time. One morning, he received an email from his boss, Ron. Ron had just invested in a new rapid development platform, touted by its creators as a trailblazing tool that promised to deliver at 10x the speed of traditional development platforms.
Ron's email concerned Adam's estimate for their upcoming multisite project. Ron had been taken aback by the figures Adam provided. “Why so high, Adam? With the new platform, shouldn’t these estimates be significantly lower?” Ron questioned.
Adam knew it was time for an important conversation about the true essence of rapid delivery. He explained, “While our new platform indeed has a lot to offer, speed of delivery isn't only about technology. A major part of our success hinges on how we work, and how we manage complexity using good software engineering practices. Aspects like cohesion, coupling, separation of concerns, modularity, and abstraction don't disappear just because we have a new high-productivity platform. On the contrary, they become even more significant as we may have less control over them.”
“Writing code is only a small part of software delivery. Rapid delivery sounds great, but racing at full speed in the wrong direction only benefits our competitors.”
Ron wasn't entirely convinced. So, Adam pulled out the big guns: Fred Brooks' famous essay, “No Silver Bullet” – the lessons of which are just as relevant now as they were in the 1980s. Brooks drew a distinction between 'essential complexity' and 'accidental complexity'.
“Ron,” Adam began, “According to Brooks, 'essential complexity' is inherent to the problem we're solving. In our case, it's the complexity of building a multisite system that caters to diverse regional needs. On the other hand, 'accidental complexity' is the complexity that we introduce as a result of the solutions we choose – our technology choices, for instance.”
“No high-speed platform can rid us of this essential complexity; it's simply part of the problem we need to solve. Our new platform may help us manage the accidental complexity – the technical overheads of our solution. But much of the work, and therefore our estimates, comes from dealing with the essential complexity. We have to navigate uncertainty through experimentation and feedback, and manage complexity through solid engineering practices, so that we can course-correct towards our goal as we learn more about the real problems we’re trying to solve.”
Ron suddenly looked a little pale. Adam, sensing his boss’s despair, made the save: “Don’t worry, Ron, there’s an approach we’ve used before that’s just as applicable to multisite development, and it’s all geared towards fast, frequent releases. In fact, the largest scientific study ever conducted on software development correlates it strongly with the highest-performing teams in the industry – teams able to release into production many times per day.”
“I bet it has a name I won’t remember,” Ron returned.
Jez Humble and Dave Farley, in their influential book of the same name, define Continuous Delivery (CD) as a software engineering approach in which teams produce software in short cycles, ensuring that it can be reliably released at any time. The ultimate goal is to build, test, and release software faster and more frequently – and to gather the empirical evidence that validates our assumptions about what users actually want from the software. In other words, are we heading closer to our goal, or further away from it?
CD recognises that software development is a process of discovery and learning – that not all the answers are known upfront. The best way to find those answers is to put the software into users’ hands as frequently as possible and gather their feedback.
The fundamental principle of Continuous Delivery is therefore to create a repeatable, reliable and incrementally improving process for taking software from concept to customer. It leverages automation across integration, testing, and deployment, thereby reducing risk and enabling faster feedback cycles.
The Deployment Pipeline is a core concept in Continuous Delivery. It represents the path that changes (new features, bug fixes, experiments) take from the developer's local environment to production. The main goal of the Deployment Pipeline is to give everyone involved – developers, testers, and managers – visibility into the flow and health of the system.
Each change runs through a series of validations, both automated and manual. These validations aim to find any defects or deviations from expected behaviour. They can range from automated unit tests and integration tests to exploratory testing and user acceptance testing. The purpose is to catch issues early when they're the easiest and least expensive to fix.
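To make that concrete, here is a minimal sketch of the kind of fast, automated check that runs at the very start of the pipeline. The pricing function and its tests are purely hypothetical; the point is that a defect like this is caught within seconds of a commit rather than weeks later.

```python
# test_pricing.py -- an illustrative commit-stage unit test (hypothetical example).
# Fast, isolated checks like these run on every commit, ahead of the slower
# integration, acceptance, and exploratory testing further down the pipeline.

def regional_price(base_price: float, tax_rate: float) -> float:
    """Apply a region-specific tax rate to a base price (hypothetical domain logic)."""
    return round(base_price * (1 + tax_rate), 2)

def test_regional_price_applies_tax():
    # A defect here fails the build seconds after the commit,
    # long before the change reaches acceptance testing or production.
    assert regional_price(100.0, 0.20) == 120.0

def test_regional_price_rounds_to_whole_cents():
    assert regional_price(19.99, 0.175) == 23.49
```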
The deployment pipeline consists of several stages, each of which increases confidence in the software's health and readiness for release. A typical pipeline includes a commit stage, acceptance stage, and capacity stage, each verifying the software's health at different levels of detail and scope. Successful completion of all stages allows the software to be released to production at any time.
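As an illustration, here is a deliberately simplified sketch of that stage-gating idea. Real pipelines are usually defined in a CI/CD tool's configuration rather than hand-rolled like this, and the scripts referenced below are placeholders, but the structure – commit, acceptance, and capacity stages, each gating the next – follows the description above.

```python
# pipeline.py -- a minimal, illustrative sketch of stage gating in a deployment
# pipeline. The stage names mirror the typical structure described above; the
# scripts they invoke are placeholders for your own build and test automation.
import subprocess
import sys

# Each stage is a name plus the command that verifies it.
STAGES = [
    ("commit",     ["./scripts/run_unit_tests.sh"]),        # fast feedback on every commit
    ("acceptance", ["./scripts/run_acceptance_tests.sh"]),   # automated, business-facing tests
    ("capacity",   ["./scripts/run_capacity_tests.sh"]),     # behaviour under expected load
]

def run_pipeline() -> bool:
    """Run each stage in order; the first failure stops the pipeline."""
    for name, command in STAGES:
        print(f"--- {name} stage ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"{name} stage failed; this release candidate is rejected.")
            return False
    print("All stages passed; the build is releasable at any time.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_pipeline() else 1)
```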
In the context of a multisite development project with multiple teams involved, Continuous Delivery becomes a bit more complex yet even more valuable. Each team might work on a separate part of the system, whether a microservice, module, or feature. As such, each team may operate its own Deployment Pipeline.
This scenario is common in microservices architectures, where each service can be developed, tested, and deployed independently. Having separate Deployment Pipelines means each team can release their work independently at their own pace, without waiting for other teams. It also means we can deploy a fix or feature to one part of the system without having to deploy everything else.
This approach requires careful design of architectural boundaries to avoid dependencies that slow down delivery. Each team should be able to change its part of the system without requiring changes to other parts. The architecture should favour loosely coupled but highly cohesive modules.
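One way to keep those boundaries honest is for each team to publish a small, explicit interface and for other teams to depend only on that interface, never on internals. The sketch below is a hypothetical Python example – the catalogue and checkout names are invented purely for illustration.

```python
# An illustrative boundary between two teams' modules (all names hypothetical).
# The catalogue team publishes a narrow contract; the checkout team depends on
# that contract alone, not on the catalogue's internal models or database.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class ProductSummary:
    """The only shape of product data the checkout side is allowed to see."""
    sku: str
    display_name: str
    unit_price: float

class CatalogueClient(ABC):
    """The published contract of the catalogue module or service."""
    @abstractmethod
    def get_product(self, sku: str) -> ProductSummary: ...

class CheckoutService:
    """Checkout depends on the contract, not on how the catalogue is implemented."""
    def __init__(self, catalogue: CatalogueClient):
        self._catalogue = catalogue

    def line_total(self, sku: str, quantity: int) -> float:
        product = self._catalogue.get_product(sku)
        return product.unit_price * quantity
```

Either team can now change, or even redeploy, its side independently, as long as the published contract still holds.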
With multiple pipelines in place, it becomes critical to manage the interdependencies and interactions between different parts of the system. Comprehensive automated testing at the integration and system levels is vital here. Robust monitoring and observability practices also help ensure that the system as a whole behaves as expected and that any issues can be quickly identified and fixed.
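Even a very small automated check, run after each deployment, can surface integration problems within minutes. The sketch below assumes a hypothetical internal health endpoint that returns a JSON body containing a 'status' field; the URL and payload are inventions for illustration.

```python
# smoke_check.py -- an illustrative post-deployment smoke check (hypothetical URL).
# It verifies that a dependent service answers its health endpoint, so an
# integration problem surfaces minutes after a deploy rather than days later.
import json
import sys
from urllib.request import urlopen

HEALTH_URL = "https://orders.internal.example.com/health"  # placeholder endpoint

def service_is_healthy(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True if the service responds with HTTP 200 and a status of "ok"."""
    try:
        with urlopen(url, timeout=timeout) as response:
            body = json.loads(response.read().decode("utf-8"))
            return response.status == 200 and body.get("status") == "ok"
    except (OSError, ValueError):
        # Connection failures, timeouts, HTTP errors and malformed JSON
        # all count as "not healthy".
        return False

if __name__ == "__main__":
    sys.exit(0 if service_is_healthy() else 1)
```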
Multisite teams can greatly enhance their chances of success through Domain-Driven Design (DDD), a strategy that shapes software around the structure of the business. It identifies 'subdomains' within the business, each representing a distinct area of business capability. These subdomains map to 'bounded contexts' in the software, each handling a particular capability with its own model and language. This approach reduces coupling between parts of the system, allowing teams to work independently and swiftly – each with their own Deployment Pipeline. Additionally, DDD provides a clear, shared understanding of the business, which is crucial for managing the software's inherent complexity.
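In practice, those bounded contexts tend to show up directly in the shape of the codebase and its pipelines. The sketch below – with invented catalogue, ordering, and fulfilment contexts – shows one way that might look, with each context exposing only a small published language to the others.

```python
# A minimal, hypothetical sketch of bounded contexts mapped onto a codebase,
# each owned by one team and released through its own Deployment Pipeline:
#
#   src/
#     catalogue/     # "Product Catalogue" context -- team A, pipeline A
#     ordering/      # "Ordering" context          -- team B, pipeline B
#     fulfilment/    # "Fulfilment" context        -- team C, pipeline C
#
# Rather than sharing internal models, each context publishes a small, explicit
# language for others to consume. The Ordering context, for example, might
# publish nothing more than an event like this:

from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class OrderPlaced:
    """Event published by the Ordering context; the only detail other contexts see."""
    order_id: str
    sku: str
    quantity: int
    placed_at: datetime
```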
None of this is easy, and it’s where good software engineering practices come into play – something that panacea-peddling rapid delivery product vendors will rarely surface in their sales pitches!
It may not be immediately apparent, but Continuous Delivery is highly complementary to approaches like Scrum, Agile, and Extreme Programming (XP).
Scrum emphasizes delivering potentially shippable product increments at the end of each sprint. However, it doesn't dictate when to release these increments to production. CD fills this gap by ensuring every change, from new features to bug fixes, is releasable to production at any time. This allows Scrum teams to deploy changes more often, aligning with the agile principle of delivering working software frequently.
Agile development values collaboration, adaptive planning, and early delivery, underpinned by continuous improvement. CD fits squarely within this mindset: by emphasizing the quick, safe, and sustainable deployment of software, it enables the rapid response to change and the frequent delivery of working software that agile principles advocate.
XP focuses on producing high-quality software and improving the development team's quality of life. A key practice of XP is Continuous Integration, requiring frequent integration of work. CD builds upon this by ensuring the system remains in a releasable state after any change, emphasizing automated testing and deployment to guarantee quality. This aligns perfectly with XP's commitment to quality and adaptability.
CD practices deliver value irrespective of the development methodology in use, facilitating the rapid and reliable release of software, providing faster feedback, and enabling swift response to change.
Rapid delivery, as our developer Adam rightly points out, is not about blindly speeding up software delivery with flashy tools. Instead, it's about moving swiftly and precisely in the right direction, continuously re-evaluating the software through multiple dimensions of quality. Continuous Delivery presents a proven methodology to do just that. It redefines the perception of speed in software delivery, advocating for a blend of automation, robust testing, and a learning culture.
Alongside these practices, the timeless software engineering principles of cohesion, coupling, separation of concerns, modularity, and abstraction remain the building blocks of successful software delivery, irrespective of the tools or platforms in use. In the world of multisite systems, where essential complexity is the norm, the principles and practices of Continuous Delivery, coupled with these engineering fundamentals, are more relevant than ever.
Ultimately, whether we're dealing with a monolithic application or a distributed system composed of microservices, the real challenge isn't just about choosing the right technology. It's about managing complexity and risk while continuously delivering value to users. It's about a way of working that embraces change and learns from feedback. And for this, there's no better approach than Continuous Delivery.
If you are interested in discussing Rapid Application Delivery or any of our digital consulting services, please do not hesitate to contact our team today – we would be more than happy to assist.