Quick question – what is your confidence that your next release will ship without bugs? If you are like the rest of us, you have invested heavily in CI/CD tools to automate the entire software development lifecycle for one reason: speed. In today’s CX-first world, your customers demand new features released monthly… weekly… daily. And yet, when the moment arrives to push a new release into production, confidence in quality is at an all-time low.
We have all experienced this. Over half of all development teams today encounter production issues* each month, preventing teams from realizing the potential that DevOps promises. Why is this?
Golden user paths don’t reflect reality
As is often the case, testing starts with translating stories and acceptance criteria into assumed flows that users will take. These ‘golden’ paths are hypothetical and don’t reflect the circular nature of how an application is actually used.
At some point, leadership is informed of the current state of the release, usually an assessment of its quality and readiness to be deployed to production. This is a completely subjective analysis, even if the team tracks different levels of coverage across requirements, code, tests, etc. We can touch every requirement and every line of code as we run our tests, but that doesn't mean we've exercised them in all the permutations the developers coded in. Bugs can still be lurking. We can never test everything, even with unlimited time and resources. So the team decides to deploy to production and hope for the best.
The result? Real user paths go untested, and firefighting erupts
We know what happens, right? Customers find bugs that leaked from the previous testing efforts, impacting the CX and, therefore, the company's business metrics (e.g. retention, revenue, NPS, etc.).
… And firefighting begins! The team stops all work on the new release to troubleshoot the bugs users found in production at the highest priority. That means developers are not using their time to build new features or bring enhancements to their users. Instead, they face massive context-switching and the high pressure of patching code at speed to keep users going in production. This not only slows down the next release, it also adds technical debt that hopefully gets addressed later… you know… at some point.
Endless break-fix insanity ensues
Once the fix is in, in a totally reactive manner, the team creates new tests to make sure the functionality that broke doesn't regress in subsequent releases. Those new tests are added to the regression testing suite, which keeps expanding to the point where it becomes too big to run quickly and give developers the immediate feedback they need to mark a user story as done. Eventually, someone starts removing tests from the suite so it runs faster, with complete disregard for the CX, i.e. which user flows truly matter in production and must always be covered by the regression testing suite. And the cycle starts all over again…
Ever since software testing became a formal practice many decades ago, we have been giving a thumbs up on the quality of applications based on the passing status of a set of arbitrary tests created by a person or group of people with (adequate) knowledge of the application and the business domain. But ask a different person or group with similar domain knowledge and skills, and they'll come up with a different list of tests. So which tests are better? Don't get us started on test coverage, requirements coverage, code coverage… They're all great metrics for understanding how effective your teams are in their software testing process, but not for understanding the software quality of your app. See the difference? Testing is an activity. Quality is a customer-verifiable outcome.
What if there was a new, customer experience-led approach to understanding how users navigate through the app so that we can make sure those critical business flows are properly covered in the next release of the app? That way, we could ensure there are no regressions on the CX for those users in the next release and, therefore, no business impact. Sounds obvious when you say it out loud, right?
Relicx makes continuous deployment possible.
--- Electric AI
Relicx has brought together the world of Observability and Software Testing to catch, debug and fix CX-related app regressions before each release. The best part? You don’t have to spend time guessing the best tests to be created for the sprint or worry about automating them to run on the CI/CD pipeline.
How the Relicx magic works
Dev teams are expected to operate at an ever-increasing pace due to tremendous business pressure to stay ahead of the competition by delivering an outstanding customer experience. Embracing DevOps has enabled unprecedented software delivery speed for the dev team. Unfortunately, software quality, and therefore CX, has suffered. The evolution of development practices keeps leapfrogging software quality practices, which have remained basically the same for the past several decades.
So what can dev teams do to avoid being slowed down by outdated software quality practices? Lean more on observability to ensure application quality. As you push your code to production, keep observing how the different layers of your tech stack behave, as you already do today, and then apply the CX lens to understand how your users in production interact with your application.
The traditional way of thinking about app quality is subjective and outdated for a cloud-native world. Advanced dev teams accept that what truly matters, what will force them to roll back code from production, is whether the Customer Experience is impacted. Users can live with minor bugs that don't impact their CX. So if the CX is good, quality is good. Relicx gives you both CX quality AND speed. Confidence regained!
Want to learn more?