The Importance of Application Decomposition in App Performance Testing
Discover faster, more efficient performance monitoring with an enterprise APM product that learns from your apps. Take the AppDynamics APM Guided Tour!
Summary
Learn how to maximize the quality of your metric data by breaking down your app and applying proper instrumentation.
DevOps is changing the way companies develop and maintain software. By embedding operations engineers into software development teams, companies are reducing the average number of days from code completion to live production and eliminating wait time and rework, among other benefits. But as I pointed out in my previous post, “Performance Testing in a DevOps World,” performance testing remains the weakest link in the process.
Along with continuous integration and continuous delivery (CI/CD), companies need to practice continuous performance testing to fully realize the benefits of DevOps. Proper instrumentation is crucial to ensuring you are collecting rock-solid metric data, and a repeatable process for collecting it will benefit you regardless of how you go about finding correlations. The old adage “garbage in, garbage out” still applies and always will.
My process starts with what I like to call “Application Decomposition.” In essence, I want to understand all the dimensions or attributes of each transaction in an application or service. Most, if not all, modern APM software like AppDynamics uses bytecode injection to tag and trace transactions from known entry points to exit points in code. In this way, a transaction comes into an API, or starts on a particular class/method, is traced across threads, and exits with a header tag that the next service running an agent can correlate.
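The AppDynamics agent performs this tagging automatically through bytecode injection, so you never write this plumbing yourself, but the mechanism is easy to picture. The sketch below is purely conceptual: it assumes a hypothetical X-Correlation-Id header standing in for the agent’s proprietary correlation tag, and a Servlet 4.0+ API.

```java
import java.io.IOException;
import java.util.UUID;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Conceptual illustration only. An APM agent injects and propagates its own
// correlation tag; "X-Correlation-Id" is a hypothetical stand-in.
// Assumes Servlet API 4.0+, where Filter.init/destroy have default implementations.
public class CorrelationFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // Reuse the tag from the upstream caller, or start a new trace here.
        String correlationId = request.getHeader("X-Correlation-Id");
        if (correlationId == null || correlationId.isEmpty()) {
            correlationId = UUID.randomUUID().toString();
        }

        // Echo the tag outward so the next service running an agent can
        // stitch its segment of the transaction onto this one.
        response.setHeader("X-Correlation-Id", correlationId);
        chain.doFilter(req, res);
    }
}
```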
Where the bytecode injection is configured matters a great deal, as does defining the transactions to the proper degree of uniqueness. To illustrate this, let us assume we have an API that offers an endpoint for a ShoppingCart.
Now, let us assume that this ShoppingCart endpoint can do a few things depending on the HTTP method used (e.g., GET, PUT, POST). Each of these methods invokes a different path through the code. To decompose the app, we want to capture each of them as its own transaction by monitoring the servlet and splitting on the HTTP method.
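For illustration, here is a minimal, hypothetical ShoppingCartServlet in which each HTTP method walks a different code path; splitting the business transaction on the HTTP method keeps those paths from being averaged together under a single name. The class and its stubbed-out helper methods are assumptions, not code from a real app.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical endpoint: each HTTP verb exercises a different code path,
// so each should be captured as its own business transaction.
public class ShoppingCartServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Read path: fetch the cart, typically cache- or database-bound.
        resp.getWriter().write(loadCart(req.getParameter("cartId")));
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Write path: add an item, touches inventory and pricing.
        addItem(req.getParameter("cartId"), req.getParameter("sku"));
        resp.setStatus(HttpServletResponse.SC_CREATED);
    }

    @Override
    protected void doPut(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Update path: change quantities, recalculates totals.
        updateQuantity(req.getParameter("cartId"), req.getParameter("sku"),
                Integer.parseInt(req.getParameter("qty")));
    }

    // Stubs standing in for the real persistence and service calls.
    private String loadCart(String cartId) { return "{\"cartId\":\"" + cartId + "\"}"; }
    private void addItem(String cartId, String sku) { /* ... */ }
    private void updateQuantity(String cartId, String sku, int qty) { /* ... */ }
}
```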
Once you have configured your transactions, you will want to call the endpoint with each of the HTTP methods and preview to make sure everything is working right. Once this is done, save the settings and start some load. Enabling developer mode will ensure you are capturing snapshots so you can analyze the call stack and understand how the application ticks. Make note of how much time is spent in data transformation, on backend calls, in the database, on CPU, and on disk (if applicable). What are its dependencies? And so on. App decomposition is about understanding the fundamental behavior of what an app or service does, and how it does it. I can’t tell you how many times I have done this and found opportunities to optimize an app without ever driving load. Time spent doing this is never wasted.
Evaluating how you derive metrics is a crucial step in building or improving your performance engineering program. While the strategy does change depending on the framework in question, the principles are the same. You will want to develop a strategy for monitoring the mission-critical code paths, so that like transactions are bucketed and baselined under the same business transaction names. If you combine too many different transactions (or fail to split them properly), your baselines will reflect the differences in the characteristics of the transactions and the rate at which they are being processed rather than changes in system performance. While you are at it, make sure you are naming these transactions in a way that will have meaning not only to your DevOps engineers, but to the business as well.
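To sketch the naming idea, assume a simple rule keyed on the HTTP method, with labels that read well on a business dashboard (the names below are invented for illustration; in AppDynamics this lives in transaction-detection configuration rather than application code):

```java
import java.util.Map;

// Illustrative only: business-friendly names for the ShoppingCart endpoint,
// keyed on HTTP method so like transactions share a baseline. The names are
// hypothetical; in practice this is APM configuration, not app code.
public final class CartTransactionNames {

    private static final Map<String, String> BY_METHOD = Map.of(
            "GET",    "Cart - View",
            "POST",   "Cart - Add Item",
            "PUT",    "Cart - Update Quantity",
            "DELETE", "Cart - Remove Item");

    public static String nameFor(String httpMethod) {
        // Anything unmapped falls into a catch-all bucket so it stays visible.
        return BY_METHOD.getOrDefault(httpMethod.toUpperCase(), "Cart - Other");
    }

    private CartTransactionNames() { }
}
```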
When decomposing apps for my customers, I prefer to script up a load generator that runs a single virtual user through each of the transactions, over and over in a loop. Ideally I do this in a quiet environment where nothing else is happening, mostly because I am after a nice, clean dataset that I can dive into and understand. I capture all the data on the request and response because I primarily want to understand whether changes in any of those attribute values result in significant changes in response time or resource consumption. If they do, chances are we are invoking a different code path, and I want to break it out into its own transaction.
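A minimal sketch of that single-virtual-user loop, using Java’s built-in HttpClient against the hypothetical ShoppingCart endpoint; the URL, payloads, and iteration count are placeholders, and a real run would write the captured attributes somewhere more durable than stdout:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

// One "virtual user" walking each ShoppingCart transaction in a loop, in a
// quiet environment, to produce a clean per-transaction dataset.
// Endpoint, payloads, and loop count are hypothetical placeholders.
public class SingleUserDecompositionLoop {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();

        URI cart = URI.create("http://localhost:8080/api/cart?cartId=42&sku=SKU-1&qty=2");
        List<HttpRequest> transactions = List.of(
                HttpRequest.newBuilder(cart).GET().build(),
                HttpRequest.newBuilder(cart)
                        .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"SKU-1\"}")).build(),
                HttpRequest.newBuilder(cart)
                        .PUT(HttpRequest.BodyPublishers.ofString("{\"sku\":\"SKU-1\",\"qty\":2}")).build());

        for (int i = 0; i < 500; i++) {              // same user, over and over
            for (HttpRequest request : transactions) {
                long start = System.nanoTime();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                // Record request attributes next to response time so changes in
                // attribute values can be correlated with response-time shifts.
                System.out.printf("%s %s -> %d in %d ms%n",
                        request.method(), request.uri().getPath(),
                        response.statusCode(), elapsedMs);
            }
        }
    }
}
```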
If you have decomposed the app properly, the performance profile of a business transaction should be deterministic as you progress into driving load. That is, you should see a response-time hockey stick in your chart data, not the Rocky Mountains. If the response time varies greatly, and seemingly for no reason, that can indicate something is wrong with the system or the test harness, or that the app is improperly decomposed. There are edge cases where apps are simply not well written and don’t align with sound performance engineering practices. Nondeterministic interfaces are more common in legacy code, and sometimes it is not easy to get predictable response times based on the data present in headers or request payloads. In these cases, we may elect to add logic that derives metrics from the code being executed, either through Data Collectors or through the use of our Agent SDKs.
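When the entry point alone does not discriminate the code path, the split key has to come from the payload itself. The sketch below shows the kind of logic involved, assuming a hypothetical JSON body that carries an "operation" field; the derived name would then be surfaced through a data collector or the Agent SDK rather than the hand-rolled string parsing shown here.

```java
// Illustrative helper for a legacy-style endpoint where the real operation is
// buried in the request body rather than the URL or HTTP method. The payload
// shape ("operation" field) is a hypothetical assumption.
public final class PayloadTransactionSplitter {

    public static String deriveTransactionName(String jsonBody) {
        // Naive extraction to keep the sketch dependency-free; a real service
        // would use its JSON library of choice.
        String marker = "\"operation\":\"";
        int start = jsonBody.indexOf(marker);
        if (start < 0) {
            return "LegacyEndpoint - Unknown";
        }
        start += marker.length();
        int end = jsonBody.indexOf('"', start);
        String operation = end > start ? jsonBody.substring(start, end) : "Unknown";
        return "LegacyEndpoint - " + operation;   // e.g. "LegacyEndpoint - checkout"
    }

    private PayloadTransactionSplitter() { }
}
```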
Most business transactions, however, can be broken down easily, which makes continuous testing relatively straightforward, particularly if you have a unified APM solution like AppDynamics that automates much of the process.
Now that you’ve ensured the quality of your metric data, you’ll want to think about testing. In my next blog post, “Five Use Cases for Performance Testing in a DevOps World,” I’ll cover recommended test patterns for improving the efficiency and reliability of your application.
Colin Fallwell is part of the AppDynamics Global Services team, which is dedicated to helping enterprises realize the value of business and application performance monitoring. AppDynamics Global Services consultants, architects, and project managers are experts in unlocking the cross-stack intelligence needed to improve business outcomes and increase organizational efficiency.