I have been in the performance engineering business for about 30 years. I get involved early on during system development and move off as the systems move into maintenance, then move on to the next one.
I also sometimes get called in to troubleshoot a performance crisis. I have been compared to a locust moving on to the next greenfield and, at other times, to Kwai Chang Caine (of Kung Fu fame), wandering the earth sowing justice, peace and performance.
Whichever comparison you prefer, I have noticed some similarities among the various projects I have worked on.
1. Speed to Meet the Need
There are two aspects to performance engineering:
- To speed up the system to meet the required response times
- To talk the customer into lowering their response time requirements
It continually amazes me how often high-level system people start off with unnecessarily fast response requirements. One customer wanted sub-second browser response time, even though the user would stare at the resulting web page for many minutes. Really, it was fine if it took a couple of seconds to fetch the page.
So don't be afraid to talk your customers into less stringent performance criteria.
2. Keep Track of Your History
Performance requirements are always patchy. Yes, they cover the big ones (user response time), but it is entirely typical for the subcomponents to have no requirements at all, making it unclear where and how the end-user response times will be met. Complicating this, the discussions that led to the system design are generally lost. There was a big debate about how large the system should be for how many users. Someone chose that 256-CPU server monster, but what was their thinking? Halfway through the project, when the response requirements have been relaxed and the number of users has been increased, it sure would be nice to look through those emails and see what people were thinking, so you can estimate what system changes need to be made to accommodate the business changes.
So during the early phases, set up an archive account to which people can cc "important" emails. Later, these can be reviewed, and all that wonderful history and knowledge will make system changes and maintenance a lot easier.
3. Dev vs. Ops
The development people do not really address the maintenance phase of the project. They are in a heated rush to meet performance goals and deadlines without overrunning costs, and the project managers are intensely focused on the "go live" deadline.
Then the system goes live, everyone throws a party, and then they realize that they have not planned well for this lesser period of effort: maintenance. What artifacts did the developers leave the maintainers? Did they leave a workable load test? Did they leave a well-organized set of production diagnostics implemented as part of the application? Did they leave proper alert monitoring? Did they leave proper daily reporting?
Ideally, the last stage of development is to set up the system for long-term, cost-effective maintenance, not the first stage of maintenance. This task is properly done while the development team is still available, not after the brightest have gone on to their next adventure. As much as possible, the maintenance tasks (such as daily reports, regression tests and load tests) should be automated, and that may require changes to the system, changes that the development team can make easily but that will be much more difficult for the whittled-down maintenance team.
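To make that hand-off concrete, here is a minimal sketch (in Python, with an entirely hypothetical log file name and threshold) of the kind of automated daily check a development team might leave behind: it reads response-time samples from a log, computes percentiles, and exits non-zero when the agreed target is breached. It is an illustration of the idea, not a prescription for any particular toolchain.

```python
#!/usr/bin/env python3
"""Daily response-time report -- a hypothetical maintenance artifact.

Assumes a plain-text log where each line holds one response time in
milliseconds (for example, written by the application's request logging).
The file name and target below are placeholders, not a real convention.
"""
import statistics
import sys

LOG_FILE = "response_times_ms.log"   # hypothetical input file
P95_TARGET_MS = 2000.0               # the relaxed "couple of seconds" goal


def load_samples(path):
    """Read one response time (in ms) per line, skipping blank or malformed lines."""
    samples = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                samples.append(float(line))
            except ValueError:
                pass  # ignore garbage rather than abort the daily report
    return samples


def main():
    samples = load_samples(LOG_FILE)
    if len(samples) < 2:
        print("Too few samples -- check that request logging is still enabled.")
        return 1

    percentiles = statistics.quantiles(samples, n=100)  # 99 percentile cut points
    p50, p95 = percentiles[49], percentiles[94]
    print(f"samples: {len(samples)}  median: {p50:.0f} ms  p95: {p95:.0f} ms")

    if p95 > P95_TARGET_MS:
        print(f"ALERT: p95 {p95:.0f} ms exceeds target {P95_TARGET_MS:.0f} ms")
        return 2  # non-zero exit so a cron job or CI step can raise an alert
    print("Within target.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The same principle applies to regression and load tests: a small, automated script with a clear pass/fail exit code is something the whittled-down maintenance team can keep running long after the original developers have moved on.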
4. An APM Solution Itself is not the Solution
Management often feels that buying a tool will solve a problem: "We have performance problems, so we will buy an APM solution." This belief often results in shelfware, because management has underestimated the effort required to implement the solution.
Yes, an APM solution is good. But you need enough labor to implement it and to look at and understand the results. Plan for it up front. Don't focus on the capital cost to your organization; focus on how it will integrate into your organization and what will be done with it on an ongoing basis.
5. How Important is Performance?
How important is your project? Everyone feels that their project is important, but some are more so than others. Some systems really are not that performance-intensive, so you can save money by not worrying so much about performance; a high-performing system is more expensive to monitor and maintain than one with less demanding requirements.
Make the decision early on as to how important performance is. If it is important, make sure that everyone treats it as important. If it is not so important, embrace that too and still set clear performance goals; even easy-to-reach goals should be articulated.
I have seen systems where the performance goal is to wait for users to complain, then address it. Sometimes that works fine.
Oliver Cole is President of OC Systems.