
Tim Hinds


Why Software Testers Should Strive to Go Unnoticed | @DevOpsSummit #DevOps

Flying under the radar

Ah, the plight of software testers - destined to crank away behind the scenes, ensuring all runs smoothly. Unlike musical conductors, who command the spotlight while overseeing the performance of a piece, testers only seem to attract attention when things go terribly wrong.

The same can be said for most IT professionals, at least according to Reddit. In December, this question was posed to the AskReddit community:

"What is a job when done right no one notices, but the moment its [sic] done wrong all eyes are on you?"

Second only to a response about bass players, a comment citing IT professionals racked up more than 5,500 upvotes. I'm sure many testers can commiserate with the commenter, who added, "When things work well, it's because of the glorious vision of Operations and Marketing. When things don't, it's because we're plagued by IT glitches that hold back the glorious vision of Operations and Marketing."

Okay, so life isn't fair, and software testers seem to have drawn the short straw, at least when it comes to the blame game. When users notice application performance issues, the fallout may prove detrimental to the success of not only the application, but also the organization as a whole. To prevent recurring performance problems, understanding what went wrong and why is critical. Failing to adjust testing strategies accordingly will only lead to unhappy users, unhappy managers, and ultimately an unhappy you.

When Users Notice
Slowly but surely, organizations are realizing the monumental importance of validating application performance. In the same vein, web and mobile application users are becoming less patient, opting to visit a competing site if pages either fail to load or take too long to load. How long is too long?

According to surveys of online shopping behavior, 47% of consumers expect a web page to load in two seconds or less and 40% will abandon a website that takes more than three seconds to load.

Three seconds.

A lot could happen in three seconds, or nothing could happen in three seconds – it simply depends on how long it takes your application to load. BUT WAIT, THERE'S MORE! Think three seconds is nothing? Check out what can happen in one second or less:

  • For Bing, a one-second delay results in a 2.8% drop in revenue
  • For the Aberdeen Group, a one-second delay resulted in 11% fewer page views, a 16% decrease in customer satisfaction, and 7% loss in conversions
  • For Amazon, every 100ms increase in load time results in a 1% decrease in revenue

When a company faces site outages and poor performance, the finger pointing begins - usually at software testers - whom many believe to be at fault. After the QA team is forcibly dragged into the spotlight, testers must then turn to the task at hand: discovering what went wrong and why.

What Went Wrong?
Application crashes and slow load times can be attributed to a number of factors, including but not limited to:

  • software configuration issues for web servers, databases, or load balancers
  • poor network configuration
  • poorly optimized software code - for example, code that does not allow for concurrent access (see the sketch below)
  • insufficient hardware resources or a lack of auto-scaling elastic computing

Obviously, these areas should receive immediate attention after end users experience poor performance.
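To make the concurrent-access point concrete, here is a minimal sketch (not from the original article) of how a single global lock can serialize every request; the handler names, the lock, and the simulated 50 ms of work are all hypothetical stand-ins.

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical example: one global lock forces every request to run one at a
# time, so adding concurrent users only adds waiting, not throughput.
_global_lock = threading.Lock()

def handle_request_serialized(user_id: int) -> float:
    start = time.perf_counter()
    with _global_lock:       # every caller queues up here
        time.sleep(0.05)     # stand-in for real work (DB query, template render)
    return time.perf_counter() - start

def handle_request_concurrent(user_id: int) -> float:
    start = time.perf_counter()
    time.sleep(0.05)         # same work, but callers no longer serialize
    return time.perf_counter() - start

def worst_latency(handler, users: int = 20) -> float:
    # Fire all simulated users at once and report the slowest response.
    with ThreadPoolExecutor(max_workers=users) as pool:
        return max(pool.map(handler, range(users)))

if __name__ == "__main__":
    print(f"serialized: {worst_latency(handle_request_serialized):.2f}s")
    print(f"concurrent: {worst_latency(handle_request_concurrent):.2f}s")
```

With 20 simultaneous callers, the serialized version makes the slowest user wait roughly a full second for 50 ms of work - exactly the kind of problem that never shows up with a single tester clicking around, and always shows up under load.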

If a thorough analysis of all of the above returns no red flags, you may be faced with one of several other common performance problems that you can explore in detail here.

If your team is using a synthetic user monitoring tool, it should capture the browser-level data from every synthetic session and surface it in a rich set of dashboards. A deep dive into this data should yield insight into common and/or complicated user paths and the experience synthetic users are having along those paths. With a synthetic monitoring system, you'll be able to identify performance issues before real users encounter them in your production environment - the ultimate safeguard against spotlight-inducing performance problems.
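Even without a commercial synthetic monitoring product, a small scripted check can approximate the idea. The sketch below (an assumption, not part of the original article) walks a hypothetical user path with Python's standard library, times each step, and flags anything that blows a three-second budget; a real tool would add browser-level metrics, geographic probes, and alerting on top of this.

```python
import time
import urllib.request

# Hypothetical user path and budget - replace with your own critical pages.
USER_PATH = [
    "https://example.com/",
    "https://example.com/search?q=widgets",
    "https://example.com/cart",
]
BUDGET_SECONDS = 3.0

def run_synthetic_check(path=USER_PATH, budget=BUDGET_SECONDS):
    """Walk the scripted path like a synthetic user and time every step."""
    results = []
    for url in path:
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            resp.read()                      # force the full response download
        elapsed = time.perf_counter() - start
        results.append((url, elapsed, elapsed <= budget))
    return results

if __name__ == "__main__":
    for url, elapsed, within_budget in run_synthetic_check():
        status = "OK  " if within_budget else "SLOW"
        print(f"{status} {elapsed:5.2f}s  {url}")
```

Run on a schedule against staging or production, a check like this surfaces slow steps before real users stumble into them.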

Preventative Measures
It's not enough to merely identify and fix a performance issue after the fact. Software testers should use this information to create a list of action items designed to prevent future crashes and/or poor application performance.

After you understand the core issue, it may be tempting to stop at the first, immediate fix. But if the problem turns out to be a lack of disk space, installing more storage still won't explain why you underestimated how much you needed in the first place. Keep asking why and you'll be more likely to arrive at the right long-term solution.

After determining the solution, look for ways to recreate the issue in a controlled way. For example, use a load testing tool to simulate a large number of users stressing the system through a variety of realistic testing scenarios. Focus on scenarios that:

  • have been known to cause problems in the past
  • are new and untested
  • are likely to produce bottlenecks
  • involve complex transactions
  • are critical paths for users

Scenarios like these are crucial for discovering how your application reacts with abnormal amounts of load. By simulating user performance, you'll be able to understand how tasks and transactions will function for your real users.
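As one hedged illustration of what such a simulation can look like, here is a minimal load-test sketch using the open-source Locust tool; the host, endpoints, task weights, and think times are placeholders, and a commercial load testing tool would express the same scenario through its own scripting or UI.

```python
# Minimal Locust load-test sketch (locust.io). The host and endpoints are
# placeholders - substitute the critical and historically fragile paths
# identified for your own application.
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    host = "https://example.com"     # placeholder target environment
    wait_time = between(1, 5)        # simulated think time between actions

    @task(3)
    def browse_catalog(self):
        # High-traffic path that has caused problems in the past.
        self.client.get("/products")

    @task(1)
    def checkout(self):
        # Complex, business-critical transaction worth stressing explicitly.
        self.client.post("/cart/checkout", json={"payment": "test-card"})
```

Ramping this up (for example, `locust -f loadtest.py --headless -u 500 -r 50 -t 10m`) lets you watch response times and error rates as hundreds of virtual users hammer the scenarios above.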

Flying Under Users' Radar
If testers successfully ensure the performance of a web or mobile application, users should remain unaware of their involvement. Sure, software testers may never be famous in the eyes of the public, but when it comes to working in QA, going unnoticed is, more often than not, a good thing.

Testers who go unnoticed deliver high-quality applications designed to provide exceptional performance across different browsers, devices, locations and more. Throughout a career, most testers will have to face the spotlight in the aftermath of application crashes or performance problems - but if you learn from mistakes, invest in the right tools, and put preventative measures in place, you'll not only stay out of the spotlight most of the time, you'll also be able to handle the heat when the spotlight becomes unavoidable.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.