Continuous Integration, Continuous Delivery & Continuous Testing

Tim Hinds



Cruise Control: Automation in Performance Testing | @DevOpsSummit #APM #DevOps

Listen closely to the background hum of any agile shop, and you'll likely hear this ongoing chant: Automate! Automate! Automate! While automation can be incredibly valuable to the agile process, there are some key things to keep in mind when it comes to automated performance testing.

Automated performance testing is important for many different reasons. It allows you to refactor or introduce change and test for acceptance with virtually no manual effort. You can also stay on the lookout for regression defects and test for things that just wouldn't come up manually. Ultimately, automated testing should save time and resources, so you can release code that is bug-free and ready for real-world use.

Recently, I spoke with performance specialist Brad Stoner about how to fit performance testing into agile development cycles. This week, we'll use this blog post to follow up with greater detail around performance testing automation and recap which performance tests are good candidates for automation. After all, automation is an important technique for any modern performance engineer to master.

Automation Without Direction
Most of the time, automation gets set up without performance testing in mind. Performance testing is, at best, an afterthought to the automation process. That leaves you, as a performance engineer, stuck with some pretty tricky scenarios. Maybe every test case is a functional use case, and if you want to adapt them for performance, you have to go back and modify them for scale or high concurrency. Or perhaps the data required for a large performance test is never put together, leaving you with a whole new pile of work to do.

Use cases are strung together in an uncoordinated way, so you have to create another document that describes how to use existing functional tests to conduct a load test. And of course, those test cases are stuck on "the happy path," making sure functionality works properly; they don't exercise edge cases or stress cases, and therefore don't identify performance defects.

None of these scenarios is desirable, but they can be easily rectified by incorporating performance objectives into your automation strategy from the start. You want to plan your approach to automation intelligently.

What Automation Is - and Isn't - Good For
You can't automate everything all the time. If you run daily builds, you can't run a massive load test every night, and that's even less feasible if you build several times a day. Instead, you'll have to pick and choose your test cases, mapping out what you run over time in coordination with the app's release cycle.

Trying to cover too many use cases at once will kill your environment. Constantly high traffic patterns are next to impossible to maintain. Highly specific test scenarios can also cause difficulty, because you may need to adjust them every time something changes. That's why it pays to be smart about what you automate.

Look for a manageable number of tests that can be run generically and regularly. Then, benchmark those tests. After that, you can focus your manual time on ad hoc testing, bottlenecks, or areas under active development. This division of labor will catch a ton of defects before they reach production.

Get Automation Working for You
Automation can be great, but it has to identify performance defects and alert you. Just as functional tests validate a defined plan for how an application should behave, performance tests should validate your application's service level agreement. Define where you want to leverage automation. Is it for workload capacity? Are you looking at stress, duration, and soak tests? Will you automate to find defects on the front end?
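As a rough illustration, an automated SLA check can be as simple as computing a percentile over the response times a test run collected and failing the build when it crosses the target. This is a minimal sketch; the 800 ms 95th-percentile target and the sample data are made-up assumptions, not anything specific to NeoLoad:

```python
# Hypothetical SLA: 95th-percentile response time must stay under 800 ms.
SLA_P95_MS = 800

def percentile(samples, pct):
    """Return the pct-th percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def check_sla(response_times_ms, threshold_ms=SLA_P95_MS):
    """Return True if the run meets the SLA, False if it should fail the build."""
    return percentile(response_times_ms, 95) <= threshold_ms

# Simulated response times (ms) from one automated load test run.
run = [120, 180, 240, 310, 420, 505, 610, 700, 750, 1900]
print(check_sla(run))
```

Wired into a CI job, a `False` result here is exactly the kind of alert the paragraph above calls for: the build flags the SLA breach instead of a human spotting it later.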

These tests are easy to automate, and you can do so at low cost. You'll want to establish benchmarks and baselines often to see whether performance degrades as the application evolves. Testing with direction means you don't test just for the sake of testing; you always test with a purpose: to find and isolate performance defects. That's critical for a performance engineer, because you're always pushing the envelope of the application, and you need to know where that boundary lies.
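One way to sketch that benchmark-and-baseline idea: store the metrics from a known-good run, then have each automated run flag any metric that drifts past a tolerance. The metric names and the 10% tolerance below are illustrative assumptions, not part of any particular tool:

```python
# Flag anything more than 10% slower than the stored baseline.
TOLERANCE = 0.10

def find_regressions(baseline, current, tolerance=TOLERANCE):
    """Compare current metrics (ms) to a baseline; return the regressed ones."""
    regressions = {}
    for name, base_value in baseline.items():
        value = current.get(name)
        if value is not None and value > base_value * (1 + tolerance):
            regressions[name] = (base_value, value)
    return regressions

# Hypothetical per-transaction response times (ms) from two runs.
baseline = {"login_ms": 250, "search_ms": 400, "checkout_ms": 900}
current = {"login_ms": 260, "search_ms": 520, "checkout_ms": 880}
print(find_regressions(baseline, current))  # only search_ms drifted past 10%
```

Running this after every automated test gives the "purpose and motive" described above: each run either confirms the baseline or isolates exactly which transaction regressed.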

Get Ready for Smooth Sailing
Automated performance testing can be a huge time saver. To make the most of that time-saving potential, you want to do it right. Work smart by always testing with purpose. Ready to dive even deeper into these topics? Jump right in and check out the full webcast here where we go into greater detail about automation strategies. You can also learn how Neotys can help you with the overall agile performance testing cycle.

Photo: Pixabay

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.