Test Retries | Bondar Academy
Course: Cypress UI Testing with JavaScript
Module: Advanced Features
Instructor: Artem Bondar
Lesson Summary
In this lesson, we discuss test retries in Cypress, addressing the issue of flaky UI tests. Flaky tests can pass or fail unpredictably, making debugging challenging. This can lead to inaccurate reporting, where tests that should pass fail intermittently.

Understanding Flaky Tests
Flaky tests may fail due to:
- The state of the application at the time of the test
- Environmental factors affecting the test execution

Implementing Test Retries
To manage flaky tests, Cypress provides a retries feature. It automatically retries failed tests to help determine whether a failure is due to flakiness or an actual bug.

Configuration Steps

    retries: {
      runMode: 1,
      openMode: 0
    }

In this configuration:
- runMode: the number of retries in headless mode (e.g., in a CI/CD pipeline).
- openMode: the number of retries when running tests in the Cypress GUI.

Why Separate Configurations?
- Open Mode: used while developing tests, where retries are usually not desired.
- Run Mode: runs are typically more flaky due to slower execution in CI/CD environments.

Advanced Retry Configuration
You can also set a specific number of retry attempts for an individual test:

    it('test name', { retries: 2 }, () => { ... });

This allows for more flexibility in handling particularly flaky tests.

In summary, configuring test retries is a simple yet effective way to enhance the stability of your Cypress tests and manage flakiness efficiently.
Video Transcript
Hey guys, welcome back, and in this lesson we will talk about test retries. So UI tests are flaky by their nature. You may run your test 10 times, but on attempt number 11, for whatever reason, your test may fail. Then you run it a 12th time and your test works. Situations like this are very difficult to debug, and it is hard to understand their root cause. Sometimes it's not necessarily a problem with your script. Sometimes it's just the app, the state the application was in at the moment when Cypress was trying to interact with it. Unfortunately, flaky tests like that have an impact on your reporting. You expect, let's say, 100 tests to pass, and then one or two tests just fail for whatever reason, and you're like, damn, you know this test should work, and you run the test individually and it really works. So to handle situations like this, to make sure that such a failure is not a real failure but just a flaky test that is sometimes unstable, you can use the retries feature in Cypress, which is very easy to configure and which will automatically rerun a failed test for you, just to establish whether it is flakiness or a real bug. Let me show you how to configure this. So going back to our Conduit project. For this demo, I will use one of the tests that we created before, modify API response, and I will put it.only, like this, so we'll run only this test. I want this test to fail eventually, so I will just modify the assertion, so the test is going to fail 100 percent of the time. Now I will show you how the retry feature works. Going back to the configuration, in this e2e block you call the property retries, this guy, and then set the number of times you want Cypress to retry a failed test after the initial failure.
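The step just described can be sketched like this. This is a hypothetical cypress.config.js (file layout and the baseUrl are assumptions, not taken from the lesson); in a real project the object is usually wrapped in defineConfig() from the 'cypress' package.

```javascript
// Hypothetical cypress.config.js sketch: a single number for retries
// applies the same retry count to both open mode and run mode.
const config = {
  e2e: {
    baseUrl: 'http://localhost:4200', // assumed URL for the Conduit demo app
    retries: 1, // one extra attempt after the initial failure (2 runs total)
  },
};

// In a real project: module.exports = defineConfig(config)
if (typeof module !== 'undefined') module.exports = config;
```

With retries: 1, a test that fails on its first run is executed once more before Cypress reports it as failed.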
So let's say I put one: it means Cypress will run the test once, and if it fails, it will make one more attempt to run it. If that attempt passes, then everything is fine, the test passes. But if the second attempt also fails, Cypress will mark this test as failed. So let's see how it works. So npx cypress open, I'm opening the runner, running the framework, and let's try to run this test. So it's running the test, you see the assertion fail right now, and then Cypress tries one more time to repeat the same exact test, and it fails as well. And if I collapse the runner, you see it's showing attempt number one and attempt number two. And since both attempts failed, it marked the entire test as failed. Very convenient, isn't it? And you have a couple more options to configure this a little more flexibly. This retries setting works for everything, for run mode and for open mode, but you can configure them separately. There is openMode, let's say we put zero, and then, I need to use a comma, runMode, where I can put one. So what's the difference? Open mode is when you use the command npx cypress open to open the runner and run your tests. Run mode is running the tests in headless mode using the command line. This is the mode that you use when you run your tests in a CI/CD pipeline. Why is it useful to separate the retries configuration for open mode and run mode? Because when you use open mode, you normally run some individual tests, maybe you're working on the development of a new test, and you don't want Cypress to rerun your test when you know it absolutely should fail. For example, you're just developing a new assertion or experimenting with a locator or something. It's just irritating and annoying when it runs the retry in open mode. But run mode is the default mode that is going to be used when you run the full regression test suite.
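The split configuration from this part can be sketched as below, again as a hypothetical cypress.config.js (in a real project the object is normally passed to defineConfig() from the 'cypress' package).

```javascript
// Hypothetical sketch: separate retry counts for the two modes.
const config = {
  e2e: {
    retries: {
      openMode: 0, // npx cypress open (interactive runner): no retries
      runMode: 1,  // npx cypress run (headless / CI/CD): one retry
    },
  },
};

// In a real project: module.exports = defineConfig(config)
if (typeof module !== 'undefined') module.exports = config;
```

With this split, retries stay out of your way while you develop tests in the interactive runner, but still smooth over flakiness in headless CI/CD runs.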
Also, tests in a CI/CD pipeline are just naturally more flaky, because in CI/CD tests execute a little bit slower and the machines are not as powerful as your local computer. Flakiness happens far more often in a CI/CD pipeline than on your local computer. That's why it's useful to separate them. So if I run npx cypress run right now, we will see that it triggers the execution of the same test, and in the command line you're also going to see the attempts at running the test. So here we go, attempt number one, and then attempt number two. All right, the test failed after the two attempts. It also created two screenshots: this is what was on the screen during attempt number one and attempt number two, and you can see the screenshots are named accordingly. And you have one more option to configure retries. Let's say inside of your test suite you have some specific test that is extremely flaky, and you need more retry attempts for that particular test compared to all the other tests. Say your default configuration is runMode: 1, but for some test you want three or four attempts, to make sure that the problem is really an issue with the test and not just flakiness. In this case, you can configure this at the test level. You just create an object in the test declaration: after the test name you put a comma, then the object, then another comma, then the callback function. The retries property is also available over here. Let's say I put two there. Now the settings for this test apply individually, no matter what is configured in the configuration file. So let me run this one more time. With this configuration, the test should be retried two times, so three executions in total: the initial attempt and then two retries. So let's see. This is attempt number one. And attempt number two. And we are waiting for attempt number three. Yeah, and the test failed.
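The test-level override can be sketched as below. In a real spec file, `it` is a global provided by the Cypress runner; a tiny stand-in `it` is defined here only so the sketch is self-contained outside Cypress, and the test name matches the lesson's demo.

```javascript
// Minimal stand-in for Cypress's global `it`, only so this sketch runs
// outside the Cypress runner; in a real spec file you would not define it.
const it = (name, options, fn) => ({ name, retries: options.retries, fn });

// Per-test retries override whatever is set in cypress.config.js.
// With retries: 2, Cypress makes the initial attempt plus up to 2 retries,
// i.e. at most 3 executions, matching the demo in the lesson.
const spec = it('modify API response', { retries: 2 }, () => {
  // cy.intercept(...) and the assertions would go here in a real spec
});
```

The options object between the test name and the callback is the key detail: that middle argument is where per-test settings such as retries live.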
And we also have three screenshots generated. All right, guys, that's it. This is how simply you can increase the stability of your tests. If you deal with test flakiness, one of the first things to try is configuring the retry mode: just add one or two retries. And for an especially flaky test, if you can't find the reason why it is flaky, you can add extra configuration at the test level to increase the number of attempts before failing the test. All right, that's it, guys, and I'll see you in the next lesson.