Assertions Generation and Agent Auto-Debugging | Bondar Academy
Course: Playwright API Testing with TypeScript
Module: AI-Assisted Scripting with Copilot
Instructor: Artem Bondar
Lesson Summary
In this lesson, we explore how to use Copilot to suggest missing assertions for API tests. Here's a structured overview of the key points discussed:

Setting Up the Environment
Modify playwright.config.ts to set the HTML reporter's open option to 'never' (a minimal config sketch follows this summary). This prevents the HTML report from opening automatically when tests fail, allowing Copilot to continue working in the same terminal.

Testing with Copilot
We begin with a test for fetching articles that already includes some assertions. The goal is to see whether Copilot can suggest additional useful assertions based on the response JSON object.

Steps to Use Copilot
- Run the test to obtain the JSON response object.
- Copy the response and prompt Copilot to suggest useful assertions, specifying what to exclude (e.g., data type validations).
- Review the suggested assertions, such as verifyArticleSorted, verifyEachArticleHasRequiredField, and verifyArticleCount.
- Select the meaningful assertions to keep, such as verifySlugMatchTitleFormat.

Implementing and Testing Assertions
After selecting an assertion, provide business context for the validation requirements. For instance, the slug should match the title format according to specific rules.

Running Tests
Use the command npx playwright test -g to run specific tests by name. If a test fails, Copilot can analyze the error and attempt to fix it automatically.

Key Takeaways
- Copilot can interact with the terminal and self-correct based on error messages.
- Using a more advanced model, such as Claude 3.5, may yield better results.
- Always verify that assertions remain meaningful after Copilot's modifications.

In conclusion, Copilot's ability to suggest and refine assertions can enhance API testing efficiency, but careful oversight is necessary to ensure the validity of the assertions.
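For reference, here is a minimal sketch of the reporter setting mentioned in the summary. The exact contents of playwright.config.ts will depend on the rest of your project; the relevant part is the open: 'never' option for the HTML reporter, and everything else is just enough scaffolding to make the snippet self-contained.

```typescript
// playwright.config.ts (minimal sketch, showing only the HTML reporter option)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // open: 'never' stops the HTML report from opening automatically when a test
  // fails, so Copilot can keep working in the same terminal.
  reporter: [['html', { open: 'never' }]],
});
```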
Video Transcript
Hey guys, welcome back. In this lesson, we will ask Copilot to suggest some of the assertions that we may be missing for our API tests, and we'll see how that works. Let's jump into it. Before we continue, in playwright.config.ts, add a little flag for the HTML reporter: open set to 'never'. What this does is, when you run a test in the command line and the test fails, it prevents the HTML reporter from opening automatically. Otherwise the reporter kind of blocks the terminal, and if Copilot is working in the same terminal at the same time, Copilot cannot continue doing its job. Adding this flag prevents that. And that's it, let's move on. This is our get articles test that we created before. It already has some assertions: the response should match the schema, the length of the articles array should be less than or equal to 10, and articlesCount should be 10. Let's see if Copilot will be able to suggest any other useful assertions for this test. For that, I need to provide some context about the response for these articles, the JSON object that Copilot can look into before giving suggestions. So first, let me log this response object and run the test. This is the entire JSON object, and I'm copying the whole thing with Ctrl+C. Now I open Copilot and start our conversation: "Here is my JSON response object for the get articles test" (and I paste it right here), "I want you to suggest assertions that can be useful for validation." And let's also say what we don't want included. For example, it could add data type assertions, but that is already covered by our schema validation, so we don't need those: "Do not include any data type validations, as that is already covered by the shouldMatchSchema method." I believe that's it, so we can remove this console.log, and let's run it and see what happens. So it's working, analyzing the current test file. All right: verifyArticleSorted, verifyEachArticleHasRequiredField, verifyArticleCount, verifyArticleSlugMatchTheirTitles, and verifyFavoritesCount is a non-negative number. These are the assertions that Copilot suggests, and now it will add the implementation. My god, it's a lot, let's see. Validate articles are sorted by createdAt: maybe, but we probably don't need it. VerifyEachArticleHasValidData: it's looping through and checking that the article title and slug are not empty, which is not very useful. VerifySlugMatchTheTitleFormat: okay, it's doing something else. It says it needs to adjust the assertion method to match our custom expect implementation, so it's updating it to use shouldEqual. "Added several meaningful assertions": good boy. Okay, required fields, slug format, non-negative favorites count, article has an author. Okay, stop, stop, stop. So let's see. VerifySlugMatchTitleFormat, this one is actually good, because our slug is actually based on the title, right? So it's a meaningful business validation that the slug should be based on the title and the format should be the same. VerifyFavoritesCount is non-negative, maybe, and verifyArticleHasAnAuthor, also maybe. So let's use just a single example out of this list and work with it. Let's say we want to keep this SlugMatchTitleFormat assertion. So: "Okay, I want to keep only this assertion in the test" (and I provide it like this for reference), "Remove all other suggested assertions."
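For readers who want a concrete picture of the starting point described above, here is a rough sketch of what a "get articles" test with the existing assertions could look like. The endpoint URL is an assumption for illustration, and the course's custom shouldMatchSchema helper is only referenced in a comment; plain Playwright expect calls are used instead.

```typescript
import { test, expect } from '@playwright/test';

test('Get Articles', async ({ request }) => {
  // The URL and query parameters below are assumptions for illustration only.
  const response = await request.get('https://conduit-api.example.com/api/articles?limit=10&offset=0');
  expect(response.status()).toBe(200);

  const body = await response.json();
  // Temporarily log the response to copy it into the Copilot prompt, then remove:
  // console.log(JSON.stringify(body, null, 2));

  // Schema validation is handled by the course's custom shouldMatchSchema helper
  // (not shown here); the two existing assertions from the lesson are:
  expect(body.articles.length).toBeLessThanOrEqual(10);
  expect(body.articlesCount).toBe(10);
});
```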
Okay, and it would probably be useful to give some business context behind the requirements: what are the requirements for this slug in relation to the title? "Here are the business requirements for the slug format: the slug should be the same as the title, except the words in the slug should be separated by dashes, and at the end of the slug a number should be appended." So we have a slug, which is the title, then a dash, then any number. These are the business requirements. Yeah, I guess that's it. No, hold on, it's not. It is always a good idea to give an example in the prompt, so Copilot has a better reference. So let's give it an example of the title and the slug. And how can I get a title? I close the execution, open a new terminal, and look at the test results. Okay, here we go. So this is the title, and let's add the matching slug like this, and I believe that should be enough, all right? So let's run this and see. Copilot replies that it understands we want to keep the slug format validation according to the business requirements; all right, go for it. All right, it updated the test and kept only a single validation, as we asked. The only step left is to make sure it's actually working. To run a specific test by name, I have created an instructional prompt: to run a specific Playwright test by test name, you can use a command like npx playwright test -g "test name", where "test name" is the name of the test. Run the Get Articles test. It should use the terminal command to execute just this get articles test. Okay, so it's asking me, do I want to execute this command? Yes, let's continue, and it's running the test, and unfortunately, the test fails. Why did it fail? Okay, we have an entire response object; it expected true but received something else. So Copilot is now trying to fix this automatically. It's reading the response in the terminal, making adjustments according to our business requirements, and trying again and again until the test passes. So let's see what happens. We run it again, and again it fails; there is a new error, most likely, and no progress so far. It may change it again. Okay, finally, after many attempts, it was able to figure out this assertion. But you see, it looks a little bit ugly, so let's ask it to simplify: "Okay, good job. Now, let's simplify this assertion. Refactor it to look as simple as possible." And let's see. Okay, it did a refactor into just two lines; it's a little bit better. Okay, the test is still passing, which also looks good. The only thing I don't like is this assertion, so let's ask it to refactor this assertion to look simpler: "You don't necessarily need to use the custom assertion shouldEqual. Use a more suitable assertion method if needed." Okay, so it's offering to use the toMatch assertion. All right, now it looks much better. And let's run this test to make sure that it still works. No, it doesn't work. Come on, man, what did you change? Yeah, go ahead and fix it if you broke it. All right, finally, it passed. It replaced the assertion, and the assertion finally looks a little bit better. I think if we tweaked this regex expression a little bit here and there, it could look even better.
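To make the final slug assertion concrete, here is one way it could be expressed, written as a hypothetical helper rather than Copilot's exact output. It follows the business rules stated above (the slug is the title with words joined by dashes, followed by a dash and a number); the lower-casing and punctuation handling are assumptions, and the real regex produced in the video may differ.

```typescript
import { expect } from '@playwright/test';

// Hypothetical helper illustrating the slug-format rule from the lesson.
function verifySlugMatchesTitleFormat(article: { title: string; slug: string }) {
  const titleAsSlug = article.title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-') // turn spaces and punctuation into dashes
    .replace(/^-+|-+$/g, '');    // drop leading/trailing dashes
  // The slug should be the normalized title followed by a dash and a number.
  expect(article.slug).toMatch(new RegExp(`^${titleAsSlug}-\\d+$`));
}

// Usage inside the get articles test:
// body.articles.forEach(verifySlugMatchesTitleFormat);
```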
But anyway, the assertion is okay for us because it's a valid check of our business requirement. So guys, here's the main takeaway: Copilot can interact with the terminal and can fix itself. If you ask it to do something and the test fails, it can read the terminal log immediately, analyze the error message, and use that error message to auto-fix itself. Also, if you used a better, newer model, like Claude 3.5, which has a bigger context window, I'm sure the result would be better and faster than this. So the main takeaway is that Copilot has this kind of autopilot mode where it can run and self-improve until you have a passing result. But keep an eye on it: sometimes it can get stuck in a vicious cycle and simplify the assertion so much that it no longer makes sense. So just double-check those iterations, making sure that your assertions are still meaningful. All right, that's it, guys, and see you in the next lesson.