Update Assertions using Copilot Agent Mode | Bondar Academy
Course: Playwright API Testing with TypeScript
Module: AI-Assisted Scripting with Copilot
Instructor: Artem Bondar
Lesson Summary
In this lesson, we explore using Copilot agent mode to enhance our testing process by automatically adding new assertions to our tests. The focus is on using Copilot to fill gaps in our existing tests, specifically by implementing schema validation assertions for all API requests that return responses.

Key Steps in the Process:
1. Open the smokeTest.spec.ts file, which already contains some tests and assertions.
2. Identify that schema assertions were previously added only for get articles, get tags, and post articles.
3. Switch to agent mode in Copilot and select the Claude 3.5 model.
4. Craft a specific prompt instructing Copilot to add schema validation for all relevant API requests.

Prompt Structure: Your prompt should include:
- A clear headline stating the task.
- Specific details on how to implement the task.
- Contextual information about the shouldMatchSchema method, including its arguments: the folder name of the schema location, the file name in the format [request type]_[endpoint], and an optional Boolean argument that generates a new schema file when true.

Execution and Review: After running the prompt, Copilot adds the necessary assertions while respecting the existing ones, and generates new schemas as required. The process concludes with running the tests to ensure everything functions correctly. Finally, a follow-up task removes the true flag from all assertions.

Takeaway: The effectiveness of Copilot relies heavily on the clarity and specificity of your prompts. Experimenting with different tasks will improve your interaction with AI tools. For reference, the prompts used in this lesson are provided in a separate document.
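The naming convention above (folder named after the endpoint, file named [request type]_[endpoint]) can be sketched as a small helper. The function name and path shape below are illustrative assumptions, not the course's actual implementation:

```typescript
// Illustrative sketch of the schema-naming convention from the lesson:
// folder name = endpoint, file name = [request type]_[endpoint].
function schemaLocation(requestType: string, endpoint: string) {
  const folder = endpoint;                                     // e.g. "tags"
  const fileName = `${requestType.toLowerCase()}_${endpoint}`; // e.g. "get_tags"
  return { folder, fileName };
}

// Example: a GET request to the "tags" endpoint maps to tags/get_tags.
const loc = schemaLocation("GET", "tags");
console.log(loc.folder, loc.fileName);
```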
Video Transcript
Hey guys, welcome back. In this lesson, we will use Copilot agent mode to add new assertions to our tests, so Copilot does the work instead of us. How about that? Let's jump into it. This is our current smokeTest.spec.ts. It already has some tests and some assertions. Previously, we created shouldMatchSchema assertions, but we didn't add them to all API requests. We have them only for get articles, get tags, and post articles; the rest of the tests do not have those assertions. So let's ask Copilot to fill this gap and add the assertions where needed inside our tests, according to our specification. Going back to the Copilot window, make sure that you are switched to agent mode instead of edit or ask mode. Also, I'm going to use the Claude 3.5 model for this. When you want to give Copilot a task, you need to provide a prompt. Your prompt has to be as specific as possible about what you want to do and how you want it done. Normally, you start the prompt with a headline of what, in general, you expect Copilot to do for you. After that, you provide specific details on how the job should be done and what the expected result is. This is the prompt layout you should target. Also, when you design prompts, make sure that you are as specific as possible and that your prompt does not contain any conflicts in what you are trying to do. For example, if you write something like "add new prompts and delete the ones that are not needed": okay, adding prompts or deleting them? And what does "not needed" even mean? Avoid vagueness and conflicts. If you are clear and specific, Copilot will try its best and will most likely do what it's supposed to do. To save some time, I have prepared the prompt in advance to complete this task. Let me just copy-paste this prompt, and then we will review it together. Here's our prompt; let's review it.
The headline: I want to add schema validation to all API requests that have responses. Why did I add "that have responses"? Because a delete request does not have a response, so we don't need schema validation for it. To perform schema validation, use the method shouldMatchSchema. This method should be used for all API requests that have a response. A delete request does not have a response, so schema validation is not needed for it. Just to make sure, I gave extra-clear instructions that, hey, for delete requests we don't want schema validation, in case it tries to add it. Now, let's give Copilot an explanation of how shouldMatchSchema actually works, so it will better understand the context. shouldMatchSchema has three arguments. The first argument is the folder name of the schema location. It should match the name of the endpoint; for example, if tags is the endpoint, the folder name should be tags. Providing examples to AI is considered good practice: you give it an instruction and then an example, saying, "Hey, this is how I would expect it to work." The second argument is the file name of the schema. It should be in the following format: request type (I put it in square brackets), underscore, endpoint. Example: get_tags, where get is the request type and tags is the endpoint. The third argument is a Boolean and it's optional. When true is provided, a new schema file will be generated. So we explained how this method works, so Copilot understands the context. Now, what we want it to do: add the schema validation assertion to every API request according to the instructions, and set the third argument to true for all new assertions. Ignore API requests that already have a schema validation assertion. We added these instructions to make sure that Copilot will not touch the assertions that are already in the script. We know they are correct and working; we just want to add new assertions to the API requests that don't have them. Instructions are set, so let's kick it off.
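The three-argument behavior described in the prompt can be sketched as follows. This is a hypothetical stand-in that keeps schemas in an in-memory map instead of files; the course's real shouldMatchSchema helper presumably reads and writes JSON schema files on disk and performs full JSON Schema validation:

```typescript
// Hypothetical in-memory stand-in for the shouldMatchSchema helper described
// in the lesson. A Map keyed by "folder/fileName" plays the role of the
// schemas directory; the "schema" here is just the sorted top-level keys.
const schemaStore = new Map<string, string[]>();

function shouldMatchSchema(
  folder: string,               // schema folder, named after the endpoint (e.g. "tags")
  fileName: string,             // "[request type]_[endpoint]" (e.g. "get_tags")
  generateNew: boolean = false, // when true, (re)generate the schema from the response
  response: Record<string, unknown> = {},
): void {
  const key = `${folder}/${fileName}`;
  if (generateNew || !schemaStore.has(key)) {
    // First run (or explicit regeneration): derive a "schema" from the response.
    schemaStore.set(key, Object.keys(response).sort());
    return;
  }
  // Subsequent runs: assert the response still has the recorded shape.
  const expected = schemaStore.get(key)!;
  const actual = Object.keys(response).sort();
  if (JSON.stringify(expected) !== JSON.stringify(actual)) {
    throw new Error(`Response for ${key} does not match stored schema`);
  }
}

// First call with true generates the schema; later calls validate against it.
shouldMatchSchema("tags", "get_tags", true, { tags: ["ai", "testing"] });
shouldMatchSchema("tags", "get_tags", false, { tags: ["other"] });
```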
So I'm running this prompt; let's see what's going to happen. It's typing something: okay, it will add schema validations, doing some work. It clarifies the requirements, and then it should start adding the code. All right, it seems like it's done, and it reports what it did. For the create/update/delete article test, it added no schema validation where none was needed, and it notes which schemas already exist and which will be generated. Look, it figured out that these two schemas already exist, and the new schema will be generated when we run the test. Now we can review what was actually changed. This test was not touched, and this one was not touched either. But it added this line, which we didn't ask it to add. Look, we already have the assertion here, so it missed a little bit. It means that for future requests, we probably need to provide more explicit prompts, maybe with some extra emphasis, so it will not make the mistake we specifically asked it to avoid. We asked, hey, everywhere a shouldMatchSchema assertion already exists, don't add another one, but it still did. It worked fine for one test but ignored the instruction for the second one. That's all right; you can just accept or reject each change. For this particular one, let's hit Backspace to reject it. All right, moving on. The rest seems fine. It created shouldMatchSchema for get articles with true set. This one is get articles, this is post articles, and look, it created a completely new one: put articles. So Copilot figured out that this is a put request, so the folder should be articles, based on the endpoint as we specified, with put_articles as the file name, and so on. So far, so good. You can keep those changes either one by one or just keep everything. Keep everything, and everything is added. Now let's run this test and see if it is working.
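The duplicated assertion above shows why "ignore existing assertions" can't be fully trusted to an LLM. A deterministic guard for that kind of edit, sketched here with hypothetical names and a deliberately simple check, would inspect the test body before appending:

```typescript
// Hypothetical guard (illustrative only): only append a schema assertion
// if the test body doesn't already contain a shouldMatchSchema call.
// A real tool would match on folder/file name, not just the call name.
function addAssertionOnce(testBody: string, assertion: string): string {
  if (testBody.includes("shouldMatchSchema(")) return testBody;
  return `${testBody}\n  ${assertion}`;
}

const body = `const res = await api.get('/tags');`;
const updated = addAssertionOnce(
  body,
  `await shouldMatchSchema('tags', 'get_tags', true)`,
);
console.log(updated.includes("shouldMatchSchema(")); // → true
```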
You can either run the test the same way we did before, by clicking Run in the Test Explorer, or you can ask Copilot to do this as well. Copilot has access to the terminal and can run any commands in it. So I can ask something like "run tests in the current spec file" and hit Enter. Let's see. All right, it's offering: do you want me to run the tests? And you can say yes, continue. It runs the terminal command, and all tests are executed successfully. Great, all tests passed, and everything is great. After this completed, we should have a new schema generated. Yeah, the put article schema was generated automatically. Now that this part is done, we can say something like: "Looks good. Now remove the true flag from all shouldMatchSchema assertions." And let Copilot do its job one more time, because now that all the schemas have been generated, we just want to remove all the flags. Let Copilot do this. All right, it's completed. Let's check what was done. Everything is correct: look, it removed all the true flags while keeping the rest of each assertion intact. So we can just keep the changes, and they are saved. And look, it asks: do you want to run the tests one more time to verify that they still pass? Yeah, sure. Let's run them one more time to make sure the tests are still working as expected, and Copilot was able to confirm that. Perfect. So this is how Copilot agent mode works: it can interact with your files and write code for you, covering some of your routine tasks. The important takeaway is that you have to be as specific and as clear as possible in your prompting, and provide enough context. If you are not clear enough, you may not get the results you expect. So play around with it, experiment, give it different tasks, and see what the outcome will be.
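The follow-up task (removing the true flag) is a purely mechanical edit. As a rough illustration of what Copilot does, assuming the assertions are single-line calls like shouldMatchSchema('tags', 'get_tags', true), a regex sweep over the spec file's source would look like this:

```typescript
// Rough illustration of the follow-up edit: strip the trailing `, true`
// argument from every shouldMatchSchema(...) call in a spec file's source.
// (Assumes the simple single-line call shape used in this lesson.)
function removeTrueFlag(source: string): string {
  return source.replace(/(shouldMatchSchema\([^)]*?),\s*true\)/g, "$1)");
}

const before = `await shouldMatchSchema('articles', 'put_articles', true)`;
console.log(removeTrueFlag(before));
// → await shouldMatchSchema('articles', 'put_articles')
```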
With more practice with Copilot and AI, you'll get a feeling for how to interact with an LLM and how to adjust your prompting. The prompts that I used in this lesson are attached in a separate document below this lecture, so you can copy-paste them directly into your Copilot chat and play around with them as well. That's it for this lesson, and see you in the next one.