Managing Instructions and Prompts | Bondar Academy
Course: Playwright API Testing with TypeScript
Module: AI-Assisted Scripting with Copilot
Instructor: Artem Bondar
Lesson Summary
In this lesson, we explore how to use instructions and prompts effectively when interacting with AI, specifically within the context of Copilot. The key takeaway is that providing detailed and descriptive prompts leads to better outcomes.

Understanding Instructions vs. Prompts
Instructions are a set of commands that define how tasks should be performed, while prompts specify what needs to be done. It is recommended to keep instructions and prompts separate for clarity and reusability.

Types of Instructions and Prompts
Instructions:
- Workspace instructions: automatically applied to every prompt.
- Custom instructions: specific to certain operations.
- Profile instructions: user preferences that apply across projects.
Prompts:
- Custom prompts: specific to a project.
- Profile prompts: saved across different projects for user-specific tasks.

Creating and Using Prompts and Instructions
To create a structured workflow:
1. Create a folder named .github in your project root.
2. Inside it, create instructions and prompts folders.
3. Define a copilot-instructions.md file for workspace instructions.
Use Markdown syntax to format your instructions; GitHub automatically renders Markdown for better readability.

Leveraging AI for Automation
By providing clear instructions and prompts, you can automate repetitive tasks. For instance, AI can generate instructions based on your application context, which you can then refine. It is essential to ensure there are no conflicting instructions, since conflicts lead to unexpected results.

This lesson emphasizes the importance of detailed prompts and instructions to enhance AI interaction and streamline workflows in your projects.
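For orientation, here is a sketch of the resulting folder layout. The copilot-instructions.md name and the instructions and prompts folder names follow the VS Code and GitHub Copilot defaults; the individual instruction and prompt file names are illustrative.

```
.github/
├── copilot-instructions.md                  # workspace instructions, applied to every request
├── instructions/
│   └── schema-validation.instructions.md    # custom instruction (illustrative name)
└── prompts/
    ├── add-schema-validation.prompt.md      # custom prompt (illustrative name)
    └── update-schemas.prompt.md
```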
Video Transcript
Hey guys, welcome back, and in this lesson we're going to talk about how to use instructions and prompts. So far we have been interacting with AI using prompts, typing them into the Copilot chat window. And I think you have noticed that if we provide a prompt that is not descriptive enough, the result is not as predictable as we want it to be. If we want a good outcome, we need to paste a big, descriptive prompt with the details, the examples, and all that stuff. So for repetitive workflows, you definitely want to keep those prompts saved somewhere. Instead of typing a prompt again and again, you just paste it into your window and execute it. That's it. But keeping those prompts in Notepad or Google Docs or somewhere like that is probably not a good idea, and Copilot has a built-in mechanism for saving prompts and instructions that you can later reuse for your typical repetitive operations.

Now let's talk about some theory: what are instructions and what are prompts? Think of an instruction as a set of commands that defines how things should be done, and a prompt as a set of commands that defines what has to be done. See the difference? How versus what. You may ask: can I put everything into a single prompt, first defining how and then defining what, for example putting the instruction first and then my prompt? Well, yes, you can. But in most cases you will have repetitive operations that require you to provide the context of how you want things done. So instead of copy-pasting that context between prompts, you are better off separating them: you save the instructions in one file, and then the prompts reference those instructions, pulling in the context about how things should be done and stating what has to be executed. That's the basic idea behind it.

Now let's talk about what types of prompts and instructions Copilot has. Before we jump into the project, a little more theory. Copilot has three types of instructions and two types of prompts. For the instructions, it has workspace instructions, custom instructions, and profile instructions. For the prompts, it has just custom prompts and profile prompts. What's the difference? Workspace instructions are applied automatically to every prompt that you type in the chat window. They provide context with high-level information about your project: what the project is about, how it's structured, what its goal is, what its main components are, how you use it, and all that stuff, without anything specific about what exactly has to be done. It's just a description: what it is, what it does, and so on. Whenever you type a command, the workspace instructions are applied automatically to the request to the AI. Then you have custom instructions: instructions related to specific operations that you want to perform within your framework, which you keep in separate files. And you have profile instructions, which are related to your profile. Profile instructions can be useful if you work with different projects and want a specific type of output from the AI. For example, you want responses to be shorter, or you want a specific style from the AI. That is related only to you, not to the project, and you want to keep it from project to project: no matter which repository you open, this instruction will be applied.
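To tie the how/what split together, here is a minimal, hypothetical pair of files. The file names, paths, and rules are purely illustrative, not from the course project: the instruction defines how, and the prompt defines what and links to it.

```markdown
<!-- .github/instructions/logging.instructions.md (HOW things should be done) -->
# Logging conventions
- Use the project logger, never console.log.
- Log one line per API request: method, URL, status code.

<!-- .github/prompts/add-logging.prompt.md (WHAT has to be done) -->
Add request logging to every API call in the selected file.
Follow the [logging instructions](../instructions/logging.instructions.md).
```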
In this lesson, we're going to cover only two of the instruction types: workspace instructions and custom instructions. For the prompts, it's the same idea: there are custom prompts and profile prompts. Custom prompts are related only to the project, while profile prompts are related to your profile. If you work across different projects, you can have your own prompts saved and reused across all of them. So that's the difference. With all this theory done, and I know it was kind of a lot, let's jump into our project, and I will show you all this stuff in action.

All right, let's get into it. This is our test project, and the first thing you will need to do, if you don't have it yet, is create a folder named .github in the root of your project. This folder will be used to save our prompts and instructions. It is also the default folder for GitHub Actions: for example, if you decide to configure CI/CD with GitHub Actions, your configuration file will also live there. So .github is a pretty common folder in projects. After you create this folder (let me expand it; I already have a predefined structure), we're not going to type anything from scratch. I will show you the prompts and the instructions that I created in advance, and you will be able to find them below this lesson to copy and paste if you want to repeat this on your side. You will also need to create an instructions folder and a prompts folder here. These folder names are the default naming convention for instructions and prompts under the .github folder. You can rename them and override the naming in the VS Code settings, but then you would need to update the default settings; we don't want to do that, so we'll stick with all the defaults.

In the root of .github we also see the file copilot-instructions.md. This is an instruction file, specifically the workspace instructions, the file I mentioned before. Instructions defined in this file are applied automatically to every request that you make in Copilot, and the name has to be exactly copilot-instructions.md. The .md extension means it's a Markdown document, which is the format LLMs understand best for providing instructions, and it's actually very easy to use.

So let's open copilot-instructions.md and let me show you around. As I mentioned, the workspace instructions file defines, at a very high level, what your project is about, what it does, how it's structured, what the outcome of the project is and what it is for, plus some coding and syntax standards. But first, a quick note on Markdown syntax. A single hash sign gives you an h1 tag, a big header. A double hash is an h2 tag, and three hashes make an h3 tag. Double asterisks make text bold. You can also use dashes or numbers to structure lists. And you can add code snippets as blocks with triple backticks, or inline with a single backtick around a code reference. This format is displayed very nicely.
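As a quick reference, here is the Markdown syntax just described, as a generic sketch rather than the actual file contents:

````markdown
# H1 header (single hash)
## H2 header (double hash)
### H3 header (triple hash)

**bold text**

- bullet item
1. numbered item

Inline code: `shouldMatchSchema`

```ts
// fenced code block with triple backticks
const response = await api.get('/articles');
```
````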
If you right-click the file, choose to reopen the editor with the Markdown preview, and click like this, look: it's nicely formatted. Here is our object with all the examples and so on. This is also how you will see the file if you open it on GitHub, because GitHub automatically renders .md files into this nicely formatted view. Let me switch back to the text editor and let's review the file.

Project overview. First of all, I start at a very high level with what this thing is about: a Playwright-based API testing framework designed for testing REST APIs, with features including custom matchers, schema validation, and authentication handling. Very, very high level. Key technologies: the Playwright framework, TypeScript, Ajv for schema validation and JSON schema generation. All right, pretty clear. Then the project structure: all the folders and files and what they do. Then core patterns: the request handler class provides a fluent API, and here is an example of how we write a test in TypeScript, followed by a description of each method that we have. Then custom expect matchers: we note that we use custom matcher assertions and show examples such as shouldMatchSchema, shouldEqual, and shouldBeLessThan. Then the test structure pattern: two imports and then the test body; we describe that as well. Then the authentication pattern. Since our framework has a custom way of automatically managing authentication, it's best to mention that in the instructions: the authentication token is created automatically, the token is included in requests, and you use clearAuth to remove the token. Then a few paragraphs about how schema validation works, and then common development patterns: when creating a new test, import test, use the API fixture, and validate every response; a single test can contain a sequence of several API calls; use camelCase for constant names; assign the API response to a constant; and do not assign the response for a DELETE request. We have to be crystal clear about how we expect this framework to be used. For example, even "assign the API response to a constant": if a response is returned, assign it to a constant, but do not do this for DELETE requests. Most likely Copilot would know this anyway, but we'd better spell it out.

Then there is a separate instruction: when creating a POST or PUT request, we describe that we want a request object file and that the request object should be saved into the request objects folder. Markdown also gives you a way to create hyperlinks: this "request object" link is a hyperlink to a path within our framework. This syntax is very useful because Copilot can follow such internal links. You can link documents together or link directly to specific files in your project, and Copilot will be able to use that link to navigate to the file. We provide the pattern for how a request object should look: import the request object into the test, and clone the imported request object for every test that needs it using structuredClone. Remember what we talked about before: we need a unique object for every test to make sure tests don't overwrite each other's data. And we provide the examples: look, example number one, example number two, example number three. Examples are super important for setting up the context. The final thing is the test example: an example test template, which is just a template of how we typically write a test. This file will be attached to absolutely every request that we type in the Copilot chat window.
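To give a feel for the shape of such a file, here is a heavily condensed, illustrative sketch of a workspace instructions file. The section names follow the lesson, but the method names, paths, and code are assumptions, not the actual course file.

````markdown
# Project Overview
Playwright-based API testing framework for REST APIs, featuring custom
matchers, schema validation, and automatic authentication handling.

## Key Technologies
Playwright, TypeScript, Ajv (schema validation and generation)

## Core Patterns
Requests are built with a fluent request-handler API:

```ts
// method names are illustrative
const response = await api
  .path('/articles')
  .params({ limit: 10 })
  .getRequest(200);
```

## Common Development Patterns
- Validate every response; use camelCase for constant names.
- Do not assign the response of a DELETE request to a constant.
- Clone imported request objects per test with `structuredClone`.
````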
With all this in place, Copilot will understand the context of what's actually going on. Moving on. Next we have the instructions folder and the prompts folder. How can those be used? For the instructions, remember when we tried to create schema validations for tests where the schemas did not exist? We wrote a prompt that big, and it worked, but not completely: as far as I remember, for one of the tests the prompt didn't work as expected, and the test was not updated the way the prompt specified. That happened simply because we did not provide enough clarity in the description. But if you provide a separate instruction for how to work with schemas, this will never happen again.

Look at this example of a schema instruction file. At the very top you can set applyTo, which defines the scope: which types of files this instruction applies to. In our case we want this instruction applied only to the spec files and nothing else. This particular instruction defines, in great detail, what we want: required, add schema validations to all requests that return a response body; exclude DELETE requests; and skip API requests that already have shouldMatchSchema. Then comes a detailed description of how shouldMatchSchema works: argument number one, argument number two, argument number three, and what each one does. An implementation pattern, boom, an example. Code placement: we say to add schema validation immediately after the API request and place it before any other response assertions. This is how we want the formatting done, and we make sure to use async/await syntax with expect. And examples again: GET, POST, PUT, and DELETE. Notice that this file defines only how we do schema validation. It doesn't say anything about what to do; it only provides the context: hey, we have this method, this is how the method works, and this is how you should use it.

Once you have described this instruction, you can create custom prompts for what to do with it. I have created two prompts. Let's start with the simple one that we did before: add schema validation. You can see this one is significantly smaller, because you no longer need to provide all that big context; your instruction handles the context, and the prompt covers only what exactly has to be done in relation to the instruction. So here: add shouldMatchSchema. I start my prompt specifically with what I want to do: add shouldMatchSchema validation to all API requests in the file that return responses. Right? That's what we want. Then I add: follow the schema validation instructions, with a direct link. You see, it's an internal link to the file inside my project, specifying which instruction has to be used. When you use this format, the linked instruction is automatically added to the context window once you call the prompt, because the link is resolved. You can add as many links as needed right here to provide as much context as this specific prompt requires. Then the straightforward tasks: add schema validation to GET, POST, and PUT requests; skip DELETE; set the third argument to true for new schema generation; skip requests that already have schema validation; follow the naming convention from the instructions. And a quick example of how things should be done. That's it.
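Here is an abbreviated, illustrative sketch of the two files just described. The applyTo front matter is standard VS Code instruction-file syntax; the file names, argument meanings, and code are assumptions for illustration. First the instruction file, which defines how:

````markdown
---
applyTo: "**/*.spec.ts"
---
# Schema Validation
Add `shouldMatchSchema` to every API request that returns a response body.
Exclude DELETE requests. Skip requests that already have `shouldMatchSchema`.

Arguments: (1) schema folder, (2) schema name, (3) optional `true` flag to
regenerate the schema file. Place the validation immediately after the API
request, before any other response assertions, using async/await:

```ts
const response = await api.path('/articles').getRequest(200);
await expect(response).shouldMatchSchema('articles', 'GET_articles');
```
````

And the matching prompt file, which defines what and links back to the instruction:

```markdown
Add shouldMatchSchema validation to all API requests in the file that
return responses. Follow the
[schema validation instructions](../instructions/schema-validation.instructions.md).
Skip DELETE requests and requests that already have schema validation.
```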
Now you can call this predefined prompt to add the schemas to tests where they are missing. Let's try it. Previously we worked with this smoke test spec file, and currently it has all the schemas created. So let's mess around a little bit: I will just remove some of the schema assertions that we created before. I remove this one, this one, this one, and everything that was there before. Then I will use the predefined prompt to add the missing schemas. Two of the tests keep their shouldMatchSchema assertions, for get articles and get tags, but the rest no longer have a schema. Let's do this. Going back to Copilot, here is how to call a prompt: you just type a forward slash, and look, the predefined prompts automatically become available right here. Currently I have add schema validation and update schemas. Let's run the first one. I click on it, and technically I don't need to type anything else, because the prompt is defined and references the instruction, so it should just do its thing. I hit Enter, and look: the schema validation instruction was added to the context automatically, and Copilot already knows what to do. It processes the request in the standard way, identifies the scope of work, and does the job. Let's wait.

All right, I think we are all set. Let's check it out. The existing schemas were not touched; this assertion was left as is. But where the validations did not exist, it added them: this one, right here, right here, right here. And it followed exactly the conventions and rules that we provided in our instructions and in our prompt. Everything was done perfectly. So this is how this stuff works. Let me keep all those changes, and we are all set.

Now let's ask it to remove all the true flags. For that, we only need a simple prompt, because the schema validation instruction, you see, is already attached. By the way, if you need to attach an instruction without a custom prompt, you can do that as well: just click Add Context right here, click Instructions, and look, the schema validation instruction is available. Now I can ask: remove true from all assertions. Hit Enter, and it will figure out that true has to be removed only from the shouldMatchSchema assertions, because it knows this request is related to schema validation. Let's wait. All right, everything is done, and it did everything correctly: all the true flags were removed. I keep those changes, and we have effectively gone back to the initial state of this spec file.

Now let me show you a more interesting example. Imagine you have 100 tests in your application, you run your framework, and you find that about 20 or 30 tests fail schema validation and the schemas have to be updated. What you would have to do manually is go to each of those 20 tests, change the third argument to true to regenerate the schema, run the test so the schema file gets updated, then remove the true, and run one more time to validate that the updated schema works correctly. Kind of a lot of work, right? It's a long workflow. Can we ask Copilot to do this kind of maintenance work for us? Well, yes, we can. Let me show you how.
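Concretely, the maintenance loop toggles the third argument of each failed assertion. A hypothetical before and after (the method and schema names are modeled on the course framework, not copied from it):

```ts
// Before: validation fails because the stored schema is stale
await expect(response).shouldMatchSchema('articles', 'POST_articles');

// Step 1: add `true` as the third argument to regenerate the schema file
await expect(response).shouldMatchSchema('articles', 'POST_articles', true);

// Step 2: once the schema is regenerated and the test passes, remove the flag
await expect(response).shouldMatchSchema('articles', 'POST_articles');
```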
Going back, all I do is create a new prompt, which I call update schemas. Let me open it for you to read. And look, this prompt defines exactly the workflow I just described, step by step. Again, I refer to my schema validation instructions, so Copilot will know what the schema is about, how it works, and all that fun stuff. Then I just provide the task: hey, do this. Run the tests using npx playwright test. Read the terminal output to identify schema validation failures. Parse the failure messages to extract the test file names with schema validation failures and the specific shouldMatchSchema calls that failed; that's what we want to look for in the terminal. Update only those failed schema validations by adding true as the third argument. Run the tests again to regenerate the schemas and verify the fixes. Remove the true flag from all shouldMatchSchema calls that were updated in step four, right here. And then run the tests again to validate that all tests pass. So this is the complete workflow we just described for updating the schemas. Then some additional instructions. Pattern recognition: look for error patterns like a schema validation error, test file paths in the stack trace (for example, like this), and shouldMatchSchema method calls in the error output, plus some implementation examples: only change failed validations from this to this. And a few more requirements: read and analyze the complete terminal output, only modify schema validations that actually failed, leave successful schema validations untouched, and target only the specific test files mentioned in the failures. That's it. This is a workflow you may need from time to time for framework maintenance, right? So let's have Copilot do it for us.
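Condensed for reference, an illustrative sketch of such an update-schemas prompt file (the step wording follows the lesson; the link target and formatting are assumptions):

```markdown
Follow the [schema validation instructions](../instructions/schema-validation.instructions.md).

1. Run the tests using `npx playwright test`.
2. Read the terminal output to identify schema validation failures.
3. Parse the failure messages to extract the affected test files and the
   specific `shouldMatchSchema` calls that failed.
4. Add `true` as the third argument to only those failed validations.
5. Run the tests again to regenerate the schemas and verify the fixes.
6. Remove the `true` flag from every call updated in step 4.
7. Run the tests again to validate that all tests pass.

Only modify schema validations that actually failed; leave passing
validations untouched.
```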
So let's test it. I will mess with some of the schemas. In the response schemas for articles, let's take post articles, and let me change favoritesCount from integer to string, so this schema is guaranteed to fail. One more thing: I will temporarily comment out these two assertions. Why am I doing this? Because this test would fail anyway: we will be creating articles, and since the created article will not be deleted, the article count will grow, and the assertion that we have no more than ten articles would fail. I don't want that to interfere, so I temporarily comment out those steps, and that's it. Let's run this workflow. Going back over here, I create a new chat window and call the update schemas prompt. What's going to happen now? Copilot will run all my tests, identify which ones failed, regenerate and update the schemas, and validate that the update was successful. Let's run it.

Okay, it's working. You see the schema validation instructions were added automatically again. The first step is npx playwright test. Let's go. First it runs the tests. Let's see. And yeah, two tests failed. Now it's going to look at what actually failed. Okay, it found the two failing tests. Good. Now it's going to work on fixing them. All right, it identified exactly which assertions have to be updated: post articles, true, and post articles, true. Okay, continue. It's running the tests; this should update the schemas. All right, this step passed successfully. Great, passed. Moving on to the next step. Now it has removed all the true flags, and we need to validate one more time that the updated schemas work. Running the tests again. And the tests passed successfully. Finalizing the workflow. Perfect: all tests are passing. It identified the failures, added the true flags, verified that all tests pass, and the prompt completed successfully.

So this is how you can leverage AI for your repetitive tasks and workflows: by providing detailed instructions and detailed prompts, you can automate the routine stuff you may need for API automation. And one more thing. You saw those quite long, detailed instructions and prompts, right? The thing is, I didn't write them by hand; AI generated them. I simply provided the entire application to the AI and said: hey, generate the instructions for this application, for this framework. About 70% of what you saw on the screen was actually generated by AI. Then I reviewed it, corrected things, and removed the fluff that was not needed; AI often produces repetitive, unneeded things, so you remove all that fluff and add what is missing. So roughly 70% of these prompts and instructions were generated by the AI.

One more point: when you're designing prompts and instructions, make sure that your different instructions and prompts don't have internal conflicts. If you define some rule or behavior in, say, a workspace instruction but describe it differently in a custom instruction, you may have a conflict, and the AI may behave unpredictably or give you some weird result that you don't expect. And the thing is, the AI will not tell you what went wrong; it will just give you a result based on what you provided. So it's a good idea to feed your files back to the AI and ask: hey, check my instructions and prompts. Are there any conflicts? Show me the lines that might be unclear. That way, the AI can look into your instructions and prompts and say: hey, this paragraph or this sentence is not clear, or it contradicts that sentence in this instruction; can you correct those? And you go: oh, okay. You update them and make everything crystal clear.

All right, guys, I know this was a pretty long lesson, but now you know the difference between instructions and prompts, how to save them, and how to leverage prompts and instructions to automate workflows in your test application. All right, that's it, and see you in the next lesson.