Copilot Ask Mode and Context Window | Bondar Academy
Course: Playwright API Testing with TypeScript
Module: AI-Assisted Scripting with Copilot
Instructor: Artem Bondar
Lesson Summary
In this lesson, we explore the functionality of Copilot Chat, which offers three main modes: ask mode, edit mode, and agent mode.

Modes Overview
- Ask Mode: Allows users to ask questions about their project without modifying any code. Answers are based on the current context of the application.
- Edit Mode: Enables users to ask questions and allows Copilot to make modifications to the code, which can then be accepted or rejected.
- Agent Mode: A more interactive mode where users assign high-level tasks to Copilot, such as creating configurations or writing tests.

Model Selection
Users can choose from various models, with Claude 3.5 being a recommended free option. For better results, users can opt for Copilot Premium, which provides access to more advanced models like Claude 3.7.

Context Management
Providing context is crucial for effective responses. Users can:
- Use the active file as context.
- Add additional files or folders to the context.
- Utilize chat participants such as @workspace to enhance the quality of responses.

It's important to balance the amount of context provided to avoid LLM hallucination and ensure relevant answers. For instance, if analyzing test files, include only the tests folder in the context.

In summary, understanding how to effectively use context in Copilot Chat is essential for maximizing its capabilities while minimizing irrelevant output.
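To make the context-budget idea above concrete, here is a rough, hypothetical sketch (not part of the lesson) of how you might estimate how many tokens a set of files would consume, using the common rule of thumb of roughly 4 characters per token. The file names and sizes are invented for illustration.

```typescript
// Naive token estimate: ~4 characters per token is a common rule of thumb.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Hypothetical file contents; in practice these would be read from disk.
const files: Record<string, string> = {
  'tests/smokeTest.spec.ts': 'x'.repeat(8_000),
  'tests/articles.spec.ts': 'x'.repeat(12_000),
  'playwright.config.ts': 'x'.repeat(2_000),
};

// Sum the estimate over the files you plan to attach as context.
const total = Object.values(files)
  .reduce((sum, content) => sum + estimateTokens(content), 0);

console.log(`~${total} tokens of context`); // ~5500 tokens of context
```

Attaching only the tests folder instead of the whole workspace keeps this number well inside the model's context window, which is the balance the lesson recommends.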
Video Transcript
Hey guys, welcome back. In this lesson, we're going to start using Copilot Chat. So let's get into it. All right, so Copilot Chat is opened on the left, maybe it's on the right side for you, and let's quickly overview the interface of the Copilot Chat. So here at the bottom, Copilot Chat has a few different modes: ask mode, edit mode, and agent mode. So what's the difference? Ask mode is the mode where you can ask questions to the LLM, including the context of your application, and the chat will just give you answers related to your project, but will not change anything in your project. When you switch to edit mode, you can ask questions about your project, and Copilot can modify your code in the file that you're currently working on, and then you decide to accept or reject the changes. And agent mode is kind of a more interactive mode of Copilot, where you give some high-level tasks to Copilot. Hey, let's, I don't know, update something for me, or create some configuration, or write a new test suite with these parameters, and it will go and create the needed files, update the existing files, write terminal commands for you, and do everything that's needed to complete the task. So it's kind of a more, not aggressive, but more interactive way of using Copilot Chat. So let's start with ask mode as beginners, just to get a first feeling of how it works. And what else? You can also choose the model. Currently, I selected Claude 3.5. In ask mode, you have more models, as you can see, but if you switch to agent mode, you have just three models. In ask mode, we additionally have o3-mini and Gemini Flash to work with images. I would prefer to use Claude 3.5, and I'm talking about the free models, or GPT-4.1, either of those. If you want a better result, a better context window, and we'll talk about that in a second, you need to sign up for Copilot Premium.
You will have fewer limitations on how many prompts you can use inside of your project, and you will also have access to more advanced models, such as Claude 3.7. But for us, 3.5 is gonna work just fine. What else? You can also provide the context for your requests. By default, you see the selected context is the file that is currently active. For example, when I'm switching between the spec files, you see the name of the file is also changing. So if I now ask any question to Copilot, to the LLM, it will do its best to provide the answer based on the context of the file that's added to the context. I can remove the context completely if I click on this little eye over here; then this request will be completely contextless. So Copilot will not know anything about what we are talking about, anything about our project. You can add more files to the context by clicking the Add Context button, and either add files or folders. You can select, for example, the response schemas, maybe you want to add them to the context, or let's say you want to add the entire tests folder. In this case, Copilot will know about all the files in the folder and so on. And you can use participants, chat participants, as they call them, in ask mode to use the entire workspace information. So you can select, for example, @workspace /explain, or just @workspace, and then ask a question related to the workspace. And that actually matters, because the quality of the responses depends on how much context you provide to the LLM. Let me show you an example. So let's say I will ask a question. And by the way, using ask mode is useful when you are, let's say, exploring a project. Let's say you cloned a repository and don't know what this project is about. You want to get an idea of how it works, maybe to create some plan for how to use this project. Ask mode is the safest mode to just brainstorm within your project.
The other modes are for when you actually want to modify the code, while ask mode is for this purpose. Let me show you an example. So let's say I click right now on smoketest.spec.ts, currently this file is in the context, and I will type something like: explain the architecture of this framework. Let's see what's gonna happen. So it's thinking and giving me some answers: test runner, custom API client, supports HTTP requests, custom assertions. It has data management through Faker. It found some structure like this. So you see, it was only able to pull the structure of the framework based on the imports that we provided over here. It does not see anything deeper. Builder pattern, fixture pattern, object pattern, reusability, and that's pretty much it. So it kind of gave us an answer and some explanation, but not deep enough. And now let's ask exactly the same question with the entire context. Look, when we asked the question, it used one reference. If you drop this down, you see that this file was used as the reference. Now let's ask the same question, but we'll add @workspace at the beginning, like this, and send this question one more time. Right now it used 18 references. So it looked into other files, and now look, this is a significantly more intelligent answer. It's based on the testing framework, custom test and expect, the Ajv library for schema validation. Then we have a request handler, API logger, schema validator. It explains what everything does. Project structure: we have custom assertions, an example of how we design the tests, how we do the assertions, data generation, configuration, support for different environments, where we configure environments. And it also gives you the references: okay, the configuration is done here, the data generator is done here. You see a significantly, significantly better answer. So here is what you need to understand about the context.
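As one illustration of the "builder pattern" the workspace answer identifies, a fluent request handler might look roughly like the following sketch. The class and method names here are invented for illustration, not taken from the actual framework; the real request handler would issue HTTP calls through Playwright's API, while this stub just assembles a request description.

```typescript
// Hypothetical sketch of a fluent request-handler builder, the kind of
// pattern Copilot described in the framework. Names are invented.
class RequestHandler {
  private baseUrl = '';
  private apiPath = '';
  private requestHeaders: Record<string, string> = {};
  private payload: unknown = undefined;

  url(base: string): this { this.baseUrl = base; return this; }
  path(p: string): this { this.apiPath = p; return this; }
  header(key: string, value: string): this {
    this.requestHeaders[key] = value;
    return this;
  }
  body(b: unknown): this { this.payload = b; return this; }

  // In a real framework this would perform the HTTP call; here we
  // only return the assembled request description.
  build() {
    return {
      url: this.baseUrl + this.apiPath,
      headers: this.requestHeaders,
      body: this.payload,
    };
  }
}

const req = new RequestHandler()
  .url('https://api.example.com')
  .path('/articles')
  .header('Authorization', 'Token abc')
  .body({ title: 'Hello' })
  .build();

console.log(req.url); // https://api.example.com/articles
```

The fluent chain is what lets tests read almost like the request they describe, which is also why an LLM can infer this structure just from the imports and call sites in a single spec file.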
Every LLM has a limited context window. The cheaper or the earlier the version of the LLM, the smaller the context window. So think about the context window as the operating memory in your computer. It's limited. And remember, if you open 50 tabs in the Chrome browser on an old laptop, what's gonna happen? Your laptop starts struggling. It works slower, responds slower, glitches, and so on. The exact same thing happens with an LLM. When you load too much into the context window, the LLM can start hallucinating, providing you false responses and so on. If you don't provide enough context, you may receive vague responses, just some fluff without real useful stuff. So when you work with LLMs, you need to find the right balance in loading the context: how much you need to provide and how much you can provide. You don't need to load the entire workspace for every question every time. That way you run out of your context window very fast, and then the next answer will be quite fluffy. You want to provide just enough context to answer the question you're asking. For example, if you want to analyze just the test files and ask, I don't know, which tests you have, the number of tests, and so on, then instead of providing the workspace, you can click Add Context, then Files & Folders, and only add the tests folder, that's it. The questions will then be answered only within this context. If you want to ask questions related to configuration, you can do something like this. For example, you can open this configuration file, and this configuration file; these are both configuration files. Close these other ones, so currently both of those are open. Then go to Add Context, and here you can select Open Editors, like this.
And when you select Open Editors, all files in tabs that are open right now inside of your editor are automatically added to the context, and you can ask questions related to them. Well, that's pretty much the main functionality. We're not gonna use ask mode much over here; we mostly need the agent to actually write the tests, to speed up writing API tests, but that's like context 101 for you. When you want to get a good result, provide just enough context, but don't overload it, to minimize LLM hallucination. All right, we're done with this section, and let's see you in the next one, where we actually start writing some tests using AI.