In this chapter we're going to add tools to our AI agent. The first is the Terminal tool, which will allow the agent to run commands. The second is the Create or Update Files tool, which, as its name says, will allow the agent to create or update any files within its environment. And finally, we're going to have the Read Files tool, which will let the agent read files. We're then going to add a completely new prompt for our agent, and after that we're going to implement the agent network and the router.
So we are going to heavily rely on AgentKit by Inngest. You can use the link you can see on the screen or the link in the description to let them know you came from this video. So in here, what we are going to do is add some tools. Tools extend the functionality of agents, whether for structured output or for performing tasks. For example, they can call code, enabling models to interact with systems like your own database or external APIs like E2B.
So let's go ahead and create a very simple tool which will allow our agent to interact with the terminal. The first thing we're going to do is ensure we're on the main branch and synchronize our changes; just confirm your last merge was the e2b-sandboxes branch. After that, let's go inside of src/inngest/functions.ts. Now in here, we're going to do the following. After you create the code agent, go ahead and do the following.
Right after your model, add tools. Open this array and let's create a tool using createTool, which you can import from @inngest/agent-kit. In here let's go ahead and give this tool a name. It will be called terminal. Add a description.
Use the terminal to run commands. And add the parameters, which the AI will pass to this tool. It will be a very simple command, which is a type of string. After you have added the parameters, add the handler method: extract the command from the first argument and extract the step from the second argument. In here you are going to return the step execution.
await step?.run. The reason we need the question mark (optional chaining) is that step can be undefined. So let's go ahead and run a step called "terminal". It's going to be an asynchronous method inside of this step. And the first thing we're going to do is create an object called buffers.
Inside of here we are going to set stdout to be an empty string and stderr to be an empty string as well. Now let's open a try/catch block. Inside the try, let's get our sandbox using await getSandbox() and pass in the sandbox ID. Then let's grab the result of await sandbox.commands.run() and pass in the command. So we are reusing our getSandbox method from utils.
I also imported Zod, so make sure you add that as well. And we are basically doing the same thing we're doing right here: we are establishing the connection with our sandbox using this simple util, so we don't have to repeat it every time. And now what we are doing is running a command. You can learn more about this by going inside of the E2B documentation and reading about commands.
Let me see if I can find that here. Here we have the commands. So this is how you basically run commands in your environment. Perfect. So now let's go ahead and define some more settings here.
So after we run the command, let's define onStdout: grab the data, which is a type of string, and append it to buffers.stdout. And the same thing for onStderr: data is a type of string, and in here simply do buffers.stderr += data. So we are collecting all of the output of the terminal command in this object.
So we are going to know if the terminal command succeeds or if it fails. That's why it's important to keep track of the result. And then let's go ahead and return result.stdout. Then, in the catch block, let's extract the error. First let's do console.error simply so we see this in the terminal.
Open backticks (a template literal) and write "Command failed:", then render the error. You can use \n to break into a new line so it's more readable. stdout will be buffers.stdout. Then break the line again: stderr will be buffers.stderr.
So that's going to be the console log. And then you're going to return the exact same string. This will basically tell the agent what went wrong, with additional information from stdout and stderr. So yes, just make sure that you don't accidentally type this incorrectly, as it is important for the AI to understand what's going on. And that is our first tool.
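Putting those pieces together, the handler's buffering logic looks roughly like this. This is a minimal sketch that can run standalone: the injected `run` function and the name `runTerminalCommand` are my stand-ins, not the tutorial's actual code, and in the real tool you'd call `sandbox.commands.run(command, { onStdout, onStderr })` on a sandbox obtained via getSandbox().

```typescript
// Sketch of the terminal tool's handler logic. The `run` parameter stands in
// for sandbox.commands.run(command, { onStdout, onStderr }) so this snippet
// is self-contained.
type RunCommand = (
  command: string,
  opts: {
    onStdout: (data: string) => void;
    onStderr: (data: string) => void;
  },
) => Promise<{ stdout: string }>;

async function runTerminalCommand(
  command: string,
  run: RunCommand,
): Promise<string> {
  // Collect everything the command prints, so that on failure we can hand
  // the agent the full context for its retry.
  const buffers = { stdout: "", stderr: "" };

  try {
    const result = await run(command, {
      onStdout: (data) => {
        buffers.stdout += data;
      },
      onStderr: (data) => {
        buffers.stderr += data;
      },
    });
    return result.stdout;
  } catch (e) {
    // Returned (not thrown), so the model reads the failure as tool output
    // and can adjust the command on the next iteration.
    const message = `Command failed: ${e}\nstdout: ${buffers.stdout}\nstderr: ${buffers.stderr}`;
    console.error(message);
    return message;
  }
}
```

The key design choice is returning the failure text instead of throwing: the agent sees exactly why the command failed and can retry with, say, an extra flag.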
Our agent now has the ability to use the terminal. It uses the sandbox API for this, and it will keep the result of the terminal execution: either a success message or an error with detailed information about what happened. Commands can fail, and thanks to Inngest they will automatically be retried, but now with the context of what happened. If you've been building Next.js apps with me for some time, you know that sometimes when we install a package which doesn't support React 19, it fails because we need to add --legacy-peer-deps.
So if that happens here, it will first fail, then the AI will read the message and say, oh okay, I need to add --legacy-peer-deps, and Inngest will automatically retry the terminal step with that new information, and then it will succeed. That's how powerful Inngest background jobs are, and now their amazing AgentKit. So now that we have finished the terminal tool, let's go ahead and create a new tool. This one will be called createOrUpdateFiles. The description will be... and let me just check that I'm doing this correctly here.
I think it needs to go here, with a comma after the first tool; there we go. So just make sure you are adding it at the end of this createTool bracket right here. The description will be: create or update files in the sandbox. Let's go ahead and add the parameters. It's going to be an object, and it will accept files. Files are going to be an array of objects, and inside we're going to have path, which is a type of string.
And content, which is a type of string as well. And that's the only thing we will accept. So now we can build our handler method. Inside of this handler method, let's go ahead and extract a few things. The first will be the files.
And the second thing will be the step and the network. Now let's get the newFiles by doing await step?.run("createOrUpdateFiles", ...), open the asynchronous method here, and open a try/catch block. Let me just add the catch here; there we go. Inside the try, let's create updatedFiles by first looking at network.state.data.files, or an empty object if there are none. Then let's get the sandbox: await getSandbox() and pass in the sandbox ID.
And now, for (const file of files), do await sandbox.files.write(file.path, file.content), and then updatedFiles[file.path] = file.content. So basically, when the agent gets access to the createOrUpdateFiles tool, it will give us structured input describing the files it just created. So imagine this first part, which we already have. We've already seen it create a few files, I think.
There we go. So here's a simple button component. And then it just returns JSX. So now it will do exactly that, but it will recognize the input this accepts. So what it's going to do is it's going to return back an object.
Let me try like this. It's going to return an object like this: the key will be app/page.tsx, and the value will be, say, a paragraph with "app page" in it. This is how it's going to look. And then the same thing for any new components. This is what it's doing now.
And then we're going to iterate over each of those and write them to the sandbox file explorer using sandbox.files.write, which is a similar API to sandbox.commands. So that's how we know which file to write where. And then we just keep track of updatedFiles internally in our network state, simply so we can later tell the user which files were changed. Because technically, we could just ask it in the prompt: hey, when you finish, also tell me each file you changed. But you can't really rely on that, because the AI has a token limit.
It can only talk for so long. But for this part you can rely on this, right? So each file it actually writes in the sandbox, in the file explorer, we are going to save. We are going to keep track of it. And the reason we are choosing the format of an object rather than an array is that this way it is very easy to overwrite any files if they change when this step is invoked again.
Because this step can be called 50 times for all we know. That's why we are choosing an object rather than an array. So we can just simply overwrite any path if it changes. And then let's go ahead and do return updated files. And then in the error here, let's return error and simply render the error.
Perfect. And then outside of this, if typeof newFiles === "object", set network.state.data.files = newFiles. So why are we checking that this is an object?
So newFiles is basically the return value of this step. It can either be an object or a string (the error message). So we check that it is an object, and only then do we store it into our internal network state. Perfect. So that's another tool finished.
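To see why the object-keyed-by-path shape makes repeated calls safe, here's the merge step in isolation. This is a sketch; `applyWrites` is a name I made up for illustration, and in the real handler the sandbox write happens inside the loop.

```typescript
// Why an object keyed by path beats an array: writing the same path twice
// simply overwrites the previous entry instead of appending a duplicate,
// no matter how many times the step runs.
type FileMap = Record<string, string>;

function applyWrites(
  existing: FileMap,
  files: { path: string; content: string }[],
): FileMap {
  const updated: FileMap = { ...existing };
  for (const file of files) {
    // In the real tool, sandbox.files.write(file.path, file.content) runs here.
    updated[file.path] = file.content;
  }
  return updated;
}
```

So even if the agent rewrites app/page.tsx fifty times, the state always holds exactly one entry per path, with the latest content.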
Now, for our last tool, readFiles, let's go ahead and add a handler method again, an asynchronous method, and in here we are going to return await step?.run(). We just need to extract the step from here, so let's do that first: extract the files from the first argument, and then extract the step from the second argument. Be mindful: if you forget to do this, you won't get an error, simply because we have a step defined elsewhere. We have it here. So be careful. You always have to extract the step from the tool, because it holds different context.
So step?.run("readFiles"), asynchronous, and in here open a try/catch block. In the try block, connect to the sandbox: await getSandbox(sandboxId). In here, open the contents array. And for (const file of files), push to that contents array.
So const content = await sandbox.files.read(file). So we read the file, and then we simply push the path of the file and the content next to it. In here, it doesn't really matter how we store this data, because this is not for us. This is for the AI if it needs to read data. So yeah, let's just fix this path.
So if we instruct the AI agent in our prompt to read before it does something, it's going to use this. Why would it need to read something? Well, so it doesn't hallucinate, right? So we tell it: if you attempt to use a shadcn component, make sure to read inside of the components folder first. Then it's going to use this tool to attempt to read a file.
And if the file doesn't exist, it's going to say: oh, okay, then I need to create it, or I need to use something else. It's basically not going to hallucinate or assume a component exists. That's why this step is quite useful, and also why we don't really care about the format too much, because the AI can read various formats here. And if this fails, we return an error like this. And that is the last tool that we need.
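The readFiles handler boils down to a read loop plus a serialized result. Here's a self-contained sketch: the `read` parameter and the name `readSandboxFiles` are my stand-ins so the snippet runs on its own; in the real tool you'd call `sandbox.files.read(path)` after connecting with getSandbox().

```typescript
// Sketch of the readFiles handler: read each requested path and return a
// JSON string of { path, content } pairs for the model to consume.
async function readSandboxFiles(
  files: string[],
  read: (path: string) => Promise<string>,
): Promise<string> {
  try {
    const contents: { path: string; content: string }[] = [];
    for (const file of files) {
      const content = await read(file);
      contents.push({ path: file, content });
    }
    // The exact format doesn't matter much: this output is for the AI,
    // not for us, and models can parse various formats.
    return JSON.stringify(contents);
  } catch (e) {
    // A missing file surfaces as an error string the agent can react to.
    return "Error: " + e;
  }
}
```

Note that a failed read is returned as a string rather than thrown, so the agent learns "this component doesn't exist" instead of the step just crashing.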
What we have to do next is we have to update our prompt and tell it that it can use these tools. I have prepared a prompt for you in my public GitHub of assets. You can use the link you can see on the screen or the link in the description to access it. Now be mindful of something. I am not a prompt engineer.
I have no idea if this is a good prompt or a bad prompt. I generated it using AI itself, so I assume it's okay. I have found it to work quite well for my use case, but you are free to modify it however you want. I started out very simple, almost just one or two lines, and then I had to add more and more instructions until it understood things very well.
And I found this to be a very, very good starting point, if nothing more. But the cool thing about this project is that your app can get twice as good just by switching to a newer model. So if OpenAI or Anthropic release a new model, all you have to do is use that new model, and your app is suddenly twice as good. That's the cool part about working with AI.
So copy this prompt from my assets. Let's go inside of src, create prompt.ts, and paste it there. I'm going to go over it briefly just so you understand what it does. So: you are a senior software engineer working in a sandboxed Next.js 15.3.3 environment. Why that environment?
Well, because that's what we defined earlier. So I'm telling it exactly where it is running. Then I tell it about its tools: you can write files with createOrUpdateFiles; you can execute commands via the terminal.
I also tell it to use non-interactive flags like --yes, simply because it's not a human: we must never get into a position where the terminal waits for human input. Then I tell it: you can read files with readFiles. So those are the first three instructions that I give it. I then tell it some general rules.
Don't modify package.json or lock files directly; you can install packages via the terminal, but don't edit these files by hand. I tell it the main file is app/page.tsx. I tell it that all shadcn components are pre-installed and how they are imported. And then I give it some hard rules, like: you must never add "use client" to layout.tsx, which must always remain a server component.
I tell it to never create any CSS or SCSS files, styling must be done strictly with Tailwind. And basically some rules like that. So you can of course tweak this if you think you can modify it a little bit. Of course, Maybe I will modify it during this tutorial. But basically it's just a bunch of rules that I have added after I experienced it fail.
So after I saw it do something incorrectly, I added a new rule for it. And this is important: the final output. After it has fully completed, I instruct it to return a specific format, a task summary containing a description of what it just did. It needs to end with this, and you're going to see why in a second. This is super important, and this is why I'm very strict about it: this is the only way to terminate the task. If it omits or alters this section, the task will be considered incomplete and it will continue unnecessarily.
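For reference, the overall shape of the file is something like the sketch below. This is an abbreviated reconstruction of the structure described above, not the full prompt (grab the real one from the assets repo), and the exact wording is my paraphrase.

```typescript
// src/prompt.ts -- abbreviated sketch; the real prompt has many more rules.
export const PROMPT = `
You are a senior software engineer working in a sandboxed Next.js 15.3.3 environment.

Environment:
- You can write files via the createOrUpdateFiles tool.
- You can run commands via the terminal tool; always use non-interactive flags
  (e.g. --yes) because no human is available to answer prompts.
- You can read files via the readFiles tool.
- The main file is app/page.tsx. Shadcn components are pre-installed.

Rules:
- Never modify package.json or lock files directly; install packages via the terminal.
- Never add "use client" to layout.tsx; it must remain a server component.
- Never create .css or .scss files; style strictly with Tailwind.

Final output (MANDATORY): when the task is fully complete, end your response with:
<task_summary>
A short description of what you created or changed.
</task_summary>
This is the only way to signal completion. Omitting or altering this section
means the task is considered incomplete.
`;
```

The final-output section is the load-bearing part: the lifecycle and router we build next key off that task summary marker to know when to stop the loop.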
So now that we have our new prompt, let's go inside of inngest/functions.ts, change our code agent, and break the openai() call out a little bit. What we are going to do is change this model to gpt-4.1. Then, in defaultParameters, set temperature: 0.1. Now, if you're using a provider that is not OpenAI, this option might not exist, and that's completely OK.
You don't have to add this. What temperature means is randomness: the larger the number, the more random the output will be. And when it comes to generating UI, I want it to be deterministic and reliable rather than completely random, but I give it a tiny bit of randomness.
So if you're using Grok or Anthropic, you probably don't have this, and that's completely okay. You can even do without it in OpenAI. And the reason I changed the model is that 4.1 is much, much better at generating UI than 4o. For Anthropic, the better model is Claude 3.5 Sonnet.
For Grok or Gemini, I simply don't know. And as I said, with Gemini I had problems running these tools. You can try, but for me, I just got errors. So now let's go ahead and modify the system option here to use our PROMPT constant, which we just added in prompt.ts.
So just make sure it's this one. Perfect. Now let's go ahead and let's add a slight description here. An expert coding agent. There we go.
So I'm just going to bring back this temperature 0.1. If you're using OpenAI, you can add this, so both you and I will get similar results, I hope. Great. Now that we have these tools, we are still not ready to try them out just yet. Because what we have to do now is we have to add a lifecycle here.
So after the array of tools ends, so make sure that you find where the tools end, and go to the bottom here. We're going to add a lifecycle. And in here, we're going to get onResponse. And from here, we're going to get the result and network. And now what we're going to do here is we are going to check if the last message that is in this cycle, because this is a cycle, right?
This is not going to be linear. It's not going to go using the terminal, then create or update files, and then read the files, and we're done. No, it has access to all three tools equally, and it will create its own plan. It might use them 50 times in a row. That's why in the prompt we tell it when you are finished, when you know that you're done, go ahead and return the task summary.
So now in the lifecycle we're going to extract the last message from the assistant and check if that message includes the task summary. If it does, we will break the cycle, and then we can do the steps where we actually show the user what it generated. In order to implement that part, we have to go inside of our Inngest utils, where we have getSandbox. Let's export a function: lastAssistantTextMessageContent. It will accept a result, which is a type of AgentResult from @inngest/agent-kit.
Let's go ahead and do const lastAssistantTextMessageIndex, which will be result.output.findLastIndex(): extract the message and find the index of a message whose role is "assistant". So we know: OK, this is what the assistant said last. Then let's extract the actual message at that index: result.output[lastAssistantTextMessageIndex] as either a TextMessage (again, from @inngest/agent-kit) or undefined. So it might not even exist, right?
And then let's return message?.content, and we're going to add a ternary below it. If typeof message.content is "string", we simply return message.content. Otherwise we do message.content.map(), take each inner content, and return that inner content's text.
And then we join it all into a single string. Then finish the outer expression by adding ": undefined" at the end. Like that. Perfect. I'm going to pause the screen just so you can double-check your code.
Now let's go back here and use that inside of here. So I'm going to do const lastAssistantMessageText. And in here, we're going to call our lastAssistantTextMessageContent, which we can import from the utils, and pass in the result like this. And let me just see if I did something incorrectly here.
So, lifecycle on response. I have to end this here. There we go. Perfect. And then we're going to check if lastAssistantMessageText.
And if we have the network: if lastAssistantMessageText includes task_summary, add lastAssistantMessageText to the network state under the summary key, like this. Then, outside of this outer if clause, return the result. And let me just see what I did incorrectly here... I think I have to end this, maybe like that; let me just try and fix this quickly. Let's see. I think I don't need this part. Do I need this? Okay, so that was the extra one. Perfect. So basically what we're doing here is extracting lastAssistantMessageText using our util lastAssistantTextMessageContent, which simply finds the index of the last message whose role was "assistant", and then, if it is a TextMessage and its content is a string, it just returns that content.
But there is obviously a special type of message where the content can be an array of text parts. In that case, we simply join it into a single string using a very simple map. Then, once we have that message: if the network is available, and if we have lastAssistantMessageText, and if that message includes the task summary (which is our rule, right, to return it when it's done), we store it in network.state.data.summary and we return the result. And what we will be able to do now is the following: go below this entire code agent and create a network.
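The check inside onResponse reduces to a few lines. Here it is as a standalone function over a minimal network shape; `onResponseCheck` and `NetworkLike` are my names for the sketch, and in the project this body lives inside lifecycle.onResponse and also returns the result.

```typescript
// Minimal shape of the network state this check touches.
type NetworkLike = { state: { data: { summary?: string } } };

// Called after every model response: if the assistant's last message contains
// the task summary marker, stash it in network state so the router can stop.
function onResponseCheck(
  lastAssistantMessageText: string | undefined,
  network: NetworkLike | undefined,
): void {
  if (lastAssistantMessageText && network) {
    if (lastAssistantMessageText.includes("<task_summary>")) {
      network.state.data.summary = lastAssistantMessageText;
    }
  }
}
```

Intermediate messages without the marker leave the state untouched, which is what keeps the loop going.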
The network will be createNetwork, from @inngest/agent-kit, like this. And in here, go ahead and add the following: the name, coding-agent-network; the agents, which will be our code agent, or whatever we called it (we called it codeAgent, so let's just add it here); and maxIter, which will be 15. Now this is a number that will limit how many loops the agent can do. So what is a loop?
As I explained previously, the agent can pretty much do things indefinitely if it wants to. We need to find a way to tell it to stop. So we are doing that currently using the task summary. But there has to be some kind of limit. We cannot really let it go forever.
So I'm gonna say: if you reach 15 iterations, you're doing something wrong. You should have already been done, this is too much, and I have to stop you, because you will use all of my OpenAI credits. That's what maxIter is. Now let's add a router here, which is an asynchronous method, and it can extract the network. Let's read the summary: network.state.data.summary.
And if we have the summary, we're going to break this network. Otherwise, we're going to return the agent. The agent will be code agent. Like this. There we go.
So, this is how we break the loop. If we detect this summary in the network state, we break the network. Otherwise, we return code agent. So the code agent will call itself many times until finally we detect there is a summary. If we detect the summary, we say, great, you are done.
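The router's stop condition is just as small. Here it is sketched as a pure function with the agent as an opaque value; `routeNext` is my name for illustration, and in the project this logic lives in the router option of createNetwork.

```typescript
// The router runs before each iteration: return an agent to keep looping,
// or undefined to stop the network.
function routeNext<TAgent>(
  network: { state: { data: { summary?: string } } },
  codeAgent: TAgent,
): TAgent | undefined {
  const summary = network.state.data.summary;
  if (summary) {
    // onResponse stored the task summary, so the agent declared itself done.
    return undefined;
  }
  // No summary yet: hand control back to the same code agent.
  return codeAgent;
}
```

Together with maxIter, this gives two exits from the loop: the agent's own task summary, or the hard iteration cap.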
Perfect. So now I have a bunch of these errors that I have to fix. So I'm going to go ahead and just see, did I maybe remove an important bracket or something? Because something seems to be wrong here. I'm going to start by trying to reload my window just to see if that maybe fixes it.
Looks like it did not fix it. So I'm going to go ahead and see exactly what I did wrong. Okay, I think I found it. It's all the way up here. Somehow this happened, I'm not sure how.
So create network. There we go. Perfect. And now the only error in our app is this unused network variable. Everything else seems to be working.
No errors seem to be flying around; perfect. So once I have this network, what I can now do is run this network instead of running the actual code agent. So I will remove this now, and do const result = await network.run(), and I will simply pass in event.data.value (I believe it was value that we pass, so let's go ahead and do that). And then in here, this first part can stay the same, and in the result part we can actually do the following: we can show each file that was changed. So next to the sandbox URL, let's do it like this.
So url is the sandbox URL. The title will be, for now, just "Fragment". files will be result.state.data.files. And summary will be result.state.data.summary. Right now these are a type of any; depending on the version you use, maybe they will even become errors.
Don't worry, we will add the types later. So if we've done this correctly, we should have working code now, especially after you change this. And if you add this prompt, I believe this should be working. Now, keep in mind, it is hard to be reliable and deterministic with AI agents. I might get one result, and you might get a completely different result.
And that will actually quadruple if you're using a different AI model than me. So, again, my biggest advice is: use the same model I'm using, use the same OpenAI, even put the same temperature. This will make it much easier for you to get the same result as me. Let's try it out now. So I have my app running here.
And I'm going to go inside of here. And I will create a calculator app. Let's try this. And let's invoke a background job. Let's see, maybe it fails immediately, maybe it works.
We're going to see. Let me refresh the runs. Are they working? Are they not working? Not sure.
OK, it seems like there is some kind of error happening here. So let me just try and debug this. Alright, so what I did is I shut down the npx inngest-cli dev process, I shut down npm run dev, and I restarted both of them. So do that: shut them down and restart them. And let's see what's going on.
So I managed to get the sandbox ID. That's a good start; it means we successfully started the sandbox. And now we are running the code agent, and you can see here that we are running it with all the tools available: the tool to use the terminal, the tool to create or update files, the tool to read files. And now it will use those tools. So there we go, it used createOrUpdateFiles.
Let's see what it created. It created calculator.tsx and then it imported that calculator from that file. At least that's what it says it did. And it returned back the code sandbox URL. Let's check it out.
I'm going to click here. And if it worked, we should now be seeing a simple calculator app. Fingers crossed. And looks like something went wrong. So what happened here?
What happened is that it forgot to use state. But here's the cool part. For you, maybe this didn't even happen. I don't know, right? It can behave randomly.
So how do we fix this thing? Well, we can fix it in two ways. We can fix it by making the prompt even more strict. We can find the places where I mention this. For example, in here, I added: you must never add "use client" to layout.tsx, line 13.
Maybe it's confused because of this. It reads this part and then it gets confused. Let's remove that part. Let's search for "use client" again. File safety rules: again, never add "use client" to app/layout.tsx.
Maybe it gets confused by this. So I'm going to remove this part where it says never add "use client", because it seems like it is avoiding adding "use client" at all in that case. I have another instance of "use client" here, where it says: if building a form or interactive element, include proper state handling and add "use client" at the top. Perfect, that's a good example. Only add "use client" at the top of files that use React hooks or browser APIs.
Never add it to layout.tsx. So I will remove this part as well, simply because I feel it is leading the model away from using "use client" properly. So let's see if our next iteration is better. I will run it again: create a calculator app.
And if it doesn't work, then we can instruct it in this prompt further. Right. And as I said, for you, maybe it worked first try. I don't know. That's kind of the part about building this type of apps.
You sometimes have to simply rely on luck, right? Sometimes the agent will perform very well; sometimes it will perform very badly. And the better these models get, the better your results will be. And as I said, your prompts will also get better with time.
So let's go ahead and see if this was any better. Let's see if it added use client this time and there we go. I have a working calculator app generated by AI. Can you believe that AI has generated this? Now, as I said, I have no idea what kind of result you are going to get, right?
I don't think you will get the same result as me. Yours might have some colors. Yours might not work again. If it doesn't work again, you can try and go inside of here and explicitly tell it: be mindful of "use client" and where it needs to be added. You can tell it that, and then it definitely won't make that mistake.
Or, you know, read through my prompt and see if there's something you don't like here. Or maybe paste my entire prompt inside of ChatGPT and tell it to improve it somehow. For me, I removed those couple of lines about "use client". Perfect. Let's try: build a landing page.
So you can now pretty much, let's say, consider the back-end side finished. From here you can only improve these tools and improve the prompts. What we're going to do next is implement saving this to the database, and implement creating a summary of what it just created, so we can save that to the database and return a message back to the user. And if you want to, you can instruct it to use some package, like a drag-and-drop library or Framer Motion, and then you will see it use the terminal tool.
So let's see if it managed to create a landing page. There we go, and may I say, a pretty good landing page, right? Very impressive. Perhaps you should start with the landing page example, because it doesn't use any "use client" or things like that. I'm very, very impressed by this.
This is better than I expected. So let's try telling it to use Framer this time. Build a landing page, use the motion package. Let's try this. So in here we can see that now it is using the terminal, and we can see the result.
Added three packages, right? So let's see if it actually used that or something else. I think you can also click on the code agent right before it uses the terminal and click on the output. And in here, you can see npm install framer motion. So that's what it runs.
I'm not sure if that's the newest version of Framer Motion. Maybe this won't even work, I don't know, but let's click on Get Sandbox URL. Let's click here. And let's see; maybe it will be broken. Yeah, looks like this doesn't work.
It should use the motion package, not framer-motion. So as I said, it's not perfect, right? You can break it every now and then, but you can also improve it just as easily. As I said, Claude 3.5 Sonnet is by far the most reliable coding agent, because it is up to date with everything. It just knows everything, right?
But you will hit limits very, very soon. So your best option for now is to create this kind of app, using OpenAI and simply improve the prompt as much as you can. So I did this myself and I am not a prompt engineer, so you can definitely create this better than me. Your starting point should be this, you're a senior software engineer. And the other important part is this, give it a very important ending.
So this should be your ending. Everything in between, before final output, you can change. So I wrote all of this with the help of AI, and I basically added more things as I saw some things fail. So for example, sometimes it attempted to run dev itself or build. So I told it, you must never do that, right?
The dev server is already running anyway. So yeah, you can learn how to prompt a little bit better, and you will get better results. Or you can simply use a newer model. So how about we try: build a Kanban board, use react-beautiful-dnd. Let's try that.
Maybe we will have some better results with this package. So this is the result of the query to build a Kanban board. As you can see, the first terminal command actually failed, and we can actually see the error here. So let's go ahead and scroll down. Error: unable to resolve dependency. And it told the agent that it needs to use --legacy-peer-deps; you can see this: retry this command with --force or --legacy-peer-deps.
And then what happened is it simply retried that, and you can see it worked. So that's the power of Inngest, and that's the power of returning the result from the terminal tool: we tell it the command failed and we tell it why it failed, so it knows how to retry. And my Get Sandbox URL was this, what seems to be a working Kanban board. Test. Amazing! It seems to have some issues.
It's missing a prop set here, but honestly, other than that, pretty damn good. Look at this. Amazing, right? It even highlights where it's going to land. Very, very cool.
Great, so I think that marks the end of this chapter. Before we finish this project, we will add some methods to improve on failing builds, right? We will allow the user to tell the AI, like: hey, you forgot to add "use client". So it understands what happened previously, and then it can just easily fix the issue. That's at least what we are going to attempt to do.
So even if something fails, we will allow the user to instruct the AI and tell it, hey, it failed. Can you please fix it? Because I saw lovable fail, I saw replit fail, I saw V0 fail, all of these apps fail, right? They are just AI. It's a language model after all, right?
So it can definitely fail, but I think it is super impressive given the fact that we built it so soon and so fast. Amazing, amazing job. Let me mark all of these things as complete here and now let's go ahead and branch out. So 07 Agent Tools. I'm going to go ahead and create a new branch.
07-agent-tools. I'm going to stage all of my changes. 07-agent-tools. I will commit. And I will publish my branch.
As always, you have a completely free CodeRabbit extension you can install inside of Visual Studio Code if you want to review your files. And now let's go ahead and open a pull request so we can merge our changes and review them here with a summary. Then here we have the CodeRabbit summary. So we have enhanced agent capabilities with a multi-tool, multi-agent network for sandbox interactions, including terminal commands, file operations, and summary extraction. We also introduced a comprehensive system prompt outlining coding standards and environment constraints for improved code generation and consistency.
Perfect! So that's exactly what we did. And in here we even have a sequence diagram of how it happens. So once the background job is triggered, we can see that now the coding agent can use the terminal, create or update files or read files as needed. And then the tools return results using stdout, using files contents or anything else.
And depending on that, the code agent either calls another tool or finally returns the last message, which includes the task summary tag signaling that it is done, and then we can return the sandbox URL. So in here it actually recommends not doing a double ternary and instead returning early. That's quite a good suggestion; we could possibly do that. Then below here, it fixed a typo: "agent".
That is definitely a mistake. Great. And in here is something quite interesting. So what I do here is if I fail, I simply return an error. So I practically never store anything if it fails.
But in here it recommends actually doing partial saving, right? So if there is at least some files which were successfully created, save them but still throw an error. So quite a good suggestion but in my experience if it fails on one file it will fail entirely because this doesn't mean that it wrote incorrect code. If it throws an error here it means it lost access to the file system. That's why I'm not exactly worried about this.
I will pretty much always expect it to be able to write all files it needs. But very good suggestion here to handle partial success. I will look into that. Let's go ahead and let's merge this pull request here. I'm not going to delete the branch as always so I have access to it right here.
And now that we are here, let's go back inside of our main branch, synchronize our changes, and that should officially mark the end of this chapter. Just a sanity check here; there we go, we just merged 07. Amazing, amazing job. We are now ready to start building our UI. See you in the next chapter.