In this chapter, we're working to implement AI background jobs. In order to do that, the first thing we're going to have to do is choose our AI provider. In here, I have added a list of all the options that we have and some comments for each of them. Starting with the best choice, which is OpenAI. It is by far the most reliable, with the most reasonable rate limit, a fast reset, and a very, very good coding model.
The coding model I have chosen is GPT-4.1, and it is almost perfect. The absolute best coding model, though, is Claude, specifically Sonnet 3.5 or 4. They are the kings of coding models. The problem with Anthropic is a very strict rate limit, and when you hit the rate limit it will take you longer than 24 hours for that rate limit to reset. So it is just very, very annoying to work with.
If you want to, you can choose Anthropic, but you will almost certainly hit a rate limit, and then you're gonna have to either change the model or create a whole new organization and account. Basically, Anthropic allocates their resources to higher-paying customers, right, which are these very, very large companies, so it's not exactly suitable for tutorials. As for Grok, or xAI, I'm not sure. I haven't worked with it. It is on the list of the supported AI models.
So I don't know. I won't recommend it, and I won't tell you not to use it. I'm not sure. And as for Gemini, or Google, the great thing about Gemini is the amazing free tier. The biggest problem with it, for our use case, is that it is just not good at calling tools.
It will straight up produce errors all around. Because of that, at this moment, I just don't recommend it. I've heard that Grok also has a free tier, so I would rather you use Grok than Gemini. So unfortunately, at this point, I cannot recommend Gemini. It is OK for the simple chapter that we're going to do now.
But later, when we use AI for the thing we will actually need it for, it will simply not work. So if you really need a free tier, you can try and use Grok rather than Gemini. The absolute best choice, and the choice that I will be using, is OpenAI, specifically this model. As I said, there is a chance we might hit the rate limit here, but the reset is around two seconds, which is completely fair. And it will happen rarely, only when we are doing some very, very large tasks.
With Anthropic, we get amazing results. It completely understands the Next.js ecosystem. It understands what shadcn/ui is. But once you hit a rate limit, and you will hit it very soon, it is almost impossible to get rid of. You will basically be stuck in a rate limit.
So in my opinion, choose OpenAI. It is simply the best solution for this project. If that is possible for you, you will have the best experience using OpenAI. And now I'm going to show you how you can find out if any changes have been made regarding this, if you're watching this tutorial in the future. You can use the link on the screen, or the link in the description, to visit Inngest. In here, go to the documentation, then go ahead and find AgentKit, and in here click on this "support for OpenAI, Anthropic and Gemini", or click on the models here.
So in here, you will see all supported models. As you can see: OpenAI, Anthropic, Gemini and Grok. As I said, even though Gemini is supported here, I just wasn't able to get it to work. If you want to, you can try, but I wasn't able to. Anthropic worked amazingly, especially the 3.5 Sonnet versions, but the rate limits were very easily hit.
With OpenAI, I initially tried GPT-4o and I really was not satisfied with the results; it's not that good. But even though it's not on this list, you can try GPT-4.1, so that is confirmed: I tested it myself and it works no problem, and it's amazing. Not as good as Anthropic's 3.5 Sonnet, but very, very good, with very reasonable rate limits. So what we have to do next is create our account with one of these providers. I'm going to show you what I do with OpenAI, and then you can choose whatever you want here. In my case, I'm going to go ahead to platform.openai.com. You can use the link you can see on the screen or the link in the description.
Once you've created your account, you're going to go into settings. Once you're in the settings, you're going to go into billing. In here, it is very important that you have a credit balance. A maximum of $10, even less, will be enough for you to complete this tutorial many times, which will of course depend on how often you create new websites and apps with this project. But I barely spent that amount, and I tested pretty heavily.
Once you have funded your account, you can go ahead and obtain an API key. If you're using Grok or Gemini, you have a free tier, but as I said, Gemini just doesn't work, and Grok, I'm not sure. You can try. So let's go ahead and create a new secret key. I'm going to call this vibe-development.
I will use the default project, select all permissions, and create the secret key. I will then copy this key, and then what we have to do is add it to our IDE. I mean, to our project. As always, ensure that you're on your main branch, and you can synchronize the changes just to make sure you're up to date. As you can see, my last chapter was background jobs.
So now, what I'm going to do is go inside of .env here, and I'm going to create OPENAI_API_KEY here. And I will paste the key inside, like this. If you're using something else, let me show you how to add that. So I'm going to go inside of the Inngest AgentKit documentation here, and here you have it.
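If you're following along with OpenAI, the entry in .env looks something like this (the key value is just a placeholder, paste your own):

```env
# .env — API key for your chosen model provider (placeholder value)
OPENAI_API_KEY="sk-..."
```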
The environment variable used for each model provider: if you're using OpenAI, it is OPENAI_API_KEY. If you're using Anthropic, it's ANTHROPIC_API_KEY. If you're using Gemini, it is GEMINI_API_KEY. Or if you're using Grok, it is XAI_API_KEY.
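To keep those four straight, the mapping can be sketched as a small lookup table. This is purely illustrative; the variable names come from the AgentKit docs page shown above, and the `hasProviderKey` helper is my own:

```typescript
// Environment variable expected by each AgentKit model provider,
// as listed in the Inngest AgentKit documentation.
const ENV_VAR_BY_PROVIDER: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  gemini: "GEMINI_API_KEY",
  grok: "XAI_API_KEY", // Grok is xAI's model, hence the XAI_ prefix
};

// Check whether the key for a given provider is present in an env map.
function hasProviderKey(
  provider: string,
  env: Record<string, string | undefined>
): boolean {
  const name = ENV_VAR_BY_PROVIDER[provider];
  return name !== undefined && !!env[name];
}
```

In a real app you would pass `process.env` as the second argument; the helper just makes the "name is very important" point concrete, since the provider looks the variable up by this exact name.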
So make sure that you've added one of those here. Perfect. Now that you have done that, let's go ahead and do the following. Go inside of AgentKit by Inngest, and go inside of installation. And let's go ahead and install Inngest AgentKit.
So I'm gonna go ahead and install this, and I'm gonna show you the version. Once this has been installed, I'm just gonna go inside of the package.json and show you the version: 0.8.3. That's the version I'm working with. Now let's go ahead and use AgentKit. In order to do that, I just want to do the following.
Let's go ahead and do npm run dev, and let's go inside of the src/app folder, page.tsx, and in here what I'm going to do is the following. I'm going to add a simple Input from components/ui. And above this, above the tRPC methods, I will add value, setValue, a simple useState from React. Make sure you import that. I'm then going to give the input a value, and an onChange with a simple event calling setValue and setting it to e.target.value.
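In JSX the input ends up as `<Input value={value} onChange={(e) => setValue(e.target.value)} />`. Stripped of React, the data flow is just this (a toy stand-in for useState, purely illustrative):

```typescript
// A tiny stand-in for React's useState, just to illustrate the
// controlled-input flow without pulling in React itself.
function createValueState(initial: string) {
  let value = initial;
  return {
    get: () => value,
    set: (next: string) => { value = next; },
  };
}

// The onChange handler does nothing more than forward e.target.value
// into state: in JSX this is onChange={(e) => setValue(e.target.value)}.
function handleChange(
  e: { target: { value: string } },
  setValue: (v: string) => void
): void {
  setValue(e.target.value);
}
```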
You've probably done this a hundred times. And this will simply be... The button can stay "Invoke Background Job", it doesn't really matter. Great. So now let's go ahead and run npx inngest-cli@latest dev, simply so we have both our app and the dev server running. And now let's do the following: let's go inside of our Inngest functions here, let's just remove this one and leave this one, "wait for five seconds", like this, and change this to, let's just say, input. Let's call it that. And then I'm going to change this to be input as well.
Actually, I'm going to change it to be value, so we control it from this input here. I'm going to go inside of the invoke tRPC method, so it is inside of routers here. I will change this to be input... well, I just called that in a dumb way, didn't I? Why don't we just call it value, that would be better, sorry. So let's go set up invoke, change this to value, input.value, and call this value. And then make sure to save this file, go back inside of the functions, and change this to hello, event.data.value. So let me show you the changes again. Inside of the page, we added the useState and the input with value and setValue.
We then added a control to this input with those fields, and we slightly modified the invoke.mutate to pass in the value from the state. We then modified our tRPC router to accept value in the z.object. And we've changed the inngest.send to pass in value in the data object. And of course, we modified the function to read the value, and we removed an extra waiting step. So now that you've done this, let's go ahead and run our app on localhost:3000, and let's open our development server here.
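Put together, the wiring I just described can be sketched with two small pure helpers. The event name `test/hello.world` and the exact field names are assumptions based on the default Inngest starter, not something shown on screen:

```typescript
// Shape of the event we send from the tRPC procedure to Inngest.
type HelloWorldEvent = {
  name: string;
  data: { value: string };
};

// In the tRPC "invoke" procedure we would effectively call
// inngest.send(buildHelloWorldEvent(input.value)).
function buildHelloWorldEvent(value: string): HelloWorldEvent {
  return { name: "test/hello.world", data: { value } };
}

// The body of the Inngest function: after the five-second sleep step,
// it returns a greeting built from event.data.value.
function helloWorldResult(event: HelloWorldEvent): { message: string } {
  return { message: `hello ${event.data.value}` };
}
```

So submitting "test value" in the input produces an event carrying `{ value: "test value" }`, and the function finalizes with "hello test value".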
So now I'm going to type "test value" and I will click Invoke Background Job. And then in here, in the running step, I should see value: test value. And in finalization: hello test value, which is exactly what we passed here. Perfect. That's a very good setup.
Now that we have AgentKit installed, let's go ahead and do the following. In the Inngest documentation, which is outside of AgentKit, you can find a very, very simple example by going inside of, let me just find it, Inngest Functions, Steps & Workflows, AI Inference here. And in here, where they show you AgentKit for the first time, they show you this very, very simple way of doing it. So this is what we're going to do. I'm going to add the following import.
So let's now go inside of the Inngest functions here, and I'm going to add this import: agenticOpenai as openai, and createAgent, from @inngest/agent-kit. And then I'm gonna go ahead and open this function. It's already open, right? So basically, I'm gonna now write inside of here. You can leave this hello-world, this can be unchanged. Let's create a new agent, like this. So let me just show you this, and you can remove this; it is directly openai, right? And you can remove the step here as well.
So instead of a writer, let's call this summarizer. The name will be "summarizer". "You are an expert summarizer. You summarize in two words." So something very obvious, right?
A very easy task. You can give it the model gpt-4o if you're using OpenAI. And let me just remove the things I don't need for now. Let me remove the step from here, since I don't need it. So in here, we have a summarizer agent, like this.
And since I'm using OpenAI, these are the models that I can use. One of them is gpt-4o. If you import anthropic from here, you can see that then you're going to have to choose one of these models. So just pick the one you like. And the same is true for xAI or Gemini, whatever you ended up using.
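What we have so far can be sketched like this. To keep the snippet self-contained and runnable, the real createAgent call from @inngest/agent-kit is shown only in a comment, and the options are captured as a plain object:

```typescript
// With @inngest/agent-kit (v0.8.x) installed, the real code is roughly:
//   import { createAgent, openai } from "@inngest/agent-kit";
//   const summarizer = createAgent({
//     name: SUMMARIZER_OPTIONS.name,
//     system: SUMMARIZER_OPTIONS.system,
//     model: openai({ model: SUMMARIZER_OPTIONS.model }),
//   });
// Here we only capture the options, so the snippet runs without the package.
const SUMMARIZER_OPTIONS = {
  name: "summarizer",
  system: "You are an expert summarizer. You summarize in two words.",
  // Swap for an anthropic/gemini/grok model id if you chose another provider.
  model: "gpt-4o",
};
```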
So now we have to find a way to invoke this summarizer with event.data.value. And as you saw, when I copied this import, I had to fix the invalid openai import, because I copied it from here, right? So it would be best if you follow the instructions for AgentKit in the actual AgentKit documentation. Again, you can find it right here under AgentKit. I simply used this one because I thought it was a very similar example to what we discussed in the previous chapter, with the summarizer, right?
But I think it will be better for you to follow the AgentKit documentation here, because this is the one that is kept up to date constantly. So please follow this one. You can, again, go inside of the agents here, and you can find the exact thing we just did: we created an agent, we called it summarizer, we gave it a system prompt, and then we gave it a model. So we did that correctly. Now what we have to do is run it.
So let's go ahead and do that right here. I'm gonna add summarizer.run, like this. And let's go ahead and learn what to type here. So I'm going to add "Summarize the following text:", and I'm going to open backticks so I can insert event.data.value, like this.
And now let's go ahead and add await here. And now we have access to the output here. So let me just see. I'm not sure if I know the API by heart, but let me try output, the first item in the array. Is it like that?
I'm not exactly sure. Let's try, and let's just say success: okay here, and let's rely on the console log. Or perhaps we can just return the whole output, like this; maybe this would be easier to work with. So we just created a very simple summarizer agent, which is an expert summarizer and can summarize in two words. We imported openai and createAgent from @inngest/agent-kit, the new package that we have installed. We specified the gpt-4o model.
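Since I wasn't sure of the result shape on camera, here is a defensive way to pull the text out. The `{ output: [...] }` shape with a `content` string on each message is an assumption based on what the dev server showed, so the helper tolerates missing fields:

```typescript
// Minimal stand-in for the result returned by agent.run():
// an output array of messages, each with a content string.
type AgentRunResult = { output: Array<{ role?: string; content?: string }> };

// Prompt passed to summarizer.run(), with event.data.value interpolated.
function buildSummarizePrompt(value: string): string {
  return `Summarize the following text: ${value}`;
}

// Safely read the first message's content, falling back to "".
function firstOutputContent(result: AgentRunResult): string {
  return result.output[0]?.content ?? "";
}
```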
Another hint here, I mean, I'm basically just repeating what we previously went over: make sure that your environment variables are properly set. Because, as you can see, we did not define the API key variable here, so it will look it up by itself. That's why the name is very important.
But if you want to name it differently for whatever reason, you can do that. I think that inside here, you can pass the API key, and then you can point it at that differently named variable if you want to, or if it's not managing to find your environment variable for whatever reason. Let's try this out now. So I'm going to go ahead.
And honestly, I don't know how this will perform. So I'm going to type: I am Antonio and I am a developer. Let's go ahead and try doing that. So in here, as you can see, it immediately finished, and you can see the step called summarizer, and you can see the content inside. So the content is "You are an expert summarizer, you summarize in two words", and then we passed in, with the role user, "Summarize the following text: I am Antonio and I am a developer."
And in here, I think we can already see the output. And there we go, the output was "Antonio, developer". And if you actually look at the finalization step, that is exactly what you will find. So the output content is "Antonio, developer".
Amazing. So we officially created our first AI background job. Now, just for fun, let's try and change it up a little bit. How about we change the system prompt here? Actually, let's change the name of the agent to codeAgent.
And let's call this codeAgent, and call codeAgent.run. And now we're gonna say: you are an expert Next.js developer. And let's also say something like: you write readable, maintainable code. And let's also add: you write simple Next.js and React snippets, like a button component.
Okay, let's just do that. And then the prompt becomes "Write the following snippet", like this. So this is still called hello-world, that's perfectly fine, we don't have to change anything else. But let's just see what we've achieved now.
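The tweaked agent can be sketched the same way as before. The wording is reconstructed from what I dictated, so treat it as approximate; in the real code these options go into createAgent from @inngest/agent-kit:

```typescript
// Revised options turning the summarizer into a code-writing agent.
const CODE_AGENT_OPTIONS = {
  name: "code-agent",
  system:
    "You are an expert Next.js developer. You write readable, maintainable code. " +
    "You write simple Next.js and React snippets.",
  model: "gpt-4o",
};

// Prompt passed to codeAgent.run() with the user's request interpolated.
function buildCodePrompt(value: string): string {
  return `Write the following snippet: ${value}`;
}
```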
For example, I'm going to say: create a button component. I'm going to click Invoke Background Job, and you can see this is a bit of a longer-running task. And let's see what it created. I'm not really sure what the output will be here, but here we have it: here's a simple and reusable button component using Next.js.
And you can see how it actually writes code: import React, const Button with props onClick, children, type button, className, and it returns JSX, a button with onClick. It has class names, it uses Tailwind, it added the className prop, it has children in the button, and it exports the button as default. So basically, a fully working button. So we are, you can say, halfway there, right?
We just made AI create a React component. So the next step we have to learn is how to make it use tools, and run this code snippet it just created inside a sandbox, inside of a cloud environment, which we can then show to the user as a result. That's what our next chapter will be about. And I think that in this chapter, we've done what we aimed to do. So let me just check this.
We chose our AI provider, we've set up Inngest AgentKit, and we even tried a very simple AI step. So basically, that's how you're going to write AgentKit agents, and we are then only going to extend it by introducing tools. One tool can be terminal usage. Another tool can be create files. A third tool will be read files.
That's what we're going to do. And then we're going to explore networks and routers, so we can keep the agent in an execution loop, so it constantly creates new components until its task is finished. You saw that in the intro video of this tutorial: I had a lot of coding steps, and that's because it is in an execution loop until it completes its task. So that's what the tools will be used for. We're then going to have the state, the history, a bunch of things.
And then finally, we're going to have the finalization step, where it will save to the database, and it will save the URL of the sandbox so we can show that to the user. In order to advance further and create these tools and things like that, we're gonna have to establish our sandbox, because without the sandbox we can't work, right? So that will be our next step after this chapter. We did a very good job: we created a very simple interface here on the frontend, and we are now able to call AI background jobs and get some AI code right here.
So later on, when we actually connect this to a proper network, the function will say "coding agent", right? You're gonna see, it's very, very cool. Amazing! So now what we have to do is open a new branch and push to GitHub. So let's go ahead and open the 05AIJobs branch.
I'm going to go here, and as you can see, six files changed. One of them was the Inngest database, which is, again, I'm guessing, some cache for the Inngest dev server. So in here, I'm going to create a new branch and call it 05AIJobs. I'm going to stage all changes here, 05AIJobs, click commit, and then publish the branch.
And if you want to, you can press yes, and then this free CodeRabbit extension will analyze all of these files. Or, if you prefer, you can do what we're going to do now: open our pull request. And in here, we're going to have that very same review. And here we have the CodeRabbit summary: we added an input field allowing users to submit custom prompts for code generation.
So you can see how it connected all of those separate pieces of ours, from the frontend input, to the tRPC invocation of a background job, to the actual content of the background job. And we now generate code snippets dynamically, using an AI agent specialized in Next.js development. So in here we can see, step by step, the sequence diagram as always, and you can see how it now features the new AgentKit right here. And in here we have some potential issues: you can see how it cares about our tRPC value, because we are lacking any kind of validation.
We're not even requiring a minimum length, so obviously it is telling us that that's something we should add, and of course we will. Later on, we're going to change our form schema entirely, so you don't have to worry about that. Right now, it's just for demonstration purposes. In here, it's recommending using constants instead of hard-coded strings, and that is exactly something we will do. Later on, I have prepared very, very large system prompts, which I have tested and which gave me the best results.
So I will share them with you, and then you will paste them into your app, and you will be able to use them as constants. And in here, it again suggests some sanitization and some other limits on the frontend. My apologies, in the background job I thought that this was the submit function. It is not. We will take care of that as well.
And yeah, no need to do anything else here, because this will not look like this; we are going to modify it quite heavily in the next few chapters, when we introduce the actual agent network. Great! So I'm going to go ahead and merge this pull request, 05AIJobs. I'm not going to delete the branch, simply so I have everything here. Then I'm gonna go inside of my IDE here, go back inside of my main branch, and synchronize the changes, like this. So everything is now up to date.
I'm going to select no for this trigger of the CodeRabbit extension this time, simply because this is a merge which we just reviewed, right? I'm going to open the graph here, just for a sanity check, to confirm that my last changes were 05AIJobs, and they are. Great. So that marks the end of this chapter. In the next chapter, we're going to learn how to create online cloud sandboxes that run Next.js applications, which in the following chapters our agents will work on, creating new components and running terminal commands in them. Amazing job, and see you in the next chapter.