In this chapter, we're going to get familiar with the most important concept for building an automation app: background jobs. Let's take a look at a few examples I've prepared to show you why we need them. We all know the normal networking flow: the user clicks login, we send a network request, and we get an instant response, success or fail. But what if we had a more complex example?
For example, an AI summary generator. When the user clicks generate summary, we send a network request, and during that time the backend generates the summary. Now, the way the backend does this is a bit complex. Imagine it's a summary of one of my YouTube videos which lasts for 12 hours. We have to involve three external services for this. First, we have to fetch the YouTube video. Then we have to transcribe it. And after that, we have to send the transcription to some AI provider to summarize it.
Across those three external services, many things can happen while the user waits. Something can time out, the connection can get lost between any of those three services, or the user themselves can break the connection. All of those things can cause the user to never get the result. Because of that, we're going to introduce something called background jobs. This time, when the user clicks generate summary, we're going to send the network request as usual, but instead of immediately executing the work, we're just going to queue a background job.
And once the background job is queued, we're going to tell the user that the summary is being generated. At that point, the user is free to close the tab or move on. So they will instantly see a response the same way they did in the first example: instant response, success. We successfully started a background job, and the user can move on to do other things.
This is very important for our automation app, because imagine the user had to wait for all of those complex services to complete. If you remember, in the intro video demo I demonstrated how we can transcribe something with OpenAI and then send it to Slack and then to Discord. Imagine if the user had to wait for all of those things, and imagine if something timed out or failed. That's why we are implementing background jobs, which will simply notify the user when something is finished. So let's go ahead and try a simple demonstration of this.
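The "queue the job, acknowledge immediately" idea can be sketched in plain TypeScript. This is just an in-memory illustration, not what we'll actually build; names like `Job` and `generateSummary` are made up for the example, and a real app would use a durable queue instead of an array:

```typescript
// A minimal sketch of "queue a background job, respond right away".
type Job = { id: number; status: "queued" | "done" };

const jobs: Job[] = [];
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// The request handler: it queues the work and returns immediately.
function generateSummary(): { success: boolean; message: string } {
  const job: Job = { id: jobs.length + 1, status: "queued" };
  jobs.push(job);

  // Fire and forget: the slow work runs after the response is sent.
  void (async () => {
    await sleep(10); // pretend: fetch the video
    await sleep(10); // pretend: transcribe it
    await sleep(10); // pretend: summarize with an AI provider
    job.status = "done";
  })();

  return { success: true, message: "Summary is being generated" };
}
```

The caller gets the success message while the job is still `"queued"`; the three slow steps finish later, in the background.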
I'm going to head inside of my Prisma folder, into schema.prisma, and at the end I'm going to create a new model which we are going to need anyway: workflows. Let's actually call it just Workflow, since all the other models are singular as well: Account, Verification, Workflow. Let's give it an id of type String, with an @id decorator and a default value of cuid().
And then let's give it a name of type String. Let's keep it very simple for now, no need to do anything further. Then let's run `npx prisma migrate dev`, and for the migration name we can simply use workflows table. There we go.
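The model described above would look roughly like this in schema.prisma (a sketch; your existing models and generator/datasource blocks stay as they are):

```prisma
model Workflow {
  id   String @id @default(cuid())
  name String
}
```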
Now restart your server if you haven't already and go to localhost:3000. The last thing we did here was the login screen and the protected server component. So if you are logged out, you should be seeing the login screen, and then you can just log in; I use a simple combination of an email and the password 12345678. And in here, all I do is fetch the users in the database using a protected procedure.
So now I'm going to slightly modify it with my new Prisma model. Inside of src/trpc/routers, I'm going to change getUsers to getWorkflows, and change prisma.user.findMany to prisma.workflow.findMany. We can ignore the error for now; we will resolve it in a moment. Let's also add a createWorkflow procedure. It's a protectedProcedure as well, but instead of a query it's a mutation. We don't really have to extract the context; let's just open a function and return prisma.workflow.create, passing in the data with the name "test workflow".
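Putting those changes together, the router might look roughly like this. This is only a sketch: the import paths, the `createTRPCRouter` and `protectedProcedure` helpers, and the Prisma client export are assumptions based on how the project was set up in earlier chapters:

```typescript
// src/trpc/routers/_app.ts (sketch)
import { prisma } from "@/lib/prisma";
import { createTRPCRouter, protectedProcedure } from "../init";

export const appRouter = createTRPCRouter({
  getWorkflows: protectedProcedure.query(() => {
    return prisma.workflow.findMany();
  }),
  createWorkflow: protectedProcedure.mutation(() => {
    return prisma.workflow.create({
      data: { name: "test workflow" },
    });
  }),
});

export type AppRouter = typeof appRouter;
```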
As simple as that. Now that we have that ready, you can remove the baseProcedure from here, since we're not using it, and go inside of page.tsx, where we have the error because we're calling getUsers, which no longer exists. Let's simplify this a little: mark the file with "use client", remove async and await, and instead of getting the data like this, get it by using useQuery from @tanstack/react-query. Grab trpc from useTRPC from the tRPC client, and then pass in trpc.getWorkflows.queryOptions(). There we go. And now, after a brief second of loading, you should just see an empty array.
So now what I'm going to do is add a constant create using useMutation from @tanstack/react-query with trpc.createWorkflow.mutationOptions(). Then I'm going to add a "Create workflow" button, give it an onClick of () => create.mutate() to fix the type errors, and also add a disabled prop while create.isPending. This way I can see exactly how long the request takes. And actually, while we are here, we can also do one cool thing: in the mutation options, open an object and add onSuccess.
Let me just get the query client with useQueryClient from @tanstack/react-query, then call queryClient.invalidateQueries with trpc.getWorkflows.queryOptions(). I think this should work.
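The page described above might look roughly like this. A sketch, not the final file: the `useTRPC` hook, import paths, and the exact shape of `queryOptions`/`mutationOptions` follow the tRPC + TanStack Query integration as set up earlier, and may differ slightly in your version:

```tsx
// page.tsx (sketch)
"use client";

import { useQuery, useMutation, useQueryClient } from "@tanstack/react-query";
import { useTRPC } from "@/trpc/client";

export default function Page() {
  const trpc = useTRPC();
  const queryClient = useQueryClient();

  // Fetch the workflows list on the client.
  const { data } = useQuery(trpc.getWorkflows.queryOptions());

  // Create a workflow, then refetch the list by invalidating its query.
  const create = useMutation(
    trpc.createWorkflow.mutationOptions({
      onSuccess: () => {
        queryClient.invalidateQueries(trpc.getWorkflows.queryOptions());
      },
    }),
  );

  return (
    <div>
      <button disabled={create.isPending} onClick={() => create.mutate()}>
        Create workflow
      </button>
      <pre>{JSON.stringify(data, null, 2)}</pre>
    </div>
  );
}
```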
So now, immediately when you click create workflow, the list updates, because we invalidated the queries and fetched them again. This way you don't have to refresh your page. You can see it works pretty much instantly: the moment I click create workflow, a new one is created. But what if we had a more complicated example? What if inside the createWorkflow procedure, instead of having an instant response, I had to communicate with external services?
So for example, I will mark this as an asynchronous function. And then in here, I'm first going to fetch the video; imagine this is the transcription process. So first I fetch the video, and that will last 5000 milliseconds, or 5 seconds. After that, I'm going to transcribe the video.
So that's another 5 seconds. And then I'm going to send the transcription to OpenAI, for example; another 5 seconds. Let's see how this looks now. I'm going to click Create Workflow, and you can already see that this isn't a nice experience.
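The blocking version can be simulated in plain TypeScript. The delays are shortened from 5 seconds to 5 milliseconds so it runs quickly; the function name is illustrative:

```typescript
// Simulates the blocking createWorkflow: three sequential "external calls".
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function createWorkflowBlocking(): Promise<string> {
  await sleep(5); // pretend: fetch the video (5 seconds in the real example)
  await sleep(5); // pretend: transcribe the video
  await sleep(5); // pretend: send the transcription to an AI provider
  return "workflow created";
}
```

Because the three awaits run one after another, the caller (and the user) is stuck until all of them finish.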
The user now has to wait for 15 seconds before seeing any kind of feedback. The user doesn't know if they can log out, the user doesn't know if they can refresh the page, they just have to wait for 15 seconds until this completes. And now imagine if something happened. What if this step here failed? But we already used some resources to fetch the video.
The user would have to start the entire process again. Because of this, we're going to implement background jobs. This way, if a single step fails, we can easily retry it without having to start the entire job again. And that's just one of the many advantages background jobs give us. So let's go ahead and add Inngest to improve on this.
So you can use the link on the screen to head to Inngest. If you want to, you can create an account here, but you can also create the entire development environment without creating an account, which I think is absolutely amazing. I love when apps give us an option to do that. So let's follow their Next.js setup and install Inngest. I'm going to add inngest here, and I'm going to show you which version I'm using, simply so you are aware and can use the same version as me if you want.
So you don't have to install it immediately; you can wait until I show you the version and then you will see where you stand. Let me open my package.json here and search for inngest: here it is, 3.44.1.
There we go. And once we have inngest, we also need to run the Inngest CLI. I will quickly show you the version of that too: `inngest-cli version` shows 1.12.1. Now you can run `npx inngest-cli@latest dev`. What this will do is spin up a local instance of Inngest for you, which you can access on localhost:8288, and in there you will see all the runs that are happening in the background, because that's the only way you can actually keep track of them.
So make sure that you have both that running and npm run dev running. And then let's actually set up Inngest following their documentation here. We've just set up the Inngest dev server and opened it here. Now let's create the Inngest client. Very simple, just two lines; let's add that inside of source.
Let's create a folder called inngest and then, inside of it, client.ts. As simple as that. Let's import Inngest from our newly installed inngest package and export const inngest as a new Inngest, and the app id can be nodebase. There we go. We have our client here.
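The client file is tiny. This follows the Inngest docs; the `id` value is the app name used in the video and can be anything:

```typescript
// src/inngest/client.ts
import { Inngest } from "inngest";

// In Inngest v3 the client takes an `id` identifying your app.
export const inngest = new Inngest({ id: "nodebase" });
```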
Now let's create the Inngest route. This is something we've done for pretty much every service we've added so far: inside the app/api folder we have routes for auth and tRPC, and now let's also add one for inngest, like this. And let's just create a route.ts inside.
Let's copy the snippet from the docs, paste it, and see what it's about. We are importing serve from inngest/next, and then we just need to add our Inngest client. You can replace that relative import with an alias that points at the root of your project, so your imports look nicer. We currently have no functions, so we can't pass anything there yet, but you already recognize this pattern, right? We are exporting GET, POST, and PUT endpoints using this route.ts, the same way we exported GET and POST before.
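Following the Inngest Next.js docs, the route would look roughly like this (the alias import path is an assumption based on the project's setup):

```typescript
// src/app/api/inngest/route.ts (sketch)
import { serve } from "inngest/next";
import { inngest } from "@/inngest/client";

// serve() exposes GET, POST and PUT handlers that Inngest uses to
// discover and execute our functions.
export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [], // we'll register our first function here in a moment
});
```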
So it's a common pattern. Just make sure you have added that, and then let's write our first Inngest function. Go inside of src/inngest, where your client.ts is, and add functions.ts. Copy the docs example, paste it, and let's see what we have. We import the inngest client from the neighboring client.ts and export a function called helloWorld.
We use inngest.createFunction to create it. We give it an id and, very importantly, an event. The event will be used to execute this function later on. It does a very simple thing: it sleeps for one second and then returns a message using some payload that we can pass.
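For reference, this is essentially the hello-world example from the Inngest docs:

```typescript
// src/inngest/functions.ts (the docs' hello-world example)
import { inngest } from "./client";

export const helloWorld = inngest.createFunction(
  { id: "hello-world" },
  { event: "test/hello.world" }, // this event name triggers the function
  async ({ event, step }) => {
    await step.sleep("wait-a-moment", "1s");
    return { message: `Hello ${event.data.email}!` };
  },
);
```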
You can see that the payload can be anything. I could change it to name or surname, but let's keep it email, as in their original example. Once we have created helloWorld, following their example, we have to add that function to serve. So go back inside of app/api/inngest/route.ts and, inside the functions array, add helloWorld from our inngest functions. The moment you do that, if you have the Inngest CLI dev server running alongside npm run dev, you will see a bunch of requests to Inngest. In the beginning you may see a bunch of invalid requests here: the CLI was searching through our app for the Inngest endpoint, trying Netlify-style paths and others, until it found the working route, and then it just sticks with the one that works. And once you have helloWorld registered, you can go to the dev server.
Let me just find it... Here it is: the Inngest dev server on localhost:8288. When you click on Functions, you will now see helloWorld here. And what you can do now is actually invoke it from here.
And you can pass in an object like this. Perhaps it will be empty for you; mine is pre-filled because I already used the Inngest dev server. So feel free to just write an object, and make sure you're using double quotes for your fields, since this has to be valid JSON.
And then just pass in the email, for example antonio@mail.com, and make sure not to leave any trailing commas; that's invalid JSON. Click invoke function, and you will see that the function is now running, and it finishes pretty quickly.
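The payload you type into the invoke dialog is the event body; something like this should work (the exact shape expected by the dev server may vary by version, but `data` wrapping your fields matches how the function reads `event.data.email`):

```json
{
  "data": {
    "email": "antonio@mail.com"
  }
}
```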
And what happened is exactly what we said would happen: it slept for a second and then finished with the message "Hello antonio@mail.com!". Let me just open the docs, simply because I have a habit of not reading through them fully when I'm excited to show you how to use something, but I'm pretty sure that's the example function they give you here. Then it goes into some other examples which we are going to encounter anyway. Yes: we are now going to trigger the function from code.
So yeah, let's go ahead and do that. Instead of triggering it by clicking the invoke button, go inside of src/trpc/routers/_app.ts and focus on the createWorkflow mutation. Instead of doing these three things here, let's remove them and do await inngest.send, importing inngest from our Inngest client, with the name "test/hello.world" and data containing an email. We could technically extract the email from the context here, but it really doesn't matter; we can just use antonio@mail.com or whatever you want. We are just trying to execute this through a tRPC procedure.
So: await inngest.send. The name here matters, because it needs to be the same as the event. Make sure you didn't misspell it; you can copy the event name and paste it here directly if you are not sure. So let's try it out now. I'm going to go back here.
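Inside the mutation, the body would now look roughly like this fragment (a sketch; the import alias is an assumption, and the event name must match the one in functions.ts exactly):

```typescript
// Inside the createWorkflow mutation (sketch): queue the job instead of
// doing the slow work inline.
import { inngest } from "@/inngest/client";

await inngest.send({
  name: "test/hello.world", // must match the function's event name
  data: { email: "antonio@mail.com" },
});
```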
I will click create workflow. And... okay, it immediately finished, so let me make it a little longer. I'm going to change the sleep to 10 seconds, refresh, and click create workflow again, and now you can see that it's sleeping for 10 seconds. But the cool thing is that the button is not pending, right? It did exactly what we planned. Let me show you.
When the user clicks the button, we send a network request, but we immediately respond: okay, we started the background job. Your button is no longer disabled or pending; you can move on and do something else. We are doing the work in the background. It makes no difference if you wait here or go somewhere else; we're going to finish the background job and notify you when we are done. That's exactly what's happening here. So now, in here, you can imagine that this first step is fetching the YouTube video.
Right. And then in the next one we are transcribing the video, and in the last one we are sending the transcription to the AI. So let me change this back to five seconds, and we can even change this part: we no longer have to create the workflow in the tRPC mutation.
Instead, inside the Inngest function we can do await step.run("create-workflow", ...) and return prisma.workflow.create, importing Prisma inside of this Inngest environment, with the data name "workflow from inngest". We have now basically moved this work into the background function, which means the mutation no longer needs to return the Prisma workflow. Instead, it can just return success: true and a message: job queued.
Queued, not queried, sorry. All right, let's try it out. We have the same example that we had at the beginning, right? But this time you can see it immediately returns the response, so the button is no longer blocked, and you can see the job's steps happening: this one waits for a moment, then this one waits for a moment. And if a step failed, we could easily see, as an admin, why it failed here, and we could even retry it. So I should probably rename these steps to fetching, transcribing, and sending to AI. But basically, if any of these steps fails, I can always control the retries.
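The reworked function from the last few paragraphs would look roughly like this (a sketch; the step names and the Prisma import path are assumptions, and the sleeps stand in for real external calls):

```typescript
// src/inngest/functions.ts (sketch of the reworked function)
import { inngest } from "./client";
import { prisma } from "@/lib/prisma"; // assumed path to your Prisma client

export const helloWorld = inngest.createFunction(
  { id: "hello-world" },
  { event: "test/hello.world" },
  async ({ step }) => {
    // Pretend external work, each visible as its own step in the dashboard.
    await step.sleep("fetching-video", "5s");
    await step.sleep("transcribing-video", "5s");
    await step.sleep("sending-to-ai", "5s");

    // Each step is retried independently if it fails.
    return await step.run("create-workflow", () => {
      return prisma.workflow.create({
        data: { name: "workflow from inngest" },
      });
    });
  },
);
```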
For example, five retries; or, if I want all steps to always succeed on the first try, I can set zero retries. That's what's cool about background jobs, besides the obvious: in case the transcription step fails, we can very easily retry just that step. Maybe it timed out, maybe the provider's server broke at exactly that moment.
And as far as I know, each retry is delayed with a backoff, roughly doubling the wait each time; I'm not one hundred percent sure about the exact factor, but we definitely delay the next retry, so we give the service time and don't hit any rate limits or things like that. That's another cool thing about Inngest. So let's try again, and this time let's make it even more obvious by going to page.tsx: in onSuccess, instead of what we had, just call toast.success("Job queued"), importing toast from sonner.
And now you will get a message: job queued. The user knows: okay, whatever I just did is happening in the background, and I don't have to worry about it, because right now it's fetching the video for five seconds, then transcribing it, and then sending it to OpenAI. And finally, in a real-world example, it would have all the data and it would create the workflow. That's what I'm trying to demonstrate here.
And you can see that, well, right now we have to refresh to see the new record, but don't worry: later we are going to connect Inngest to their real-time service to show the user, in real time, when the job is finished, in progress, or failed. That's what's absolutely amazing about Inngest. And I believe this is a very good introduction to background jobs, which are a very, very crucial concept for us here. So let's see if we did everything we intended to do: we set up Inngest, we created a background job, but we didn't add this last thing.
Now, the thing I want to add is definitely not required; I just think it's cool. You can see how I need to have two terminals, and every time I start this project I have to remember to run both of these commands. So if you're interested, you can install a package called mprocs, as in multiple processes. You could also use concurrently, but the thing I really like about mprocs is that it shows you each running process individually, and you can see if any of them fail. Again, this is obviously not required; it's just for running multiple commands in development mode, but it's very easy to install.
You can install it globally with npm, and I could just do that. But since this will be part of your app's workflow, it's probably better to install it as a dev dependency, so that all of your collaborators get it as well. Let me show you my package.json with mprocs in it.
There we go: 0.7.3 inside of my dev dependencies. And now that you have this, all you have to do is create a simple mprocs.yaml. Let me show you: in the root of your project, create mprocs.yaml.
Procs, as in processes. Then let's add an inngest process whose cmd is npm run inngest:dev, and a next process whose cmd is npm run dev. Again, this is completely optional. You don't have to do this, right?
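The config file would look roughly like this (a sketch; the `inngest:dev` script name is an assumption and just needs to match whatever you name the script in package.json):

```yaml
# mprocs.yaml -- one entry per process to run in development
procs:
  inngest:
    cmd: ["npm", "run", "inngest:dev"]
  next:
    cmd: ["npm", "run", "dev"]
```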
It's already working for you; you can run both of them at the same time manually. But if you want to do it with a single command, you can use concurrently or you can use this. It's a new tool that I found and I really like it. And now let's get inside of our package.json, into the scripts section.
So I'm now going to add an inngest:dev script that runs inngest-cli dev, and a dev:all script that just runs mprocs. Let's also install inngest-cli as a dev dependency, the same way we did before: I will shut down the CLI now and run npm install with inngest-cli in my dev dependencies. This way, whoever ends up working on your project will have the exact dependencies that you had: mprocs and inngest-cli. And if I've done this correctly, and I think I did, when you run dev:all it should find the mprocs configuration file and run both Inngest and npm run dev at the same time.
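So the relevant part of package.json ends up roughly like this fragment (the `dev` script is whatever your project already had; the other two names are the ones assumed in mprocs.yaml):

```json
{
  "scripts": {
    "dev": "next dev",
    "inngest:dev": "inngest-cli dev",
    "dev:all": "mprocs"
  }
}
```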
So just make sure the script name here matches what you referenced in mprocs.yaml, and that you didn't misspell it. Let's try npm run dev:all, and there we go: you can see how it starts both Inngest and Next.js. Now keep in mind that it behaves a little differently: for example, if you want to quit, you have to press the letter q. Ctrl+C will not work; you have to press q and then it will shut down everything. The same goes for copying something from here.
If I want to copy this, you can see I'm in copy mode now, and I have to press the letter c to copy. Ctrl+C or Cmd+C will not copy it; it's just the letter c. You can use your mouse to select a process, or go up and down with the keyboard, and there's another cool thing: once a process is selected, you can press the letter r to restart it. You can see that I just restarted Next.js.
Or if I want to restart my Inngest process, I can just press r. I personally think it's cool; if you don't like it, you don't have to use it. I just wanted to show you this.
So you don't get surprised if, in the middle of the tutorial, you see me using this multiple-processes tool. It's definitely inspired by Turborepo's terminal UI, which I think uses tmux. But setting up Turborepo might be a bit of overkill for this project; I don't know, I'd have to explore it a little more.
I usually use it only when I need a monorepo, and setting up tmux is not as simple as doing an npm install. So if this is not working for you, no problem; just continue with the tutorial. But if it is, I think it will help you, because we will have some more processes to run alongside each other.
And one of those will be ngrok, our local tunnel, so it will be very easy to have all of them here together. Great. So if it works for you, great; if it doesn't, no need to use it.
And I think that now marks the end of everything we wanted to do, so let's go ahead and commit this: 06 background jobs. I'm going to go ahead and create a new branch, 06-background-jobs.
And now let's stage all ten changes that we have, write the commit message 06 background jobs, click commit, and click publish branch. Then let's go to our repository and, as always, open a pull request and review our changes. And here we have the summary by CodeRabbit: we added a workflows list with client-side loading.
We introduced a create workflow button with a success notification and a disabled state while processing. Background tasks are triggered on creation to handle processing. We converted the main page to a client component; we only did this so it's easier for us to execute the functions. We added local tooling to run multiple development processes concurrently, exactly, and we introduced new development runtime dependencies, which refers to Inngest.
So what I'm excited about is the sequence diagram; you've probably guessed it already. Let's take a look. When the user clicks create workflow, we call the createWorkflow mutation. The tRPC router receives that, sends the event to test/hello.world, the Inngest API acknowledges it, and we simply send the success message back to the user.
And if you remember this is exactly what I planned on doing. Let me just find my example right here. So long running task example with background jobs. The user clicks generate summary. We send a network request.
We queue the background job and we simply acknowledge it: the summary is being generated. That's what this step is. The user clicks the button, we send the network request, we queue the background job, and we send the user a success message: hey, that's it, that's all you have to worry about. But what's actually happening is that something's going on in the background while the user can do whatever they want.
In our case we just sleep for 15 seconds, but we are pretending that this is fetching the video, transcribing it, and sending the transcription to the AI. And then we create a record, which on its own obviously makes no sense; in a real example, this would use data from these three steps we just did. That's basically what we will do in the future. And that's exactly the flow that CodeRabbit managed to understand; every time I see this, I'm impressed by how well it understands the code that we are writing.
In here we have a couple of actionable comments, but we won't act on them, simply because we modified a bunch of files just for the demo. None of this is really going to stay: we're going to change our file structure a lot, and this page.tsx as well as the actual functions are not going to look like this, so it makes no sense to change them; they were just an example. So, amazing, amazing job.
Let's go ahead and merge this pull request, and let's see if that's all we had to do. I believe that marks the end of this chapter. Actually, before we end the chapter, I almost forgot: we have to go back to our main branch.
Once you're on your main branch, make sure to synchronize the changes, click OK, and then head inside of your source control graph. In here you should see 06-background-jobs branching off from main and then merging back. And for a sanity check, make sure you are on the main branch and that you can see, for example, the mprocs configuration file or the inngest folder. That means everything's fine. And that marks the end of this chapter. Amazing job, and see you in the next one.