In this chapter, our goal is to add background jobs, more specifically AI background jobs for our project. So why do we even need background jobs? There are many reasons why a project would need those, but in our case we specifically want to avoid timeout from long running tasks. This is specifically problematic with AI generations. If you've ever watched any of my previous videos where I used AI, I think in two of the tutorials that I did, timeout was the issue when we actually deployed the application to production.
So locally there is no timeout. We can keep the task running for as long as we want. But when we deployed, a lot of comments came in saying, hey, this is timing out and it doesn't work in production. Now there are other solutions which are not background jobs. One would be edge functions, which are a temporary solution because they raise the timeout limit.
And the other one is, well, a webhook, which would accept a successful or finished result from an AI job. That webhook approach is more similar to what we are going to do here, which is a background job. So besides avoiding timeouts from long-running tasks, another useful thing background jobs give us is retries in case of a failure. And after we implement some basic background jobs, you might even get an idea of your own for where to put one. Perhaps our UploadThing storage cleanup, which we worked hard on in the previous chapter, could be a candidate for a background job instead of blocking our webhook, right?
So I hope that after this chapter, regardless of what we do with background jobs, you will have a better idea of how to use them. The good news is we're going to use Upstash Workflow, which is their new product for background jobs, and we already have an Upstash account because in the beginning of the tutorial we used their rate limiting service. One thing that unfortunately is required for the AI part of this feature is a credit card for OpenAI. I checked whether they still offer any free tiers or free credits, and they do not.
I tried with a brand new account just a moment before recording this, and it's true: no free trial and no free credits. The good news is that as little as $5 will be enough for this entire tutorial, and that's exactly how much I have added. So let me just go ahead and write that here: $5 will be more than enough for you to finish this entire tutorial in regards to OpenAI queries. Now, if you have no access to a credit card at all, you can still follow along with this chapter, but you will not be able to properly finish these background jobs. If you want to, though, you can still learn the code and how it would work.
Unfortunately, that's the only thing we can do there. If you do feel a bit more adventurous, you can find an alternative SDK. You don't have to use OpenAI. Upstash Workflow allows you to make plain POST requests to any API you know of. So if you know any free AI SDK, well, you can use that one.
But for this tutorial, I'm going to be using OpenAI. It is extremely cheap. $5 is, I think, affordable for most people. And I know exactly how to use it, and I know what to expect of it. So let's go ahead and start with integrating Upstash workflow right here.
So I'm gonna go ahead and go into my Upstash dashboard, and I'm gonna click on their new Workflow tab right here. And if you want to, you can click on the docs. Inside of here, you will find nice documentation from Josh, whom you might know from his Josh tried coding channel. So feel free to watch his video. That's gonna give you a very good example of how this works.
What we can do is go onto Quickstarts, Next.js, and do the following. The prerequisites for this are an Upstash account with QStash API keys, and Node.js with npm or another package manager. We already have all of these things, but let's start by getting our QStash API key. Let me just see if we need that immediately. We do.
So let's go ahead and get that. I'm going to go here and copy my QStash token. I think that's the one I need; in case we need something more, we can easily come back here. So the environment variable is QSTASH_TOKEN, like this.
As always, don't share your tokens with anyone. So I have added my QSTASH_TOKEN here. And now let's go ahead and run bun add @upstash/workflow. Now, since this is a relatively new product, I would recommend you use the exact version that I'm using, because it is possible that they change it fairly soon after this tutorial. You can see that it's not even at 1.0.
Well, I mean, that doesn't really mean anything. I don't know how they do their versioning, but yeah, if you're interested, this is the exact version I'm using, just in case you want to use the exact same one. There we go. So you would do it like this. Also, while I'm here, you might notice that I have these errors here.
I believe these came from our 24-hour deletion on Mux. So let me just remind you of that: videos get deleted after 24 hours. Right. So it looks like something works incorrectly with their auto-deletion method, because when we delete an asset manually, that webhook works.
But it looks like when they fire their 24-hour deletion, maybe it doesn't pass the proper Mux signature to our webhook. So I think the fault is actually on their end. So yeah, in case you're looking at your app and something doesn't make sense, and you have more or fewer videos than you would expect, it could be because your videos were deleted, right? So what I'm going to do here is just go ahead and delete all of my assets here.
Make sure you are inside of your newtube development environment. So I'm going to delete all of them for a very simple reason. You know, it's a new day and I want to make sure that none of my assets get deleted in the middle of development. And I would recommend you do the same. So when I refresh here, again making sure you're in newtube development, you should just see it empty here.
And if my webhook works, this is deleted as well. In case yours wasn't deleted, you can of course go to Drizzle Kit Studio and just, you know, after you've cleaned up your assets, simply go ahead and clean up your videos manually here. So I'm gonna remind you this every now and then just because I don't want you to have any confusing experience. Hey, what's going on? Why are my videos not working?
All right, now let's focus back on Upstash. So we just ran bun add @upstash/workflow. Let's confirm that in our package.json. I'm gonna open my git changes here. There we go.
@upstash/workflow is here, and we have added our .env.local change here. So Upstash Workflow is powered by QStash, which is another technology of their own. Because they're wrapping their own technology, which is quite good, they can afford competitive prices compared to alternatives like Inngest and Trigger.dev.
Excellent! So now that we have that, we need a way to run these workflow background jobs, and they give us two options. One is using a local QStash server, and the other one is using a local tunnel. Since we already have a local tunnel set up thanks to our bun run dev:all script, which runs our ngrok tunnel, we can use that option. Why would you want the first option?
Well, the first option is useful if you want to do some heavy, heavy testing, which might affect your billing. The logic behind this is that the local QStash server will not affect billing, whereas if you massively test via your local tunnel, that will actually call Upstash QStash and Workflow, so it might affect your usage. But as you can see, I'm on a completely free plan. I don't even have my credit card added here, so I'm pretty confident that for this tutorial we're not going to overstep any limits at all. Great!
So we're going to choose option 2, local tunnel. Copy the QStash token from the Upstash console. We already did that, and we can confirm it by going inside of .env.local here. QSTASH_TOKEN is here.
And now we need to add UPSTASH_WORKFLOW_URL. The UPSTASH_WORKFLOW_URL is going to be our static ngrok domain. So let's go ahead and get it here. There we go. Like this.
I'm just going to go ahead and add an HTTPS prefix like this. After that, let's go ahead and see what else we have to do. So this looks like it's working. We already have the QStash token. And now let's go ahead and let's create a workflow endpoint.
So this is quite simple. All we have to do is create a route somewhere in our app. I'm going to go into the source app folder, API. Let's go inside of videos here and create workflows, like this. And then let's, for example, create title, and then route.ts. So inside of videos we're gonna have workflows, then title, then route.ts. This is the equivalent of localhost:3000/api/videos/workflows/title.
Basically, a workflow for titles inside of the videos API. And let's just copy this entire thing from here. They are using a different example, but ours makes more sense like this. So we import serve from @upstash/workflow/nextjs, and inside of this serve they have prepared two steps for us to run.
And they have also created the POST method for us. I'm not sure if "overridden" is the correct term to use; rather, the serve helper returns a POST handler for us. And this is actually it. This is the background job, right?
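For reference, the quickstart endpoint we just pasted looks roughly like this. This is a sketch following the Upstash docs; the step names are just examples:

```typescript
// src/app/api/videos/workflows/title/route.ts
import { serve } from "@upstash/workflow/nextjs";

// serve() wraps our handler and returns a POST route handler;
// each context.run() call becomes a separately retried step in Upstash.
export const { POST } = serve(async (context) => {
  await context.run("initial-step", async () => {
    console.log("initial step ran");
  });

  await context.run("second-step", async () => {
    console.log("second step ran");
  });
});
```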
So here's what's important now to test this out. The first important thing is that you have your tunnel, our ngrok URL, running, right? You should be able to visit this URL right here. If you cannot visit it, your tunnel is not running and your testing of this will not work. The other thing you have to do is try POSTing to this endpoint which we just created.
And once you POST to this endpoint, the code inside of these steps will not run on your backend; instead it will run as a background job in Upstash, which has much higher timeout limits. It has proper retries in case of errors, and you're going to see exactly what happens with each step inside of the usage here, inside of events. In case something fails after all of its retries, you have this dead letter queue, which you can then manually check out: what's going on, why did they fail; then you can debug and rerun them if needed. Let's go ahead and try this. So I believe the next thing they say to do is npm run dev, which for us is bun run dev:all, and now all we have to do is send a curl POST request to our API workflow. So I'm going to change this to go to /api/videos/workflows/title, and let's see what's going on.
It looks like I am getting an error. Of course I'm getting an error. I first thought that we have to go through ngrok, but maybe we don't; maybe it is an actual issue here. Let's see what exactly is going on. The issue is that I used HTTPS.
There we go. So, don't use HTTPS. My apologies, I thought the issue was that I have to go through the local tunnel, but that's not true. We should be able to query our localhost directly, and the local tunnel will then be used by Upstash to communicate back to our application. And if you don't have the ability to use the curl command, don't worry, we're later going to trigger this through the usual interface.
But if you want, you can try this. Make sure it's http://localhost:3000/api/videos/workflows/title, and you should get back a workflowRunId. And now, if you go back inside of here, there we go: you have the job and all the steps which happened here, the initial step and the second step right here. Perfect.
So you can use these steps to do whatever you want, but these steps by themselves don't seem too useful, right? The reason is that we don't really know what's inside of this context, and there's no way for us to trigger this other than a curl POST. So what we have to understand next is how to actually run these workflows from our app. They do have a step 5, deploying to production.
We're going to focus on that later when we actually deploy. There are a couple of things we have to do now. First of all, I believe we have to secure our endpoint. Basically, we need the QSTASH_CURRENT_SIGNING_KEY and the QSTASH_NEXT_SIGNING_KEY. And there are two ways they allow securing it.
You can either use QStash's built-in request verification, or you can develop a custom header and authorization mechanism in case you need that solution. So let's go back to QStash, click on signing keys, and prepare the QSTASH_CURRENT_SIGNING_KEY. I'm going to open my .env.local here. So, QSTASH_CURRENT_SIGNING_KEY and QSTASH_NEXT_SIGNING_KEY.
And let's copy the first one and add it here, and the second one and add it here as well. Both of them begin with sig_, short for signing or signature. Now that we have those, let's just confirm that we added them properly. So, QSTASH_CURRENT_SIGNING_KEY and QSTASH_NEXT_SIGNING_KEY. The names are important here.
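At this point, my .env.local contains something like this (values redacted; the ngrok domain is a placeholder, and the variable names are what the SDK expects):

```
# .env.local
QSTASH_TOKEN=...
QSTASH_CURRENT_SIGNING_KEY=sig_...
QSTASH_NEXT_SIGNING_KEY=sig_...
UPSTASH_WORKFLOW_URL=https://your-static-domain.ngrok-free.app
```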
I believe that when we create our lib for the workflow client, the SDK will automatically look for these environment variables, and that's how it's going to secure the endpoint. So now that we have added those, let's try the curl again. I'm not sure, I think maybe it will still work. It looks like it doesn't work now. There we go. It detected that we are using QSTASH_CURRENT_SIGNING_KEY and QSTASH_NEXT_SIGNING_KEY inside of our application, but the request which we just tried doesn't have valid signatures.
So it's no longer allowing just anyone to trigger a workflow in our application, right? Only we, who have these keys, can query our endpoint and trigger a background job. So this is now secure from any intrusions or attacks aimed at flooding our background jobs and causing massive usage or billing. There we go. So we successfully secured our QStash background jobs.
Now the question is, how do we call this? As you can see, they do support the edge case where environment variables are not available; in that case you can explicitly protect the endpoint via a Receiver, like this. But we don't have to do this step. As you can see, we don't have to add anything here because we support environment variables normally. And of course, they also offer the custom authorization method if you are in an environment where you need that.
So now that we know how to secure a run, let's learn how to start one. The recommended way of doing it is client.trigger from @upstash/workflow. So how about we go ahead and define this client. I'm going to go inside of lib here.
We already have the redis and the rate limit files, and now I'm going to create the QStash client here. You can also name the file workflow if you want to; I think qstash may be a bit more specific, actually, but let's call it workflow. Let's import the Client from @upstash/workflow and export an instance from here.
I'm just gonna call it workflow. And for the token here, we need to pass process.env.QSTASH_TOKEN, which was the first variable we added, and I will put a non-null assertion (!) at the end. There we go. So we now have the workflow client.
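So src/lib/workflow.ts ends up as a tiny sketch like this:

```typescript
// src/lib/workflow.ts
import { Client } from "@upstash/workflow";

// The Client only needs the QStash token; the signing keys are picked up
// from the environment by the serve() handler to verify incoming requests.
export const workflow = new Client({
  token: process.env.QSTASH_TOKEN!,
});
```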
I'm still deciding whether I should name this constant workflow, because I fear we might get conflicting names between imports, right? Something else might be named workflow. That's why I wasn't sure whether to name it qstash or workflow, but let's leave it like this for now. And this is how you would run it. So let's try it now: we're gonna create a little procedure, so we immediately connect this to our code. Let's go inside of videos, then inside of server and procedures.
Inside of here, we already have the restoreThumbnail procedure. How about we add generateThumbnail? Let's add a protectedProcedure here. The input can be empty, and let's add a mutation here. Actually, let's remove the input, since it's going to be empty for now.
Let's make sure this is asynchronous. And I'm not going to do anything here besides triggering a call. So I'm going to do await workflow, imported from lib/workflow, and simply call trigger, like this. Inside of here, I have to pass in the URL. Actually, I'm not sure if we need to pass it explicitly; I thought that since we've added UPSTASH_WORKFLOW_URL, it would read that URL automatically. So let's try without passing it.
Let's just destructure workflowRunId here, and we'll see; maybe URL is a required parameter and I am wrong. Oh, my apologies, I got confused about how this works. We do need the URL.
Let's add process.env.UPSTASH_WORKFLOW_URL, and the way we choose which workflow we are triggering is by appending /api/videos/workflows/title. So this is how we choose which workflow to trigger. Everything else is optional, but if you want to, you can pass a body, like "I am a body". Or, more interestingly, we could destructure the context here.
We can get the user ID from ctx.user, like this, and then pass it inside of the body here as userId. That way we know which user triggered this background job. And we have the workflowRunId here, so we can just return that back. So now we have the generateThumbnail protected mutation. Let's go inside of our form section.
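To recap before we wire up the UI, the whole mutation is roughly this sketch. The protectedProcedure helper and the ctx.user shape are assumptions based on how the rest of this project's procedures are built:

```typescript
// Inside videos/server/procedures.ts
generateThumbnail: protectedProcedure.mutation(async ({ ctx }) => {
  const { id: userId } = ctx.user;

  // trigger() starts the background job in Upstash and returns immediately
  const { workflowRunId } = await workflow.trigger({
    url: `${process.env.UPSTASH_WORKFLOW_URL}/api/videos/workflows/title`,
    body: { userId },
  });

  return workflowRunId;
}),
```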
And we have this dropdown menu item, AI-generated. So how about we copy the restoreThumbnail mutation here and change it to generateThumbnail. It really doesn't matter too much what we do in the success handler, but actually, since this will only trigger a background job, we should not invalidate or refetch anything here, right? Instead, what we are going to do here is the following.
We can add a toast with a title, "Background job started", and a description. Does the toast allow a title? Let's see: "Background job started", and then the description could be "This may take some time", like this. And now let's use the generateThumbnail mutation similarly to how we used restoreThumbnail here.
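In form-section.tsx, the wiring looks roughly like this. The tRPC hook and the sonner-style toast call are assumptions modeled on the restoreThumbnail mutation:

```typescript
// In form-section.tsx
const generateThumbnail = trpc.videos.generateThumbnail.useMutation({
  onSuccess: () => {
    // only signals that the job started; the actual result arrives later
    toast.success("Background job started", {
      description: "This may take some time",
    });
  },
});

// ...and on the dropdown item:
// <DropdownMenuItem onClick={() => generateThumbnail.mutate()}>
//   AI-generated
// </DropdownMenuItem>
```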
So on click here, generateThumbnail.mutate, and we don't pass anything inside for now. Now, once I click on this, something should happen. Let me just go inside and create a random video here; I forgot that we removed all of our videos. So let me just upload this demo again and wait a few seconds for everything that needs to be generated around it to get generated. I'm just gonna refresh here. It's loading, it's loading.
It looks like it's still preparing the video. Hopefully soon. There we go. Ready and ready, which means we have the thumbnail. Great.
So now if I click AI generated, this should fire a new background job. There we go. Background job started. This may take some time. And now if I go inside of Qstash, inside of workflow, I should see API workflows title right here a few seconds ago.
And you can see it successfully passed the user ID here. So now this background job all of a sudden in my workflows title has more sense all of a sudden because now we know that this context has the user ID which triggered this API call. That's something we did not know until now, right? So let's go ahead and learn how to actually extract the body from this context right here. What I like to do is I like to create an interface input type here and I'm gonna pass in the user ID string and how about we pass in video ID string as well.
Then I'm going to define input to be context.requestPayload as InputType, like this. And then in here I will be able to destructure my videoId and my userId from the input. What we will then be able to do is, for example, dedicate one of these runs to a long-running task. Finding a video is not exactly a long-running task, but it will do as an example. So this would be existingVideo.
We would do context.run, and we can call this step "get-video", like this. Inside of here, we get the data, which will be await db.select. Let's just import the db from our database module here; I will just reverse these two imports. So: await db, select everything from videos, from our database schema here.
Let's add a where clause with and() and two eq() checks. Inside of here, we first check if videos.id matches our videoId, and second, if videos.userId matches our userId. Actually, we should reverse these two. There we go.
Then let's return data[0]. And inside of this step, if we want to, we can check whether it exists. So if the first element in the array does not exist, we can throw new Error("Not found") here, and that will fail this step. Then, when we have the existingVideo here, we can continue using it.
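The payload typing and the first step together look like this sketch; db, videos, and the Drizzle helpers are imported the same way as elsewhere in the project:

```typescript
import { and, eq } from "drizzle-orm";

interface InputType {
  userId: string;
  videoId: string;
}

export const { POST } = serve(async (context) => {
  const input = context.requestPayload as InputType;
  const { videoId, userId } = input;

  const existingVideo = await context.run("get-video", async () => {
    const data = await db
      .select()
      .from(videos)
      .where(and(eq(videos.id, videoId), eq(videos.userId, userId)));

    // throwing fails the step and triggers QStash's retry mechanism
    if (!data[0]) {
      throw new Error("Not found");
    }

    return data[0];
  });
});
```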
We can console.log the existingVideo if we want to, or we can continue doing something in step two. Now, it doesn't make too much sense to just do a normal database fetch inside of a background job; that's not a long-running task. But you can already get the idea of what this could be useful for, and how we can separate all of these things into steps.
What I particularly think this is useful for are webhooks, especially our videos webhook, because we do so many things in it. We do thumbnail generation, then thumbnail cleanup, then we update the video, right? Those are three steps we could delegate to a background job if we wanted proper retries and ordering. And especially for things like this: what if the thumbnail generation fails?
Our entire webhook fails in that case. So that's something that we can dedicate to a background job. And then if it fails, it's simply going to repeat this step and only then it's gonna go to the next step. So how about we try this again? I'm gonna go ahead and prepare my runs here.
At the moment I only have two runs, so I'm gonna go ahead and try this again. I think I first have to modify the input here; I will just copy the one from restoreThumbnail. So let's also accept the id. Let's go back inside of the form section, and we will just pass it here.
There we go. So let's try clicking AI generated and now if I refresh here in a few moments we should be seeing the new background job. Let me try a hard refresh. Let's see maybe we've gotten an error here or something. I don't believe it's that.
Let me see. Oh, it looks like we do have an error: "Error submitting steps, QStash parallel step execution not found." Okay, let's see what's going on. Maybe it was because the moment I saved and ran the query, I didn't refresh, so it never got the proper input.
Yes, so the issue is that it never got the videoId, because I clicked save and didn't wait for hot reload to update, I believe; or maybe my procedure doesn't even send it. Yes, let's extract videoId and use input.id here, from the input, like this. There we go. And this is an accidental import, so let's remove it.
Yes, so this one, for example, keeps failing, right? This get video step, for example, is now constantly failing. And if it fails more than three times, which I think is the default, maybe not, maybe I'm incorrect. Nevertheless, I could define my retries here. So I could say, hey, only try it three times.
Don't try more than that, right? And once that happens, once it fails completely, it will end up here in the DLQ, which is short for dead letter queue. It is descriptive, and it gives you a separate view of all of the steps or jobs which failed, so that you can manually go here, take a look at why they failed, see the body that they had, and then try to manually rerun them. And if you have a task which you know is incorrect, like this one, you can always close it yourself, which cancels the workflow. So now that we are properly passing the videoId from our input.id, let's refresh this entire page to ensure we have three of those.
And let's try running another one. So this is some unrelated issue, maybe some hot reload. Yes, I think this is something hot reload related. What we're gonna do here is we're gonna try this again. AI generated, background job started.
There we go. This one was cancelled. Exactly what we expected. And in a few seconds we should be getting a new one. But it looks like we do have some error here.
So let's go ahead and debug this one together. "This is an Upstash Workflow error thrown after a step executed. It is expected to be raised. Make sure that you await each step." Perhaps I did something wrong again here.
It's specifically here with existing videos. So maybe I'm doing something wrong and I didn't even notice it here. Which is funny because all I'm doing is trying to demonstrate the video fetching here. Let's see, videos ID, video ID and videos user ID from user ID. I think this should work fine.
And it looks like it does work fine. There we go. So get video right here. And you can see that it got the entire video. So I'm not exactly sure what this error was here.
Maybe this video actually came from something from before. Right? Maybe that's what's going on here because this one looks like 200. Yes, I think this came from something else. Great, so I think that we've now kind of established what the background jobs will be used for.
We've secured them, and we learned how to trigger exactly the job we need to trigger. So now let's make this workflow actually do what it needs to do. Let's use the generateThumbnail one, which I know sounds funny now, but we've already called it generateThumbnail; let's still make it use the workflows/title route, and let's use it to update the title of a video. How about we do that?
Right, so we're going to break down the steps here. I'm gonna call this one "get-video": we get the existing video here, throw an error if we can't find it, and otherwise return the existing video. That's gonna be my first step.
Now, my second step here will be "update-video". So I'm gonna go ahead and simply do await db, select, my apologies, db.update(videos).set({ title: "Updated from background job" }), with a where clause, and let's just copy these two eq() checks here. And make sure this is asynchronous. We don't need the last step, right? So we now have the first step, which gets our video.
So we could use the fetched video here. Yeah, I mean, we don't really need to; it's more for demonstration purposes. But if you want to, you can use it: in the where clause you can use video.id and video.userId from the first step. I just have to await the context.run to get the video first.
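So the second step ends up as a sketch like this, reusing the row returned by the first step instead of the raw request payload:

```typescript
// video is the row returned by the "get-video" step above
await context.run("update-video", async () => {
  await db
    .update(videos)
    .set({ title: "Updated from background job" })
    .where(and(eq(videos.id, video.id), eq(videos.userId, video.userId)));
});
```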
Just an example, right? This way you no longer rely on the request payload here, but on what you actually fetched. So let's go ahead and try this now. After I call my new background job, the title should change for this video. So I'm gonna click AI generated.
This started a background job and after that background job successfully runs, it should change the title of my video. There we go. This is my latest run here and we can see that we successfully got the video and it successfully updated the video. So let's go ahead and refresh this page now. There we go, updated from background job.
And now I'm pretty sure you understand all the steps we need in order to use AI to do this step for us. So let's go ahead and let's go to OpenAI platform. So make sure to go to the platform and not to their chat instance and go ahead and log in. So I've created a brand new account for this one just to confirm that unfortunately there are no free tiers. You will have to add something.
So I've added $5 here. And now what you have to do is you have to find your API keys. Let me go inside of API reference. I'm not sure how to use this interface to be honest. Let's go into settings, API keys and let's create a new secret key.
I'm going to call this newtube, under my default project, and I will select all permissions. I will copy this, and now let's store it inside of our environment: OPENAI_API_KEY, and let's paste it here. There we go. Now what we have to do is go inside of, my apologies, inside of the documentation here, where you can find integrations with OpenAI. Inside of here, you can see how you can make a context.api call specifically to OpenAI. So there are two ways you can do it.
You can use their SDK, or the context.api.openai.call, or if you want to, you can... let's go ahead and see. I'm not sure where... maybe this is the example. Let's see.
You can just do a normal HTTP request, but I can't find an example now. They also support Anthropic, Resend, and the Vercel AI SDK. But this is actually quite interesting: I don't think this was in the documentation when I was building this. They've added an example for using DeepSeek.
This is very interesting, because maybe you can do it for free. If you really have a problem with adding a credit card, you can go to DeepSeek's API platform and try creating a free account. I'm not sure if they have any limits; I have no idea how their API works, so I can't guarantee that it's free, but you could try using that instead. And it seems you can still use the OpenAI call here, because they support the exact same API.
Great, but since I've already decided on OpenAI, I'm going to continue with it. I'm going to copy this and go inside of my title route here. I'm going to add another step, where I define const generatedTitle, and this will be await, and then this.
Basically, I'm going to await context.api.openai.call, and I'm going to name this step "generate-title". This is the same as context.run: the name of the step, and then the configuration. Let's change the token to process.env.OPENAI_API_KEY, with a non-null assertion at the end. The operation is "chat.completions.create".
Let's see, is this strictly typed? It is, so you should be able to see everything they have here. There we go, chat.completions.create. The models, however, look like they are not strictly typed, so you have to be careful here and make sure you type the model name exactly.
And then inside of here, you can give a role to the system and then what the user asks. So let's, for example, try that. The role for the system here should be a prompt which basically tells OpenAI what it has to do. If you have access to my source code, you can simply find the title system prompt; otherwise, you can use the public gist where I've added these prompts, and you can find the title system prompt there.
And let's add it here. So, the title system prompt: "Your task is to generate an SEO-focused title for a YouTube video based on its transcript. Please follow these guidelines", right here. So just a simple system prompt, and that's what we're going to give to the system role. And then the user message can say: "Hi everyone, in this tutorial we will be building a YouTube clone." We are basically mocking a transcript of a video.
Let's try if this works. This is not a real transcript so I'm not sure how well this is going to work but we can try. And what this can do, this OpenAI call can do now, You can immediately destructure the body from here, like this. And then you can update the video based on the results. So for example, let's get the title here to be Body Choices, first one.
And then this will be optional. So message, optional. Actually, content looks like it always exists. There we go. Let's pass that as the new title here. The issue is that this can be null, so let's default it. Do we have our first... We have the video here.
Great, this is actually useful now. Let's just fall back to our original title in case the generated one is null, for whatever reason. So if you've added your OpenAI key, this should now work. Let's try and see. Right now it says "updated from background job", but if I go ahead and click AI generate, and if I just focus on this and wait a few seconds, we should see a new title now.
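The fallback logic can be sketched as a small pure function. The response shape below mirrors OpenAI's chat completion body; pickTitle is my own illustrative name, not one from the project:

```typescript
// Minimal shape of the chat completion response we care about.
interface ChatCompletionBody {
  choices: { message: { content: string | null } }[];
}

// Take the first choice's content; fall back to the video's current title
// when the model returns nothing (null content or no choices at all).
function pickTitle(body: ChatCompletionBody, currentTitle: string): string {
  return body.choices[0]?.message.content ?? currentTitle;
}
```

The nullish coalescing operator handles both the null-content case and the missing-choice case in one expression.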
There we go. So it finished a few seconds ago, and it looks like it successfully generated the title in this step, as you can see. So we can see the response. I'm not sure how good it will be. Where can I read the message here?
Let's actually see what it did. Did it do something useful? There we go! "Build a YouTube Clone: Step-by-Step Tutorial". It's actually amazing that it generated such a good title from this very simple transcript.
Great, so that is basically what we wanted to achieve with these background jobs. I hope that now they make more sense, right? Because this kind of thing is a potential long-running task. I think even this one is quite simple, so it could be awaited. But imagine you're using this to generate a thumbnail.
Imagine you're using this to generate parts of the video, right? Any long running task that you can imagine is better off put inside of a background job like this one. Excellent. So now that we have this, let's go ahead and let's actually get our transcript, right? And the good news is we have the transcript.
You just don't know how to get it. So in order to get the transcript, we first have to get our video. After we get our video, let's create the transcript, and let's do that in a background job as well. So context.run, "get-transcript", async.
And we're going to build the trackUrl: https://stream.mux.com/, then the video's muxPlaybackId from above, slash text, and then we need the track ID, video.muxTrackId, dot txt. So this is the important thing, right? The muxTrackId gets generated inside of our videos webhook when we get video.asset.track.ready. Only then do we generate the track ID and add it here. And remember, this webhook only fires if we enabled the subtitles, which we do inside of our video procedures here.
So we added this part. That's the first requirement for that webhook to fire. The second requirement is that there is actual audio, and I think in this case English audio, in the video; only then will you get this part right here. So in order to correctly test this, I would highly recommend using a video with English audio.
If you remember, in one of my previous chapters... I'm not sure which one it is. Is it this one? There we go. You can visit tinyurl.com/newtube-clip, where I gave you a demo mp4 which I'm using for my jobs here. So find that video, or any other video, and confirm inside of your studio that you have a muxTrackId.
So basically I want you to confirm these things so you don't get confused as to why mine works and yours doesn't, right? So be careful and confirm that whatever video you're testing this on has a muxTrackId, and you should also have muxTrackStatus set to ready. If subtitles are ready, you should be able to do what I'm doing here, which is generate a text file from stream.mux.com. Now we have to actually fetch this. So let's do await fetch(trackUrl), and let's return response.text() (response.response is a typo).
There we go. So we now have this right here. We can call this finalTranscript, or let's just call it text, like this. If there is no text, throw new Error("Bad request"). Otherwise, return the text, which will ultimately be the transcript.
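Putting the pieces of this step together, here is a hedged sketch. buildTrackUrl is pure and follows the stream.mux.com pattern shown in the Mux docs; fetchTranscript assumes a global fetch (Node 18+) and the function names are mine, chosen for illustration:

```typescript
// Build the Mux text-track URL: playback ID, then /text/, then track ID.
function buildTrackUrl(playbackId: string, trackId: string): string {
  return `https://stream.mux.com/${playbackId}/text/${trackId}.txt`;
}

// Fetch the subtitle track and return its text; throw when it comes back
// empty, since no subtitles means there is no transcript to work with.
async function fetchTranscript(playbackId: string, trackId: string): Promise<string> {
  const response = await fetch(buildTrackUrl(playbackId, trackId));
  const text = await response.text();
  if (!text) {
    throw new Error("Bad request");
  }
  return text;
}
```

Throwing inside the step is what lets the workflow retry it, which matters here because the fetch can race the subtitle generation.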
Great. And now inside of here, you can pass the transcript like this. There we go. And inside of here, you can move the title, I believe, and do the same thing: if for whatever reason the title could not be generated, you can throw a new error here and simply say "Bad request".
There we go. Actually, it might be better to do this outside of the step, because throwing an error inside will cause it to retry, but it couldn't do anything new with the result it got. So perhaps throwing it here would be a better idea, right? I'm not exactly sure how this one gets retried, but since they have this built in, I think that if you return the body and it errors, it will just retry on its own. So the first thing I want to confirm is that something like this works.
So let's go ahead and just go inside of Mux here and I want to go inside of docs just to show you exactly where I found this URL so it's not just magic for you. Let's go ahead. I think they actually have a section called AI workflows, automatic translation, summarizing and tagging. So go here and then inside of here, they should teach you how to retrieve the transcript file. So if you click here, we've added this.
And now we should be able to retrieve the file. There we go. So this is how you retrieve it. Stream.mux.com, playback ID, text, and then track ID. Of course, if it is a signed asset, it will also require a token.
That's not the case for us. So that's exactly what we did here, in case you're wondering where I got that from. Perfect. Let's try it out. So I know I have some audio here, but honestly I don't even remember what I talk about in the video. Let's see what the AI will generate.
I know it's funny we use the thumbnail button, but that's because it's the only AI button we have, right? So let's go ahead and wait a few seconds for this new job to finish and see if our title gets updated. There we go. Looks like it was a success. And you can see how Get Transcript works.
This is what I was interested in. "This is a short clip with English audio so we can demonstrate Mux upload and subtitle generation. Thank you." So this is exactly what I say in the video, I believe. And let's see what my new title is now.
Mux upload and subtitle demo. I'm so impressed by this. This is so cool. The way we use transcript, a background job, all of that combined to generate a title. I think that's super cool.
So what we have to do now is add dedicated AI buttons here, here, and then later AI thumbnail generation. So let's go ahead and do the title one. Let's start by going back inside of the procedures, inside of the videos procedures; let's find the generateThumbnail one, copy and paste it above, and call this one generateTitle. It's going to go to api/videos/workflows/title; nothing else should really be changed. And now let's go back inside of our form-section component. Let's copy this generateThumbnail and rename it to generateTitle. Everything here can stay the same.
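Since the title, description, and (later) thumbnail procedures differ only in the workflow endpoint they trigger, that difference can be captured in one tiny helper. This is purely illustrative; workflowEndpoint and its parameters are my own names, not identifiers from the project:

```typescript
// Hypothetical helper: derive the workflow route URL for a given AI job.
// Each tRPC procedure would trigger the workflow at one of these URLs.
function workflowEndpoint(
  baseUrl: string,
  name: "title" | "description" | "thumbnail"
): string {
  return `${baseUrl}/api/videos/workflows/${name}`;
}
```

Keeping the route names in a union type like this also makes a typo in the endpoint a compile-time error rather than a silent 404.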
And now we have to use this generateTitle somewhere. So let's find our first input, which is the title. There we go: "TODO: add AI generate button". I'm going to add a div here, remove the TODO, and give it a class name of "flex items-center gap-x-2". And then below that, I'm going to add a button with a SparklesIcon.
And in here, I'm going to give this a size of icon, a variant of outline, a type of button (very important), and a class name of "rounded-full size-6". Then I'm going to lower the size of the SVG using this selector. And onClick here, I'm going to call generateTitle.mutate and pass the video ID. And it's going to be disabled in case generateTitle is pending.
Then we can also add a nice little check: if generateTitle is pending, we render a Loader2Icon with the class name "animate-spin"; otherwise, we render the SparklesIcon. There we go. Now we have a nice AI button here.
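The button's render logic boils down to a small pure function. This sketch uses icon names as plain strings just to show the branch; the real component renders the lucide-react icons directly, and aiButtonState is a name I made up for illustration:

```typescript
// Sketch of the AI button's state while the tRPC mutation runs:
// disabled while pending, spinner instead of sparkles.
function aiButtonState(isPending: boolean) {
  return {
    disabled: isPending,
    icon: isPending ? "Loader2Icon" : "SparklesIcon", // Loader2Icon gets "animate-spin"
  };
}
```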
So if I click here, it says "Background job started. This may take some time." I'm not sure how different the new title will be, because this one is already relevant to our transcription. So now that we have this, let's copy the entire content inside of this form label, including the title text, and let's find the description and do the same thing.
And I will just change this to description. Like this. So now the description will have it as well. Let me refresh to see, maybe we got a new video. We did.
"Mux Upload and Subtitle Generation Demo". So it created a new one regardless. Great. So we have this. Now let's go inside of the procedures and copy and paste generateTitle.
This one, let's make generateDescription, and let's make it go to api/videos/workflows/description. Like this. And now we have to create the description workflow. But that's going to be quite easy, because it is exactly the same.
So inside of videos workflows, let's change this to description, and let's change this. Well, you have to find the prompt from my assets here. So, the description system prompt. You can write your own if you want to. There we go.
And simply use the description system prompt here. This will be "generate-description"; then call this description, and simply update the description. Like this.
So that's our new background job with the new description system prompt. And now we have to go inside of our form section. Our description button currently uses generateTitle. Let's change that by copying and pasting generateTitle and renaming it to generateDescription. There we go.
And now let's go ahead and replace this. So this one is for the title, that's correct, but for my description, these three should be generateDescription. There we go. Let's go ahead and try it out.
So I'm going to click this now. "Background job started." Perfect. And now I'm just going to wait and see if my new job for the description will work as well. So this should show /description in a few seconds here. There we go.
Run success on /description here. Let's go ahead and try it out. So if I refresh this: "This video demonstrates how to upload a video to Mux and generate subtitles." Amazing. Exactly what we expected.
So the only thing that's left here is AI-generated thumbnails, but I will leave that for another chapter, simply because it's a whole new component that we need, plus another prompt. And this chapter is already an hour long, but I think you got the gist of how we're doing this. We will, of course, finish this, just in the next chapter. And one thing that I would highly recommend doing is also disabling these buttons if the video itself does not have a muxTrackId, like this.
So if it doesn't have a muxTrackId, you can completely prevent it from even trying to generate something. The only videos which are not going to have a muxTrackId are videos whose subtitle status is not ready; for example, that could be a video that you just uploaded, right? So let's go ahead and select this demo.mp4 here. And right now you can see it's disabled, and we want it to be disabled because it has no subtitles yet.
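The combined disabled check can be sketched as one predicate. VideoLike and isAiButtonDisabled are illustrative names; the real code reads video.muxTrackId off the fetched video:

```typescript
// Minimal shape of the video record this check needs.
interface VideoLike {
  muxTrackId: string | null;
}

// Disabled while the mutation is pending, or when the video has no
// muxTrackId yet (subtitles not ready, so no transcript to generate from).
function isAiButtonDisabled(video: VideoLike, isPending: boolean): boolean {
  return isPending || !video.muxTrackId;
}
```

Guarding on muxTrackId up front saves a round trip that would otherwise fail inside the workflow's get-transcript step.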
If I refresh now, there we go: since subtitles are ready, I can already do that. It looks like I already have my track ID, which is interesting, because I always expected the video status to be ready before the subtitles. The reason I say it's interesting is because in my videos webhook, in the track-ready handler, I depend on the asset ID, but I only assign the asset ID in... let's see, when do I do it?
So, in video.asset.created. Okay, so let's hope that video.asset.created fires first, because if video.asset.track.ready fires before my video.asset.created, I never actually have the asset ID to look up the video with. But it looks like it's working, right? So if I go ahead and try both of these... yeah, I can just start both of them, right?
That's no issue. So if I run both of these now, it should work. But you saw how, in the beginning, both of them were disabled because we did not have any... there we go, the title was generated. And I believe the description will be generated in a few moments as well. We can, of course, keep track here.
There we go. This is the one looks like it's still running. It's generating the description. Let's refresh. Maybe it's done.
Looks like it's still running. All right, I'm going to give it some time to run. Maybe there is an issue. Maybe we're doing something incorrectly here. But I think everything is correct.
It has both the video ID and the user ID. So let's refresh. Looks like it's taking some time. I'm gonna pause the video and unpause when it finishes. So it's been two minutes and it's still running.
So I am going to debug why this is happening. Perhaps for you it works perfectly, but looking at my logs here, only the title has been hit so far. It could be that I did something wrong when I fired them simultaneously. So I'm going to check out the documentation on that part, but I will also leave it running. Perhaps the fact that we are on the free tier and ran two queries at the same time caused this one to be queued for later.
So I'm going to go ahead and end the chapter here. And let's just see if we did everything we were supposed to do. So we integrated Upstash workflow. We triggered a background job. We've set up OpenAI SDK.
We've generated the title and the description, but we will leave the thumbnail for the next chapter. Great, great job.