In this chapter we're going to focus on executing our nodes. In the previous few chapters we focused on building the UI for each node and for the editor itself, but we never actually executed or used any data from those nodes. So that's what we are going to be focusing on today. Let's go ahead and first improve the props for our HTTP request node so it's easier to work with its data. So inside of features, executions, components, HTTP request, open both dialog and the node.tsx.
And let's go ahead and see the problem. So the problem is that we are passing these default form values in three separate props instead of just one. So let's go ahead and fix this. The first thing I want to do is modify HTTP request node data and make it so that I remove this last part. So we actually never even used this, I just added it here for flexibility.
Once we remove this, our HttpRequestNodeData matches exactly what our form provides: an input for the endpoint, a select dropdown for the method, and an optional body. Now that we've fixed that, let's also fix something in handleSubmit here. No point in handling these fields one by one; we can just spread values, it's much simpler. Now let's rename the form type that we are exporting from here. Go inside of the dialog, find the form type, and rename it to HttpRequestFormValues, like this.
Once we do that, let's go back inside of node.tsx and import it there, and make sure you use it for the values. Now let's modify the props for the dialog component: remove these three separate props and instead add defaultValues, an optional Partial of HttpRequestFormValues. Let's make sure we use them here as well, and modify the default values here to match.
Let's see what the problem with defaultValues is. Let's give it an empty object as the default; this way we don't have to sprinkle optional chaining everywhere. Basically, we're just providing a fallback for a better experience.
And we have to do exactly the same for the form reset, so let's do it here as well. Perfect. For the dependency array we can now just use defaultValues. Once that's done, we can go back inside of node.tsx and simplify this a lot: we can now just pass the node's data as defaultValues, as simple as that. That's the first task finished. Let's go ahead and mark it as finished. There we go.
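Here's a minimal sketch of the simplified dialog contract. The field names endpoint/method/body come from the form we built earlier; the exact method union and the GET fallback are assumptions for illustration.

```typescript
// Hypothetical shape of the form values; the method union is an assumption.
type HttpRequestFormValues = {
  endpoint: string;
  method: "GET" | "POST" | "PUT" | "PATCH" | "DELETE";
  body?: string;
};

// One defaultValues prop instead of three separate default* props.
interface HttpRequestDialogProps {
  open: boolean;
  onOpenChange: (open: boolean) => void;
  onSubmit: (values: HttpRequestFormValues) => void;
  defaultValues?: Partial<HttpRequestFormValues>;
}

// The "empty object fallback" idea: resolve partial defaults into full values
// so the form (and form.reset) never sees undefined fields.
function resolveDefaults(
  defaultValues: Partial<HttpRequestFormValues> = {}
): HttpRequestFormValues {
  return {
    endpoint: defaultValues.endpoint ?? "",
    method: defaultValues.method ?? "GET", // GET default is an assumption
    body: defaultValues.body ?? "",
  };
}
```

In the dialog you'd feed the resolved values to both useForm's defaultValues and the form.reset call, with [defaultValues] as the effect dependency.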
So the second thing we have to do is display the execute button. If you take a look at any of your workflows, there is currently no way of executing them, right? For example, I have this super simple manual trigger and then an HTTP request. GET, POST, whatever it does, it doesn't matter. But how do I even execute this? Even if I save it, nothing really happens.
So the first thing we have to do is show the execute button, but only if we have a manual trigger. Luckily for us, we can do that quite easily. Let's go inside of src/features/editor/components and add a new component called execute-workflow-button.tsx. I'm going to import Button from components/ui/button and a flask icon from lucide-react, and then very simply export const ExecuteWorkflowButton. I'm going to create the props here as a very simple workflowId.
Let's extract it here, and then I'm simply going to render that Button with the text "Execute workflow" and the icon we imported above. Let's give the icon a className of size-4, give the button a large size, an onClick with an empty arrow function, and disabled explicitly set to false so we remember to change it later to something dynamic. Now that we have this, we can go inside of editor.tsx, and in order to render this conditionally we first have to create a constant hasManualTrigger. Let's use useMemo here so it doesn't recompute too often, so make sure to import that from React. Fill the dependency array with nodes, because that's what we're going to be using here. Then return nodes.some, checking whether node.type matches the manual trigger member of the NodeType enum from our Prisma schema, which we can import from @/generated/prisma.
Just make sure you've imported NodeType; let me just show you, from the generated Prisma client. And now that you have hasManualTrigger, you can duplicate this Panel and use the hasManualTrigger boolean to conditionally render, in the bottom-center position, our new ExecuteWorkflowButton, passing the workflowId prop. You should have the workflow ID here, so this should work.
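The check itself is a one-liner; here's a self-contained sketch. NodeType is a local stand-in for the enum imported from @/generated/prisma, and the MANUAL_TRIGGER spelling is an assumption about the schema.

```typescript
// Stand-in for the Prisma-generated NodeType enum (spelling assumed).
const NodeType = { MANUAL_TRIGGER: "MANUAL_TRIGGER" } as const;

// The body of the useMemo: does any node in the editor have a manual trigger?
function hasManualTrigger(nodes: { type: string | null }[]): boolean {
  return nodes.some((node) => node.type === NodeType.MANUAL_TRIGGER);
}
```

In editor.tsx this sits inside useMemo(() => nodes.some(...), [nodes]) so it only recomputes when the nodes array changes.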
Let's fix this by using center. There we go. And now you will see that whenever I have a manual trigger, I also have the execute workflow button. If I delete the trigger, the button disappears as well.
Since I didn't click the save button, if I refresh, the trigger is still here and so is the execute workflow button. Perfect. So that's another thing we can check off; let's go ahead and do that. Now we actually have to create the Inngest function for execution, because right now clicking execute workflow doesn't do anything.
Let's start by defining the background job. We're going to revisit src/inngest/functions.ts, my apologies. Let's remove all of this here because we're no longer going to need it: the AI generation, Sentry, all of it. And we can remove every single thing within the actual step, even the return.
Now let's rename this function from execute to executeWorkflow, and change the ID to "execute-workflow". For the event, let's follow the structure "workflows/execute.workflow". For now this doesn't have to do anything; we can just do await step.sleep, call it "test", for five seconds. Perfect. Now that we have executeWorkflow, we also have to revisit app/api/inngest/route.ts: import executeWorkflow there and add it to the served functions. Perfect, that should resolve the error that just appeared. Now let's revisit our tRPC procedures, more specifically the procedures for the workflows.
So inside of the workflows server router, let's go all the way to the top and create execute. It's going to be a protectedProcedure with a simple input that receives an id of type string, and it's going to be a mutation. The mutation itself will have an asynchronous function, so let's prepare it like that.
And let's also prepare the input and the context. Now, what we do here is super simple. I said we won't use the input or the context, but my apologies: I just realized we do have to fetch the workflow, so let's do it while we're here, using await prisma.workflow.findUniqueOrThrow. The where clause uses id: input.id and the user ID from the context's auth. Perfect. And let's return the workflow.
And in between those two we're going to await inngest.send, so make sure you import inngest from the Inngest client. Let me quickly remind myself how this is called... there we go: it is inngest.send, and the property should be name, as in the event name. So let's copy the event name from the function and paste it here.
And then we don't even need the data for now. So just a super simple execute procedure which calls a background job; we actually did this already when we explained background jobs. Now let's go inside of features/workflows/hooks/use-workflows, and let's find and copy useUpdateWorkflow, and paste it.
Let's rename this hook to useExecuteWorkflow. You don't need the queryClient, so remove it, and you don't need any invalidation. The success message will say "executed", the error message "failed to execute", and it will call workflows.execute, so make sure to modify that.
The onSuccess handler uses data and data.name, because in our workflows router we return the fetched workflow. If you don't return the fetched workflow, you will see this fail because it won't have any data, so make sure you return the workflow from our new execute procedure. Perfect. We now have useExecuteWorkflow. Now let's wire up the ExecuteWorkflowButton we started developing. Let's define the hook: const executeWorkflow = useExecuteWorkflow().
You can import it from features/workflows/hooks. Let's do const handleExecute (or handleSubmit, however you want to call it), and just call executeWorkflow.mutate, passing in id: workflowId. Modify the onClick to call handleExecute, and the disabled prop to be executeWorkflow.isPending. There we go. Before you start this, make sure you have both Inngest and Next.js running.
I'm doing this with npm run dev:all because I've set up npm-run-all, but in your case... I think we also defined some package.json scripts here; let me quickly check. Yes, you can use the inngest dev script, or if you didn't set that up you can always just run the Inngest CLI with npx. All of them will work.
Perfect. So just make sure you have all of them running. I will now refresh my Next.js app and my Inngest development server. Let's check it out: I have no runs yet.
And when I click execute workflow, I get a success message and I have something running and it's just a sleep test for five seconds. Perfect. Which means that we now officially have something happening when we click on the execute button. Perfect. But right now what we should be focusing on is this function.
So it needs to somehow fetch the workflow that's being executed, it needs to fetch all of its nodes, it needs to sort them topologically, and then it needs to run a specific type of request depending on the type of the node. Those are our next goals. Let's start by checking whether we even have enough information to fetch anything. Inside of executeWorkflow, do const workflowId = event.data.workflowId, and if the workflow ID is missing, throw new NonRetriableError. Make sure to import this from "inngest".
So when you throw this error, Inngest will not retry. This is because there is nothing to retry if the workflow ID is missing. So just "Workflow ID is missing", as simple as that. We can't proceed further.
We have no idea what to execute in this case. Let's not waste any resources here. So if you go ahead and try now this should fail. If I click execute workflow here, there we go. Immediately it fails and you can see there is just a single attempt and no retry.
"Workflow ID is missing." If you didn't use NonRetriableError and just threw a normal error, you would see different behavior. Let me click this: you can see it's running, it's failing, and it will now continue to attempt this three more times for no reason at all, right? We know that if the ID is missing once, it's going to be missing for the next three attempts as well.
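The guard can be seen in isolation as a pure function. NonRetriableError is a local stand-in here; in the real function you import it from "inngest".

```typescript
// Local stand-in for Inngest's NonRetriableError.
class NonRetriableError extends Error {}

// Fail fast when the event payload is malformed: retrying won't fix it.
function requireWorkflowId(data: { workflowId?: string }): string {
  if (!data.workflowId) {
    throw new NonRetriableError("Workflow ID is missing");
  }
  return data.workflowId;
}
```

The point of NonRetriableError is exactly this distinction: payload problems are permanent, so one attempt is enough; transient problems (network, database) should use a regular Error and let Inngest retry.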
Perfect. Now that we have that, we can finally get rid of the sleep, and instead do const nodes = await step.run("prepare-workflow", ...) with an asynchronous function. First things first, let's fetch the workflow with await prisma.workflow.findUnique (of course, import prisma from lib/database), where id is workflowId, and let's also add include: { nodes: true, connections: true }. And if there is no workflow, throw new NonRetriableError("Workflow not found").
Now, that's why I didn't use findUniqueOrThrow: a thrown error there would just make the step retry. But you could decide this for yourself; technically findUnique itself could fail if the database is unreachable, and in that case a retry is exactly what we want, so maybe we shouldn't treat every failure as non-retriable. Yeah, maybe we can do findUniqueOrThrow here after all: if the database connection is bad, it will just retry, and that's actually a good thing. Let's keep it like that. And now let's just do return workflow.nodes.
As simple as that. Like we just want the nodes. And let's go ahead and return nodes. Perfect. So a super simple execution here.
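Here's the prepare-workflow step body sketched with the Prisma lookup injected as a function, so the "not found" handling runs without a database. In the project this sits inside step.run("prepare-workflow", ...).

```typescript
// Local stand-in for Inngest's NonRetriableError.
class NonRetriableError extends Error {}

// Trimmed stand-in for the Prisma result with its includes.
type WorkflowRecord = {
  nodes: { id: string; type: string }[];
  connections: { fromNodeId: string; toNodeId: string }[];
} | null;

async function prepareWorkflow(
  workflowId: string,
  findWorkflow: (id: string) => Promise<WorkflowRecord>
) {
  const workflow = await findWorkflow(workflowId);
  if (!workflow) {
    // A workflow that doesn't exist won't appear on retry.
    throw new NonRetriableError("Workflow not found");
  }
  return workflow.nodes;
}
```

This shows the findUnique-plus-check variant; with findUniqueOrThrow, a failed query throws a regular error and the step retries, which is what we settled on for transient database issues.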
Now let's modify our execute procedure to actually pass that: data: { workflowId: input.id }. Make sure you don't misspell workflowId; it's used exactly like this. Let's try it now.
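The contract between the execute procedure and the background job can be made explicit with a tiny helper. The event name matches the function's trigger, "workflows/execute.workflow".

```typescript
// The payload shape the execute mutation sends to Inngest.
function buildExecuteEvent(workflowId: string) {
  return {
    name: "workflows/execute.workflow",
    data: { workflowId },
  };
}

// In the mutation, conceptually: await inngest.send(buildExecuteEvent(input.id));
```

Keeping name and data together like this makes it harder to misspell workflowId on one side of the contract but not the other.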
So let me just try and refresh here. Now I will click execute workflow, and let's see what's going on: prepare workflow, then finalization. Let's try and open this. Can I close the sidebar?
I can. Perfect. And here we go. We should have two nodes. First one is the HTTP request and the second one is the manual trigger.
Perfect. Amazing. That seems to be working just fine. What we should do now is somehow sort these nodes. Why do I say we have to sort these nodes?
Well, look at it. Okay, this one is super simple, right? In a linear example like this we could just sort by date of creation. But what if we branch out?
What if we do this, and this? Right. That's why we need a topological sort: it can handle this type of branching. For now, please keep it simple like this so you get similar results as me, right?
Let's work on the topological sort now. In order to do that, we need one helper package installed, called toposort. There's a bunch of packages that help with this, but I found this one the simplest to use. Once you have it installed, go inside of the inngest folder and create utils.ts, and import toposort from "toposort".
Looks like we also need the types for it, so let's install those too: npm install --save-dev @types/toposort. Let's wait a second... perfect. Now let's export const topologicalSort, accepting a first parameter nodes, an array of the Node type from @/generated/prisma, and a second parameter connections, an array of Connection from the same place. It will return an array of Node.
An array of nodes. First things first: if there are no connections, return the nodes as-is, meaning they are all independent. So let's check if connections.length is equal to zero and return nodes. What is this case? Well, look at this example: can I remove a connection? I can't; so these two nodes are simply not connected. If we try executing them... well, you can decide for yourself what should happen. Should anything even happen in this case? Maybe not.
Maybe you should just throw an error, right? But we're handling that case like this for now: if there are no connections, we just return the nodes back. There's nothing else we can do; we have no idea what the actual relationship between these nodes is.
Otherwise, let's create the edges array for toposort. const edges will be a matrix of [string, string] pairs: connections.map over each individual connection and return an array where the first element is fromNodeId and the second is toNodeId. Next, let's add nodes with no connections as self-edges, to ensure they're included. const connectedNodeIds will be a new Set of strings.
Then a simple for loop: for each connection of connections, do connectedNodeIds.add(connection.fromNodeId), and then the same for toNodeId. Now a simple for loop over our nodes: for each node of nodes, if connectedNodeIds does not have node.id, push it to our edges array: edges.push([node.id, node.id]).
So those are the unconnected nodes, added as self-edges. And now let's finally perform the topological sort. let sortedNodeIds be a string array, then open a try block: sortedNodeIds = toposort(edges).
Let's remove the duplicates introduced by the self-edges, which is this part here. We'll do that by simply going through a Set: sortedNodeIds = [...new Set(sortedNodeIds)]. Then let's catch the error: if error instanceof Error and error.message.includes "cyclic".
Make sure you don't misspell this like me. Cyclic. So if this happens, it means that this type of node array that we received and their connections are cyclic, meaning we cannot create a linear sort from them. So because of that we need to throw a new error here, workflow contains a cycle. As in something is wrong, this is not linear, we can't actually do this.
Otherwise, just rethrow that error. And now, finally, let's map the sorted IDs back to node objects: const nodeMap = new Map(nodes.map((n) => [n.id, n])), and then return sortedNodeIds.map, getting each id and returning nodeMap.get(id). Use an exclamation mark here for a non-null assertion, and then do .filter(Boolean). If you have Biome turned on, this will most likely give you a warning.
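Here's the whole util as a self-contained sketch. In the chapter we call the toposort package; here a minimal Kahn's algorithm stands in for that call so the block runs on its own, and Node/Connection are trimmed stand-ins for the Prisma types.

```typescript
type Node = { id: string };
type Connection = { fromNodeId: string; toNodeId: string };

function topologicalSort(nodes: Node[], connections: Connection[]): Node[] {
  // No connections: all nodes are independent, return them as-is.
  if (connections.length === 0) return nodes;

  // Build the edge list, then include unconnected nodes as self-edges.
  const edges: [string, string][] = connections.map((c) => [c.fromNodeId, c.toNodeId]);
  const connected = new Set<string>();
  for (const c of connections) {
    connected.add(c.fromNodeId);
    connected.add(c.toNodeId);
  }
  for (const n of nodes) {
    if (!connected.has(n.id)) edges.push([n.id, n.id]);
  }

  // Kahn's algorithm over the edge list (the toposort package consumes the
  // same input; it's swapped out here so the sketch has no dependencies).
  const ids = [...new Set(edges.flat())];
  const indegree = new Map(ids.map((id) => [id, 0]));
  for (const [from, to] of edges) {
    if (from !== to) indegree.set(to, (indegree.get(to) ?? 0) + 1);
  }
  const queue = ids.filter((id) => indegree.get(id) === 0);
  const sorted: string[] = [];
  while (queue.length) {
    const id = queue.shift()!;
    sorted.push(id);
    for (const [from, to] of edges) {
      if (from === id && from !== to) {
        indegree.set(to, indegree.get(to)! - 1);
        if (indegree.get(to) === 0) queue.push(to);
      }
    }
  }
  // Leftover nodes with nonzero indegree mean a cycle.
  if (sorted.length !== ids.length) throw new Error("Workflow contains a cycle");

  // Map sorted ids back to node objects.
  const nodeMap = new Map(nodes.map((n) => [n.id, n]));
  return sorted.map((id) => nodeMap.get(id)!).filter(Boolean);
}
```

One difference from the package version: Kahn's emits each id once, so the Set dedupe we added for toposort's self-edge output isn't needed here.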
This is fine. We are not going to have too many of these cases, but in this one it just helps simplify the code. So now our nodes and their edges should be sorted. The only exceptions are if we do a cycle, which shouldn't be able to happen because we should never be able to do this and then this. You can see how our UX does not allow us to do this.
That's because on the triggers, we removed the edge here. So that cannot happen. But still, even if somehow someone breaks that, we are going to take care of it here by throwing an error. So I think this is okay now. Let's go ahead and try it now inside of routers.
I think that, I mean, I'm not sure if this is a good example. Maybe I'm not understanding this correctly, but you can see that the first node that was returned here was actually HTTP request and then manual trigger. When it's actually the opposite, right? It should first be manual trigger and then it should be HTTP request. But then again perhaps it just depends on how we read this array.
I'm not even sure; let's just try it so we can see. Inside of functions now, let's rename this whole constant to sortedNodes, like that, and return sortedNodes. And instead of returning workflow.nodes, we can now return topologicalSort (which you can import from "./utils"), passing nodes as the first argument and connections as the second. There we go.
So now we have that let's go ahead and run this So we can see if there are any differences now. Just make sure you have a connection and click execute workflow. And let's go ahead and see. Perfect, no errors. And I can already see the first node is manual trigger and the second node is HTTP request.
Amazing, amazing job. Obviously, we don't have enough nodes right now to, you know, try and create some complex scenarios, but you can try like something like this. I mean, it will be very hard to debug if it's actually okay or not, because even if I try and do this, it's just going to look the same. It's going to be manual trigger and after that everything will be HTTP requests. So we don't really know what was the order here really.
So let me go ahead and look here. Yeah. So as I expected, manual trigger and then just a bunch of HTTP requests. But at least we can count the number of HTTP requests. So one, two, three.
I think I counted three. One, two, three. Meaning all of them are now considered in a linear sense. So we can now go ahead and map over those sorted nodes and we can execute each of them. Perfect.
So even though we kind of branched out, we made sure to have all three in our new array. Perfect. So at least that works. And now that we have topologically sorted our nodes, what we have to do is we have to execute each node depending on their type. So we're going to wrap this chapter up by kind of preparing that registry of executors that each node will have for itself.
Basically, the way each node executes within its background job. So before we return sortedNodes, let's initialize the context with any initial data from the trigger. Now, this doesn't make too much sense yet, because I'm about to write let context = event.data.initialData, falling back to an empty object, and we never actually pass initialData right now, right? You can see where we call it; we execute it right here.
Await inngest.send. So what should we really pass as the initial data here? Well, in this specific example, where we have a manual execution, absolutely nothing. That's why it's optional. But to give you a better idea of when this will be populated with something:
Imagine a webhook trigger or a Google Form submission. Those are the cases where we are also going to trigger this job like this, but since it happens inside a webhook, we'll have some payload there, and then we'll be able to pass initialData (or however I named it) as payload.data or something. Then we can run our executors with that initial data. I'm trying to explain how this will be used in the future; if it's confusing, don't worry, it will make more sense once we actually implement a Google Form submission or something like that.
So now let's execute each node. For each node of sortedNodes, first get the executor: getExecutor, a function which does not exist yet, passing in node.type as the NodeType from @/generated/prisma. Now we have to develop the executor registry. I'm going to build it inside of features/executions: let me create a new folder called lib, and in there executor-registry.ts. That looks fine. Let's export const executorRegistry.
It's going to be an object, and let's give it a specific type: Record, where the first type argument is NodeType (import it from @/generated/prisma) and, for now, the second one is unknown. Now you will have to use NodeType for each key: for example the manual trigger member and its executor, then the initial member with its own, and then the HTTP request member with its own.
So that is the point, right? We're going to go through each of our nodes depending on their type. We just did export const executorRegistry; now let's do export const getExecutor, taking a type of NodeType and returning unknown for now. const executor = executorRegistry[type]; if there is no executor found in that object above, throw new Error, and let's be specific, so open backticks: No executor found for node type, passing in the type. And return the executor.
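Sketched out, the registry and lookup look like this. NodeExecutor is simplified here so the block stands alone (the real one comes from the types file we build next), and the NodeType member spellings are assumptions about the Prisma schema.

```typescript
type WorkflowContext = Record<string, unknown>;
type NodeExecutor = (params: { nodeId: string; context: WorkflowContext }) => Promise<NodeExecutor extends never ? never : WorkflowContext>;

// Stand-in for the Prisma-generated enum; member names assumed.
type NodeType = "INITIAL" | "MANUAL_TRIGGER" | "HTTP_REQUEST";

// Placeholder executor used for every entry in this sketch.
const passthrough: NodeExecutor = async ({ context }) => context;

const executorRegistry: Record<NodeType, NodeExecutor> = {
  INITIAL: passthrough,
  MANUAL_TRIGGER: passthrough,
  HTTP_REQUEST: passthrough,
};

function getExecutor(type: NodeType): NodeExecutor {
  const executor = executorRegistry[type];
  if (!executor) {
    throw new Error(`No executor found for node type: ${type}`);
  }
  return executor;
}
```

The Record type forces a key for every enum member, which is exactly why the missing INITIAL and HTTP_REQUEST entries produce type errors later until we fill them in.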
There we go. And now we can use getExecutor here, importing it from features/executions/lib/executor-registry. Not too sure if that's the best place to put it, but it kind of makes sense, right? Executions, executor registry, I guess.
I don't know. Now that we have the executor, let's do context = await executor, open an object here, and pass in data as node.data cast to Record<string, unknown>, nodeId as node.id, context, and step. But now we have a problem: this executor has type unknown, because obviously that's what I typed it as. So let's give it a proper type instead.
So let's stay inside of features/executions and add types.ts. Import GetStepTools and Inngest itself from "inngest"; we can make these type-only imports. Let's export type WorkflowContext as a simple Record<string, unknown>. Then let's export type StepTools as GetStepTools<Inngest.Any>.
Next, export interface NodeExecutorParams with a generic TData extending Record<string, unknown>, because the data we pass into these nodes can truly be anything. Give it a data property of type TData, which is just anything, right?
For example, in the HTTP request node, this will be an object with endpoint, body, and method. In Stripe it's going to be... well, Stripe is a trigger, so that's a bad example. But in an OpenAI node it's going to be a system prompt, a user prompt, and a model; in Anthropic it's going to be similar, right? Any other node you create will have its own shape. Basically, this dynamic data represents whatever we have in the node's dialog, which can be anything depending on the node. That's why it makes no sense to give it any strict definition.
Then let's add nodeId, so we know exactly which node we are working with, and context: WorkflowContext, which again can be anything, because the context will simply expand as each node progresses. We'll be able to use the context of the previous node inside the next node, so we can't really define it; we have no idea what nodes will return. What will this HTTP request return? We don't know. Maybe it will be JSON, maybe a string, maybe it will point to an API, maybe it will be an error.
We don't know; that's why it's defined like this. And let's add step: StepTools. Later we're also going to have publish here, but I'm going to comment it out and just leave a TODO: add realtime later, because we don't have that yet.
And finally, export type NodeExecutor with a generic TData extending Record<string, unknown>, defined as a function that takes params: NodeExecutorParams<TData> and returns a Promise of WorkflowContext. All right, fairly involved, but that's all the types we need. Perfect. We can now head back inside of the executor registry and change the unknown in the Record to NodeExecutor from "../types", which obviously means all of these entries are now going to fail type-checking. And let's also change getExecutor's unknown to NodeExecutor.
And now we have to develop proper executors here. So for now I'm going to just focus on the manual trigger here. Let's go ahead and find where it is. So it is inside of features, triggers, manual trigger right here. And this one will be super simple.
So inside of manual-trigger, create a new file executor.ts. Import type NodeExecutor from features/executions/types and export const manualTriggerExecutor, giving it the type NodeExecutor<ManualTriggerData>, where we define type ManualTriggerData = Record<string, unknown>, and then open an asynchronous function. There we go. The params we get are data, nodeId, context, and step, exactly the ones we just defined. For the manual trigger the data will not actually exist, so you can remove it right away.
I just wanted to show you that type safety works here. Let me add a TODO: publish loading state for manual trigger, because we don't have realtime yet, but that will be the first thing we do once we add it. Otherwise, let's just do const result = await step.run("manual-trigger", ...) with a very simple asynchronous function that simply returns the context. Basically, this is a pass-through: there is nothing to do here, just move on to the next node. And let's add another TODO: publish success state for manual trigger.
So after we succeed, just proceed and return the result. There we go; we now have our first executor, manualTriggerExecutor, so let's use it in the registry. Now, obviously, I think we have some problems here: we didn't add the initial or HTTP request entries. Can I maybe just use Partial? I want to find a way of not having to add every single one of them here, but for now just add all of them to get rid of the type errors. For each node type we have, we're going to develop its executor, and this way we effectively have a big switch case inside of our inngest functions file: each of those topologically sorted nodes gets its executor, we execute it, and for each of them we extend the context even more, right?
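The manual trigger executor we just wrote can be sketched like this, with a tiny local stand-in for Inngest's step tools so the pass-through behavior is visible.

```typescript
type WorkflowContext = Record<string, unknown>;
// Minimal stand-in for Inngest's step tools; the real step.run memoizes
// results per step name, which this stub doesn't attempt.
type StepTools = { run: <T>(name: string, fn: () => Promise<T>) => Promise<T> };

const manualTriggerExecutor = async ({
  context,
  step,
}: {
  nodeId: string;
  context: WorkflowContext;
  step: StepTools;
}): Promise<WorkflowContext> => {
  // TODO: publish "loading" state once realtime is wired up
  const result = await step.run("manual-trigger", async () => context);
  // TODO: publish "success" state
  return result;
};
```

There's genuinely nothing to do for a manual trigger; wrapping the pass-through in step.run still gives us a visible, retried-and-memoized step in the Inngest dashboard.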
So if the first HTTP request node returns some JSON, the second HTTP request node will be able to access that context. And that's what users will be able to reference using variables: something like httpRequest.users or httpRequest.todos, right? That's how it's going to work. Perfect, so now we have this.
Not sure if we are like ready to try this. Well, here's what I want you to do. I want you to copy this executor here. Copy it. Go inside of executions, components, HTTP request and paste the executor here.
Change ManualTriggerData to HttpRequestData, rename the executor to httpRequestExecutor, and change these instances to reference the HTTP request, including step.run("http-request"). Let me quickly compare this with my source code. We're kind of right, let's call it... okay, yes, I think this is okay.
We can now go inside of the executor registry and change the HTTP request entry to httpRequestExecutor. And okay, yeah, we're kind of mixing features here: I don't love that the executor registry lives in the executions folder while a specific executor lives under the manual trigger in features/triggers. It's kind of spaghetti going everywhere, but let's leave it like this for now.
Just make sure you can import them. Yes, the initial type will never actually run, but we have to add something for it just to satisfy the type errors. Now, if we try this, I think it should work just fine. The only thing we ought to modify is what we return at the end: we should return the workflowId and the result as context, because the context will be fully built up by the end of this for loop, even though right now nothing really happens since each executor just returns the context back and moves on to the next node. So let's try it now.
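Putting the pieces together, the execution loop in the function body looks roughly like this. Executors and node shapes are stand-ins here; the key idea is the context being threaded through each node in sorted order.

```typescript
type WorkflowContext = Record<string, unknown>;
type Executor = (params: { nodeId: string; context: WorkflowContext }) => Promise<WorkflowContext>;

// Walk the sorted nodes, look up each node's executor, and let each one
// extend the shared context before the next node runs.
async function runNodes(
  sortedNodes: { id: string; type: string }[],
  getExecutor: (type: string) => Executor,
  initialData: WorkflowContext = {}
): Promise<WorkflowContext> {
  let context: WorkflowContext = initialData;
  for (const node of sortedNodes) {
    const executor = getExecutor(node.type);
    context = await executor({ nodeId: node.id, context });
  }
  return context;
}
```

In the real Inngest function the loop body also passes data and step, and the function ends by returning { workflowId, result: context }.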
The only thing we should see now is, I think: one, two, three, four. We should see four steps happen when we execute. Right now we had one step, and the finalization one doesn't count.
Now we should see four steps. One for each node. So make sure to click save here. And then let's go ahead and try and execute. So let me just see it.
Okay, fully saved. Execute workflow, and let's see: prepare workflow, manual trigger, HTTP request, HTTP request, HTTP request. Perfect. Amazing.
That is exactly what we wanted. And if you remove one and save, now it should have three steps. So let's try this again. Manual trigger, HTTP request, HTTP request. So yes, not counting the preparation one, just these ones.
Perfect. You can now see that our workflow's background job has exactly the same number of steps, in the exact order, as the graphical representation here. And now, to end the chapter, let's actually make an HTTP request node fail or succeed. Keep it simple for now: just a very simple connection between a manual trigger and an HTTP request node.
And for the first example, don't configure it at all; it should say "not configured". Don't pass any endpoint URL, don't do anything at all. Let's focus on the HTTP request executor. The first thing I want to do is define proper HttpRequestData.
So I'm going to go ahead and define endpoint as an optional string, method as an optional string, and a body. And that's it. So, basically the exact thing that's inside of, let me go ahead and find it, node.tsx, the HTTP request node. There we go. This, basically.
So yeah, perhaps the method should be this. Perfect. So we are now telling the back end what the possible options for this HTTP request are. And now that we have that, we can actually bring the data back from here, because once we have the data we can actually do something with it.
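For reference, the executor-side data shape just described might look roughly like this. The exact type name and fields in the course code may differ slightly:

```typescript
// Rough sketch of the HTTP request node's data as seen by the executor.
// Everything is optional because an unconfigured node sends no data, so
// the executor has to validate before making a request.
type HttpMethod = "GET" | "POST" | "PUT" | "PATCH" | "DELETE";

interface HttpRequestData {
  endpoint?: string; // the URL the node should call, if configured
  method?: HttpMethod;
  body?: string;
}

// An unconfigured node versus a fully configured one:
const unconfigured: HttpRequestData = {};
const configured: HttpRequestData = {
  endpoint: "https://example.com/api",
  method: "POST",
  body: JSON.stringify({ hello: "world" }),
};
```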
For example, before we compute the result, let's check whether the endpoint is missing. As you can see, we now have autocompletion on data. If data.endpoint is missing, add a TODO to publish an error state for the HTTP request, but what we can do right now is throw a new NonRetriableError here: "HTTP request node: no endpoint configured". Just throw that error, and I think that if you try this now, it should already fail. So save this super simple example, make sure the node is not configured, and once it is saved, let's go ahead and execute it. And now we should see this fail. Okay, it's running.
And there we go. So what happened? Let's see. HTTP request node, no endpoint configured. So exactly what we expected.
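The guard we just wrote boils down to this small sketch. The real executor throws Inngest's NonRetriableError so the background job fails immediately instead of retrying; this standalone version uses a plain Error so it's easy to run in isolation:

```typescript
// Validate the node's configuration before doing any work.
// In the real executor this would be:
//   throw new NonRetriableError("HTTP request node: no endpoint configured")
function requireEndpoint(data: { endpoint?: string }): string {
  if (!data.endpoint) {
    throw new Error("HTTP request node: no endpoint configured");
  }
  return data.endpoint;
}
```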
The only thing that's missing is visual feedback, which we are going to be working on later. Now, how could we do a request? Well, we could do one by using await step.fetch with data.endpoint here. That is one way of doing it, and then we could get a const result like this. Let me go ahead and do this and just return the result.
You can actually remove this. There we go. That's one way of doing it. So if I go ahead and change this to https://codewithantonio.com, or Google or something. Maybe this is a bad example.
Maybe this will fail now, because it will return text instead of JSON, but let's just see if we at least see step.fetch. Here it is: step.fetch is now happening. So you can of course use Inngest's built-in step.fetch, and you can see the output here, the body, right? You can use that, but I found it easier to use something a bit more... I'm not sure what word to use to describe it. Not exactly advanced, because here's the thing: step.fetch is a wrapper around normal fetch, which we all know and love, but we also know there are certain limitations with it, right? It's quite hard.
I mean, not hard exactly; it just takes a lot of code to do a super simple POST request with it. So for that reason, I recommend that you actually do npm install ky, which is like a lightweight alternative to Axios. So let's import ky from "ky". Obviously, if you prefer Axios, you can do this with Axios instead.
And if you prefer, I don't know, step.fetch, you can just keep using step.fetch, right? But here is what we're going to do now. I'm going to define a result, and instead of step.fetch this will be step.run with the name "HTTP request". This is also why I prefer using my own fetch execution: I get a step that runs independently like this. Inside of it, I'm going to do const method = data.method, falling back to the GET method.
Maybe I can even stop doing this all the time, it's getting a little annoying, and just throw an error if the method is not defined. Then let me define const endpoint = data.endpoint! like this, because at this point... can I? Yeah, I'm just going to do this. This will throw you a warning if you're using Biome or any other linter, but leave it like this for now. We're doing a non-null assertion here because we know that at this point data.endpoint will exist. Now I will define options with the method, and options will be of type KyOptions. Where can I import that from? Okay.
So: import type { Options as KyOptions } from "ky". Perfect. And now that we have this, let's see if we should also attach the body property. So: if ["POST", "PUT", "PATCH"].includes(method) and we have data.body, then options.body will be our data.body. The reason I'm doing it this way instead of a plain if clause is, well, a bit hard to explain, but later we will also be able to write this using variables. So if instead of a hardcoded POST method here you use something like an HTTP response data ID, you would have to parse that here. We are not going to do that now, simply because it's unnecessarily complicated. So let's just leave it like this for now.
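That option-building logic can be isolated like this. KyOptions here is a minimal stand-in for ky's real Options type, and the names are illustrative:

```typescript
// Build the request options: fall back to GET when no method is set,
// and only attach a body for methods that allow one.
type KyOptions = { method: string; body?: string };

const BODY_METHODS = ["POST", "PUT", "PATCH"];

function buildRequestOptions(data: { method?: string; body?: string }): KyOptions {
  const method = data.method ?? "GET";
  const options: KyOptions = { method };
  if (BODY_METHODS.includes(method) && data.body) {
    options.body = data.body;
  }
  return options;
}
```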
OK, simple as that. Then let's do const response here: await ky(endpoint, options). And let's do const responseData: await response.json(), with a .catch falling back to response.text().
And finally, let's return: spread the context, and add httpResponse here with status: response.status, statusText: response.statusText, and data: responseData. I think this might be enough for now, but it will actually fail if you try to fetch something that doesn't return JSON. So instead, what you can do is check the content type: const contentType = response.headers.get("content-type"), and then check if contentType?.includes("application/json"). If so, do await response.json(); otherwise do await response.text(). Perfect. Now you can use the response data as the actual data here. All right, I think that should work just fine, so let's try it out now.
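For reference, here is the content-type handling factored into a standalone function. The real executor calls response.json() or response.text() on the ky response instead of receiving raw text; this pure version just makes the branching easy to test:

```typescript
// Parse the response body based on its content type: JSON responses get
// parsed, everything else (HTML, plain text) stays as a string.
function parseResponseBody(contentType: string | null, raw: string): unknown {
  if (contentType?.includes("application/json")) {
    try {
      return JSON.parse(raw);
    } catch {
      return raw; // fall back to text if the body is not valid JSON
    }
  }
  return raw;
}
```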
So I'm going to make sure this is a GET request pointing to my website here. I will click save, and then I'm going to execute the workflow. Basically I'm not expecting much to change, except in the finalization step, in the result: I'm expecting to see this, the body and the headers, in the finalization step here. So let me click execute workflow and let's see whether I implemented this correctly, or perhaps made some mistake in the ky implementation. There we go.
Finalization now has the result from the previous HTTP request node, which has the data for my... I mean, this is useless, in the sense that we are not fetching any API, we are just fetching HTML. So let me find you a nicer example to make this make more sense. There is this public API that you can use: jsonplaceholder.typicode.com/todos/1. So let's just use GET and click save here.
And after you've saved, let's go ahead and execute the workflow. Now we should have a nicer response here; it should be in the form of JSON. In the finalization: here we go. The HTTP response now has data.
And in here we just have some mock todo: completed false, ID 1, title something, userId something. We have status, we have statusText. Basically exactly what happened here. But here's the thing. I'm not sure how this will behave, but if you try to map, like, two HTTP requests now... let's go ahead and just copy this. Oh wow, I think I just made a circular connection.
Let's remove that. Okay, that will definitely fail. So go ahead and put number 2 here, get request, save, click save here. And now technically we should have two objects with the name of HTTP response. So that could technically fail.
I think that this might actually cause an error. I'm very interested to see what will happen because, okay, it completed. So two HTTP requests happened. You can see one with an ID of two, one with an ID of one. So what I think, oh, it's just over, over, yeah, we just get the HTTP response of the second one.
So it overrides the first one. That's not good. We should think of a solution that allows the final step to have the data from multiple HTTP requests. We can do that quite easily, maybe by introducing a third field here called variable name; then you would have that exact variable name used here in the result, and the user would be responsible for making sure they don't override themselves.
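To make the override concrete, here is a tiny sketch of why the last node wins, and how the per-node variable name suggested above would avoid the collision. Key names are illustrative:

```typescript
// Two executors writing to the same fixed key: the second overwrites
// the first. A user-chosen variable name gives each node its own key.
type Ctx = Record<string, unknown>;

const writeFixedKey = (ctx: Ctx, data: unknown): Ctx => ({
  ...ctx,
  httpResponse: data, // same key every time: last writer wins
});

const writeNamedKey = (ctx: Ctx, variableName: string, data: unknown): Ctx => ({
  ...ctx,
  [variableName]: data, // user-chosen key: no collision
});

let ctx: Ctx = {};
ctx = writeFixedKey(ctx, { id: 1 });
ctx = writeFixedKey(ctx, { id: 2 }); // { id: 1 } is gone

let ctx2: Ctx = {};
ctx2 = writeNamedKey(ctx2, "todoOne", { id: 1 });
ctx2 = writeNamedKey(ctx2, "todoTwo", { id: 2 }); // both survive
```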
But I think that's why I wanted you to use this super simple example, so you don't run into those kinds of issues. You can see that there is still some work to do here, but I think you get the idea now, right? Each of our nodes now has a topological order and its own way of executing. This specific HTTP request is quite simple, right? But later, when we create the OpenAI node, instead of doing this we're going to check whether we have a user prompt; if we don't, we throw an error saying the user prompt is required, and instead of doing a fetch request we're just going to make an OpenAI request and return something back. That will be the way we move forward. So let me go ahead and check.
We created the execute Inngest function, we did a topological sort, and we created the executor registry. Amazing. And yeah, inside of our executor registry we have one executor which just kind of passes through: that's the one inside of features/triggers, the manual trigger executor. So yes, this executor doesn't do absolutely anything besides having its loading state and its success state, which are purely there for user satisfaction.
So the user can see that something is happening, right? Even though this will immediately show loading and then immediately show success, there is absolutely nothing happening here. We just make sure that the context gets passed along. We could even remove the node ID. I think we don't even need the node... actually, we will need the node ID later for the loading and success states. But I think this is enough for this chapter.
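So the manual trigger executor described here boils down to a pure pass-through, something like this (assumed names):

```typescript
// The manual trigger does no real work: it exists only so the UI can
// show loading/success states, and returns the context unchanged for
// the next node in the chain.
type WorkflowContext = Record<string, unknown>;

function manualTriggerExecutor(context: WorkflowContext): WorkflowContext {
  // No work to do: the trigger just starts the chain.
  return context;
}
```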
In the next chapter, we're going to solve two problems. We're going to solve the problem of our HTTP request nodes overriding themselves and we're going to start to do this. So instead of doing it like this, we will be able to do, I don't know, previous node dot ID. You will basically be able to use the context of your previous nodes. So think of it like this.
Let me show you, inside of the Inngest development server. So this HTTP response now returns userId 1, or ID 2, or a title or something. What if you wanted to use that? You would do httpResponse.userId, right?
That's kind of the goal: that you are able to use the data from one node inside another node. That's what we're going to be focusing on in the next chapter. For now, I think this is enough of an introduction to this whole execution thing. So: 18 node execution. Let's go ahead and commit all of these files here.
So I have 15. You might have 16 again if you have that mprox log file here; otherwise this should be it. Stage all changes. Before I commit, I'm just going to click on main here and create a new branch, 18-node-execution. There we go. Now I'm on this new branch, I have staged my changes, the message is 18 node execution, I will click commit, and then I will click publish branch.
Now that this branch has been published I'm just going to go ahead and open the pull request. So I'm going to click compare and pull request and create pull request 18 node execution and now let's go ahead and review our changes. And here we have the summary by CodeRabbit. New features. We added an Execute Workflow button in the editor when a manual trigger node is present.
We introduced workflow execution from the app via a new action and hook, and we enabled HTTP request nodes to perform real requests and return response data. And of course a refactor: workflow runs now follow dependency order, meaning the topological sort, with improved orchestration and a unified context result, enhancing reliability and clarity of outcomes. As always, there's a file-by-file walkthrough here, but it's the sequence diagram that interests us the most. So let's go ahead and try to follow it.
So when the user clicks on execute workflow we call the TRPC mutation with a workflow ID. We then call workflows. OK. So this is the actual TRPC execution. And the only thing this does is it sends the event workflows execute workflow.
Immediately, we return the workflow with a success message back to the user. Okay. And what actually happens here is the background job. So, let's see: we trigger the function with the event, we load the workflow with its nodes and its connections, we return that data and pass it to the topological sort function, and the topological sort gives us the sorted nodes. We can then run the for loop over each of those nodes, find the appropriate executor for that node, execute it, and pass the updated context along. Great.
So that's exactly what we are doing. We do have some comments here, so let's go ahead. First one is in the HTTP request dialog here. So I changed from three individual props to just one. And in here it says destructure and depend on specific form methods instead of the full form object.
Okay, I will look into that. Now here it is telling me to add proper error handling for the ky package; we will work on that in the next chapter, where we improve the entire HTTP request executor itself. Same here. So yes, we completely forgot to pass the headers option, which will lead some servers to reject the request. Again, this will be in the next chapter, where we improve the HTTP request executor altogether.
In here it is telling me to improve the way I handle errors in case I cannot find an executor for a certain node. Yes, so I could look into doing this. I think it's fine as it is, but yeah, it wouldn't hurt to have even more strict checks here. I will look into that as well. Same thing for this.
So I think the error message is "cyclic", but here it's telling me it is "cycle". I will just look at the toposort source code or documentation and see which one it is. And here, yes, I told you that we can use the non-null assertion with the exclamation point. Here it suggests not doing that, and instead quickly checking if the node is missing and throwing an error. Perhaps that is safer, yes; we could do that.
Okay, amazing suggestions from CodeRabbit. I will take a look at them, and for the next chapter maybe prepare a few that I think are important so we can proceed. But for now, let's go ahead and merge this pull request. Amazing job, this was a complicated chapter. Let's go back to the main branch and make sure to click on synchronize changes. Okay. Now let's go inside of our source control, open the graph, and here we should now see 18-node-execution.
Amazing. That means everything here is merged, which means we are ready to wrap this chapter up. So: we pushed to GitHub, created a new branch, created a new PR, and reviewed it. Amazing, amazing job, and see you in the next chapter.