In this chapter, we're going to go ahead and implement AI agent functionality in our project. This is our goal. We're going to create a nice interface to render both AI and user messages. And we're going to create a backend system capable of handling AI responses. And yes, you will be able to use any AI provider.
Gemini, OpenAI, Anthropic, DeepSeek: every single one of them is handled by the AI SDK. Let's start by fixing the security issue from the previous chapter. If you remember, going inside of packages/backend/convex/public/conversations, we implemented the getOne query. But there is a problem here. We do validate the session, but we never actually check whether this conversation has anything to do with our contact session.
So let's actually do another check. If conversation.contactSessionId is different from the contactSessionId in the arguments here (or you can compare against the actual fetched session record), go ahead and throw a new ConvexError here. And we can just go ahead and copy this inside, and let's just say "Incorrect session" in this case, right?
And we can actually throw an error for the case of the conversation missing too. So this will be "NOT_FOUND", and we're simply going to say "Conversation not found". And then we can catch this in an error boundary later. Like this, there we go.
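Putting those checks together, the hardened getOne might look something like this. This is a sketch: the field names (contactSessionId, expiresAt) and the error object shapes are assumed from this project's schema, so adjust to match yours.

```typescript
// public/conversations.ts (sketch): getOne now verifies ownership.
import { ConvexError, v } from "convex/values";
import { query } from "../_generated/server";

export const getOne = query({
  args: {
    conversationId: v.id("conversations"),
    contactSessionId: v.id("contactSessions"),
  },
  handler: async (ctx, args) => {
    const session = await ctx.db.get(args.contactSessionId);
    if (!session || session.expiresAt < Date.now()) {
      throw new ConvexError({ code: "UNAUTHORIZED", message: "Invalid session" });
    }
    const conversation = await ctx.db.get(args.conversationId);
    if (!conversation) {
      throw new ConvexError({ code: "NOT_FOUND", message: "Conversation not found" });
    }
    // The security fix: the conversation must belong to this contact session.
    if (conversation.contactSessionId !== session._id) {
      throw new ConvexError({ code: "UNAUTHORIZED", message: "Incorrect session" });
    }
    return conversation;
  },
});
```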
So we just fixed that security issue. Now we are at least checking that that contact session actually has something to do with the conversation we are fetching. Perfect. Now let's go ahead and add the Convex agent component. So go inside of the Convex documentation; let me just go here.
And in here you can find their guides on agents. And in here you can find everything you need to know about their agent component. And it is absolutely amazing, and you're gonna see why. Because remember, we still haven't even added any messages schema. And I'm gonna tell you a little secret: it's because Convex is dedicated to making an amazing developer experience for building AI apps using Convex.
And that's why you feel like half of your app is actually missing, when in fact this is the other half of our AI logic, because they have developed it so well. Let's go ahead and add it to our project. So we're going to add @convex-dev/agent to our backend app. So let's go ahead and use pnpm --filter backend add, and let's add @convex-dev/agent. There we go.
Once that was added, we have to create a convex.config.ts. So let's go inside of packages/backend/convex and create convex.config.ts. In here, let's import defineApp from convex/server, and let's import agent from @convex-dev/agent/convex.config. Then let's define the app, add app.use(agent), and add export default app here. Save this file and go ahead and run turbo dev. Let's see if everything is working or if we need to fix some things here. And there we go. It says installed component agent, and Convex functions are ready.
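For reference, the whole file is only a few lines (this follows the @convex-dev/agent setup docs):

```typescript
// packages/backend/convex/convex.config.ts
import { defineApp } from "convex/server";
import agent from "@convex-dev/agent/convex.config";

const app = defineApp();
app.use(agent); // installs the agent component

export default app;
```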
And I'm just interested in one little thing. When I initially developed this project, I actually got an error when I added the Convex agent, because I was missing convex-helpers. So just in case anyone here is getting errors, here's how I fixed it: pnpm -F backend add convex-helpers. As simple as that.
So just add convex-helpers to your backend and then do turbo dev again, and if you got any errors, they should no longer exist now. But I think they have obviously updated that and fixed it, so it works even without them. Great! So just make sure that inside of your packages/backend you now have @convex-dev/agent and convex-helpers, because we will be using the helpers either way, so please do add them regardless. Great!
Now that we have added that, let's go ahead and create our first agent. So I'm gonna go inside of my packages/backend/convex and I'm gonna create a new folder called system, and under system I'm going to semantically separate all the things that are neither public nor private. They are internal, should I say, or maybe shared, right? So inside of system, I'm going to create another folder called ai, just to separate things that are AI related. And inside, let's create our supportAgent.ts.
And just a quick tip: whenever you're adding files inside of convex, you cannot use dashes, right? Because of how they parse their functions and everything, you need to make sure it's one word. So don't add a dash in between, just a small tip. And now, as you can see in here, they say that we have to add @ai-sdk/openai. So let's talk about the AI SDK, and what to do if you don't want to use OpenAI.
You can visit the link on the screen or simply Google "AI SDK" to head to this website. So this is the AI toolkit for TypeScript. And first things first, we're going to have to install their ai package, so let's do that first. I'm gonna go ahead... Hey there, Antonio from the future here. I am editing this video and I realized that in the middle of our tutorial, AI SDK version 5 officially came out and became the default version.
So what does that mean for you? If you want to, you can use version 5, but please keep in mind that that will require you to do some migrations. You will have to migrate from version 4 to version 5. So I cannot guarantee that you will be able to follow this tutorial exactly the same. For example, if I use a field maxTokens, you're gonna have to use the field maxOutputTokens.
If I import a type CoreMessage, you're going to have to import a type ModelMessage, and so on and so on. This isn't too big of a problem, and you can absolutely use version 5. It shouldn't break too many things, but I would highly suggest that instead of doing that right away, you follow the exact same versions that I was using to make this tutorial. It will make things a lot easier for you, and you can always upgrade later once you finish the tutorial, as a personal challenge, as an additional task in this project. So how do you know if you installed version 5 packages or version 4 packages?
There are a couple of important packages that you need to have in your project. I'm now going to open my package.json at the time of me making this video. As you can see, my @convex-dev/agent is 0.1.16, my @ai-sdk/openai is 1.3.23, and my ai is 4.3.19. So these are the versions needed if you want to use AI SDK version 4.
And here's how you can find out those versions yourself. This is especially important if you are using Gemini, for example. So I have prepared for you all of these npm packages here. You can see that three days ago, all of them were published on version 5. ai is now on version 5.
@ai-sdk/openai is now on version 2.0. @ai-sdk/google, which is Gemini, is also on version 2.0. I am assuming most of you will either use OpenAI, or Google if you're using Gemini. And as per their migration process here, you can see exactly what you have to modify. So you can see that if you want to use version 4, you have to use ai lower than 5.0 and the specific provider packages lower than 2.0.
And Zod also has to be at a specific version number, which is also why I didn't want to upgrade right now, simply because we already had some problems with Zod, right? So I don't want to accidentally break that again. So here's what I want you to do. When you go to the ai npm package here, you can click onto the versions here. You can see that all the latest ones here are 5.0, but if you scroll a little bit down, you will be able to find the last version 4, which has almost half a million installs.
So that's how you can recognize it. So once you have identified that version, that is the version of your AI package that you have to use for this tutorial. It is the exact same version that I have here. Now what if you're using OpenAI? Same thing.
Click on the versions here and scroll down until you find the last version that is lower than 2.0. In my case that is 1.3.23, and same here, you can see 300,000 installs. And it's the exact version that I'm using, 1.3.23. What if you're using Google, Gemini? Same thing: @ai-sdk/google, go inside of versions here. You can see that 2.0 is the current version.
Scroll down until you find the last version that's not 2.0. In Google's case, that is 1.2.22. So, what does that mean for you? When you follow my instructions in this tutorial, instead of running pnpm -F backend add ai, you would specify the version. So ai@, and then you would find your version here, 4.3.19, and you would install that.
If you are installing OpenAI, you don't just paste it like that. Instead, you go ahead and find the latest version that's not 2.0, like this. And if you're using Google, you install the latest version that is not 2.0 as well. So you replace this with Google and you add the appropriate version. It would be 1.2.22.
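So, assuming the version numbers shown above, the pinned installs would look roughly like this (double-check the exact numbers on npm yourself, since they were current at recording time):

```shell
# Pin AI SDK v4-era versions instead of pulling the latest (v5) ones.
pnpm -F backend add ai@4.3.19
pnpm -F backend add @ai-sdk/openai@1.3.23   # if you use OpenAI
pnpm -F backend add @ai-sdk/google@1.2.22   # if you use Gemini
```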
What happens if you already added a package and now you have a higher version? No problem at all. So far in this tutorial, all of these package dependencies, I believe, are only in one package.json, packages/backend. I think that is true... if I search them all... okay, looks like they are also in the widget, so that's important. Okay, so what I would suggest you do now is search through your code to find all of these packages. And if you already have them installed and you can see that they are on version 5, and that this is on version 2, for example, simply find them in your dependencies and change them to this version. Change @convex-dev/agent to this version.
Change ai to this version, and make sure it's everywhere like that. After you've done that, in the root of your app, simply run pnpm install. That is how you can ensure that you are using ai version 4 and that you have the proper provider version, which must be under 2.0. And then later, when you finish this tutorial, you can simply follow this migration guide, because there are some breaking changes, as you can see. And I want you to have a nice, easy time following this tutorial.
I don't want you to have to think about those breaking changes right now. But if you find yourself in a situation where you just have to use version 5, nothing is working for you, then you're gonna have to follow this migration process. Luckily, it's not too difficult. If anything, it's better, but it's just different. So you're gonna have to carefully watch whenever I'm doing something and compare it with this change here to see: okay, I have to modify this, I have to modify that. But again, you can just use these exact versions.
I'm not sure that the @convex-dev/agent version is actually that important, but the ai version is definitely important, and your provider version, which needs to be below 2.0. So I hope I cleared that up. Simply head to the npm package of the provider you are using. If that is Grok, then you will search for Grok. Let me show you the process, right?
So @ai-sdk/xai, which is Grok, right? For example, if you're using this for whatever reason, same thing. You go inside of versions, you scroll down until you find the last version that is not 2.0. I assume most of you will be using OpenAI or Google, which is Gemini. That's why I showed you the examples on those two.
I hope that cleared it up and that you will be able to continue with the tutorial. And I'm gonna go ahead with pnpm -F backend add, and let's just add ai. So that is basically this, right? Perfect. Now let's go inside of the documentation here, and inside of here you can find providers and models.
Now just keep in mind, you can see that I have this big banner here that says I am looking at the AI SDK 4 documentation. So to know what you should be looking at, you simply go inside of your package.json here in the backend and check what version of ai you have. If you want to, you can use the exact same version as me, so 4.3.19. So this would look like this, right?
You would add it like that, and then you will have the same version as me. But trust me, this is handled by Vercel and Next.js, so I'm pretty sure they are taking backwards compatibility seriously here. So even if you are on ai version 5, all you have to do is make sure that you're looking at AI SDK version 5 here, by clicking here, or it will simply be the default version, so don't worry if you can't find it. The reason I'm not using SDK 5 is because it is in beta, right? So that's the only reason why. But I'm pretty sure that for what we need from this package, it's going to be exactly the same.
The reason I'm even telling you this is because I want to be careful. I want you to be able to finish this tutorial without any troubles. So again, you can use the same version as me, but if you're using version 5 just, you know, look out a little bit for any subtle differences between the documentation. So definitely do open the documentation and follow the documentation with me. So go ahead and find the providers and models or whatever is the equivalent in the version 5 documentation here.
And in here you will find everything that you can use: the OpenAI provider, the Google Generative AI provider, which I believe is Gemini, right? Or, if you want, xAI Grok, I think they also provide free tiers. Or if you want to use DeepSeek, I've heard they are super cheap, if not also free, not sure.
So basically, go ahead and choose the one you like, the one you prefer. I personally use OpenAI in all of my projects. I have credits there, so I'm going to be using the OpenAI provider, but you go ahead and click on the one you want. And the first thing you're going to have to do is simply install their version of the package inside of your backend. So pnpm -F backend add @ai-sdk/openai. Let's go ahead and install that, and there we go, @ai-sdk/openai. So this is my ai version and this is my @ai-sdk/openai version, just in case you want to use the same versions as me.
And once you have added @ai-sdk/openai, it is very important that you scroll a little bit down here until you can find the API key. So what you need to find is the default environment variable for your provider. In my case, that is OPENAI_API_KEY. So I'm going to copy that and I'm going to go inside of packages/backend/.env.local and just add it here. So for example, if you were using Google Generative AI, you would go ahead and add this, right?
You would scroll a bit down to the API key section and you would add this environment variable, right? Now I'm going to show you how you can obtain API keys using OpenAI, but for other providers, you're going to have to research a bit on your own. It shouldn't be hard at all. All right, so in order to obtain the API key for OpenAI, use the link on the screen to head to the OpenAI platform and go ahead and create an account. Once you're logged in, the first thing you want to do for OpenAI is click on the little settings button, head into billing, and make sure that you have some balance.
So five dollars will be enough for this entire tutorial. If you're using Gemini or something like that it should be completely free for you but just make sure if you're using OpenAI like me, you need to have some balance here. And after that, head into API keys. So I'm gonna go ahead and create a new secret key, and I'm going to call it Echo Tutorial, like that. And I will simply choose the default project, permissions all, and click create secret key and copy it.
And that's it. That is how you obtain an OpenAI API key. Immediately after you have added your OpenAI key, Anthropic API key, Gemini API key, Grok API key, whatever you are using, copy it, head into the Convex dashboard, go inside of settings, environment variables, add, and paste it here. So the exact same thing that you just added here locally, make sure to save inside of your cloud platform here. Perfect!
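As a sketch, the local env file would contain your provider's default variable name (the key value itself is of course your own secret):

```shell
# packages/backend/.env.local
OPENAI_API_KEY=...                  # default variable for @ai-sdk/openai
# GOOGLE_GENERATIVE_AI_API_KEY=...  # default variable for @ai-sdk/google
```

Remember to mirror the same variable in the Convex dashboard, since your Convex functions run in their cloud, not on your machine.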
You can now close the OpenAI platform, Google platform, Anthropic platform, whatever. And let's go back to building our agents. So now what we have to do is import the package that we installed inside of our supportAgent.ts file that we started developing. For me, that's openai from @ai-sdk/openai. In your case, it might be google from @ai-sdk/google, right?
And now you also have to import Agent from @convex-dev/agent, and then import components from the generated API, going back up directories until you hit _generated/api, like that. Then let's do export const supportAgent = new Agent(components.agent, { ... }). Let's give it chat, and this will simply be openai.chat. And let's specify gpt-4o-mini. So if you're wondering, what should I use for Google? Well, I don't know. You are going to have to research the models for Google or whatever else you are using, but this is strictly typed.
So all you have to do is add annotations and you will see all the options. So just choose one. There is no specific one that I recommend you use, right? I'm just using whatever I think they had in their example here. Let me just check.
Yes, they're using gpt-4o-mini, so that's what I put. That's the only reason. You can go ahead and Google the equivalent for Gemini or Anthropic or whatever it is you prefer. And in the instructions here, well, let's simply say: you are a customer support agent.
That's it. Later we're going to create a whole big prompt for that. Perfect. So now what I'm actually gonna do... actually, this is fine. Let's leave it inside of ai, let's leave it under supportAgent, like this.
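So at this point the agent file looks roughly like this. This is a sketch: the relative import depth depends on where exactly the file sits, and the placeholder instructions string gets replaced with a full prompt later.

```typescript
// packages/backend/convex/system/ai/supportAgent.ts (sketch)
import { openai } from "@ai-sdk/openai";
import { Agent } from "@convex-dev/agent";
import { components } from "../../_generated/api";

export const supportAgent = new Agent(components.agent, {
  chat: openai.chat("gpt-4o-mini"), // swap for your provider's model
  instructions: "You are a customer support agent",
});
```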
And now we have to go ahead and delete all of our existing conversations. So let's go inside of the data tab, conversations, select all of them, and delete them, because all of them have the fake thread ID. So now, inside of convex/public/conversations, find the create mutation here, and we're going to modify it ever so slightly. What we're going to do is finally remove this TODO and the fake constant, and replace it with const { threadId } = await supportAgent.createThread (which you can now import), passing the context and passing the user ID. Now the user ID in this case should be the organization ID, because that's what we want to associate this thread with.
So args.organizationId, like that. And now you have an actual working thread ID. So supportAgent, make sure you have imported it from system/ai/supportAgent. You can also find these instructions in their docs under threads. So you can see you have to do agent.createThread, exactly what we just did.
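Inside the create mutation, the relevant change is just this (a sketch; the surrounding mutation and its other fields stay as they were):

```typescript
// public/conversations.ts, inside the create mutation's handler (sketch)
import { supportAgent } from "../system/ai/supportAgent";

// ...
// Replace the fake thread id with a real agent thread, tagged with the org:
const { threadId } = await supportAgent.createThread(ctx, {
  userId: args.organizationId,
});
```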
You can see that you don't even have to pass anything other than the context inside. And then you will also get the thread ID, but I like to associate my threads with a specific ID, which is also, as they say, metadata. All right, so just one thing I want to do before we continue. Inside of system/ai, create an agents folder and then drag and drop the supportAgent file inside. And you're going to have to update your imports now.
So inside of conversations, make sure that you now change it to system/ai/agents/supportAgent. And inside of convex, convex.config... oops, not here. Let me just find... oh, that might actually be the only place where... yeah, but inside of the support agent, just make sure you have updated the components import to go up three directories, like this. Basically, make sure you have no errors.
Make sure that when you do turbo dev in the back end, everything should be working smoothly. There shouldn't be any errors whatsoever. Let's just confirm convex functions ready. Perfect. So what should we do now?
We should try and create a conversation. So, I'm going to go ahead and I'm on localhost 3001. I have a proper organization ID here at the top so I have the option to start the chat. When I click start chat you can see that now I have an actual thread ID inside and now we can use this thread ID to create new messages and to retrieve messages for this chat. So let's see where we are right now.
We added the Convex component, we selected our AI provider, we added API keys, we installed the AI SDK, we deleted all old conversations, and we updated the conversations create function. Now it's time to add AI components from Kibo UI. Now you might be wondering, what is Kibo UI? Well, using the link on the screen, you can access it here, so you don't have to Google it yourself. It is basically an amazing extension on top of shadcn/ui.
Honestly, it is insanely good. So in here you can find all things AI chatbot related like branching, conversation, input, message, reasoning, AI response, sources, suggestion, AI tool. It's honestly amazing. But alongside that, they also have all these amazing other components. There is one slight problem that I encountered.
I wasn't able to add them to my monorepo project. It just wouldn't work. Luckily, they are open source. They have their source code right here. So I have prepared, in my assets repository, the exact code that I'm going to add to my project now.
So let's go inside of my assets. You can find the link on the screen here. Go inside of ui, inside of components, and in here you can find the ai folder, and in here you have all of these components available. So let's go ahead and add them one by one. So I'm going to close everything like this. Let's go inside of packages/ui/src/components and create a new folder called ai.
And let's start with branch. Simply copy everything inside, create a new file branch.tsx, and paste it here like that. And for branch, for example, everything should be completely fine, because we have the button and we have cn. But for some of these, we're going to be required to install some packages. So now let's do conversation, which, for example, will need this new package. So inside of ai, conversation.tsx... let me just check... yes, it is singular, conversation. And when I save, there's an error, so we have to add use-stick-to-bottom. So let's do pnpm -F ui (this is our UI package) add use-stick-to-bottom, like that. So we are adding this to our UI package. And when we do that, in a couple of seconds, when this refreshes, everything should work just fine.
Perfect. Now I'm going to pause and basically do the exact same thing, and I will unpause if we have to add some more packages. So after I added reasoning.tsx, I encountered this missing package. So let's go ahead and add it: pnpm -F ui add, and then this package right here, just make sure you filter to the UI package. And after that, this should be resolved. But this one at the bottom is still an error, and it still will be an error.
But this is the AI response that we are adding right now. So I believe some of the next components will be the AI response. Let's see. Yes, it's right here. All right.
Now in the response, which I just added from here, we need to install react-markdown and remark-gfm. Let's go ahead and do that. So pnpm -F ui add react-markdown remark-gfm. And let's see if, after we add these packages, all these other TypeScript errors will resolve themselves as well, or not. Looks like they resolved themselves. Perfect. So now our response.tsx is intact and working. Perfect. So three more left.
And here we go. So I just added all of them. So that is branch, conversation, input, message, reasoning, response, source, suggestion, and tool. I highly suggest that you do it by using my assets here, simply because I think that when you use their CLI tool, they actually add more components, but I removed things that we don't need. So that's why it might be easier for you to actually do it file by file.
Yeah. Anyway, absolutely amazing UI library. We will come back for a few more of these components because they are amazing. But for this chapter, that is it. So let's go ahead now and let's create our messages function.
So I'm gonna close everything here, because we have now added a very important part, but now we need to be able to actually create the messages, and we need to be able to fetch the messages. So let's go inside of backend/convex. Inside of public, I'm going to create messages... whoops, messages.ts. But just before we do that, let's go inside of system here.
And basically, we're now going to have to create some internal functions here. So inside of system, outside of the ai folder, just inside of system, go ahead and create contactSessions.ts. So basically, follow the same naming convention for contact sessions in the system folder as you did in the public folder. So contactSessions here and contactSessions here. The difference is that in here we're going to have the following.
Export const getOne, an internalQuery from the generated server. The arguments here are going to be contactSessionId, which will be a type of v (from convex/values) .id of contactSessions. And the handler here is going to have the context and the arguments, and we're basically just going to return await context.db.get with the contactSessionId from the arguments. Let's just not misspell contactSessionId. So what is this? Why am I even doing this? What is an internal query? An internal query is practically the same thing as a normal query, API-wise, but it can only be called from within other Convex functions. So in here you add something that you either want to use in actions (because remember, actions are not the same as mutations; actions are a separate runtime, so in order to access the Convex database from an action, you need an internal query), or something you want to protect so it's not publicly available, right? So let's go ahead and define the internal query for the contact session. And it's actually gonna make more sense to you why we're doing this in a second. But we also need another one.
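The finished internal query is tiny. A sketch, with names as described above:

```typescript
// packages/backend/convex/system/contactSessions.ts
import { v } from "convex/values";
import { internalQuery } from "../_generated/server";

export const getOne = internalQuery({
  args: {
    contactSessionId: v.id("contactSessions"),
  },
  handler: async (ctx, args) => {
    // Callable only from other Convex functions (e.g. actions via runQuery).
    return await ctx.db.get(args.contactSessionId);
  },
});
```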
And that is going to be conversations, inside of the system folder. So let's create conversations.ts, export const getByThreadId. So very specific, right? And let's also add the internalQuery tag here. The argument is logically going to be threadId.
Let's set it to be a string, and add an asynchronous handler, like that, with context and arguments. And let's grab our conversation: await context.db, query conversations with the index by thread ID, which we prepared ahead of time, and let's simply query to make sure that threadId is equal to args.threadId.
And we're looking for a unique record that has that because there shouldn't be two conversations that have the same thread ID. That should never happen. So we can safely do unique. And let's return conversation here. There we go.
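Put together, this second internal query might look like this (a sketch; the index name by_thread_id is assumed from the schema we prepared earlier):

```typescript
// packages/backend/convex/system/conversations.ts (sketch)
import { v } from "convex/values";
import { internalQuery } from "../_generated/server";

export const getByThreadId = internalQuery({
  args: {
    threadId: v.string(),
  },
  handler: async (ctx, args) => {
    const conversation = await ctx.db
      .query("conversations")
      .withIndex("by_thread_id", (q) => q.eq("threadId", args.threadId))
      .unique(); // thread ids map one-to-one to conversations
    return conversation;
  },
});
```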
And now we are ready to develop our messages.ts, now that we have these internal queries. So why did we need to do that? Well, because export const create in the messages file won't be a mutation; it's going to be an action. Remember actions? Let me go ahead and refresh your memory.
I think it's organizations in convex/public here. Here are actions. Actions are a special type of Convex function used to query third-party services. And because of that, they are not able to directly access the Convex database unless you use internal queries. That's why we prepared those.
So this concept is a little bit unusual. So it's probably confusing you, but it's actually very simple. Once you write a couple of these, I highly recommend going inside of their documentation and reading about internal functions. So they can only be called by other functions and cannot be called directly from a convex client. Use cases for internal functions.
Here's the first one, the exact one we need: calling them from actions via runQuery. That is the very first example, and exactly what we need them for. So let's go ahead now and develop this. Here are the arguments that we are going to need for this action: prompt, threadId, and contactSessionId. Then let's add the handler here, context and arguments. And let's import convex/values, like that.
Now in here, the first thing I want to do is get my contact session, by using await, not context.db.get, but instead context.runQuery. And the query that we are going to run is internal.system... oops, we have to import internal from the generated API. So internal.system dot... let me just check the internal. Why is it not working?
Just a second. internal, generated API... okay, let's go ahead and write internal.system.contactSessions.getOne. We need to have turbo dev on so Convex can update and create those internal functions first. So whenever you don't have the correct types, it's mostly because of that.
So there we go. You can see how now it's fixed. We do have an error, but it's not related to the API because it's missing the arguments. The argument needed is the contact session ID, which is arguments contact session ID. And that's how we get the contact session so that we can validate it.
So if there is no contact session, or if contactSession.expiresAt is less than Date.now(), throw a new ConvexError (from convex/values) with code unauthorized and message invalid session, like that. And now we can go ahead and grab our conversation that we need. So conversation is await context.runQuery, internal.system.conversations.getByThreadId, passing args.threadId. And that's how you find the conversation. Now, if no conversation was found, we can throw a new ConvexError here with code not found and a message conversation not found. And now, if conversation.status here is already resolved, we shouldn't be able to send messages. So let's throw a new ConvexError here: code bad request, message conversation resolved.
Great. Now I'm going to just add a comment here: TODO, implement subscription check. Because if we have a subscription, only then are we going to allow the AI agent to respond. And now we're finally writing the code for the AI agent to respond. So await supportAgent, make sure you have imported it from system/ai/agents.
This is why this needs to be an action: because we are calling a third-party service. What is the third-party service? Well, OpenAI or Gemini or Grok. That's why this needs to be an action, and that's why we are doing this indirect type of fetching, right?
So let's await supportAgent.generateText, passing the context as the first argument, passing the thread ID (args.threadId) as the second argument, and, as the third argument, passing the prompt. And let me just check, should that prompt be args.prompt? Yes, like that. Now later on, we will have tools which we can add here, but for now, this can just be empty.
And I think that should be it. That is our messages create action, to basically create a new AI response. And we are doing the same type of validation, because this will be used by the widget user, right? So the widget user will attempt to create a new message, so we have to verify their contact session. But we have to use the internal system functions for this one, because this is an action, and that's the only way we can do that.
Because if you try doing context.db.get, you will get an error; there is no database here, because they strictly separate their actions from their queries and mutations. This is for security reasons, right? Great.
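Assembling the steps above, the whole create action might look like this. A sketch: error codes and messages follow what we just wrote, and the subscription check stays a TODO.

```typescript
// packages/backend/convex/public/messages.ts (sketch)
import { ConvexError, v } from "convex/values";
import { action } from "../_generated/server";
import { internal } from "../_generated/api";
import { supportAgent } from "../system/ai/agents/supportAgent";

export const create = action({
  args: {
    prompt: v.string(),
    threadId: v.string(),
    contactSessionId: v.id("contactSessions"),
  },
  handler: async (ctx, args) => {
    // Actions have no ctx.db, so we go through the internal queries.
    const contactSession = await ctx.runQuery(
      internal.system.contactSessions.getOne,
      { contactSessionId: args.contactSessionId },
    );
    if (!contactSession || contactSession.expiresAt < Date.now()) {
      throw new ConvexError({ code: "UNAUTHORIZED", message: "Invalid session" });
    }

    const conversation = await ctx.runQuery(
      internal.system.conversations.getByThreadId,
      { threadId: args.threadId },
    );
    if (!conversation) {
      throw new ConvexError({ code: "NOT_FOUND", message: "Conversation not found" });
    }
    if (conversation.status === "resolved") {
      throw new ConvexError({ code: "BAD_REQUEST", message: "Conversation resolved" });
    }

    // TODO: implement subscription check

    // The agent stores the message and generates the AI reply on the thread.
    await supportAgent.generateText(
      ctx,
      { threadId: args.threadId },
      { prompt: args.prompt },
    );
  },
});
```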
So now that we have that, let's also develop export const getMany. And this can be a normal query. So yes, you can mix and match actions, queries, and mutations in the same file. That's completely fine. The only exception would be if you were to actually add "use node".
That is used in very, very rare exceptions, when, for whatever reason, the third-party package you're using needs to have it; then you can't mix queries and actions. So what we're going to have to do here is, first of all, import query from the generated server. So make sure you have added it the same way we added action here. The arguments are going to be threadId, which is a type of string, and paginationOpts, which is the paginationOptsValidator; you can import it from convex/server.
So basically, Convex handles pagination for us; we don't even have to think about it. And contactSessionId, which is an ID of contactSessions. Let's add the handler, an asynchronous method with context and arguments. And now let's go ahead and, first things first, validate our contact session the same way we did before. So we can now normally get it.
You can see the difference: in here we have access to the Convex database, so we can just easily do this, but when an action is in question, we have to go through the internal system — that's why we created those internal functions. Now let's do the normal check: if the contact session does not exist, or if expiresAt is in the past, it's an invalid or expired session.
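The expiry check itself is plain logic, independent of Convex. A self-contained sketch — the ContactSession shape is a minimal assumption, not the real schema:

```typescript
// Hypothetical minimal shape of a contact session record.
interface ContactSession {
  _id: string;
  expiresAt: number; // Unix timestamp in milliseconds
}

// Returns true only when the session exists and has not yet expired —
// the same condition both the action and the query enforce.
function isSessionValid(
  session: ContactSession | null,
  now: number = Date.now(),
): boolean {
  return session !== null && session.expiresAt > now;
}
```

Both the create action and the getMany query apply this exact condition before touching the thread.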
Actually, let's use ConvexError here. It doesn't matter — you can throw normal errors, that's perfectly fine — I just like to use this method. Invalid session. All right.
And now let's simply return the paginated items using await supportAgent.listMessages, which we already have imported, passing the context, the threadId from the arguments, and the pagination options. And that's it. We have now not just fetched the messages — we have also associated them with their appropriate thread ID, and we have added pagination.
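Here is a sketch of how that getMany query can come together — again, the import paths and error shape are assumptions carried over from the rest of the chapter:

```typescript
// Sketch of the paginated `getMany` query for thread messages.
import { ConvexError, v } from "convex/values";
import { paginationOptsValidator } from "convex/server";
import { query } from "../_generated/server";
import { supportAgent } from "../system/ai/agents/supportAgent"; // hypothetical path

export const getMany = query({
  args: {
    threadId: v.string(),
    paginationOpts: paginationOptsValidator,
    contactSessionId: v.id("contactSessions"),
  },
  handler: async (ctx, args) => {
    // Queries do have ctx.db, so the session check is a direct read.
    const contactSession = await ctx.db.get(args.contactSessionId);

    if (!contactSession || contactSession.expiresAt < Date.now()) {
      throw new ConvexError({
        code: "UNAUTHORIZED",
        message: "Invalid session",
      });
    }

    // The agent component scopes to the thread and paginates for us.
    return await supportAgent.listMessages(ctx, {
      threadId: args.threadId,
      paginationOpts: args.paginationOpts,
    });
  },
});
```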
So there are so many things we usually would have to do ourselves here, but Convex did them for us. Not only that — I don't think you even noticed that when we create a new message, we are just passing the thread ID, and that alone is enough for this support agent component from Convex to use the chat history to prepare the response. So you can actually ask this AI model, "what did I say two messages ago?", and it will know — because of this thread ID. All of those things are happening in the background without you even knowing.
So for us, this seems super simple, right? We're just creating the message and listing it. The only complex part about this entire implementation is our contact session validation. Everything else is pretty easy, right? So now I believe we are ready to test this out.
So what I wanna do is go inside of apps, inside of widget, app — my apologies, modules, UI, screens — the widget chat screen right here. And now let's add all of those AI components that we added. From the AI conversation file, that's going to be AIConversation, AIConversationContent, and AIConversationScrollButton, from the @workspace/ui AI conversation components. So just make sure you have them right. Let's check once again inside of packages/ui/src/components/ai: branch, conversation, input, message, reasoning, response, source, suggestion, and tool.
You can use the link on the screen to obtain them. After you have added AI conversation, it's time to add everything from AI input: AIInput, Submit, Textarea, Toolbar, and Tools. After that, let's add everything from AI message — the message itself and the message content.
Now for the response, it's quite easy — just Response. For the suggestions, AISuggestion and AISuggestions. And now let's try and fetch our messages here. So let's see, do we have everything we need? We have the organization ID, we have the contact session ID, and in here we have the conversation itself, so I think we should be able to fetch this by using useThreadMessages. That's right — Convex has prepared a hook for us.
useThreadMessages comes from @convex-dev/agent/react. So let me just see — I'm not sure if I have to install this now, because we just installed @convex-dev/agent, but remember, only in our backend package. So now I'm just checking.
We have to install it now in our dashboard too — my apologies, in our widget. So pnpm --filter widget add, and let me just check the exact version. So this version. Let me just add it.
So make sure you filter into the widget — that's where we are adding @convex-dev/agent. Let's go ahead and run the dev command again. And there we go.
No more errors now. And another thing we can import from here is toUIMessages — basically another helper. So first, let's use useThreadMessages to load our messages. I'm going to go ahead after the conversation query here and add const messages = useThreadMessages. And let's use api.public.messages.getMany.
And then we are going to have to pass the thread ID, which we can obtain through the conversation we fetched above. So let's check if we have conversation?.threadId and if we have the contactSessionId — this contact session ID is available through the atom value. If we have both of those, let's open an object and pass in threadId from conversation.threadId and the contactSessionId. Otherwise, "skip".
And then the third argument is going to be the initial number of items we fetch — initialNumItems — and that can be ten. And those are our messages, just like that. Perfect. So now, if you want to, you can already go below this and JSON.stringify the messages. Then, if you go inside of your app — localhost:3001 with the proper organization ID at the top — and click start chat, you will see an empty array, isLoading false, and status "Exhausted", meaning it loaded all messages and there are none left to load. But it's working, because we have an actual thread ID.
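Put together, the hook call might look like this inside the widget chat screen component. The package path for the generated API is an assumption from our monorepo setup, and initialNumItems: 10 is just an example value:

```tsx
// Sketch of loading thread messages in the widget chat screen.
import { useThreadMessages } from "@convex-dev/agent/react";
import { api } from "@workspace/backend/_generated/api"; // hypothetical package path

// ...inside the component, where `conversation` and `contactSessionId`
// already exist from the earlier steps:
const messages = useThreadMessages(
  api.public.messages.getMany,
  conversation?.threadId && contactSessionId
    ? { threadId: conversation.threadId, contactSessionId }
    : "skip",
  { initialNumItems: 10 },
);
```

Passing "skip" tells the hook not to run the query at all until both the conversation and the contact session are available.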
This thread ID actually exists, so it's working. And here's something fun we can do. For a brief second, go back inside of packages/backend and find your public conversations.ts here. Every time you create a new conversation, wouldn't it be cool if the AI sent the first message? Well, you can actually do that.
Let's go ahead and do this. Let's do await saveMessage. You can import saveMessage from @convex-dev/agent directly, like this, and in here pass the context and pass components.agent. In order to import components, we have to do that from the generated API.
That's where you can find the components. So let's go back inside of the create mutation: pass in the components, pass the threadId — arguments... oh, it's threadId, we actually have it right here, okay. And then let's create a message from the assistant, and the content will be: "Hello, how can I help you today?"
So that's gonna be the first message of every chat. And I'm gonna add a little TODO here: later, modify this to use the widget settings' initial message, because we're gonna allow users to customize it — but right now we don't have that functionality. So refresh your entire page now and click start chat.
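As a sketch, the seeding call inside the create-conversation mutation might look like this — the relative paths and surrounding variable names are assumptions:

```typescript
// Sketch: seeding the new thread with an assistant greeting, inside
// the conversation `create` mutation (paths/names are illustrative).
import { saveMessage } from "@convex-dev/agent";
import { components } from "../_generated/api";

// ...inside the mutation handler, after the thread has been created:
await saveMessage(ctx, components.agent, {
  threadId,
  // TODO: later, pull this from the widget settings' initial message.
  message: {
    role: "assistant",
    content: "Hello, how can I help you today?",
  },
});
```

Because this writes straight to the thread, no model is invoked — which is exactly the trick we reuse later for human operator replies.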
And you can see the difference now. We now have our first message. Hello, how can I help you today? From assistant. Perfect.
So it works. We can officially save a message and we can load messages. And this actually gave you a glimpse of how we're going to enable human conversations. You can see that I was able to completely bypass this Convex AI component by simply importing saveMessage directly and deciding myself what the response will be. That's how we're going to create human conversations without the AI interfering — because, as you can see, for the AI to respond we have to call await supportAgent.generateText, but if you don't want the AI to respond at all, you can just use this.
Just directly save the message — save it with the user role, and the content will be args.prompt. So now that we have all of that, let's create the UI and make this pretty. In order to create the UI, we go inside of the chat widget screen and implement the form. Let's import zodResolver from @hookform/resolvers/zod — this is the problematic one, right, if you remember.
Let's go ahead and import z from zod. And let's import — I'm not sure if we have... we do have Button. Great. But let me just check: we are going to need useForm from react-hook-form, and I think that's enough for now. Now let's define our form schema just above the widget chat screen.
formSchema uses z.object to create a message field, which is a string with a minimum length of 1 and the message "Message is required". Like that. And now let's actually create our form. I'm gonna do that after the messages here. Oops.
Right here. So let's define our form: form is going to be useForm, typed with z.infer of the formSchema constant from above. Then, inside its options object, we add the resolver — zodResolver with the formSchema passed inside — and a very simple defaultValues object with message being an empty string. We have basically already done this: if you search for useForm and z.infer inside of the widget auth screen, you can see the exact same thing with different default values, in case you're not sure where I found this. Perfect. Now that we have the form, let me just move this back up here — it doesn't need to be down there. Now I'm going to add const createMessage and call useAction, because remember, the message create function is an action.
So let's pass in api.public.messages.create. We have imported useAction from convex/react, and we already have api from the @workspace/backend generated API. Great. Now let's develop the onSubmit method, an asynchronous function whose values are of type z.infer of the formSchema. Then, in here, first check: if there is no conversation, just return and break out of the method — there's no point in going forward.
Let's immediately reset the form so we clear it, and then do await createMessage, passing the threadId from conversation.threadId. The prompt will be values.message — the individual message — and the contactSessionId will be ours. And yes, we can also check if there is no contact session ID here, like that. There we go. And now we can just use the shorthand here: if the key and the variable are named the same, writing it either way is the same thing. Excellent.
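The guard clauses and the shorthand payload boil down to plain logic. A self-contained sketch — the argument shape mirrors our action, but the names here are illustrative:

```typescript
// Hypothetical payload shape for the create-message action.
interface CreateMessageArgs {
  threadId: string;
  prompt: string;
  contactSessionId: string;
}

// Mirrors the onSubmit guards: bail out when either the conversation
// or the contact session is missing; otherwise build the payload.
// Note `contactSessionId` uses the shorthand form — identical to
// writing `contactSessionId: contactSessionId`.
function buildCreateMessageArgs(
  conversation: { threadId: string } | null,
  contactSessionId: string | null,
  message: string,
): CreateMessageArgs | null {
  if (!conversation || !contactSessionId) return null;
  return {
    threadId: conversation.threadId,
    prompt: message,
    contactSessionId,
  };
}
```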
So now we have that. Let's go ahead and let's actually develop this form. So I'm going to go ahead outside of the widget header here and I'm going to remove this div. Instead I'm going to add AI conversation. So we have all of these components already imported.
Inside, AIConversationContent. And then inside of here, I'm gonna use toUIMessages — again, we have this imported from @convex-dev/agent/react. toUIMessages takes messages.results, or an empty array.
So messages — whoa, my apologies. Let's just see: messages come from useThreadMessages, that's what I'm referring to right now. We are iterating over them, but let's add a fallback to an empty array.
?.map, get the individual message, and return AIMessage. The from prop will check the message role: if the role is "user", we pass "user", otherwise "assistant". The key is going to be the message ID. Then, inside, AIMessageContent, AIResponse, and render the message content.
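The composition described above might be sketched like this — the component names follow the Kibo UI AI components we added, but the exact props and message fields may differ slightly in your setup:

```tsx
{/* Sketch of the message list — names follow the Kibo UI AI
    components; exact props are assumptions. */}
<AIConversation>
  <AIConversationContent>
    {toUIMessages(messages.results ?? [])?.map((message) => (
      <AIMessage
        from={message.role === "user" ? "user" : "assistant"}
        key={message.id}
      >
        <AIMessageContent>
          <AIResponse>{message.content}</AIResponse>
        </AIMessageContent>
      </AIMessage>
    ))}
  </AIConversationContent>
  <AIConversationScrollButton />
</AIConversation>
```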
So there isn't much for me to explain here — it's just composition of these components, right? I just followed the Kibo UI documentation on how to do this. That's why I'm not explaining too much. It's just composition.
And let's go ahead and just add a little — whoops — TODO: add avatar component. We do have an Avatar component, but we're going to have to create a custom one. Like this. Okay.
In here we close the conversation content, and we close the conversation, like that. Then I'm going to add a little TODO here: add suggestions. We still don't have the resources for that yet. And I think we can already see how this looks now. There we go.
Hello, how can I help you today? Much, much better already. Now it's time to develop the actual form. So I think we forgot to import all the form elements here, but there are only two that we need actually. So somewhere at the top or here, add form and form field from workspace UI components form.
All right, let's scroll down now. Form — spread the form from the hook inside. And now, in here, let's add AIInput. The AIInput props will be the following: onSubmit is gonna be form.handleSubmit, passing in onSubmit.
The className here will be rounded-none, border-x-0, and border-b-0. So how did I know that I can pass onSubmit here? Why am I even passing onSubmit here? Well, again, let's refer to our widget auth screen — the first place where we used the form, right?
Remember that the composition there was Form, and then a native form element inside with onSubmit. So why am I adding onSubmit here, and not a form element inside? Well, if you take a peek inside of AIInput, it's actually a form — that's why. So why form.handleSubmit?
Why not just onSubmit directly? Very simply, form.handleSubmit will make sure that the fields are validated first. That's why. Now, inside of this AIInput, let's add FormField, which is a self-closing tag. Let's give it control: form.control.
So this is just the React Hook Form API — that's why I know how to do this. It's going to be disabled if conversation?.status is "resolved" — we're not even going to allow the user to type in that case. The name of this form field will be "message", because that's the only field we actually have.
The render prop here will destructure the field. AIInputTextarea, like that. disabled will again check if conversation?.status is "resolved". onChange will be field.onChange, just like that. onKeyDown gets the event, and we very simply check: if event.key is "Enter" and we are not holding the Shift key, we submit the form as well — form.handleSubmit(onSubmit) and execute it.
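That Enter-to-submit check is a tiny pure function; here it is isolated so you can see exactly what it decides (the KeyInfo shape is just the two event fields we care about):

```typescript
// Minimal slice of the keyboard event used by the onKeyDown handler.
interface KeyInfo {
  key: string;
  shiftKey: boolean;
}

// Submit only on Enter without Shift — Shift+Enter should fall
// through and insert a newline in the textarea instead.
function shouldSubmitOnKeyDown(event: KeyInfo): boolean {
  return event.key === "Enter" && !event.shiftKey;
}
```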
Basically, this onKeyDown prop is not required — I just think it's cool that you can press Enter and it submits automatically. And now the placeholder again depends on whether the conversation is resolved: if conversation?.status is "resolved", the placeholder is "This conversation has been resolved"; otherwise, it's "Type your message". Just be mindful of the question mark here. Finally, the value will be field.value. Great. And this is actually a self-closing tag, so you can add a forward slash here and remove this, just like that. Then, outside of it, go ahead and add AIInputToolbar and AIInputTools.
They're going to be completely empty, but we need them just to take up the space. And then AIInputSubmit — which is our last component, I promise. disabled if conversation?.status is "resolved" or if the form state is not valid. Make sure to put the exclamation point in there.
So: if it is not valid — and mind the question mark on conversation?.status. status will be "ready" and type will be "submit", just like that. And here we have our component. Now, if we've done this correctly, I should be able to ask the AI a question: "How are you?" Let's go ahead and click submit. Our message is submitted. And would you look at that?
The AI has responded: "I'm here and I'm ready to assist you. How can I help you today?" What can you help me with? So you can see that this is now an AI model that we can talk to.
And you can see that it's hallucinating right now, right? I just told it: you are a customer support agent. It actually has no access to tools, no access to any embeddings — it will just generically act as a customer support agent. But this is our blueprint, and this is what we will be building on.
So you can see how simple Convex made this for us. We even saved some time by not having to develop the tool plumbing — we just have to add tools and use them. Basically, 90% of the heavy lifting here was done by Convex; we just had to add our AI SDK, and the rest is history. So, for example, if I now ask: what did I ask you before?
You will see that this AI is history-aware: "You asked me how you were doing, right? Your first message was 'How are you?', and then you inquired about what I can help you with — that was your second message."
So you can see, we didn't do anything to give this AI memory — the Convex agent component did that for us using that magical thread ID thingy. Absolutely amazing. I'm super proud of what we did, and here's a fun fact. You are probably wondering: okay, great, I can see conversations and sessions in my database, but where are those messages?
Well, look at this little button here. Click here and click on "agent". This is where they are. If you go into the messages here, you will find all of the messages. That's how they are stored in Convex.
You can see these are components, right. And one quick tip as well: sometimes it's hidden, sometimes it looks like this.
So you have to click on the tables. Just a quick tip. Excellent. Amazing, amazing job.
I think that's everything we had planned. We added the AI components, we created the message functions and the internal functions. Amazing, amazing job. And we also made it look like this.
There are a couple of things missing, like infinite pagination and avatars, but we will add those very soon. I think it's time to end this chapter at this moment. So this is "14 AI agents". Let's go ahead and close everything, and let's stage all of our changes.
"14 AI agents". Let's commit. Let's open a new branch, "14 AI agents", and let's publish that branch.
Then I'm going to go inside of my repository here and review this pull request, just to see that we didn't miss any critical security issues like we did last time, when CodeRabbit saved us with its review. So: we introduced a fully featured AI chat interface with structured message rendering, input validation, and enhanced conversation controls. We added support for AI-generated suggestions, reasoning, sources, and tool outputs with new collapsible and interactive UI elements — this is describing the Kibo UI components we added. We enabled multi-line message input, improved scroll behavior, and dynamic message fetching with pagination.
All of this is still, I believe, mostly talking about the Kibo UI components — besides the message fetching, which is our Convex agent component. We integrated a customer support AI agent using the latest OpenAI model — in my case it's OpenAI, right — for backend conversations. And in here, a file-by-file change, of course, but we also have the sequence diagram, which is always interesting to look at just to recap our knowledge. So: the user submits a message via the form in the widget chat screen.
We call the create message function with the prompt, thread ID, and session ID. The backend receives that and uses the support agent to generate the response. The support agent returns the AI response; then we update the messages, fetch them via the getMany function, and finally render them in the chat UI using the Kibo UI components. Amazing. As for the comments, it left a suggestion to wrap the await createMessage call in a try/catch.
That's true, we could do that. The only reason I'm not doing it is because I haven't thought of a good way to display error messages in the widget component — I'm not sure how a toast notification would look, because remember, this won't be a full-screen app. It will be a very small part of the iframe. That's the only reason I'm not wrapping this in a try/catch and firing a toast: because this is the widget component, it's a little bit odd, right?
The check for the OpenAI key — I'm fine with the way it is. In here, a warning to check for duplicated OpenAI HTTP traffic — good advice, but I think we are good here. And yes, this one we should be aware of: we just added react-markdown, which, depending on how we use that component later on, means we have to sanitize user input, because without sanitization we are open to XSS injections.
Now it's just reviewing the Kibo UI components, so I'm not going to look through those changes — they are from a third party, and I am okay with them as they are. Which means we are ready to merge this pull request. Amazing, amazing job. Let's go back to our main branch and synchronize our changes to make sure everything is up to date.
Let's select the graph just to confirm we detached and merged back. Since we are already in the 14th chapter, I would recommend that at the end of each tutorial — my apologies, at the end of each chapter — you actually run a turbo build, for a very simple reason: if something is wrong, you have time to fix it. If you don't check your build for the next, I don't know, ten chapters — however many we're going to have — you're going to have a lot of problems fixing things later. So please do occasional turbo builds. You can see that my widget is building fine; if yours has any errors, now would be a good time to fix them. It is 99% a type error or a lint error, so it should be easy to fix. But yeah, that's my advice: basically, just start building your apps regularly, because we are deep into our app, and if you only try a turbo build when we deploy, you might be hit with a hundred type errors, wondering where they came from.
So always run turbo build at the end of each chapter, so you stay ahead of any errors. I believe that marks the end of this chapter. We pushed to GitHub. Amazing, amazing job, and see you in the next chapter.