In this chapter, we're going to focus on adding production-grade error tracking to our project. To do that, let's first demonstrate an error scenario within our app. I'm going to go inside of my _app file, where I have a couple of procedures in my app router, including the test AI route. And inside of it, let's throw a new TRPCError from @trpc/server: pass in the code BAD_REQUEST, and pass in a message which says "something went wrong".
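As a sketch, the forced failure might look like this in a typical tRPC v11 setup; the router factory, procedure, and import paths are placeholders for whatever your project actually uses:

```typescript
// routers/_app.ts (file and factory names are assumptions, not the course's exact code)
import { TRPCError } from "@trpc/server";
import { createTRPCRouter, baseProcedure } from "@/trpc/init";

export const appRouter = createTRPCRouter({
  testAi: baseProcedure.mutation(() => {
    // Deliberately fail so we can see how (in)visible errors are by default
    throw new TRPCError({
      code: "BAD_REQUEST",
      message: "Something went wrong",
    });
  }),
});

export type AppRouter = typeof appRouter;
```
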
Now let's do npm run dev, or if it's already running, just restart your Next.js server, and go to localhost:3000. If you click test AI now, you don't even know that something went wrong unless you take a look in the terminal, where you can see the error being thrown. One way to make the user aware that something went wrong is to go inside of the page where you execute test AI, add an onError handler, and call toast.error with "something went wrong". If you try now, you will get the message down here: something went wrong. This is an extremely basic scenario, and here we are actually forcing the error.
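On the client, that handler might look roughly like the following; the useTRPC hook, the sonner toast import, and the component name are assumptions based on a common tRPC + TanStack Query setup, not the project's exact code:

```typescript
"use client";

// Import and hook names assume a typical tRPC + TanStack Query + sonner setup
import { useMutation } from "@tanstack/react-query";
import { toast } from "sonner";
import { useTRPC } from "@/trpc/client";

export const TestAiButton = () => {
  const trpc = useTRPC();
  const testAi = useMutation(
    trpc.testAi.mutationOptions({
      // Without this handler, the failure only shows up in the server terminal
      onError: () => toast.error("Something went wrong"),
    }),
  );

  return <button onClick={() => testAi.mutate()}>Test AI</button>;
};
```
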
But errors like this are bound to happen in every application, and even more so in an application that is this heavily dependent on user input. As a developer, you are very familiar with your application and your code, which means that errors and bugs like this will not escape you: you will catch them and you will fix them. But will you catch all the bugs that hundreds or thousands of your users will run into?
Chances are you're not going to do that so easily. Now, there are ways you can add your own logging. For example, every time you throw an error, you could call some kind of logger and pass in the message, and that logger would store it somewhere so you can take a look later. But you can see how tedious that becomes very quickly.
Just imagine having to do that for all of our background functions and all of our third-party services. Very soon it becomes unmaintainable. For that reason we're going to integrate Sentry, production-grade error tracking which works almost magically. Besides that, we're also going to use Sentry for something called session replay, which is basically a way to see exactly what your user was doing, in an anonymized way, and where they clicked in the UI when the error started. Because sometimes just seeing the error on your backend doesn't make it instantly clear why it happened.
Even if you look at the input that was sent, it's just not clear. So combining that with a session replay, which is able to recreate how the UI looked at the time of the error, gives you a whole other perspective. And Sentry is one of the rare error tracking tools that is able to do that. But besides that, we're also going to set up something very interesting called AI monitoring. With just a few configuration steps, you will be able to track every single AI call within your application.
Not only whether it succeeds or fails, but also how long it took and how much it cost. And by having that information, you will be able to keep improving your app by optimizing it to cost less and respond faster. Let's go ahead and do that. Using the link on the screen, you can visit sentry.io and create an account. I've teamed up with Sentry, and they are currently offering three months of the Sentry Team plan for free.
So use the link on the screen and create an account, and once you have done that, go ahead and sign in. I've logged in, and this is how an empty dashboard with no projects looks. If you already have a project, that's perfectly fine; we're going to create a new one for this project. So I'm going to click right here on help, and then I'm going to click on documentation.
And in here I'm going to find the installation wizard for Next.js. So let's run this command right here. Just for information purposes, I'm also going to show you the exact version of this installation: let me paste this and check the version first. This way, if you want to use the same version as me, you can.
And now I'm going to run the actual command. I recommend shutting your app down for this one and just running the command. In here it tells us that we have uncommitted or untracked files in our repo. It is basically warning us that the Sentry wizard is going to install some files, so be careful in case it overwrites them. We don't really care about this; we just added some errors and some toasts for showing those errors.
We can even unstage all of those. It really wouldn't matter. So it's completely fine. So I'm just going to select yes. Now it's asking us if we are self-hosting Sentry.
That's right. You can self-host Sentry if you want to. That's another reason why I highly recommend using Sentry for error tracking. If you don't want it you don't have to but you can still self-host it. So let's go ahead and select Sentry SaaS because we just created an account on their website.
Do you already have a Sentry account? Yes, we can select yes, and this will now open the page here. So select your organization, and let's go ahead and create a new project. I'm going to call this NodeBase, click continue, and now the wizard will connect, so we can return to the terminal. Here it is, installing @sentry/nextjs with npm, our package manager. For this prompt, "do you want to route Sentry requests in the browser through your Next.js server to avoid ad blockers?", you can see what will happen if you select yes.
It can potentially increase the server load and hosting bill. So depending on what you want, you can select no, but then browser errors and events might be blocked by ad blockers before being sent to Sentry. So you kind of have to decide for yourself. For testing purposes, for demo, it's completely fine to select yes. We are just doing this locally.
So let's see Sentry in action as what it can do in full, right? Do you want to enable tracing? Yes. Do you want to enable session replay to get a video-like reproduction of errors during the user session? This is extremely useful and one of the most impressive things I've seen.
Make sure to select yes for this. Do you want to enable logs? Again, this is also very very useful. You will be able to follow your application logs from start to finish. So select yes.
And select yes for creating an example page, so we can test whether the Sentry setup is working. In here, it gives us a warning that we are using Turbopack. It is warning us that Sentry is only compatible with Turbopack on Next.js version 15.3.0 or later, so we are good. If you're using Turbopack with an older Next.js version, just remove --turbo or --turbopack from your development command.
So we don't really have to do anything here. Are you using a CI/CD tool? We can select yes for this. Now let's copy the Sentry auth token, go inside of .env, add SENTRY_AUTH_TOKEN, and paste the value there. I like to put it inside quotes because of syntax highlighting. And let's say yes, continue. "Optionally add a project scope and MCP server": I'm not working with MCPs, so for me this can be no, and as you can see, you can add it later anytime.
Here it is: successfully installed the Sentry Next.js SDK. So we can now run npm run dev and go to /sentry-example-page. Don't forget to remove turbo if you are on an older Next.js version; that's not the case for us. So let's do npm run dev:all, so we start both Inngest and Next.js.
Let's go to localhost:3000/sentry-example-page. And now in here, let's go ahead and throw a sample error. If you get the message "error sent to Sentry", it means everything is good. So let me go ahead and refresh here. In my Sentry dashboard, you can see how I now got both the front-end error and the example API error.
So that's how Sentry works. The error that just happened wasn't explicitly logged anywhere; Sentry will simply catch every single error within our app. That's exactly what we want.
We don't want to concern ourselves with whether we forgot to track an error; Sentry will monitor every single error that's happening within our app. So if you scroll down here, and make sure you select the front-end error, you can actually see the session replay from this anonymous user. And you can see how it managed to recreate what was happening in the UI at the time: the user clicking on the button, the success messages, the cursor moving.
So this was all done using metadata and telemetry to recreate the scene, and this will help you a lot in production. When errors keep happening and you don't know why, just go to the front-end error and watch the session replay; it will give you a clearer picture of why it happened. You can also follow the breadcrumbs about how this happened. You can see that right here this Sentry example error is being thrown, raised on the front end of the example page. But you can also see what actually happened: we fetched /api/sentry-example-api, got a 500 error, and that's why the exception was thrown. So the more you explore Sentry, the more you're going to understand how useful this is.
These are the logs that I have been talking about, so you can see exactly everything that was happening. We navigated to the Sentry example page, and then in here we did a UI click, and you can see exactly which button we clicked, and even which HTML element within the button, which was a span. Then you can see exactly what that button triggered, what we received back, and finally the exception that was thrown on the front end. Using all of that telemetry, Sentry is able to achieve something like this. This will help you greatly within your projects.
And if you use the link that I showed on the screen, you can actually get Sentry Team for three months for free. But even after the free trial, if you don't have the budget, Sentry is still free and has an extremely generous free tier. I highly suggest just keeping it in place in case some errors happen. Now let's check out things even further here. We also have the Sentry example API error here.
In here we can see some other things: this is the backend side of the error. You can see how different this is now: here we are tracking internal files of Next.js, where it happened, and how it happened. Right?
And again, you can follow the breadcrumbs to understand why this happened. You can see how it has the trace preview specifically created for Next.js. So you understand exactly every single thing that is happening here. Page load, Sentry example page, then HTTP server. You can see in how much detail you can track these errors.
This is what production-grade error tracking looks like. But we are barely scratching the surface of what Sentry can do. So while this is super cool, I can talk about this for days probably. So I'm going to stop myself here. And now what I want to do is I want to go ahead and just show you a few more things besides the issues here which are basically your unresolved issues.
And yes, from here you can actually assign your members to these issues. You can mark them as resolved, or resolve them in a commit. You can connect this with GitHub and create a pull request or a GitHub issue; there are so many things you can do here. There are also different views: under Explore you can see individual traces, or you can see the logs.
So let's go ahead now and let's focus on one specific thing which is insights. And in here you can see AI. So let's go ahead and configure AI agents because this is another super cool and useful thing that we can do. So we already installed Sentry SDK and we are using Vercel AI SDK. So this is good.
@sentry/nextjs — let's just double check. I'm pretty sure we got all the packages when we ran the wizard. So inside of package.json, let me just check: @sentry/nextjs. Here it is. All right, no need to do anything here. Instead, let's focus on the next step, which is configuration. So: add the Vercel AI integration to your Sentry.init call.
So I'm going to copy the integration block right here, and then I'm going to find my server config. As per their documentation, we have the edge config and the server config, so after the DSN here I'm going to add integrations; that is, add the Vercel AI SDK integration to sentry.server.config.ts. It's added right here, with recordInputs set to true and recordOutputs set to true as well. And let's just quickly verify.
So tracing must be enabled for AI monitoring to work, with a traces sample rate of 1.0; let's just see whether we have that. Our tracesSampleRate is set to 1, which is the same as 1.0. Great. We can also set sendDefaultPii to true, simply because they also have it here. And now, in order for this to work, we have to find where we use generateText and add the experimental telemetry option.
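Putting those pieces together, the server config ends up along these lines; note that the DSN environment variable name is an assumption (the wizard normally hardcodes your project's DSN string directly):

```typescript
// sentry.server.config.ts
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  // The wizard usually hardcodes your project's DSN string here
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,

  // Tracing must be enabled for AI monitoring; 1.0 traces every request,
  // which is fine in development but usually lowered in production
  tracesSampleRate: 1.0,

  // Allows prompt and response contents to be attached to the AI spans
  sendDefaultPii: true,

  integrations: [
    Sentry.vercelAIIntegration({
      recordInputs: true,
      recordOutputs: true,
    }),
  ],
});
```
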
So let's go inside of our functions.ts inside of inngest, where we use generateText. After the prompt here, let's also add experimental_telemetry. Now, depending on when you're watching, this might have become just telemetry; since it has the experimental prefix, it's bound to be renamed at some point. So if experimental_telemetry is not working for you, the API has probably changed, and I'm pretty sure the name will stay telemetry. So let's add this to all the places where we are using generateText.
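As a sketch, one such call might look like this; the model, provider, and prompt here are placeholders, not the project's actual values:

```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai"; // provider and model are placeholders

const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Summarize this document for me.",
  // Opts this call into OpenTelemetry so Sentry's Vercel AI integration can trace it.
  // If your SDK version has renamed the option, try plain `telemetry` instead.
  experimental_telemetry: { isEnabled: true },
});
```
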
So these three places right here and let's click save. And once we've done that let's go ahead and click on next. And now this is waiting for the project's first agent events. So I'm going to go to localhost 3000 and refresh here and I'm going to click test AI. And now I'm going to just watch here and see if it will detect it.
So as far as I remember, we first wait for a few seconds, and then we fire the events. After a few seconds, refresh if it doesn't refresh for you, and you should see this screen. In here you can see traces, models, and even tools. Right now we are not using any tools, so we just have to focus on the first two. And you can already see the number of LLM calls for GPT-4, for Gemini 2.5 Flash, and for Claude Sonnet 4.5.
You can see the exact amount of tokens that we've used, and you can see GPT-4 is using significantly more tokens than Gemini. So perhaps, if you want to make your app the cheapest one amongst competitors, you can default your users to Gemini 2.5 unless they select otherwise. This way they will have the cheapest token spend. So this is what I was telling you about: how you can learn more about your application by adding Sentry.
You can make smarter decisions this way. Inside of the models tab, you can see this in another graphical view. You can also see errors which are happening. For example, my Claude calls keep getting errors, right? Why?
Well, it's getting errors because I have no API key set. And what Sentry has done is reuse their amazing traces, everything you usually use for normal errors, and add it to the Vercel AI SDK. So now we can see exactly what went wrong in this kind of environment as well. And this is extremely useful, because AI models are very hard to keep track of, right? Now you have them in one place; you can filter by so many things: by project, by production or development, the last hour, the last 24 hours. And you can combine it with everything that we have learned in the issues tab.
So you can follow the trace from the front end all the way to the back end basically. Sentry is an absolutely crucial part of this project, but you will only see its power when you go to production. And hopefully all of you will go to production here. So you can find the AI again here in AI new. You can go ahead and play with the models.
You can play with the tools here. And the more you use your app, the more errors you're going to get here, the more things will happen. And this will be another way of fixing what you missed. So we will definitely come back to Sentry at some point. And I just want to see whether we have logs here.
So let's see, we already installed this. Let's go ahead and click next. Now we also need to add Sentry's console logging integration here. Let me just see where we are supposed to add this; it doesn't look like they specify which config, so I think we need to add it to both. So let's add it to the server config first.
So in integrations, after the Sentry Vercel AI integration, also add Sentry's consoleLoggingIntegration, tracking log, warn, and error here. And let's do the same in sentry.edge.config.ts: I'm just going to add integrations there and add it like so. So now, basically, if you just add some logs in your functions, you are also going to track things: console.log, or maybe console.warn("something is missing"), or console.error("This is an error I want to track").
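The server config with both integrations might end up looking roughly like this (mirror the same integrations array in sentry.edge.config.ts; the enableLogs flag is how recent SDK versions switch the logs product on, while older versions used an _experiments option, so check against your installed version):

```typescript
// sentry.server.config.ts (repeat the integrations in sentry.edge.config.ts)
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN, // wizard usually hardcodes this
  tracesSampleRate: 1.0,
  sendDefaultPii: true,

  // Enables Sentry's logs product on recent SDK versions
  enableLogs: true,

  integrations: [
    Sentry.vercelAIIntegration({ recordInputs: true, recordOutputs: true }),
    // Forward plain console.log / console.warn / console.error calls to Sentry
    Sentry.consoleLoggingIntegration({ levels: ["log", "warn", "error"] }),
  ],
});
```
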
You no longer have to develop your own logger abstraction and send that somewhere, because after you've added this integration, Sentry will do it for you. So let's click next here and see. Okay, perhaps we have to use this logger. My apologies.
Perhaps I understood it wrong, but I'm pretty sure both are true; I think they're just using the logger here to forcefully test it, and both will work. So I'm going to try their logger directly. Either way, you don't have to create your own, right?
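Both styles can be sketched side by side; the attribute object passed to the structured logger below is just an illustrative example, not something from the project:

```typescript
import * as Sentry from "@sentry/nextjs";

// Option 1: plain console calls, forwarded by consoleLoggingIntegration
console.warn("Something is missing");
console.error("This is an error I want to track");

// Option 2: Sentry's structured logger, which accepts extra attributes
// (the attribute key/value here is purely illustrative)
Sentry.logger.info("User triggered the test log", { source: "test-ai-button" });
```
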
So both will now work. Let's see. User triggered the test log. And let's go ahead inside of our app here. Let's refresh and let's click on test AI button and now in here.
Let's go ahead and wait a few seconds and then let's try and refresh to see if this will now work. So it takes a while but after a few refreshes you will start to see the logs here as well. And from now on whenever you want to log something and make sure that it is recorded, you can use the Sentry integration to do so as well. So as I said, we will definitely be visiting this dashboard throughout the tutorial so we can keep track of our tokens used, our LLM calls, and all the errors that we might have missed that are happening in the background. Because you can see I don't always have this open, but this could be full of errors as far as I know.
That's what we're going to make sure Sentry keeps track of. And we no longer have to depend on ourselves to think of every single possible error that both we and our users can cause. So I hope I've shown you the power of Sentry and why I like it so much. Let's see if that's all we planned for this chapter. I believe it is.
We've set up Sentry. We've shown session replays, logs, and AI monitoring. And we will continue to demonstrate throughout the rest of the project and now let's just push this to GitHub. So since this was mostly just installation wizard we don't really have to review this pull request. So let's go ahead and just create a new branch, 08 error tracking like so.
Let's go ahead here into the source control, stage all changes, write "08 error tracking", hit commit, and hit publish branch. I also just want to tell you that if, for whatever reason, this chapter was unsuccessful for you, in the sense that something is not working, do not worry. All of this is just an improvement; it is not required, and you will still be able to finish the project without it. I just think it's an extremely useful thing to have, especially for a production-grade application. So here I am opening the pull request, and as I said, this was mostly just an installation wizard and some testing scenes, so we can immediately merge this pull request.
We are just going to keep it as a separate branch, right? But we don't really have to go through the review. And once you have merged this, you can go back inside of your project here and make sure that you go back inside of your main branch. And as always, make sure that you synchronize changes. So everything is pulled from the branch that we just merged.
And I believe that marks the end of this chapter. Amazing, amazing job. You can also go inside of your source control graph here to confirm that you have branched out and merged, as always. See you in the next chapter. Thank you.