In this chapter, we're going to set up the entire backend layer that powers the application. We're going to build a type-safe API using tRPC, and cloud file storage with Cloudflare R2, which is really just any S3-compatible storage. So if you just heard Cloudflare and thought, oh, but I want to use AWS, no worries. You can. Any S3-compatible storage will work.
But I will teach you how to set up Cloudflare R2 because of its free tier and simplicity of use. We're also going to build a seed script that adds 20 system voices to our Postgres database and uploads their samples to R2, so each database row will hold a reference key to the uploaded object. It's all going to stay nicely synchronized. We're also going to learn how to create auth-aware procedures: a base procedure, which will be used for public endpoints; an auth procedure, which will be used when something needs an authenticated user; and an organization procedure, which is very important because our app is multi-tenant by default. So you will learn how to do that.
And this is the actual auth layer too. So whenever you think about security in your app, it should always live in the data access layer rather than in middleware. In the first chapter we worked with a proxy.ts file which basically just serves as a nice redirect for users, right? If a logged-out user attempts to open the app, they get redirected to the sign-in page. But that's not actual security.
We shouldn't rely on that proxy file. Instead, we should rely on things like this: data access layer security. So the key concepts to learn from this chapter are type-safe APIs, auth middleware, server prefetching and hydration in client components, signed URLs for audio, and a seed script. The build order we're going to use is: first set up tRPC, then Cloudflare R2 storage, and finally develop the actual seed script. So let's get started by setting up tRPC.
Make sure you have your app running here; this should be the last thing we built. We built the history placeholder and the settings tab, we turned this into a better UI, and we have a form which cannot be submitted at the moment because the voice selector is missing — but we cannot build the voice selector until we actually have some voices. So let's go ahead: you can use the link on the screen to visit the tRPC.io page, and I want to show you exactly where you can find the snippets that I'm going to use to set up tRPC. We can go inside of the documentation here, open Client Usage, and in here you're going to see two React Query integrations.
One is TanStack React Query with a star icon, and one is the classic React Query integration. We're going to use the new one, the one with the star icon. And in here, select Server Components. This is the guide that you need. Now, in case you are watching this in the future and something has changed, don't worry — the project is open source, you can always just copy and paste the exact components I'm adding, and I will of course go over all the snippets we copy and paste so we understand what we're doing.
Great. So let's start by actually installing all of these packages here. I'm going to copy this and install @trpc/server, @trpc/client, @trpc/tanstack-react-query, @tanstack/react-query@latest, zod, client-only, and server-only. I'm going to go ahead and install all of these packages, and you can see from my documentation that I'm currently on version 11.x, so my major version is 11. I would highly advise that you work on the same version as me.
And I'm going to show you exactly what versions those are now that my package.json is updated. So let's take a look. @tanstack/react-query is 5.90.21. All of my tRPC packages are the same, so 11.10.0. And client-only — this one isn't important — is 0.0.1, and server-only is also 0.0.1.
So I would suggest trying to keep tRPC at the same major version as me. The others don't really matter. In fact, when I developed this app the first time, I think it was on eight or nine. So yeah, you don't have to match 100%. Great.
So that was step one, adding all the packages. Now let's go ahead and build a sample backend. I'm going to zoom this in. We're going to go inside of the source folder, create a new folder called trpc, and in here create init.ts. I'm going to copy the contents of this file, so let me just zoom out a bit. We're importing initTRPC from @trpc/server and cache from react.
We build createTRPCContext using that cache and just return a mock user ID. We then create the tRPC instance here using initTRPC.create, and we export createTRPCRouter, createCallerFactory, and baseProcedure. You will also notice a commented-out data transformer here; we're going to come back to this later. So for now, this file is good as is.
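For reference, the init.ts we just pasted looks roughly like this — a sketch following the official guide, where the mock user ID is just a placeholder until we add real auth:

```typescript
// src/trpc/init.ts — sketch following the tRPC v11 server-components guide
import { initTRPC } from "@trpc/server";
import { cache } from "react";

// cache() ensures the context is created only once per request
export const createTRPCContext = cache(async () => {
  return { userId: "user_123" }; // mock user ID for now
});

const t = initTRPC.create({
  // transformer: superjson, // commented out — we'll come back to this
});

export const createTRPCRouter = t.router;
export const createCallerFactory = t.createCallerFactory;
export const baseProcedure = t.procedure;
```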
Let's go ahead and continue copying the snippets. The next thing we need is to create a routers folder and, inside it, _app.ts — we're doing that inside of the trpc folder. I'm going to copy everything. What we're doing here is importing z from zod, and importing baseProcedure and createTRPCRouter from our recently created init file.
In here we are defining the appRouter. What this is, essentially, is a hello API route using the base procedure, meaning it's public for everyone to use. The hello route requires an input called text, which is a string, and it returns an object holding a greeting key whose value is "hello" plus whatever we sent in the input. So it's basically a very type-safe way to write your API endpoints, and combined with Prisma, which is already extremely type-safe, it makes for a brilliant developer experience and much safer code in the end.
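The sample router from the docs looks roughly like this (a sketch — the exported AppRouter type is what gives the client end-to-end inference):

```typescript
// src/trpc/routers/_app.ts — sketch of the sample router from the docs
import { z } from "zod";
import { baseProcedure, createTRPCRouter } from "../init";

export const appRouter = createTRPCRouter({
  hello: baseProcedure
    .input(z.object({ text: z.string() }))
    .query((opts) => {
      return { greeting: `hello ${opts.input.text}` };
    }),
});

// Exporting the type (not the router itself) is what gives the client full type safety
export type AppRouter = typeof appRouter;
```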
All of the files I'm going to create now, you don't really have to know off the top of your head, right? You will simply get to know them by using them. When I learned tRPC, I didn't really bother with understanding every single thing being written here. It just became clear after I had to use them. And I would recommend you do the same.
So this is mostly just following documentation. The reason I'm telling you this is because I don't want you to get scared of adding tRPC, which is a relatively abstract concept. It's simply a way for us to write API routes in a very, very type-safe way. The end result is what matters, not the way we are building this. In fact, this could have been an npx trpc init script if you ask me.
But let's add it one by one, so at least we are aware of all the things we have added. All right, so now we have to actually wire tRPC into our Next.js API routes. The way you write API routes natively in Next.js — so, not with tRPC — is by using the reserved filename route.ts. It works the same way as, for example, page.tsx, except it's route.ts. So you need to create a folder.
We're going to create a folder called api, then create trpc inside of it, then a dynamic [trpc] folder — this basically matches anything — and finally add route.ts. This will be the equivalent of localhost:3000/api/trpc/<anything>. That's how we are going to communicate with tRPC throughout Next.js.
So let's add everything in here, and here we need to do some slight modifications. We are importing fetchRequestHandler from @trpc/server/adapters/fetch, and then createTRPCContext and appRouter from seemingly incorrect paths. That's because the docs use a different import alias, so just change it to an @ sign and that should work, because the docs' alias refers to the same thing as our @/trpc.
And we already have the init file and we already have the routers app file. And that's exactly the ones we are targeting here. And we also have the matching exports. So everything should work just fine. This endpoint should be left as is.
Just make sure you didn't misspell anything. So api/trpc — if you accidentally write tprc, that's wrong and it wouldn't work. So make sure it's trpc.
The dynamic folder name actually doesn't matter — what's important is that it's in square brackets — but do make it make sense, so keep it [trpc] here as well. All right. Now that we have that, let's continue scaffolding tRPC. Again, I repeat: you don't have to understand every line of code at the moment, because this is mostly just setting up React Query to work with tRPC.
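With the alias fixed, the route handler ends up looking roughly like this (a sketch — the file path assumes the folder structure we just created):

```typescript
// src/app/api/trpc/[trpc]/route.ts — sketch with the import alias changed to "@"
import { fetchRequestHandler } from "@trpc/server/adapters/fetch";
import { createTRPCContext } from "@/trpc/init";
import { appRouter } from "@/trpc/routers/_app";

const handler = (req: Request) =>
  fetchRequestHandler({
    endpoint: "/api/trpc",
    req,
    router: appRouter,
    createContext: createTRPCContext,
  });

// Next.js route handlers export one function per HTTP method
export { handler as GET, handler as POST };
```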
You only set this up once and you never think about it again. That's why I'm telling you that you don't need to worry about not understanding everything right now. How you use it is more important than how we are setting it up, because we are literally following the official documentation. So make sure you've created trpc/query-client.ts; let's copy it and paste it. In here I have an error, and that's because of a missing data transformer import, so I'm just going to comment it out for now.
All right, so the superjson import is commented out, serializeData is commented out, and this is commented out. Great. If you're really wondering what all these options are, you can read the documentation to understand them better. Now let's go ahead and define the client instance of tRPC. That's different from the query client.
The query client is for wiring up TanStack React Query. This file, client.tsx, is the client instance of tRPC. I'm going to copy all the content from the example and paste it here. This is basically the provider — it's what we're going to wrap our app with to make tRPC available.
All right. I'm not going to go through all of the code here, but if you want to, you can pause to see exactly everything that's happening. There is one thing I want to change, though. This code right now looks for a Vercel URL, and otherwise it falls back to localhost:3000.
That's fine for development mode, but it will break in production because we are currently on Railway. So I want to show you a slight modification — and this is actually a good change even if you deploy on Vercel, it's going to be equally good. Go inside of the .env file here, and at the top add a new environment variable, APP_URL, and for now keep it at http://localhost:3000, like this. Just don't misspell it, right?
After you run npm run dev, you can see your exact protocol and host, so you can copy from there and paste it here, okay? And once you have that, you can also go inside of src/lib/env.ts and add APP_URL there and make it required. Okay. Once you have this, go back inside of client.tsx — not the query client, my apologies — because here's how we're going to leverage this.
I still recommend adding it to env.ts simply so you get a warning in case it's missing, though maybe it's not that critical, because technically we do have a fallback to localhost. So here's what I'm going to do: I'll still use process.env here, but I'm going to check for APP_URL, and if I do have an APP_URL, I just return it — process.env.APP_URL.
That's it. Otherwise, fall back to localhost:3000. You can see how much simpler this is to understand, because the Vercel URL variable doesn't include the protocol, right? So you'd have to figure out the protocol and prepend it.
It's just awkward. APP_URL is simpler regardless of where you deploy — you just have to remember to set the APP_URL variable. All right. So, again, just make sure that you have the APP_URL variable set.
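The simplified URL helper in client.tsx might end up looking something like this — a sketch, where APP_URL is our own variable name and the browser branch returns an empty string so fetch uses relative URLs:

```typescript
// Sketch of the simplified base-URL helper for client.tsx.
// APP_URL is the environment variable we just defined (an assumption of this setup).
export function getBaseUrl(): string {
  if (typeof window !== "undefined") return ""; // browser: use relative URLs
  if (process.env.APP_URL) return process.env.APP_URL; // our explicit override
  return "http://localhost:3000"; // local development fallback
}

// The tRPC endpoint is always mounted under /api/trpc
export function getTrpcUrl(): string {
  return `${getBaseUrl()}/api/trpc`;
}
```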
Great. Now that we have that, we no longer have to do anything else here, but we do have to mount the provider in the root of our application. So let's do that. The component is called TRPCReactProvider. I'm going to go inside of src/app/layout.tsx and import TRPCReactProvider from @/trpc/client.
So make sure you're importing it from the component we just modified. Once you have that, go inside of ClerkProvider — keep that as the outermost provider — and simply wrap the rest of your app within the TRPCReactProvider. Great. That is it for the client-side setup of tRPC, but now we need to create the server-side setup.
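The nesting in layout.tsx might look roughly like this — a sketch, since the exact contents of your root layout will differ:

```tsx
// src/app/layout.tsx — sketch: ClerkProvider stays outermost,
// TRPCReactProvider wraps the rest of the app
import { ClerkProvider } from "@clerk/nextjs";
import { TRPCReactProvider } from "@/trpc/client";

export default function RootLayout({
  children,
}: {
  children: React.ReactNode;
}) {
  return (
    <ClerkProvider>
      <TRPCReactProvider>
        <html lang="en">
          <body>{children}</body>
        </html>
      </TRPCReactProvider>
    </ClerkProvider>
  );
}
```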
So that's also going to be server.tsx, yes. And in here, I'm going to copy the entire content and paste it here. I'm immediately going to comment this part out, because you can see it's a suggestion. If your router is on a separate server, pass a client. Our router is not on a separate server.
Let's just comment this out. So, I don't want to lean too much into explaining this file right now because I just feel it's going to confuse you. This is just setup. And this basically allows us to use tRPC within server components. That's the lightest explanation I'm going to give.
OK? So just make sure you're following the documentation. Make sure you're doing it in the proper folder with the proper file name. And yes, the extension is .tsx because in a moment we're going to add some React components here. So if you've accidentally named it .ts, change it to .tsx because we're going to have to do that anyway.
Okay, so I'm going to skip a few things here, because this is what I want to do. Let me show you exactly how to find this in the documentation. This is the last part we did, trpc/server.tsx. And basically they say: okay, you're finished, you can now use tRPC.
That's true, but I want you to scroll down a bit until you find this tip. It tells you to extend your server.tsx, and for a very good reason: we're going to use this so many times that it makes sense to create a helper for it. So I'm going to copy this part first and add it at the bottom here. Here it is, HydrateClient, and we need to import HydrationBoundary and dehydrate from @tanstack/react-query. Let me show you the exact two imports I've just added.
It's these two, okay? dehydrate and HydrationBoundary from @tanstack/react-query. Perfect — once I've done that, there are no errors. The second function we need to copy is the prefetch function. In here there are a bunch of type errors; honestly, my recommendation is just to click Quick Fix and disable the rule for the entire file. And "cannot find TRPCQueryOptions" — we can import that from @trpc/tanstack-react-query. There we go.
So let me show you that import. Here it is. We already had createTRPCOptionsProxy, and now we've added TRPCQueryOptions. I'm going to repeat it one more time — I know I'm being annoying at this point, but I don't want you to be intimidated by this.
I know this is a lot of code changes, but this is the type of code that is usually scaffolded for you with some npx create-next-app or similar. We've done it manually this time — simply because I don't know if any npx trpc scaffold exists, but it's also a good exercise to learn how to do things manually. None of this should bother you so much that you give up on the project, right? I've shown you the most important changes. You can pause the screen to see if you have all the files.
If you do, everything should be fine. And even if you think you did something wrong, the code is open source. You can always just open up this file and see if you should change something. We will not change anything in this, this, this, this file. The only file we're going to modify from now on is going to be the app file in the routers because this is where we're going to write our API endpoints.
All right, so we have that done. What those two helpers — the prefetch and HydrateClient functions — give us is a very simple way to use this: you can see how prefetch is now super lightweight, and HydrateClient is super lightweight as well. Whereas previously, prefetch needed a query client, then you needed to call prefetchQuery and pass the options in, and hydration required a HydrationBoundary, a state, an import of dehydrate, and you needed to pass a query client from above. So you can see the simplification: this was before, and this is now.
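The extended server.tsx with those two helpers looks roughly like this — a sketch following the docs' tip, assuming the init.ts, query-client.ts, and _app.ts files we created earlier:

```tsx
// src/trpc/server.tsx — sketch of the two helpers from the docs' tip
import "server-only";
import { cache } from "react";
import { dehydrate, HydrationBoundary } from "@tanstack/react-query";
import {
  createTRPCOptionsProxy,
  type TRPCQueryOptions,
} from "@trpc/tanstack-react-query";
import { createTRPCContext } from "./init";
import { makeQueryClient } from "./query-client";
import { appRouter } from "./routers/_app";

export const getQueryClient = cache(makeQueryClient);
export const trpc = createTRPCOptionsProxy({
  ctx: createTRPCContext,
  router: appRouter,
  queryClient: getQueryClient,
});

// Dehydrates everything prefetched on the server into the client cache
export function HydrateClient(props: { children: React.ReactNode }) {
  const queryClient = getQueryClient();
  return (
    <HydrationBoundary state={dehydrate(queryClient)}>
      {props.children}
    </HydrationBoundary>
  );
}

// Fire-and-forget prefetch covering both regular and infinite queries
export function prefetch<T extends ReturnType<TRPCQueryOptions<any>>>(
  queryOptions: T,
) {
  const queryClient = getQueryClient();
  if (queryOptions.queryKey[1]?.type === "infinite") {
    void queryClient.prefetchInfiniteQuery(queryOptions as any);
  } else {
    void queryClient.prefetchQuery(queryOptions);
  }
}
```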
So much simpler. Okay, great. I believe we are finished when it comes to setting up tRPC. So what I want to do now is test whether tRPC is actually working.
The way we're going to do that is by going inside of the src/app folder, dashboard. I'm going to create a new folder called test, and in here I'm going to create a component called health-check.tsx. This is just a component, right? We're going to pretend this is our usual view component, because you can see that's the structure we like to use: whenever we create a page, we import a view, and that view is marked "use client".
So I'm kind of trying to replicate that exact scenario. Let's go inside of test/health-check.tsx, mark it as "use client", import useSuspenseQuery from @tanstack/react-query, import useTRPC from @/trpc/client, export a function HealthCheck, and return a div with just a paragraph saying health check. Then let's spin up our app to make sure everything is still running as intended.
Let me check if everything's running. I can close this tab. If you want, you can restart your server just in case any cache is left over. And if I go to health-check — so, in my URL, localhost:3000/health-check — not found. Did I name it differently?
I didn't name it that; I named it test. My apologies. localhost:3000/test. That's what I named it.
And I still can't find it. Okay — because I never created a page file. So let's create a page file before we can test if this works. This one is a default export, and let's return a very simple HealthCheck.
There we go. So make sure that this is a default export and that HealthCheck is just a normal component. And there we go — the health check text. Everything still works as intended.
So what I want to do now is create a simple tRPC procedure to check the health of tRPC. We're going to go inside of trpc/routers/_app.ts, remove the hello procedure entirely, and add a health procedure, which will be baseProcedure.query. It can either be a mutation or a query — it can even be more things, I think it can be a subscription too — but we're going to focus on these two. A query is your API GET request; a mutation covers your POST, PATCH, PUT, and all the others, right? So: query, asynchronous, and let's return a status of okay.
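The router with the health procedure might look like this — a sketch of the change just described:

```typescript
// src/trpc/routers/_app.ts — hello replaced with a health procedure
import { baseProcedure, createTRPCRouter } from "../init";

export const appRouter = createTRPCRouter({
  // .query = GET-style read; use .mutation for POST/PATCH/PUT-style writes
  health: baseProcedure.query(async () => {
    return { status: "ok" };
  }),
});

export type AppRouter = typeof appRouter;
```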
Great. Now let's go ahead — we can remove zod from here, no need. Once we have that, let's go inside of the app folder — my apologies —
dashboard, test, health-check. And in here we are going to add the tRPC hook, so useTRPC, and then get the data from useSuspenseQuery with trpc.health.queryOptions() and execute that. You can hover over data and see that the response is { status: string }. That's the power of tRPC. If I add a code: 123 here, you can see that immediately this data has that field too. That's how tRPC works.
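The client component described above can be sketched like this (the file path assumes the test folder we created under dashboard):

```tsx
// src/app/dashboard/test/health-check.tsx — sketch of the client view
"use client";

import { useSuspenseQuery } from "@tanstack/react-query";
import { useTRPC } from "@/trpc/client";

export function HealthCheck() {
  const trpc = useTRPC();
  // data is typed as { status: string }, inferred end-to-end from the router
  const { data } = useSuspenseQuery(trpc.health.queryOptions());

  return (
    <div>
      <p>{data.status}</p>
    </div>
  );
}
```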
It is end-to-end type safety. Now combine that with our access to Prisma and our type safe data models. Every time we return something, we will be absolutely sure that what we are working with on the front end truly exists in the API and on the back end. And this seems, I mean, we just spent 25 minutes adding this, but that's a one-time change, right? Usually, you would just copy and paste this from any other project that you have, right?
But I'm trying to explain what we're doing here. One hint: in case all of your types are any, so you don't see the actual content, tRPC has a little fix for that too. If you scroll down — I think it's under extra information, frequently asked questions — here it is: "It doesn't work!
I'm getting any everywhere." You can pause the screen and see if anything here helps. For me, this was the one that fixed it the first time I had this problem, and I've never had it since. If you don't have the problem, no worries. But notice that they actually recommend committing this file to your repository.
So if you're working in a team, all of you get the same tRPC experience. Great. We can now close the tRPC docs page; we no longer need it. And now what I want to do here is simply check if tRPC works.
So I'm going to render the following: a div with class names along the lines of rounded-lg, border, p-6, text-center, then text-muted-foreground, text-sm, a margin-top, text-lg, font-semibold. Great — you can see that I can reach tRPC, but as I refresh, you can sometimes see an error. So what exactly is happening here? What's happening is that we are using useSuspenseQuery here, but we are never actually prefetching.
You should use useSuspenseQuery only in combination with prefetching the data in a server component; that's why we created both the client and server instances. So in here, what we're going to do is prefetch — imported from @/trpc/server — trpc.health.queryOptions(), with trpc also coming from @/trpc/server. But that's not enough. Make sure you've imported both prefetch and trpc from @/trpc/server, because what we have to do now is the following: we have to wrap this in HydrateClient, which, again, you can import from @/trpc/server.
Then in here I'm just going to add some slight styling. I'll add a div with flex, flex-col, items-center, justify-center, gap-4, p-8. Then I'll add a heading, "tRPC test page", with text-2xl and font-bold, and then a Suspense, which you can import from react. So add a Suspense with a fallback: a div saying Loading, like this.
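Put together, the page might look roughly like this — a sketch of the prefetch/hydrate/suspend pattern just described:

```tsx
// src/app/dashboard/test/page.tsx — sketch: prefetch on the server,
// hydrate into the client cache, suspend while the client component loads
import { Suspense } from "react";
import { HydrateClient, prefetch, trpc } from "@/trpc/server";
import { HealthCheck } from "./health-check";

export default function Page() {
  prefetch(trpc.health.queryOptions());

  return (
    <HydrateClient>
      <div className="flex flex-col items-center justify-center gap-4 p-8">
        <h1 className="text-2xl font-bold">tRPC test page</h1>
        <Suspense fallback={<div>Loading...</div>}>
          <HealthCheck />
        </Suspense>
      </div>
    </HydrateClient>
  );
}
```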
So Let's go ahead and test this out. You can see for a brief second, it says loading, and then it's okay. You can use Command Shift R to do a cache reset, because if you just do Command R, you can barely see the loading, right? Because React query has some cache, right? So this basically points to everything working fine.
So how about we break it? Let's go inside of trpc.health — you can Command-click into it to get to the base procedure, which lives in trpc/routers/_app.ts. So how about we break this? Let's see. You can add this right here:
throw new Error("Something went wrong"). So now you can see it's stuck in an infinite loading state, and it will stay there while it retries three times, until finally we get an error that the query ended up rejecting. So how do you take care of that? Well, you can use Next.js's error page, but if you want to be more granular, you can use react-error-boundary. So I'm going to install that package now.
So npm install react-error-boundary. Let's wait a second for this to install. I'll show you the version I'm using — this one changes very rarely, it's just a simple wrapper: react-error-boundary 6.1.1. And inside of page.tsx I'm going to wrap my entire Suspense within an ErrorBoundary, import ErrorBoundary from react-error-boundary, and add a fallback for when something goes wrong.
So it has a similar syntax as suspense. That's why I like it. And it's a well-maintained library too. It's not some random library. And refresh now and you will see a similar thing.
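The nesting then looks roughly like this — a sketch with the ErrorBoundary outside the Suspense, each with its own fallback:

```tsx
// Sketch: react-error-boundary wraps Suspense with a similar fallback API
import { Suspense } from "react";
import { ErrorBoundary } from "react-error-boundary";
import { HealthCheck } from "./health-check";

export function HealthCheckSection() {
  return (
    <ErrorBoundary fallback={<div>Something went wrong</div>}>
      <Suspense fallback={<div>Loading...</div>}>
        <HealthCheck />
      </Suspense>
    </ErrorBoundary>
  );
}
```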
So loading is now, well, loading, because it's retrying. When it fails, it's going to retry, and then finally it reaches a state of "something went wrong". So we're going to use every page.tsx file to prefetch the data — making it faster than just fetching with useQuery in a client component — to hydrate the data, and to keep track of errors and loading states with ErrorBoundary and Suspense.
And finally, in HealthCheck, we just use a normal "use client" component. From here we can call onClick handlers, various hooks, do pagination, do infinite loading — basically all of those things that are very difficult and awkward to do in a server component. Every time I see someone trying to do as many things as possible in a server component, I have to cringe, because server components are very good when they're used for what they're made for. In my opinion, you can think of them as API routes that are able to return JSX. They're not quite that simple, of course — I'm simplifying for the sake of understanding how I want to use them.
And on top of that, they have access to the database directly. There is no middleman between a server component and your database. It's the same as calling your database within an API. So that's why I want to leverage a server component to query my database, prefetch it, and populate React queries cache. So when it's time for a client component to use that data, thanks to useSuspenseQuery, it's going to be available immediately.
That is basically the logic I'm trying to do. And I think I kind of explained that here. So client components will use TRPC hook and then use Suspense query. Server components will prefetch and hydrate, which will lead to API TRPC endpoint, which has the TRPC router, which finally has access to our Postgres database. That is the flow we have just implemented.
And of course, let's now go back here and comment this out. Refresh, and it should be working. If you want to play around, you can also add a little fake delay here. So I'm going to uncomment this now:
just await a fake timeout, refresh, and you'll see the loading state for five seconds, in case you wanted to test loading. Great. That is basically how we added tRPC to our project. That is how we're going to fetch data from now on — and mutations, too.
Basically, from now on, everything becomes way more type safe, way more secure, and way easier to work with. You will simply be reliant on your code. You will trust your code more. And if you're working with AI, it's easier for your AI to work with code because of extreme type safety. So your AI absolutely knows what they're getting.
Perfect. Great, great job on that. So I believe that is the end of the first part. The second part is R2, and this is actually an easy part.
There isn't too much code here; it's mostly setting up an account. The process of setting up Cloudflare R2 is very similar to setting up AWS S3 — that's because they are both S3-compatible storages. So let's go ahead and install the following packages.
Instead of installing R2-specific libraries, we're going to add @aws-sdk/client-s3 and @aws-sdk/s3-request-presigner. So let's install those two packages. I'll bring up my package.json here so we can see the change. They seem to have been added — here they are.
So these are my versions. It seems to be important that both of them use the same version, so just keep that in mind. All right. Now let's create our R2 instance — or S3 instance.
You'll see what I'm talking about. Inside of lib, I'm going to create r2.ts — and if you want to use S3, you absolutely can; you'll see the setup is exactly the same. But I would highly recommend doing exactly what I do for the first iteration and changing it later, once you've tested that it works. We're going to import S3Client, PutObjectCommand, GetObjectCommand, and DeleteObjectCommand from @aws-sdk/client-s3, then getSignedUrl from @aws-sdk/s3-request-presigner, and then our env library so we can keep track of all the environment keys we're going to need.
Then I'm going to instantiate a new S3Client. I'll call it r2 because we're connecting to Cloudflare R2. The region is going to be "auto", and my endpoint is going to be the following: I'll add backticks and simply write https://<my-account-id>.r2.cloudflarestorage.com.
Of course, we are going to confirm that this is still valid. Why do I say still valid? For me, it's 100% valid. But maybe if you are watching it in the future, something has changed. So I will show you exactly where you can get this URL so you know it's correct one.
And then you need to add the credentials: accessKeyId and secretAccessKey. We're going to store these under R2_ACCESS_KEY_ID and R2_SECRET_ACCESS_KEY. So those, plus the account ID, are the environment keys we're going to have to add. Now let's define a type, UploadAudioOptions, which has a buffer, a key, and an optional content type.
Now let's implement the uploadAudio function. uploadAudio is an asynchronous function which accepts the buffer, key, and contentType, which defaults to audio/wav — so we're mapping the options we typed above — and it simply returns a promise. Inside of this function we await r2.send with a new PutObjectCommand. The PutObjectCommand has the following properties: Bucket, Key, Body, and ContentType.
The bucket is another R2 environment key we're going to have to add. It's actually good that all of these error right now, because when we add them later we'll know we didn't misspell them. Then let's add the deleteAudio function. It simply accepts the key of an uploaded object and again sends something using the r2 instance — this time a DeleteObjectCommand, which only needs a Bucket and a Key. Be mindful of the capitalization of those keys, right?
Then let's add getSignedAudioUrl. It only accepts the key, which is a string, and it builds a GetObjectCommand with the same bucket name and key — but instead of sending it through the r2 instance, we use getSignedUrl from the presigner package, passing in the r2 instance, the command, and expiresIn, which is one hour. Great! Now that we have all of these, let's add them to the env file. So I'm going to go inside of the env file right here and start by adding R2_ACCOUNT_ID, which needs to be required.
Then I'm going to add R2_ACCESS_KEY_ID, then R2_SECRET_ACCESS_KEY, and then R2_BUCKET_NAME. Great. And you can see that these immediately no longer error.
So that's how you know you've written all of them correctly — except we haven't actually added any of them to the .env file yet. So let's do that now. We don't have any values to add right now, so I'm just going to prepare empty entries. Let me see if I can do all of these at once.
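Putting all of that together, the r2.ts we just wrote might look roughly like this — a sketch, where the env variable names are the ones we defined above and expiresIn is in seconds:

```typescript
// src/lib/r2.ts — sketch of the R2 (S3-compatible) storage module
import {
  S3Client,
  PutObjectCommand,
  GetObjectCommand,
  DeleteObjectCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { env } from "@/lib/env";

// R2 speaks the S3 protocol; only the endpoint is Cloudflare-specific
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
  },
});

type UploadAudioOptions = {
  buffer: Buffer;
  key: string;
  contentType?: string;
};

export async function uploadAudio({
  buffer,
  key,
  contentType = "audio/wav",
}: UploadAudioOptions) {
  await r2.send(
    new PutObjectCommand({
      Bucket: env.R2_BUCKET_NAME,
      Key: key,
      Body: buffer,
      ContentType: contentType,
    }),
  );
}

export async function deleteAudio(key: string) {
  await r2.send(
    new DeleteObjectCommand({ Bucket: env.R2_BUCKET_NAME, Key: key }),
  );
}

export async function getSignedAudioUrl(key: string) {
  const command = new GetObjectCommand({
    Bucket: env.R2_BUCKET_NAME,
    Key: key,
  });
  // expiresIn is in seconds: 3600 = one hour
  return getSignedUrl(r2, command, { expiresIn: 3600 });
}
```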
There we go. So account ID, access key ID, secret access key and bucket name. So in order to set up Cloudflare, you can use the link on the screen to visit cloudflare.com, go ahead and create an account and simply log in. Something like this will probably be your dashboard's homepage, but you don't have to do any of those. You can immediately go to Storage and Databases and find R2 Object Storage.
And go ahead and click on Overview. And I wanted you to see this screen simply so you know that we have a free tier. Credit card is required, but you can also use PayPal and you will pay exactly $0. There isn't even an initial charge or anything. You can see that they have a very very generous free monthly tier, so 10 gigabytes a month for free which is more than enough for what we are going to do with this.
So you have to go ahead and either add your card or PayPal, but you will not be charged at all. Once you add your credit card or PayPal, you will see this page right here. So this is a completely empty R2 object storage, and down here you can find the first key which we need, which is the account ID. And this is what I was telling you: this is how you check whether the API has changed or not. In my case it's the exact structure I expected — account-id.r2.cloudflarestorage.com — so first check the account ID, copy it, and go ahead and add it here as R2_ACCOUNT_ID.
Then let's go here and let's double-check the endpoint: https://account-id.r2.cloudflarestorage.com. So just go ahead and double-check that that's exactly what you see here. If you see something different, perfectly fine.
Just adjust this to work with that. If you're sure this will be a private repository, you can even copy this entire thing and just paste it in plain text here. Just be careful. So I mean, no one can do anything with your account ID. I'm pretty certain they still need the secret access key and all of that.
But still, you know, be careful. All right. And you should also have the manage button here. And the manage button here should give you access to create account API token. So let's hit create API token.
I'm going to call this Resonance, because that's the name of our app, and I'm going to select Admin Read and Write, which is probably the most dangerous permission to give, but it's way simpler to develop the tutorial this way. Later I would suggest toning down the permissions of this token to only what it needs. The TTL is going to be forever, and I'm not going to add client IP address filtering. Again, all of this would be good to configure in production; for a tutorial, a super-powered API token is perfectly fine. All right, so now I will copy the token value, even though we won't really use it — it's for the Cloudflare API — but I'm not sure if you will see this again, so because of that I'm going to add it to the environment file as an R2 token anyway.
Then we have the access key, so we can add R2 access key, and we have the secret access key, so let's add that too. Alright, let me check if we have everything we need. Looks like the only thing we don't have is the bucket name, because we haven't created any buckets yet. Great! And once again you can see your URL here — but looks like they have an EU one as well.
So if for legal reasons you need the EU, this is how you have to modify the URL. For the tutorial, I'm pretty sure you can just use this one, but if your business needs the EU one for legal reasons, you can of course just modify the endpoint to account-id.eu.r2.cloudflarestorage.com. Okay. So now that we have all of that finished, we have to go ahead and click Finish here. You should see your token.
Make sure it's active: Admin Read and Write, applied to all buckets. And then let's go ahead and create a bucket. I'm gonna go ahead and give this a name of Resonance... voices... actually, let's call it Resonance app, something like that.
I'm not sure whether these bucket names are scoped globally — on AWS, for example, names are unique globally — so if Resonance app is taken, just change it to something else. The location can stay automatic and the storage class standard; you don't have to modify anything here.
And then you can go ahead and copy the name of your bucket. For me it's ResonanceApp. Go inside of .env and paste it here. Here it is: R2 bucket name.
And that's it for setting up R2. We're going to test it in the only part left in this chapter, which is the seed script. So for seed script, we're going to need a list of voices. I will provide you with this list. I will show you where I found it.
And I will also show you an alternative of what you can do if for some reason you can't find any system voices. Because these are just audio files. They are nothing more than 10 to 30 seconds voice samples. So you could technically create all of them yourself and just rename them to 20 different ones. It really doesn't matter.
All right. So that's what we have to do next. So before we add system voices to our project, I want to make sure we don't commit them. All right. So I mean, technically you can commit them.
Nothing is stopping you from doing so. But it would be a good idea to add /scripts/system-voices to your .gitignore here, because otherwise we would be committing all of those voice files, which makes no sense — they're only used once, by the seed script, to be uploaded to your Cloudflare R2. Now, I will commit them, simply because I want you to have a reference for those voices in case you can't find them anywhere else.
But I would highly suggest that you add this to .gitignore. Okay, so I will remove this now, but I would suggest that you don't — I would suggest you keep it. Okay, and then let's go ahead and see how we can find those voices. Using the link on the screen, you can see the exact source where I found them, which is Modal's documentation.
This is the documentation of the platform on which we are going to learn how to self-host an AI model called Chatterbox text-to-speech. For now, this documentation isn't really important, because we're not doing that yet. But if you scroll just a tiny bit down, in the chapter Define a Container Image, you can see: we'll also use Chatterbox's provided set of voice prompts, which you can download here. So that is where I found them. Now, I can't guarantee that this exact documentation page will exist in the future, so alternatively you can use my source code and simply grab the voices from the exact place I'm going to add them now.
Those voices are from Resemble — and Resemble are the creators of Chatterbox text-to-speech, in case you wanted to know. Okay, so I'm going to add those system voices now by creating a new folder in the root of my app, which I'm going to call scripts, and then I'm going to add system-voices inside. So these are the ones that you should have, all the way from Aaron to Walter. Now, a quick reminder: I didn't really listen to every single voice sample here, so if some of them say something offensive, you know, just be careful.
I'm not really sure what's inside of them — there are various samples meant for creating different characters and voices. Okay, now that we have the system voices, we can go ahead and create the seed script to upload all of them to Cloudflare R2. But before we do that, let's go ahead and...
Let's make our app aware of the built-in system voices. So I'm going to go ahead inside of src/features, and I'm going to create a new folder called voices. In here I'm going to create a data folder with voice-scoping.ts inside. And I'm going to export all of these — I'm going to show you the full list, don't worry.
Here it is: export const canonicalSystemVoiceNames, all the way from Aaron to Walter. Now, in case for whatever reason you are unable to add the system voices to your project — if you can't find them in my GitHub repository for some reason, or if that URL has changed —
What you can do is literally just find a voice sample that's longer than 10 seconds. Try to aim like 20 seconds. And you can just copy it 20 times and just rename it to all of these values. Or you can just use three examples and then just leave three names here. So basically what I'm doing now is I'm making sure that my voice scoping file has all of the names listed in this folder.
So I would highly suggest that you check one by one that all of them are here, okay? Because it's gonna be important for the seed script. If you want to, you can reduce this list in half of course, right? If you want to keep just four of them, you can do that. And then you can just delete the rest.
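As a sketch, voice-scoping.ts might look like the following — only the names actually mentioned in this chapter are filled in, so copy the full list (or your reduced one) yourself, and match the casing of your .wav file names exactly:

```typescript
// src/features/voices/data/voice-scoping.ts — a sketch.
// The casing of the names is an assumption; they must match the
// file names in scripts/system-voices one-to-one.
export const canonicalSystemVoiceNames = [
  "aaron",
  "abigail",
  "anaya",
  "andy",
  // ...the remaining names from the source code go here...
  "walter",
] as const;

export type CanonicalSystemVoiceName =
  (typeof canonicalSystemVoiceNames)[number];
```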
Okay. So once you do that — and yes, the names are kind of important because of the seed script — I wanna go back inside of src/features/voices/data and add voice-categories.ts. And in here, we're going to import type VoiceCategory from the generated Prisma client. What that is referring to is our Prisma schema.
So in here we have the VoiceCategory enum: audiobook, conversational, customer service, all the way to corporate, right? And what we're gonna do now is use that enum to create labels. We could technically create those labels programmatically, but in case you want different user-facing text, you can do it here. So basically I'm creating an object which maps every enum value from my Prisma schema to a string. The only one where the remap actually matters is customer service, which as an enum value is written with an underscore, but on the front end should read "Customer Service".
So this is just a remap, which makes sense to do because these are hard-coded, built-in voices. There's no reason to do this programmatically — I mean, technically you could, but these are all the options that are going to exist. And then let's simply add one helper at the bottom: export const voiceCategories, which simply does Object.keys(voiceCategoryLabels) as VoiceCategory[] to get an array.
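A sketch of voice-categories.ts. A local union type stands in for the generated Prisma enum here so the example is self-contained, and the exact enum values are assumptions — yours come from your Prisma schema:

```typescript
// src/features/voices/data/voice-categories.ts — a sketch.
// Stand-in for `import type { VoiceCategory } from "generated/prisma"`.
type VoiceCategory =
  | "AUDIOBOOK"
  | "CONVERSATIONAL"
  | "CUSTOMER_SERVICE"
  | "CORPORATE"; // ...plus whatever other values your enum has

// Map each enum value to user-facing text. CUSTOMER_SERVICE is the
// one that really needs it ("CUSTOMER_SERVICE" → "Customer Service").
export const voiceCategoryLabels: Record<VoiceCategory, string> = {
  AUDIOBOOK: "Audiobook",
  CONVERSATIONAL: "Conversational",
  CUSTOMER_SERVICE: "Customer Service",
  CORPORATE: "Corporate",
};

// Helper: the enum values as an array, for building selects etc.
export const voiceCategories = Object.keys(
  voiceCategoryLabels,
) as VoiceCategory[];
```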
All right, so that's VoiceCategory from the Prisma client. Great. And now what we have to do is build the seed script. I would highly recommend opening my GitHub right now — you can see the link on the screen — because this seed script isn't really gonna teach you too much. What it does is iterate over every single item inside of scripts/system-voices, upload it to Cloudflare R2 under a specific key, and also create the equivalent database record, so each voice record has a reference to the uploaded file.
Okay, so I'm gonna go ahead inside of scripts and create a new file, seed-system-voices.ts. All right. My suggestion would be that you copy this entire file from GitHub, simply because it's not so much of a learning experience as it is functional. So now I'm just adding all the imports which we are going to need. At this point we kind of need to recreate our Cloudflare R2 client, and we also need to recreate our environment instance, because this scripts folder is outside of the src folder, so it doesn't have access to any of those things.
I also need to import PrismaClient and type VoiceCategory from the generated Prisma client. I need to import all of the canonicalSystemVoiceNames from src/features/voices/data/voice-scoping. And this is why I told you to double-check that all of these names match exactly what you have in system-voices. And yes, here's another tip: if for some reason you don't want system voices, you technically don't have to have them, because we will have a way to clone our voices later.
But the problem is you won't be able to test whether your app works until then. So because of that, at least make an effort to have, like, three voices in here. Make sure you map those three voices here and then do the seeding with just those three. So you definitely don't need all 20, but the more you have, the better your experience will be when testing this. And now we're going to resolve the system-voices folder using path.join, path.dirname, and fileURLToPath from node:url, applied to import.meta.url. So now we have the system voices directory.
Now let's create the environment schema. So we need our database URL, R2 account ID, access key ID, secret access key, and the bucket name. We can then parse our environment schema against process.env to confirm all of this works. We need to create a Prisma Postgres adapter. We need to define the Prisma client.
We need to connect to R2 once again, so an S3 client. Again, this is important if yours is different: if, when you defined R2, you put .eu in the endpoint or something else changed, you also need to change it here, right? So you can see we are recreating all of those things now. Then let's create an interface, VoiceMetadata.
So it has a description, a category mapping to VoiceCategory, and a language, which is a string. And now in here we're just going to go ahead and basically create a list of all the system voices we're going to add to our database. For example, Aaron is going to be soothing and calm, like a self-help audiobook narrator; category is audiobook, language English (US). And if you didn't use the system voices I provided — if you made your own — you can basically just do this for the number of voices you have in system-voices, or the number of voices you have defined in voice-scoping.
Right, so all of that should match. Okay, I keep repeating myself because I'm not sure how easy or hard it will be for you to obtain all of the system voices. And if you fail to obtain them, I don't want you to feel discouraged into thinking you cannot complete the tutorial. You absolutely can. You can always just create your own .wav files and then copy them a few times, match them into voice scoping and then match them here.
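The metadata map might be sketched like this; the shape, the stand-in enum values, and Aaron's entry are assumptions based on the description above, so copy the real list from the source code:

```typescript
// Stand-in for the generated Prisma enum, so the sketch runs alone.
type VoiceCategory = "AUDIOBOOK" | "CONVERSATIONAL"; // ...etc.

interface VoiceMetadata {
  description: string;
  category: VoiceCategory;
  language: string;
}

// One entry per name in canonicalSystemVoiceNames; only Aaron,
// described in the transcript, is shown here.
const systemVoiceMetadata: Record<string, VoiceMetadata> = {
  aaron: {
    description: "Soothing and calm, like a self-help audiobook narrator",
    category: "AUDIOBOOK",
    language: "en-US",
  },
  // ...the remaining nineteen voices...
};
```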
So now I'm going to add the rest of the list. This is why, if you have the same system voices as me, you should go inside of my source code and simply copy this entire list. In here, I listened to some of the voices, or actually looked at the transcripts of some of them — that's why I told you some of them are a bit explicit, so be careful where you listen to them — and I've matched what I think is the category and what I think is the language. Okay, so now we have the voices. Then let's go ahead and create this function, readSystemVoiceAudio, and then let's create an R2 function to upload system voice audio, which accepts a key, buffer, and content type and simply sends the PutObjectCommand.
All right, we have that. And yeah, basically each of these samples lives in a name.wav file, right? And now I'm going to create an asynchronous function to seed a system voice. In here I'm going to go ahead and read the system voice audio by name; I will get the buffer and the content type.
I'm going to attempt to find an existing voice in my Prisma database. If it already exists, we're gonna do one thing; if it doesn't, we're gonna do another. So in here, I'm gonna go ahead and do the following if it exists: we're gonna define the R2 object key to be voices/system/ and then the ID from the database.
We're going to obtain the metadata using System Voices metadata, which we have defined here. So we're gonna find the matching voice, right, Andy, and then we have the metadata for Andy. Okay. Again, you can just copy this from the source code. Okay.
Let me see where I was. Yes, we're going to call that uploadSystemVoiceAudio, and we're going to update that voice in Prisma by adding the object key and extending it with the metadata. All right, otherwise, outside of this if clause, let's just go ahead and read the meta here, then create a new voice: variant system, organization ID null, checking if we have meta and adding it if we do. And then let's go ahead and do practically the same thing: define the R2 object key, which is voices/system/ plus the voice ID, and then open a try-catch block here.
Inside of try, I'm gonna go ahead and attempt to upload it — so uploadSystemVoiceAudio using that object key — and update the equivalent database record with the uploaded R2 object key. And if it fails, we're going to make sure to delete the system voice, okay? And we need to catch the error. All right, and then I'm just gonna go ahead and define an asynchronous main function, which will log that it's seeding that many voices — so however many canonical system voice names you have created here, it's going to attempt to seed — and then for every single name inside of this list it will call seedSystemVoice. Okay, so that's why I told you that if you use a different number of voices or different names, you have to make sure to adjust all of that within voice-scoping.
All right. And then let's just go ahead and add this, so we actually catch any errors if they happen and disconnect from Prisma at the end. Perfect. So that is our seed script. I would highly recommend copying this from the source code, simply because it's so prone to errors.
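Stripped of the R2 and Prisma details, the driver at the bottom of the script looks roughly like this runnable sketch — seedSystemVoice and prisma are stand-ins here so the control flow can run on its own:

```typescript
// Shortened list for the sketch; the real script imports
// canonicalSystemVoiceNames from voice-scoping.
const canonicalSystemVoiceNames = ["aaron", "abigail", "walter"];

const seeded: string[] = [];
async function seedSystemVoice(name: string) {
  seeded.push(name); // stand-in for the real upload + upsert
}

const prisma = { async $disconnect() {} }; // stand-in for PrismaClient

async function main() {
  console.log(`Seeding ${canonicalSystemVoiceNames.length} system voices...`);
  for (const name of canonicalSystemVoiceNames) {
    await seedSystemVoice(name);
  }
  console.log("System voice seed completed.");
}

// Catch any errors and always disconnect from Prisma at the end.
main()
  .catch((error) => {
    console.error(error);
    process.exitCode = 1;
  })
  .finally(() => void prisma.$disconnect());
```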
There's barely any type safety here. It's just a seed script, right? There are many different ways you can, of course, do this. And now, let's go and add this seed script to our Prisma... Where is our Prisma config?
Is it in source? Let me see — prisma.config.ts. We're gonna go ahead inside of migrations, and we're gonna add a seed script: tsx scripts/seed-system-voices.ts. So make sure you didn't misspell scripts, and you didn't misspell the name of the file, here and here.
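For reference, the config entry might look like this sketch — the exact shape of prisma.config.ts depends on your Prisma version, so match yours rather than copying this verbatim:

```typescript
// prisma.config.ts — a sketch (defineConfig and the migrations.seed
// option are assumptions based on newer Prisma versions).
import { defineConfig } from "prisma/config";

export default defineConfig({
  schema: "prisma/schema.prisma",
  migrations: {
    path: "prisma/migrations",
    // Run by `npx prisma db seed`
    seed: "tsx scripts/seed-system-voices.ts",
  },
});
```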
And you need to have tsx installed, which I think we already do — tsx, here it is, in devDependencies. If you don't, just install it. Alright, we should be able to test this now. So I'm gonna prepare the following way.
I'm gonna go ahead and open my Cloudflare storage here. I'm gonna go in storage, R2 object storage, overview. Here it is, Resonance app. Currently zero objects inside. That's the first thing I'm going to do.
The second thing I'm going to do is run npx prisma studio. So in here I have my Prisma Studio. Currently I have two voices, so it would be a good idea to delete all of the voices inside, so it's completely empty.
No generations, no voices here. And let's try and run the seed script. I'm very interested to see if this will work. I'm going to go ahead and do npx prisma db seed. This should trigger the seed script from the Prisma config here.
So let's see if that's going to work or not. So it's running the command. This is a warning. That's fine. And here it is.
Abigail, Anaya, Andy. It's basically going over each file and uploading them right now. So let's see if it works. And there we go. You can see that I have successfully uploaded all of the voices.
System Voice seed has been completed. The seed command has been executed. So the first thing I'm going to do is refresh my R2 here and let's see. There we go. We have a folder called Voices and inside we have another folder called System and in here we have very important IDs and if you're wondering what these IDs are referring to.
I think these are the same as the database. Let me check. So inside of your database, you can now refresh your voices. And yes, you can see that each of these voices now has a CUID. And we have mapped the name, the ID of the database to the uploaded file here in Cloudflare.
And each of these has its own key, and we have mapped the R2 object key to each of the database records — the keys are just voices/system/ and then their very own ID. So if you wanna check whether this works, I recommend downloading one of the files and just listening whether it sounds like one of the examples that you had. Great! And that is actually it. We are ready to open a pull request, review the changes, and merge this.
As I said, if you had any trouble with the system voices, you can of course create your own. So you can just record yourself for 20-30 seconds, try to make it clear audio and then you know just add yourself three or five times in here. Pick random names or just copy these ones. And then go ahead inside of your features, voices, modify your voice scoping to only feature those names. And then in the seed system voices, go ahead and modify the system voice metadata to only feature those few names too.
And if you've done that, everything should be working just fine. Great. Again, you can always visit the source code if you think you've made a mistake. Beautiful! So, as I said, you probably don't have all of these .wav files here, simply because I told you to add them to your .gitignore — they're only used once, so it doesn't make sense to keep them.
Okay. So I'm gonna go ahead and open a pull request now. I will shut this down. I'm gonna go ahead and pick the name — so, chapter 4: git checkout -b 04-backend-infrastructure.
All right. I'm going to stage everything and I'm going to commit "04 backend infrastructure". Perfect. And then I'm going to do git push -u origin 04-backend-infrastructure. Mine will take some time, simply because it has to upload all of those audio files, but yours will be much faster.
And after a successful push, you will see your branch as an available pull request right here. So let's go ahead and create a pull request and let's see the changes. And here we have the summary by CodeRabbit. We added system voices feature with persistent cloud storage support. We introduced application health check page for system monitoring.
We implemented a voice categorization system for organized voice selection, and enhanced error handling with error boundary protection, referring to our test TRPC page. And we also had a successful deployment on the web, though nothing has really changed for us, right? We didn't make any UI changes; we simply added voices. But one thing that you should do — which we forgot to do, though it seems like everything is fine — we forgot to run npm run build and npm run lint.
So you can see I'm testing this right now, and everything seems to be okay — and I know it's okay, actually, because Railway would fail otherwise. So if yours is failing, it could be because of something here. Great. So let's look at the actionable comments from CodeRabbit. In here, we have wrapped the entire layout with the TRPC React provider, and it's telling me that I should move it inside of body, because Clerk's recommended pattern is around the HTML element, but the TRPC React provider should be moved inside body.
I've checked the documentation, and the documentation simply says to mount the provider at the root of your application. It doesn't really specify where — maybe that's in the classic setup, I'm not sure.
Let me try and find whether that's somewhere in the classic integration. I don't really know, but ours works as expected, so unless it starts causing problems, I'm not going to move it. In here, I forgot to comment out the five-second delay, but we will remove that endpoint anyway. Great — so, great comments from CodeRabbit.
Here's something to think about if something goes wrong. One thing you should do if you have your app connected to Railway, go ahead and open it. And we actually have to update the variables. Yes, because we added new ones. Right now we're not using them from anything but the seed script, but we will use them later.
So we should add app URL, R2 account ID, access key ID, secret access key, all of these. So I'm just going to go ahead and add these at the top. There we go. And since I can add spaces, I'm going to keep some things separated here. So skip Environment validation, database URL, clerk secret key, next public clerk publishable key, app URL, and then R2, token, bucket name, secret access key, access key ID, and account ID.
Let's click update variables. And once you've done that, you should go ahead and look for "apply 6 changes" and hit deploy. It will still work just as expected, but now we are guaranteed not to have any problems with the next chapter, which will be developing the actual voice selector. Great, I believe that marks the end of this chapter. We have— oh, we didn't merge the pull request.
Let's merge the pull request first. This was successful, great. We can merge that, and then I'm just going to go back here: git checkout main, git pull origin main. So we are up to date with everything, but on our main branch. I always like to confirm that by clicking inside of source control and opening up the graph, and here it is.
Backend infrastructure merged into the main branch. Brilliant! So I believe that marks the end of this chapter. We developed everything here, we learned a lot and see you in the next chapter.