In this chapter, we're going to implement GitHub import and export functionality. We're going to enable users to import a GitHub repository into Polaris as a new project, and we're going to allow them to export a Polaris project to a new GitHub repository. Both of these features will require GitHub OAuth integration via Clerk, background job processing via Inngest, as well as binary file support, which we already prepared for when we built the schema. So let's start by installing the dependencies needed to make this work. The dependencies we're going to install are octokit, which is the official GitHub API client, and isbinaryfile, which is a simple npm package I found that is very good at detecting whether a file is binary.
There are many packages you can do this with; I just found that this one does the job the way I wanted. And react-icons, simply because I know there is a GitHub logo icon in there. So let's go ahead and install these three packages. After that, I'm going to show you my package.json so you can see exactly the versions that I have. So isbinaryfile 5.0.7, octokit 5.5, and it looks like we already had react-icons 5.5.0.
If you didn't have it, now you do. Great. So what we ought to do next is configure Clerk and add an additional OAuth scope to our GitHub OAuth provider. Using the link on the screen, you can visit Clerk's dashboard, and let's go inside of Configure, SSO Connections. In here you will see the providers you decided to add when you configured Clerk.
If you don't have GitHub here, go ahead, click Add a Connection for all users and search for GitHub. Once you've added GitHub here, go ahead and open it. I would recommend that you enable it for sign up and sign in. This is what I told you to do in the beginning. So it's very easy for your users to immediately get their GitHub connected.
Otherwise, they would have to connect it separately later. It's perfectly fine if you have multiple providers: you can have Google, email and password, or username. But it's very useful to have GitHub as well, since our accounts will be so tightly coupled with it anyway. Now here's the problem. GitHub OAuth comes with some scopes, but right now, if we tried to use the token from a user who logs in with GitHub, we wouldn't have the necessary permissions to load their private repositories. Because of that, we have to enable Use Custom Credentials.
And lucky for us, Clerk actually shows you the exact documentation on how to add GitHub as a social connection. So we already did this: we navigated to SSO Connections, we selected Add Connection for all users, and we added GitHub, or we already had it from the beginning. Now, to make the setup process easier, they recommend keeping two browser tabs open: one for the Clerk dashboard and one for GitHub developer settings.
So make sure you have your developer settings open on GitHub. You can use the link from the documentation right here, GitHub developer settings, and you should then see all the GitHub apps you have, all OAuth apps, and all personal access tokens. All right, so now what we have to do is create a new GitHub OAuth app for Clerk. So let's register a new OAuth app here. I'm going to go ahead and select New OAuth App.
I'm going to call this Polaris. For the homepage URL, for now you can just use localhost:3000, because that's where our app is running. But for the authorization callback URL, you have to copy exactly what's written here. So let's paste that and click Register application. Now in here you have the client ID, which you can immediately copy.
You can go back to Clerk and paste the client ID. Now we need to obtain the client secret by clicking Generate a new client secret. This will most likely require you to do two-factor authentication. Once you've successfully authenticated, you will have access to the secret. Go ahead and copy it, as this is the only time you will see it.
And once you have the secret, you can go ahead and paste it here. Now here's the deal with the scopes. Right now, as you can see, I have the user:email and read:user scopes. I'm going to go ahead and also add the repo scope. At the time of me making this tutorial, this is the scope that is required in order to access users' repositories.
So that's the scope we need. And you can of course later always add or remove scopes. So let's go ahead and click save. Great. Now, what I would suggest doing is running your app and simply checking if everything still works just fine regarding the login.
So let's go ahead and see did we implement a way to log out? I think we did right here. So I'm gonna go ahead and log out now. I will refresh my page and I'm gonna go ahead and sign in. I'm gonna use GitHub to sign in and let's see.
There we go. You can see that the scopes are now a little bit different. Besides my personal data, email address, and profile information, it also shows access to repositories, both public and private. So this is what all of your users will see: that Polaris is using that scope to access repositories. Perhaps there is a more granular scope which might be better to use, but for our purposes the repo scope is the one we need.
Basically, you should see this now. You can also see that the user can give access to any organizations they have. Let's just authorize the user, and everything seems to work just fine. Great! So that part is officially finished.
What we have to do next is develop some system mutations. Most of these system mutations will refer to something we already implemented, so I'm just going to give you a reminder inside of convex/schema.ts. If you take a look at projects, you can see that we already have importStatus and exportStatus, as well as exportRepoUrl. These three have not been used at all, but we already implemented them.
Let me see, in chapter seven, projects: that's where we implemented these three fields. So just confirm that you have them. If by any chance you don't, pause the screen and add these three fields. Everything else, I think, is pretty standard.
So now we're going to focus on Convex's system mutations. The last thing we added here was a bunch of agent tools, and now we're going to implement all the mutations we need to successfully import or export a GitHub project. The first one we're going to write is called cleanup, so I'm going to go ahead and prepare it. The cleanup mutation will accept the internal key, just like every single mutation inside of system.ts, and it will also accept a project ID.
What this will be used for is very simple: to clean up all files within a project. If you trigger an action to import from GitHub, it might be a good idea to clean up any files you might have created before that, because an import, in my opinion, is a mirror, right? It should be a one-to-one replica of what's in the GitHub repository. So the first thing we're going to do here is fetch all the files using the project ID, using the by_project index. And then what we're going to do is very simply delete the files.
Like so. Let's go ahead and delete any storage files if they exist, and then delete the file by its ID. In this scenario, I don't think we need to do any recursive deleting. Usually that's the first thing I think of whenever we delete files: hey, I need to detect if this is a folder and then delete all of its descendants. But not in this case, because these are literally all the files, both folders and files; we are not querying by type anywhere, just by project.
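The loop just described might be sketched like this in plain TypeScript, with stub callbacks standing in for Convex's ctx.storage.delete and ctx.db.delete (the FileRow shape and the storageId field name here are assumptions, not the actual schema):

```typescript
type FileRow = { _id: string; storageId?: string };

// Deletes every file row in a project, plus its stored binary payload if any.
// storageDelete/dbDelete are stand-ins for ctx.storage.delete and ctx.db.delete.
async function cleanupProjectFiles(
  files: FileRow[],
  storageDelete: (storageId: string) => Promise<void>,
  dbDelete: (fileId: string) => Promise<void>,
): Promise<number> {
  for (const file of files) {
    if (file.storageId) {
      await storageDelete(file.storageId); // remove the binary blob first
    }
    await dbDelete(file._id); // then remove the row (folder or file alike)
  }
  return files.length; // number of rows removed
}
```

Folders and text files have no storageId, so for them only the row delete runs.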
So in this for loop, all files and folders will be deleted, as well as any storage if they were binary files. Great. Now let's implement a very simple generateUploadUrl. This is a mutation which will accept the internal key and will very simply use ctx.storage.generateUploadUrl. This comes from Convex's documentation on how to upload files.
In order to upload a file, you first need to obtain the upload URL from the backend. In our case, since this will be a background job communicating with Convex, we've put this inside of the system queries, well, system mutations to be specific, and we will very simply return it to the Inngest background job: here's the upload URL for any files you want to upload. Those will probably be some images or fonts, any binary files the GitHub repository might have. And now let's go ahead and develop the actual createBinaryFile mutation.
So, the createBinaryFile mutation will accept the internal key, project ID, name, storage ID, and an optional parentId, which is an ID of files. As usual, the handler will be validated using validateInternalKey. Then we're going to fetch all files by this project's parent ID. My apologies, no, that's not what I was focusing on. The name of the index.
We are going to fetch all files in a specific project and by a parent ID. So independently, not by the "project's parent"; it's just that the name of this index confuses me so much that every time I read it, I read it like that. And what we have to do here is check, when we create a binary file, the exact same thing as if we were creating a new file in a folder. So let me show you again: if I have foo.js and I attempt to create another foo.js, I get an error. I should not be able to do that, right?
So imagine if they were binary files. If this were image.png, for example, and I created another image.png, again, I shouldn't be able to do that. Now, this is not how images are created; images are going to be created in a binary format. But we still have to check, in this specific folder (the folder being the parent ID), whether there are any existing files. If we can find an existing file with the same name, which is also of type file, let's prevent this from happening by throwing "File already exists".
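The duplicate-name check can be sketched as a small pure function (hasNameConflict and the FileNode shape are my names for illustration, not the course's):

```typescript
type FileNode = { name: string; type: "file" | "folder" };

// Returns true when a sibling of type "file" with the same name already
// exists — exactly the case the mutation guards against before inserting.
function hasNameConflict(siblings: FileNode[], name: string): boolean {
  return siblings.some((node) => node.type === "file" && node.name === name);
}
```

A folder named image.png would not conflict, since only siblings of type file are checked.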
Then we can simply create a new file using ctx.db.insert into files, with the project ID, name, type of file, storageId from args.storageId, and parentId from args.parentId. Let's initialize updatedAt, and finally let's return the file ID. Great! Now we're going to create a mutation called updateImportStatus. This mutation, which I've just added, will accept the following arguments: internal key, project ID, and status, which needs to match what's in our schema. So let me confirm: importStatus can be importing, completed, and failed; importing, completed, and failed.
So these need to match. In the handler, we're going to validate the internal key, and we are very simply going to patch the project. Also, there is a new syntax for this: you can now specifically define which table you want to patch. I think this is better; I like the explicitness.
Basically, Convex IDs have a very specific prefix which tells Convex whether something is an ID of the projects table or the files table. So you could just pass the ID, but they've updated it so that you can explicitly select the table, and I prefer this way much more. Simply put, I think explicitness is always better. Okay, so this is why I just added this function without typing it out; it's a super simple one, just used to update.
Great. Now we have a very similar thing, which is the exact same kind of function, but for the export status instead of the import status. So it's exactly the same; I'm just going to add it. Let me show you.
updateExportStatus is a mutation which again accepts the internal key, project ID, and status, which can be exporting, completed, failed, and cancelled, as well as a repoUrl, which is an optional string. So just make sure that all of these match exactly. We are going to validate the internal key, and then we're simply going to update again. Let's patch projects here.
We patch args.projectId with exportStatus set to args.status, exportRepoUrl set to args.repoUrl, and updatedAt set to Date.now(). Great! Now let's add a function called getProjectFilesWithURLs. This is going to be a different one, so let's just prepare it like so: getProjectFilesWithURLs. It will accept an internal key and a project ID. It will start by validating the internal key so that we know we have access to do this, and then we're just going to load every single file in the project, just as we did in the cleanup function, right? Using the by_project index, we're simply loading every single file here.
Now we're going to return await Promise.all, and inside of it we're going to call files.map. For each file, we initiate an asynchronous function which gives us access to the file, and we simply check: if that file has a storage ID, we have to fetch its URL. So let's do url = await ctx.storage.getUrl(file.storageId). Because right now, if we were to just load all files in a project, and one of those files was a binary file, I couldn't do much with the storage ID.
What do I do with a storage ID on the frontend? I need some kind of URL, something to display it. So that's what we're doing here: we are preparing these files so that they are readable by the frontend. Each file has an optional storageId in the database, which is what allows it to be a binary file, but again, we can't do anything on the frontend with the storage ID. By using ctx.storage.getUrl, we can get an actual URL. Then we simply modify the current file by adding the storageUrl so the frontend can do something with it. For all other files, we just return the exact same file and set storageUrl to null.
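That mapping might be sketched in plain TypeScript like this, with getUrl standing in for ctx.storage.getUrl (the field names here are assumptions based on the transcript):

```typescript
type StoredFile = { _id: string; name: string; storageId?: string };

// Resolves each file's storageId to a URL the frontend can render; files
// without a storageId (text files and folders) get storageUrl: null.
async function withStorageUrls(
  files: StoredFile[],
  getUrl: (storageId: string) => Promise<string | null>,
): Promise<(StoredFile & { storageUrl: string | null })[]> {
  return Promise.all(
    files.map(async (file) => {
      if (file.storageId) {
        return { ...file, storageUrl: await getUrl(file.storageId) };
      }
      return { ...file, storageUrl: null };
    }),
  );
}
```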
Great. Now let's implement a very simple createProject mutation. This will be used exclusively for when we import a GitHub project. We already have, I think, almost the exact same one in projects: create.
It's very similar to this one, so I'm just going to add it so you can see the differences. The createProject mutation accepts the internal key, the name of the project, and, let me see, an owner ID. So: internal key, name, and owner ID. We're going to validate the internal key, and then we're simply going to create a new project with the name, the owner ID, and updatedAt. And here's the catch: the importStatus will be set to importing. Why?
Well, because we know that this specific createProject will only be used by the Inngest background job when it starts importing a project from GitHub. That's why it's inside of system and not anywhere else, and that's why it uses an internal key to validate, because there's no auth here. So yes, that's the only thing. I kind of don't like having such a generic name with such an important status baked in, but for tutorial purposes, this is fine right now.
You can of course change the name later to something more specific. All right, that's all for the system mutations. Now let's focus on building the API routes. We're going to start with the import route. Let's go inside of app/api and create a new folder called github.
Inside it, a folder called import, and inside that, a file called route.ts. Oh, I made a mistake. As you can see, route.ts is not inside of the import folder. That means Next.js will not register this as a route, so I have to drag it inside to make sure it's registered. This might trigger some cache invalidation; you can just save this file, close it, and then collapse the entire .next folder so it doesn't distract you.
Let's go back inside of route.ts. Let's start by importing Zod, NextResponse from next/server, and auth and clerkClient from @clerk/nextjs/server. In here I'm going to import convex from the Convex client and inngest from the Inngest client. Then I'm going to import api from Convex's generated API. I'm going to define the request schema for this API endpoint, which is just a simple URL.
And then I'm gonna create just a simple function which will help us parse GitHub URL. Now you can decide whether you wanna do this or not. So function parseGitHubURL accepts a URL string and it will check if URL matches this specific regex. I don't expect you to write this out. If you want to, of course you can, but you can also just visit the source code and copy this from this exact location.
It's at app/api/github/import/route.ts. And again, this isn't anything critical, right? It's just something to tell the user on the frontend: hey, I think you gave us the wrong URL. But then again, URLs might change in the future, so I'm not sure how good or bad this is.
I'm basically going to show you that you can do it both with or without this function. So don't worry about this function too much, right? You can completely choose to skip using this function. Now we're going to actually export the POST request. So what I'm going to start is by using await auth which we've imported from Clerk and check if we have a user ID.
In case the user ID is missing, I'm just going to return a NextResponse with unauthorized. Then I'm going to do await request.json and call requestSchema.parse on the body, and here I'll have the parsed URL. Then what I can do is extract the owner and the repository from the URL using the parseGitHubURL function. So basically, if I go to github.com, then my profile name, then the name of the repository, this parseGitHubURL function would extract that into an object: owner, this, and repo, this. So you don't strictly need it, but it is useful, especially since I don't think the URLs are going to change, like, tomorrow.
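A minimal sketch of such a parser; the regex in the actual source may differ, so treat this as an illustration rather than the course's exact code:

```typescript
// Extracts { owner, repo } from a github.com URL, or returns null when the
// URL doesn't look like a repository link. Tolerates a trailing "/" or ".git".
function parseGitHubUrl(url: string): { owner: string; repo: string } | null {
  const match = url.match(
    /^https?:\/\/github\.com\/([^/\s]+)\/([^/\s]+?)(?:\.git)?\/?$/,
  );
  if (!match) return null;
  return { owner: match[1], repo: match[2] };
}
```

Anything that doesn't match (a non-GitHub host, a bare profile URL with no repository segment) returns null, which the route can turn into a "wrong URL" message.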
But again, be careful with this, because URLs sure can change in the future. At this point, though, there are so many GitHub URLs out there that they're going to have to maintain or redirect them, right? So this, for example, is a completely valid URL. What we have to do now is obtain our GitHub token. We're going to start by creating the Clerk client, and then we're going to get all the tokens we have for this user. So await client.users.getUserOauthAccessToken, passing in the user ID and the provider. For which provider? GitHub. I think this is strictly typed, yes, so you can select exactly which one, and then very simply choose the first one.
Tokens, data, first in the array, token. In case we are not able to obtain the GitHub token, unfortunately we can't even begin fetching anything, so let's return NextResponse.json with "GitHub is not connected. Please reconnect your GitHub account."
At least that's what we assume the problem is. Now we have to set up our internal key. That's going to be process.env, and let me go ahead and check; I think I've done this a few times before.
I have POLARIS_CONVEX_INTERNAL_KEY, so I'm just going to copy that and paste it here. Again, if we don't have the internal key for whatever reason, let's throw a 500, basically a server configuration error. Now that we have the internal key, we can create a new project from here. So I told you this would happen from the Inngest background job, but actually it will happen here in the route, my apologies.
We're going to call await convex.mutation(api.system.createProject), which is exactly what we created last, I believe. We pass the internal key and the name of the repository; we can call the project exactly what the repository is called. For example, here the repository is called CursorDev. So instead of anonymizing or randomizing the name like we do when we create a new project from the projects page, we can just reuse the name here, right? Why not?
And we set the owner ID to the currently logged-in user ID. Great. Then we're going to trigger a background job which will start synchronizing from GitHub into the project. So await inngest.send with a name, which will be the name of the event we are going to create, and the data will be the owner, the repository, the project ID, and the GitHub token. Finally, let's just return NextResponse.json with success: true and the project ID.
And while we are developing API routes, I also want to develop the export API route. So let's create another route.ts; this one is app/api/github/export/route.ts. Again, we're going to start with Zod, NextResponse, auth and clerkClient, inngest from the Inngest client, and Id from Convex's generated data model. We're then going to develop the request schema for this API.
Project ID, repository name. So this is now in reverse. This is for creating a repository on GitHub. So we're going to have some limits here. If you want to create a new repository, the name needs to be a minimum of one character and a maximum of 100.
The visibility can either be public or private, and we're going to default it to private. The description can be a maximum of 350 characters, and it's optional. Now let's go in here and start by obtaining the user ID. Let's check if the user ID is missing and throw a 401. Then let's get the body.
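The limits the schema enforces can be sketched in plain TypeScript (field names like repositoryName are assumptions based on the transcript; the real route uses a Zod schema):

```typescript
type ExportRequest = {
  repositoryName: string;
  visibility: "public" | "private";
  description?: string;
};

// Validates the export body: name 1-100 chars, visibility defaulting to
// "private" when omitted, description optional with a 350-char cap.
function validateExportRequest(body: Record<string, unknown>): ExportRequest {
  const name = body.repositoryName;
  if (typeof name !== "string" || name.length < 1 || name.length > 100) {
    throw new Error("repositoryName must be 1-100 characters");
  }
  let visibility: "public" | "private" = "private"; // default when omitted
  const vis = body.visibility;
  if (vis !== undefined) {
    if (vis !== "public" && vis !== "private") {
      throw new Error("visibility must be public or private");
    }
    visibility = vis;
  }
  let description: string | undefined;
  const desc = body.description;
  if (desc !== undefined) {
    if (typeof desc !== "string" || desc.length > 350) {
      throw new Error("description must be a string of at most 350 characters");
    }
    description = desc;
  }
  return { repositoryName: name, visibility, description };
}
```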
Let's parse the project ID, repository name, visibility, and description from the body, using requestSchema.parse(body). Then let's initiate the Clerk client and get all the tokens we have for GitHub, the same thing we just did in the previous route.
And let's get the GitHub token from the first one in the array. In case the GitHub token is missing, let's throw an error. Let's prepare by copying the internal key handling from the other route, adding it here, then checking if the internal key is missing and throwing a 500. Since we just moved between these two routes, make sure you are developing the export one; don't accidentally overwrite your import one.
Great. What we have to do now is just send another background job event. This event will be await inngest.send with the name github/export.repo and the data: projectId as an Id of projects, repository name, visibility, description, GitHub token, and internal key. We can decide whether we want to pass the internal key as a prop, since we can just as easily check it there; but in here it's kind of already checked, so we don't have to do double checks. We'll see.
And I don't think we need to do any typecasting here, actually. Then we can just return something like NextResponse.json with success: true and the event ID, the first one in the array returned by send. This event ID is not that important; for example, in the import route, I don't think we even returned it. So you can decide if you want to be specific, so that your network logs show the event ID you can copy.
So let me make the same appear in the import one. And let me just check if there is anything else missing. So: success: true and the project ID; I think in here we are not passing the project ID.
Great, so I just want to make them the same, since they are so similar already. Okay. So now let's implement a cancel route. So far we've implemented export and import; now let's give the user the ability to cancel.
So, another route.ts, and at this point let's just copy one of the other ones. I'm going to copy the export one and paste it here. Actually, I'm going to copy import, since it's more similar. So copy the import route, go inside of cancel, and paste it there.
So we need Zod and NextResponse. We're not going to need clerkClient, you can remove that, so just auth, convex, inngest, api, and we're also going to need Id from the generated data model. We're not going to need the parseGitHubURL function. The request schema will not accept a URL but a project ID. The auth check will be exactly the same.
The parsing will parse for the project ID. We can skip the entire check here and go immediately down to the internal key. And let me just see what the problem is: cannot redeclare block-scoped variable projectId. That's odd.
Okay. Oh, because it appears later. No worries. So we were here, right? We check if we don't have the internal key.
We're not going to be calling any Convex mutations here, I mean, but we are going to send an event, so let's send an event called github/export.cancel, and the data is just going to be the project ID. So basically, we are going to give the user the ability to cancel an export. If the user attempts to export, I'm not sure how long that's going to take; the project can have thousands of files. So, for them not to be stuck forever, we're going to give them the option to cancel.
Now, we could develop the same for import, but the user can just delete the project there. We want to give the user the option to cancel when they are exporting, since that is a project they have developed in Polaris, so we need to give them a way to get out of that situation. Whereas with import, it's a brand-new project: nothing of value will be lost if they just delete it. But of course, you can develop the same thing once you see how we do it.
All right, so once we send the cancel event, we also have to update the status to cancelled. So this is where we're going to do the mutation: await convex.mutation(api.system.updateExportStatus) with the status cancelled, the internal key, and the project ID. And let's just return success: true. If you want to, you can return the event ID again, simply so we are consistent across all three.
There. Okay, so that's the cancel route. Then let's copy the cancel route and build the last route here, which will be reset. In reset/route.ts we're going to need Zod, NextResponse, auth, convex (not inngest), api, and Id. The request schema is the same, the auth check is the same, the parsing is the same, and the internal key check is the same. We're not going to be triggering any events; we are very simply going to clear the export status.
So this will again call updateExportStatus, and we're just going to reset everything: the status will be undefined, and the repo URL will be undefined. What exactly is this used for? Well, the difference between cancel and reset is that cancel is simply used to allow the user to stop the export, while reset is used if the user wants to export their Polaris project once again, maybe to another repository.
So there is no ongoing synchronization between GitHub and Polaris; there is only manual synchronization. They are two separate entities. That's why we also allow users to reset entirely: if they want to export their project to five different GitHub repositories, sure, go ahead.
All right, now let's start building those Inngest functions. I'm going to go inside of src/features/projects, and in here I'm going to create a new folder, inngest. Inside of that, I'll create import-github-repository.ts. And while we're here, I think it might be a good idea to run npx convex dev, simply to synchronize all of those functions and to catch if there's an error in any of them. In my case, there was no error.
So convex functions are ready. If you're seeing an error, it might be a good idea to go back and fix them. So all of the functions which we've written today is inside of system.ts. So it has to be somewhere here. You can see my green line here, meaning all the newly added ones are these.
The cleanup, the create binary file, the update import status, update export status, get project files with URLs and create project. So if you have any errors, they're going to be here. All right. Now let's focus on the ingest function. We're going to import ky, we're going to import our newly installed octakit, same with the binary file and non-retriable error from ingest.
We're going to import the convex client, the ingest client and API and ID from convex generated. Let's create an interface for this event. So what do we expect to receive? So when we are importing, we need the owner, we need the repository, we need the project ID to which project we are going to synchronize this entire repository, and finally, we need the logged in user's GitHub token. So, once we have those things, we can go ahead and create an ingest function.
Let's start with some configuration. The ID will be import-github-repo. And let's first handle onFailure: what should happen when this fails? We have an event and a step.
The first thing we're going to do is check if we have the internal key. I keep forgetting the name of my internal key here: POLARIS_CONVEX_INTERNAL_KEY. So const internalKey = process.env.POLARIS_CONVEX_INTERNAL_KEY. If there is no internal key, let's just return.
Now we're going to extract the project ID for which importing just failed. You can get it from event.data.event.data; usually it's just event.data, but when working within onFailure, this is how you get it. In here I'm casting it as our event type simply so we have the Convex Id available. Then we're going to run a step called set-failed-status: at this point importing has failed, so I'm going to await convex.mutation(api.system.updateImportStatus), pass in the internal key and the relevant project ID, and set the status to failed. So whatever goes wrong in this background job, after several retries of course, will trigger this onFailure, which calls the system mutation to indicate to the user: hey, we failed importing this repository. So they can try again.
Great. Now that we have that, let's define the event name, which is github/import.repo. What I would suggest is that you do a search throughout your repository; you can use Command+Shift+F or Ctrl+Shift+F while the name is highlighted so it opens like this, and double-check that inside of src/app/api/github/import you have the exact same name for the event that you trigger, github/import.repo. It should be a one-to-one match. Great. Now we build the actual background job; of course it's going to be asynchronous, and let's start by simply extracting the data we need: owner, repo, project ID, and GitHub token from event.data as our import GitHub repo event type.
Then I'm just going to copy the internal key setup from before and paste it here, slightly modified to throw a NonRetriableError saying that POLARIS_CONVEX_INTERNAL_KEY is not configured. Because if that's not configured, the background job... sorry, it's throw new NonRetriableError.
The background job cannot do any Convex mutations without it. Now let's establish Octokit by passing the auth property as the GitHub token, which we extract from Clerk, and which now has the proper scope because we configured it at the start of this chapter. Great. So now we're going to use the first mutation we created today: we're going to clean up any existing files in the project whenever this import background job runs.
So let's run a step called cleanup-project, which will call await convex.mutation(api.system.cleanup) and simply pass the internal key and the project ID, because that's all we need. Once it's been cleaned up, let's fetch the entire repository tree. A step called fetch-repo-tree will have a try/catch inside. In the try block, we're simply going to extract the data from await octokit.rest.git.getTree, using the owner, the repo, tree_sha set to main, and recursive set to "1". The 1 is not an integer, it's a string, like so.
And Let me just see the problem here. Oh, it's my apologies, tree SHA. This can be main or it can be master depending if it's an older one. Well, I mean, technically you could also add an input so the user enters exactly which branch they want to clone. But what I'm doing right now for tutorial purposes is just fetching the main branch.
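As a standalone sketch of that fallback idea (the function name and the injected fetcher are illustrative, not the exact code from the project): try each candidate branch in order and return the first tree that resolves.

```typescript
// Try each branch name in order; return the first tree that resolves.
// The fetcher stands in for octokit.rest.git.getTree({ owner, repo,
// tree_sha: branch, recursive: "1" }) — injected here so the helper
// stays self-contained.
type TreeFetcher = (branch: string) => Promise<{ path?: string }[]>;

async function fetchTreeWithFallback(
  fetcher: TreeFetcher,
  branches: string[] = ["main", "master"]
): Promise<{ path?: string }[]> {
  let lastError: unknown;
  for (const branch of branches) {
    try {
      return await fetcher(branch);
    } catch (error) {
      lastError = error; // e.g. a 404 when the branch does not exist
    }
  }
  throw lastError;
}
```

In the actual job this shape collapses into a try with "main" and a catch with "master", which is exactly what we write next.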
So I'm just gonna either try with main or fall back to master. So let's return data, and then we're simply going to do the exact same thing in the catch, trying the master branch, because some older projects might use master. So we're just trying to fetch the tree now. Alright, that step is done. What we have to do now is sort the folders by depth so parents are created before their children — we first need to create folders before we can start creating the rest of the files.
So for example, the input we're going to receive — the data here — is basically this kind of structure: paths like source/components and source/components/ui alongside source. And the output order that we need is source first, source/components next and source/components/ui last, because we can't allow source/components to be created when we don't have its parent yet, right?

This all comes back to our schema structure and our parentId reference. So let's go ahead and start developing this. folders will be tree.tree.filter — take each item and check that item.type is "tree" and that item.path exists. Then let's sort: aDepth is a.path ? a.path.split("/").length : 0. Then let's duplicate this, change it to bDepth and change the variable to use b — b.path, b.path.split. And let's return aDepth - bDepth, which will sort them as we've described above.
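Pulled out as a self-contained sketch (with the tree item shape simplified to just path and type, and the function name being mine), the filter-and-sort looks roughly like this:

```typescript
// Keep only folders ("tree" entries with a path) and sort them so that
// shallower paths (fewer "/" segments) come first — a parent is therefore
// always created before any of its children.
type TreeItem = { path?: string; type?: string };

function sortFoldersByDepth(tree: TreeItem[]): TreeItem[] {
  return tree
    .filter((item) => item.type === "tree" && item.path)
    .sort((a, b) => {
      const aDepth = a.path ? a.path.split("/").length : 0;
      const bDepth = b.path ? b.path.split("/").length : 0;
      return aDepth - bDepth;
    });
}
```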
Great. Now that we have the folders, let's go ahead and do the following. We're going to return the folder map from the step so it can be used in subsequent steps. This is Inngest-specific: Inngest serializes step results, so we must use a plain object instead of a Map.
So what I'm going to do now is define a folder ID map within a step. So await step.run("create-folders", async ...), and now I will create a map, but just by using a normal object — so map will be typed as an object which accepts a string key and, on the other end, a file ID. Now we're going to go through the folders first: for (const folder of folders). If folder.path is missing, let's just continue — there's nothing we can do here — but otherwise let's prepare a few things.

pathParts, which is folder.path.split("/"). So basically, when we have something like source/components, it will be split into source and components. The name will be the last segment in the array, so we pop it, and the parent path is the rest. So we are separating the name, basically. And the last thing we need is to find the parent ID.
So the parent ID will check if we have a parent path: if we do, look it up in the map, otherwise mark it as undefined. And then let's go ahead and get the folder ID by creating it: await convex.mutation(api.system.createFolder, ...), passing the internal key, project ID, name and parent ID. And then we're going to add to our map, for that folder path, the equivalent folder ID we have created in our database. So then in the next iteration, if that path repeats as a parent, we will find it because of this.
We will now find that parent path in the map, so we know the parent has already been created and we can start importing its children, right? That's the tricky part here: Octokit just returns us a bunch of files and structures, but we have to create them so that parents come first and then their children. That's why we're doing this somewhat complicated logic.
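That somewhat complicated logic boils down to two small operations, sketched here as standalone helpers (the names splitFolderPath and resolveParentId are mine, not the course's):

```typescript
// Split a folder path into the folder's own name (last segment) and its
// parent's path (everything before it).
function splitFolderPath(path: string): { name: string; parentPath: string } {
  const parts = path.split("/");
  const name = parts.pop() as string; // last segment is the folder's own name
  return { name, parentPath: parts.join("/") };
}

// Resolve the parent's already-created ID from the plain string→ID map
// (a plain object, because Inngest serializes step results).
function resolveParentId(
  parentPath: string,
  map: Record<string, string>
): string | undefined {
  return parentPath ? map[parentPath] : undefined;
}
```

Because the folders were sorted by depth first, the parent's entry is guaranteed to be in the map by the time its child is processed.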
All right. And of course, return map. Then let's go ahead and handle the binary files — these are a bit more complicated. So let's start by getting all files, or blobs, from the tree.

allFiles is tree.tree.filter — check that item.type is "blob" and that we have item.path and item.sha. Then let's go ahead and create the files. So in a step called create-files we're going to go through each file in our allFiles filter here. In case file.path or file.sha is missing, we're just going to continue.

And then we're going to open a try/catch block here — let me go ahead and fix the indentation, okay. Inside the try we're going to get the blob using the file SHA, owner and repo. I'm not sure how to pronounce it, so when you hear me say SHA, this is what I mean. Alright, so how do we get the blob? By calling the octokit.rest.git.getBlob function, which needs the owner, repository and the exact file SHA, and that will give us back the blob. Once we have the blob, we can create a buffer using Buffer.from with the blob content and "base64".
And then, once we have the buffer, we can call our binary check: isBinary is await isBinaryFile(buffer), with isBinaryFile imported from the isbinaryfile package. This will allow us to decide how we store this file. Now we also have to define whether this file is inside some folder. So we have to get pathParts using file.path.split("/"). Let's get the name of the file using pop().
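To make the decode-and-detect flow concrete: GitHub returns blob content base64-encoded, so it has to be decoded into a Buffer before checking it. The real check is isBinaryFile() from the isbinaryfile package; the null-byte heuristic below is only a simplified stand-in I'm using so the sketch is self-contained — it illustrates the idea, not the package's actual algorithm.

```typescript
// Decode the base64 blob content that GitHub's getBlob returns.
function decodeBlob(base64Content: string): Buffer {
  return Buffer.from(base64Content, "base64");
}

// Crude stand-in for isBinaryFile(): text files essentially never contain
// NUL bytes, so a NUL in the first chunk strongly suggests binary data.
function looksBinary(buffer: Buffer): boolean {
  return buffer.subarray(0, 8192).includes(0);
}
```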
Let's go ahead and get the parent path using parts.join("/"), and then let's check for the parent ID — very similar to what we did before: if we have a parent path, check if we have stored it inside the folderIdMap, otherwise mark it as undefined. This folderIdMap, I believe, is returned from here, right? So now let's check: if the file is binary, we have to upload it. So let's first get the upload URL. We can do that using our system mutation: uploadUrl is obtained with await convex.mutation(api.system.generateUploadUrl), and it just needs the internal key.
Then we can upload to that URL by making a POST request, and we will get back a storageId, which we can store into the database. So let's go ahead and use ky for that: ky.post to the uploadUrl which we've just obtained. This is a signed upload URL from Convex, so we can safely upload here.

It needs specific headers, so these are the ones: Content-Type: application/octet-stream, and the body will be the buffer. So we are now uploading a binary file, and we parse the response as JSON to get back the storageId, which will be of type Id<"_storage">. And once we have the storage ID, we can go ahead and do await convex.mutation(api.system.createBinaryFile), passing the internal key, the project ID, name, storageId and parentId. This function was already created in the beginning. We are now using it through a background job to actually create that binary file record after it's been uploaded to Convex through a secure, signed upload URL.
Great. So that's the case if isBinary is true. But if it is not binary, in that case we're simply going to convert the buffer to a string using UTF-8, and then we're just going to call a normal api.system.createFile with the internal key, project ID, name, content and parent ID. And in the catch here, we're going to do console.error("Failed to import file", file.path). Great. And then let's go ahead and do one more step, an easy one: set-completed-status. It will run await convex.mutation(api.system.updateImportStatus).
At this point, we have already finished importing, we have finished uploading binary files, we finished creating normal files, we created parents first and assigned everything properly. So at this point we can just call our system update import status mutation with the status of completed. Great! And let's go ahead and return it. A very simple success true and project ID.
Great! So that is our import GitHub repo. I will admit it was a bit complicated even I was getting lost a bit here. So if we made some mistake we will test that out once we implement the UI so we can actually fire this. But I think it's mostly okay.
I think mostly we did everything right. All right, so the next thing we have to do is create a very similar background job, but for exporting to a GitHub repository. The export Inngest background job will be similar, but not identical, so I'm just going to start a completely blank file. Inside of the inngest folder here, I'm going to create export-to-github.
export-to-github.ts — and the imports are quite similar. So those are ky, Octokit, NonRetriableError, convex, inngest, api and Id. Now let's go ahead and add the interface ExportToGithubEvent. ExportToGithubEvent will have a projectId, repoName, visibility ("public" or "private"), an optional description and a githubToken. And I don't think it makes sense to pass the internal key, so let's remove that.

Then let's create a specific type called FileWithUrl. It has an _id of Id<"files">, a _creationTime which is a number, projectId, optional parentId, name, type ("file" or "folder"), content, storageId and storageUrl. Now that I look at it, I think we can do this in a better way. I think we can do... let me just see.
This could be based on... I mean, I'm not sure. Let's see: let's base the type FileWithUrl on Doc from the data model. So let me show you that import right here.

Doc from the generated data model, and we use the "files" table. So right now I think they would be almost a 99% match — the only thing we want to extend it with is the storageUrl. I think these are now exactly the same.
I think. I think this is a much more elegant way of defining that. Okay. And now let's go ahead and create the Inngest function, exportToGithub.
Let's start with the ID export-to-github. Let's define cancelOn: it will listen to github/export.cancel. And okay, so we cannot use match — let me see what the name of the new one is.

I mean, we can use match, but it's deprecated, and I want to teach you the new way. So let's see, we have to use if, and I think I have to check that the event's projectId matches the async event's projectId — but not exactly like this. So let me check. Alright, we have one example in process-message.

Yes, so let's use it to learn. We have event.data and we have async.data — that's what we have to check: event.data.projectId equal to async.data.projectId, and they are compared using double equals.
Okay, I think we've done this correctly. Alright, besides cancelOn we're also going to have onFailure. Now, onFailure is very similar to the onFailure in the import job, so let's go ahead and revisit it. I'm going to scroll up here to find it.

Where is it? Here it is. We're going to start by checking if we have the internal key, and if we don't, we just do an early return. And now let's destructure the projectId from event.data.event.data and cast it as ExportToGithubEvent.
Alright. And then what I'm going to do is call a function very similar to this one: an entire step called set-failed-status, which calls convex.mutation — instead of updateImportStatus it's going to be updateExportStatus. And I believe the arguments are exactly the same: the internal key, the project ID and the status.
So yeah, I think this works just fine. Now let's go ahead and define the event name. The event name will be github/export.repo. Again, I recommend searching through your project and confirming that this is the event that you trigger inside of app/api/github/export/route.ts. So github/export.repo.

It should match exactly. And it should also match this one, github/export.cancel — search for that too.

Inside of your source/app/api/github/cancel route, you should have that event as well. Great, so you don't have any misspellings. And now let's go ahead and actually build the function. It's going to be an asynchronous function which accepts event and step.
Let's go ahead and start by extracting everything we need: projectId, repoName, visibility, description and githubToken. We don't need the internal key here. Then I'm going to copy the internal key check and the NonRetriableError throw from the import job once again, and paste it here.

So we're going to attempt to get the internal key, and if it's missing, simply throw a NonRetriableError. Great. Let's start by running a step which will change this project's status to exporting. So set-exporting-status will call convex.mutation(api.system.updateExportStatus) with the internal key, project ID and status "exporting".
Then, just as we did in the import background job, we're going to instantiate Octokit using auth and the githubToken we obtained. Let's get the authenticated user using Octokit: we can run a step get-github-user and return await octokit.rest.users.getAuthenticated(). This will return whichever user the GitHub token belongs to. And let's go ahead and alias it as user.

And now let's start by creating a new repository with auto_init, so we have an initial commit. This step will be called create-repo, and in here what we're going to do is return await octokit.rest.repos.createForAuthenticatedUser, passing the name (repoName), the description (description, or a very simple "Exported from Polaris"), private (true if visibility is set to "private") and auto_init set to true. Great.
Then let's wait for GitHub to initialize the repository, simply because auto-init is asynchronous on GitHub's side. So I'm going to sleep for three seconds. If this step fails too often for you, you can increase this to five or six seconds, but most of the time this works with three. In fact, this is what my debugging led me to — this is my conclusion about what happens, because I had an annoying bug with this.

It could be that I was doing something wrong at the time which I fixed later, because this never happened again, but still I want to show you that this is an asynchronous process. The next step is basically to get the initial commit SHA — however you pronounce it, right? But we can't do that if the repo has not yet initialized, so that's why we want to avoid any errors. But still, a background job will retry itself if it fails, so this isn't too important. Alright, so yes, the next step is to get the initial commit — we need it as the parent for our commit. So let's go ahead and create a step called get-initial-commit.

Make sure it's an asynchronous function, and basically what we're going to do in here is again call the Octokit SDK: await octokit.rest.git.getRef with owner: user.login — remember, user comes from the previous step, get-github-user, in which we return getAuthenticated, await it and alias it to user, so now we can use it here — repo: repoName and ref: "heads/main". And return the ref object's SHA.
Now let's go ahead and fetch all project files with storage URLs. So this will cover the binary file situation, right? Let me go ahead and prepare this step, fetch-project-files, and in here we are going to do the following.

We're going to return (await convex.query(api.system.getProjectFilesWithUrls), passing the internal key and project ID) cast as an array of FileWithUrl. So it looks like, since we're not getting any errors here, this cast is correct. I think if I change this to something else, you will see that it causes an error, which means that we have correctly extended it with the storageUrl part. So yes, if you remember, this system function basically loads all the files in a project and makes use of the storageId by turning it into a URL. In this scenario, when we want to export those binary files to GitHub, we need to convert them to a URL that GitHub can turn into a binary file and upload onto their system, because GitHub can't do much with our internal storage ID.
So that's why we're doing that. Now that we have all the files ready, we have to do the reverse of what we were doing in the import background job: we have to build a map of file IDs to their full paths. But luckily, this is actually a little bit simpler than the other direction. Alright.
So the buildFilePaths function accepts an array of files with their storage URLs, if there are any. We're going to prepare a fileMap using new Map — it maps an Id<"files"> to a FileWithUrl object. Then let's run a quick files.forEach and simply set each file in the fileMap, mapping its ID to the file itself. Then let's develop a method inside to get the full path of a file. This method will accept an entire FileWithUrl object and it will return a string. First things first: if there is no parentId, we return file.name. The point of this function is to return things like source/components/index.js.

But in case it's a root file, it's just index.js — that's why we just return the file name if there is no parent. Otherwise, open backticks in this return statement here and simply call getFullPath again with the parent, then a forward slash and file.name. Now let me just see the problem here — my apologies, it's getFullPath here.

So after we check whether there is no file.parentId, we have to get the parent, of course: fileMap.get(file.parentId). Then, if there is no parent, return file.name. My apologies, I'm kind of getting lost — these files are way too similar.

Alright, so to get the full path: we check if there is no parentId and do an early return. Otherwise, we attempt to get the parent from the fileMap using file.parentId — because we mapped all the files by their ID — and if it doesn't exist there, we again return file.name. This is kind of an edge case, right? Otherwise, we recursively call this function until it generates the full path string. So it will either early return or continue building the string.
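Here's the whole walk-the-parents idea as a self-contained sketch (the FileNode shape is a stripped-down stand-in for the project's FileWithUrl type):

```typescript
// Build full paths by recursively following parentId links through a
// Map of id→file, then key the result by full path.
type FileNode = { _id: string; name: string; parentId?: string };

function buildFilePaths(files: FileNode[]): Record<string, FileNode> {
  const fileMap = new Map(files.map((f) => [f._id, f]));

  const getFullPath = (file: FileNode): string => {
    if (!file.parentId) return file.name; // root-level entry
    const parent = fileMap.get(file.parentId);
    if (!parent) return file.name; // edge case: dangling parent reference
    return `${getFullPath(parent)}/${file.name}`;
  };

  const paths: Record<string, FileNode> = {};
  files.forEach((file) => {
    paths[getFullPath(file)] = file;
  });
  return paths;
}
```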
Great! Now that we have that, let's prepare an object which will store all the files with their paths. So files.forEach: simply assign each file to the paths object under its full path, and return paths. Then, outside of this function, let's actually get all the file paths by calling buildFilePaths. Then let's filter to only actual files, not folders: fileEntries is Object.entries(filePaths).filter — skip the first element of the entry, go into the object and compare file.type with "file". If it's true it keeps the entry, filtering out everything that isn't a file.

And then if that length is zero, let's throw a NonRetriableError, "No files to export". And now what we have to do is create blobs for each file. Let me fix the typo here. So let's go ahead and prepare this function, create-blobs.
So, the blob items have the following structure: path (a string), mode, type and sha. If you're curious about the magic number right here, feel free to Google it along with "blob" — you'll find a more in-depth explanation, but basically this is the mode that makes it valid in GitHub's file system. So simply create an empty array and give it this type; it's important that you don't forget the array type at the end. Alright. So now that we have fileEntries and filePaths, let's do a for loop.

So for each path and file of fileEntries, let's prepare their content to just be a string, and let's prepare their encoding. Is it textual content? Then we're going to use "utf-8". Or is it a binary file?
In which case we're going to use "base64". First things first: if file.content is not undefined, that means this is a text file, so content = file.content. Else, if we have file.storageUrl, this is a binary file — fetch it and base64-encode it.

So let's go ahead and do that now. We can get a very quick response using ky.get on file.storageUrl. Once we get the response, we can turn it into a buffer using Buffer.from(await response.arrayBuffer()), and then we can store that into content by turning it into base64 — and let's set the encoding to "base64" in that case. Else — so if it's not a text file and not a binary file — we skip files with no content at all and continue.
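The branching above can be sketched as a small pure function. This is my own framing, not the course's exact code — fetchedBytes stands in for the Buffer produced by the ky.get(...).arrayBuffer() call:

```typescript
// Decide per file: text content stays utf-8, binary bytes get base64
// encoded, anything with neither is skipped (null).
type ExportFile = { content?: string; storageUrl?: string };

function pickEncoding(
  file: ExportFile,
  fetchedBytes?: Buffer
): { content: string; encoding: "utf-8" | "base64" } | null {
  if (file.content !== undefined) {
    return { content: file.content, encoding: "utf-8" };
  }
  if (file.storageUrl && fetchedBytes) {
    return { content: fetchedBytes.toString("base64"), encoding: "base64" };
  }
  return null; // no content at all — skip this file
}
```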
Alright. Now let's actually create the blob using Octokit: await octokit.rest.git.createBlob with owner: user.login, the repo, the content and the encoding, giving us data — the blob. And then let's push to our items array: the path, again the same mode as before, type "blob" and sha: blob.sha. And let's return items. There we go. Then let's check: if treeItems.length is zero, it means we failed to create any file blobs — we didn't export anything — so we throw, because this is a non-retriable error, right? Something went very wrong up there. But otherwise we are ready to start creating the tree. This is very simple: we are going to create a step called create-tree and simply return octokit.rest.git.createTree with owner: user.login, repo: repoName and tree: treeItems. Then we have to create the commit with the initial commit as the parent. So we are now creating a new commit, right? We've just pushed these files and we have to commit them.

That's how Git works. So again, just a very simple Octokit SDK call: octokit.rest.git.createCommit with the owner, repoName, a message — it can be whatever you want; we're going to use "Initial commit from Polaris" — the tree, and parents, which is the initial commit SHA. Great. Now let's update the main branch reference to point to our new commit.

That's again going to be a simple Octokit function. So a step called update-branch-ref: return await octokit.rest.git.updateRef with the owner, repository, ref: "heads/main", sha: commit.sha and force: true. And then the last step here: set the status to completed with the repo URL. So set-completed-status calls convex.mutation(api.system.updateExportStatus) with the internal key, project ID, status "completed" and finally the repo URL, using repo.html_url. So we know we have access to this from the start.
Actually, it's just empty, I believe. So let me find where we get the repo from — here it is. Great. Alright. That is it.

All we ought to do now is just a simple return here. There we go: success true, the repoUrl, and how many files were actually exported. Great. So that is our export-to-github function. Now we have to register these Inngest functions.
For that we have to go inside of app/api/inngest/route.ts. And let's go ahead and pass in importGithubRepo and exportToGithub — and that's it. At this point we can remove demoGenerate and demoError. Great! Now let's go ahead and add some UI so we can actually test this. So we're going to go inside of source, inside of features, inside of projects/components, and I'm going to create a new file called import-github-dialog.tsx. I'm going to import ky and HTTPError from ky; zod; toast from sonner; useRouter from next/navigation; useForm from @tanstack/react-form; useClerk from @clerk/nextjs. I'm going to import Button from components/ui/button, and Dialog, DialogContent, DialogDescription, DialogFooter, DialogHeader and DialogTitle from components/ui/dialog.

And then I'm going to import Input from components/ui/input, plus Field, FieldError and FieldLabel. Then I'm going to import Id from the Convex generated data model. Let's go ahead and define the form schema. It's very simply going to ask the user for the URL they want to import. As simple as that.
And thanks to our function — let me go ahead, is it here in the GitHub import route? I think it is. It is: parseGitHubUrl.

So this is why I've developed it this way: we allow the user to just enter a URL, right? And then on the backend, we're just going to extract the owner and the repository. So I think it's a useful function. Feel free to copy it from the source code, simply because it's a bit heavy to type out.
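To give you the idea without retyping the real helper, here's an illustrative sketch of such a parser — this is not the exact function from the source code, just one way to pull owner and repo out of common GitHub URL shapes:

```typescript
// Extract { owner, repo } from URLs like:
//   https://github.com/vercel/next.js
//   https://github.com/foo/bar.git
//   https://github.com/foo/bar/tree/main
// Returns null when the string doesn't look like a GitHub repo URL.
function parseGitHubUrl(url: string): { owner: string; repo: string } | null {
  const match = url.match(
    /github\.com\/([^/\s]+)\/([^/\s#?]+?)(?:\.git)?(?:[/#?].*)?$/
  );
  if (!match) return null;
  return { owner: match[1], repo: match[2] };
}
```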
So this will make the user experience easier — we do the hard work for them. Alright, so the props for this component are going to be open and onOpenChange, and let's export the component: ImportGithubDialog accepts open and onOpenChange. Let's prepare a router from useRouter and openUserProfile from useClerk. We're going to use this in case we get an error that the user doesn't have their GitHub connected. Because remember, while we do allow GitHub login, you can also enable Google login or a bunch of other ones, even email and password, right? But the good news is the user can always connect additional ones from their account settings — Clerk makes that very, very easy.

Now let's go ahead and define the form. The form will start with defaultValues, which is just an empty url. The validators are going to be onSubmit, which will simply use the form schema. And then we'll develop the actual onSubmit asynchronous method, which gives us access to the value, and what we're going to do is initiate a POST request. So let's extract projectId from await ky.post("/api/github/import"). Don't misspell this.

This is not type-safe — you can write whatever you want here — so just make sure that you actually have /api/github/import, no typos anywhere. Alright, and as the body of this POST request we're going to add json: { url: value.url } — so exactly what the user writes. We're going to parse the response back as JSON.
And let's go ahead and write what we expect back, which is success (a boolean) and projectId (an Id<"projects">). And what else do we expect? Also the eventId, which is a string. Alright, so let's be correct for our frontend here and write exactly what we expect back.

So after projectId, it's eventId, which is a string. Great. At this point, we can already send toast.success("Importing repository") — since it's a background job, it didn't finish, it just started. And let's close this dialog and reset the form. Alright. And after that, what I want to do is also a router.push to /projects/projectId, simply so we can immediately redirect the user there.
Let's wrap this inside a try/catch, and in the catch let's grab the error and check if the error is an instance of HTTPError. Let's extract the body of the error using await error.response.json(), and let's add types here: error is a string. And now we can check if body?.error?.includes("GitHub not connected"). In that case, we can show toast.error("GitHub account not connected") with an action whose label is "Connect" and whose onClick is openUserProfile. So if this error happens, the user will simply get a subtle toast which says their GitHub account is not connected, plus an action — so down here is where it's going to happen — to open the Clerk user profile, which will basically just trigger "Manage account" and allow the user to connect another account.
Alright, and at this point I'm just going to indent my code in the try here. And for this part, "GitHub not connected" — so again, be careful here: do we throw that error? Here it is.

"GitHub not connected". For example, here is a potential bug: I'm not sure this would work now, because my "GitHub" here is capitalized, but it's not there. So make sure to match the capitalization here in the includes, because it's checking for a string. You can see it's a fragile way to do it.

You could do it with a very specific status code, or maybe by adding a code field here, which could be an enum — like a "GitHub missing" code — and then you check for that instead of a string. But of course, that's for actual production problems; for tutorial's sake we can just check for a string. Just make sure you're actually throwing that exact part of the string so you can catch it here. Alright. And after we show that error, we also ought to close the dialog as well. Great.
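A sketch of that sturdier variant — the code values below are made up for illustration, not something the API currently returns: the server sends a machine-readable code next to the human message, and the client branches on the code instead of substring-matching the text.

```typescript
// Hypothetical error body: a stable code alongside the display message.
type ApiErrorBody = {
  error: string;
  code?: "GITHUB_NOT_CONNECTED" | "UNKNOWN";
};

// Branch on the code, not the wording — renaming or recapitalizing the
// message can no longer break the check.
function isGithubNotConnected(body: ApiErrorBody | undefined): boolean {
  return body?.code === "GITHUB_NOT_CONNECTED";
}
```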
And then finally, outside of this if clause, let's just do toast.error("Unable to import repository. Please check the URL and try again."). There's a chance the error is unrelated to the GitHub account. Since we do an early return above, we don't have to put this inside an else. Alright, and now let's go ahead and actually build the UI.

So let's add a return here: Dialog with open and onOpenChange. Let's add DialogContent, DialogHeader and DialogTitle, then DialogDescription: enter a GitHub repository URL to import — a new project will be created with the repository contents. For the actual form, we're going to go outside of the DialogHeader and define the form element with onSubmit, which prevents the default and calls form.handleSubmit(). Inside, let's do form.Field. So what's the deal here? I don't think I've explained this previously. The lowercase form in this case is just a native HTML form element, while form.Field is referring to this form from the hook, right? So don't be confused about that: form.Field is not native HTML, it's a specific component that's exported through the hook.
That's why we can access it this way. It just coincidentally matches the native element perfectly. I actually like it, but it is a bit confusing when you don't understand what's going on, because this also feels like it's native. It's not — it comes from the hook.

Alright. So form.Field with the name "url" will have access to that field's properties this way. And then in here we can immediately check if it is invalid. So we're going to store the invalid state using field.state.meta.isTouched and not field.state.meta.isValid. Alright.

Now we can finally define how the field is going to look. So let's do Field with data-invalid set to isInvalid — basically just an accessibility attribute here. And let's define the FieldLabel with htmlFor, so it has an accessibility attribute as well.

The actual label text is "Repository URL". And then let's define the Input. The Input has an id of field.name, a name of field.name, a value of field.state.value, onBlur of field.handleBlur, onChange of field.handleChange with the event target value, aria-invalid set to isInvalid, and a placeholder explaining to the user the structure we expect.
Then our backend parseGitHubUrl function will extract the owner and the repository and pass them as separate keys to the relevant Inngest background jobs, which call Octokit further on. In case there's an error in the actual field, let's display it by checking isInvalid and rendering the FieldError, passing the errors with field.state.meta.errors. Great, so that marks the end of the form field. All we have to do now is create the DialogFooter, with a margin-top-4 class name, which will simply give us two buttons. The first button will be of type "button".
This is very important: this button is not used for submitting. That's why we explicitly give it type="button", a variant of outline, and onClick of onOpenChange(false). Basically, this is used to close the modal — to cancel it. And for the submit one, we're going to access that through a form.Subscribe element.

The Subscribe element will have a selector which uses the state and returns an array: state.canSubmit and state.isSubmitting. Then we can work with those two fields using the following syntax — so again, canSubmit and isSubmitting. And we're very simply going to return a button of type "submit", which is disabled if not canSubmit or isSubmitting. If it is submitting, display "Importing", otherwise "Import".
:55 So, to make it easier for you to read, I'm going to collapse it a bit. There we go. Great, that is it for import GitHub dialog. And now the last UI component we need to create before we test this out is the export popover. So I'm going to go ahead and copy and paste this since they are somewhat similar and rename the copy to export popover.tsx.
:25 Double-click to make sure you are working inside of export-popover, and let's start by checking our imports. I'm going to add an overall import for React since we're going to need it. ky is good, zod, toast; we're not going to need useRouter, so we can remove that. We will need useForm and useClerk, and for the icons we're going to need CheckCheckIcon, CheckCircle2Icon, ExternalLinkIcon, LoaderIcon, and XCircleIcon. For the components, we're going to use Button. We are not going to use Dialog — instead we're going to use Popover: Popover, PopoverContent, and PopoverTrigger. The Input can stay, we're going to have Field, FieldError, and FieldLabel, and we're going to need two more components.
:28 Besides that, we're going to have Select, SelectContent, SelectItem, SelectTrigger, SelectValue, and Textarea. Then I'm also going to import a hook called useProject from hooks/use-projects. We already have Id from the generated data model. And let's also add an icon from react-icons/fa — FaGithub. All right.
:00 Now let's modify the form schema. The form schema will have a field called repoName: a string with a minimum length of one, a maximum of 100, and a regex allowing only alphanumeric characters, hyphens, underscores, and dots — basically the same rules that GitHub enforces. So a very simple regex here.
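In code, that regex and the length limits might look like this — my paraphrase of GitHub's naming rules, not an official specification:

```typescript
// Approximation of GitHub's repository-name rules: 1-100 characters,
// restricted to letters, digits, hyphens, underscores, and dots.
const REPO_NAME_REGEX = /^[A-Za-z0-9._-]+$/;

export function isValidRepoName(name: string): boolean {
  return name.length >= 1 && name.length <= 100 && REPO_NAME_REGEX.test(name);
}
```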
:27 You don't need to add it, but it will prevent the user from trying to submit an incorrect name. The visibility prop is going to be an enum of public and private, and then the description, which has a maximum length of 350 — anything beyond that is too long. All right. For the props, the only thing we're going to need is the projectId, and the interface is going to be called ExportPopoverProps. Then let's rename the export from ImportGithubDialog to ExportPopover.
:05 ExportPopover simply destructures the projectId from those same props. Since we don't use the router hook, we no longer need it; instead we can load the project, and we can define a simple useState from React.useState — open and setOpen. Then let's keep track of the project's export status and export repo URL. Since Convex is a real-time database, this way we can track the current status of the background job and whether we received a final repo URL we can visit.
:46 Now on to the form. The form will have three different values here. For the repository name we're going to attempt to load the current project's name, but since the rules for what we allow users to name a project differ from what GitHub accepts, we have to use .replace and only keep alphanumeric characters, hyphens, underscores, and dots. You don't have to repeat this regex at all — you can just use a fallback like this — but it will prevent any problems from happening.
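That sanitizing .replace could be sketched as a small helper — the function name and fallback value here are mine, assumed for illustration:

```typescript
// Turn an arbitrary project name into a GitHub-compatible repository name by
// replacing every disallowed character with a hyphen and capping the length.
export function toGitHubRepoName(projectName: string): string {
  const sanitized = projectName.replace(/[^A-Za-z0-9._-]/g, "-").slice(0, 100);
  return sanitized || "my-project"; // hypothetical fallback when nothing survives
}
```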
:25 Visibility will fall back to private, and we're going to cast it as the only two enum values we accept; description will be empty. The validator's object stays the same. Now the onSubmit is going to be a little different. In here we're going to call api/github/export — again, make sure you didn't misspell this, so just double-check your import of api/github/export.
:58 All right. The JSON it accepts is a little bit different. It's not url — instead, it's the projectId; repoName, which is value.repoName; visibility, value.visibility; and description, value.description or undefined. And the url field will not be needed at all.
:27 There we go. So now there should be no problems here. For the JSON, we don't really care. We don't have to. We're just initiating.
:36 We don't really care about the result itself. We can leave the toast message, which will say "Export started" — we can send a success message immediately. Then let's go into the catch and make sure to check for the exact same "GitHub not connected" error, allow the user to connect, and change this setOpen to false. The reason we are not doing setOpen(false) on success here is simply that it's a different UI — you will see.
:17 But just in case you were wondering, hey, why are we not closing it here? Because we are closing the import one. Because that's a dialog, this is a popover. So it's a little bit different. Okay, what I want to do now is just double check that this error actually works.
:33 For that we can again open the export route.ts and check "GitHub not connected". Make sure you are throwing this, make sure the capitalization is correct, and make sure it's the exact same string you're checking here. Great. And instead of the toast error saying "Unable to import repository", it will be "Unable to export repository".
:54 This time the failure isn't necessarily due to the URL — it can be many things — so we're just going to say "Unable to export repository". All right. For the return here, I'm not going to delete anything just yet.
:09 Because there are a few more functions we have to develop. The first function is handleCancelExport, which is wired to a button to cancel the export, so it's going to call api/github/export/cancel. Make sure it actually exists — and now that I look at it, mine is actually incorrect. My cancel route is in a different place here. So yes, I'm going to drag my cancel route and put it inside of export, because that's where I meant to add it.
:50 So yes, it should be API GitHub export cancel. My apologies, I think I've missed this completely. Alright, so now this makes sense. API GitHub export cancel, allowing us to cancel an export. Then let's go ahead and add a function to reset the export.
:12 Again I think we're going to have to move this. So API GitHub export reset. Let me see inside of my API here. Yes, let's move reset and put it inside of export folder because that's where I meant to do it. I just completely forgot, my apologies.
:29 So yes, because both of these entirely refer to exporting. Perfect! Now let's develop a function called renderContent. If exportStatus is "exporting", we're just going to display a div with class names flex, flex-col, items-center, and gap-3; a LoaderIcon with class names size-6, animate-spin, and text-muted-foreground; a paragraph with the text "Exporting to GitHub" and class names text-sm and text-muted-foreground; and finally a button to cancel it. The button will have a size of small, a variant of outline, a class name of w-full, and an onClick of handleCancelExport.
:26 And the label, cancel. Alright. So that is for that case. Now let's do if export status is completed and if we have export repository URL. In that case, so let's go ahead and copy the outer div since that stays the same.
:51 The only thing we're going to change is the icon, which will be CheckCircle2Icon with size-6 and text-emerald-500 to give it a nice greenish color. Beneath it, a small description, "Repository created", with text-sm and font-medium class names. Beneath that, another text with text-xs, text-muted-foreground, and text-center: "Your project has been exported to GitHub". Then let's create a div with class names flex, flex-col, w-full, and gap-2.
:25 In here let's add a button to open that GitHub repository. This button has size small, a class name of w-full, and the asChild property, and inside it an anchor with an href and a target of _blank. I think we can do this with a normal Link, though — let me see. We just have to import Link from next/link.
:52 I think this should work just fine. Yes. And external link icon and view on GitHub label. Now, we're also gonna add a button to reset the entire thing, right? So once the link is shown, view on GitHub, next to it, this button will serve as the reset button.
:17 And by reset, we don't mean we're going to delete it from GitHub. No — the user now knows, hey, that's the link, go to your GitHub and maintain it there. But click this button if you want to change the repository, right? If you want to export it again somewhere else. So that's a button with size small, variant outline, class name w-full, an onClick of handleResetExport, and a "Close" label.
:43 Great. Now, in case the export fails, we need to display an error in that case. So we're gonna display something very similar to the first one to exporting. So let's just go ahead and copy this entire thing here. And let's just paste it here.
:07 Instead of the LoaderIcon, it will have an XCircleIcon. It won't have animate-spin; instead, it will have text-rose-500. For the paragraph, we're just going to say "Unable to export" with text-sm and font-medium. And beneath that, text-xs, text-muted-foreground, and text-center.
:34 "Something went wrong, please try again." The button will have a size of small, a variant of outline, w-full, and an onClick of handleResetExport. So if the export fails, we allow the user to trigger a reset from here as well, so they can enter new information rather than just retry the same thing.
:09 So let's build the form here with an onSubmit that calls preventDefault and form.handleSubmit. Let's add a space-y-4 wrapper, and inside it a space-y-1 div. Let's add a heading, "Export to GitHub", with font-medium and text-sm. Beneath the heading we have a paragraph with text-sm and text-muted-foreground: "Export your project to a GitHub repository". Outside of that div, we're going to add our first form field, which will be used to enter the repository name.
:45 To access the field property we use the following syntax, and in here we can extract isInvalid into a constant by checking field.state.meta.isTouched and field.state.meta.isValid. Great. Then let's actually return the field. We're going to use the Field component and give it a data-invalid accessibility attribute. We're going to add a label which says "Repository name" with the htmlFor accessibility attribute, and then render the actual input with an id of field.name, a name of field.name, a value of field.state.value, onBlur of field.handleBlur, onChange of field.handleChange with event.target.value, another accessibility attribute for aria-invalid, and a placeholder indicating to the user how to name the project in a way compatible with GitHub's rules.
:56 And beneath that, let's go ahead and simply handle any errors using the field error and the prop errors field state meta errors. Great. Now, outside of that form field, let's go ahead and duplicate that and paste it here. So this one will be used for visibility. Alright.
:23 In this case we don't need the is invalid, we can immediately go ahead and return. So we can remove the... Actually I mean we can keep it, it doesn't really matter. Not too sure because this is a different component, sorry. I am gonna remove it actually.
:40 So the field label will simply say "Visibility". The control here is not going to be an input, so we can get rid of that, and the error too. The control will be a Select. Inside of this Select, let's give it a value of field.state.value and an onValueChange, which accepts a value that is either "public" or "private" and calls field.handleChange, passing in the value.
:12 Then in here, let's go ahead and do normal select composition, select trigger with the ID it needs, and select value with a placeholder, select visibility. Beneath the select trigger, we're going to render the select content with its select items, one for private and one for public. Make sure the value has the exact same value as you've defined everywhere else. Public, private in lowercase. So this needs to match what you've defined in your project's schema.
:48 Here it is, a casting, public or private. All right. And then the last item that we need is the description item. For that, again, you can copy the first one, repository name. I'm just gonna go ahead and add it here.
:07 Change it to description. The field itself can stay the same. Change the field label to be "Description". And instead of using the Input, we render the Textarea. The id is field.name; the name, value, onBlur, onChange, and aria-invalid all stay the same.
:31 The only things we're going to change are the placeholder — "A short description of the project" — and rows, set to 2. The error rendering stays the same. What's left is the submit button. So outside of the last form.Field, render a form.Subscribe with the usual selector of canSubmit and isSubmitting. We can access those fields through a syntax like this and then simply render the button inside: type of submit, size small, class name of w-full, disabled if you cannot submit or if you are submitting — and if you are submitting, show a different label, "Creating...", with the default label "Create repository".
:21 Great. Then let's create a function called getStatusIcon. Depending on the current status of the export, we are going to display different icons. For an exportStatus of "exporting", it's going to be an animated LoaderIcon. In case it's completed, it's going to be a CheckCheckIcon with a specific emerald color.
:53 If it's failed, it's going to be an XCircleIcon with a specific red color. Otherwise, it's going to be the regular FaGithub. All of them use the same size. Great! And now that we have that, there is only one more thing left to do.
:11 So delete the entire Dialog here and let's do a very simple Popover composition: open and onOpenChange. PopoverTrigger asChild. Then let's create a div with this class name: flex, items-center, gap-1.5, h-full, px-3, cursor-pointer, text-muted-foreground, border-l, and a hover background class. Inside, we're going to render getStatusIcon() and a span with class name text-sm saying "Export".
:56 The reason we have this super-specific class name is because this is that button — I was just copying the styles of the tabs that we have. So yes, when we render this, we're going to replace the old dummy Export button, which currently does nothing.
:22 And then outside of the PopoverTrigger, all we have to do is render the PopoverContent with a class name of w-80, align of start, and the actual renderContent() call. Perfect! We are ready to wire this up. The first component we're going to add is the import dialog. For that we're going to go inside of source/features/projects/components, into the projects view.
:55 Let's add an import for ImportGithubDialog. We've developed it in the same folder, so we can use a very short path here. Then let's add a state just beneath the command dialog: importDialogOpen, setImportDialogOpen. Then, in the useEffect here — so far we only checked for the letter K; now we're also going to check for the letter I, which will open the import dialog.
:35 Okay, make sure that we actually have the event listener and that it's cleaned up — I think this is all just fine. And now we have to render the ImportGithubDialog. We can do it just beneath the command dialog, with open of importDialogOpen and onOpenChange of setImportDialogOpen.
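The shortcut check inside that keydown listener can be factored into a tiny predicate. This is a hypothetical refactor of what the effect does, not the literal course code:

```typescript
// True when the event is Cmd/Ctrl plus the given letter - the same shape of
// check used for the command palette (K) and the import dialog (I).
export function matchesShortcut(
  e: { key: string; metaKey: boolean; ctrlKey: boolean },
  letter: string
): boolean {
  return (e.metaKey || e.ctrlKey) && e.key.toLowerCase() === letter.toLowerCase();
}
```

Inside the useEffect handler you would then call something like matchesShortcut(event, "i") and, when it matches, event.preventDefault() and setImportDialogOpen(true).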
:06 And while we can now open it with a shortcut, let's also give this other, empty button with the FaGithub icon an onClick of setImportDialogOpen(true). So right now, if you go right here and click on Import, it should show you "Import from GitHub — enter a GitHub repository URL to import. A new project will be created with the repository content."
:36 Amazing! And if you try something stupid, you will see you get a please enter a valid URL. Alright, so I will try this out, but I suggest that we try it out together. Let's just finish wiring up the UI components. So one more place we have to visit is the Source, Features, Projects, Components, Project ID, view.
:04 Let's import the ExportPopover we've just created — it's right here in the same folder — and now let's find the placeholder div that we have. Let me find it. Here it is: after the tab with the label Preview, you will find this div with class names flex-1 and justify-end, so that's good.
:31 But this is just a mock button — and notice the class name: it's exactly the one we've added, the same class name. So we can now delete it and just render ExportPopover with the projectId. Let's quickly go into a random project just to see if we can open that popover, which gives us a pre-filled repository name and the option to change the visibility and add a description. Perfect. Now I'm going to prepare a few repositories for us to test whether there are any bugs, starting with a random repository I have.
:18 This is a private repository, so I have to be logged in to try this. You can see the URL is github.com, my name, and then the repository name. And I would suggest removing the trailing /tree/main, so you just have the repository name here. And let's click Import.
:39 And now, of course, this is going to fail. The reason it's failing is that I don't even have Inngest running. My apologies. So let me go ahead and start it, and then we're going to see whether it works. I think I have to import once again.
:01 There we go — importing repository. You can see it's already creating files, so this is actually all working. I'm super impressed that we did this on the first try. You can see the status is importing, and — there we go, that's it. It was so fast and it worked so well that I'm in awe, and it's completed. So the first thing it did was clean the project, then it fetched the repository, created the folders — public, source, and source/components — then it created the files, set the completed status, and the project is finished.
:39 And in the preview here I think we should be able to also preview it. I think it's just a simple landing screen that I've generated with AI actually. One thing we didn't try is a binary file. So that's something that I'm yet to try. I'm just going to create a random repository or I'm going to attempt to fetch some public repository.
:03 All right, so I just waited and it installs. Yeah — you can see we can even preview it. And while we're here, we can try exporting this.
:12 Or if you want to have some more fun, go ahead and create a brand new one and simply create a simple Vite-plus-React to-do app, something like that. Wait for it to be created, and then we're going to try and export it. Now that this project has been completed with AI and I have a simple to-do here, let's try exporting. You can see that the name is already pre-filled with my random project name.
:42 I'm going to set it to private and let's set the description to be a test description. And let's click create repository. So export has started and you can see that I have a cancel button if I ever want to stop it. But let's take a look at what's actually happening here. So we're getting the GitHub user, we're creating repository, we're waiting for repository to initialize.
:05 You can see it took a few attempts. Then we get the initial commit, fetch the project files, create the blobs, create the tree, create the commit, update the branch, set the completed status, and that's it — we successfully exported to GitHub. Let's view this on GitHub, and here it is, the exact files, and you can see "Initial commit from Polaris". The only thing I'm not seeing is the readme — perhaps it still needs to be synced, or maybe we made some mistake.
:33 We will see, but that's honestly the least important part of this entire thing; what matters is that the files are actually in it. Perfect. So the only thing left to check is what's up with binary files. What I've prepared is a random image uploaded to one of my repositories here. I suggest you do that as well — you can use "Upload files" and just add an image.
:00 Make sure it's a JPEG or PNG — basically, make sure it's not an SVG file, because that's text, right? Make sure it's a JPEG or PNG or some other binary file that you have; I'm just using a simple Code with Antonio icon here. And what I'm going to do is copy the URL of this repository, which has that image, and import it here. So I'm going to paste it and click Import.
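The distinction I just made — SVG is text, JPEG and PNG are binary — is roughly what a null-byte heuristic captures. The isbinaryfile package we installed is considerably more thorough than this, so treat the sketch as an illustration of the idea only:

```typescript
// Naive binary check: if the first chunk of a file contains a NUL byte,
// treat the file as binary. Real detectors use additional signals.
export function looksBinary(bytes: Uint8Array, sampleSize = 512): boolean {
  const limit = Math.min(bytes.length, sampleSize);
  for (let i = 0; i < limit; i++) {
    if (bytes[i] === 0) return true;
  }
  return false;
}
```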
:27 So let's see what will happen. Will it succeed with that or not? Already I can see that there is something here and when we click on it we correctly see ToDo implement binary preview because we are not yet rendering this in any way. I'm more interested in what's here. So looks like this was successful.
:48 It successfully created the files. But let's take a look at Convex. Inside of my data here I have files, and so far we shouldn't have a single file with a storageId except one, which is called images.jpg. And if I go inside of the actual Files tab — this represents your storage — you can see that inside I actually have one file. And if I click download here... I'm not sure what this is. I think this might be some mistake, because this does not look like the file I added. But maybe it is, maybe it's not — I'm not exactly sure.
:33 I think it's because of this incorrect content type. I think something's wrong with the extension. But looks like it was uploaded. What I want to try now is try exporting it. So, test file 123 or test binary.
:50 So I'm just exporting the exact same repository now, which has images.jpg. So I can see if the binary file was transferred. Perhaps there is some bug happening here. We'll leave that to the next chapter. Don't worry.
:05 It's already been two hours. But I just want to see if we're doing a mistake or not. All right. So I can now view the repository. And I have it here.
:18 And okay — so it's perfectly fine. The image was successfully uploaded, right? You can see that this is now test-binary-123, and whatever the image was in the repository where I manually added it via upload, it was preserved through the Polaris project, through our file storage, all the way to a new repository.
:44 All right, so Let me go ahead. I'm not exactly sure why when I open it, it's in this weird format. Okay, when I open it on my laptop, it actually just opens a normal image. I should have just opened it. So everything is perfectly fine.
:58 We implemented everything correctly. Obviously, we don't have the actual preview here, but that's easy — we're just going to show an image, or if we can't show the content, we're going to say the editor doesn't support this type of file. Amazing, amazing job. As you can see, when you export something, you can keep it in this state, and it will even persist through a refresh, I believe. But if you want to restart it, you can click Close — and we have a bug. Oh yes, we moved those files and I forgot about that: inside of source/app/api/github/export we have cancel and reset. Open both of them — I think both of them have errors.
:43 We have to fix the imports in each of them, going one level higher. Okay, so we fixed that — an easy fix. I think we can now try again; let me refresh. And I probably have to rm -rf .next.
:05 So I clear cache and do npm run dev again and then restart because we just fixed both, right? Reset and cancel. I think it was just cache that was the problem. And if we try close now, there we go. You can see how it entirely resets.
:28 Amazing, amazing job. Let's go ahead and review and merge these changes. For chapter 15, I'm first going to do git checkout -b 15-github-import-export, then git add ., git commit -m "github import and export", and then git push -u origin 15-github-import-export. Perfect — you can see that we are now on that branch here. Then I'm going to go onto the Polaris repository, open a pull request, and let's review it. And here is the summary by CodeRabbit.
:18 New features. We added import GitHub repositories directly into projects. We can export projects to GitHub with customizable repository settings. We have real-time status tracking for import and export operations and cancel export functionality with ability to reset export status. So let's take a look at the comments here.
:41 So the first comment is about the cleanup function. Right now, what we do is simply load all the files in a project and run them through a loop to delete them, same with their storage. But you can see that CodeRabbit reads the Convex documentation and knows that Convex enforces a 100,000-operation limit per mutation — that is, reads and writes combined.
:06 So projects with over 50,000 files will fail. Obviously, the solution is to implement a batched cleanup, as it said. For our tutorial purposes this is perfectly fine, but you should be aware that there are limits to Convex mutations — just as wherever you deploy a normal API route, there are limits to how long it can run. So perhaps this could be a job for a background job or Convex workflows, but for now this is okay.
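A first step toward that batched cleanup is a generic chunking helper; each chunk could then be deleted by its own scheduled mutation. The scheduling details are up to you — this is only a sketch:

```typescript
// Split a list of ids into fixed-size batches so each cleanup mutation
// stays well under Convex's per-mutation operation limit.
export function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```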
:43 But you should be aware that there is a limit. I try my best to bring this project as close to production as possible. I think you notice that that's why you watch these types of videos, right? But I have to compromise here and there. In this case, I didn't create batching.
:03 So I hope that when you run this project, if you do it in production, please be aware of that and of course, you know, fix it. It's a nice challenge for yourself. Great. In here it's warning us about potentially affected peer dependencies of Octokit. So I'm not really aware of this, but yes, you could run npm audit to ensure that there are no security issues.
:34 I think everything is mostly okay. In here, it says the default values won't update when the project loads, but we've tested this and it does work correctly. So I think it's confused because of our optional chaining here. I think that's what confuses it because yes, usually this wouldn't update once it loads. But I think ours is already loaded at this time.
:02 So perhaps we don't even need the optional chaining. Yes, and about this, I will try to get more information in the next chapter if it's something serious, but I'm pretty sure it is not. All right. And in here, it's basically telling us that we have a very broad catch here. So we could mask any real errors.
:26 Again, for tutorial purposes it's okay. For production, you would probably want something a bit less broad, because right now we don't even care why this fails.
:44 This can fail because of the branch, which is what we assume, but it can also be a million other things. It can be invalid owner, invalid repo, right? So that's what it's complaining about. It's the fact that the moment this fails, we just assume it's the branch, but it can be other things. So in production, you will probably check, you know, for the type of error and what the error returns and then do something.
:07 Same thing here — we do silent error handling. You can see the catch just does a console.error, so we don't really know which files failed, and we don't keep track of anything. This is a good opportunity to use Sentry logging. It could be very useful to keep track of files that fail, so you can then work out whether only binary files are failing, or only a specific extension. This is where Sentry could come in very useful, so you can analyze which files cause problems the most. Other than that, great, great job.
:47 Let's merge this pull request. Then git checkout main, git pull origin main. And there we go — that marks the end of this chapter. Let's just confirm everything here is merged.
:05 Let's go ahead and check our graph. There we go. 14 and then 15. Perfect. And yes, that is all.
:12 We've connected GitHub OAuth via Clerk. We built a complete import system with binary file support. We created background export jobs using Inngest workflows. We implemented real-time status tracking with UI components. And we handled repository creation and Git API operations. Amazing, amazing job, and see you in the next chapter.