How I built Makeit.ai

January 11, 2025 (2mo ago)

I built this when I had some spare time to experiment with AI models on Replicate.

Building the UI was easy with shadcn; it's really great for putting together forms and other UI elements quickly.

There are two sign-in methods: Google OAuth and magic links sent to your email. However, if a user signs up with one method and later tries to sign in with the other, NextAuth won't allow it, for security reasons. To solve this, I added a "last used" tag to the sign-in methods so users can easily sign in again with the right one. The solution was simple: store the sign-in method in localStorage when the user first signs in or signs up.
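A minimal sketch of that "last used" logic; the storage key, type, and function names are my assumptions, not the app's actual code. It accepts any Storage-like object so it isn't tied to the browser:

```typescript
type SignInMethod = "google" | "magic-link";

const LAST_USED_KEY = "last-used-sign-in-method"; // hypothetical key name

// Any object with getItem/setItem works, e.g. window.localStorage.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function rememberSignInMethod(storage: StorageLike, method: SignInMethod): void {
  storage.setItem(LAST_USED_KEY, method);
}

function getLastUsedSignInMethod(storage: StorageLike): SignInMethod | null {
  const value = storage.getItem(LAST_USED_KEY);
  // Guard against stale or tampered values in localStorage.
  return value === "google" || value === "magic-link" ? value : null;
}
```

In the browser you'd pass `window.localStorage` and render the "last used" tag next to whichever sign-in button matches the stored value.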

I integrated several services: Stripe for payments, Mailgun for emails, Replicate for AI models, Cloudflare R2 for storing generated and user-uploaded images, and MongoDB as the database. I used Redis for rate-limiting certain API endpoints, specifically the magic link email sign-in handler, to prevent spam that could drive up Mailgun costs.
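The post doesn't say which rate-limiting algorithm was used, so here is a sketch of a simple fixed-window limiter for the magic-link endpoint, assuming a Redis-like client with `incr`/`expire`. The key format, limit, and window are my assumptions:

```typescript
interface CounterStore {
  incr(key: string): Promise<number>;
  expire(key: string, seconds: number): Promise<unknown>;
}

async function allowMagicLinkRequest(
  store: CounterStore,
  email: string,
  limit = 3,          // max emails per window (assumed)
  windowSeconds = 600 // 10-minute window (assumed)
): Promise<boolean> {
  const key = `rate:magic-link:${email}`;
  const count = await store.incr(key);
  if (count === 1) {
    // First hit in this window: start the expiry clock.
    await store.expire(key, windowSeconds);
  }
  return count <= limit;
}
```

The handler would call this before asking Mailgun to send anything, and return a 429 when it comes back `false`.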

Redis was also useful for short-lived state where MongoDB wasn't a good fit. Where did I need this?

This will be a long example. I had two AI models: one to design room interiors and another to upscale the generated images. This required handling webhook calls from both models for their prediction completion, success, and failure events.

If the webhook event payload's status was successful, I'd proceed with processing. Otherwise, I'd record the failed event in Redis and update the MongoDB document with the failed-image URL to display in the UI.
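A sketch of that failure branch. The Redis client, the document updater, and the failed-image URL are injected here, and their shapes are my assumptions about the real handler:

```typescript
interface RedisLike {
  set(key: string, value: string): Promise<unknown>;
}

async function handleWebhookFailure(
  body: { id: string; status: string },
  redis: RedisLike,
  markDocFailed: (predictionId: string, imageUrl: string) => Promise<void>,
  failedImageUrl: string
): Promise<boolean> {
  if (body.status === "succeeded") return false; // success path handled elsewhere
  // Record the failed event in Redis...
  await redis.set(`prediction:${body.id}:status`, "failed");
  // ...and update the MongoDB document with the failed-image URL for the UI.
  await markDocFailed(body.id, failedImageUrl);
  return true;
}
```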

When receiving a webhook, I check whether the request is from the upscale model. If not, I send the generated image to the upscale model for upscaling. But I had to make sure that couldn't happen more than once: Replicate delivers webhook events multiple times to guarantee delivery, and each duplicate delivery would have created another upscale prediction.

So I check Redis for the prediction's status. If it's already "processing" or "upscaling", I ignore the event; otherwise I store the prediction ID as a key with the status as its value (or both in a set), starting with "processing".

// Skip duplicate webhook deliveries, then set the initial status
const status = await redis.get(`prediction:${body.id}:status`);
if (status === "processing" || status === "upscaling") return;
await redis.set(`prediction:${body.id}:status`, "processing");

Then send a prediction request to the upscale AI model using the API endpoint.

If the request succeeds, update the status to "upscaling" with a 24-hour expiration on the Redis key. This prevents multiple upscaling requests: I check whether the prediction's status is already "processing" or "upscaling" and decline any additional upscale webhook events from Replicate.

// Update Redis with the new prediction ID and status
await redis.set(`prediction:${body.id}:status`, "upscaling");
await redis.expire(`prediction:${body.id}:status`, SECONDS_IN_24_HOURS);
// Map the upscale prediction back to the original prediction
await redis.set(`prediction:${response.id}:initial_id`, body.id);
await redis.expire(`prediction:${response.id}:initial_id`, SECONDS_IN_24_HOURS);

Then I updated the MongoDB document: the document stored the prediction ID from the first prediction request, and I replaced it with the upscale prediction ID. That way, when the webhook for the upscaled image arrived, I could find the document by the upscale model's prediction ID and update it with the image URL.
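The ID swap might look roughly like this, assuming a MongoDB-style collection with `updateOne`; the collection shape and the `predictionId` field name are hypothetical:

```typescript
interface UpdateResult { matchedCount: number }
interface CollectionLike {
  updateOne(filter: object, update: object): Promise<UpdateResult>;
}

async function swapToUpscaleId(
  generations: CollectionLike,
  initialId: string,
  upscaleId: string
): Promise<boolean> {
  // Find the doc by the original prediction ID and overwrite it with the
  // upscale prediction ID, so the upscale webhook can locate the same doc.
  const result = await generations.updateOne(
    { predictionId: initialId },
    { $set: { predictionId: upscaleId } }
  );
  return result.matchedCount === 1;
}
```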

The upscaled image could easily exceed 5 or 10 MB, so I had to compress it while still maintaining image quality using the sharp npm package, then upload it to the R2 bucket and store the image URL in MongoDB.
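A sketch of that compress-then-upload pipeline. In the real app the compress step would be sharp (something like `sharp(buffer).webp({ quality: 80 }).toBuffer()`) and the upload step an S3-compatible PUT to R2; here both are injected so only the orchestration is shown. The key format and function names are my assumptions:

```typescript
type Compress = (image: Uint8Array) => Promise<Uint8Array>;
type Upload = (key: string, image: Uint8Array) => Promise<string>; // returns public URL
type SaveUrl = (predictionId: string, url: string) => Promise<void>;

async function storeUpscaledImage(
  predictionId: string,
  rawImage: Uint8Array,
  compress: Compress,
  upload: Upload,
  saveUrl: SaveUrl
): Promise<string> {
  // Compress first so R2 stores the smaller file and the UI loads faster.
  const compressed = await compress(rawImage);
  const url = await upload(`upscaled/${predictionId}.webp`, compressed);
  // Persist the public URL on the prediction's MongoDB document.
  await saveUrl(predictionId, url);
  return url;
}
```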

On the client side, I wondered: how would I update the UI when the prediction finished upscaling or failed?

I could use polling, Server-Sent Events (SSE), or WebSockets (WS). But since I was on a serverless architecture, SSE and WS weren't options: both need a long-lived connection, which is only possible with a dedicated server.

At first, I polled for a single image generation with exponential backoff, doubling the delay each time.
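A rough sketch of that backoff loop. The status values, base delay, and attempt cap are my assumptions; the post doesn't show the actual status endpoint:

```typescript
type Status = "processing" | "upscaling" | "succeeded" | "failed";

async function pollPrediction(
  checkStatus: () => Promise<Status>, // e.g. fetches a status API route
  sleep: (ms: number) => Promise<void>,
  baseDelayMs = 1000, // assumed starting delay
  maxAttempts = 8     // assumed cap so we don't poll forever
): Promise<Status> {
  let delay = baseDelayMs;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status === "succeeded" || status === "failed") return status;
    await sleep(delay);
    delay *= 2; // exponential backoff: double the delay each time
  }
  return "failed"; // treat exhausting the cap as a failure (assumption)
}
```

In the browser, `sleep` is just `new Promise((r) => setTimeout(r, ms))`.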

But then I realized a user could send 4, 8, or 16 prediction requests, and I had to display each generated image as soon as its prediction succeeded and was updated in the database. So I changed the polling logic a bit: I store the prediction IDs of the in-flight requests in an array in state, and based on the user's tier, I limit further prediction requests using the array's length.

But every time the UI updated with a generated image, the state would change and the polling callback would still see the old array, so it wouldn't update the UI for the other image requests. So I used useRef to keep a reference to that array, read that reference inside the polling function when updating the UI with generated images, and updated the state via the previous state's reference.
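Since the useRef part is React-specific, here's the core tracking logic as a framework-agnostic sketch; the class name and the tier limit are hypothetical:

```typescript
class PendingPredictions {
  private ids: string[] = [];

  constructor(private maxConcurrent: number) {}

  // Mirrors the tier check: refuse new requests once the array is full.
  tryAdd(predictionId: string): boolean {
    if (this.ids.length >= this.maxConcurrent) return false;
    this.ids.push(predictionId);
    return true;
  }

  // Called from the polling loop when a prediction succeeds or fails.
  complete(predictionId: string): void {
    this.ids = this.ids.filter((id) => id !== predictionId);
  }

  get pending(): readonly string[] {
    return this.ids;
  }
}
```

In the component, a useRef would point at this tracker (or the raw array) so the polling callback always reads the latest IDs instead of a stale closure, while setState uses the functional previous-state form.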

And the dashboard UI is just a form built with shadcn components and an image gallery for the generated interior images, plus some pages for billing, account info, and settings like account deletion.