GenAI Series: My Tryst with AI-Assisted "Vibe Coding"
Building a Web App from Idea to Deployment (And the Reality Check) - Process, Key Takeaways and The Job Market
In the last few posts on GenAI, I talked about the GenAI architectural journey and prompt engineering fundamentals.
For this third installment, I wanted to take a more practical approach by experimenting with "vibe coding" - the trending practice of developing applications primarily through AI code generation.
Setting aside my regular work tasks, I challenged myself to create something completely new: a web application that helps users determine whether it's financially advantageous to pay off their mortgage early or invest that money instead. I planned to generate the core React code using Claude, then deploy the finished product to a free hosting service like Vercel, all while minimizing manual coding.
Let's see how this AI-driven journey unfolded.
Generating the Core Application with Claude
My first step involved prompting Claude to generate the React code for the mortgage vs. investment comparison. I approached this iteratively, building up the functionality piece by piece. Here's a glimpse into the key queries I used.
Query 1: how to calculate amortized principal and interest on a loan table
Query 2: Create two scenarios. One where I invest 200 GBP on top of the payment amount (at 7 percent) and one in which I pay off 200 as a prepayment of the loan at 4.18 percent. The loan amount is 450000 and the loan tenure is 30 years.
Query 3: Please add what would happen to the 2395.33 if I keep on making the payment towards investment for the remaining 4 yr 6 mo. Also give me total equity at end of loan and at end of 30 years.
Query 4: add Total Equity at Loan Payoff in scenario 2 as well at the time loan in scenario 1 got paid off. Give me the Equity value as mortgage paid value + investment value.
Query 5: Why is final value £363,450.44? and total equity 813450.44 looks wrong somehow
Query 6: but the final row is still wrong
Query 7: still wrong final row "Final"
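For reference, the amortization math Query 1 asks about boils down to the standard annuity formula M = P * r / (1 - (1 + r)^-n). Here's a minimal JavaScript sketch of it (my own, not Claude's actual output; the function names are mine):

```javascript
// Monthly payment on a fully amortized loan:
// M = P * r / (1 - (1 + r)^-n), where r is the monthly rate and n the month count.
function monthlyPayment(principal, annualRate, years) {
  const r = annualRate / 12;
  const n = years * 12;
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// Build the amortization table: each month, interest accrues on the
// remaining balance and the rest of the payment reduces the principal.
function amortizationTable(principal, annualRate, years) {
  const payment = monthlyPayment(principal, annualRate, years);
  const r = annualRate / 12;
  const rows = [];
  let balance = principal;
  for (let month = 1; month <= years * 12; month++) {
    const interest = balance * r;
    const principalPaid = payment - interest;
    balance -= principalPaid;
    rows.push({ month, payment, interest, principalPaid, balance: Math.max(balance, 0) });
  }
  return rows;
}
```

On the post's numbers (450,000 at 4.18% over 30 years) this gives a payment of roughly £2,195 a month, which is where the £2,395.33 in Query 3 comes from once the extra £200 is added.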
As you can see from the progression (and you can view the full chat history here [Link to Claude Chat 1]), the code generated wasn't perfect from the start. It required significant back-and-forth, prompting for corrections, and manual inspection of the output. Debugging and validating the financial calculations were crucial.
The chat grew quite long, and I found myself copying the code and pasting it into a new chat window to continue iterating and refining. My second attempt with Claude [Link to Claude Chat 2] also demanded multiple rounds of testing to get the calculations looking correct.
This experience immediately highlighted a key lesson: while AI can quickly generate code, it's often not immediately right. Domain knowledge (understanding amortization and investment growth) and rigorous testing are absolutely essential. You cannot simply trust the output blindly.
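To illustrate the kind of calculation that needed validating, here's a rough model of the Query 2 comparison - the extra money invested at 7% versus the same money used as a prepayment at 4.18%. This is my own simplified sketch, not the app's generated code, and it ignores the small leftover overpayment in the month the loan clears:

```javascript
// Compare two uses of an extra 200/month on an amortized loan:
// A) pay the minimum and invest the extra at investRate;
// B) prepay the loan with the extra, then invest the freed-up full
//    payment at investRate once the loan is gone.
function compareScenarios(principal, loanRate, investRate, years, extra) {
  const months = years * 12;
  const rL = loanRate / 12;
  const rI = investRate / 12;
  const payment = (principal * rL) / (1 - Math.pow(1 + rL, -months));

  let balA = principal, investA = 0;
  let balB = principal, investB = 0, payoffMonthB = null;
  for (let m = 1; m <= months; m++) {
    // Scenario A: minimum payment on the loan, extra goes to investments
    if (balA > 0) balA = Math.max(0, balA * (1 + rL) - payment);
    investA = investA * (1 + rI) + extra;
    // Scenario B: payment + extra to the loan; once paid off, invest everything
    if (balB > 0) {
      balB = balB * (1 + rL) - (payment + extra);
      if (balB <= 0) { balB = 0; payoffMonthB = m; } // overpayment remainder ignored
      investB *= 1 + rI;
    } else {
      investB = investB * (1 + rI) + payment + extra;
    }
  }
  return { payment, investA, investB, payoffMonthB };
}
```

With the post's inputs, the prepayment scenario clears the loan roughly four and a half years early, matching the "4 yr 6 mo" in Query 3; and since 7% exceeds 4.18%, the investing scenario comes out ahead in this simplified model.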
Claude offered a handy feature to publish the generated code artifact [Link to Claude Artifact], which was great for sharing the progress at that point.
However, during further manual testing of the published artifact, I discovered persistent issues, particularly with state management when switching between different views or inputs. Having pushed Claude Pro to its limits on this task, I switched gears and brought the code into Gemini-Pro-Flash to attempt the final corrections. This iteration with Gemini [Link to Gemini Chat] finally got me to a more stable code state, which you can explore on the Canvas [Link to Gemini Canvas] - feel free to play with the numbers there yourself. Here is how the final code preview looks; Gemini also made a few changes to the UI, as you can see below:
So, the core application code was finally in a decent place. Remarkably, the whole thing took me only about two hours, and the calculations now appear correct. But the journey wasn't over. The next challenge: deploying this artifact using Vercel, a platform I had zero prior experience with.
Deploying the App with Vercel (Starting from Scratch)
With the working code in hand, I turned back to AI for guidance on deployment. I opened a new chat and provided Claude with the JavaScript/React code, asking for step-by-step instructions for deploying on Vercel, requesting code modularization, and tests along the way.
The initial response [Link to Claude Deployment Chat] looked promising on the first pass. I decided to follow the instructions precisely, step by step.
First Attempt: Following the AI's Lead
I began with the initial commands provided by Claude:
npx create-react-app mortgage-calculator
cd mortgage-calculator
The commands ran, setting up a project structure, but a warning immediately popped up: create-react-app is deprecated. This was a perfect example of AI providing slightly outdated information.
I informed Claude about the deprecation. It offered alternatives: Vite and Next.js. With no prior experience with either, I initially chose Vite. However, the Vite-related code Claude generated seemed to get confused with the earlier create-react-app context (a form of hallucination).
Restart 1: Pivoting to Next.js
Realizing the initial path wasn't working smoothly and learning the importance of being precise with AI prompts (and managing chat context/limits), I started fresh in another chat. This time, I specifically asked Claude to help me create a Vercel app using Next.js with my React code, emphasizing the need for the latest best practices, modularization, tests, and very easy, detailed step-by-step instructions, starting with the absolute first step.
Can you help me create a Vercel app with next.js using the below React code. Use the latest and best frameworks. Give me very easy and detailed step-by-step instructions. Also modularize this code as necessary and write tests as well. Start with First Step.
This adjusted approach felt better, focusing on getting one clear instruction at a time and adapting as needed.
I followed Claude's detailed Next.js instructions (steps 1 through 12). However, I noticed that Claude hadn't included any steps for local testing in its initial deployment guide. Before moving on to the actual Vercel deployment (step 13), I paused and asked Claude for guidance on how to test the application locally.
Claude readily provided the necessary commands and advice. I successfully got the application running on my local machine at localhost:3000. It was a relief to see it come to life! As expected, running the application revealed a few errors. By simply feeding these error messages back into Claude and implementing the suggested fixes, I was able to resolve the issues and get the application working correctly in my local environment. (You can see this part of the debugging process in the continued deployment chat here: [Link to Claude Deployment Chat]).
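Condensed, the setup-and-test loop looks roughly like this (a sketch assuming standard Next.js and Vercel tooling, not Claude's verbatim instructions; mortgage-calculator is just my project name):

```shell
# Scaffold a Next.js project (the maintained replacement for create-react-app)
npx create-next-app@latest mortgage-calculator
cd mortgage-calculator

# Run the dev server locally at http://localhost:3000 and iterate
npm run dev

# Verify the production build compiles before deploying
npm run build

# Deploy: push to GitHub and import the repo in the Vercel dashboard,
# or use the Vercel CLI directly
npx vercel --prod
```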
Now that the app was stable locally, I considered the final deployment step. Managing the project with multiple files would be much easier if it were version-controlled. I decided this was the perfect point to integrate with GitHub and leverage Claude Projects, which simplifies managing multi-file projects, especially for deployment.
After pushing my local files to a new GitHub repository, I returned to Claude with a new prompt, specifically asking it to guide me through creating a Vercel app using the project now hosted on GitHub:
Create a Vercel app using the project knowledge. I have these files already on GitHub and want to deploy this on Vercel
Following Claude's subsequent output, which involved linking the GitHub repository to Vercel and configuring the deployment, I was successfully able to deploy the application to Vercel! Claude continued to suggest various updates and refinements to the files during this process, which I diligently applied, further improving the codebase.
Leveraging the successful deployment, I also went back and forth with Claude to add more functionality and polish, extending the original calculator into a full-fledged website rather than just a single app. Here's how the application looked in its deployed state:
The result is an application where the core logic and structure were generated and iteratively refined with Claude's assistance. You can view the final deployed application here: [Link to the final App deployed via Vercel].
While the application still has areas for improvement and potential bugs, as is often the case with any initial build, it stands as a testament to the power of AI for accelerating development. It's a surprisingly robust starting point, especially for anyone with some existing coding experience looking to build upon an AI-generated foundation. Honestly, I am thinking of turning this into a full-fledged site with multiple financial calculators. Let me know if you'd find that helpful.
Key Takeaways
This experiment in "vibe coding" – attempting to build and deploy a web application end-to-end using primarily AI code generation – has been incredibly insightful. My goal was to see if I could tackle a project and toolchain (React, Vercel deployment via Next.js) I wasn't deeply familiar with, by leaning heavily on AI assistance from Claude and Gemini.
The answer? Yes, it's possible to build a functional application this way, and the speed at which initial code is generated is genuinely impressive. However, the process was far from the seamless, flow-state "vibe" one might initially imagine.
My key takeaways from this journey are clear:
AI is an Accelerator, Not a Replacement: AI models like Claude and Gemini are powerful co-pilots. They can generate boilerplate quickly, suggest structures, and provide instructions for unfamiliar tasks (like deployment). But they are not infallible.
Testing and Validation are Non-Negotiable: Time and again, the AI-generated code contained errors – sometimes subtle calculation mistakes, sometimes state management issues, sometimes outdated dependencies (create-react-app). Manual testing and debugging, using domain knowledge to verify output, were crucial steps that AI didn't eliminate.
Iteration is Built-In: Getting to a working solution required multiple rounds of prompting, correcting, and refining the code based on errors found during testing. It's an iterative dialogue with the AI, not a one-shot request. Be prepared to start over or switch approaches if needed (like pivoting from Vite to Next.js).
Deployment Needs Guidance Too: Even with step-by-step instructions from the AI, navigating the nuances of a new platform like Vercel required attention to detail and troubleshooting.
Implications for the Developer Job Market
This experience also gave me a practical perspective on the much-discussed impact of AI on developer job markets. Based on my attempt, I don't see AI simply replacing developers. Instead, I think it fundamentally changes the nature of the work. The AI isn't taking over the entire software development lifecycle; it's becoming an incredibly powerful tool within it.
My personal view is that the demand won't disappear, but the required skill set will evolve. Proficiency will shift from merely writing code line-by-line to effectively orchestrating the development process. This means skills like understanding system architecture, breaking down complex problems, knowing how to test and validate AI output, debugging and fixing errors (which AI is also prone to introducing), integrating different components, and critically evaluating the generated code will become even more paramount. The ability to provide clear, effective prompts – a form of communicating intent to the AI – also becomes valuable, but it's useless without the underlying technical knowledge to verify the response.
For developers, this isn't a signal to fear obsolescence, but rather to adapt. Learning how to work with AI tools, integrating them into workflows, and focusing on the higher-level cognitive tasks that AI isn't yet capable of performing autonomously – like true innovation, complex problem-solving, and strategic decision-making – will be key. It makes development more accessible for getting started, but becoming a truly effective developer still requires deep understanding and critical thinking.
Conclusion
While the "vibe" might be punctuated by moments of debugging frustration rather than pure creative flow, the overall experience is empowering. It allows you to quickly explore ideas, build minimum viable products, and learn new technologies by having an intelligent assistant by your side every step of the way.
Would I recommend "vibe coding"? Absolutely. But go into it with realistic expectations. Treat the AI as a highly capable, occasionally incorrect, junior developer. Provide clear guidance, constantly test its work, and be prepared to guide the process. It's a new way of building, and while it demands human skill and oversight, the potential for rapid creation is undeniable.
If you're curious, check out the deployed app [Link to the Vercel App] and feel free to explore how you might improve upon this AI-generated foundation. The future of coding is likely a collaborative one, and experiences like this show us what that collaboration looks like in practice, challenges and all.
Let me know what you think about this. And have you tried vibe coding yourself?