Hitting the Wall with Vibe Coding

Stephen Robles recently attempted to write an iOS podcast client. By his own admission, he’s not a developer, so he turned to a series of AI tools to write the app. It didn’t go well:

At first, it would hang on searching for shows. After I seemingly fixed that, adding a show to the Library didn’t work. I kept going back and forth with GPT-5 and it kept getting worse. Every new build there was an increasing number of errors and unintended changes to UI elements I hadn’t asked for.

Up until this point, I was using the ChatGPT Mac App with “Work with Xcode” turned on, so GPT-5 could make changes to the code itself within the active window. Sometimes it would think it’s changing document X, but I had document Y opened, requiring me to revert, undo, and many times, get lost in the process.

Eventually the app failed to build and I could not fix it with ChatGPT. It felt like we were going in circles.

In my desperation, I turned to a different LLM. Many commenters on YouTube and social media suggested Anthropic’s offerings were better suited. I downloaded Claude and provided the full context of my app. I uploaded every Swift file I had created with screenshots of all the things I didn’t understand. From Core Data to build…files? I repeat, I have no idea what I’m doing.

Eventually, Robles gave up, ending up not with a custom podcast app but with an interesting blog post instead.

I have very complicated feelings about vibe coding. I think a seasoned developer using AI to speed up their work can make a lot of sense, but I honestly don’t want to depend on an app that was written by someone who doesn’t know how it works.