
Back in 2023, I wrote a blog post entitled 'Creating an app called Frame-IT'. It turns out I was using 'Vibe Coding', and it has changed the way I view the creation of almost everything…

Everything Changed

The term 'Vibe Coding' was coined by Andrej Karpathy, a co-founder of OpenAI, in early 2025. The basic idea is to rely entirely on Large Language Models – LLMs – and to code using only natural language prompting, rather than writing the code yourself.

Back in 2023, I published 'Frame-IT', an iOS app on the App Store, written in Swift but coded entirely in what was then an early version of ChatGPT. As I noted on DigitalUrban, every app has a story, from the idea through to storyboarding, finding a design team, and working with in-house computer scientists or outsourcing. All of these steps take time and resources, and for the small would-be developer the obstacles are often overwhelming, meaning that the spark of an idea gets lost.

Yet everything has changed: the whole landscape of developing, designing and creating has been turned on its head. It has reached the point where the spark of an idea can be coded, designed, marketed and launched with the help of Artificial Intelligence. Frame-IT is a very simple app, doing a simple thing, but it came from an idea, a want, a desire to do something which, before AI, would have got lost in logistics and in finding the time of computer scientists, and which would certainly have been economically unfeasible.

The First Idea

Frame-IT came about while looking at Samsung's Frame televisions. Beyond being a normal 4K television, they blend into the background by displaying art, making it look as if it is mounted on a white background and enclosed in a frame – i.e. like a painting. Samsung does this well and the screens are excellent, but it is limited to artworks, which led to the idea that it would be nice to frame websites – specifically 'data'-based websites – and hang data on walls, data that changes in real time. Of course, any screen could be used, but if you simply show data on a screen or monitor, no one notices. If you hang it in a frame, as if it's art, people will notice.

The concept was therefore simple: build an application that puts a picture frame around websites, lets users select which websites to show and, if there is more than one, cycles through them like a dashboard. This would allow small screens, such as iPads, to act as frames and, via AirPlay, devices such as projectors or general screens to gain a look beyond their norm and display websites and data in the same way as art hanging on a wall.
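To give a sense of how little code the core concept really needs, here is a minimal sketch in Swift (UIKit and WebKit): a web view showing a site, a picture-frame image with a transparent centre laid over the top, and a timer cycling through a hard-coded list of sites. This is my own illustrative reconstruction, not the code ChatGPT actually generated; the asset name and URLs are placeholders.

```swift
import UIKit
import WebKit

// Rough sketch of the Frame-IT concept: a web view behind a
// picture-frame image with a transparent centre, cycling through
// a hard-coded list of websites. Illustrative only – the asset name
// and URLs are placeholders, not the generated code.
class FrameViewController: UIViewController {
    let webView = WKWebView()
    let frameImageView = UIImageView(image: UIImage(named: "GoldFrame")) // hypothetical asset
    let websites = ["https://www.bbc.co.uk/weather", "https://finchamweather.co.uk"]
    var index = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        webView.frame = view.bounds
        frameImageView.frame = view.bounds
        view.addSubview(webView)          // the website sits behind...
        view.addSubview(frameImageView)   // ...the frame image on top
        loadCurrentSite()

        // Cycle to the next website every 60 seconds, dashboard-style.
        Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            self.index = (self.index + 1) % self.websites.count
            self.loadCurrentSite()
        }
    }

    private func loadCurrentSite() {
        if let url = URL(string: websites[index]) {
            webView.load(URLRequest(url: url))
        }
    }
}
```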

The problem was, this was just an idea. To code it would require knowledge of Swift, a library of 'picture frames' and probably a business case to justify the time of resident computer scientists (I work at University College London, so we have a few around), and to be honest it was such a simple idea I'm not sure anyone would have listened. So, with the rise of AI, I decided to build it myself, using AI from start to finish, knowing little about Swift and only as much about AI as my then Twitter feed was full of and what I had read on sites such as The Verge.

The app took me two days to build – two days during which I was also working, so mainly doing things in between other tasks, over lunch and a little in the evenings. With the initial learning curve out of the way, I could rebuild it in a day. The first thing I did was sign up to OpenAI to gain access to GPT-4, which was released on March 14th, 2023. This gave me access to the latest version and therefore the most up-to-date knowledge base, although looking back version 3 would probably have been fine and I could simply have used the free tier. My first prompt was: 'Can you write me Swift code to take a website, centre it and add the image of a picture frame around it. The app should work full screen, in landscape mode.' I had a sample picture frame (taken from Wikipedia, with the actual image cut out to leave a transparent centre).

Within 60 seconds, GPT gave me a section of code and then stopped; it turns out there was a limit on the number of characters it could respond with. A quick Google search (ironically) showed me that by typing 'continue' ChatGPT would carry on with the code. With this I became a master of cutting and pasting, with the code often taking three 'continues' and arriving sometimes in code boxes, sometimes as plain text. It also allowed me to start to understand the layout and nature of Swift, and within 30 minutes I had my first app running on an iPad, via Xcode, simply by cutting and pasting and pressing 'play'. Not everything worked out – there were often errors, but errors I could cut and paste back into GPT and it would solve them, most of the time.

Each time a milestone was reached – a working version, a version with a picture frame and a website showing – I would save the code, as ChatGPT would sometimes break the next version while I was asking for new features. Features such as 'Add a button to move to the next website' and 'If there is more than one website, then cycle through them every 60 seconds' (the websites were hard-coded at this point). Once I had a concept running on my own iPad, I realised it was quite neat and decided to adapt it so others might be able to use it. I added a settings page via 'Can you delete all the websites I have added and include a settings page; the settings page should be reached via a button and have the ability to add and delete websites; once added they should be saved so the app remembers them when reopened'. This is coding, but in the new language – human language.
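Under the hood, that 'remember them when reopened' request boils down to persisting the list somewhere like UserDefaults. A minimal sketch of that approach is below – the type and key names (WebsiteStore, savedWebsites) are mine for illustration and not taken from the actual generated code.

```swift
import Foundation

// Minimal sketch: persist a user-edited list of website URLs so the
// app remembers them between launches. Names are illustrative only.
final class WebsiteStore {
    private let key = "savedWebsites"

    // Load the saved list, or fall back to an empty list on first run.
    func load() -> [String] {
        UserDefaults.standard.stringArray(forKey: key) ?? []
    }

    // Save the current list whenever the user adds or deletes a site.
    func save(_ websites: [String]) {
        UserDefaults.standard.set(websites, forKey: key)
    }
}

// Usage: add a site from the settings page and persist it.
let store = WebsiteStore()
var sites = store.load()
sites.append("https://finchamweather.co.uk")
store.save(sites)
```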

The picture frames were created in Image Creator by Bing, a search engine I never thought I would use (being a Mac user), but in that last week I had not been near Google. Bing allowed me to type in a prompt and get images without any usage limits. Simple prompts such as 'create me a gold Victorian picture frame, it should be photorealistic with minimum reflections and the centre should be cut out' worked amazingly well, providing me with an almost limitless range of picture frames to include as part of the assets.

Not all the picture frames are from AI; a couple are from open-source imagery. It felt slightly wrong to cut out the original pictures, but those frames have some provenance and they were nice to include.

Marketing requires imagery as well, so I took a very simple picture of my two test iPads and used the 'PhotoRoom' AI app to transform it into two iPads on a bench with a concrete wall behind, whereas the reality was far less glamorous.

The app is now available via the App Store, and it even has its own website and logo – again designed by AI, created using Looka and hosted on Framer. So while it worked, it was a little painful, with the limited lengths of code in the early version of ChatGPT and the endless cutting, pasting and fixing – it was quick, but it still took a notable commitment of time. It is also a very, very niche app, but as we will get to, that's the point of Vibe Coding.

And then Everything Changed Again

That was 2023. By late 2024, everything had changed again – not only the workflow and the tools, but also the ability of the LLMs to Vibe Code. They now just work: no more cutting and pasting small chunks of code. Code comes out ready to go, and any errors can be fixed across a complete code base, with notably fewer errors than in 2023. As such, we made another app – again Vibe Coding, without knowing the term at the time – MQTTFrame.

The aim was to build on the concept of Frame-IT but allow real-time data integration directly in the app via MQTT (a lightweight, publish-subscribe network protocol for data), making it easy to subscribe to any MQTT topic and display live updates.
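For a flavour of what subscribing to a topic involves on the Swift side, here is a short sketch using the open-source CocoaMQTT library; the broker address and topic are placeholders, and MQTTFrame itself may well be implemented differently.

```swift
import CocoaMQTT

// Sketch of subscribing to an MQTT topic and receiving live updates,
// using the open-source CocoaMQTT library. The broker and topic are
// placeholders; MQTTFrame may be implemented differently.
let mqtt = CocoaMQTT(clientID: "mqttframe-demo",
                     host: "test.mosquitto.org",
                     port: 1883)

mqtt.didConnectAck = { client, ack in
    // Subscribe once the broker has acknowledged the connection.
    client.subscribe("home/livingroom/temperature")
}

mqtt.didReceiveMessage = { client, message, id in
    // Each published value arrives here and can be pushed to the framed display.
    print("\(message.topic): \(message.string ?? "")")
}

_ = mqtt.connect()
```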

It is a niche thing to want to do, but that’s what Vibe Coding is great for – building those apps that you want, but perhaps few others do.

Our aim was to allow:

  • Customizable Frames: Choose from multiple high-quality decorative frames to complement your home or office décor.

  • Clean and Clear Data Display: Showcase important data such as temperature, pressure, news, and more with clear typography and layout.

  • Fullscreen Mode: Tap to seamlessly switch to an immersive fullscreen view, maximizing readability and aesthetics.

  • Easy Management: Add, edit, and reorder topics quickly through an intuitive interface.

  • Instant Updates: Receive real-time data updates over MQTT, ensuring you’re always in the know.

And that's what we did – but this time in a day, and with everything created in ChatGPT: icons, graphics, code, marketing text, and even walking me back through how to get it on the App Store (as it's an oddly complex and easily forgotten process).

So, after one day of Vibe Coding, an app was taken from concept to ready to publish on the App Store. If you want to try it, it is currently available for free on the App Store.

Everything Again and Again

That was the end of 2024. Now, in mid-2025, the term Vibe Coding has come into common use and Large Language Models are more than capable of making anything from webpages to apps to writing posts (all of this was, however, written by an actual human; any grammatical errors are, of course, intentional). We have now switched from ChatGPT to Google Gemini and use it almost daily to produce apps or webpages that perhaps only I want in my life, such as an Asteroids Clock, made in 30 minutes (temporarily housed at https://finchamweather.co.uk/astroidsclock.html – we will move it to GitHub as soon as we get the chance).

 

or a Procedural City Maker (https://finchamweather.co.uk/proceduralcity.html)

 

or a Weather Dashboard with the forecast rendered as an image of an English landscape, using data from the UK Met Office API:

You can view the dashboard live at https://finchamweather.co.uk/aiweatherimage.html

All small, niche things that somehow I love, and I am happy that Vibe Coding allowed them to exist. That's the magic: Vibe Coding allows you to create anything in your imagination that did not exist before. It's a great time to be making digital things.

If you liked this post, please do subscribe – it gives us a good reason to write another one…

