Vibe coding is rewriting the rules of technology
The AI-driven approach takes you from idea to app in minutes.
By Kiara Nirghin
When I first heard the term “vibe coding” earlier this year, I’ll admit I was skeptical. As someone who has spent significant time working with AI algorithms and predictive modeling, I’ve witnessed plenty of viral AI trends come and go since the release of ChatGPT in 2022.
For those unfamiliar with the concept, vibe coding — a term coined in February 2025 by Andrej Karpathy, a co-founder of OpenAI and the former director of AI at Tesla — represents a fundamental reimagining of the software development process. Rather than meticulously crafting each line of code, developers “vibe” with AI tools using natural language — they get to focus on vision and creative direction, while the AI handles the technical implementation. It’s less about understanding every function and more about communicating intent and desired outcomes.
This taps into my longtime fascination with the intersection of human creativity and machine capability. As the CTO of Chima, a Y Combinator-backed applied AI research lab, I strive to make AI agents useful for the most cutting-edge media, consumer, and business applications, and as a Stanford computer science alum, I collaborate closely with the Stanford Institute for Human-Centered AI, whose mission to make AI collaborative, augmentative, and life-enhancing perfectly matches my own ambitions.
In this article, I’ll share my experiences exploring the vibe coding movement, reflect on what I learned at recent vibe coding gatherings in San Francisco, and examine what this shift means for the future of technology development. Whether you’re a seasoned developer, an AI enthusiast, or simply curious about how we’ll build technology in the coming years, the vibe coding revolution deserves your attention — and it might just change how you think about human-machine collaboration forever.
What is vibe coding?
As mentioned earlier, the origin of vibe coding can be traced to a seemingly casual post by Andrej Karpathy on X in February 2025. “There’s a new kind of coding I call ‘vibe coding’,” he wrote, “where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.” The post landed harder than expected — developers everywhere reported that they were already experimenting with the idea.
As someone who began coding at a young age — I developed my first algorithm to help address climate change in South Africa when I was 16 — I’ve witnessed the evolution of programming methodologies firsthand. Traditional coding is precise and unforgiving. Every semicolon matters; every function must be meticulously crafted. It’s an approach that has served us well, but it has also created significant barriers to entry for those without formal training.
Vibe coding turns that on its head.
At its core, it’s about collaborating with AI through natural language to build software, focusing on the “vibe” or essence of what you want to create rather than the technical implementation. You describe your vision, and the AI handles the code generation. When errors occur, you don’t dive into debugging line by line — you simply explain the issue to the AI and let it propose solutions. Vibe coding removes many traditional technical barriers, allowing people to focus on problem-solving and creativity rather than syntax and technical minutiae. The human becomes the creative director rather than the technical executor.
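The loop I’m describing can be sketched in a few lines of Python. To be clear, this is a toy illustration, not any particular product’s API: `ask_model` stands in for a real LLM call and is stubbed here with canned responses, so the feedback loop itself is runnable.

```python
# A toy sketch of the vibe-coding loop: describe intent, let a model write
# the code, run it, and feed any error straight back as the next prompt.
# `ask_model` is a stand-in for a real LLM API call -- stubbed here with
# canned responses so the loop can run on its own.

def ask_model(prompt: str) -> str:
    """Placeholder for an LLM call that returns code for a prompt."""
    if "NameError" in prompt:  # second pass: the "model" fixes its typo
        return "def greet(name):\n    return f'Hello, {name}!'"
    return "def greet(name):\n    return f'Hello, {nme}!'"  # buggy first draft

def vibe_code(intent: str, max_rounds: int = 3):
    """Iterate until the generated code survives a smoke test."""
    prompt = intent
    for _ in range(max_rounds):
        namespace: dict = {}
        try:
            exec(ask_model(prompt), namespace)  # run the generated code
            namespace["greet"]("world")         # smoke-test it
            return namespace["greet"]           # it works -- ship it
        except Exception as err:
            # Don't debug line by line; just tell the model what broke.
            prompt = f"{intent}\nThe last attempt failed with {err!r}. Fix it."
    raise RuntimeError("model never produced working code")

greet = vibe_code("Write a greet(name) function that returns a greeting.")
print(greet("world"))  # -> Hello, world!
```

In a real session, `ask_model` would wrap an actual model, and the smoke test would be whatever “does it look right?” check you’d apply — but the shape of the collaboration is exactly this: state the intent, run the result, and hand errors back instead of debugging them yourself.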
What makes vibe coding particularly fascinating is how it reflects a broader evolution in our relationship with technology: We’re moving from an era where humans must adapt to the rigid logic of machines to one where machines increasingly adapt to human modes of expression.
This is something I believe Gen Z is going to fundamentally lead, as it’s something we’ve been demanding for some time. This shift also parallels developments I’ve observed in other fields, from natural language interfaces to generative design tools that respond to conceptual direction rather than explicit instructions. I call this “co-agency with AI.”
Just a few years ago, “talking to a computer” meant tapping commands into a terminal or navigating clunky drop-down menus. Now we chat with AI agents that remember context, infer intent, and tackle sophisticated projects alongside us. This evolution is reshaping creativity, productivity, and the very texture of work. We’re entering an era of human-AI co-agency, with humans and smart systems operating as equal partners to accomplish what neither could do alone.
That vision isn’t new. On December 9, 1968, Doug Engelbart and his team at the Augmentation Research Center squeezed the future of personal computing into a 90-minute stage demo at San Francisco’s Brooks Hall. Their presentation, later dubbed “The Mother of All Demos,” previewed technologies that would influence the Xerox Alto and, later, the Macintosh and Windows operating systems. More importantly, it communicated Engelbart’s belief that computers should amplify human intellect, not replace it.
This demo was the audience’s first glimpse of true human-computer partnership: real-time dialogue between person and machine to solve problems together. Every modern chat window, collaborative doc, and video-call whiteboard still echoes that afternoon in 1968.
Engelbart’s “Mother of All Demos” sketched a future where humans guide high-level intent while computers shoulder the technical grunt work. That future is finally materializing through vibe coding.
Vibe coding IRL
Vibe coding isn’t some hand-wavy X trend.
It’s showing up in real rooms, pulling in crowds, and off-loading real technical work. In San Francisco and online, people are convening for sessions to explore the technique, and at hackathons, they translate plain-language ambition into running software in real time, with crowds following along to see just how far the hands-off coding approach can go.
In early April alone, I saw three gatherings — Convex’s demo sprint in SoMa (April 11), SignalFire’s one-day hackathon on 2nd Street (April 18), and the VIBE25-1: After Dark Vibe-Coding Hackathon (April 18) — draw hundreds of builders who collectively shipped dozens of working prototypes in less time than it takes most teams to file a Jira ticket.
Most of these events had similar descriptions: “VIBE CODING IS THE NEW DEFAULT — SHIP FAST, HAVE FUN” and “Let’s take vibe coding to the absolute fullest. Come build something experimental, fast, and actually useful. No pitching. Just deploying.”
What struck me most across all three events was the emphasis on “fast” — a freshman and an L4 Meta engineer could pair-prompt the same model and stand up a multiplayer sketch board in 30 minutes. To test the potential time savings of the “describe-it-and-let-AI-ship-it” approach for myself, I turned to Manus.
The Chinese startup drew widespread attention in March, when it launched a demo of a general AI agent that could complete various tasks. Fresh off a reported $75 million Benchmark-led round at a $500 million valuation, Manus describes its flagship product, also dubbed Manus, as “a general AI agent that turns your thoughts into actions…[It] excels at various tasks in work and life, getting everything done while you rest.”
My prompt for Manus was simple:
I am looking to share Freethink’s (https://www.freethink.com/) mission and vision.
Our stories have one big theme in common: change.
Go beyond surface-level news to explore the innovations driving real change, from AI and robotics to energy and biotech breakthroughs.
Through in-depth features and insightful interviews, our coverage not only inspires, it empowers you to understand the forces shaping our world. Join us at freethink.com for the latest insights from the front lines of change.
Craft a compelling space and website to publish this, targeting Gen-Z.
Manus spun up a workspace, hashed through design decisions on its own, and deployed a live microsite without asking for my input every 30 seconds. Total cost: about $20. Total time: the length of a coffee meeting.
Compared with my early test of OpenAI’s Operator, Manus felt less like babysitting a chat window and more like tossing the keys to a competent junior teammate. That is vibe coding: handing an idea to an LLM-driven agent, guiding it at the concept level, and receiving something usable — not just code snippets — on the other side.
Here’s the session replay and the final site.
The 2025 Vibe Coding Game Jam
One recent event that crystallized vibe coding’s potential was the 2025 Vibe Coding Game Jam sponsored by Bolt, CodeRabbit, and Lambda. Running from approximately March 1–25, 2025 (the dates seemed to be at the whim of the founder), the event brought together developers, designers, and AI enthusiasts to create games where at least 80% of the code was generated by AI.
The Game Jam was the brainchild of entrepreneur and indie developer Pieter Levels, and the premise was simple: create games that were accessible on the web, free-to-play, and instantly playable — all through the power of vibe coding.
Here’s the X post where Levels announced the event and laid out six core rules:
anyone can enter with their game
at least 80% code has to be written by AI
game has to be accessible on web without any login or signup and free-to-play (preferably its own domain or subdomain)
game has to be multiplayer by default
can use any engine but usually ThreeJS is recommended
NO loading screens and heavy downloads (!!!) has to be almost instantly in the game (except maybe ask username if you want)
What made the Game Jam particularly significant was its judging panel, which included Levels, Andrej Karpathy, Tim Soret (the game designer behind “The Last Night”), Ricardo Cabello (creator of the popular Three.js library), and @s13k_ (a web3 game developer). This collection of industry leaders lent the event credibility and drew attention from both mainstream tech media and the indie game development community.
This helped propel the event to the status of a global phenomenon, with 1,170 entries from across the world. The submissions showcased the breadth of what’s possible with vibe coding — some entries were simple yet addictive arcade-style games, while others pushed boundaries with complex mechanics and innovative gameplay.
The Game Jam wasn’t without its hiccups. The lack of structure led to a somewhat chaotic experience for participants, and some rules changed mid-event (the multiplayer requirement was dropped after feedback). The submission process was also handled through a simple form rather than a dedicated platform.
This ad-hoc approach created challenges for some participants, but I saw the chaos as both a feature and a bug — the Game Jam may have been fast, experimental, and occasionally messy, but so is vibe coding, and despite these growing pains, the event still succeeded in its mission: showcase vibe coding’s potential to democratize game development.
However, perhaps the most significant outcome of the Game Jam wasn’t the games themselves, but the community the event fostered — weeks after it wrapped, participants were still collaborating, sharing techniques, and building upon each other’s work in Discord servers. This organic ecosystem of developers helping each other navigate this new paradigm may ultimately be what propels vibe coding from experimental approach to mainstream methodology.
While still in its early planning stages, the next Game Jam is already scheduled for May 30. Levels teased the stakes on X: “Next vibe jam you can join and I will be the jury — not just games, anything! $1M in prizes, yee-haw.”
The future of vibe coding
Vibe coding now stands at a fascinating inflection point. As someone who has witnessed its early applications firsthand, I believe we’re only beginning to understand how this approach will reshape not just software development, but our broader relationship with technology creation. The question isn’t whether vibe coding will impact the future — it’s how profoundly and in which directions.
At my own startup, we’re actively exploring how vibe coding can be integrated into development processes. This balance — between the speed and accessibility of vibe coding and the rigor required for mission-critical systems — represents one of the central challenges the field must navigate as it matures. We aren’t the only ones racing to figure out how to strike this balance, either — a quarter of the startups in YC’s current cohort have codebases that are almost entirely AI-generated.
My take is that the most immediate impact of vibe coding will likely be in education and workforce development.
Traditional coding education often begins with teaching abstract concepts and syntax rules — only after they learn those can students build anything meaningful. This approach can discourage many potential technologists, but vibe coding inverts the model, allowing learners to create functional applications from day one — deeper technical concepts are then gradually introduced as needed.
I think we’re going to see students who previously struggled with programming concepts flourishing with vibe coding approaches, and this shift has profound implications for addressing the persistent diversity challenges in tech.
Beyond education, vibe coding is poised to accelerate innovation across industries by reducing the technical barriers to implementation. With vibe coding, healthcare professionals can easily prototype patient management systems, environmental scientists can quickly build climate modeling tools, and educators can create interactive learning platforms, all without having extensive programming backgrounds.
This democratization of technology creation could help address what I call the “implementation gap” — the distance between identifying a problem and having the technical resources to build a solution. Many brilliant ideas never materialize because the individuals who understand the problems lack the technical skills needed to implement solutions. Vibe coding has the potential to narrow this gap significantly.
However, as with any technological shift, vibe coding brings challenges that must be addressed thoughtfully.
Security concerns top the list. When developers don’t fully understand the code being generated, vulnerabilities may go undetected. As expected, more events focused on security are cropping up in San Francisco, like “AI, Security + SciFi Bites & Cocktails during RSAC25,” which included panelists from Anthropic, Box, and Rad Security — during that event, security experts discussed frameworks for auditing and securing AI-generated code, an essential step toward responsible adoption.
Maintenance is the challenge I find most concerning. Writing code always involves a tradeoff between speed and quality: to ship something fast, you might accept code you know will need more upkeep later, rather than the better version you could write with more time. In these instances, you’re accumulating “technical debt,” and applications built from AI-generated code could include lots of these fast-but-not-great solutions, leading to what some are calling “technical debt at scale” — systems that work initially but become increasingly difficult to maintain or modify over time.
Still, the most profound questions surrounding vibe coding may be philosophical rather than technical. As we increasingly delegate technical implementation to AI systems, how does this shift our understanding of creativity and authorship? If an application is primarily generated through AI, who can claim ownership of the resulting intellectual property? These questions echo debates I’ve encountered in other domains where AI and human creativity intersect.
Despite these challenges, I remain optimistic about vibe coding’s potential to transform technology development for the better. The approach aligns with what I’ve long believed: anyone with an idea for how technology could solve a problem should be able to create that solution, not just those with specialized technical training.
The future I envision isn’t one where vibe coding replaces traditional development entirely, but rather one where we have a spectrum of approaches suited to different contexts and requirements. Mission-critical systems requiring maximum reliability and security may continue to rely heavily on traditional methods, while rapid prototyping, creative exploration, and personal projects embrace the vibe coding paradigm.
The vibe coding movement is still in its infancy, with tools, methodologies, and best practices emerging in real-time. The March 2025 Vibe Coding Game Jam and the multitude of events happening in San Francisco and online right now represent early experiments in what this approach might become.
What’s clear is that vibe coding isn’t merely a technical shift. It’s also a cultural one, challenging our assumptions about who gets to create technology and how that creation happens. As someone who has experienced firsthand how access to technology can transform lives and communities, I find this democratization profoundly hopeful. That optimism — and the prototypes springing up nightly — makes vibe coding a revolution worth tracking.
We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.