
What happens when someone with less than a month of game development experience tries to create a voice-controlled action game using generative AI?


— Behind the Scenes of Developing a Battle Game Centered on Chanting —

What happens when someone with less than a month of game development experience tries to create an action game controlled by voice input?

It probably does not go smoothly.

The design falls apart halfway through.
The naming gets messy.
You fix one thing, and something else breaks.

But I think that messy process is exactly where the real story of development shows up.

This time, I want to share some behind-the-scenes thoughts on the voice-input action game I am currently making.
This is not so much a polished postmortem of a finished game, but more of a record of how a beginner actually struggles, where the time goes, and how the game slowly starts taking shape.

First, what kind of game am I making?

The game I am making is a side-scrolling action game where you cast magic by speaking words like “Fire,” “Water,” and “Wind.”

It looks cute on the surface, but the systems behind it are surprisingly ambitious.

  • Voice recognition to cast spells
  • Three basic magic types
  • Combined spells
  • Different enemy behavior patterns
  • Day/night and field variations
  • Support magic
  • Energy accumulation and gate creation
  • Stage clear sequences
  • Environmental gimmicks like campfires and fallen logs

For someone with less than a month of game development experience, I am probably trying to do too much.

And naturally, the more I added, the harder things became.

The most time-consuming part is not “adding new features.” It is making everything work together naturally

This is the biggest thing I have learned while making the game.

When you are a beginner, moments like “I made magic appear!” or “The enemy moves!” are exciting.
Those isolated features can be surprisingly fun to build.

But the real difficulty comes after that.

Take enemy behavior, for example.

  • Chase the player
  • Patrol within a certain range
  • Attack
  • Run away if there is a campfire nearby
  • Jump
  • Land on the ground properly
  • Turn in the right direction

Trying to make all of those behaviors coexist at the same time suddenly becomes very hard.

You get situations like:

  • “I added the fleeing behavior, and now it no longer attacks.”
  • “I restored the attack logic, but now it seems to prioritize the player over the fire.”
  • “It runs away from the fire, but its face is still turned toward the player, which looks weird.”

As a beginner, I realized there is a huge gap between writing individual pieces of logic and making the whole thing look natural as a game.
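The coexistence problem above can be sketched as a simple priority list. This is a hypothetical illustration, not the game’s actual code; the behavior names and distance thresholds are assumptions I made up for the example:

```javascript
// Hypothetical sketch: resolve competing enemy behaviors with a fixed
// priority list. Each behavior reports whether it wants control this
// frame; the first match wins, so "flee" can override "patrol" without
// the patrol logic being deleted or breaking the attack logic.
const behaviors = [
  { name: "flee",   wants: (e) => e.nearFire },
  { name: "attack", wants: (e) => e.playerDistance < 2 },
  { name: "chase",  wants: (e) => e.playerDistance < 10 },
  { name: "patrol", wants: () => true }, // fallback when nothing else applies
];

function pickBehavior(enemy) {
  // Return the name of the highest-priority behavior that applies.
  return behaviors.find((b) => b.wants(enemy)).name;
}
```

With a structure like this, adding a new behavior means deciding where it ranks, instead of weaving new conditionals through every existing one.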

The thing I am struggling with most right now: the slime’s “campfire panic”

Lately, the thing I have spent the most time on is slime behavior.

In the night field, if you hit fallen logs or piles of dry branches with Fire magic, they ignite.
I wanted slimes to react to that.

It sounded simple at first:

“If there is a burning campfire nearby, the slime runs in the opposite direction.”

But once I implemented that, a lot of other problems appeared.

  • Its patrol logic was still active, so even after fleeing, it would reach the edge and turn back
  • Its facing update was still tied to the player, so it ran away while still looking at the player
  • If I made it stop attacking while fleeing, it actually felt less alive

That last one was especially interesting.

At first glance, “a fleeing enemy should not attack” sounds logically correct.
But when I actually watched it on screen, it felt unnatural.

So I ended up going in the opposite direction.

Now the slime panics and runs away, but still attacks in a desperate, reckless way.
That made it feel less mechanical and more like a living creature.

This was the kind of adjustment I could not have figured out just by looking at code.
I only understood it after watching it move in the actual game.
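The resulting “panicked but still dangerous” slime can be sketched roughly like this. All names, numbers, and the one-dimensional positions are hypothetical stand-ins, not the game’s real implementation:

```javascript
// Hypothetical sketch of the panicking slime: fleeing overrides movement
// and facing, but does NOT suppress attacks — desperate, reckless swipes.
function slimeStep(slime) {
  const fleeing = slime.fireDistance < 5;

  // Movement: run away from the fire, otherwise drift toward the player.
  const moveDir = fleeing
    ? Math.sign(slime.x - slime.fireX)    // away from fire
    : Math.sign(slime.playerX - slime.x); // toward player

  // Facing follows the movement direction, so the slime no longer
  // runs away while staring back at the player.
  const facing = moveDir;

  // Attacks stay enabled while fleeing.
  const attacking = Math.abs(slime.playerX - slime.x) < 2;

  return { fleeing, moveDir, facing, attacking };
}
```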

What matters more than “correct logic” is “logic that feels convincing”

One thing I have felt over and over during development is this:

What matters is not whether the logic is technically correct.
What matters is whether the player can look at it and feel that it makes sense.

Take the campfire effect, for example.

At first, it worked functionally.
Things burned, and slimes reacted.

But visually, the flames were too small.

From a system point of view, the fire existed.
But on screen, it looked more like a little glow than an actual dangerous fire.

If the player cannot clearly read “that thing is burning,” then the slime running away from it does not feel justified.

So now I am adjusting it like this:

  • Piles of dry branches get flames at double size
  • Fallen logs get flames at triple size
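As a sketch, that adjustment amounts to a small lookup table. The type names are illustrative; only the multipliers come from the adjustment described above:

```javascript
// Hypothetical sketch: pick the flame sprite scale from the fuel type,
// so the danger reads clearly on screen.
const FLAME_SCALE = {
  branchPile: 2, // piles of dry branches: double-size flames
  fallenLog: 3,  // fallen logs: triple-size flames
};

function flameScale(fuelType) {
  return FLAME_SCALE[fuelType] ?? 1; // default scale for anything else
}
```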

That felt symbolic of a much bigger lesson.

In games, what matters is not what is happening internally.
What matters is what the player can understand from the screen.

The most beginner-like weakness shows up in naming and organization

One area where I really feel my inexperience is in code organization.

As I keep adding new features, the number of conditionals grows rapidly.
Different enemy types need different movement.
Special fields need special logic.
Certain attacks only work under certain conditions.
Support magic overlaps with other systems.

And then you start getting situations like:

“Why does a piece of code named dragon affect slime behavior?”

That is very real beginner development.

At first, you build things with the mindset of “if it works, it works.”
And honestly, that is not even wrong in the beginning.
You need to make things move somehow.

But once the project grows, your future self starts getting confused.

There were moments while adjusting slime behavior when even I had trouble immediately understanding where I needed to edit things.

That made me realize something important:

The difficulty of development is not just in making enemy AI.
It is also in keeping the code readable enough that you can still work on it later.

Voice-input games come with their own very specific problems

The core of this game is that it can be played using your voice.
That sounds cool, but it is much more troublesome than it looks.

Voice recognition does not hear things the way you want it to.

So it is not enough to just detect the word “fire.”

In reality, I end up handling a lot of variations like:

  • fire
  • fir
  • firee
  • waterr
  • window (a misrecognition of wind)
  • resurection (a broken version of resurrection)
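One minimal way to handle such variations is an alias table that normalizes whatever the recognizer returns into a canonical spell word. This is only a sketch, assuming the recognizer hands back a plain text transcript; the alias lists are the examples above, not an exhaustive dictionary:

```javascript
// Hypothetical "mishearing dictionary": map raw transcripts from speech
// recognition onto canonical spell words. Alias lists are illustrative.
const SPELL_ALIASES = {
  fire: ["fire", "fir", "firee"],
  water: ["water", "waterr"],
  wind: ["wind", "window"],
  resurrection: ["resurrection", "resurection"],
};

function matchSpell(transcript) {
  const word = transcript.trim().toLowerCase();
  for (const [spell, aliases] of Object.entries(SPELL_ALIASES)) {
    if (aliases.includes(word)) return spell;
  }
  return null; // unrecognized — better to ignore than to misfire a spell
}
```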

That part has been surprisingly interesting.

Sometimes it feels like I am not just making a game.
It feels like I am also gradually building a dictionary of likely mishearings.

As a beginner, I found that making voice input feel natural was in some ways harder than making the magic itself stronger or more interesting.

But at the same time, this is also what makes the game feel unique.
Without that messy voice-recognition layer, it would just be another keyboard action game.

Because of that, it really feels like a game where words themselves hold power.

One of the most fun parts was not adding mechanics. It was making systems connect to each other

Personally, one of the most satisfying parts of development was not adding isolated features.
It was seeing different features begin to connect.

For example:

  • Magic Experience Points increase your magic level
  • Higher magic levels unlock combined spells
  • Defeating enemies fills up Cosmos Energy
  • When Cosmos Energy is full, you can create a transfer gate
  • In the night field, Fire can burn fallen logs and branch piles
  • Slimes panic when they encounter those flames

Once these links started to appear, the world stopped feeling like a collection of test functions.
It started to feel more like an actual game world.
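The energy-to-gate link above can be sketched as a tiny state update. The gauge size and the amount of energy per enemy are made-up values, not the game’s actual tuning:

```javascript
// Hypothetical sketch: defeating enemies feeds Cosmos Energy, and a
// full gauge unlocks the transfer gate.
function makeWorld() {
  return { cosmosEnergy: 0, energyMax: 100, gateAvailable: false };
}

function onEnemyDefeated(world, energyDrop) {
  // Clamp the gauge so it never overflows its maximum.
  world.cosmosEnergy = Math.min(world.energyMax, world.cosmosEnergy + energyDrop);
  if (world.cosmosEnergy >= world.energyMax) world.gateAvailable = true;
  return world;
}
```

The point is less the code itself than the wiring: one system’s output (defeats) becomes another system’s input (the gate), which is what makes the world stop feeling like a collection of test functions.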

One of my favorite parts is that magic no longer affects only enemies.
It also affects the environment.

When beginners make games, it is easy for everything to stay trapped within the simple loop of “attack” and “enemy.”
But the moment the player can affect the world itself, the game suddenly feels much more real.

That gave me a lot of confidence.

Trying to do too much is exhausting. But trying to do too much also taught me a lot

Honestly, for someone with less than a month of experience, putting all of this into one project is a lot.

  • Basic magic
  • Combined magic
  • Support magic
  • Three enemy types
  • Splitting slimes
  • Field variations
  • Campfire gimmicks
  • Voice input
  • Energy systems
  • Stage clear conditions

It probably would have been cleaner if I had made something much smaller.
There would have been fewer bugs too.

But because I pushed too far, I learned something important very clearly:

Adding features and making the whole game hold together are completely different problems.

In that sense, maybe it was actually a good thing that I ran into this wall while I am still a beginner.

Right now, I do not think I am in the stage of “making it well”. I think I am in the stage of “learning how to fall properly”

If you look only at the level of polish, the game is still far from finished.

The priorities between systems still collapse sometimes.
The visuals are still being adjusted.
There are still many parts I want to go back and improve.

But I think that is okay.

What happens when someone with less than a month of game development experience tries to make a voice-controlled action game?

The answer is probably this:

You take a lot of detours.
But those detours teach you things you could not learn any other way.

Things like:

  • what breaks when you change a certain piece of code
  • when to prioritize visuals over logic
  • why “technically working” is not enough if it does not feel good to play
  • why naming and structure matter if you want your future self to survive
  • why, in games, feeling convincing matters more than being theoretically correct

Those are the lessons I am learning right now, in real time.

The most interesting part of this game might be that it keeps moving forward while still being immature

There is a lot to learn from highly polished development logs made by skilled developers.
But that is not really the part I want people to see here.

What I want people to see is the process itself:

A person with less than a month of game development experience, trying to build something they do not fully understand yet, breaking it, fixing it, and slowly turning it into a game.

It is not clean.
It is probably full of unnecessary detours.
But because of that, it is real.

“This is what happens when a beginner makes a game.”

As a snapshot of that process, I think this project is being very honest.

It is not finished yet.
But at least now, it feels like I have moved from the stage of just making things to the stage of trying to make them actually work together as a game.

Demo Page URL | Voice Action Game

I’ve posted a demo on The Magical Melody (tentative title) page so you can start playing right away.

Conclusion

When someone with less than a month of game development experience tries to make a voice-input game, this is what it looks like:

  • It does not work the way you imagined
  • You fix one thing and something else breaks
  • You realize visual believability matters more than internal logic
  • Enemy AI takes an absurd amount of time to make feel natural
  • Voice input is much messier than expected
  • But once the systems start connecting, development suddenly becomes exciting

And right now, I am still in the middle of that process.

A First-Hand Account of Prototype Development

An Ordinary Person’s Experience: Tackling Game Development with Generative AI, Even Without Any Prior Game Development Experience: A Story Set in 2026

In my previous article, I introduced a prototype of a game I created using generative AI.

In this article, I’ll share my personal experience—specifically, what actually happened during the development process and how far I was able to get as a complete novice in game development.

The focus of this article is not to compare the merits and demerits of each tool.

Rather, this is an article reflecting on my personal experience to answer the question: To what extent could a game development novice create something that actually feels like a game by using generative AI?

An overview of the completed game and the demo page are summarized in the introduction article.

My Background as a Beginner

First, I’ll outline my background.

  • No prior experience in game development
  • No prior experience in 3D data modeling
  • However, I have 17 years of professional experience as a systems engineer, IT help desk specialist, and systems administrator
  • About 3 years of experience using AI tools

In other words, I’m not completely new to IT.

However, this was my first time actually making a game.

I believe this background is significant.

This is because in many situations where I was able to get through this project, the foundation was not so much game development knowledge, but rather the instincts I acquired through IT work—such as setting up environments, troubleshooting errors, and being vigilant about specification discrepancies.

On the other hand, simply having the program run is not enough to make a game work.

It’s necessary to shape the game by incorporating visuals, sound, world-building, playability, and even presentation—and that was truly a challenge for a complete novice.

Time Taken to Complete the Prototype

As a rough estimate of production time, it took approximately 70 hours for the HTML version and about 90 hours for the Godot version to reach a playable prototype stage. This is not a precise measurement but a rough estimate based on the number of days and hours worked.

Note that this does not include the time spent preparing for the demo release; it refers to the time from when I started building until the prototype was in a playable state after repeated revisions.

In my experience, the Godot version, being modular in structure, presented more challenges during revisions and adjustments, and I feel that this difference was reflected in the time it took.

My Experience with the HTML Version

Challenges

The HTML version progressed quite quickly at the very beginning.

I remember feeling that “this might actually work” because it took far less time than I had imagined to get something up and running.

However, things weren’t easy from there on out.

What I struggled with most was getting the visuals to look exactly as I intended. Even when I specified the processing steps in detail, the visual fidelity was low; for instance, the receipts the ATM dispenses initially ended up looking like paper airplanes. It wasn’t enough for it to simply work—it took a significant amount of effort to make the visuals look “realistic.”

Sound was another area where I struggled.

I had to rewrite the audio generation program for each stage multiple times. The issues weren’t just with the code itself; sometimes the audio wouldn’t play because of where it was implemented, and even when it did play, it was often too hard for the player to hear. It’s not enough for the sound to simply play; it also has to feel natural to the human ear, so I felt this was an area where relying solely on AI wasn’t sufficient.

Additionally, as the code grew larger, it became increasingly difficult to have it output the entire text continuously.

Up to a certain scale it could generate everything at once, but as the number of lines increased, the output would often stop midway or omit necessary processing. From this point on, simply telling it to “output everything” no longer worked, and detailed incremental instructions became necessary.

Observations from Use

What I noticed while using it was that generative AI struggles in some situations to accurately grasp the final visual form based solely on text.

Especially for parts related to appearance, providing instructions while showing an image made it easier to achieve the intended result, and the accuracy of both input and output improved compared to using text alone. Color schemes also required more than just simple color specifications; adjustments were needed that included gloss and texture.

Another significant observation was that the AI tends to prioritize “working code” and lacks a strong optimization mindset.

Even for the same process, I got the impression that unless the human designer incorporates lightweight and high-speed design principles from the start, the implementation tends to become heavy and slow. I clearly understood that “working” and “running smoothly” are two different things.

Effective Workflow

As conversations grew longer, responses became slower, and the system often stopped midway.

However, starting a new session at a natural breakpoint and resubmitting the entire code at that point tended to make things more stable.

Also, instead of generating the full text every time, specifying additions, deletions, before-and-after changes, reasons for the work, and target sections as differences made it easier to proceed even with large codebases.

With the HTML version, since it’s easy to grasp the whole picture on a single page, it was easier to see exactly what needed to be changed, and I felt the conversation flowed more smoothly than with the modular approach.

Experiences with the Modular Approach

Challenges with the Modular Approach

In modular development like Godot, I could proceed by outputting the entire code during the initial, lightweight stages, but as the processing load increased, outputting the full text tended to stall midway.

From that point on, it became a series of incremental revisions, and if the developer didn’t understand the structure themselves, it became impossible to keep up. In modular development, I felt that rather than leaving everything to the AI, humans need to proceed while maintaining an understanding of the big picture.

What stands out most in my memory is the implementation of the background music (BGM).

No matter how many times I revised the code, it didn’t improve. When I traced the cause, the problem wasn’t with the program itself, but with the file format I was using for the BGM. It wouldn’t play properly with wav, and it wasn’t until I switched to ogg that it finally played. I realized the importance of humans noticing when something is “off” and correcting the course, because generative AI can sometimes veer off track once it goes astray and continue moving in the wrong direction.

The same thing happened with visual adjustments.

The initial obstacles generated were quite simplistic: the airplane had unnatural wings, the juice looked like a stick, and the candy resembled little more than a colored box. By repeatedly adjusting these elements to convey their characteristics, I finally began to create a game that was visually clear and easy to play.

Insights from the Modular Approach

In the modular approach, it was crucial to clearly specify exactly what to change and where when adding, removing, or replacing parts of the program.

Even if the output seemed correct at first glance, upon later inspection I sometimes found that parts of the generation were missing, so I couldn’t feel confident unless I provided detailed instructions.

However, what I strongly felt during this process was that the very act of developing a game using AI also served as a way to hone debugging skills and structural understanding.

While the AI made things easier, the experience of reviewing its output, making judgments, and making corrections directly translated into improved development skills. Compared to the old days of writing everything from scratch by myself, I feel that development speed has increased significantly.

Limitations I Encountered with the Modular Approach

When I shared the entire module and went through multiple rounds of feedback, there were instances where the AI noticed structural imbalances or processing bottlenecks.

However, for issues like BGM—where the root cause lay not in the code itself but in file formats or implementation conditions—the discussion would sometimes veer off track, leading to a series of irrelevant fixes. While generative AI is convenient, I felt that prioritizing and isolating problems still relies heavily on human judgment.

Additionally, I couldn’t ignore Godot-specific coding conventions, version differences, or UI variations.

Even when I provided the development environment details, it sometimes returned code with syntax errors, so I couldn’t trust it blindly and had to verify it with my own eyes at least once. While the modular approach is better suited for organized development, I felt that it also increases the difficulty for beginners trying to rely solely on AI.

Overall Summary

The Gap Between Initial Expectations and Reality

Before starting, I assumed that even with generative AI, the limit for beginners would be simple 2D games like Space Invaders or Pac-Man.

However, once I actually tried it, that assumption was overturned quite early on. The first step was surprisingly easy, and simply getting something functional up and running was much faster than I had imagined. Even without any game development experience, generative AI allows you to go from “nothing” to a “playable state” in one go. That speed of getting started was one of the things that surprised me the most this time.

However, that ease lasted only through the very beginning.

The more you build, the more issues—such as visuals, movement, sound, structure, and consistency—pile up all at once. Generative AI excels at laying the initial foundation. However, in the subsequent process of refining it into “what you actually want to create,” human understanding and judgment become crucial. In that sense, this experience taught me not that “generative AI can do everything automatically,” but rather that “even beginners can get started quickly, but the closer you get to completion, the more significant the human role becomes.”

Where Generative AI Shone

I felt generative AI was particularly strong in getting the initial prototype up and running and providing a large number of rough drafts for implementation.

Even for something that would take who knows how many years to write from scratch on my own, using generative AI allowed me to get it to the point where it functioned as a prototype in a short amount of time. This is huge. From the perspective of someone with no game development experience, the fact that it made me feel like “I can actually do this” was valuable in itself, but this time, I went beyond that and actually reached a stage where it looked like a real game.

I also felt that generative AI doesn’t just spit out code; its role changes depending on the user’s objectives.

It helped me organize my options when I wanted to think about implementation methods, served as a foundation for my work when I wanted to move forward, and helped me quickly create a rough draft when I had a clear idea of what I wanted to do. In other words, I feel that the strength of generative AI lies not in its versatility, but in its ability to amplify human intent.

Where Human Perseverance Was Needed

On the other hand, there were clearly situations where generative AI alone wasn’t enough.

The most challenging aspects were debugging and maintaining consistency. While it can generate code, when it comes to whether that code truly fits with the whole, whether it breaks other parts after being modified, or whether it works across different environments and versions, it suddenly becomes unreliable.

In fact, there were times when the AI itself would later point out issues with the code it had generated—such as “a definition is missing” or “there’s a duplicate function”—and I felt time and again that the momentum of the output and the stability of its consistency are two separate things.

Furthermore, as conversations grow longer, the burden on the human side increases.

Misreadings, oversights, and assumptions become more frequent, and there were times when I lost sight of the root cause of an error because of them. It’s not just a problem with generative AI; there are also difficulties on the human side when dealing with long texts. That’s why it was crucial to adopt workflows such as breaking things down into segments, making incremental corrections, and reorganizing and resubmitting the existing code. While using generative AI certainly makes things easier, it doesn’t work if used carelessly; using it effectively requires persistence and organizational skills on the human side.

Where My IT Experience Came in Handy

What I realized once again this time was that even without game development experience, my previous IT work experience served as a solid foundation.

Whether it was understanding runtime environments, being vigilant about version differences, isolating errors, recognizing quirks in interfaces, or handling variations in tool specifications—I felt my past experience directly applied in areas separate from the game itself.

I believe that looking not just at a program’s contents, but also at how to run it and where to look for problems, is a sense I could only have developed through my professional experience.

Conversely, I also felt that for someone with absolutely no IT experience, there’s an additional hurdle between having generative AI write the code and actually getting it to run.

What is a module? What is a development environment? What happens when versions differ? Without that foundational understanding, it’s easy to get stuck before you even start game production. Still, this experience demonstrated that even without game development experience, if you have an IT foundation, generative AI can be a very powerful tool.

Conclusion from a Beginner’s Perspective

To conclude from a beginner’s perspective, I believe that by using generative AI, even those with no game development experience can sufficiently reach the point of creating something that looks like a game.

At the very least, I feel we’ve entered an era where it’s realistic to aim to take an idea, bring it to life, test it, and present it as a playable prototype. This is a significant shift. Game development is no longer the exclusive domain of a select few experts; for anyone with a creative drive, the gateway to actually entering the field has widened considerably.

However, it’s not the case that anyone can create everything completely automatically.

As the scale increases, there are definitely areas where human persistence is required—such as consistency, debugging, understanding the environment, and refining the visuals. If your goal is to release a demo, as in this case, simply writing the code isn’t enough; you need to consider everything from testing the functionality to preparing the release environment and organizing how it will be presented.

Furthermore, if you’re looking to sell it on an app store, what comes next is no longer just about programming. A whole new set of practical tasks suddenly comes into play, including release procedures, app store submissions, legal compliance, terms of service, marketing, and ongoing improvements.

Even so, what I can clearly say from this experience is that generative AI has the power to turn a beginner’s challenge into a tangible reality.

Creating the “foundation” of a game has become far more realistic than before, although the range of skills required varies greatly depending on whether you stop at releasing a demo or take it all the way to sales. Even so, I definitely feel that generative AI has shortened the distance from taking that first step to actually shaping the project and releasing it to the world.

For me, generative AI was the force that turned “This might be impossible” into “Let’s just try to make it happen first.”

Demo Page Link

You can enjoy the actual game on this demo page.

1. Cash Bundle Breaker 3D

An introductory video that lets you experience the game’s overall atmosphere

2. Dreamfall Sky

A video showcasing the game’s overall visuals

Game Introduction Article

If you’d like to know the game’s overview and highlights first, please also check out the introduction article.

View the Game Introduction Article

Summary

What I realized through this experience is that while generative AI provides a powerful boost for beginners, it is not a magic tool that can handle everything from start to finish.

Nevertheless, there is no doubt that it brings the possibility of creating a “playable” game—even for complete beginners—much closer to reality.

I Tried Making My Own Game with Generative AI

My First Game Development Project Using Generative AI: A Test to See If I Could Develop an HTML and Godot Module Type

As someone with no prior game development experience, I decided to take on the challenge of creating a prototype for my own game using generative AI.

In this post, I’ll start by introducing the finished game and its highlights as a real-world example of “what even a beginner can create.”

Click here for the demo page

1. Cash Bundle Breaker 3D

2. Dreamfall Sky

What kind of game is it?

What I created this time is a prototype of a homemade game made using generative AI.

Rather than aiming for a finished product, I prioritized getting it to a point where it could actually be played.

As a game, I adjusted various elements—such as the movement, visuals, sound effects, BGM, and presentation for each stage—to create an experience that feels like a real game.

My goal was to go beyond screens that merely move, incorporating visual presentation, sound, and the way obstacles are displayed so that it holds together as a cohesive prototype.

What I Created

In this project, I utilized generative AI to incorporate the following key elements:

  • A basic system that functions as a game
  • Stage-specific visual effects
  • Implementation of sound effects and BGM
  • Adjustments to the appearance of obstacles and objects
  • Creating the overall atmosphere of the screen

Additionally, during the development process, I experimented with both an HTML-based architecture and a modular architecture.

For this release, I am focusing on the game itself as a showcase, to give you a sense of what I’ve created.

As for development time, it took approximately 70 hours for the HTML version and about 90 hours for the Godot version to reach a playable prototype stage. These are not exact measurements but rough estimates based on the number of days and hours worked.

Please note that this does not include the time spent preparing the demo for release; rather, it represents the time from when I started building until the prototype was in a playable state after repeated revisions.

In my experience, the Godot version, being modular in structure, presented more challenges during revisions and adjustments, and I feel this difference was reflected in the time required.

What Was Created Using Generative AI

The key point here is that even without game development experience, I was able to actually create a game prototype by leveraging generative AI.

While I used several generative AI tools, the focus of this article isn’t on comparing tools, but rather on

how far a beginner can take a project by using generative AI.

At first, I thought a simple 2D game would be the limit, but in reality, I was able to take it to a point where it felt quite like a game—including not just the mechanics but also visual effects, sound, and visual adjustments.

Of course, fine-tuning and debugging were necessary, but even so, I feel that the speed of going from “nothing” to a “playable state” is a major strength of generative AI.

Additionally, before actually creating a game like this one, I wrote an article simulating how to proceed if you were to create a game using generative AI. This project was also an effort to test just how far I could actually take a game into a tangible form using generative AI, based on the workflow and vision I had outlined in that article.

Highlights

What I want you to focus on in this prototype isn’t so much the level of polish, but rather the fact that a beginner was able to create this much using generative AI.

The highlights include the following points:

It’s actually playable

Although it’s a prototype, it’s not just a sample—it’s reached a state where you can actually play it.

I felt that even when using generative AI, it’s entirely feasible to turn an idea into a working prototype.

It captures the essence of a game, including sound and presentation

I also worked on creating the atmosphere of a game through sound effects, background music, and visual adjustments.

While the visuals and sound were the areas where I struggled most with adjustments, they are also the parts I am most satisfied with as a prototype.

It’s a challenge from a beginner’s perspective

This project wasn’t undertaken by someone with prior game development experience; rather, it was an attempt to see how far I could go using generative AI starting from scratch.

For that reason, I believe the value lies not so much in the finished product, but in the fact that “a beginner’s challenge has taken shape to this extent.”

Demo Page Link | html type game & module type game

You can try out the actual game on this demo page

1. Cash Bundle Breaker 3D

An introductory video that lets you experience the game’s overall atmosphere

2. Dreamfall Sky

A video showcasing the game’s overall visuals

About the Development Experience

During the creation of this game, I made many more discoveries than I had anticipated—from visual adjustments and sound implementation to debugging and organizing the structure.

I’ve summarized these experiences in a separate article titled Development Experience.

If you’re interested in learning about the challenges I faced during development, how using generative AI helped me make progress, and my perspective as a beginner, please be sure to read that article as well.

View the Development Experience Article

Summary

Game development using generative AI isn’t magic that automatically completes everything for you.

Still, I felt it holds significant potential in helping beginners bring their ideas to life and progress to the point where they can release a playable prototype.

As a concrete example, I’m releasing a prototype of my own game here.

I hope you’ll take a look at the game itself first, and if you’re interested, please read the development experience article as well.
