Adventures in writing a custom game engine

Hello all! Between stress from University and the pressure of releasing a Steam game, sometimes you just gotta take some time off and do something relaxing that also invests in your future. Like writing a custom game engine from the ground(ish) up. Today, and hopefully in the future (if I don’t give up), I’m going to ramble about my experiences planning out and writing a custom game engine, in the hope that someone might find this interesting, or let me know if I do or say something really stupid.

But why?

Simply put: convenience. I need an engine that:

  • Handles remote collaboration through Git or other version control well
  • Is easy to port to multiple platforms (consoles included)
  • Is reasonably priced (cheap or free)
  • Supports fancy GPU stuff like shaders
  • Uses a language I like
  • Plays nice with FMOD, something my sound designer needs for future projects
  • Handles 2d pixel art well.

The more games I work on nowadays, the larger the teams seem to be. Currently PSYCRON is a two-person effort, with a bunch of other amigos helping out with feedback, testing and promotion. GameMaker Studio 2 (the engine I’m using) works great when I’m the only one making changes to the codebase, but it quickly becomes a problem if someone who isn’t me wants to try out their new art, audio, or dialogue. To get or test their assets in the game they have to either:

  1. Send me all their suggestions/changes/new files and wait for me to load them in/finish what I’m doing THEN load them in
  2. Buy/download/get comfortable with GameMaker and load them in themselves, later merging changes in through Git.

Here’s option 1, my current pipeline for jam games and the like, illustrated.


My nightmares (and life sometimes), visualized.

This isn’t feasible for longer and larger projects, which led me to set my sights elsewhere. But which engine to choose?

100 different game engines on the wall, 100 different game engines-

GameMaker Studio 2

Game Maker (or GameMaker now), my first engine and foray into “reeeal programming”.

This felt appropriate for a post about reinventing the wheel, but I couldn’t find anywhere else to put it.

I love this engine, it’s like game development comfort food for me. It has some weirdness to it (z-testing is enabled for surfaces by default, in a 2d engine, why???) but it’s great for prototyping and getting something up and running quickly. Unfortunately, when you start to look at larger projects you run into issues. With a lot of different objects and scripts the IDE starts to slow down, especially on laptops or lower end computers. While all the tools in GameMaker are pretty decent, they tend to chug if you’ve got too many open at once.

GML (GameMaker’s proprietary scripting language) has come a long way since 2012 when I first started, introducing some more OOP stuff with constructors and a sort of inheritance within them. It’s very forgiving, with semicolons being more of a suggestion than a hard rule, but it has no static typing, no simple way to give function parameters default values, and no function overloading (someone from YoYo actually personally shot this idea down when I suggested it in a beta). After three semesters of C++, these are features I’m really starting to miss. Console porting is also preeeeety expensive.

Part of my wallet died inside looking at this. Note, that’s $1000 for 12 MONTHS.

The 2.3.1 update made it play significantly nicer with Git, but it’s still not perfect. There’s also no native FMOD bindings and while I can probably write a .dll for Windows to handle that (I actually got about 90% of the way there, loading banks, playing sounds and everything) porting that to every other platform I’d want to export to would be a nightmare.

For its pricing, scalability issues and weird audio engine (which I didn’t get into here), GMS is out of the question, at least for larger projects.


Unity

Unity… is alright. If I was interested in doing 3d stuff it’d probably be my engine of choice, as it’s really great at that from what I’ve seen. One of my main issues is that it feels designed with 3d in mind, with 2d being a later addition and pixel art 2d being even less of a priority. I got to play around with it in school in a game development class (the majority of which was spent making PSYCRON, thanks Mr. Wu!) and using it feels like everything plus the kitchen sink is included.

I feel like it shouldn’t take this long?

There’s probably ways to poke Unity with a sharp stick and force it to do what I want but that seems like a lot more trouble than it’s worth.

Funny thing is that Unity says its pill collider is a cylinder collider with two half circle colliders for the end caps, so the mythical cylinder exists somewhere.

It’s also “rental software”, with a yearly cost to use its pro version, which I’m not really a fan of. As far as yearly software goes its prices are reasonable-ish ($400 a year, per person), but I prefer to pay once and own my software outright rather than Unity hooking up an IV to my wallet, with the possibility of a pro licence disappearing into smoke if their servers ever die.


Unreal Engine

Similar to Unity, it’s really good at what it does, but what Unreal does well isn’t something I’m interested in making. Its revenue share, as opposed to billing me once per year, is nice, but it has the Unity issue of being a 3d engine with 2d feeling like an afterthought.


Godot

From what I’ve seen of it: free and open source Unity without a lot of the bloat. It’s pretty awesome! Except that console porting has to be done by an external company and its FMOD situation is questionable. So close!


So, some interesting engines, but none of them really fit exactly what I want to do. With the fact that I’ve got to start job hunting soon weighing heavily on my mind, and wanting something on my resume and GitHub that isn’t “games made in a language that’s the weird kid of JavaScript and C++”, I started looking at another option: building my own engine using a framework.


SDL2

I’ve been on and off with SDL2, and if I ever really “git gud” with another graphics API I’d probably still use it to handle basic keyboard and gamepad input, because it can do that really well. SDL’s rendering was a bit of a shock at first coming from GameMaker, but I got comfortable pretty quickly. It also has native Switch support and some really nice C++ bindings!

Unfortunately, it’s missing one feature which absolutely kills me: no shader support! I use shaders a lot, from hit flashes to wave distortion to repaletting sprites (absolutely essential), and while I could probably make some simple games in SDL2, trying to expand beyond that without shaders isn’t really viable for me.


Simpler than SDL2 but covers a lot of the bases I want. It has some nice C++ bindings and supports shaders, yay! This was almost the framework I went with until I discovered that it just doesn’t support consoles. Oh well.


OpenGL

Hoooooooh boy was OpenGL tempting. Lots of tutorials, supported by literally everything imaginable, would look amazing on a resume… But it’s just too complicated for me and what I want to do. The sheer amount of work involved in getting a triangle on screen, then getting a texture on that triangle; trying to extend that into an entire engine? That’s a bit much for me. Who knows, maybe I’ll flip-flop back to it one day, but I’m definitely not ready yet (props to Troid for figuring it out and using it in NME though).



It seems I’ve hit a dead end. I’m out of options.

But wait-


In the distance? Is that the ghost of Christmas Past?


An old engine long forgotten? My saviour?

It is! Monogame!


Okay, it’s C# (ehhhhhh) not C++ BUT-

  • It supports shaders
  • It supports consoles
  • It’s designed to work with Visual Studio so that should play well with Git right?
    • Right?
  • Real neat extendable pipeline tool
  • It’s free.

Monogame is in this weird, perfect in-between for me where it handles just enough of the under-the-hood stuff (basic sprite blitting) that I’m not worrying about making triangles and quads to render to, but also leaves me free to design the systems that I actually find interesting. So, for now at least, it’s the framework I’m gonna run with.

This post has gone on long enough, so I hope whoever stuck it out or skimmed to the end found some of this interesting. Thoughts/suggestions are appreciated if anyone’s got them. Next time, I’d like to go into detail about creating a graphics pipeline: texture atlases and my misadventures with rectangle organization. Thanks for reading!


Oops, sorry I was busy. But, now that PSYCRON is out the door I can finally sort of pick this thread up again!

Learning to love Monogame

After PSYCRON was done, I spent a couple weeks getting to grips with Monogame, and am now feeling a lot more confident in my ability to write an engine in it. It's got a nice image drawing system (called SpriteBatch) which is decently fast and relatively painless to set up. The big snag with Monogame and its built-in media functions is that they STILL use the XNA content pipeline.

XNA (and now Monogame) has a very particular way of getting assets into your game. All the tutorials and the Monogame forums heavily recommend using the built-in content pipeline tool, a Windows Forms application where you pick the assets/content to load into a project. The purpose of this magic pipeline is (I think) to make it easy to optimize assets for any platform. Issue is, it’s really really really annoying to use.

The evil content pipeline tool in all its... Non ease of use-ness...

Whenever you want to import or update an asset, you need to open up the content pipeline tool, import the asset by hand if it's new, click build, wait for the content pipeline to do its thing (hopefully fairly quickly), then manually define the frames/origin in a class. This situation is maybe okay, if a little annoying, if I'm the only person working on projects, but I'm not. With additional artists and a musician it's nuts of me to tell them "yeah just install Visual Studio 2019 and 20GB worth of tools to see your changes".

Footnote to this, the creator of FNA (another XNA fork with the goal of maintaining rather than updating like Monogame) wrote an excellent article on why the XNA content pipeline sucks. TL;DR it’s not optimizing anything, it’s just packing them all into one file.

If I’m writing a custom engine, I’m absolutely gonna take advantage of that. Fortunately there’s a function for loading a texture in from a file stream, so I use that to import images. Issue is, importing a bunch of different single animations and drawing them isn’t great. That’s a lot of separate textures loaded onto the GPU when instead we could be using the mythical texture paaaage.


My proposed improved content pipeline for XNA. Sidenote: adding custom importers to the pipeline, like .xml Tiled map files and custom sprite formats, is maddening.

Texture what?

From my *very* limited understanding of GPUs, if every image is loaded in individually, whenever the engine wants to draw a different image the GPU has to do a "swap", where it switches the current texture out for the new one. The fewer of these "swap"s the GPU is doing the better, so one solution is to whack everything onto one large texture and feed that to the GPU instead. Those large textures are often called "texture atlases" (or "texture pages" if you're a GameMaker pleb).

So, the basic goal is to feed Monogame external texture atlases (as PNGs) and the coordinates/metadata about each animation on them (as JSON). The theoretical M3D art pipeline then became:
  • All sprites are drawn in Aseprite
  • Aseprite files are saved in a directory and, using Aseprite’s plugin API (creds to @DoctorM64 for showing me that existed), converted into spritesheets
    • These spritesheets should also have metadata attached, frame count, size, duration, etc.
  • A texturepacker program scoops these up and:
    • Organizes them efficiently onto a 2048 * 2048 canvas
    • Merges all the JSON metadata for each animation into one file
    • Spits it all out as a PNG and JSON which Monogame can then load
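For illustration, the merged metadata for a single animation could look something like this (the field names here are hypothetical, not a final format):

```json
{
  "player_run": {
    "x": 0,
    "y": 0,
    "frameWidth": 32,
    "frameHeight": 32,
    "frameCount": 8,
    "frameDuration": 100,
    "originX": 16,
    "originY": 31
  }
}
```

The engine then only needs the atlas PNG plus this one file to know where every frame lives.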

Does Lua not support this or am I just stupid

Aseprite thankfully has an API, scripted in Lua, that you can use to poke its sprites with. I had not used Lua before, and after a long time writing C++ and GML it was a bit of a shock! The Aseprite documentation page doesn't really explain things; it's more up to you to look through every article and figure out how all its classes fit together. Beyond having to restart Aseprite every time I wanted to test a change to the script, I also ran into two big issues. Lua (or at least the version Aseprite rolls with) did not have:
  1. Any decent file functions
  2. Any way to import/export json

I ended up relying heavily on StackOverflow and a random "Export layers in Aseprite as separate images" Lua script that someone had written.

Eventually, after I’d written the bulk of the script, I discovered that Aseprite actually supports running scripts as command line parameters. That sped up development a lot, and after a bit more pain I had a tiny Lua script that could export a sprite strip and its metadata! Woohoo!

Look at my json exporting code and cry

This whole experience left me with so many questions about Lua. Why do if statements end with then? Why do for loops look the way they do? Why is half of my document end end end end?

The dreaded rectangle

Algorithms are not my strong suit. Balancing a RedBlack Tree != good (or ~= good if you're a Lua programmer), so the idea of writing a program to efficiently generate texture atlases was a bit daunting. However I had a plan: Leverage the raw speed of C++. If I write my code in C++, even if it's not as optimized as it could be, it'll still run pretty well compared to something like Python. So back to LazyFoo's SDL2 tutorials it was.

After a lot of poking, I had MinGW set up and simple shapes drawing. The basic algorithm I came up with for generating the texture atlas was:

  1. Import all textures
  2. Find the dimensions of each texture, represent as a rectangle
  3. Sort the rectangles from greatest to least area
  4. Place the largest rectangle in the top left of the atlas

Then, until every rectangle has been placed:

  1. Define all free space on the atlas as a series of empty rectangles
  2. Place the next largest rectangle in the smallest available space

Generating the valid free spaces turned out to be a bit more than I bargained for. This involved also introducing collision checking to make sure free spaces didn’t overlap with already placed textures. I found the free spaces by creating rectangles originating from the top right and bottom left points of each texture. I’m skipping over a lot of the stress and frustration involved in getting this working, but it was the best feeling in the world when I finally had a prototype running.
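For the curious, the whole loop above can be sketched out in a few dozen lines of C++. This is a simplified guillotine-style version of the idea, not my actual packer; `Rect` and `pack()` are illustrative names:

```cpp
#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };

// Sketch of the packing loop: sort textures from greatest to least area,
// place each into the smallest free rectangle it fits, then carve the
// leftover space into new free rectangles (one starting at the placed
// texture's top right, one at its bottom left).
std::vector<Rect> pack(std::vector<Rect> sprites, int atlasW, int atlasH) {
    std::sort(sprites.begin(), sprites.end(),
              [](const Rect& a, const Rect& b) { return a.w * a.h > b.w * b.h; });

    std::vector<Rect> freeRects = {{0, 0, atlasW, atlasH}};
    std::vector<Rect> placed;

    for (const Rect& s : sprites) {
        // Find the smallest free rectangle the sprite fits in.
        int best = -1;
        for (int i = 0; i < (int)freeRects.size(); ++i) {
            const Rect& f = freeRects[i];
            if (s.w <= f.w && s.h <= f.h &&
                (best < 0 || f.w * f.h < freeRects[best].w * freeRects[best].h))
                best = i;
        }
        if (best < 0) continue;  // no space left; real code would grow or error

        Rect f = freeRects[best];
        freeRects.erase(freeRects.begin() + best);
        placed.push_back({f.x, f.y, s.w, s.h});

        // Split the leftover space: a strip to the right of the placed
        // texture, and a strip below it.
        if (f.w > s.w) freeRects.push_back({f.x + s.w, f.y, f.w - s.w, s.h});
        if (f.h > s.h) freeRects.push_back({f.x, f.y + s.h, f.w, f.h - s.h});
    }
    return placed;
}
```

The real thing has to track which texture each rectangle belongs to and handle overlap between candidate free spaces, which is where most of the stress came from.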

Many C++ drops of blood were shed to bring you this image.

All that was left was to get the json and textures in. Whoooooooooooo boy.

Can I get a < filesystem > ?

C++ first appeared 36 years ago. It apparently took 32 years after that before anyone decided that filesystem access should be standardized (std::filesystem landed in C++17), because who would want that? Me. Me would want that. On my laptop I had some configuration set up so I just had to add a -std=c++17 flag to my makefile. I reset my laptop recently, however, and when I came back to fix up the texture packer something broke, and I lost an afternoon trying to convince C++ that the filesystem library did exist and that I wasn't just playing a really funny joke on it.

Less surprisingly, there’s no STL implementation of JSON serialization/deserialization, which was kinda important. One quick #include <json.h> later and that was sorted. Actually exporting the finished texture atlases was kinda tricky. By default none of the solutions I found exported images with transparency, so for a while every imported image had a horrible pink box around it:

Oi Jotaro, why can't I use Za Hando to erase my ugly pink outline?

A solution was eventually hacked together where I key out the pink right before exporting the image. I’m still not 100% solid on how it works; there aren’t many examples of people exporting transparent images with straight SDL2. And with that, my image packing journey was at an end.
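The keying idea itself boils down to a per-pixel pass. Here’s a sketch of the concept on a raw RGBA8888 buffer (this mirrors the idea, not my actual SDL2 export code, and assumes pixels packed as 0xRRGGBBAA):

```cpp
#include <cstdint>
#include <vector>

// Colour-key pass: any pixel whose RGB channels match the magenta key
// becomes fully transparent, regardless of its current alpha.
// Pixel packing is assumed to be 0xRRGGBBAA.
void keyOutPink(std::vector<uint32_t>& pixels) {
    const uint32_t keyRGB = 0xFF00FF00;  // magenta with the alpha bits zeroed
    for (uint32_t& p : pixels)
        if ((p & 0xFFFFFF00) == keyRGB)
            p = 0x00000000;  // transparent black
}
```

With SDL2 itself the fiddly part isn’t this loop, it’s getting the surface into a known pixel format before you touch the bytes.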

Satisfied, I hooked everything together with a small batch file and Python script which finds all the raw .aseprite files needed, runs them through my texture packer and spits the results out into a folder. C#'s support for JSON wasn’t fantastic, so that final bump was smoothed over by having the Python script convert the JSON data into XML, which C# played much nicer with.


The finished product!

That's all folks!

This post is more than long enough, so I think I'll end it here. Thanks for reading this far! Feel free to let me know if I got something wrong or there's something I could improve on. Join me next time as I (probably) talk about stuffing in FMOD and learning FMOD Studio for fancy audio shenanigans.

I swear, stuff is still being worked on! Enjoy this teaser of some exciting primitive circles in the meantime:

There’s also a YouTube video in the works retelling the first post (and, eventually, the other posts in this series), so look forward to that!
