Status update: fall 2021

Since my last status update, I’ve spent about half my time working on Runaway and half working on an Android release.

In Runaway news, I’ve made important design decisions about how to load levels. While I’m proud of the design, it’s complicated enough that I’ll have to save it for a post with the technical tag.

I’m pleased to announce that after a difficult month, Run Mobile has been updated! On Android, at least. We made it with a few days to spare before Google’s November deadline, meaning that they won’t lock the app down.

Finally, I’m continuing to work on the story in the background. I have several cutscene scripts in the works – much like the one I released in June – but it turns out, it’s easier to start new ones than to sit down and finish them. Endings are hard!

Moving forward:

  • I may take a week or so to update some other Android libraries. The past month was less painful than expected, so I might as well get those updated before Google makes another big change to break my workflow. (Update: it took one day.)
  • I’ll get back to work on Run. Add Infinite Mode, adjust the physics, maybe do another pass on the animations.
  • I’ll start rebuilding Run 2 using Runaway.
  • Maybe I’ll even finish a cutscene script.

Supporting 64-bit devices

When I left off last week, I was told that I needed to upload my app in app bundle format, instead of APK format. The documentation there may seem intimidating, but clicking through the links eventually brought me to instructions for building an app bundle with Gradle. (There are other ways to do it, but I’m already using Gradle.) It’s as simple as swapping the command I give to Gradle – instead of assembleRelease, I run bundleRelease. And it seems to work. At least, Google Play accepts the bundle.

But then Google gives me another error. I’ve created a 32-bit app, and from now on Google requires 64-bit support. I do like 64-bit code in theory, but at this stage it’s also kind of scary. I’ll need to mess with the C++ compile process, which I’m not familiar with. And I’m stuck with old versions of Lime and hxcpp, so even if they’ve added 64-bit support, I can’t use that.

Initially, I got a bit of false hope, as the documentation says “Enabling builds for your native code is as simple as adding the arm64-v8a and/or x86_64, depending on the architecture(s) you wish to support, to the ndk.abiFilters setting in your app’s ‘build.gradle’ file.” I did that, and it seemed to work. It compiled and uploaded, at least, but it turned out the app wouldn’t start because it couldn’t find libstd.so.

I knew I’d seen that error ages ago, but wasn’t sure where. Eventually after a lot of trial and error, I tracked it down to the ndk.abiFilters setting. Yep, that was it. Attempting to support 64-bit devices just breaks it for everybody, and the reason is that I don’t actually have any 64-bit shared libraries (a.k.a. .so files). This means I need to:

  1. Track down all the shared libraries the app uses.
  2. Compile 64-bit versions of each one.
  3. Include them in the build.

And I have only about a week to do it.

Tracking down shared libraries

The 32-bit app ends up using seven shared libraries: libadcolony.so, libApplicationMain.so, libjs.so, liblime-legacy.so, libregexp.so, libstd.so, and libzlib.so.

Three of them (regexp, std, and zlib) are from hxcpp, located in hxcpp’s bin/Android folder. lime-legacy is from Lime, naturally, and is found in Lime’s lime-private/ndll/Android folder. ApplicationMain is my own code, and is found inside my project’s bin/android/obj folder. From each of these locations, the shared libraries are copied into my Android project, specifically into app/src/main/jniLibs/armeabi.

The “adcolony” and “js” libraries are slightly different. Both of those are downloaded when Gradle compiles the Android app. Obviously the former is from AdColony, and I think the latter is too. Since both have 64-bit versions, I don’t think I need to worry about them.

Interestingly, Lime already has a few different versions of liblime-legacy.so, but no 64-bit version. If I had a 64-bit version, it would go in app/src/main/jniLibs/arm64-v8a, where Gradle will look for it. (Though since I’m using a Gradle plugin older than 4.0, I may have to figure out how to include CMake IMPORTED targets, whatever that means.)

As far as I can tell, that’s a complete list. C++ is the main problem when dealing with 32- and 64-bit support. On Android, C++ code has to go in a shared object. All shared objects go inside the lib folder in an APK, and the above is a complete list of the contents of my app’s lib folder. So that’s everything. I hope.

Compiling hxcpp’s libraries

How you compile a shared library varies depending on where it’s from, so I’ll take them one at a time.

I looked at hxcpp first, and found specific instructions. Run neko build.n android from a certain folder to recompile the shared libraries. This created several versions of libstd.so, libregexp.so, and libzlib.so, including 64-bit versions. Almost no effort required.

Next, I got started working on liblime-legacy.so, but it took a while. Eventually, I realized I needed to test a simple hypothesis before I wasted too much time. Let’s review the facts:

  • When I compile for 32-bit devices only, everything works. Among other things, libstd.so is found.
  • When I compile for 32-bit and 64-bit devices, it breaks. libstd.so is not found.
  • Even though it can’t be found, libstd.so is present in the APK, inside lib/armeabi. (That’s the folder with code for 32-bit devices.)
  • lib/arm64-v8a (the one for 64-bit devices) contains only libjs.so and libadcolony.so.

My hypothesis: because the arm64-v8a folder exists, my device looks only in there and ignores armeabi. If I put libstd.so there, the app should find it. If not, then I’m not going to be able to use liblime-legacy.so either.

Test #1: The 64-bit version of libstd.so is libstd-64.so. (Unsurprisingly.) Let’s add it to the app under that name. I don’t think this will work, but I can at least make sure it ends up in the APK. Result: libstd-64.so made it into the APK, and then the app crashed because it couldn’t find libstd.so.

Test #2: Actually name it the correct thing. This is the moment of truth: when the app crashes (because it will crash), what will the error message be? Result: libstd.so made it into the APK, and then the app crashed because it couldn’t find libregexp.so. Success! That means it found the library I added.

Test #3: Add libregexp.so and libzlib.so. This test isn’t so important, but I have the files sitting around, so may as well see what happens. My guess is, liblime-legacy.so is next. Result: could not find liblime-legacy.so, as I guessed.

(For the record, I’m not doing any of this the “right” way, which means if I clear my build folder or switch to another machine, it’ll stop working. But I’ll get to that later.)

Compiling Lime’s library

Like hxcpp, Lime comes with instructions, but unlike hxcpp, they didn’t work first try. From the documentation you’d think lime rebuild android -64 would do it, but that’s for Intel processors (Android typically uses Arm). So the correct command is lime rebuild android -arm64, but even that doesn’t work.

Turns out, AndroidPlatform only compiles for three specific 32-bit architectures, and ignores any others you request. I’m going to need to add a 64-bit option there.

Let’s jump forwards in time and see what the latest version of AndroidPlatform looks like. …What do you know, it now supports 64-bit architectures. Better yet, the rest of the code is practically unchanged (they renamed a class, but that’s about it). Since it’s so similar, I should be able to copy over the new code, adjusting only the class name. Let’s give that a try…

…and I figured out why the 64-bit option wasn’t included yet. The compiler immediately crashes with a message that it can’t find stdint.h. Oh, and the error occurred inside the stdint.h file. So it went looking for stdint.h, and found it, but then stdint.h told it to find stdint.h, and it couldn’t find stdint.h. Makes sense, right?

According to the tech support cheat sheet, what you do is search the web for a few words related to the problem, then follow any advice. When I did, I found someone who had the same bug (including the error message pointing to stdint.h), and the accepted solution was to target Android 21 because that’s the first one that supports 64-bit. Following that advice, I did a find and replace, searching all of Lime’s files for “android-9” and replacing with “android-21”. And it worked.

As expected, fixing one problem just exposed another. I got an error about how casting from a pointer to int loses precision. I’m certain this is only the first of many, many similar errors, since all this code was designed around 32-bit pointers. It should be fixable in one of several ways. As an example of a bad way to fix it, I tried changing int to long. A long can hold a 64-bit pointer, but it’s overkill on 32-bit devices, and it’s even possible that the mismatch would cause subtle errors.

But hey, with that change, the compile process succeeded. Much to my surprise. I was expecting an endless stream of errors from all the different parts of the code that aren’t 64-bit-compatible, but apparently those all turned into warnings, so I got them all at once instead of one at a time. These warnings ended up falling into three distinct groups.

  • Three warnings came from Lime-specific code. After consulting the modern version of this code, I made some educated guesses about how to proceed. First, cast values to intptr_t instead of int, because the former will automatically adjust for 64 bits. (Actually I went with uintptr_t, but it probably doesn’t matter.) Second, when the pointer is passed to Haxe code, pass it as a Float value, because in Haxe that’s a 64-bit value. Third, acknowledge that step 2 was very weird and proceed anyway, hoping it doesn’t matter.
  • A large number of warnings came from OpenAL (an open-source audio library, much like how OpenGL is an open-source graphics library and OpenFL is an open-source Flash library). I was worried that I’d have to fix them all by hand, but eventually I stumbled across a variable that toggles 32- vs. 64-bit compilation. The library already supported 64 bits; I just had to enable it. (Much safer than letting me implement it.)
  • cURL produced one warning – apparently it truncates a pointer to use as a random seed. I don’t know if that’s a good idea, but I do know the warning is irrelevant. srand works equally well if you give it a full 32-bit pointer or half of a 64-bit pointer.

Ignoring the cURL warning, the build proceeded smoothly. Four down, one to go.

Copying files the correct way

As I mentioned earlier, I copied hxcpp’s libraries by hand, which is a temporary measure. The correct way to copy them into the app is through Lime, specifically AndroidPlatform.hx. As with the last time I mentioned that file, the version I’m using only supports 32-bit architectures, but the latest version supports more. Like before, my plan is to copy over the new version of the function and make a few updates so it matches the old.

Then hit compile, and if all goes well, it should copy over the 64-bit version of the four shared libraries I’ve spent all week creating. And if I’m extra lucky, they’ll even make it into the APK. Fingers crossed, compiling now…

Compilation done. Let’s see the results. In the Android project, we look under jniLibs/arm64-v8a, and find:

  1. libApplicationMain.so
  2. liblime-legacy.so
  3. libregexp.so
  4. libstd.so
  5. libzlib.so

Hey, cool, five out of four libraries were copied successfully. (Surprise!)

I might’ve glossed over this at the start of this section, but AndroidPlatform.hx is what compiles libApplicationMain.so. When I enabled the arm64 architecture to make it copy all the libraries, it also compiled my Haxe code for 64 bits and copied that in. On the first try, too.

Hey look at that, I have what should be a complete APK. Time to install it. Results: it works! The main menu shows up (itself a huge step), and not only that, I successfully started the game and played for a few seconds.

More audio problems

And then it crashed when I turned the music on, because apparently OpenAL still doesn’t work. There was a stack trace showing where the null pointer error happened, but I had to dig around some more to figure out where the variable was supposed to be set. (It was actually in a totally different class, and that class had printed an error message, which I’d ignored because it appeared well before the crash.)

Anyway, the problem was it couldn’t open libOpenSLES.so, even though that library does exist on the device. And dlerror() returned nothing, so I was stumped for a while. I wrote a quick if statement to keep it from crashing, and resigned myself to a silent game for the time being.

After sleeping on it, I poked around a little more. Tried several things without making progress, but then I had the idea to try loading the library in Java. Maybe Java could give me a better error message. And it did! “dlopen failed: "/system/lib/libOpenSLES.so" is 32-bit instead of 64-bit.” Wait, why does my 64-bit phone not have 64-bit libraries? Let me take another look… aha, there’s also a /system/lib64 folder containing another copy of libOpenSLES.so. I bet that’s the one I want. Results: it now loads the library, but it freezes when it tries to play sound.

It didn’t take too long to track the freeze down to a threading issue. It took a little longer to figure out why it refused to create this particular thread. It works fine in the 32-bit version, but apparently “round robin” scheduling mode is disallowed in 64-bit code. Worse, there’s very little information to be found online, and what little there is seems to be for desktop machines with Intel processors, not Android devices with Arm processors. My solution: use the default scheduling mode/priority instead of round robin + max priority. This seems to work on my device, and the quality seems unaffected. Hopefully it holds up on lower-end devices too.

Conclusion

And that’s about it for this post. It was a lot of work, and I’m very glad it came together in the end.

Looking back… very little of this code will be useful in the future. These are stopgap measures until Runaway catches up. Once that happens, I can use the latest versions of Lime and OpenFL, which support 64-bit apps (and do a better job of it than I did here). I will be happy once I can consign this code to the archives and never deal with it again.

But.

Work like this is never wasted. The code may not be widely useful, but I’ve learned a lot about Android, C++, and how the two interact. I’ve learned about Lime, too, digging deeper into its build process, its native code, and several of its submodules. (Which will definitely come in handy because I’m one of the ones responsible for its future.)

The app is just about ready for release, with about a week to spare. But I still have a couple things to tidy up, and Kongregate would like some more time to test, so we’re going to aim for the middle of next week, still giving us a few days to spare.

Testing the Google Play Billing Library

To recap, I’ve been working to update the Google Play Billing Library, because if I don’t update it by November 1, Google will lock the app and I’ll never be able to update it again. Pretty important.

When I left off last week, I’d rewritten a lot of code, but hadn’t been able to test it due to an infinite loop. This week, I was hoping to fix the infinite loop quickly and move on to the many bugs in my as-yet-untested billing library integration.

Tracking down the infinite loop

This was a tough bug. While it’s far from the toughest I’ve ever tracked down, it seemed mysterious enough that shortly before finding it, I was nearly ready to give up and look for a workaround. (There’s a motivational poster in there somewhere.) But I’m getting ahead of myself.

At first, I had almost nothing to go off of. The app would start, log a few unrelated messages, and then hang with a blank screen, with no clue as to why. The lag (and eventual “app not responding” message) tipped me off that it was an infinite loop, but that isn’t much to go on.

My biggest clue (or so I thought) was knowing what code most recently changed: the billing library. So somehow that must be looping forever, right? Well… I commented out the “connect to billing library” code, and nothing changed. I commented out more and more features of the library, followed by lots of other related and barely-related code, all to no avail.

I finally got the app to run by disabling all internet-related libraries and features (I have a compiler flag for that), but nothing short of that worked. I started to suspect that the mere presence of the billing library caused this error, even without using it in any way. I considered dropping all network features and releasing the app with no logins, purchases, or cloud saves, since that was starting to seem easier than tracking this down. Desperate times and all that.

Then I remembered I wasn’t out of debugging techniques just yet. Next up was version control. I could rewind the clock and get a version that worked, and carefully examine the changes from there. So I reverted the GPBL and my last few weeks of code, and compiled the app the way I’d had it working.

And it still froze. Huh.

This is a lesson I’ve learned many times over, but still sometimes forget. When something makes zero sense, you need to check your assumptions. I’d assumed that since this was a new error, my new code was at fault. But now my old code didn’t work either, and that meant I was looking in the wrong place.

I forget exactly what it was I did, but knowing this, I managed to make the app crash. That’s a good thing: with a crash comes an error message, and (at least in this case) a stack trace. This instantly narrowed the search down to a single line of code, and the line of code had nothing to do with billing.

Teaching my code patience

Turns out, I’ve been overlooking a significant issue for a while now. To make a medium-length story short, cloud save setup happens partway through the game’s setup process. I don’t know (or care) exactly when, because it shouldn’t matter.

At this point in the process, it sends a request to log in to PlayFab. You have to log in eventually if you want to use cloud saves, so I figured, might as well start it immediately so it can happen in the background. What I forgot is that Haxe’s network requests (usually) don’t happen in the background. Instead, the login command puts the rest of the setup process on hold until it connects.

Not only that, the moment you finish connecting, several different classes try to make use of the cloud data. But because game setup wasn’t done, some important variables were still null, including PlayFab.instance. When PlayFab.instance is null, it runs the PlayFab class constructor. The constructor sends a request to log in. After logging in, classes try to make use of the cloud data. But PlayFab.instance is still null… so it runs the constructor again. And again.
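
Here’s a heavily simplified reconstruction of the loop. This isn’t the real code, just the shape of the problem; the names and the logInAndNotifyListeners() stand-in are mine.

//Simplified sketch, not the actual PlayFab class.
class PlayFab {
    public static var instance(get, null):PlayFab;

    static function get_instance():PlayFab {
        //The constructor runs to completion before "instance" is assigned...
        if (instance == null) instance = new PlayFab();
        return instance;
    }

    function new() {
        //...but the constructor logs in, and the login callbacks read
        //PlayFab.instance. That's still null, so get_instance() constructs
        //another PlayFab, which logs in again, and so on.
        logInAndNotifyListeners();
    }

    function logInAndNotifyListeners():Void {}
}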

There’s a long-running programmer joke that you start out wondering why your code doesn’t work, and end up wondering how it ever worked. That’s me right now. (And I particularly relate to the second comment on that post: “Or if you change your code to make it work better, it doesn’t work, so you load your backup and your original code doesn’t work anymore…”)

The solution? Just don’t log in until the game is fully ready. A little bit of patience goes a long way. (I’ve also pushed a commit in case the infinite loop comes up elsewhere.)

Surprisingly, everything above only took a couple days. It felt like longer, but my computer’s clock disagrees.

Testing the GPBL (finally)

So my plan was, after resolving the infinite loop, I’d spend the rest of the week testing the billing code. This would provide lots more material for the epic conclusion to my “why developing for Android is so annoying” series.

But this plan failed, because I only ran into two problems. The first came with an error message, and the error message told me exactly how to fix it. The second required a bit of effort to figure out why things weren’t happening in order, but it wasn’t bad enough to blog about. And then… purchases just worked.

I spent nearly half the week just kind of waiting to see if Google Play would accept or reject the app. (Answer: reject. Google now insists on app bundles, so that’s what I’m working on next.)

So, uh… maybe mobile development isn’t quite as bad as I remember.

Updating the Google Play Billing Library

As I mentioned a couple posts ago, I need to update this specific library by November. Google was nice enough to provide a migration guide to help with the process, though due to my own setup, it isn’t quite as easy as that.

As I start writing this post, I’ve just spent several hours restructuring Java code, and I anticipate several more. I’ve been tabbing between the guide linked above and the API documentation, going back and forth on class structure. I’m trying to track in-app purchases (you know, the ones bought with actual money), but this is made harder by the fact that Google doesn’t, you know, track those purchases.

The problem with purchases on Google Play

Let me introduce you to the concept of “consuming” purchases. On the Play Store, each in-app purchase has two states: “owned” and “not owned.” If owned, you can’t buy it again. This is a problem for repeatable purchases (like in Run Mobile), because you’re supposed to be able to buy each option as much as you like. Google’s solution is this: games should record the purchase, then “consume” it, setting it back to “not owned” and allowing the player to buy it again.

I know for a fact Google keeps track of how many times each player has purchased each item, but that information is not available to the game. I get a distressingly high number of reports that people lost their data; would’ve been nice to have a backup, but nope.

There’s one upside. In the new version of the library, Google allows you to look up the last time each item was bought. So that’s a partial backup; even if all the other data was deleted, the game can award that one purchase. (Actually up to four purchases, one per price point.) That’ll be the plan, anyway.

The problem with templates

Templates are a neat little feature offered by Haxe and enhanced by Lime. As I’ve mentioned before, Lime doesn’t create Android apps directly; it actually creates an Android project and allows the official Android SDK to build the app. Normally, Lime has its own set of templates to build the project from, but you can set your own if you prefer.

That’s how I write Java code. I write Java files and then tell Lime “these are templates; copy them into the project.” Now, Lime doesn’t do a direct copy; it actually does some processing first. This processing step means I get to adjust settings. Sometimes I mark code as being only for debug builds, or only for the Amazon version of the app. (Yeah, it’s on Amazon.)

Essentially, this functions as conditional compilation, which is a feature I use extensively in Haxe code. When I saw the opportunity to do the same thing in Java, I jumped on it.
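
For comparison, this is roughly what conditional compilation looks like on the Haxe side. (The amazon flag and the function names here are placeholders, not my real code.)

#if amazon
initAmazonBilling(); //Only included in the Amazon build.
#elseif debug
trace("Debug build; skipping billing setup.");
#else
initGooglePlayBilling(); //The default for release builds.
#end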

Problem is, the template files are not (quite) valid Java code. This makes most code helpers nigh-unusable. Well, as far as I know. Since I never expected any to work, I didn’t try very hard to make it happen. Instead, I just coded without any quality of life features (code completion, instant error checking, etc.). Guess what happens when you don’t have quality of life features? Yep, your life quality suffers.

Use the best tools for each part: Xcode to make iOS code, Android Studio to make Android code and VSCode (or your other favourite Haxe editor) to make Haxe code

—Some quality advice that I never followed.

You know, I’ve always hated working on the “mobile” parts of Run Mobile. I’d do it, but only reluctantly, and it’d always be slow going, no matter how simple and straightforward everything seemed on paper. When I was done and could get back to Haxe, it’d feel like a weight off my chest. In retrospect, I think the lack of code completion was a big part of it. (The other part being that billing is outside my comfort zone.)

I’m not going to change up my development process just yet. There’s too much riding on me getting this done, and not nearly enough time to experiment. But eventually, I’m going to see about writing the Java code in Android Studio.

Conclusion

After even more hours of work, I’ve started to get the hang of the new library, rewritten hundreds of lines of code, and fixed a few dozen compile errors. I’ve removed a few features along the way, but hopefully nothing that impacts end users.

In retrospect, the conversion guide wasn’t very helpful. It provided “before” and “after” code examples, but the “before” code looked nothing like my code, so I couldn’t be sure what to replace with what. The API docs were far more useful – since everything centers on BillingClient, I could always start reading there.

As of right now, the app compiles, but that’s it. When launched, it immediately hangs, probably because of an infinite loop somewhere. Once that’s fixed, it’s on to the testing phase.

Coding in multiple languages

Quick: what programming language is Run 3 written in? Haxe, of course. (It’s even in the title of my blog.) But that isn’t all. Check out this list:

  • Haxe
  • Neko
  • Java
  • JavaScript
  • ActionScript
  • C++
  • Objective-C
  • Python
  • Batch
  • Bash
  • Groovy

Those are the programming languages that are (or were) involved in the development of Run 3/Run Mobile. Some are on the list because of Haxe’s capabilities, others because of Haxe’s limits.

Haxe’s defining feature is its ability to compile to other languages. This is great if you want to write a game for multiple platforms. JavaScript runs in browsers, C++ runs on desktop, Neko compiles quickly during testing, ActionScript… well, we don’t talk about ActionScript anymore. And that’s why those four are on the list.

Batch and Bash are good at performing simple file operations. Copying files, cleaning up old folders, etc. That’s also why Python is on the list: I have a Python file that runs after each build and performs simple file operations. Add 1 to the build count, create a zip file from a certain folder, etc. Honestly it doesn’t make much difference which language you use for the simple stuff, and I don’t remember why I chose Python. Nowadays I’d definitely choose Haxe for consistency.
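
For instance, a Haxe version of that post-build step might look something like this. (This is just a sketch; the file name is made up, and I’ve left out the zip step.)

//Hypothetical post-build script: bump the build count stored in a text file.
import sys.io.File;

class PostBuild {
    static function main():Void {
        var count = Std.parseInt(File.getContent("buildCount.txt")) + 1;
        File.saveContent("buildCount.txt", Std.string(count));
    }
}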

The rest are due to mobile apps. Android apps are written in Java or Kotlin, then built with Gradle, whose build scripts are written in Groovy. Haxe does compile to Java, but the resulting code has a reputation for being slow. Therefore, OpenFL tries to use as much C++ as possible, only briefly using Java to get the game up and running.

iOS is similar: apps are typically written in Objective-C or Swift, and Haxe doesn’t compile to either of those. But you can have a simple Objective-C file start the app, then switch to C++.

Even leaving aside Python, Batch, and Bash, that’s a lot of languages. Some of them are independent, but others have to run at the same time and even interact. How does all that work?

Source-to-source compilation

Let’s start with source-to-source compilation (the thing I said was Haxe’s defining feature) and what it means. Suppose I’m programming in Haxe and compiling to JavaScript.

Now, by default Haxe code can only call other Haxe code. Say there’s a fancy JavaScript library that does calligraphy, and I want to draw some large shiny letters. If I was writing JavaScript, I could call the drawCalligraphy() function no problem, but not in Haxe.

To accomplish the same thing in Haxe, I need some special syntax to insert JavaScript code. Something like this:

//Haxe code (what I write)
var fruitOptions = ["apple", "orange", "pear", "lemon"];
var randomFruit = fruitOptions[Std.int(Math.random() * fruitOptions.length)];
js.Syntax.code("drawCalligraphy({0})", randomFruit);
someHaxeFunction(randomFruit);

//JS code (generated when the above is compiled)
let fruitOptions = ["apple","orange","pear","lemon"];
let randomFruit = fruitOptions[Math.random() * fruitOptions.length | 0];
drawCalligraphy(randomFruit);
someHaxeFunction(randomFruit);

Note how similar the Haxe and JavaScript functions end up being. It almost feels like I shouldn’t need the special syntax at all. As you can see from the final line of code, function calls are left unchanged. If I typed drawCalligraphy(randomFruit) in Haxe, it would become drawCalligraphy(randomFruit) in JavaScript, which would work perfectly. Problem is, it doesn’t compile. drawCalligraphy isn’t a Haxe function, so Haxe throws an error.

Well, that’s where externs come in. By declaring an “extern” function, I tell Haxe “this function will exist at runtime, so don’t throw compile errors when you see it.” (As a side-effect, I’d better type the function name right, because Haxe won’t check my work.)
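
For illustration, an extern for that hypothetical calligraphy library might look something like this. (The class name and the @:native("window") mapping are my own choices; the point is just that the declaration tells Haxe the function will exist.)

//Declare the JS library's function so Haxe stops complaining.
@:native("window")
extern class Calligraphy {
    static function drawCalligraphy(text:String):Void;
}

//This now compiles, and becomes window.drawCalligraphy(randomFruit)
//in the output, which is the same thing in a browser.
Calligraphy.drawCalligraphy(randomFruit);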

tl;dr: Since Haxe creates code in another programming language, you can talk to other code in that language. If you compile to JS, you can talk to JS.

Starting an iOS app

Each Android or iOS app has a single defined “entry point,” which has to be a Java/Kotlin/Objective-C/Swift file. Haxe can compile to (some of) these, but there’s really no point. It’s easier and better to literally type out a Java/Kotlin/Objective-C/Swift file, which is exactly what Lime does.

I’ve written about this before, but as a refresher, Lime creates an Xcode project as one step along the way to making an iOS app. At this point, the Haxe code has already been compiled into C++, in a form usable on iOS. Lime then copy-pastes in some Objective-C files and an Xcode project file, which Xcode compiles to make a fully-working iOS app. (And it’s a real project; you could even edit it in Xcode, though that isn’t recommended.)

And that’s enough to get the app going. When compiled side-by-side, C++ and Objective-C++ can talk to one another, as easily as JavaScript can communicate with JavaScript. Main.mm (the Objective-C entry point) calls a C++ function, which calls another C++ function, and so on until eventually one of them calls the compiled Haxe function. Not as simple as it could be, but it has the potential to be quite straightforward.

Unlike Android.

Shared libraries

A shared library or shared object is a file that is intended to be shared by executable files and further shared object files. Modules used by a program are loaded from individual shared objects into memory at load time or runtime, rather than being copied by a linker when it creates a single monolithic executable file for the program.

Traditionally, shared library/object files are toolkits. Each handles a single task (or group of related tasks), like network connections or 3D graphics. The “shared” part of the name means many different programs can use the library at once, which is great if you have a dozen programs connecting to the net and don’t want to have to download a dozen copies of the network connection library.

I mention this to highlight that Lime does something odd when compiling for Android. All of your Haxe-turned-C++ code goes in one big shared object file named libApplicationMain.so. But this “shared” object never gets shared. It’s only ever used by one app, because, well, it is that app. Everything outside of libApplicationMain.so is essentially window dressing; it’s there to get the C++ code started. I’m not saying Lime is wrong to do this (in fact, the NDK documentation tells you to do it), I’m just commenting on the linguistic drift.

To get the app started, Lime loads the shared object and then passes the name of the main C++ function to SDL, which loads the function and then calls it. Bit roundabout, but whatever works.

tl;dr: A shared library is a pre-compiled group of code. Before calling a function, you need two steps: load the library, then load the function. On Android, one of these functions is basically “run the entire app.”

Accessing Android/iOS features from Haxe

If your Haxe code is going into a shared object, then tools like externs won’t work. How does a shared object send messages to Java/Objective-C? I’ve actually answered this one before with examples, but I didn’t really explain why, so I’ll try to do that.

  • On Android, you call JNI.createStaticMethod() to get a reference to a single Java function, as long as the Java function is declared publicly. Once you have this reference, you can call the Java function. If you want more functions, you call JNI (Java Native Interface) multiple times.
  • On iOS, you call CFFI.load() to get a reference to a single C (Objective or otherwise) function, as long as the C function is a public member of a shared library. Once you have this reference, you can call the C function. If you want more functions, you call CFFI (C Foreign Function Interface) multiple times.

Gotta say, there are a lot of similarities, and I’m guessing that isn’t a coincidence. Lime is actually doing a lot of work under the hood in both cases, with the end goal of keeping them simple.
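
To make that concrete, here’s a rough sketch of the Haxe side of both. The class, method, and library names are all made up; only the JNI.createStaticMethod() and CFFI.load() calls come from Lime, and the exact import paths can vary by version.

//Android: look up a static Java method by class, name, and JNI signature...
var showToast = lime.system.JNI.createStaticMethod(
    "com/example/MyExtension", "showToast", "(Ljava/lang/String;)V");
//...then call it like any other function.
showToast("Hello from Haxe");

//iOS: look up a C function from a native extension (the last argument is
//the number of arguments the function takes)...
var showAlert = lime.system.CFFI.load("myextension", "myextension_show_alert", 1);
//...then call it the same way.
showAlert("Hello from Haxe");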

But wait a minute. Why is iOS using shared libraries all of a sudden? We’re compiling to C++ and talking to Objective-C; shouldn’t extern functions be enough? In fact, they are enough. Shared libraries are optional here, though recommended for organization and code consistency.

You might also note that last time I described calling a shared library, it took extra steps (load the library, load the function, call the function). This is some of the work Lime does under the hood. The CFFI class combines the “load library” and “load function” steps into one, keeping any open libraries for later use. (Whereas C++ doesn’t really do “convenience.”)

tl;dr: On Android, Haxe code can call Java functions using JNI. iOS extensions are designed to mimic this arrangement, though you use CFFI instead of JNI.

Why I wrote this post

Looking back after writing this, I have to admit it’s one of my less-informative blog posts. I took a deep dive into how Lime works, yes, but very little here is useful to an average OpenFL/Lime user. If you want to use CFFI or JNI, you’d be better off reading my old blog post instead.

Originally, this post was supposed to be a couple paragraphs leading into another Android progress report. (And I’d categorized it under “development,” which is hardly accurate.) But the more I wrote, the clearer it became that I wasn’t going to get to the progress report. I almost abandoned this post, but I was learning new things, so I decided to put it out there.

(For instance, it had never occurred to me that CFFI was optional on iOS. It may well be the best option, but since it is just an option rather than mandatory, I’ll want to double-check.)

Why I haven’t updated Run Mobile in ages (part 1)

Google has announced a November 1 deadline to update Play Store apps, and I’ve been keeping an eye on that one. We’re now getting close enough to the deadline that I’m officially giving up on releasing new content with the update, and instead I’ll just release the same version with the new required library.

But why did it take this long for me to decide that? Why didn’t I do this a year ago when Google made their announcement, and keep working in the meantime? To answer that question, this blog post will document my journey to make a single small change to Run Mobile. The first step, of course, is to make sure it still compiles.

I should mention that I have two computers that I’ve used to compile Android apps. A Linux desktop, and a Windows laptop.

The Linux machine:

  • Is where I do practically all my work nowadays.
  • Performs well.
  • Is the one you’ve seen if you’ve watched my streams.
  • Has never, if I recall correctly, compiled a release copy of Run Mobile.

The Windows machine:

  • Hasn’t seen use in years.
  • Is getting old and slow; probably needs a fresh install.
  • Has the exact code I used to compile the currently-released version of Run Mobile.

Compiling on Windows

I tried this second, so let’s talk about it first. That makes sense, right?

Well, I found that I had some commits I hadn’t uploaded. Figured I’d do that real quick, and it turns out that Git is broken somehow. Not sure why, but it always rejects my SSH key. I restarted the machine, reuploaded the key to GitHub, tried both the command line and TortoiseGit, and even tried GitHub’s app, which promises that you don’t have to bother with SSH keys. Everything still failed. At some point I’ll reinstall Git, but that’s for later. My goal here is to compile.

Fortunately, almost all of my code was untouched since the last time I compiled, and so I compiled to Neko. No dice. There were syntax errors, null pointers, and a dozen rendering crashes. Oh right, I never compiled this version for Neko, because I always targeted Flash instead.

So I stopped trying to fix Neko, and compiled for Android. And… well, there certainly were errors, but I’ll get to them later. Eventually, I fixed enough errors that it compiled. Hooray!

But for some reason I couldn’t get the machine to connect to my phone, so I couldn’t install and test the app. Tried multiple cables, multiple USB ports… nothing. And that was the last straw. This laptop was frustrating enough when Git and adb worked.

Compiling on Linux

Since Git does work on this machine, I was easily able to roll my code back. I’d even made a list of what libraries needed to be rolled back, and how far. (This is far from the first time I’ve had to do this.)

With all my code restored, I tried compiling. Result: lime/graphics/cairo/Cairo.hx:35: characters 13-21 : Unexpected operator. A single error, presumably hiding many more. Out of curiosity, I checked the file in question, expecting to see a plus sign in a weird spot, or an extra pair of square brackets around something, or who knows. Instead I found the word “operator”. (Once again I have fallen victim to the use-mention distinction.) Apparently Haxe 4.0.0 made “operator” a keyword, and Lime had to work around it.

Right, right. I’d gone back to old versions of my code, but I hadn’t downgraded Haxe. I’d assumed doing so would be difficult, possibly requiring me to download an untrustworthy file. This was the point in the process when I tried to compile on Windows instead. As explained above, that fell through, so I came back and discovered I could get it straight from the Haxe Foundation. (I’d been looking in the wrong place.) Once I reverted Haxe, that first error went away.

But that was only error #1. Fixing it revealed a new one, and fixing that revealed yet another. Rinse and repeat for an unknown number of times. Plus side, it’s simpler to keep track of a single error at a time than 20 at once. Minus side, there were a lot of errors.

  1. Type not found: haxe.Exception – Apparently I hadn’t downgraded all my libraries. After some file searching, I found two likely culprits and downgraded both.
  2. Cannot create closure with more than 5 arguments – I’ve never seen this one before, and neither has Google. I never even knew that function bindings were closures. Also, I’m not sure how addRectangle.bind(104, 0x000000, 0) has more than 5 arguments (perhaps it counts the optional arguments). But this wasn’t worth a lot of time, so I used an anonymous function to do the same thing.
  3. Invalid field access : lastIndexOf – This often comes up when a string is null. Can’t search for the last index of a letter if the string isn’t there. Fortunately I’d already run into this bug on Windows and knew the solution. Haxe 3.4 tells you to use Sys.programPath() instead of Sys.executablePath(), except programPath is broken.
  4. Extern constructor could not be inlined – Another error stemming from an old version of Haxe, this comes up when you cache data between compilations. It can be fixed by updating Haxe (not an option in my case) or turning off VSHaxe’s language server.
  5. Invalid field access : __s – Another null pointer error I’d already seen. But it was at this point that I remembered not to try to compile for Neko, so I turned my focus to Android instead.
  6. You need to run "lime setup android" before you can use the Android target – So of course that wasn’t going to be easy either. Apparently I’d never told Lime where to find some crucial files. (Also apparently I’d never downloaded the NDK (https://developer.android.com/ndk/), meaning I’ve never used this machine to compile for Android.)
  7. Type not found : Jni – Wait, I (vaguely) remember that class. Why is it missing? One search later… Aha, it’s still there, it’s just missing some tweaks I made on the Windows computer. This is all for some debug info that rarely matters, so I removed it for the time being.
  8. arm-linux-androideabi-g++: not found – Uh oh. This is an error in hxcpp, a library that I try very hard to avoid dealing with. Android seems to have retired the “standalone toolchains” feature this old version of hxcpp uses, and I’ve long since established that newer versions of hxcpp are incompatible. Well, I tried using HXCPP_VERBOSE to get more info, and while it helped with a few hidden errors, I spent way too long digging into hxcpp without making much progress. Instead, I went all the way back to NDK r18.
  9. 'typeinfo' file not found – Another C++ error, great. Seems I’m not the first OpenFL dev to run into this one, which is actually good because it lets me know which NDK version I actually need: r15c. The Android SDK manager only goes as far back as r16, so I did a manual download.
  10. gradlew: not found – It might be another “not found” error, but make no mistake, this is huge progress. All the C++ files (over 99% of the game) compiled successfully, and it had reached the Gradle/Java portion, something I’m far more familiar with. Not that I needed to be, because someone else already fixed it. The only reason I was still seeing the error is that I couldn’t use newer versions of OpenFL. One quick copy-paste later, and…
  11. Cannot convert URL '[Kongregate SDK file]' to a file. – No kidding it can’t find the Kongregate SDK; I hard-coded a file path that only exists on Windows. In retrospect I should have used a relative path, but for now I hard-coded a different path. Then, to make absolutely certain I had the right versions, I copied the Kongregate SDK (and other Java libraries) from my laptop.
  12. Could not GET 'https://dl.bintray.com/[...]'. Received status code 403 from server: Forbidden – There were nine or ten of these errors, each with a different URL. Apparently Bintray has been shut down, which means everyone has found somewhere else to host their files. I looked up the new URLs and plugged them in. Surprisingly, they worked first try.

And, finally, it compiled.

Closing thoughts

And that’s why I haven’t updated Run Mobile in ages. Every time I try to compile, I have to wade through a slew of bugs. Not usually this many, but there’s always something, and I’ve learned to associate updating mobile with frustration.

I was hoping to avoid this whole process. I’d hoped to finish Runaway, allowing me to use the latest versions of OpenFL, hxcpp, and the Android SDK. But there just wasn’t enough time.

Don’t get me wrong, it feels good to have accomplished this. But as a reminder, I’ve made zero improvements thus far. I haven’t copied over any new content, I haven’t updated any libraries, I haven’t even touched the Google Play Billing Library (you know, the one that must be updated by November). I’ve spent two weeks just trying to get back what I already had.

Maybe I’m being too pessimistic here. I have, in fact, made progress since February 2018. My code now compiles on Linux, unlike in 2018. My 2018 code relied on Bintray, which is now gone. And it’s possible that new content may have been included without me even trying.

And that’s enough for today. Join me next time, on my journey to make a single small change to Run Mobile.

Haxe has too many ECS frameworks

Ever since I learned about them, I’ve wanted to use the entity component system (ECS) pattern to make games. Used properly, the pattern leads to clean, effective, and in my opinion cool-looking code. So when I was getting started on my game engine, I looked for an ECS library to build off of. And I found plenty.

The Haxe community is prolific and enthusiastic, releasing all kinds of libraries completely for free. That’s great, but it’s also a bit of a problem. Instead of working together to build a few high-quality libraries, everyone decided to reinvent the wheel.

xkcd: Standards

It did occur to me that I was preparing to reinvent the wheel, but no one had built a game engine capable of what I wanted, so I went ahead with it. Eventually I realized that’s probably what all the other developers were thinking too. Maybe there’s a reason for the chaos.

Let’s take a look at (eleven of) the available frameworks. What distinguishes each one?

Or if you want to see the one I settled on, skip to Echoes.

Ash

Let’s start at the beginning. Ash was one of the first ECS frameworks for Haxe, ported from an ActionScript 3 library of the same name. Makes sense: Haxe was originally based on AS3, and most of the early developers came from there.

Richard Lord, who developed the AS3 version, also wrote some useful blog posts on what an ECS architecture is and why you might want to use it.

Objectively, Ash is a well-designed engine. However, it’s held back by having started in ActionScript. Good design decisions there (such as using linked list nodes for performance) became unnecessary in Haxe, but the port still kept them in an effort to change as little as possible. This means it takes a bunch of typing to do anything.

//You have to define a "Node" to indicate which components you're looking for; in this case Position and Motion.
class MovementNode extends Node<MovementNode>
{
    public var position:Position;
    public var motion:Motion;
}
//Then systems use this node to find matching entities.
private function updateNode(node:MovementNode, time:Float):Void
{
    var position:Position = node.position;
    var motion:Motion = node.motion;

    position.position.x += motion.velocity.x * time;
    position.position.y += motion.velocity.y * time;
    //...
}

It honestly isn’t that bad for one example, but extra typing adds up.

ECX

ECX seems to be focused on performance, though I can’t confirm or debunk this.

As far as usability goes, it’s one step better than Ash. You can define a collection of entities (called a “Family” instead of a “Node”) in a single line of code, right next to the function that uses it. Much better organized.

class MovementSystem extends System {
    //Define a family.
    var _entities:Family<Transform, Renderable>;
    override function update() {
        //Iterate through all entities in the family.
        for(entity in _entities) {
            trace(entity.transform);
            trace(entity.renderable);
        }
    }
}

Eskimo

Eskimo is its author’s third attempt at an ECS framework, and it shows in the features available. You can have entirely separate groups of entities, as if they existed in different worlds, so they’ll never accidentally interact. It can notify you when any entity gains or loses components (and you can choose which components you want to be notified of). Like in ECX, you can create a collection of entities (here called a View rather than a Family) in a single line of code:

var viewab = new View([ComponentA, ComponentB], entities);
for (entity in viewab.entities) {
    trace('Entity id: ${entity.id}');
    trace(entity.get(ComponentB).int);
}

The framework has plenty of flaws, but its most notable feature is the complete lack of macros. Macros are a powerful feature in Haxe that allow you to run code at compile time, which makes programming easier and may save time when the game is running.

Lacking macros (as well as the “different worlds” thing I mentioned) slows Eskimo down, and makes it so you have to type out more code. Not as much code as in Ash, but it’s still inconvenient.

Honestly, though, I’m just impressed. Building an ECS framework without macros is an achievement, even though the framework suffers for it. Every single one of the other frameworks on this list uses macros, for syntax sugar if nothing else. Even Ash uses a few macros, despite coming from AS3 (which has no macros).

edge

edge (all lowercase) brings an amazing new piece of syntax sugar:

class UpdateMovement implements ISystem {
    function update(pos:Position, vel:Velocity) {
        pos.x += vel.vx;
        pos.y += vel.vy;
    }
}

You no longer have to create a View yourself, iterate through it, or type out entity.get(Position) every time you want to access the Position component. Instead, just define an update function with the components you want, and edge will automatically pass in each entity’s position and velocity, already looked up. This saves a lot of typing when you have a lot of systems to write.

edge also provides most of the other features I’ve mentioned so far. Like in Eskimo, you can separate entities into different “worlds” (called Engines), and you can receive notifications when entities gain or lose components. You can access Views if needed/preferred, and it only takes a line of code to set up. Its “World” and “Phase” classes are a great way to organize systems, and the guiding principles are pretty much exactly how I think the ECS pattern should work.

Have I gushed enough about this framework yet? Because it’s pretty great. Just one small problem.

A system’s update function must be named update. A single class can only have one function with a given name. Therefore, each system can only have one update function. If you want to update two different groups of entities, you need two entire systems. So the syntax sugar doesn’t actually save that much typing, because you have to type out an entire new class declaration for each function.

Eventually, the creator abandoned edge to work on edge 2. This addresses the “one function per system” problem, though sadly in its current state it loses all the convenience edge offered. (And the lack of documentation makes me think it was abandoned midway.)

Baldrick

Baldrick is notable because it was created specifically in response to edge. Let’s look at the creator’s complaints, to see what others care about.

• It requires thx.core, which pulls a lot of code I don’t need

That’s totally fair. Unnecessary dependencies are annoying.

• It hasn’t been updated in a long time, and has been superceded by the author by edge2

It’s always concerning when you see a library hasn’t been updated in a while. This could mean it’s complete, but usually it means it’s abandoned, and who knows if any bugs will be fixed. I don’t consider this a deal-breaker myself, nor do I think edge 2 supersedes it (yet).

• Does a little bit too much behind-the-scenes with macros (ex: auto-creating views based on update function parameters)

Oh come on, macros are great! And the “auto-creating views” feature is edge’s best innovation.

• Always fully processes macros, even if display is defined (when using completion), slowing down completion

I never even thought about this, but now that they mention it, I have to agree. It’s a small but significant oversight.

• Isn’t under my control so isn’t easily extendable for my personal uses

It’s… open source. You can make a copy that you control, and if your changes are worth sharing, you can submit them back to the main project. If you’re making changes that aren’t worth submitting, you’re probably making the wrong changes. Probably.

• Components and resources are stored in an IntMap (rather than a StringMap like in edge)

This is actually describing what Baldrick does, but it still mentions something edge does wrong. StringMap isn’t terrible, but Baldrick’s IntMap makes a lot more sense.

Anyway, Baldrick looks well-built, and it’s building on a solid foundation, but unfortunately it’s (quite intentionally) missing the syntax sugar that I liked so much.

var movingEntities:View<{pos:Position, vel:Velocity}> = new View();

public function process():Void {
    for(entity in movingEntities) {
        entity.data.pos.x += entity.data.vel.x;
        entity.data.pos.y += entity.data.vel.y;
    }
}

That seems like more typing than needed – entity.data.pos.x? Compare that to edge, which only requires you to type pos.x. I suppose it could be worse, but that doesn’t mean I’d want to use it.

Oh, and as far as I can tell, there’s no way to get delta time. That’s inconvenient.

exp-ecs

Short for “experimental entity component system,” exp-ecs is inspired by Ash but makes better use of Haxe. It does rely on several tink libraries (comparable to edge’s dependency on thx.core). The code looks pretty familiar by now, albeit cleaner than average:

@:nodes var nodes:Node<Position, Velocity>;

override function update(dt:Float) {
    for(node in nodes) {
        node.position.x += node.velocity.x * dt;
        node.position.y += node.velocity.y * dt;
    }
}

Not bad, even if it isn’t edge.

Under the hood, it looks like component tracking is slower than needed. tink_core‘s signals are neat and all, but the way they’re used here means every time a component is added, the entity will be checked against every node in existence. This won’t matter in most cases, but it could start to add up in large games with lots of systems.

Ok, I just realized how bad that explanation probably was, so please enjoy this dramatization of real events instead, featuring workers at a hypothetical entity factory:

Worker A: Ok B, we just added a Position component. Since each node needs to know which entities have which components, we need to notify them.

Worker B: On it! Here’s a node for entities with Hitboxes; does it need to be notified?

Worker A: Nope, the entity doesn’t have a Hitbox.

Worker B: Ok, here’s a node that looks for Acceleration and Velocity; does it need to be notified?

Worker A: No, the entity doesn’t have Acceleration. (It has a Velocity, but that isn’t enough.)

Worker B: Next is a node that looks for Velocity and Position; does it need to be notified?

Worker A: Yes! The entity has both Velocity and Position.

Worker B: Here’s a node that needs both Position and Appearance; does it need to be notified?

Worker A: No, this is an invisible entity, lacking an Appearance. (It has a Position, but that isn’t enough.)

Worker B: Ok, next is a node for entities with Names; does it need to be notified?

Worker A: It would, but it already knows the entity’s Name. No change here.

Worker B: Next, we have…

This process continues for a while, and most of it is totally unnecessary. We just added a Position component, so why are we wasting time checking dozens or hundreds of nodes that don’t care about Position? None of them will have changed. Sadly, exp-ecs just doesn’t have any way to keep track. It probably doesn’t matter for most games, but in big enough projects it could add up.

(Please note that exp-ecs isn’t the only framework with this issue, it’s just the one I checked to be sure. I suspect the majority do the same thing.)

On the plus side, I have to compliment the code structure. There’s no ECS framework in existence whose code can be understood at a glance, but in my opinion exp-ecs comes close. (Oh, and the coding style seems to perfectly match my own, a coincidence that’s never happened before. There was always at least one small difference. So that’s neat.)

Cog

Cog is derived from exp-ecs, and calls itself a “Bring Your Own Entity” framework. You’re supposed to integrate the Components class into your own Entity class (and you can call your class whatever you like), and now your class acts like an entity. I don’t buy it. Essentially their Components class is the Entity class, they’re just trying to hide it.

As far as functionality, it unsurprisingly looks a lot like exp-ecs:

@:nodes var movers:Node<Position, Velocity>;
override public function step(dt:Float) {
    super.step(dt);
    for (node in movers) {
        node.position.x += node.velocity.x * dt;
        node.position.y += node.velocity.y * dt;
    }
}

I was pleasantly surprised to note that it has component events (the notifications I talked about for Eskimo and edge). If Cog had existed when I started building Runaway, I would have seriously considered using it. In the end I’d probably have rejected it for lack of syntax sugar, but only barely.

Awe

Awe is a pseudo-port of Artemis, an ECS framework written in Java. I’m not going to dig deep into it, because this is the example code:

var world = World.build({
    systems: [new InputSystem(), new MovementSystem(), new RenderSystem(), new GravitySystem()],
    components: [Input, Position, Velocity, Acceleration, Gravity, Follow],
    expectedEntityCount: ...
});
var playerArchetype = Archetype.build(Input, Position, Velocity, Acceleration, Gravity);
var player = world.createEntityFromArchetype(playerArchetype);

Java has a reputation for being verbose, and this certainly lives up to that. I can look past long method names, but I can’t abide having to list out every component in advance, nor having to estimate the entity count in advance, nor having to define each entity’s components when you create that entity. What if the situation changes and you need new components? Just create a whole new entity, I guess? This engine simply isn’t for programmers like me.

That said, the README hints at something excellent that I haven’t seen elsewhere…

@Packed This is a component that can be represented by bytes, thus doesn’t have any fields whose type is not primitive.

…efficient data storage. With all the restrictions imposed above, I bet it takes up amazingly little memory. Sadly this all comes at the cost of flexibility. It reminds me of a particle system, packing data tightly, operating on a set number of particles, and defining the limits of the particles’ capabilities in advance.

OSIS

OSIS combines entities, components, systems, and network support. The networking is optional, but imposes a limitation of 64 component types that applies no matter what. (I’ve definitely already exceeded that.) I don’t have the time or expertise to discuss the can of worms that is networking, so I’ll leave it aside.

Also notable is the claim that the library “avoids magic.” That means nothing happens automatically, and all the syntax sugar is gone:

var entitySet:EntitySet;

public override function init()
    entitySet = em.getEntitySet([CPosition, CMonster]);

public override function loop()
{
    entitySet.applyChanges();

    for(entity in entitySet.entities)
    {
        var pos = entity.get(CPosition);
        pos.x += 0.1;
    }
}

I have to admit this is surprisingly concise, and the source code seems well-written. The framework also includes less-common features like component events and entity worlds (this time called “EntityManagers”).

I still like my syntax sugar, I need more than 64 components, and I don’t need networking, so this isn’t the library for me.

GASM

According to lib.haxe.org, GASM is the most popular Haxe library with the “ecs” tag. However, I am an ECS purist, and as its README states:

Note that ECS purists will not consider this a proper ECS framework, since components contain the logic instead of systems. If you are writing a complex RPG or MMO, proper ECS might be worth looking in to, but for more typical small scale web or mobile projects I think having logic in components is preferable.

Listen, if it doesn’t have systems, then don’t call it “ECS.” Call it “EC” or something.

It seems to be a well-built library, better-supported than almost anything else on this list. However, I’m not interested in entities and components without systems, so I chose to keep looking.

Ok, so what did I go with?

Echoes

Echoes’ original creator described it as a practice project, created to “learn the power of macros.” Inspired by several others on the list, it ticked almost every single one of my boxes.

It has syntax sugar like edge’s (minus the “one function per system” restriction), no thx or tink dependencies, component events, convenient system organization, and a boatload of flexibility. Despite deepcake’s (the creator’s) modesty, this framework has a lot to it. It had received 400+ commits even before I arrived, and is now over 500. (Not a guarantee of quality, but it certainly doesn’t hurt.)

Echoes’ performance

I haven’t seriously tested Echoes’ speed, but deepcake (the original dev) made speed a priority, and I can tell it does several things right. It uses IntMap to store components, it keeps track of which views care about which components (meaning it’s the first one I’m sure doesn’t suffer from the problem I dramatized in the exp-ecs section), and it doesn’t let you separate entities into “worlds.” Worlds are a good feature to have, but supporting them incurs a small performance hit. I’m not saying it’s good to be missing a feature, just that skipping it makes Echoes faster (and I haven’t needed the feature yet).

Echoes’ flexible components

Let’s talk about how components work. In every other framework I’ve discussed thus far, a component must be a class, and it must extend or implement either “Component” or “IComponent,” respectively. There’s a very specific reason for these restrictions, but they still get in the way.

For instance, say you wanted to work with an existing library, such as—oh, I don’t know—Away3D. Suppose that Away3D had a neat little Mesh class, representing a 3D model that can be rendered onscreen. Suppose you wanted an entity to have a Mesh component. Well, Mesh already extends another class and cannot extend Component. It can implement IComponent, but that’s inconvenient, and you’d have to edit Mesh.hx. (Which falls squarely in the category of “edits you shouldn’t have to make.”) Your best bet is to create your own MeshComponent class that wraps Mesh, and that’s just a lot of extra typing.

In Echoes, almost any Haxe type can be a component. That Mesh? Valid component, no extending or implementing necessary. An abstract type? Yep, it just works. That anonymous structure? Well, not directly, but you can wrap it in an abstract type. Or if wrapping it in an abstract is too much work, make a typedef for it. (Note: typedefs don’t work in deepcake’s build, but they were the very first thing I added, specifically because wrapping things in abstracts is too much work.)
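
To illustrate, here are the Haxe-side declarations involved. (The type names are made up for the example; this isn’t code from Echoes’ documentation.)

//A class from a third-party library? Fine as-is; no wrapper class needed.
//(This is a stand-in; the real Away3D Mesh has far more going on.)
class Mesh {
    public var vertexCount:Int = 0;
    public function new() {}
}

//An anonymous structure can't be used directly, but wrapping it in an abstract works...
abstract Bounds({ width:Float, height:Float }) from { width:Float, height:Float } {}

//...and so does a plain typedef (the feature I added).
typedef ScreenPosition = { x:Float, y:Float };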

All this is accomplished through some slightly questionable macro magic. Echoes generates a lot of extra classes as a means of data storage. For instance, Position components would be stored in a class named ContainerOfPosition. Echoes does this both to get around the “extend or implement” restriction and because it assumes it’ll make lookups faster. That may well be true (as long as the compiler is half-decent); it’s just very unusual.
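
As a rough idea of what that looks like (hand-written and simplified here; the real thing is generated by a macro and has more to it):

import haxe.ds.IntMap;

//One storage class per component type, keyed by entity ID (which is just an Int).
//Assumes a Position class like the one discussed elsewhere in this post.
class ContainerOfPosition {
    public static var storage:IntMap<Position> = new IntMap();
    
    public static function get(entityId:Int):Null<Position> {
        return storage.get(entityId);
    }
    
    public static function set(entityId:Int, component:Position):Void {
        storage.set(entityId, component);
    }
}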

Echoes: conclusion

I settled on Echoes for the syntax sugar and the component events. At the time, the deciding factor was component events, and I hadn’t realized any other libraries offered those. So… whoops.

I don’t regret my choice at all. The syntax sugar is great, abstract/typedef support is crucial, and the strange-seeming design decisions hold up better than I first thought. I know I found one I’m happy with, but it’s a shame that all the others fall short in one way or another.

How space works in Run

Space in the Run series is kind of complicated, all because of a decision I made early on. See, I had a problem: how can I program a game where you run on the walls and ceiling? I’d made platformers before, but never anything where “up” could become “left” at a moment’s notice.

My answer was abstraction. I would program the physics exactly the way I was used to. The Runner would move around in her own little world where “up” means up and “down” means down. This meant I could focus on getting the running and jumping physics to feel just right. Then after those physics had run, I would use a mathematical formula to rotate everything by some multiple of 90°, depending on which wall the Runner was really on.

Well, kind of. The actual details varied from game to game, but the core guiding principle remained “I want the jump physics to be easy to program.” In Run 1 I took several shortcuts. By the time I got to Run 3 and Runaway, I was using 3D matrices to accurately convert between nested coordinate spaces.

Coordinate spaces

Cartesian coordinate systems use numbers to represent points. A coordinate space is what you get when you use a coordinate system to represent physical space. Coordinate spaces can be two-dimensional, three-dimensional, or higher. (Though I’m going to focus on 3D for obvious reasons.)

A 2D coordinate space, with four points labeled. A 3D coordinate space, with one point labeled.

Those right there are coordinate spaces. Both are defined by their origin (the center point) and the axes that pass through that origin. As the name implies, the origin acts as the starting point; everything in the space is relative to the origin. Then the axes determine distance and direction. By measuring along each axis in turn, you can place a point in space. For instance, (2, 3) means “2 units in the X direction and 3 units in the Y direction,” which is enough to precisely locate the green point.

Axis conventions

Oh right, each axis has a direction. The arrow on the axis points towards the positive numbers, and the other half of the axis is negative numbers. I like to use terms like “positive X” (or +X) and “negative Y” (or -Y) as shorthand for these directions.

By convention, the X axis goes left-to-right. In other words, -X is left, and +X is right.

The Y axis goes bottom-to-top in 2D, except that computer screens start in the top-left, so in computer graphics the Y axis goes top-to-bottom. I would guess that this stems from the old days when computers only displayed text. In English and similar languages, text starts in the top-left and goes down. The first line of text is above the second, and so on. It just made sense to keep that convention when they started doing fancier graphics.

In 3D, the Z axis is usually the one going bottom-to-top, while the Y axis goes “forwards,” or into the screen. Personally, I don’t like this convention. We have a perfectly good vertical axis; why change it all of a sudden?

In Runaway, I’ve chosen to stick with the convention established by 2D text and graphics. The X axis goes left-to-right, the Y axis goes top-to-bottom, and the Z axis goes forwards/into the screen. If I ever use Runaway for a 2D game, I want “left,” “right,” “up,” and “down” to mean the same things.
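
To make that concrete, here’s what the six cardinal directions look like under this convention (illustrative values, not actual Runaway code):

class Directions {
    public static var right = { x: 1.0, y: 0.0, z: 0.0 };
    public static var left = { x: -1.0, y: 0.0, z: 0.0 };
    public static var down = { x: 0.0, y: 1.0, z: 0.0 }; //+Y is down
    public static var up = { x: 0.0, y: -1.0, z: 0.0 };
    public static var forward = { x: 0.0, y: 0.0, z: 1.0 }; //+Z is into the screen
    public static var backward = { x: 0.0, y: 0.0, z: -1.0 };
}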

Not that it matters much. Runaway doesn’t actually enforce +Y meaning “down.” I wrote comments suggesting that it should, but because of the nature of the games I was building, I never hard-coded it. Instead, I coded ways to define your own custom coordinate spaces.

Rotated coordinate spaces

You know how different game engines can assign different meanings to different axes? Well in fact, each individual coordinate space can assign different meanings to different axes. In a single game, you can have one coordinate space where +Y means “down,” another where it means “up,” and a third where it means “forwards.”

This brings us back to Run. I wanted to program character motion in a single, consistent coordinate space. I wanted to be able to say “the left arrow key means move in the -X direction, and the right arrow key means move in the +X direction,” and be done with it. Time for a couple images to show what I mean.

The Runner stands on the floor. There's a set of axes showing that -X is left, +X is right, and -Y is up, and a set of arrow keys showing the same.

Above is the basic case. You’re on the floor, and the arrow keys move you left, right, and up. If the game was this simple, I wouldn’t have anything to worry about.

The Runner stands on the right wall. The axes still show that -X is left, +X is right, and -Y is up, but now the arrow keys show -X being down, +X being up, and -Y being left.

Once you touch a wall, the Runner’s frame of reference rotates. She now has a distinct coordinate space, rotated 90° from our perspective. She can move around this space just like before: left arrow key is -X, right arrow key is +X, and jump key is -Y. It works great, until she has to interact with the level.

In her own coordinate space, she’s at around (-1, 2, 5). (Assuming 1 unit ≈ 1 tile.) In other words, she’s a bit left of center (X = -1) on the floor (Y = 2) of the tunnel. But wait a minute! In the tunnel’s coordinate space, the floor has a wide gap coming up quick. If the Runner continues like this, she’ll fall through! That can’t be right.

To fix this, we need to convert between the two coordinate spaces. We need to rotate the Runner’s coordinates 90° so that they match the tunnel’s coordinates, and then check what tiles she’s standing on. We end up with (2, 1, 5) – a bit below center (Y = 1) on the right wall (X = 2) of the tunnel. That wall doesn’t have a gap coming up, so she won’t have to jump just yet. Much better.
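
Here’s a small sketch of the conversion those numbers imply (simplified, hypothetical code; as mentioned above, Run 3 and Runaway really do this with matrices):

class WallRotation {
    //Convert from the Runner's space to the tunnel's space when she's on the right wall.
    //This is the 90° rotation from the example: (-1, 2, 5) becomes (2, 1, 5).
    public static function rightWallToTunnel(x:Float, y:Float, z:Float):{ x:Float, y:Float, z:Float } {
        return { x: y, y: -x, z: z };
    }
    
    //Going the other way (tunnel space back to the Runner's space) is the inverse rotation.
    public static function tunnelToRightWall(x:Float, y:Float, z:Float):{ x:Float, y:Float, z:Float } {
        return { x: -y, y: x, z: z };
    }
}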

I have other use cases to get into, one of which I’m still working on; it’s the one that prompted this blog post. But I’ve spent long enough on this post, and I really should get back to the code. Perhaps next week.

How much content does infinite mode have?

Ah, procedural generation. The easiest way to add replay value to a game. Or is it?

Let’s talk about content and replay value, two vague but still useful measures of entertainment. (Note: Richard Stallman published a style guide telling everyone to stop calling artistic works “content.” This isn’t really important or relevant; I just find it amusing.)

The term “content” applies to most forms of online entertainment – games, movies, comics, art, and so on. The term “replay value” obviously only applies to games, because there really isn’t the same concept in other media. Sure, you can replay a video, and people do, but not as much and not for the same reasons.

Disclaimer: this post involves a lot of personal opinion and personal experience, moreso than usual.

Defining “content”

“Content” is a measure of something, but… what? (tl;dr at bottom)

Take YouTube videos. Those are widely agreed to be “content,” but does one video equal one content? No: a hundred 5-second videos aren’t a hundred times more content than a single long video. But it isn’t video length, either. A boring hour-long livestream can feel like less content than a few minutes’ worth of 5-second videos. It seems to be a combination of video count, length, level of detail, entertainment value, and (I would argue) release schedule. We’re more likely to refer to someone as a “content creator” if they release videos at semi-regular intervals.

In video games, all kinds of things count as content. Levels, characters, weapons, items, enemies, you name it. If players are excited to see more of a thing in the next update, they’ll probably call it “content.” (You may note the two media are slightly different – a YouTube video is a single indivisible piece of content, while a game is made up of many pieces of content. But I see that as an artifact of the medium. You can’t extend or modify YouTube videos after uploading, so we don’t talk about them in those terms.)

Games receive “content updates” from time to time, and one of the things players talk about is the size of these updates. You’d think the more changes made, the bigger the update, but “changes made” isn’t really the metric players use. I’ve released content updates that took ages to make and still felt disappointingly small, because players breezed through the levels in 15 minutes.

So… time spent? The more time you spend, the more content in a game? Well no. If that were true, idle games would have more content than any other game, followed by super-grindy RPGs. But we never talk about grinding as content; in fact, it’s almost antithetical. Idle games do have content, but the “content” comes in between grinding. When you unlock a new item or upgrade or whatever the game involves, there’s a period of adjustment, where you’re exploring the new possibilities. You have to figure out how to use this new thing effectively, and when you start to figure that out, you settle into your new grinding routine. Each new thing that makes you think is a piece of content.

So… is “making you think” the definition of content? Not quite, but it’s a lot closer.

tl;dr: I’d say the act of exploration – either literally exploring an area or figuratively exploring the possibilities of a game mechanic – is what defines content. The more time you spend exploring, the more content there is.

Defining “replay value”

Replay value is how many times you can replay a game (or part of a game) without feeling like you’re grinding. There are still some challenges to be had and discoveries to be made, even if you’ve already seen each location, item, and upgrade at least once.

At least, that’s the definition I’ll use here. Note that by this definition, replay value is a type of content. (In fact, I’m going to use the terms semi-interchangeably.) Either way, you’re exploring the game’s possibilities, and you aren’t grinding.

Defining “grinding”

Grinding is the act of replaying a game (or a section of a game) that you already thoroughly understand. It’s all about repetition and execution – you’re going to do this thing over and over again until you achieve some kind of goal.

When you fight weak enemies in an RPG to gain experience, that’s grinding. A particularly boring type of grinding, and hardly the only type. You can also grind an obstacle course, with the goal of eventually beating it. You can grind for resources by combing back and forth over the areas they most commonly appear. In rare cases, even a boss fight – supposedly the most exciting part of a game – can end up being a grind if the fight is slow or depends on randomness.

Speedrunning is (IMO) the most exciting form of grinding. Speedrunners will practice their maneuvers over and over to build muscle memory, and then spend even more time attempting full speedruns until everything comes together and they pull off all the tricks in a row. It’s a spectacular end result made possible only by replaying a game for hours, days, weeks, months, even years past the “replay value” stage. The speedrunner already found/did everything the developers intended them to find/do (content and replay value), and decided to press onwards (grinding).

To be fair, speedrunners also spend a fair amount of time experimenting, hunting for exploitable bugs and faster routes. This is a sort of exploration, not grinding, so it counts as content. It’s just content the developers didn’t intend to add.

Infinite Mode

Now that I’ve given my definitions, on to the two “infinite” modes. Both Run and Run 3 have a mode by this name, and though they’re different, I built them with the same goal: use randomness to add replay value.

Run 1’s Infinite Mode is simple. Each level is a random scattering of tiles. Not fully random – there’s a bias towards clumping together – but close enough. This produces a practically endless number of levels, but not an endless amount of content. Once you’re good enough to beat the highest difficulty, it takes only a few more run-throughs before you get used to everything the mode has to offer, and “replaying” becomes “grinding.” Ultimately, there isn’t a whole lot of content here.
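
As a toy example of “random, but biased towards clumping” (hypothetical code, not Run 1’s actual generator):

class ClumpyLevel {
    //Scatter `tileCount` tiles across a grid. Isolated placements are sometimes
    //rejected, so tiles tend to clump. A higher `clumpBias` (0 to 1) means more clumping.
    //Assumes tileCount is well below width * height.
    public static function generate(width:Int, height:Int, tileCount:Int, clumpBias:Float):Array<Array<Bool>> {
        var grid = [for (y in 0...height) [for (x in 0...width) false]];
        var placed = 0;
        while (placed < tileCount) {
            var x = Std.random(width);
            var y = Std.random(height);
            if (grid[y][x]) continue;
            //Always accept the first tile; after that, prefer tiles next to existing ones.
            if (placed == 0 || hasNeighbor(grid, x, y) || Math.random() > clumpBias) {
                grid[y][x] = true;
                placed++;
            }
        }
        return grid;
    }
    
    static function hasNeighbor(grid:Array<Array<Bool>>, x:Int, y:Int):Bool {
        for (dy in -1...2) {
            for (dx in -1...2) {
                if (dx == 0 && dy == 0) continue;
                var ny = y + dy;
                var nx = x + dx;
                if (ny >= 0 && ny < grid.length && nx >= 0 && nx < grid[ny].length && grid[ny][nx]) {
                    return true;
                }
            }
        }
        return false;
    }
}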

In Run 3, the mode contains over 300 pre-made levels, which you encounter in a mostly-random order. This provides a much more varied experience, with each level having a distinct style and challenge. This is, unambiguously, a lot of content, and all it took was making 300+ levels by hand. Then there are a bunch of achievements that give bonus cash if you can beat levels in unusual ways, and upgrades to be purchased with this cash, and post-run statistics that provide tidbits of information on how you did. All this creates a much more replayable mode… in theory.

In practice, most of that stuff is for grinding, and therefore doesn’t count as “content” by my definition. The main culprits are the shop prices, some of which are astronomical. You’ll still be grinding for the Angel long after you’ve seen most of the levels. This isn’t great design, and I know that, but in my defense, it’s supposed to be temporary.

Which brings us to the future of the two modes. In short, I want to add more content to both of them, building off what already exists. I want Run 1 to generate more interesting levels, and I want to make better use of Run 3’s existing 300+ levels.

The future of Infinite Mode in Run 1

For this mode, I want to start employing patterns. So many of my hand-made levels were made by creating a simple pattern, then repeating it. It shouldn’t be hard for a computer to do the same, even if it won’t do as good a job.

On top of that, adding an option to share random seeds will instantly add replay value. Not by adding new levels or anything; that’s what the patterns are for. No, seeds will make the existing levels more interesting to explore. Now you have more reason to pay attention to little details, because if you find an interesting randomly-generated level, you can save it for later and share it with friends.

For those out of the loop, computers rarely use truly-random numbers. Instead, they use complicated (but fully deterministic) formulas to simulate randomness. The mathematics are involved, but all you really need to understand is that they start with a seed: a single number serving as a starting point. Each time you start with the same seed, you’ll get the same “random” values in the same order. That would make Infinite Mode generate the same levels in the same order, down to the tile.
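
Here’s a bare-bones sketch of the idea, using a simple linear congruential generator. (This is illustrative; it isn’t the generator any Run game actually uses.)

class SeededRandom {
    var state:Int;
    
    public function new(seed:Int) {
        state = seed;
    }
    
    //Returns a pseudo-random integer from 0 up to (but not including) max.
    public function nextInt(max:Int):Int {
        //Standard LCG constants. The exact values matter less than the fact that
        //they're fixed, so the same seed always produces the same sequence.
        state = (state * 1664525 + 1013904223) & 0x7FFFFFFF;
        return state % max;
    }
}

Two players who plug in the same seed get the same sequence of numbers, and therefore (all else being equal) the same levels.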

Well, kind of. Infinite Mode’s levels are based on two factors: randomness and difficulty. Difficulty is constantly changing based on the player’s performance, not based on the seed. You’d only get the same levels if the difficulty value happened to be the same. Fortunately, once you unlock a difficulty, you can go back and replay lower difficulties to see what you missed.

I think this will matter most to speedrunners. Now instead of each level being its own self-contained challenge, you’re playing a set of 100 connected levels, and you know them all in advance. You have to beat only about 10% of them, and the trick is to pick the easiest and fastest ones. So say you just beat difficulty 35 and jumped to 45, but you already know that 45 is annoying and inconvenient. You might choose to fall back to 44 and play that instead, even though that reduces your progress by 1 overall. That’s 100 levels’ worth of possibilities to explore, with almost no extra effort on my part.

tl;dr: I want to generate slightly better levels, and provide a way to replay ones you like. Both would add a modest amount of replay value.

The future of Infinite Mode in Run 3

I have multiple plans for this mode. I want to make the levels more interesting in and of themselves. I want to expand the upgrade system, including upgrades to income (so the Angel doesn’t take so ridiculously long). I want to arrange the levels in more interesting ways, with occasional choice points. I want more risk-reward tradeoffs.

I plan to make the levels more interesting by adding, removing, or moving tiles. The same sort of thing that Run 1 does, except less random. This doesn’t add much new content, as the levels will be mostly the same, but it’ll keep players from getting complacent.

Everything else – upgrades, choice points, tradeoffs – will be bona fide new content, and will take a lot of effort to design. Each will introduce new strategic choices, at different levels of gameplay. Tradeoffs happen in the moment, and you succeed or fail after a few seconds. At a choice point, you commit to a branch of the tunnel, and that branch determines the next several levels. It’s a higher-level decision with longer-lasting consequences. Upgrades happen before you even set out, and affect an entire run, creating some meta-level gameplay where you decide what to bring.

Each higher-level decision will affect the lower-level ones, creating more possibilities to explore. As an example, suppose one of the upgrades is a flashlight. If you bring that along, you might feel more comfortable venturing into a low-power branch of the tunnel, because you’d still be able to see a little. But then once you’re there you come across a long-jump challenge, and you decide not to attempt the jump because the flashlight doesn’t reach that far. Instead you go around and pass up that reward. If only you’d brought a mobility upgrade in place of the flashlight… but in that case would you have risked coming to the low-power branch?

tl;dr: There’s still a lot of potential to spice up Run 3’s Infinite Mode, and I plan to do it by adding game mechanics. Adding a few more levels won’t make much difference, but new mechanics will add replay value to all 300+ levels simultaneously.

What is Runaway?

I’ve been working on Runaway since (at least) 2019, and talking about it in vague terms for about that long. But what is Runaway, really?

Before getting started, let’s clear up some terminology. Runaway is a game engine, named after Run (the game series) and Away3D (the 3D rendering library). The Runway is a tunnel in Run 3 that hasn’t been released. Despite the similar names, there’s no connection.

Overview of Runaway

A game engine is a collection of code designed to help people write games. You may have heard of engines such as Unity, Unreal, or GameMaker. Haxe – the language I named this blog after – has engines such as HaxeFlixel, HaxePunk, and Armory. All of these are designed to serve as a solid foundation for making games, saving you the time of re-writing your physics and rendering code each time.

Which is an odd thing for me to worry about, since I’ve never been afraid to re-write my physics and rendering code, or anything else really. Each game in the Run series was re-written from the ground up, and Runaway is its own ground-up rewrite. Though it’s taken years, I’ve learned a lot each time, and I’m building Runaway because I finally feel ready to build a standalone game engine, separate from the games themselves.

Entity-component-systems

So what sets Runaway apart from all the other engines? The big difference is, it uses an entity-component-system model. This model isn’t strictly better or worse than conventional engines, but to me it feels more elegant. (And if I’m being honest, the aesthetics are what won me over.) Practically speaking, the model lends itself to loose coupling, meaning Runaway should be especially versatile.

As the name implies, there are three important pieces here:

  • Entities are, you know, things. They can be anything from the tangible (characters, obstacles, items) to the vague (load triggers, score trackers), depending on what components they have.
  • Components are properties of entities. Things like position, size, shape, AI, abilities, and appearance. Each component is a small piece of data, and entities can have as many as needed. Note that components are nothing but passive data storage. They don’t act or update on their own.
  • Systems run the actual code that updates entities and components. Each system looks for a certain set of components, and updates only the entities with that set.

That’s the magic of the ECS model: what you are (an entity’s components) determines what you do (a system’s code).

Alice and Bob

Let’s walk through an example to see this in action.

var alice = new Entity();
var bob = new Entity();

Right now, Alice and Bob have nothing but a unique ID (Alice is 0 and Bob is 1). They aren’t characters yet. I’d describe them as “floating in a void”, but they can’t even do that because they don’t have positions. Let’s fix that.

alice.add(new Position(0, 0, 0));
bob.add(new Position(5, 0, 0));

Now that we gave them Position components, they’re floating in an endless void, doing nothing. How about a race?

alice.add(new Velocity(0, 0, 0));
alice.add(new Acceleration(0, 0, 1));
bob.add(new Velocity(0, 0, 10));

With the addition of new components, they immediately spring into action! Bob takes a commanding lead, moving at a steady 10 units per second. Alice, meanwhile, accelerates slowly, gaining 1 unit per second of speed every second. Even after 5 seconds, she’s only moving at half Bob’s speed and has traveled only 12.5 units, compared to Bob’s 50. Bob will keep widening the gap over the next several seconds, but his speed is fixed. As Alice continues to accelerate, it’s only a matter of time before she overtakes him (around the 20-second mark, when her ½ × 1 × t² units of distance finally catch up to his 10 × t).

But where is the code for this? Alice and Bob’s positions are changing every frame, even though Position, Velocity, and Acceleration components are nothing but data. It’s because Runaway has a class called MotionSystem, which has been waiting all this time for these components to show up.

class MotionSystem extends System {
    @update private function accelerate(acceleration:Acceleration, velocity:Velocity, time:Float):Void {
        velocity.x += acceleration.x * time;
        velocity.y += acceleration.y * time;
        velocity.z += acceleration.z * time;
    }
    
    @update private function move(velocity:Velocity, position:Position, time:Float):Void {
        position.x += velocity.x * time;
        position.y += velocity.y * time;
        position.z += velocity.z * time;
    }
}

For those unfamiliar with Haxe, these are two “functions” named accelerate and move. The accelerate function takes three “arguments”: acceleration, velocity, and time, and it modifies velocity. The move function takes velocity, position, and time as arguments, and it modifies position.

Because these functions are marked @update, they will automatically run once per entity per frame, but only if the entity in question has the correct components. Ignore time (it’s always available), but the other arguments must match. That means having a Position component isn’t enough, because move also requires Velocity. (Just Velocity wouldn’t be enough either.)

Now that Alice and Bob have both components, the move function automatically updates Position each frame. Alice also has an Acceleration component, so she meets the criteria for the accelerate function, and therefore accelerates as well.
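
To drive home that last point, here’s a hypothetical entity that neither function will ever touch:

var chair = new Entity();
chair.add(new Position(3, 0, 7));
//The chair has a Position but no Velocity, so move() skips it.
//It has no Acceleration either, so accelerate() skips it too. It just sits there.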

While the physics described here aren’t anything impressive, the important thing to notice is how easy it is to add (or not add) functionality to an entity. Acceleration is baked into most physics engines, but in Runaway, it’s totally optional.

This is why I consider Runaway to be flexible. I can write extremely specific code, tailored to all kinds of specific situations, and then pick and choose which to apply to which entity.
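
For example, here’s the kind of system I could add next. (Hypothetical code: Runaway doesn’t currently have this system, and the Friction component with its amount field is made up for the example.)

class FrictionSystem extends System {
    //Runs only for entities that have both a Friction and a Velocity component.
    @update private function applyFriction(friction:Friction, velocity:Velocity, time:Float):Void {
        var factor = Math.max(0, 1 - friction.amount * time);
        velocity.x *= factor;
        velocity.y *= factor;
        velocity.z *= factor;
    }
}

Entities that opt in by having a Friction component slow down over time; everything else is unaffected.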

Runaway’s current status

As of this writing, Runaway is being used in a single game: Run 1’s HTML5 port. It has all the features needed for that simple game, and a few more, but it’s also missing things I’ll need going forwards.

One of the improvements I need to make is the loading system. The current system requires pre-determined levels, such as the 50 levels in Run 1. But Infinite Mode generates levels on the fly, with difficulty based on your performance in the previous level, and that requires something more flexible.

I do hope to release the engine at some point, just not for a while. First I want to complete multiple games, including at least one not in the Run series, to be sure it’s stable and versatile enough to compete with the existing engines.