2015/10/02

The Character Controller: Tile Map Collisions, Part 2

As expected, this one was a little tough to work out.

Basically, I added a diagonal wall collider to the tile map. Like, three times. Because the first two attempts were really hard to understand, built ad hoc and incredibly buggy.
So, I decided to take a step back and formalize my collision system first, with some basic vector math to unify pretty much any collider in a single concept. I then rewrote the Impassable collider within that system, fixed a couple of bugs, and then implemented diagonal walls, which actually kinda work now.
"Kinda", because some arcane collider configurations allow you to faze through walls, but I think I already found the cause of that, I just didn't manage to fix it yet. It'll have to wait until my debugging utilities are overhauled, which I'm considering doing next. Not sure though.

The Theory

Before we jump into the code, let's talk about how this will go down conceptually.

Collision detection is now handled inside a loop. In each iteration, we move the character as far as it can go without causing a collision, and independently of that, store the direction it should go in the next iteration, which is determined by what it would collide with first, and how it is oriented relative to it.
This allows us to suggest a new direction to go, independent of whether going that way would also cause a collision.

To determine how far the character can go, we use a generalized function that finds the intersection point of two line segments in terms of the point's distance from one segment's starting point, relative to the segment's length.
The first line segment starts at one of the corners of the character's hit box before the movement, and ends at the same corner after the movement. The value we get back for every test should be in the range [0, 1] if there's a collision. If we multiply the speed by that value and then move, we won't get a collision.

To make the character slide along a wall he didn't approach head-on, we need to figure out the remaining speed of the character, which is 1 minus the value from before, multiplied by the speed.
We then need to perform what I call a half-reflection about the wall's normal on the remaining speed vector. This is just the average of the original vector and a properly reflected one, which can be calculated more elegantly by simply leaving out the factor 2 in the reflection formula.

Implementing Helper Functions

As a prerequisite to the actual collision logic, we'll need the two functions mentioned above.
Starting with the line segment intersection test, we'll additionally need a CrossVector2 function, defined as the z-component of the cross product of two 2D vectors embedded in 3-space.
Sounds a lot scarier than it is:
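A minimal sketch of it:

    // The z-component of the 3D cross product of two vectors in the z = 0 plane.
    // Its sign tells us on which side of a the vector b lies; it's zero if they're parallel.
    private static float CrossVector2(Vector2 a, Vector2 b)
    {
        return a.X * b.Y - a.Y * b.X;
    }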
Next, the actual intersection test function:
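A sketch of how I'd write it (the parameter names follow the usage described below):

    // Returns how far along segment A the two segments intersect, as a fraction
    // of A's length (0 = startA, 1 = startA + lengthA), or null if they miss.
    private static float? LineIntersection(Vector2 startA, Vector2 lengthA,
                                           Vector2 startB, Vector2 lengthB)
    {
        float denominator = CrossVector2(lengthA, lengthB);
        if (denominator == 0) return null; // parallel or degenerate segments

        Vector2 startOffset = startB - startA;
        float t = CrossVector2(startOffset, lengthB) / denominator; // along A
        float u = CrossVector2(startOffset, lengthA) / denominator; // along B

        // The intersection has to lie within both segments to count.
        if (t < 0 || t > 1 || u < 0 || u > 1) return null;
        return t;
    }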
Note that floating point values, by their nature as discretized real numbers, are imprecise; you shouldn't just test them for equality as I do here (rather, you should test whether they are within a certain margin of error, often called epsilon), but I didn't want to pollute my example code with those checks.

Finally, the half-reflect operation, which is really quite simple:
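In code, assuming the normal is unit length:

    // Average of a vector and its reflection about a plane with the given normal;
    // compare v - 2 * Vector2.Dot(v, normal) * normal for a full reflection.
    private static Vector2 HalfReflect(Vector2 v, Vector2 normal)
    {
        return v - Vector2.Dot(v, normal) * normal;
    }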

Rewriting AABB Collisions

This was actually the hardest part, but after this, implementing new collider shapes became dead simple.
To keep the first steps simple, let's rewrite it so that we simply stop moving on collision, no sliding for now. We can actually keep the outer condition from our old TestSquareCollision function; unless the bounding boxes intersect, we don't need to do anything.
If there is an intersection, however, we need to make some calls to LineIntersection. Which line segments do we test for intersection, though? I already mentioned that lengthA will always be positionOffset, and that startA will be a corner of the character's hit box. The other line segment is obviously an edge of the hit box we're testing against, but which corner and edge, respectively, depends on the direction the character is going.
If they're moving away from an edge, there's little point in performing a collision test. Similarly, if the positionOffset points inside the character's hit box, we don't need to test that corner.

We'll do the axes separately again, testing both corners on the side that lies further into the direction of movement. That does mean that we're testing one corner twice, during diagonal movement, but against different edges.
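In sketch form, for the horizontal axis (topLeftOld/bottomRightOld are the character's bounds before the move; otherTopLeft/otherBottomRight are stand-ins for the static box's corners):

    if (positionOffset.X > 0) // moving right: test against the other box's left edge
    {
        Vector2 edgeStart = otherTopLeft;
        Vector2 edgeLength = new Vector2(0, otherBottomRight.Y - otherTopLeft.Y);

        LineIntersection(new Vector2(bottomRightOld.X, topLeftOld.Y), positionOffset, edgeStart, edgeLength);
        LineIntersection(bottomRightOld, positionOffset, edgeStart, edgeLength);
    }
    else if (positionOffset.X < 0)
    {
        // same idea with the two left corners, against the other box's right edge
    }
    // ... plus the analogous tests for the vertical axis ...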
Great. That won't actually do anything, though, because we're just discarding the results. We want to find the smallest, non-null result that is in the range [0, 1], let's call it the collisionPoint, and then multiply the positionOffset by that.
Finding the minimum is pretty easy, as we already come across every value (no extra looping required); we just need to start with a collisionPoint = 1 and reassign it every time we come across a smaller value that is no less than zero.
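Something along these lines, repeated for every corner/edge pair we test:

    float collisionPoint = 1;

    // for each corner/edge test:
    float? result = LineIntersection(corner, positionOffset, edgeStart, edgeLength);
    if (result.HasValue && result.Value >= 0 && result.Value < collisionPoint)
    {
        collisionPoint = result.Value;
    }

    // after all tests: stop just short of the first thing we'd hit
    positionOffset *= collisionPoint;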

Running this, the character feels horrible to control, so let's add sliding back in. We already did most of the work by implementing HalfReflect; all that's left is reorganizing our code to use that.
Really, we just need to do some additional bookkeeping; maintain a Vector2 wallNormal that points away from the edge whose collisionPoint we use.
At the end, before multiplying the positionOffset, we then calculate the Vector2 remainder, which we optionally return via a ref parameter:
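Roughly like this - wherever we accept a new collisionPoint, we also record the corresponding wallNormal, e.g. new Vector2(-1, 0) for the other box's left edge:

    // remainder is the optional ref parameter mentioned above
    if (collisionPoint < 1)
    {
        remainder = HalfReflect((1 - collisionPoint) * positionOffset, wallNormal);
    }
    positionOffset *= collisionPoint;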
With that, we can go back to our Update method and add the loop that applies the remainder again.

To do that, we wrap pretty much everything after input processing in a for loop - you could use an infinite loop that breaks when there is no remainder, but that's risky and potentially slow - and maintain a Vector2 nextIterationOffset, which we can apply at the end of the loop, like so:
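A sketch of that structure (the iteration cap is an arbitrary safety net, not a tuned value):

    Vector2 nextIterationOffset = Vector2.Zero;
    for (int i = 0; i < 4; i++) // 4 is arbitrary; 2-3 iterations usually suffice
    {
        // ... all the collision tests go here, writing their remainder
        //     into nextIterationOffset via the ref parameter ...

        Character.Master.WorldPosition += positionOffset;

        if (nextIterationOffset == Vector2.Zero) break;
        positionOffset = nextIterationOffset;
        nextIterationOffset = Vector2.Zero;
    }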

Adding Triangular Colliders

Finally, something new!
To add a new collider type to the tile map, we first of all need to add a new value to the TileCollider enumeration. I'll just do one orientation and leave the rest as an exercise for the reader, so let's call this one CornerNE, meaning the beveled north-eastern corner of a room.
To make the thing functional, we add a corresponding case to the switch statement in CharacterController.Update.

Therein, we can now write LineIntersection(new Vector2(bottomRightOld.X, topLeftOld.Y), positionOffset, new Vector2(x, y), Vector2.One); (x and y are the coordinates of the TileCollider, remember?) and can basically treat the result as we did in TestSquareCollision.
There are two problems with this, though:
  • If you move up to the wall, such that LineIntersection returns 0, and then move away from the wall, you'll actually slide, even though that shouldn't happen (unless the wall is magnetic, for instance). To fix that, we can add another condition on the positionOffset's direction, only testing when you're moving towards the wall on at least one axis.
  • If you approach such a block from the other side (where a wall should be), you can move all the way through, making this a one-sided wall. The trick here is to goto case TileCollider.Impassable if the character's right is right of the tile collider's right, or their top is above the tile collider's top. This covers not only the right and upper side of the wall, but also its lower-right and upper-left corners.
In its entirety, the new case looks like this:
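Or rather, something like this - a sketch following the description above, reusing the collisionPoint/wallNormal bookkeeping from the AABB rewrite:

    case TileCollider.CornerNE:
        // Behind the diagonal (and at its outer corners), act like a full block:
        if (bottomRightOld.X > x + 1 || topLeftOld.Y < y)
            goto case TileCollider.Impassable;

        // Only test while moving towards the wall on at least one axis,
        // so resting against it doesn't cause phantom sliding:
        if (positionOffset.X > 0 || positionOffset.Y < 0)
        {
            float? result = LineIntersection(
                new Vector2(bottomRightOld.X, topLeftOld.Y), positionOffset,
                new Vector2(x, y), Vector2.One);

            if (result.HasValue && result.Value >= 0 && result.Value < collisionPoint)
            {
                collisionPoint = result.Value;
                wallNormal = Vector2.Normalize(new Vector2(-1, 1)); // away from the wall, into the room
            }
        }
        break;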

That wraps up this lesson, but as I said in the beginning, there's still a bug lingering inside this somewhere, so do beware.
The last collider type I want to implement is staircases, which are kind of like diagonal walls, except they are entirely passable, and will always reflect the entire position offset. Not sure if that actually works, but either way, it has some huge implications for animations (which I'll have to rethink anyway).
I'm not sure which problem I'll tackle next, but I'll certainly post about it. Until then!

2015/09/23

The Character Controller: Tile Map Collisions, Part 1

Yeah, I'm now doing a series within a series. Great.
I really wanted to do everything about grid based collision testing in one post, but I got the basics working in two or three different ways already, and haven't figured out non-square colliders quite yet. Since there's actually multiple of those that I want to make, I figured I have to make this at least three posts.

This is the first part. The easy part. Sigh.

Defining the Collision Map

First of all, we need to go back into the TileMap class and add collision data.

We could do crazy stuff like having a more fine-grained collision map, so you'd have four (or nine, or even sixteen) pieces of collision data per visual tile. This is actually what the first Pokémon games did, though their blocks were actually 32 by 32 pixels.
But! That's quadratically more colliders we need to test for an equally large character hit box. Time we could spend elsewhere. Like on more complex colliders that give us floating precision, rather than keeping us tied to the grid. We'll get to those later.

For now, let's just define a field of type TileCollider[,]. We'll then need to define the TileCollider type. If you wanted to parametrize the complex colliders, you could use a struct. I didn't see the need, so I went with an enum. We can switch later on, and define constants for the non-parametrized ones, leaving the previously written code functioning.
Since we're only doing the basics today, the only values we'll need are TileCollider.Passable and TileCollider.Impassable. The former isn't actually a collider and just lets us pass through, while the latter is a square collider around the entire block.
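For now, the whole thing boils down to this:

    public enum TileCollider
    {
        Passable,   // no collider at all
        Impassable, // a square collider covering the whole tile
        // fancier shapes will be added in parts 2 and 3
    }

    // in TileMap:
    private TileCollider[,] collisionMap;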
Add some code to the test constructor and we should be good to go.

Identifying Relevant Grid Cells

Back in the character controller, we now have to make use of the collision map somehow. It would be pretty easy to just iterate over the entire map and collision test against all cells. There are two problems with that: it's really unnecessary, because we can probably identify which cells are relevant, and it gets worse because we might have to do two passes, depending on how we implement some colliders.

We could use the same optimization here as we did with tile rendering: calculate the bounds of the relevant subsection of the grid and only loop over that:
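In sketch form, assuming one world unit per tile, with newTopLeft/newBottomRight being the character's bounds after the proposed movement:

    int left   = (int)Math.Floor(newTopLeft.X);
    int top    = (int)Math.Floor(newTopLeft.Y);
    int right  = (int)Math.Floor(newBottomRight.X);
    int bottom = (int)Math.Floor(newBottomRight.Y);

    for (int y = top; y <= bottom; y++)
    {
        for (int x = left; x <= right; x++)
        {
            // per-cell collision testing goes here
        }
    }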
Do that twice - with slightly different loop bodies - and you should be able to get even complex colliders to work.

There's other options, though. For one, we probably don't actually need to do collision tests on the cells that don't intersect with the edges of the hit box. However, unless the hit box is larger than 1.0 on both axes, that's zero cells, so that's a completely stupid (and surprisingly hard) optimization to make, unless you want to have the player control a larger character at some point.
Or you want a theoretically complete, reusable engine. Which is a kind of ridiculous idea, because you'll probably want to make some changes with every new game you build, no matter how good your engine is.

You could also look at only the corners of the hit box in the first pass, and only the edges in the second pass. We'll get into why that would work (and what you need two passes for in the first place) in the next post, but for now, suffice it to say that it would probably work.

If you get either of the edge/corner only approaches to work, these are pretty easy to optimize, as you only need to check one edge (two corners) during horizontal and vertical movement, and two edges (three corners) during diagonal movement.
But both of them are somewhat inconvenient to implement, so I'm not sure that's really worth it, especially if you consider that the character controller only runs once per frame. Maybe two or three times, if you wanna have the kind of enemy/puzzle element that imitates the player's movement in a different collision environment.
Even if you have to optimize it, you'll probably want to start with entity collisions, because that takes a variable amount of time, depending on the number of entities.

Detecting and Handling Tile Collisions

Now that we have a loop, we can start detecting and handling collisions. How that is done depends on the collider, though, so let's start with a switch statement like this:
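A sketch, with x and y being the cell coordinates from the loop above:

    switch (collisionMap[x, y])
    {
        case TileCollider.Impassable:
            // square collision test goes here
            break;

        case TileCollider.Passable:
        default:
            break; // unobstructed movement
    }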
The passable collider doesn't actually have to do anything, since movement is supposed to be unaffected there, so we'll just leave it like that, and make it the default behaviour as well.
As for the impassable collider, we actually kind of have that already, as entity colliders are rectangles, and we just want a good ol' square collision test.

I decided to make a function I called TestSquareCollision, which takes the positionOffset as a ref parameter, so it can modify it without returning it. I'm not sure implementing it exactly like that was the best idea, as there's some stuff in the entity collision handling that has yet to be added and that doesn't need to be shared with grid collision testing: the ColliderComponent.Passable flag, which determines whether the collision should be handled or merely detected, and script hooks which may need to be called.

Either way, implementing the impassable collider now becomes trivial by calling TestSquareCollision like this:
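Assuming an overload that takes the tile's top-left corner and its size:

    case TileCollider.Impassable:
        // Each tile is one world unit, starting at its grid coordinates.
        TestSquareCollision(ref positionOffset, new Vector2(x, y), Vector2.One);
        break;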
Before we wrap this up, there's one more thing: If we leave the bounds of the map, we'll be greeted by an IndexOutOfRangeException, because we try to test colliders outside of the collision map.
That's not really acceptable, so let's implement something akin to border blocks. Except we don't really need a border block collision map. Generally, we don't want the player to reach outside the map, so we can safely prevent it like so:
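One way to do that, assuming the relevant-cell loop isn't clamped to the map (mapWidth/mapHeight are stand-ins for however you store the dimensions), is to treat every out-of-bounds cell as an impassable tile without ever storing it:

    if (x < 0 || y < 0 || x >= mapWidth || y >= mapHeight)
    {
        // virtual border block: same square test, no collision map access
        TestSquareCollision(ref positionOffset, new Vector2(x, y), Vector2.One);
        continue;
    }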
Testing this will reveal another bug, though. When sliding along the edge of the map via diagonal movement, we'll encounter the "literal corner case" from before, which we currently handle by throwing an exception. That at least brought the issue to light, though the fix is just not doing anything, as it turns out. You may still want to log it, in case it occurs in other situations where doing nothing is inappropriate, but for now it's fine.

And that's the basics, really. If you don't want fancy collider shapes, you're done. Imma need at least six more (though that's really more like two in various orientations), which we'll cover in parts 2 and 3.
Until then!

2015/09/21

The Character Controller: Entity Collision Testing

Now that we can place sprite objects on our tile maps, it's time to make stuff move. We already did that with the AnimationComponent, to some degree, but we should give the player some control now, shouldn't we?

Basic Movement

Thankfully, we already have our input mapping down, so there really isn't anything to get in our way with this. Let's just start with a CharacterController class like this:
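Something like this, where InputMapping is a stand-in for whatever the input mapping system from a few posts back provides:

    public class CharacterController
    {
        public Entity Character;
        public float MovementSpeed = 4.0f; // world units per second - an arbitrary value

        private InputMapping input;

        public CharacterController(Entity character, InputMapping input) { /* ... */ }
    }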
You can imagine how the constructor goes, so let's jump straight to Update(). To consistently move (and later, animate) our character properly, we'll need the time that has passed since the last update, but for now, that's pretty much it:
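The skeleton, using MonoGame's GameTime:

    public void Update(GameTime gameTime)
    {
        float deltaTime = (float)gameTime.ElapsedGameTime.TotalSeconds;

        // input processing, movement and collision testing will go here
    }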
Now, let's think about how we actually want to do this.
The very first thing we'll do is process input, since there is no point in doing anything else until we're certain the player actually wants to move their character. Now that we know we're not dealing with zeroes, we should also normalize the direction vector so diagonal movement doesn't become absurdly fast (although that might be an interesting option if you want to optimize your game for speed running). In the same step, we can also calculate the actual position offset. I also added a "sneak button" which halves the movement speed: I wanted something akin to the run button in many old RPGs, but with a better reason not to press it; usually, people would hold it down all the time, unless they were in a tight area that required more precise navigation. This just inverts the roles, and spares you some thumb pain.
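A sketch of that step; the input query names are stand-ins for whatever your input mapping exposes:

    Vector2 direction = input.GetMovementDirection(); // e.g. mapped to the arrow keys
    if (direction == Vector2.Zero) return;

    direction.Normalize();
    float speed = MovementSpeed;
    if (input.IsSneakButtonDown) speed *= 0.5f; // the "sneak button"

    Vector2 positionOffset = direction * speed * deltaTime;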
To complete basic sprite movement, we just need to add the positionOffset to our Character.WorldPosition.

Collision Detection

Now for the interesting part. I decided to implement collision testing against other entities before testing against the tile map, because it's easier to test quickly and presumably has greater implications for our data structures; making map collision suit that data would be easier than vice versa.

The Collider Component

Speaking of data, let's talk about the ColliderComponent class. Like the SpriteComponent from last time, it points to an Entity, holding its WorldPosition, and then has some data of its own.
For now, that's just data defining the size of the collider. Later we'll add collision behaviour related stuff, like script hooks. How do we define the size of the collider, though?

We could use circle colliders; with the entity position as the center point, we only need a radius, and collision testing circles is super easy. But since our maps are based on an orthogonal grid, using circle colliders is kinda janky when you need to block a path with entities or something.
I'd rather use AABBs, but the Entity.WorldPosition represents the entity's center point, and we can't use Rectangle because it's integer-based to boot. So this is what I ended up with:
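In essence (Size is my name for the field here):

    public class ColliderComponent
    {
        public Entity Master; // the entity this collider belongs to

        // Width and height of the bounding box, in world units;
        // the entity's WorldPosition is the center of the box.
        public Vector2 Size;
    }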
I decided to use a Vector2 to store the width and height of the bounding box - we'll have to do intersection tests manually.

To use this class, I just replaced the Entity Character with a ColliderComponent Character and changed all references to it to Character.Master.

Testing for Collision

To do the actual collision tests, we first of all need a list of other colliders to test against. Once we pass that into our Update method, we can just loop over the collection, calculating the actual bounds of the respective hit boxes:
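Sketched out, with colliders being the list that was passed in:

    Vector2 newPosition = Character.Master.WorldPosition + positionOffset;

    foreach (ColliderComponent other in colliders)
    {
        if (other == Character) continue;

        Vector2 topLeftA     = newPosition - Character.Size / 2;
        Vector2 bottomRightA = newPosition + Character.Size / 2;
        Vector2 topLeftB     = other.Master.WorldPosition - other.Size / 2;
        Vector2 bottomRightB = other.Master.WorldPosition + other.Size / 2;

        // intersection test and collision handling go here
    }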
Finally, we need to test if the two rectangles actually intersect. Since they're axis-aligned, that's pretty simple as well:
Two rectangles A and B intersect if their edges are arranged ABAB or ABBA on both axes.
You can easily assert that by checking if the left edge of A is left of the right edge of B, and the left edge of B is left of the right edge of A. Repeat for the other dimension.
In code, that looks like this:
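Or rather, something very close to this:

    bool intersecting =
        topLeftA.X < bottomRightB.X && topLeftB.X < bottomRightA.X && // ABAB/ABBA on X
        topLeftA.Y < bottomRightB.Y && topLeftB.Y < bottomRightA.Y;  // ABAB/ABBA on Y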

Handling Collisions

Many tutorials bail at this point, or explain only the obvious options, like projectile hits (apply projectile effects, remove the projectile from play) and knockback (move the object in the direction it came from, or do a fancier calculation to determine the knockback direction).
Turns out creating an object that's simply impassable is pretty hard to get right; you might encounter jittering sprites or weird jumpy behaviour. Neither is particularly pleasing to look at, and either might cause more serious bugs down the line (e.g. if the object jumps somewhere it can't get back out from).

So, how to do it right?
I decided to approach this very carefully, ignoring diagonal movement for a while:
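For the horizontal axis, that looks something like this:

    if (positionOffset.X > 0) // moving right
    {
        positionOffset.X -= bottomRightA.X - topLeftB.X;  // back out of the overlap
        if (positionOffset.X < 0) positionOffset.X = 0;   // never reverse, just stop
    }
    else if (positionOffset.X < 0) // moving left
    {
        positionOffset.X -= topLeftA.X - bottomRightB.X;
        if (positionOffset.X > 0) positionOffset.X = 0;
    }
    // ... repeat for the vertical axis ...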
Essentially, we find the relevant edges, depending on which direction we're moving in, and reduce the positionOffset by the difference. If we changed direction, rather than speed (i.e. changed the sign), we set the speed to zero instead - this keeps the sprite from moving at all if it can't go some direction without a collision.

To add diagonal movement to the mix, we actually only need to modify the outer conditions a bit. If we're moving diagonally downward, for example, but can't go downwards without a collision, we want to keep the horizontal speed, while reducing the vertical speed.
To determine which axes we need to keep intact, we need to introduce another rectangle; the bounding box of the Character at its current location:
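That's just the same bounds calculation again, without the positionOffset:

    Vector2 oldTopLeft     = Character.Master.WorldPosition - Character.Size / 2;
    Vector2 oldBottomRight = Character.Master.WorldPosition + Character.Size / 2;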
As to what we'll do with that: We'll test the intersection condition mentioned earlier, except we'll test the axes separately. If the box is above or below the other rectangle, we need to limit vertical movement, if it's to the sides, we need to limit horizontal movement:
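In sketch form:

    // Overlap on the X axis means we're directly above or below the other box;
    // overlap on the Y axis means we're directly beside it.
    bool overlapOnX = oldTopLeft.X < bottomRightB.X && topLeftB.X < oldBottomRight.X;
    bool overlapOnY = oldTopLeft.Y < bottomRightB.Y && topLeftB.Y < oldBottomRight.Y;

    if (overlapOnX)
    {
        // limit vertical movement (the code from above, applied to the Y axis)
    }
    else if (overlapOnY)
    {
        // limit horizontal movement
    }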
There's one literal corner case unaccounted for in this code: if you approach a collider diagonally, exactly at its corner, neither condition is fulfilled, and you get unhandled behaviour. Try as I might, though, I haven't been able to force that behaviour, and it's difficult to say how to handle it appropriately. I decided to just log it and not worry about it too much. If it practically never happens, and there isn't too much to worry about (I suspect a little jump at worst), there's better places to spend your time.

Better places, such as collision testing against the map.
Which is what I'll be doing next.

Until then!

2015/09/15

Map Objects: Rendering and Animating Sprites

Yes, that's right. I actually started working on my supposed main project again.

The Results

Before we delve into the code, here's a shitty gif of what we have so far. You'll have to excuse the quality, I haven't looked into screen capturing software yet, so I made this with screenshots and a simple gif converter:
At 500 FPS, which I'm currently getting on my machine, this actually doesn't look like a strobe light.
Also, here's my Tilemap.cs file, copied verbatim. That should give you an idea of why I haven't been working on this for a while. It's feature creep at its finest.
But that's okay, it's mostly throwaway code I wrote knowing it'd be rewritten once I knew how everything should fit together.

The Code

Okay, let's talk business. Right at the top, there's a monstrous [Flags] enum that I haven't talked about before, but that doesn't get used until much later, and it's not like the enumeration itself needs a lot of explaining, once we get to that bit.
The truly new parts, though, are the following type definitions: the all-important Entity class with one whopping member, the SpriteComponent and AnimationComponent, and some other animation-related data structures.

Sprite Rendering

The SpriteComponent is relatively simple, mostly because it doesn't define any methods. Scrolling down to TileMap.DrawSpriteLayer(...), it should be pretty easy to see how it's used:
The TileMap has a List of sprites and simply renders them in order, if the passed layerIndex is equal to their own Layer member. The position a sprite is rendered at, of course, depends on the Entity.WorldPosition, but just rendering there won't do too much good; generally, we don't want the sprite's top left to be the Entity's center point, so we use half the sprite's width and its full height, converted to world units, as a negative offset.
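In sketch form - member names like SourceRectangle, Master, and the pixels-per-unit constant are stand-ins for whatever your setup uses:

    public void DrawSpriteLayer(SpriteBatch spriteBatch, int layerIndex)
    {
        foreach (SpriteComponent sprite in sprites)
        {
            if (sprite == null || sprite.Layer != layerIndex) continue;

            // Anchor the sprite at the bottom-center of its entity:
            Vector2 offset = new Vector2(sprite.SourceRectangle.Width * 0.5f,
                                         sprite.SourceRectangle.Height) / PixelsPerWorldUnit;
            Vector2 position = (sprite.Master.WorldPosition - offset) * PixelsPerWorldUnit;

            spriteBatch.Draw(sprite.Texture, position, sprite.SourceRectangle, Color.White);
        }
    }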
In case you're wondering why I don't use a different SpriteSortMode if I'm just rendering in order, it's because sprite sorting seems to be broken in MonoGame [citation needed], so I just sort all sprites in the same list, before calling DrawSpriteLayer() multiple times. Also, n*log(n) scales better than n*m*log(n/m).

Animations

Since the animation data structures also don't define any methods, let's jump straight to the code that processes them. It's in TileMap.Update(...), and enclosed in the second profiler.BeginOperation /* ... */ EndOperation block. That's lines 368 through 418.
Again, the TileMap keeps track of all AnimationComponents in a List, and iterates over that list, though the update logic is a tad more complex than sprite rendering.
Of course, we skip null entries, as well as components that currently aren't playing an animation, as signified by a negative index. We then update the timer and pull out the current frame's duration from the nested data structures.
We then calculate a new world position, which was actually pretty difficult to get right: we regularly sort by position, and so we can't use an offset we add during rendering, so no lerping. I'm still not convinced this is right, but at the moment, testing is rather difficult, and it's still throwaway code, so I don't really care quite yet.

The next part, you'd be familiar with; if the timer is past a threshold, we increment a frame counter and subtract from the timer, in preparation for the next frame.
Then we need to find the next frame in the animation, which is the short bit in the following else block. If there is no next frame (signified by another threshold), we look at the AnimationSequence.
This is the least self-explanatory part in this update, I think; the AnimationSequence is essentially a list of animation definitions that we want to play back in order. This would be used by both passive behaviour (instead of parallel script execution) and event scripts.
To reflect this duality, each AnimationComponent has two sequences: the active sequence, which is the one we use to look up the next frame, and the default sequence, which is set as the active sequence when we're not executing scripts.
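As a data sketch (the field names are my own shorthand, not necessarily what's in the file):

    public class AnimationComponent
    {
        public Entity Master;

        public AnimationSequence DefaultSequence; // passive behaviour
        public AnimationSequence ActiveSequence;  // whoever is currently in charge

        public int CurrentAnimation = -1; // negative index = nothing playing
        public int CurrentFrame;
        public float Timer;
    }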

The script engine would have to lock sprites by setting the active sequence to null and could then animate them by setting another sequence. The sprite could later be released by resetting the active sequence to the default sequence.
This is not actually reflected in the code at the moment (I don't know why, actually - I'll fix that once I'm done writing this), but it will be at some point.

Other changes

It's been quite a while since the last time I let you people look at my map rendering code, so there's some other new features, such as parallax backgrounds, but that's actually not that much code, and more related to the camera, which has to get a different matrix calculation method.
There's also unmentioned leftovers from the initial refactoring, such as the LayerSettings enumeration, but I think if you look at the Draw method, that shouldn't be too hard to figure out. The complicated part there was precalculating the lookup tables, so if you take those for granted, the code itself isn't that bad.

What's next?

Now that this ordeal is over with, there's a few options of what to implement next.
  • Of course, I could do another refactoring, but the other subsystems will also tie into the TileMap class, so prototyping these will mess it up again. I'll do that when the TileMapView is feature complete and I want to thoroughly debug it.
  • Now that we have the visual aspect of maps down, we could think about implementing BURGLE, the level editor for this engine. That would simplify testing quite a bit, but keeping two code bases up to date on an experimental API is a lot of work.
  • Testing could also be simplified by implementing debug visualization, but I refuse to do that until I overhaul the configuration and logging facilities, and by extension, the profiler. Making the latter two non-static was a stupid mistake, and the .NET standard config and logging facilities are just awful to work with.
  • We could also ignore all that and get to work on the next engine subsystem, either the audio engine or the character controller, as a prerequisite to the physics system (which will not be a complex simulation; collision will only be tested between the player character and other entities).
That's all I can think of as sensible at the moment, but I'm a bit torn. Blogspot statistics tell me I have, like, two loyal readers, so if either of you have a strong preference there, let me know in the comments or message me on reddit (/u/teryror), if that suits you better.

UPDATE: The two page views I expected are in, and I haven't gotten any messages concerning this, so I decided to approach the choice from first principles.
First of all, coding the level editor would require a significant amount of work in areas other than map rendering, and I don't think I should leave this code to be half-forgotten before rewriting it. The same goes for the entirely unrelated debug facilities.
Choosing what subsystem to implement next is a little non-obvious in this regard. The audio engine is on the same level as map rendering, as it should also work regardless of game state. However, the code is not all that involved with the map itself. There are a couple of related map properties, such as the default background music and echo settings, but collision detection is inherently involved with the map, so that's what I'll work on next.

Of course, you can also let me know if you have questions about this rather terrible code, though I'd understand if you wouldn't want to touch that with a ten-foot pole.

Anyway, see you later!

2015/09/07

Analyzing Color Palettes: Introducing Impale.fs

As announced last week, I've been working on an image analysis utility called Impale. It's just 39 lines of F#, but not being used to the language and fucking around with various plotting frameworks (none of which I got to work in a reasonable time) led it to take like 12 hours of work.

You can find the code here, but we'll be going over it now anyway, before we apply it and see if we can learn something new about pixel art and color palettes.

The Source Code

First of all, we open some stuff, which is essentially using in C# terms. We use System.Drawing to read image files, System.IO to write CSV files, and I had to add System.Globalization because my PC is running a German locale, which means I usually can't just use ToString on floating point numbers and expect it to play nicely with basically any tool ever made in the US. Which is practically any tool, ever.

We then define the main function and mark it as the EntryPoint, which could be any top-level function in an F# program; I just decided to name it main anyway. It doesn't actually do anything interesting by itself; we just define a local function analyze, and use Array.map to apply it to all arguments that were passed to the program. Finally, we return 0, for no particular reason other than not bothering to look up how to properly return unit, the F# equivalent of void.
Note that map actually returns a new array, where each element corresponds to the result of analyze when applied to an element in args. Since we only call analyze for its side effects (creating two files), we have to ignore the result explicitly, since F# expects us to write programs without side effects.
ignore is actually a function itself: the |> operator redirects the value on its left into the function on its right.
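In C# terms, both are trivial to emulate, which might make the concept click (Pipe and Ignore are hypothetical helpers of my own, not part of any framework; assume a using System; directive for Func):

    static class FSharpAnalogies
    {
        // x |> f in F# is simply f(x):
        public static TOut Pipe<TIn, TOut>(this TIn value, Func<TIn, TOut> f) => f(value);

        // ignore discards any value and returns unit (think void):
        public static void Ignore<T>(this T value) { }
    }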

The analyze function

This is where all the actual work happens. We start off by opening a bitmap, and converting it into a flat list of colors using list comprehension. We then use List.distinct (which doesn't seem to appear on MSDN for some reason) to get the set of all colors in the bitmap, except it's still a list, not an actual set.

We then define writeLine, a local function that prints a properly formatted (float, float) tuple to a StreamWriter.
We also define getValueSaturation, which extracts such a tuple from a Color instance using the GetBrightness and GetSaturation methods. We then get our first set of data points using List.map again.
Next, we create a StreamWriter to a (hopefully) new CSV file and use partial function application to create an anonymous function based on the writer and the writeLine function. If you think of it as a family of functions distinguished by the StreamWriter they use, this is a pretty cool form of metaprogramming.
Finally, we close the stream because we're good citizens.

Room for improvement

These last couple steps are then repeated with a different file extension and a different color-to-data-point conversion function. Since you can pass functions around easily, this is actually redundant code; I just copy-pasted it as an afterthought after I got the first step working.

We could also have done a more sophisticated data analysis, for example by grouping the colors into color ramps, adding more valuable information.

Since I didn't want to bother getting XPlot to work, the script currently puts out CSV files which I have to manually paste into fooplot. Visualizing data automatically would be a great boon.
At least I don't have to manually gather color information with Paint.NET or something...


But... why?

Essentially, you can learn pixel art by trial and error. You can get better quicker by looking at other people's work. Outlining, dithering, applying texture to a surface. I feel like I'm reasonably good at those techniques, and you can learn them by zooming in and studying images that do them well.

However, learning to properly construct palettes is hard. You need to reconstruct the palette from the image to get an overview, you need to understand basic color theory and understand how the palette relates to the image as a whole.
Just extracting the palette is tedious work and easy to mess up. Visualizing it in a meaningful way to understand why it works is itself pretty boring, but necessary to perform any kind of analysis.
This tool is supposed to help with these two steps in particular.

We gather all colors in the image and spit out two scatter plots in CSV: One shows the relationship between saturation and brightness, the other between hue and brightness.
Let's look at a couple of examples to see how that can be an immense help in palette analysis.

Pokémon Garnet

Okay, so this is a long dead RPG Maker XP based fan game, but it's easily the best looking fan game I've ever seen; it arguably looks better than the original Pokémon games, and it's up there on my list of best looking pixel art games, full stop.

I mean, just look at this:


This game is gorgeous! And that's because it has an obviously carefully crafted color palette.
Of course, the line work and shading are pretty good, too, but most of these tiles are recolors from freely available tile sets; they just aren't the distinguishing factor.
The palette alone isn't enough to make it stand out, though. The levels (or, at least, the few screenshots we have, the game never saw a release) are actually designed with the palette in mind: Each area has a very distinct color scheme.
Anyway, let's do this analysis.

First of all, we need to beware of the screen overlays this game uses. We could keep them in, but since I plan on using some too, we should only run the analysis on portions of the screen that aren't affected by the overlay.
For instance, the scene at the beach is complicated by god rays and lens flare. Using the magic wand tool with global selection and zero percent tolerance, I selected and copied an area of the screen that wasn't overlaid with anything and ran the analysis on that.
Starting with x=Brightness; y=Hue for the beach scene
Immediately, we can see clear bands of color. That's the color scheme I mentioned. We have some obvious outliers; the medium-bright purple/pink at the top is the NPC in the top right corner, the really bright blues on the right are at the water/land border, where there's spume. The yellow near the middle of the lower left quarter is probably from one of the NPCs as well, though it's not obvious.
Other than that, there's two obvious groups, one in the 0-45 degree range, the other in the 180-225 degree range. So we're actually dealing with near-complementary colors.
Furthermore, both groups can be split into at least two color ramps each. It's a little difficult to tell from the diagram alone, but we know that the orange group is made up of the light, yellow sand and the dark, brown rock, while the water is split into the light blue rock surface and the dark blue rock walls. While it's not necessarily easy to tell which color belongs to which ramp, it's clear there's no definite trend within a ramp. For example, I remember quite a few tutorials saying that as colors get brighter, their hue should trend towards 60 degrees, i.e. yellow, making the light source appear yellow. Here, there seems to be mostly steady hue within a ramp.
x=Brightness; y=Saturation, also for the beach scene
For saturation in relation to brightness, we see a general upwards trend, with few colors in the lower right and upper left quarters. We can also see that the density in the lower left is a lot higher. There are few obvious color ramps, though.
You might also notice that, other than black for sprite outlines, in this image there's exactly zero colors that have less than ten percent value or saturation.

I'm not going to write a full analysis like this for every screenshot I have, but looking at some other saturation/brightness diagrams, there's one rule that feels really important: saturation and brightness rarely sum to more than one, and when they do, it's usually on an interactive object.
You can see this for yourself by drawing the function f(x)=1-x. Only a few colors will ever fall above the line. For example, this is from the top-right forest scene:
Of course, this diagram doesn't show the general upwards trend, and there's colors with less than ten percent brightness and less than ten percent saturation, though there's still no zeros there, and none with both values in the deep end.
In fact, in this scene, if you were to color code the dots by which ramp they belong to (which I actually did using Paint.NET and Excel one time, motivating me to write the script), you'd notice that the saturation in each color ramp decreases as brightness increases.
In the portion of the screenshot I analyzed, there's no NPCs visible. The only colors above the previously mentioned line are outlines of collidable background objects. If I added NPCs, the top right of the chart would get filled in more.

When combined with the black outlines Sprites use in this game, this is a great way to lead the eye and to make interactivity visually obvious without placing silly exclamation points above characters' heads.

Conclusion

All in all, I think this was totally worth the two evening-nights I actually worked on the script and the past couple hours I spent analyzing this. As always, we could go more in depth, both with the script and our manual interpretation, but I think that's enough for today.

See you!

2015/09/01

Staying Alive

Just a quick update on what I've been up to, since it wasn't a great idea to release five queued up blog posts in two weeks, as it turns out.
I couldn't have known, really, I just happened to lose almost all my motivation over these two weeks, and got hooked on Dontnod's Life is Strange. I played through all four available episodes in, like, three days. Twice. Once the fifth episode is out, I'll try and write an in-depth analysis of that game's design.

Until then, I'll try to stay focused on building a game, but before I return to doing actually useful stuff for the project, there's one last F# project I want to work on.
I call it IMPALE, the IMage PAlette anaLyzEr. It's just a small console app that takes an image file, performs some data mining and spits out a couple of diagrams in another image file.

I don't have any big plans in store for that, and I don't intend to let it take longer than like a week, but no promises. Until it's done,

see ya!

2015/08/24

Map Objects: Designing the Entity System

As you may have noticed, I've been laying off talking about dynamic objects in a level. Sure, we can animate our tiles and have a camera that you could theoretically move (well, I have a piece of test code that does move it, but I haven't shown that to you and it won't be in the final game anyway), but we don't have NPCs or anything similar.
This is because I wanted to get the rendering pipeline somewhat presentable, and didn't want to talk about entity systems until I absolutely had to. Tile maps don't need to be in the entity system, they represent the space the entity lives in. The camera doesn't have to be either, because there will only ever be one and it will be operated by different pieces of code depending on game state. The virtual screen is explicitly outside of the game world, while all entities are inside.

ECS Architecture

A decade or so ago, people started realizing that using class hierarchies for their game objects was more curse than blessing. Around the same time, the most common performance bottleneck was found to be the CPU's memory cache.
The Entity/Component/System pattern, designed to combat both problems simultaneously, was all the rage until about five years ago, and is commonly considered the definitive method these days. Unity has a somewhat improper implementation without the System part, and as far as I know Unreal uses object composition as well.

An entity is your basic game object, made up of multiple components, flat data structures that contain part of the state of the entity. Systems then operate on all components of a certain type, rather than all entities worrying about themselves - the RenderingSystem draws all entities that have a MeshComponent, the PhysicsSystem moves all entities with a ColliderComponent and performs collision detection.
If you perform the same instructions on all elements of an array, you significantly decrease the probability of encountering a cache miss, granting a boost in performance if that's your bottleneck.
This pattern also decouples the different tasks your engine has to accomplish; you can define an entity to have any combination of components, massively reducing the number of different types of game object you might need. This in turn means that you can come up with new types of entities on the fly (unless, of course, that would require a new type of component or a new system), without writing any code whatsoever.
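To make that concrete, a minimal sketch of the pattern (not my actual engine code):

    public class Entity
    {
        public Vector2 WorldPosition; // shared state lives on the entity
    }

    public class ColliderComponent   // flat data, no behaviour
    {
        public Entity Master;
        public Vector2 Size;
    }

    public class PhysicsSystem       // behaviour lives in the systems
    {
        private List<ColliderComponent> colliders = new List<ColliderComponent>();

        public void Update()
        {
            // one tight loop over homogeneous data, instead of a virtual
            // Update() call on every game object
            foreach (ColliderComponent collider in colliders)
            {
                /* collision detection here */
            }
        }
    }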

Now to rain on the parade, because I'm a cynic who enjoys rain, we really shouldn't expect an advantage in performance. With C# running on the CLR, and calls into MonoGame inside even our hottest loops, there's enough going on outside of our control that the cache will probably get trashed with every iteration. Cache-optimizing the code between these two layers just won't do all that much.
Not that we'd really need to. I expect no more than like 30 NPCs on any given map, and even fewer other objects. For all its other strengths, I'll still implement something similar; the performance caveat is just something to keep in mind.

Requirements

Now, before we rush in specifying implementation details of a "robust, powerful entity system" for the heck of it, maybe we should look at what we actually want it to do. Once we know what kinds of entities we'll need, we can worry about what components we might need to construct them.

The obvious thing is NPCs: Animated sprites walking about the map that the player can interact with by pressing A while facing them. The specifics of what an NPC does when interacted with should be defined in the event scripting system.
Then there's event spaces: Plain hit boxes that trigger an event script on collision. A special case of this is the warp space, which transports the player to a given position on a given map, potentially after playing a door animation. It sounds like a good idea to map warp spaces to each other so that, when you change one map's layout, you don't have to change warp spaces on another. That would be a nice option, but it wouldn't let you make warps that don't let the player go back the way he came.

That's actually the three basic event types from the third generation Pokémon engine I'm so used to working with, so that's proven to be enough for the kind of game I'm planning, but you could get away with less, like the RPG Maker, which combined NPCs and event spaces by providing multiple hooks for scripts (on-collision and on-interact, essentially), and required you to script warp spaces yourself.
And maybe we find that to be the way our script component should work, but at minimum we should provide a parametrized default warp script so you don't have to write six lines of boilerplate script code for every fricking door in the game. I like using the RPG Maker as reference for this sort of stuff, but that's one thing I think it botched badly.

Moving on, there's some more object types I'd like to have to enhance the feel of the game world.
First, there's sound emitters. Ever since I saw a live stream of a guy scripting dozens of events in RPG Maker to alter the volume of an ambient waterfall sound effect as your character moved, I've been in love with this idea. It's common practice in 3D games, where it's important for the suspension of disbelief, less so in 2D games.
I'm not quite so sure about the last one, but particle emitters might come in handy as well. Whether it's malfunctioning electronics spouting sparks, waterfalls spraying water or just chimneys puffing away, some things might be easier to animate this way. This would be very inaccurate though (neither the SNES nor the GBA used many particle effects), and I'm not sure it would actually be worth the effort; I'm just putting out the idea.

Recomposition

Now we can look at what components we need to make, by finding common vs. distinct properties of these entity types and identifying their purpose.

To start with the obvious, other than NPCs and particle emitters, which have very distinct logic, none of these are actually visually rendered. We'll therefore make a SpriteComponent and possibly a ParticleEmitterComponent. Similarly, the sound emitter is the only type that directly uses the audio engine (which doesn't actually exist yet, so we'll have to implement this later on), so we'll make that a distinct SoundEmitterComponent.
Next, you may have noticed that all our components make use of the entity's position in the game world. We might make that a LocationComponent that all other components rely on, but at that point, wouldn't it be better to just put that data on the actual entity? Or rather, is something that isn't located in the game world really an entity? Maybe it should just go into the map properties if it's level specific, or into the engine code, if it's an even more generic thing.

Identifying the other components is where it gets more difficult, as they all use the scripting system and rely on collision to some extent. We'll probably want to make a HitBoxComponent for collision testing, with a setting to determine whether the player should pass through (in which case you can manually query for collision), or whether they should be pushed back by the physics system.

Another thing only the NPCs do, according to my descriptions, is move and animate. There are different stances you might take on how that should be handled: movement is just what happens when you play the walking animation; or, conversely, when you move an NPC, it should automatically play the walking animation. If you've ever worked with FL Studio or other audio software with automation envelopes, you might argue that being able to animate any value on any component is the way to go.
While it's true that that's an incredibly powerful system, especially in music production, where all the knobs affect each other and changes to their values stack exponentially, rippling into the final mix, I don't think that's necessarily true for games, especially considering we're currently trying to keep values from affecting each other too much. Our animation system ought to be simple enough to be effectively used by event scripts, but not so simple that you need a lot of code to accomplish anything with it.
So, since our animations have to be accessed by script code, we might as well push the movement behaviour of an NPC into the script system, with scripts looping in the background, running animations. To complement that, we just need an AnimationComponent to (1) define available animations; and (2) store the currently running animation and its state.
That leaves us with the three script hooks we've identified so far: OnCollision and OnInteraction are triggered during collision testing, while the ParallelProcess just runs in a loop forever. Not every NPC needs a movement behaviour attached, and neither do other objects you might want to script, so let's make a separate ParallelScriptComponent.
As for the other two, they are so strongly tied to the collision code that I'll probably just put them on the HitBoxComponent after all.

Additions and Optimizations

Okay, so I just spent a couple of hours away from my computer, thinking about other stuff, and came back to sanity-check the concept so far. Here are three things I didn't talk or think about the first time round:

Conditionally De/Spawning Entities

If a game allows you to revisit a location multiple times over the course of the narrative, you'd probably expect your progress in the story to have some effect on the game world, especially the characters in it. To do that, RPGs commonly show or hide characters depending on a choice you made or an event you cleared.
We have a general purpose progression system consisting of boolean values and integers that can be set by scripts (in theory, anyway - the scripting thing in general isn't in place yet). Tying an entity to such a value is an easy way to get there; we just need to find the place to check for it. You might get away with checking every frame, since both flags and variables are stored in your everyday array, and it guarantees immediate response, which caching the value doesn't.
Since you may want to use this for any entity, and every system has to perform the check, we don't even have to make this its own component, and can store the relevant data directly on the entity.

Sharing Animation Definitions

As I specified above, the AnimationComponent should hold definitions for all available animations. Thing is, most NPCs will use the exact same sprite sheet layout with the exact same animation timings and all. We don't have to make shared components a thing; we just need to make AnimationDefinitions into a type of content that can be shared by AnimationComponents.
It seems pretty obvious, so I'm not sure whether the me of a couple hours ago thought so too and didn't mention it for that reason, or just forgot about it.

Dynamic Object Creation

While pretty much all of this post focused on objects you'd place on a map during level design, the entity system also needs to handle objects that are placed by scripts or even the engine itself, as they come and go.
Prime examples of script placed objects are NPCs that aren't actually on the map and only spawned for one event, and sound effects that should have an audible location in the world (an explosion, a scream, church bells, whatever).
As for objects placed by the engine, there's the player character itself - placing it in engine code is much better than demanding that every map have an object with a PlayerCharacterComponent or something, I think. There's also a swath of visual effects that I'd implement this way; Pokémon style tall grass, submerging characters in water up to their necks, and reflections on ice floors or wall mirrors, just to name the ones that I think would be cool.

With these effects in particular, we need to consider our technical limitations and our layer model, as well as rendering order in general. But that can wait until the next post, where I'll implement the basis for all this and hopefully the rendering bits, so I can show screen shots again.

2015/08/20

Matrix Transformations in Color Space

When I started programming in C# five years ago, I quickly latched onto XNA for game programming. I didn't have constant internet access back then, so I bought books to learn from instead. The particular book I learned XNA from had a chapter on shader programming. Back then it seemed like magic to me - my school didn't teach me linear algebra until a year or two ago, I forget - and even today, I grasp the general concept, but have little experience, especially with vertex shaders.
However, one sample shader caught my attention recently: The black-and-white shader. It used a dot product operation to convert a color vector to its monochrome luminance. Other than that, and maybe sepia filters, I never really heard of people using transformation matrices on their color vectors. A quick google search reveals that it's not an entirely new concept - that page predates my birth by over three years. There is quite a bit of cool stuff you can do with this, so let's get crackin!

Prerequisites

Before we can work on the color transformations though, we need to linearize our colors. There's a nice chapter of GPU Gems explaining the concept and general importance of that, and since we use additive and subtractive blending for lighting effects, we should be doing that anyway.
I haven't produced any textures for this project yet, so we can, as advised, assume that all loaded textures are already linearized. However, we still need to perform a gamma correction in the post processing effect, and that has to happen in a pixel shader. Additionally, we should apply the calculated transformation matrix as we render, so we'll actually need two different shaders.

I've dabbled in shader programming in HLSL for XNA, but I know that MonoGame uses its own shader system to compile for different platforms. I was pleasantly surprised to find documentation and example code on that at least.
Note that, unless you are using SpriteSortMode.Immediate, applying the Effect manually won't actually work. You need to pass it to SpriteBatch.Begin, but then you'll face a compatibility issue between the sprite batch and your effect: the camera transformation matrix is no longer applied, and neither is the OrthographicOffCenter matrix used internally by every sprite batch. You'll have to apply these manually with a vertex shader. Here is a SpriteBatch compatible pass-through effect that does that:

Color Transformations

Okay, this section is basically about porting Paul Haeberli's color transformations to C#; it's not really my code per se. The first transformation, Changing Brightness/Color Balance, is basically provided by XNA, as it only scales the color by the specified parameter; we can use Matrix.CreateScale for that. We can also Apply Offsets to Color Components by using Matrix.CreateTranslation.
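For instance (the factors here are arbitrary):

    // Uniform brightness: scale all channels by the same factor;
    // different factors per channel shift the color balance instead.
    Matrix brightness = Matrix.CreateScale(1.2f);
    Matrix warmer     = Matrix.CreateScale(1.1f, 1.0f, 0.9f);

    // Additive offsets ride in the translation part of the matrix:
    Matrix offsets = Matrix.CreateTranslation(0.1f, 0.0f, -0.05f);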

Modifying Saturation

This one is pretty simple: the first variant, Converting to Luminance, is a specific case of the saturation problem, and is solved by using a weighted average of all channels for all channels. To arbitrarily Scale Saturation, you need to interpolate between Matrix.Identity and the luminance matrix. I implemented both in one function, since you can just pass 0 to get the monochrome picture:
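Roughly like this - the luminance weights are the ones from Haeberli's article, and the layout follows XNA's row-vector convention (color * matrix):

    public static Matrix CreateSaturation(float s)
    {
        // Luminance weights for linear RGB, per Haeberli:
        const float rw = 0.3086f, gw = 0.6094f, bw = 0.0820f;

        // Lerp between the luminance matrix (s = 0) and identity (s = 1):
        float ir = (1f - s) * rw, ig = (1f - s) * gw, ib = (1f - s) * bw;
        return new Matrix(
            ir + s, ir,     ir,     0f,
            ig,     ig + s, ig,     0f,
            ib,     ib,     ib + s, 0f,
            0f,     0f,     0f,     1f);
    }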
This is pretty cool, because you can actually pass values outside of [0, 1]: you get inverted colors at -1 and boosted saturation for values greater than 1.

Shifting Hue

I'll admit that I don't really understand the math behind this either. I mean, I know how the matrices do their job, but I know too little about color spaces and how they relate to each other to be sure about why you need to do these exact operations. Sorry.
To be honest, it's not even that useful an operation. If we're tinting the map and objects to simulate different lighting situations, hue doesn't usually shift by a fixed offset. Rather, it shifts towards a certain value, generally yellow for increased light and blue for decreased light.

Just combining the other three operations gets us quite far, though; these are the exact same scenes, only rendered with a different color transformation matrix:

The ugliest island map in the world, day and night.

2015/08/18

ABDUCT: About functional programming, patent laws and a dead project

About a week ago, while I was working on the entity system for my game, I grew tired of my code base; I'm not really happy about all decisions I made, both early on and with my tile map rendering, and it all accumulated into some anxiety over sprite rendering.
So I decided to take a break and come back later instead of burning out and leaving the project unfinished. I didn't want to stop contributing to the game, however, so I considered my other options. I was also kind of obsessed with audio stuff at the time, since I spent so much time researching it just prior, and really wanted to work on authentic BRR compression, with better tool support than the shady tools I found on romhacking.net.
Finally, I wanted to try functional programming, see what all the hype was about. Since audio compression seemed like a great fit, and F# like a good bridge into the paradigm, everything seemed to line up perfectly.

Since tool naming is important, I thought long and hard (heh) about a naming scheme for my own tools, and came up with the Authentic Bit rate reDUCtion Tool, the Barely Usable Rigid Game Level Editor and other silly backronyms for criminal verbs.
Then I got to work, setting up an F# project Abduct.Core, which should do all the heavy lifting, and the C# project Abduct.UI, which should handle all the boring stuff, such as communicating with NAudio, the library I decided to use for handling audio files (and resampling, which, as it turns out, you shouldn't try to do yourself, at least not if you want quality output).

And, as if in the second act of a tragedy, things started to go downhill from here.
The plan was relatively simple: use NAudio to load WAV files and convert them into 16 bit mono. Resample to a user defined sample rate, perform length correction - BRR compressed sounds come in chunks of 16 samples, so you either have to pad in silence for sound effects, or truncate looped samples. Then you apply the actual compression, and decompress again - we're not actually running on a SNES, so the compressed data is of no use to us.
The user should be able to play back a sound at any sample rate (without resampling!) to hear what the sample would sound like after pitch modulation, with or without looping, to examine its behaviour, and finally export the file as an uncompressed wave file.
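For illustration, the length correction step might look something like this (a hypothetical sketch, not the actual ABDUCT code; samples is a 16 bit mono buffer):

    using System.Linq;

    static short[] CorrectLength(short[] samples, bool looped)
    {
        const int ChunkSize = 16; // BRR operates on chunks of 16 samples
        int remainder = samples.Length % ChunkSize;
        if (remainder == 0) return samples;

        // Truncate looped samples down to a whole number of chunks...
        if (looped) return samples.Take(samples.Length - remainder).ToArray();

        // ...and pad sound effects with silence to fill the last chunk.
        return samples.Concat(new short[ChunkSize - remainder]).ToArray();
    }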

As mentioned above, I use NAudio to resample, so that's not really an accomplishment. I did get length correction to work, I think. It's currently intertwined with the non-functioning compression code, and even before, it was a bit hard to test.
Converting between different wave formats with NAudio involves mucking around with .NET streams. A lot. So much so that I couldn't find a way to easily do it for any wave file I might load, even with a clearly defined target format. So, for prototyping, I just allowed wave files in the proper format.
Playing back audio with NAudio is also an atrocious process. Well, it is if you have to change the output wave format dynamically because you're trying to modulate pitch.
The real killer of this project, though, was the actual compression, for two reasons:
  1. I didn't bother to really learn the concepts of functional programming. I just jumped in with a new project in Visual Studio and this reference. Inevitably, I encountered a compiler error that I still haven't wrapped my mind around.
  2. As the Wikipedia article on BRR compression mentions - which I didn't notice during my initial research - there are loads and loads of patents involved. I have neither the patience nor the legalese to read all of these, and I can't afford (and frankly, don't want to pay) a patent attorney to do it for me. I'm not clear on American patent laws, and the fact that I'm located in Germany doesn't make the whole situation any simpler.
To cut to the chase: even if I got around all these other problems - bit the bullet and wrote all that code to properly use NAudio, learned all about the functional paradigm - I still couldn't complete and use the tool with confidence, much less release the code in good conscience.
I will, however, strip away the bits of compression code I wrote so far, get the rest to a somewhat presentable state, and release that code on GitHub later this week. If anyone from the homebrew and ROM hacking communities is more confident in their legalese (or Nintendo's unwillingness to sue over this stuff), they are of course free to do with it whatever they want.

As for myself, the inability to get accurate SNES sound is a bit of a disappointment. I'm left with just a few options:
  • Skip the compression and only use length correction and resampling. This would leave our samples at a significantly higher quality than the real hardware's - maybe using 8 bit samples might combat that somewhat.
    • I could, of course, try other compression algorithms, or even create a shitty one myself.
  • Use a chip tunes soundtrack. Most samples suited for that style would barely suffer from the compression anyway. It would also get us closer to the GBA sound, which wouldn't be too bad.
  • Give up on authenticity and use a CD quality soundtrack - tempting, but not really my style.
I'll have to think about the specifics of this later. It's 4 am over here, and I need some sleep.
Anyway, that's what I've been up to this week. I'll muck around with functional programming some more, but I'll return to doing engine work soon, promise!

NOTE: I usually write blog posts a week or three in advance, so the posts will keep coming as scheduled; this post is just the one that's out of step with the timeline.

UPDATE: The cleaned up code for this project can now be found on github: https://github.com/teryror/ABDUCT-Audio-Processor.

Handling Multiple Resolutions with a Virtual Screen

My current retro 16 bit style guidelines dictate that my game use a virtual screen of 320x180 pixels. At first, that seems easy enough: when the game is running in window mode, we can just dictate the size to be a multiple of that and zoom in accordingly. In full screen at 720p or 1080p, we can just scale by a factor of 4 or 6, respectively. Common laptop resolutions, like 1366x768, are not so easy, but we can give the option to either scale unevenly, by a factor of roughly 4.27 in this case, or scale evenly by a factor of 4 and accept a black border around the virtual screen. On displays with different aspect ratios, we generally face the same problem, except even the unevenly scaled version would require letterboxing.

To handle all of these different cases, as well as the very concept of a virtual display, I created the class VirtualScreen. It uses a RenderTarget2D that is a multiple of 320x180 in size, and provides a CaptureFrame method that sets the render target on the GPU, and a DrawFrame method that renders the target on the screen.
The hairy bit is the Refresh method, which sets the size of the render target and calculates a destination rectangle for DrawFrame. There are also the Reset and Save methods, which load the properties of the class from the configuration and call Refresh, and write the current state of the screen back into the configuration and save it, respectively.

This is the bad boy in question:
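Or rather, a heavily simplified sketch of it - just the even-scaling path, with hypothetical member names - because the full method is too long to reproduce here:

    public void Refresh(GraphicsDevice device)
    {
        int screenW = device.PresentationParameters.BackBufferWidth;
        int screenH = device.PresentationParameters.BackBufferHeight;

        // The largest integer multiple of 320x180 that still fits the screen:
        Scale = Math.Min(screenW / 320, screenH / 180);
        renderTarget = new RenderTarget2D(device, 320 * Scale, 180 * Scale);

        // Center the virtual screen, letterboxing where the aspect ratios differ:
        int w = 320 * Scale, h = 180 * Scale;
        destination = new Rectangle((screenW - w) / 2, (screenH - h) / 2, w, h);
    }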
The real method looks pretty hairy at first glance, and it is a bit heavy on the arithmetic side, but the bigger problem is that it just explodes in size as we add rendering options, and there just is no design pattern or algorithm to save us here. We could factor out a RefreshFullScreen and a RefreshWindowed function, but that would only take away a couple lines of code and a level of indentation; it wouldn't help with the why-does-this-math-work question.
Case in point: I added a SuperSampling property that doubles the size of the render target while keeping the destination rectangle the same, slightly blurring the pixel art but also smoothing out sprite movement (once we get to that, anyway) - yet another case Refresh has to handle.

But setting up the back buffer and render target isn't the only thing I wrote this class for.
The Scale property, which the Refresh method sets before calculating a destination rectangle, is not configurable. Instead, it's supposed to be used to create transformation matrices for rendering on the virtual display. I now use it in my Camera class in place of its own scale member, so the camera automatically adjusts to changes to the rendering options.
Since we have a texture with the rendered world and UI by the time DrawFrame gets called, we can also do post-processing there, which I'll at least touch on in the next post.
I will also render a Super Game Boy inspired background image behind the render target, to make potential letterboxing less awful to look at. The original The Binding of Isaac did that to great effect, I find.

2015/08/16

Matrix Transformations in 2D Space - The Camera

While I was factoring out the useful bits of the map rendering proto-code, I tested them a lot more thoroughly and fixed a couple of bugs, mostly just missing calls into classes I wanted to plug into the engine. One bug in particular, however, was related to the way we currently rotate tiles:
When we set the origin in our draw call to (8 8), we not only rotate tiles around their center, as I'd assumed, but also shift them to the top left by the same amount. To fix that, I removed the origin parameter and used some evil bit magic again:
This is not today's topic though, so explaining how that works is left as an exercise for the reader.

Doing Math on Camera

With the refactoring done, we could go straight to implementing map objects, but I decided to do some polishing first. We can kill quite a few birds with one stone here, and that stone is a 2D camera.
We usually talk about cameras in 3D games, because their control is usually fundamental to the gameplay. But it's also common to have a camera in a 2D game, because it's very convenient for the programmer: If you can just move the camera and the environment magically scrolls on the screen, there are so many fewer headaches involved in just figuring out where to render a sprite.
The easiest way to implement such magic is with a transformation matrix that you pass to the GPU, which will use that to automatically figure out where sprites should be rendered, and at what size.
XNA/MonoGame also provide us with a Matrix structure, so we don't even have to worry about the math behind the magic too much. We just call a couple static functions, multiply the results together, and pass the product to SpriteBatch.Begin.

However, in case you actually are unfamiliar with the math, I highly recommend you read up on it. There's tons of more focused resources on linear algebra than this blog, and you need it constantly when you're doing any sort of graphics or physics work. In any case, here's a brief rundown of the essentials:
A matrix is an NxM grid of real numbers. The (x y) vectors we've been working with are essentially 1x2 matrices. Aside from vectors, the most common kind of matrix in graphics programming is the 4x4 matrix, which is also what a Matrix in XNA is.
You can use arithmetic operators on matrices; the two most important operations for us are multiplying 1x4 matrices - Vector4 - with 4x4 matrices to get another Vector4, and multiplying one 4x4 Matrix with another to get a third 4x4 Matrix.
The latter lets us combine multiple transformation matrices into one, so we can perform multiple transformations of a vector with just one matrix multiplication.
The former works by calculating four linear combinations of the elements in the original vector, with the columns of the matrix serving as the coefficients for each element in the new vector. Since we mostly work with 2- or 3-component vectors, the remaining elements are usually padded with 1.0, giving us the power to do translations, i.e. shifting the vectors around by a constant offset. We can also scale a vector by setting the factors on the main diagonal.
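A quick illustration of that padding at work - translating the point (10 5) by (3 2) only works because of the implicit 1.0 in the vector's last component:

    Vector2 point = new Vector2(10, 5);
    Matrix translation = Matrix.CreateTranslation(3, 2, 0);
    Vector2 moved = Vector2.Transform(point, translation); // (13 7)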

Our camera doesn't even need to do any more than that:
First, we want to zoom in by a factor of 16. This lets us render tiles into destination rectangles of size 1, which should remove a lot of potential *16 and /16 from our code, such as in the bug fix above.
Then, as the camera's position goes to the bottom right, we want to move the scene to the top left. This lets us render the map with the origin at (0 0), no matter which part of the level should currently be focused. We could also easily add a screenshake offset, without anything having to change in our rendering code.
Finally, we want to zoom in some more so our virtual 320x180 screen doesn't look so small on a 1080p display. Having this easily configurable will help us once we actually code the virtual screen to deal with multiple resolutions.

I use a full class TopDownCamera for this, with properties and dirty flags, but at its heart is just this one line of code:
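It goes more or less like this - assuming Position is the camera's focus in world (tile) units and Zoom is the configurable display scale; translating first and folding both scale factors into one matrix is equivalent to the three steps described above:

    Transform = Matrix.CreateTranslation(-Position.X, -Position.Y, 0f)
              * Matrix.CreateScale(16f * Zoom);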
Passing that to SpriteBatch.Begin does all the rest. However, there's one other thing we can do, now that we have the camera working:

Polishing Tiles

At this point, I added a Vector4 Viewport property to my camera class that would give me the left, top, right and bottom limits of the visible area of the world. Using that, we can calculate the upper and lower limits of the x and y loops in our DrawLayer method. This has two major advantages:
  • We only consider tiles that are actually on screen - map size is removed from the performance equation.
  • We also get to consider indices that are off the map. This may sound stupid at first, but whenever the camera gets close to the edge of the map, you see Cornflower Blue, or whatever other color you clear your screen with. If the map data contained a set of border blocks, we could render those there instead - and detecting when to do so is easy: just check whether the x and y indices are within the bounds of the blocks array.
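To actually calculate those limits, we can do something like this (a sketch, assuming Viewport stores (left top right bottom) in world/tile units):

    Vector4 view = camera.Viewport;
    int xMin = (int)Math.Floor(view.X);
    int yMin = (int)Math.Floor(view.Y);
    int xMax = (int)Math.Ceiling(view.Z);
    int yMax = (int)Math.Ceiling(view.W);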

Depending on how much attention you want to give this, determining which tile to render can actually be more difficult. If you just define a single border block, you can just go
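...something like this, with BORDER_BLOCK and the bounds variables as hypothetical names:

    bool onMap = x >= 0 && y >= 0 && x < mapWidth && y < mapHeight;
    ushort tile = onMap ? blocks[layer, x, y] : BORDER_BLOCK;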
I, however, wanted the border blocks to be almost fully featured maps in their own right. They share the same layer settings and obviously won't get any collision data, but I defined a ushort[,,] blocks array for them, which should be repeatedly rendered around the map.
Finding the proper index into this array is a bit more complicated. You could just take the coordinates modulo the length of the array in the respective dimension, but then you'd still get negative values. Negating these won't work, though, because that would mirror the block pattern above and to the left of the map. However, you can add the length of the array and take the modulo again, which gives the correct pattern, like this:
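(borderBlocks, bw and bh being hypothetical names for the border array and its dimensions:)

    int bx = ((x % bw) + bw) % bw;   // wraps negative x correctly
    int by = ((y % bh) + bh) % bh;
    ushort tile = borderBlocks[layer, bx, by];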

Finally, here is the code in action:
The ugliest island map in the world!
The water is all border blocks, by the way.

2015/08/15

Tile Maps: I'll Show You The World

After fixing the hot mess from installing the full release of Visual Studio 2015 without first uninstalling the Release Candidate, which left me unable to start up or uninstall either (if you're having the same problem, try running the installer from the command line with the /u /force flags), I can finally do the rendering code for tile maps.

However, to see if it works, we'll need a tileset and a map. We can hardcode the tile map, but for the tileset, I decided to use KenneyNL's Roguelike tile sets, just for prototyping.

And now my watch begins

To get an idea of where to even start, let's think of when a tile map would be visible.
  • First and foremost, during exploration, when you're controlling your character,
  • during scripted events, when you're not in control, and
  • when you're in an overlaid menu, where you control the cursor, rather than your character.
These are three inherently different game states that share some logic - they are substates of the encompassing TileMapView game state.
Just to get something on screen quickly, I took a quick stab at implementing it. It's chaotic, it's not feature complete, and it's not even documented. It's a refactoring waiting to happen. However, it also renders this:
Not all that pretty. Yet.
I think the code is mostly self-explanatory, but before we go on to add the missing features, let's discuss the evil bit level hacks and other non-obvious choices.

Starting at the top, the MakeTileID method would probably have been a macro, if those existed in C#. It's purely a convenience function for now, but it does document the bit level layout of a tile ID: There are integers X and Y, each ranging from 0 to 63, taking up bits 0 through 5 and 6 through 11, respectively. There's a SpriteEffects in there, which is a field of two flags, and finally a two bit integer called r. That's an index into the ROTATIONS array above, allowing blocks to be rotated individually.
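Reconstructed from that description, it comes out to something like this - I'm assuming the flags sit at bits 12 and 13, with r in bits 14 and 15:

    static ushort MakeTileID(int x, int y, SpriteEffects fx = SpriteEffects.None, int r = 0)
    {
        return (ushort)((x & 0x3F)
                      | ((y & 0x3F) << 6)
                      | (((int)fx & 0x3) << 12)
                      | ((r & 0x3) << 14));
    }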

The constructor is pretty simple initialization. All these loops programmatically fill the map. The last line is there to test the rotation and mirroring features. I tested them individually as well, but here the combination of both flips and a rotation by 180° gives us the original orientation of the tile.

Finally, the DrawLayer method does the heavy lifting: we Begin a SpriteBatch for each layer because, as the additive/subtractive lighting layers go in between, we'll have to begin different batches anyway. The layers in between are also the reason this is its own method in the first place, and not just the body of a loop in Draw.
Then, we declare and initialize our local variables: the source and destination Rectangles for the call to SpriteBatch.Draw, as well as rot, an index into ROTATIONS, and a SpriteEffects variable. I do this outside the loop because it's probably going to be the hottest loop in our code base for quite a while, and it's a fairly intuitive optimization.
Within the loop itself, we split all the info in the tile ID and update the local variables as necessary, before finally drawing the tile. The Vector2 origin we pass here refers to the center of rotation, relative to the top-left corner of the sprite. Since all tiles are 16x16 pixels, passing (8 8) rotates tiles around their mid-point.
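In rough strokes, the body of that loop comes down to something like this, assuming the bit layout from above:

    ushort tile = blocks[layer, x, y];
    src.X = (tile & 0x3F) * 16;        // tileset column
    src.Y = ((tile >> 6) & 0x3F) * 16; // tileset row
    effects = (SpriteEffects)((tile >> 12) & 0x3);
    rot = (tile >> 14) & 0x3;

    batch.Draw(tileset, dst, src, Color.White,
               ROTATIONS[rot], new Vector2(8, 8), effects, 0f);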

The light that brings the dawn

I already mentioned the light maps that go between layers, and those will be our next priority. To make this make sense, let's place a light source first. The fire pit will do, so that's MakeTileID(14, 26) for me.
As for the light itself, I used gradients and the Posterize adjustment in Paint.NET to make a pixely, fading blob of orange on black and added that to the tile set. At this point, my proto-tileset looked like this. I have my doubts that stuffing the lighting data into the tile set like this is viable in the long run, but for now, this will certainly do.

To blend layers, we need to pass a BlendState when calling SpriteBatch.Begin, so we'll just add that as a parameter to our DrawLayer function. After adjusting our code in Draw and adding a couple tiles to the additive layer, we can now render this:
...still not impressed? Me neither.
While BlendState.Additive is provided in XNA/MonoGame, we'll have to make our own subtractive blend mode. We can't really set the opacity either, but for that, we can use the color parameter to SpriteBatch.Draw.
It is used to tint sprites by multiplying each pixel's color by the provided value. Since we only use the opacity for additive and subtractive blending, we can provide a shade of grey that gets us the same effect as setting opacity.
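XNA's BlendState is configurable enough for the subtractive mode; a sketch, using the ReverseSubtract blend function (destination minus source):

    static readonly BlendState Subtractive = new BlendState
    {
        ColorSourceBlend = Blend.One,
        ColorDestinationBlend = Blend.One,
        ColorBlendFunction = BlendFunction.ReverseSubtract,
        AlphaSourceBlend = Blend.One,
        AlphaDestinationBlend = Blend.One,
        AlphaBlendFunction = BlendFunction.ReverseSubtract,
    };

    // "Opacity" via the tint: a 50% grey halves each pixel's contribution.
    batch.Draw(tileset, dst, src, new Color(0x80, 0x80, 0x80));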
The rules say we get exactly 16 different opacities, so I decided to make another array of constants to reference with an opacityIndex parameter to the DrawLayer method. Choosing an opacity of fifty percent (that's #808080) makes the lighting in this prototype a lot prettier:
Meh.


The fire that burns against the cold

Our little camp fire doesn't look right with just the light map, wouldn't you say? The thing it lacks is movement. I've never seen a still fire, so we need to animate it.
I already mentioned using a Dictionary for that in my last post, and here's how that works:
You need the X and Y parts of a tile ID, both as values and as keys. When rendering, get the lower 12 bits by ANDing the tile ID with 0xFFF, put them in a local variable id, and if (dictionary.ContainsKey(id)), set id = dictionary[id].
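In code, with a hypothetical animations dictionary:

    Dictionary<ushort, ushort> animations = new Dictionary<ushort, ushort>();

    // During rendering:
    ushort id = (ushort)(tile & 0xFFF);
    if (animations.ContainsKey(id)) id = animations[id];

    // Elsewhere, e.g. in Update, retarget an entry to animate that tile
    // (the coordinates of the second fire frame are made up):
    animations[MakeTileID(14, 26)] = MakeTileID(15, 26);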
This lets you alias tile IDs, which is a great way to troll level designers, but not all that useful by itself. To animate a tile, you need to dynamically change the contents of the dictionary. There are seemingly endless ways to time your animations, but no matter which you choose, in the end, the result will look somewhat like this:
Converting this into a gif seems to have fucked the colors, but at least it's moving.

With that, we have the basic feature set down. The new main focus, obviously, is to refactor this mess into something useful. You'll hear from me again once that's done and something interesting is in the making again. Until then!