Category: Programming


Juicy Particles

A few weeks ago I attended the Control Conference in Amsterdam and saw two very interesting presentations. One was by Martin Jonasson (Rymdkapsel) & Petri Purho (Crayon Physics) called Juice It or Lose It. They demonstrated techniques for making your game feel more alive and responsive, and their presentation showed how much more fun a game becomes when you add more and more feedback to the player about what's happening.

The second one was by Jan Willem Nijman from Vlambeer (Ridiculous Fishing), called The Art of Screenshake. Here, Jan Willem showed how adding 'small' details like recoil and screenshake can turn a game from kind of boring into an awesome bullet fest.

These presentations inspired me to do some extra juicing myself (no pun intended), to make Caromble! more alive. As our game is getting more and more complete, we are looking for ways to make the gaming experience of Caromble! more engaging. One of the ways to do this is by adding more feedback to the player. Show what is happening ALL the time! Show the relations between objects and actions. Show that some things are about to happen or that they have just happened. To improve our communication with the player, I have created three particle systems. The third one I will also explain in detail, but words will probably be less impressive than seeing it in action.

In Caromble!, each level is divided into multiple subareas. To reach the next one, the player needs to activate a portal. This is often done by destroying a certain number of objects in that subarea. Previously, whenever an object was destroyed, the portal would magically fill with red portal goo. This sounds like something that could use some extra communication. With the new particle system, each destroyed object spawns particles that shoot towards the portal. Now the magic filling of the portal makes more sense, plus it looks pretty.

In the later chapters of the game, we introduce switches. Switches can do all kinds of things when hit by a ball. They can open a door, lower a ramp or activate another switch. The player can see the causal connection between the two, because the effect happens immediately. However, it is so much nicer (and improves the sense of anticipation) if the switch shoots some particles towards the object that it influences. Adding this effect makes particle system number two.

The third particle system is the one that looks the best in my opinion. Previously, our powerups would just… well… appear out of thin air. The ball hit a powerup, then it was gone, but a few seconds later: Poof! a new one was back again. Now, we have a particle system that precedes the spawning of the powerup. Niceness!

Let me explain how this last effect works in detail. I created six emitters that each spawn around 100 particles per second. A maximum of 50 particles can be alive per system, so older ones are replaced by newly created ones. The particles have a varying lifetime between 0.1 and 0.4 seconds and a start and end size between which they are linearly interpolated. Perlin noise forces are added to the particles to give them some randomness.
Initially, I position these emitters at the bottom of where the powerup will be spawned. They are distributed evenly over a circle with radius r (= half the width of the powerup). The particles get the color of the corresponding powerup (for now they are green).
Then, for the first 1.5 seconds, the emitters rotate around the circle defined by r and move up until they reach the top of where the powerup will be spawned. During this movement, the circle over which the emitters rotate linearly shrinks to a circle with a radius of 0.2 * r. Next, for 0.5 seconds the emitters stay in place and the particles become white.
Finally, for the last 0.3 seconds the emitters move down (without rotating over the circle), while the radius of the circle linearly grows back to r again. Stir all of these ingredients well, put it in the oven for 30-45 minutes and voila, a particle effect for spawning a powerup has been created.
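To make that a bit more concrete, here is a minimal sketch of how the emitter positions could be animated over those three phases (this is illustrative Java, not our actual particle code; the class, parameter names and spin speed are made up):

```java
/** Rough sketch of the powerup-spawn emitter animation; names and the spin speed are made up. */
public final class PowerupSpawnEffect {
    static final float RISE_TIME = 1.5f;   // phase 1: rotate, rise, shrink to 0.2 * r
    static final float HOLD_TIME = 0.5f;   // phase 2: stay in place, particles fade to white
    static final float DROP_TIME = 0.3f;   // phase 3: drop back down, radius grows back to r

    /**
     * Position (x, y, z) of emitter i out of n, at time t since the effect started,
     * for a powerup of half-width r spawning between heights bottomY and topY.
     */
    static float[] emitterPosition(float t, int i, int n, float r,
                                   float bottomY, float topY, float spinSpeed) {
        float baseAngle = (float) (2 * Math.PI * i / n);
        float angle = baseAngle + spinSpeed * Math.min(t, RISE_TIME); // rotation stops after phase 1
        float radius;
        float y;
        if (t < RISE_TIME) {                                   // phase 1
            float s = t / RISE_TIME;
            radius = r + (0.2f * r - r) * s;
            y = bottomY + (topY - bottomY) * s;
        } else if (t < RISE_TIME + HOLD_TIME) {                // phase 2
            radius = 0.2f * r;
            y = topY;
        } else {                                               // phase 3
            float s = Math.min(1f, (t - RISE_TIME - HOLD_TIME) / DROP_TIME);
            radius = 0.2f * r + (r - 0.2f * r) * s;
            y = topY + (bottomY - topY) * s;
        }
        return new float[] { radius * (float) Math.cos(angle), y, radius * (float) Math.sin(angle) };
    }
}
```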

To see all of this new juiciness, please check it out below. Disclaimer: Thomas Schmall is currently working on a funky ass model for the powerup, so for now we have to make do with a green cube.

Furthermore, we wish everyone a wonderful and sparkling 2014, the year Crimson Owl Studios will bring Caromble! to your PC, Mac and Linux.

Java garbage collection and device resources

For Java programmers the garbage collector is a pretty nice thing. Peter wrote a bit about Java and the garbage collector on this blog a while ago. In short, the garbage collector makes sure that everything that isn't needed anymore just quietly disappears. Most of the time you'll never know it is there. After a while you sort of forget its existence, and when you do encounter it, it can be a painful experience.

I had one of those encounters when I was nearly finished with our new render-core and rolled it out to all developer machines. One of the laptops (with only an AMD APU) suddenly started running out of video memory very fast. That was strange: the memory footprint of Caromble! isn't big and hasn't changed with the new core. So what was happening?

After a bit of debugging it proved to be related to the rendering of text. Each time some in-game text changed, we would allocate a new vertex array on the GPU with exactly enough space for each letter. For each text (and the game has about five of these at any one time) only one buffer was used. Hardly exciting, and not something you would expect to cause problems with video memory. Nevertheless, these buffers were causing the problem.

I looked at how the video buffers were disposed. All references to a buffer in video RAM (with a backing system RAM copy) were being managed by a ReferenceQueue. It is a solution from the Ardor project that I think is both elegant and powerful. As long as the data can still be used it remains on the video card. When it can't be reached anymore, because all references to it are destroyed, it is scheduled for garbage collection. If we poll the ReferenceQueue regularly we get a reference to every buffer scheduled for destruction and can tell the GPU to free the corresponding vertex array. So it isn't necessary to keep track of who is referencing the data, but we still get to execute custom code for cleaning up the memory on the GPU. Perfect, right?
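To illustrate the pattern, here is a simplified sketch (not the actual Ardor3D code; the id bookkeeping and the delete call are placeholders):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.util.HashMap;
import java.util.Map;

/** Simplified sketch of GPU-buffer cleanup via a ReferenceQueue (not the actual Ardor3D code). */
public class GpuBufferTracker {
    private final ReferenceQueue<Object> queue = new ReferenceQueue<>();
    // Map each phantom reference to the OpenGL buffer id it guards.
    private final Map<PhantomReference<Object>, Integer> glIds = new HashMap<>();

    /** Register a CPU-side buffer object together with the GPU buffer id created for it. */
    public void track(Object cpuBuffer, int glBufferId) {
        glIds.put(new PhantomReference<>(cpuBuffer, queue), glBufferId);
    }

    /** Call this regularly: frees GPU buffers whose CPU side has been garbage collected. */
    public void pollAndRelease() {
        Reference<?> ref;
        while ((ref = queue.poll()) != null) {
            Integer id = glIds.remove(ref);
            if (id != null) {
                deleteGlBuffer(id); // placeholder for the actual GL delete call
            }
        }
    }

    private void deleteGlBuffer(int id) {
        System.out.println("would delete GL buffer " + id);
    }
}
```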

Except it isn't. As soon as the garbage collector detects that a video buffer can't be used anymore, it returns a reference to the object holding that buffer for us. But therein lies the problem. We won't get that reference until the garbage collector determines that the object can safely be removed. The only guarantee the JRE makes on this matter is that this will happen before Java runs out of heap-space. So as long as there is sufficient system RAM available, the garbage collector might decide that it is better for performance to just let the garbage lie around for a bit longer. Since it doesn't know about the video memory we are using, it can't know that leaving this bit of garbage around can be dangerous.

So what was the lesson I learned from all this? The job of the garbage collector is to keep the program inside its heap-space and to be as silent about it as possible. If there is enough space, there is no need to slow the program down by collecting. So it won't.

Once I figured out what the problem was, the solution was quite simple (but not very elegant). I solved it by explicitly removing buffers that I know are no longer required from the GPU. Buffers that are potentially shared by different meshes are kept on the GPU until the backing CPU data gets garbage collected. But since those buffers don't change often, that is not a problem.
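In practice that boils down to something like the following sketch (assuming LWJGL-style GL calls; the class and method names are made up, and our real code goes through the renderer rather than calling OpenGL directly):

```java
import java.nio.FloatBuffer;
import static org.lwjgl.opengl.GL15.*;

/** Sketch: when a text changes, free its old GPU buffer right away instead of waiting for the GC. */
public final class TextBufferUpdate {
    static int replaceTextBuffer(int oldBufferId, FloatBuffer newVertexData) {
        glDeleteBuffers(oldBufferId);              // explicitly release the old vertex array on the GPU

        int newId = glGenBuffers();                // allocate and upload the buffer for the new text
        glBindBuffer(GL_ARRAY_BUFFER, newId);
        glBufferData(GL_ARRAY_BUFFER, newVertexData, GL_STATIC_DRAW);
        return newId;
    }
}
```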

It was quite an interesting insight into the workings of the garbage collector, and I think it also proves that you shouldn't try to be too smart with resources allocated out of sight of the garbage collector.

A brand new Render Core

Six weeks ago, I wrote a post on this blog about rewriting the render core of Caromble!. Back then, I thought I was only a few days away from finishing the project. But it turned out to be quite a bit tougher than expected. It seemed that every corner we cut during the past couple of years came back to haunt us and delay finishing the rewrite.

But just a few hours ago, we finally managed to merge the changes back into the game, and that means that the old renderer is history. I will now briefly explain how the new core works. If that sounds too technical, just scroll down to look at the screenshot that proves that Caromble! can now run on MacOS. Yeah!

For Caromble! we are using the Ardor3D engine to power the graphics. Ardor is quite flexible, shielding programmers from the exact version of OpenGL they are using. This is a great way to keep games running on old hardware, while still allowing programmers to add new features for newer devices. The big drawback of this approach is that it mixes fixed-function OpenGL with the newer stuff, and that makes it impossible for programmers to explicitly use one or the other. Which is exactly what we needed to do if we wanted to have the game running on MacOS, because on MacOS only OpenGL 3.2 is currently supported (if you want to use any of the new stuff). For Windows, we'll mostly be using OpenGL 3.2, except for the Intel HD embedded GPUs, which support only OpenGL 3.1.

The nice thing about OpenGL 3.1+ is that it allows you very precise control over where data lives and how it is transferred to the rendering pipeline. For instance, in the old days, if you wanted to specify normals per vertex, you would have to tell a special "normal-pointer" function that the current buffer is used for normals. Later, during rendering, the gl_Normal variable would magically be available to read back these normals.

In modern OpenGL these special variables no longer exist. If you want normals, you have to bind the normal buffer to a generic target, and then bind the generic target to the input that you'll use for normals. It is a bit more work for the normal case, but it gives you much greater flexibility and allows you to write much cleaner code (more stuff can be shared).

So that means the core of our renderer will boil down to the following.

For each mesh that you want to draw you do the following during initialization:

  1. Create a Vertex Array Object. Confusingly named, this object will hold the pointers to all the data describing your mesh.
  2. Gather all your vertex data (positions, normals, texture coordinates, …) and put it into Vertex Buffer Objects. Upload these to the graphics card.
  3. Explicitly bind these buffers to inputs of your vertex shader. (No more magic values such as gl_Normal.)
  4. Upload the textures you'll need and keep track of them. (A rough sketch of steps 1-3 follows below.)
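As a rough illustration of steps 1-3, using plain LWJGL calls (a simplified sketch, not our actual render core; the shader program handle and attribute names are placeholders):

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.GL_FLOAT;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;
import static org.lwjgl.opengl.GL30.*;

/** Simplified mesh initialization sketch (placeholder names, not the actual Caromble! code). */
public final class MeshInit {
    static int createMesh(int shaderProgram, float[] positions, float[] normals) {
        // 1. The Vertex Array Object remembers which buffers feed which shader inputs.
        int vao = glGenVertexArrays();
        glBindVertexArray(vao);

        // 2. Upload vertex data into Vertex Buffer Objects on the graphics card.
        uploadAttribute(shaderProgram, "inPosition", positions, 3);
        uploadAttribute(shaderProgram, "inNormal", normals, 3);

        glBindVertexArray(0);
        return vao;
    }

    // 3. Bind a buffer to a named vertex-shader input -- no more magic gl_Normal.
    private static void uploadAttribute(int shaderProgram, String attributeName, float[] data, int size) {
        FloatBuffer buffer = BufferUtils.createFloatBuffer(data.length);
        buffer.put(data).flip();

        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, buffer, GL_STATIC_DRAW);

        int location = glGetAttribLocation(shaderProgram, attributeName);
        glEnableVertexAttribArray(location);
        glVertexAttribPointer(location, size, GL_FLOAT, false, 0, 0L);
    }
}
```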

During rendering we first activate the shader we will be using for this render pass (deferred shader, light shader, transparent object shader, etc.). Then, for every mesh we need to draw, we do the following:

  1. First we upload the relevant scene information to the graphics card (such as matrix transforms and the near and far plane). Strictly speaking it is a bit of a waste to do this for every mesh, but we have yet to make that optimization. (And we are pixel-shader limited right now anyway.)
  2. Now we look up the textures that belong to this mesh, and bind them to the correct texture slots.
  3. We now bind the correct Vertex Array Object and tell OpenGL it is time to draw something. (Again, a sketch follows below.)
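And the per-mesh drawing, again as a simplified sketch with placeholder names rather than our real code:

```java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL13.*;
import static org.lwjgl.opengl.GL20.glUseProgram;
import static org.lwjgl.opengl.GL30.glBindVertexArray;

/** Simplified render-pass sketch (placeholder names, not the actual Caromble! code). */
public final class RenderPass {
    /** Minimal stand-in for whatever per-mesh bookkeeping the engine keeps. */
    public static class Mesh {
        public int vaoId, diffuseTextureId, vertexCount;
    }

    static void render(int shaderProgram, Iterable<Mesh> meshes) {
        glUseProgram(shaderProgram);              // activate the shader for this render pass

        for (Mesh mesh : meshes) {
            // 1. Upload scene uniforms here (matrix transforms, near/far plane, ...).
            // 2. Bind this mesh's textures to the correct texture slots.
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, mesh.diffuseTextureId);
            // 3. Bind the Vertex Array Object and draw.
            glBindVertexArray(mesh.vaoId);
            glDrawArrays(GL_TRIANGLES, 0, mesh.vertexCount);
        }
        glBindVertexArray(0);
    }
}
```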

In a nutshell, that is how our current render-core works. Of course there is more to it, but this is the most important bit.

For us an important advantage of the new render-core is that it is much smaller than the old one. There is only one code path that all objects take, and that will make it much easier for us to optimize the game during the last phase of development that we are now in. So while it took a lot of time and effort to port our render core to the OpenGL core profile, we are confident the investment will pay off by making our code much easier to maintain.

And with the promised screenshot, that will be all from me for now  🙂

Caromble! on MacOS

Caromble! on MacOS, please ignore that I lost the ball.

Graphics Effects

Since most of us working on Caromble! are programmers, it shouldn’t come as a surprise that we wrote our own game engine for the game.
In the animated image below we show a few of the effects we apply to Caromble! to get it to look the way it does.

Graphics Effects


In the first of the images you see the scene with no texturing and only the main light applied. In the next image you see the same scene with secondary lights and the specular map enabled.

After that, we add bump mapping, basic colors, shadows, bloom & depth of field, ambient occlusion, reflections and soft particles.

On low-end machines, you might have to disable some of these effects to run the game at a decent speed. But on high-end machines, we still have some room to spare. Maybe, just maybe, we might add motion blur to get the game to look even better…

Vacation and a major rewrite

You may have noticed that things are a bit quieter than usual on our development blog. Our vacations are to blame. Personally, I have just spent two awesome weeks touring through Italy with my girlfriend. Eating pasta, drinking wine, sitting in the sun and generally not thinking about anything more complex than where we should have lunch. Maybe I was also gathering some strength for what I knew was waiting for me back home.

parasol

You see, back when we first started working on Caromble! none of us had ever made a game for real. It is amazing to see that lots of the code we wrote in those days is still in the game, working as originally intended. It is unsurprising that lots of code has been thrown away and rewritten. And it is equally unsurprising that there are lots of bits and pieces of code that really should be rewritten, but never were broken enough to warrant such action.

Among this code was the core rendering engine. Like I said, when we started there were lots of things we didn't know. Especially about rendering. We understood the basics, but when it comes to rendering a game there are also a lot of important specifics. So we just went ahead, working with things we didn't fully understand and getting them to work. Starting from examples from the internet we actually made quite good progress. Eventually we were even able to contribute a little bit to Ardor3D, the graphics engine we use.

While we were getting to grips with rendering in OpenGL, OpenGL itself went through a transition. Slowly but steadily the fixed-function pipeline went out of fashion, in favor of the programmable pipeline. Initially we didn't care much for fashion, and quite happily used both pipelines for different parts of the rendering process. It didn't really seem to matter at the time. So what if we depend on both?

Well, it turns out it does matter. To be able to run the game on MacOS we needed to choose. Apple's drivers will work with either pipeline, but not with both at the same time (there is no OpenGL compatibility profile). We couldn't go back to the fixed-function pipeline anymore, as we would not be able to maintain the look of the game. So we had to take the leap forward.

And that brings me to the thing that was waiting for me. A complete rewrite of the core rendering code. Just when the game is quite nearly finished, too. To add to the fun, this also means we can't use the off-the-shelf version of Ardor3D anymore, which is still fixed-function at heart. At the same time, it is also quite exciting. We get to write a rendering core which is tailored exactly to the needs of Caromble!, and we anticipate that we can get a bit more performance this way too.

Originally, this would have been the blog post where I would triumphantly boast about the awesome new render core we just put in Caromble!, making life at least twice as awesome. Sadly, all this exciting state-of-the-art rendering core currently does is give me a black screen. And sometimes crash. But I should be able to fix that pretty quickly. When I do, I'll share how I did it here. We could also make the renderer available to those who would find it interesting.

Until then!

Long Lost Code

A while back we hinted at it: Boss Fights. Originally, the idea was that the virus shards were items you would collect to progress through the game. But after a long discussion we felt that this was the typical behavior seen in so many other brick-breaker games. We didn't want the virus to be stationary, we wanted it to take the fight to you!

So, about that fight: well, I really do not want to spoil everything before you get a chance to play the game… Previously we got inspired by games like pinball; this time we looked at shmups (Raptor!) and even games like Pong. While Arkanoid will always have the evillest boss out there (DOH!), after a few days of programming, our (still nameless) boss is shaping up to be a pretty good challenge.

When we started building our game engine, many, many years ago, we were trying to make it as generic as possible (trying to be able to support fps/race/etc. – basically all games). In part, this was a waste of time; when we started working on Caromble! we quickly gained a lot more focus and improved the engine in the areas Caromble! needed, finally making some serious progress.
But not all was lost, because for the boss behavior we could reuse code from years back that still proved to work just fine. For programmers like us, that feels awesome!

But enough with the sentiment, we finally got our EToo London interview online, check it out!

And again, if you’re going to Gamescom and want a go at Caromble!, just let us know!

Juicing it Up – Paddle Movement

As you may know, we're working very hard to dot the i's and cross the t's – see our Ripple post of a few weeks back, for example. With all this juiciness going on throughout the game, we simply cannot ignore the paddle. While it still has placeholder graphics, we can at least tweak its movement.

Previously, the paddle simply followed the mouse (or keyboard/gamepad/Leap Motion) and moved backwards slightly when it got hit by the ball. To improve upon this, we now use physics for its movement. The paddle is now held in place by four springs that allow it to accurately bounce back when hitting the ball and to tilt when moving very fast.
The effect is pretty subtle to the eye, but it makes all the difference when playing.
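For the curious, the idea is roughly this (a minimal sketch assuming a simple spring-damper per corner; the constants and names are made up, and the real thing runs inside our physics engine):

```java
/** Minimal spring-damper sketch for a paddle held by four corner springs (made-up constants). */
public class SpringPaddle {
    static final float STIFFNESS = 80f;   // spring constant k
    static final float DAMPING = 12f;     // damping coefficient c
    static final float MASS = 1f;

    // Current and rest positions of the four corners (x, z per corner), plus velocities.
    float[][] corner = new float[4][2];
    float[][] rest = new float[4][2];
    float[][] velocity = new float[4][2];

    /** Step the simulation; targetX is where the mouse/keyboard/gamepad wants the paddle. */
    void update(float dt, float targetX) {
        for (int i = 0; i < 4; i++) {
            // Move the rest position with the input; the corners lag behind on their springs.
            rest[i][0] = targetX + (i % 2 == 0 ? -0.5f : 0.5f); // made-up half-width of 0.5
            for (int axis = 0; axis < 2; axis++) {
                float displacement = corner[i][axis] - rest[i][axis];
                float force = -STIFFNESS * displacement - DAMPING * velocity[i][axis];
                velocity[i][axis] += (force / MASS) * dt;
                corner[i][axis] += velocity[i][axis] * dt;
            }
        }
        // The difference between the corner positions gives the tilt, and a ball
        // impact simply adds an impulse to the corner velocities.
    }
}
```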


Taking the Leap

When Microsoft's Kinect was released, I imagined all the cool kinds of interaction that would be possible and what this could do for gaming. Now, a few years later, I haven't really played any great games that used the Kinect. The only exception would be Child of Eden, which was awesome. I think the main problem was the precision. It feels like one out of five times the Kinect misinterpreted your actions. Of course this could be down to poor software implementation and not the hardware, but still. Were motion controls doomed?

Now, it is less than one month until the release of the Leap Motion. Instead of full body detection, it does hand and finger recognition. The poor precision that in my opinion degraded the Kinect is not an issue here. It has a precision of 0.01 mm with almost no latency (~5 ms). I remember that I saw the video about a year ago and was convinced, with many others, that it was a hoax. That kind of precision was just unreal, especially for the price they aimed to sell it at.

When the possibility arose to apply for a dev-kit, curious as I am, I did not hesitate and filled in the form. A few weeks later, after attending a game jam for Empowerment for Children, I came home to find a package addressed to Crimson Owl Studios. This was the first mail we ever received that was addressed to Crimson Owl, so that was already special in itself. Even better, it contained the Leap Motion dev-kit. About 15 minutes later I had it plugged in and a demo running… WTF! It wasn't a hoax. It worked, just as in the video. Wow, nice… It did have some problems with detecting multiple hands simultaneously, but that was solved in later firmware updates.

So how to use this for Caromble!? Let's just try something and first let it detect my finger positions. The SDK is simple and intuitive and comes with a Java binding, so in less than an hour we had something up and running. Moving your finger left and right moves the paddle, simple as that. I was tweaking the parameters to adjust the sensitivity when I came up with the idea to use the finger position along the z-axis to influence the sensitivity. That turned out to be a pretty good idea. During gameplay I sometimes want high precision and sometimes high movement speed. The ability to adjust this during gameplay felt really nice. The two other actions we have in Caromble!, charging and activating powerups, have been implemented by pointing your finger upward and tapping, respectively. This implementation makes it possible to play Caromble! with the Leap Motion, as can be seen below, and I must say, for me it adds to the fun.
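To give an impression of how little code this takes, here is a stripped-down sketch using the Leap Java binding as I remember it (the paddle mapping and sensitivity constants are made up, and this is not our actual integration code):

```java
import com.leapmotion.leap.Controller;
import com.leapmotion.leap.Finger;
import com.leapmotion.leap.Frame;
import com.leapmotion.leap.Vector;

/** Sketch: map the frontmost finger to paddle movement (made-up constants, not our real code). */
public class LeapPaddleControl {
    private final Controller controller = new Controller();

    /** Returns the new paddle x position, or the old one if no finger is visible. */
    public float updatePaddle(float currentPaddleX) {
        Frame frame = controller.frame();
        if (frame.fingers().isEmpty()) {
            return currentPaddleX;
        }
        Finger finger = frame.fingers().frontmost();
        Vector tip = finger.tipPosition();   // millimeters, relative to the device

        // The closer the finger is to the device (small z), the more precise the control:
        // scale the finger's x position down to paddle units by a z-dependent gain.
        float gain = Math.max(0.001f, Math.min(0.02f, 0.002f + tip.getZ() * 0.0001f));
        return tip.getX() * gain;
    }
}
```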

Perhaps motion controls for gaming aren't doomed after all! We will try to find the best way of using the Leap Motion to control Caromble!. Perhaps you readers have some ideas on how to use it to charge the paddle or activate powerups? Perhaps we could add a second hand? Possibilities enough. Let us know in the comment section below.


Making waves (or ripples)

Last week was a big week for Caromble!. For the first time since last October we would be showing the game to an audience. And not just any audience. Caromble! would leave the Netherlands for the first time, and travel all the way to EToo London.


Doesn’t this make us look like very real and serious game developers?

It wasn't just showing the game though; Pascal and I were set to make our first ever appearance in a live stream. So yes, there were some nerves, but all went well. I always enjoy being able to explain something that I'm passionate about, so I enjoyed being interviewed together with Pascal about Caromble!.

The next day we hooked a laptop to a big TV and had people play Caromble! all day. Collecting feedback from players who have never played the game before is very important, and we definitely learned a thing or two about how we can make the game even more fun. It was also a great chance to meet a lot of very nice people from the UK gaming scene. All in all, I think the nicest thing about showing the game at such an event is just watching people play and enjoy it. Having people play and enjoy Caromble! is the whole reason we are making the game, after all.

Showing the game and talking about it is great fun and very important, but most of all it got me all psyched and eager to work on Caromble! even more. So when I had nothing on my hands on Saturday, I figured it would be a great time to do some development.

One of the things I've been very eager to improve for quite a while is what happens every time a ball hits a wall. Since Caromble! is a game, and games are supposed to be fun, all things in the game should be fun. Especially the things that happen all the time. Balls hit walls a lot in Caromble! (sorry, I had to write that). We have been trying to make hitting the wall more fun for a while now. Adding nice particles helped a bit, but it never felt quite right.

Last Saturday I finally sat down and wrote a shader that should help. The effect is inspired by the helicopter crash from The Matrix. Or maybe just by throwing a rock in a pond. Where the ball collides with a wall, I move bits and pieces of the final rendered image back and forth to make it seem like the whole world is rippling. Maybe I should stop talking and start showing. We are very curious about any feedback you may have!
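The displacement itself is basically a damped sine wave radiating out from the impact point. Roughly like this (a sketch of the idea in Java rather than our actual fragment shader; all constants are made up):

```java
/** Sketch of the ripple displacement applied to the rendered image (made-up constants). */
public final class Ripple {
    static final float WAVELENGTH = 0.08f;  // distance between wave crests, in screen units
    static final float SPEED = 1.5f;        // how fast the ring expands
    static final float STRENGTH = 0.01f;    // maximum offset of a pixel
    static final float FALLOFF = 6f;        // how quickly the ripple dies out with distance

    /**
     * Offset to add to a pixel's texture coordinate (px, py), for a ripple that started
     * at (cx, cy) a time t seconds ago. In the real effect this runs per pixel in a shader.
     */
    static float[] offset(float px, float py, float cx, float cy, float t) {
        float dx = px - cx;
        float dy = py - cy;
        float dist = (float) Math.sqrt(dx * dx + dy * dy);
        if (dist < 1e-5f) {
            return new float[] {0f, 0f};
        }
        // A sine wave travelling outward, fading with distance and with time.
        float wave = (float) Math.sin((dist - SPEED * t) * (2 * Math.PI / WAVELENGTH));
        float attenuation = (float) Math.exp(-FALLOFF * dist) * Math.max(0f, 1f - t);
        float amount = STRENGTH * wave * attenuation;
        // Push the pixel along the direction away from the ripple center.
        return new float[] {amount * dx / dist, amount * dy / dist};
    }
}
```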


Since I was a kid, I have loved to create something huge with Lego blocks and then… destroy it. Or to build a huge house of cards with my grandfather, like seven stories high, and then… make it collapse. I'm not unique in this. Many people seem to like destruction, witness the several popular science television shows in which objects are blown to pieces or dropped from a crane.

I think that besides seeing things collapse, people find it even more satisfying to be the cause of some kind of destruction. Whenever I see a line of dominoes, my fingers itch and I want to be the one to tip over the first stone. I think we love to be the first part of a causal chain. I'm not sure why we love it so much, but perhaps it creates a feeling that makes us extra aware of being alive. I experience a similar feeling when travelling by bus. Whenever I need to press the stop button, I also want to see the stop sign light up because of MY button press; "Yeah, I did that!". Perhaps instead of "I think, therefore I am", the statement "I cause, therefore I am" suits us better.

Caromble! is in its essence a breakout game. A game in which the player moves a paddle, aims and bounces the ball, hoping to destroy some blocks. Somehow it is so much more interesting if one of those blocks was supporting a big structure that is now collapsing. "Yeah, I caused that… again!". That specific feeling is, in my opinion, the added value of incorporating a physics engine into Caromble!. The core gameplay may not always be very different from an 'ordinary' breakout game, but the fact that the player can be the first step in a causal chain that leads to destruction, explosions and chaos makes the experience so much more satisfying. Here is an awesome video from OK Go that demonstrates this idea through a Rube Goldberg machine. And how satisfying does this look (we use JBullet, the Java port of Bullet, for our physics)?

The interesting thing about incorporating a physics engine in your game is choosing when and how to intervene in the physics simulation. Without intervention (moving, creating or destroying objects in the simulated world) there is no gameplay. In Caromble! the single action a player can perform (besides using power-ups) is moving the paddle. With the paddle, the player can influence the most important object in our game: the ball. The ball is a peculiar object. It takes part in the physics simulation, but it is not totally governed by it. If it were, the friction between the floor and the ball (neither of which is zero) would eventually result in a ball at rest (zero velocity). That would be quite a boring game. That is why the ball is governed by more rules than those of the physics simulation: OUR rules.

For example, each level has a desired ball speed. Whenever the ball speed deviates from it, we make sure it quickly converges to the desired one. Also, we handle all ball collisions ourselves. The collision detection is done by the physics engine, but based on the incoming direction and the normal of the object at the collision point, we determine the outgoing direction of the ball ourselves. One of the reasons we do this is to ensure that there is some minimum bouncing angle. Whenever the ball bounces almost horizontally between two opposite side walls, we don't want it to take forever before it approaches our paddle again.
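In code, those two rules look roughly like this (a simplified 2D sketch with made-up constants, not our actual ball code, which works on the physics engine's 3D vectors):

```java
/** Sketch of the two "OUR rules" for the ball: speed convergence and a minimum bounce angle. */
public final class BallRules {
    static final float CONVERGENCE_RATE = 4f;  // how quickly the speed converges to the target
    static final float MIN_ANGLE = (float) Math.toRadians(15);  // minimum angle away from a wall

    /** Nudge the ball's speed towards the level's desired speed, keeping its direction. */
    static float[] convergeSpeed(float[] velocity, float desiredSpeed, float dt) {
        float speed = (float) Math.hypot(velocity[0], velocity[1]);
        if (speed < 1e-6f) {
            return velocity;
        }
        float newSpeed = speed + (desiredSpeed - speed) * Math.min(1f, CONVERGENCE_RATE * dt);
        return new float[] {velocity[0] / speed * newSpeed, velocity[1] / speed * newSpeed};
    }

    /** Reflect the incoming direction around the surface normal, enforcing a minimum bounce angle. */
    static float[] bounce(float[] incoming, float[] normal) {
        // Standard reflection: r = d - 2 (d . n) n
        float dot = incoming[0] * normal[0] + incoming[1] * normal[1];
        float rx = incoming[0] - 2 * dot * normal[0];
        float ry = incoming[1] - 2 * dot * normal[1];

        // Angle between the outgoing direction and the wall surface.
        float outDotN = rx * normal[0] + ry * normal[1];
        float len = (float) Math.hypot(rx, ry);
        float angleFromSurface = (float) Math.asin(Math.max(-1f, Math.min(1f, outDotN / len)));

        if (angleFromSurface < MIN_ANGLE) {
            // Too shallow: rebuild the direction from its tangential part and the normal so that
            // it leaves the wall at the minimum angle, keeping the original speed.
            float tx = rx - outDotN * normal[0];
            float ty = ry - outDotN * normal[1];
            float tLen = (float) Math.hypot(tx, ty);
            rx = (tx / tLen) * len * (float) Math.cos(MIN_ANGLE) + normal[0] * len * (float) Math.sin(MIN_ANGLE);
            ry = (ty / tLen) * len * (float) Math.cos(MIN_ANGLE) + normal[1] * len * (float) Math.sin(MIN_ANGLE);
        }
        return new float[] {rx, ry};
    }
}
```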

Finding the balance between simulation and intervention is quite the challenge. Sometimes, mostly after heated discussions, we decide to add a new intervention rule to our physics simulation for the sake of gameplay. How far should you go with this? Well, we think you should aim for a level of control such that whenever the player loses a ball, it should have been possible to prevent it with skill. At the same time, the simulation should add a factor of unpredictability and surprise. You know that once you hit that one block, the structure will collapse, but exactly how should be somewhat of a surprise. We aim to get this balance right and I think we are on the right track. As long as Caromble! makes some of you players experience the feeling I got when tipping over a Lego structure, I am very satisfied.