Sunday, 19 May 2013

Developing Mobile Games with Moai SDK Book Review


This book is a gentle introduction to the basics of building games with the Moai SDK.  Moai is used in professional game development because it's simple, open, cross-platform, and powerful.  It takes a lot of the initial legwork out of building a framework, while still being completely open to modification.  This book, however, is not geared towards professionals; it's a basic introduction to game development for beginners using the Moai framework.  If you're a beginner to game development, either hobbyist or aspiring professional, then this book is a good starting point.  Otherwise, I wouldn't even consider it; it won't provide you with much value.


Now, for those beginners who are still reading, I'll go over what the book contains.  There are several example projects that cover different aspects of game development, including HUDs and physics, as well as the absolute basics: gameplay logic, displaying sprites, playing sound, a simple resource manager, and so on.  If you're unfamiliar with any of these topics, then this book may be for you.

Finally, it should be mentioned that this book really focuses on iOS game development.  There isn’t much explanation on how to deploy to other platforms.

All in all, this is a decent book for beginners in game development, but it doesn't provide much value beyond that.  It barely scratches the surface of game development with Moai.

Friday, 19 October 2012

Skipping Stones first footage

Here's the first footage from the project Skipping Stones, which I'm currently dedicating most of my time to:

Friday, 12 October 2012

The trouble with abstraction

Abstraction is good


Abstraction hides the complicated details of a process so that the user can focus on the grand scheme instead of dealing with the nitty gritty.  For example, to drive an automatic car you don't have to understand how the engine works; you need to know how to use the pedals, the steering wheel, and the gauges.  If your car is a manual you have other nonsense to worry about, like the clutch, and knowing how the transmission works.  That's because an automatic transmission is one level of abstraction higher than a manual one: if your transmission is automatic, then the details of how it works are hidden from you.  Furthermore, both of these abstract away the inner workings of the engine, so the driver can focus on the road and his destination rather than on how to turn the wheels of the car.  Eventually driving will be abstracted even further and cars will drive themselves, so the only thing you'll have to think about is your destination.

Another example of abstraction is a restaurant.  A client in a restaurant is given a menu of options, tells somebody which ones he wants, and gets to eat them shortly after.  Restaurants abstract away cooking and creating recipes, so you only have to worry about how to get the food from your plate into your mouth.

Instructions in a higher-level programming language abstract away assembly language, which in turn abstracts away machine code.  If you're writing for a VM then there's at least one more step in there too.  This is a good thing, because code looks something like this:
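```csharp
// A trivial high-level statement (illustrative example):
int total = price * quantity + tax;
```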


instead of something like this:
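```
; Roughly what a compiler might emit for the line above (illustrative x86):
mov  eax, [price]        ; load price
imul eax, [quantity]     ; multiply by quantity
add  eax, [tax]          ; add tax
mov  [total], eax        ; store the result
```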


There are lots of benefits to using higher level languages, including readability, productivity, clarity, etc., that you can read about all over the place, but I'm writing an article about the problem with abstraction.

Where abstraction fails


As it turns out, programming is not the same as driving a car.  Cars are tools with one goal in mind, whereas most programming languages are general-purpose tools used for building more tools.  The trouble is that it's easier to design a tool that does one thing really well than it is to design a tool that can do anything, including build more tools.  As a result, high-level languages will get the job done, but they may not do it the way you had hoped.  This isn't actually a problem for most applications, but it can be a pain when writing video games and other performance-critical applications.  All of a sudden, the nitty gritty details that have been abstracted away become very important.  Another problem arises when the system breaks.  For example, if your car's brakes don't work, you can't fix them unless you understand the underlying system.  Similarly, programmers who don't understand how a compiler works will be confused by unfamiliar compiler errors, so they have to seek out somebody who does understand the underlying system.

What to do about it?


There are multiple solutions.  You can build your own system of abstraction, so that you can control and understand every aspect of it.  This is equivalent to building your own car engine, or writing your own game engine in a language that gives you the necessary level of control, such as C or C++.  This is an enriching but time-consuming experience, and when you finish you'll feel like a real champ.  However, this is also a very difficult path, and without the right guidance you may never actually finish, in which case you'll feel like a trash can, question whether you just aren't as good as those who succeeded, go through an existential crisis, and move into the woods to find yourself.  This is the bottom-up approach to understanding a system of abstraction.

You could also learn the system from the top down.  In other words, start using an existing system of abstraction, and as you become more comfortable with it, learn ways to coax it into doing what you want.  This has been a common approach for a long time.  For example, when C++ first trickled its way into game development, developers were concerned with making object-oriented design in C++ as performant as C, which means avoiding constructing and destroying too many objects.  The problem is that, depending on your code, the compiler may create lots of temporary objects without you realizing it.  That means you have to understand the underlying system of abstraction in order to coax the compiler into not doing that, which I think is a great example of the top-down approach to learning a system of abstraction.  By the way, this problem still exists, and in garbage-collected languages it can have a huge effect on performance.  As with the bottom-up approach, the more you understand the underlying system the more productive you'll be, but luckily the path to righteousness won't be quite as difficult.  Furthermore, the knowledge you gain from learning this system can be used later if you choose to build your own from the bottom up.
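As a hypothetical illustration in a garbage-collected language (C#, since that's what I've been working in lately): the innocent-looking loop below quietly allocates a pile of temporary objects, and knowing the underlying system suggests the fix.

```csharp
using System.Text;

// Each iteration boxes the int and builds a brand-new string,
// leaving the old one behind for the garbage collector.
string log = "";
for (int i = 0; i < 1000; i++)
{
    log += i + ", ";
}

// Knowing how strings work underneath, you'd reach for a StringBuilder,
// which grows one buffer instead of allocating on every iteration.
var builder = new StringBuilder();
for (int i = 0; i < 1000; i++)
{
    builder.Append(i).Append(", ");
}
string betterLog = builder.ToString();
```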

You could do nothing.  That is, if you're lucky you'll find the right tools for the job and you won't have a problem, like a steering wheel which is perfectly suited to steer a car.

Why am I writing this?


I think it's important for creators to understand their tools, rather than view them as mysterious black boxes.  Doing so will make you more effective when using them, and help you avoid helplessness and frustration when something goes wrong.

In my experience no system is perfect, and this post represents my experience in dealing with a world of broken systems.

Friday, 28 September 2012

Some more Unity editor scripting goodies

Last post I showed you how to make some boring additions to the editor; this post I'll step it up to slightly-less-than-boring.

As I mentioned in a previous post, our game has triggers, and when you hit a trigger, sound happens.  Normally, you might make these triggers into a prefab, and every time you wanted a new one you'd create a new instance of the prefab, attach your sound, place the trigger, and adjust the collider.  To simplify this process, I created a drag & drop area: you drop in a sound file and it sets everything up so that you need only adjust the horizontal position and radius of the trigger.


Here is the example code for a drag and drop GUI in the inspector.  Luckily, it's not very complicated.  First, we create a new Rect, which will serve as our drag and drop area; in this example it's 50 units tall and expands to the width of the inspector.  Then we capture the current Event with Event.current.  This will tell us if the user is performing a drag operation.  There are a lot of EventTypes, but we're only concerned with DragPerform and DragUpdated.

At this point we check if the user's drag is inside our Rect, otherwise we can ignore it.  This is done by calling Contains() on our Rect and passing it the mouse position.  Next I set the DragAndDrop.visualMode to DragAndDropVisualMode.Copy.  This little touch gives the user visual feedback that he's within the drag area bounds by changing the mouse cursor.  Finally, if the event is a DragPerform, as opposed to DragUpdated, then we call DragAndDrop.AcceptDrag(), which is probably important for something.
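Put together, a sketch of that looks like this (the box label is mine):

```csharp
// Drag & drop area for the inspector: 50 units tall, full inspector width.
Rect dropArea = GUILayoutUtility.GetRect(0f, 50f, GUILayout.ExpandWidth(true));
GUI.Box(dropArea, "Drop an AudioClip here");

Event evt = Event.current;
switch (evt.type)
{
    case EventType.DragUpdated:
    case EventType.DragPerform:
        // Ignore drags that happen outside our drop area.
        if (!dropArea.Contains(evt.mousePosition))
            break;

        // Change the cursor so the user sees the drop will be accepted.
        DragAndDrop.visualMode = DragAndDropVisualMode.Copy;

        if (evt.type == EventType.DragPerform)
            DragAndDrop.AcceptDrag();
        break;
}
```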

So now let's do something with our newly dragged object.

In this snippet I added a block of code to our switch statement.  The idea is that for each object the user dropped, we check if it's an audio clip.  If it is, we create a new Trigger, make it a child of our TriggerContainer, set its position to that of our container, and set its audio clip to the one dragged in.  To do this, first I load the prefab using AssetDatabase.LoadAssetAtPath(), passing it the path to our trigger prefab relative to the root of the project.  Then we simply Instantiate() a new one, set its position to that of our container, and set its parent to our container.  As you may notice, setting the parent/child relationship is done through the transform object, which isn't necessarily obvious.  Finally, I finish it off by grabbing the AudioSource object and setting its clip to the one that was dragged in.
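A sketch of that block, extending the DragPerform branch from the snippet above (the prefab path and the container variable are assumptions, so adjust them for your project):

```csharp
if (evt.type == EventType.DragPerform)
{
    DragAndDrop.AcceptDrag();

    foreach (Object dragged in DragAndDrop.objectReferences)
    {
        AudioClip clip = dragged as AudioClip;
        if (clip == null)
            continue; // only audio clips become triggers

        // Path is relative to the root of the project.
        GameObject prefab = (GameObject)AssetDatabase.LoadAssetAtPath(
            "Assets/Prefabs/Trigger.prefab", typeof(GameObject));

        GameObject trigger = (GameObject)Instantiate(prefab);

        // Parent/child relationships go through the transform.
        trigger.transform.position = container.transform.position;
        trigger.transform.parent = container.transform;

        // Hook the dragged clip up to the trigger's AudioSource.
        trigger.GetComponent<AudioSource>().clip = clip;
    }
}
```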

When you're done, it should look something like this:

and dragging in new AudioClips will create new Triggers:

So there it is.  I covered making a drag and drop GUI, loading assets from an editor script, and instantiating new parent/child objects.

Friday, 21 September 2012

How to (start) writing a custom inspector in Unity

Unity's editor extension documentation is quite sparse, so I'm sharing some of what I did this week to help confused developers.

Here is more or less a template for starting a custom inspector script:
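```csharp
using UnityEngine;
using UnityEditor;

// A sketch based on this post: "Trigger" and "radius" come from the text,
// the class and field names are mine.
[CustomEditor(typeof(Trigger))]
public class TriggerEditor : Editor
{
    private SerializedObject serializedTrigger;
    private SerializedProperty radiusProperty;

    private void OnEnable()
    {
        // Convert target into a SerializedObject (explained below).
        serializedTrigger = new SerializedObject(target);
        radiusProperty = serializedTrigger.FindProperty("radius");
    }
}
```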

What we have here is an inspector for a MonoBehaviour of type Trigger, as denoted by [CustomEditor(typeof(Trigger))], which you should adapt for your own objects.  Editor scripts inherit from the Editor class and have an OnEnable function, which works similarly to Start in a normal MonoBehaviour.

In the beginning


The first thing you'll notice is that I convert target into a SerializedObject.  The reason you want to do this is explained in this video, which you'll probably want to watch anyway to learn more about editor scripting.  After that you'll probably want access to the variables in your MonoBehaviour.  My triggers are round, so they have a public int called radius, and I get access to it by calling FindProperty("radius") and assigning the result to a SerializedProperty.

Showing stuff


Now you can access all the properties of your object, so you're going to want to show them to the user.  So let's take this party to the OnInspectorGUI() function.  First of all, you're going to want to call Update() on your serialized object.  You do this because of reasons.  Seriously, I don't know what this does, but apparently it's important even though nobody thought to mention it anywhere.  Next I added a label for posterity, and a property field.  PropertyField displays the default inspector control for the given property type, which in this case is an int.  Finish by calling ApplyModifiedProperties() on your object.  This applies all the changes you made, but also gives you the ability to undo, among other goodies, which is clear from the documentation, right?  Anyway, at this point you've duplicated exactly what the default inspector gives you.  Good work!  By the way, in case you actually wanted to draw the default inspector, you can call DrawDefaultInspector() as well.
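Continuing the sketch from above, the whole function might look like this:

```csharp
public override void OnInspectorGUI()
{
    // Sync the serialized copy with the object's current values.
    serializedTrigger.Update();

    GUILayout.Label("Trigger");

    // Draws the default control for the property's type (an int field here).
    EditorGUILayout.PropertyField(radiusProperty);

    // Write any edits back to the object, with undo support.
    serializedTrigger.ApplyModifiedProperties();
}
```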

Showing cooler stuff


Let's make something more interesting, like a dropdown.  I gave my Triggers a priority variable, which is an int, but sometimes it's hard to remember whether a low number means low priority or high priority.  So instead I added a dropdown with the descriptive choices low, medium, and high.

The code looks something like this.  It's the same code as before, but with an added property, a list of words to fill the dropdown, and the code to draw it.  The dropdown, or "Popup", sits inside a horizontal layout block, as denoted by BeginHorizontal() and EndHorizontal().  Basically, all this does is lay out all the GUI elements inside the block next to each other.  The Popup() function is pretty straightforward: give it the current index and an array of strings to fill itself with, and it returns the selected index.  When you're done, it should look something like this
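In code, that might look like this (a sketch; the property name and the labels are assumptions):

```csharp
// Added alongside radiusProperty in the sketch above; priorityProperty is
// found via FindProperty("priority") in OnEnable.
private SerializedProperty priorityProperty;
private static readonly string[] PriorityLabels = { "Low", "Medium", "High" };

public override void OnInspectorGUI()
{
    serializedTrigger.Update();

    EditorGUILayout.PropertyField(radiusProperty);

    // Everything between Begin/EndHorizontal is laid out on one line.
    EditorGUILayout.BeginHorizontal();
    GUILayout.Label("Priority");

    // Popup takes the current index and the option labels,
    // and returns the newly selected index.
    priorityProperty.intValue =
        EditorGUILayout.Popup(priorityProperty.intValue, PriorityLabels);
    EditorGUILayout.EndHorizontal();

    serializedTrigger.ApplyModifiedProperties();
}
```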


So now you've gotten a taste of building a custom inspector.  Later I'll post about how to make a drag and drop area and other goodies, but if you want to get ahead you should watch this video.

My experience with Unity Editor Scripting

Once upon a time I decided I was going to extend the Unity editor in order to make the most awesomest adventure game maker of all time.  After a few weeks of struggling through Unity's extremely sparse editor extension documentation, I was able to make a barely usable turd of a tool.  At that point I decided it would be easier to make an in-game editor that could save and load level data.  Admittedly, this was pretty cool because anybody could tweak the levels while playing, but part of the exercise was to explore the editor scripting functionality, which was a great big failure.

This week, however, I discovered the Unite sessions, including intro to editor scripting and advanced editor scripting, which are a blessing for a budding editor scripter.  Working with the Unity editor used to feel like this

But after watching those videos it feels like this

So, big thanks to Shawn White, Tim Cooper and Yilmaz Kiymaz.

Friday, 14 September 2012

Intuitive flick gestures in Unity


One thing I implemented was flick gestures for iOS in Unity.  This turned out to be a little trickier than anticipated, because a flick gesture must be carefully designed to feel good.  In our project, flicking is the primary method of interaction, so it's crucial that it feels comfortable.


The challenge


Our project has the player throwing objects from a first-person perspective.  We use flick gestures to control the throws, so we have to translate a 2D gesture into a 3D velocity. So to get started, we'll consider what we have to work with.

The 2D gesture has a start position and end position, both in screen coordinates, as well as the time it took to complete the gesture.  In other words, we have the distance, time, and direction of the swipe in screen space.
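Bundled up as a little struct, that might look like this (the names are mine, not from the gist linked at the end):

```csharp
using UnityEngine;

// The raw data we have to work with for each flick.
struct FlickGesture
{
    public Vector2 startPos; // screen coordinates, touch down
    public Vector2 endPos;   // screen coordinates, touch up
    public float duration;   // seconds from touch down to touch up
}
```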


Getting velocity


The first thing I did was turn the screen-space gesture into a world-space velocity.  To do this, I took the swipe distance divided by the swipe time, which gives the speed of the swipe, and multiplied it by a fudge value.  Although this strikes me as a fairly naive approach, it turned out to be surprisingly effective.  To finish, I spent some time tweaking the fudge value until the velocity of the thrown stones aligned with my flick gesture in a way that felt comfortable and intuitive.
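A minimal sketch of that calculation, assuming the FlickGesture struct above and a hand-tuned fudge factor:

```csharp
const float fudgeFactor = 0.01f; // tuned by feel until throws felt right

float ThrowSpeed(FlickGesture flick)
{
    // Screen-space speed of the swipe, in pixels per second.
    float distance = (flick.endPos - flick.startPos).magnitude;
    float speed = distance / flick.duration;

    // Scale screen speed into a comfortable world-space throw speed.
    return speed * fudgeFactor;
}
```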


Getting direction


First try


Translating the direction of the flick into a 3D direction took a little more effort.  The first thing I tried was getting the angle of the swipe, creating a rotation matrix from it, and applying that to the forward vector of the camera.  At the time we were only concerned with the rotation of the stone around y, in other words the yaw.  This approach felt terrible: as a player it was difficult to know where you'd throw your stone based on your flick gesture, which made it completely unintuitive, so it was deemed no good.
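For reference, a sketch of that first attempt (using a quaternion rather than a raw rotation matrix, which amounts to the same thing here):

```csharp
// Rotate the camera's forward vector around y by the swipe angle.
Vector3 FirstTryDirection(FlickGesture flick, Camera cam)
{
    Vector2 delta = flick.endPos - flick.startPos;

    // Angle of the swipe relative to straight up, in degrees.
    float yaw = Mathf.Atan2(delta.x, delta.y) * Mathf.Rad2Deg;

    // Apply that yaw to the camera's forward vector.
    return Quaternion.Euler(0f, yaw, 0f) * cam.transform.forward;
}
```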


Second try


The next attempt was to throw the stone in the direction where the player's finger stopped.  Luckily, Unity's camera has a built-in ScreenPointToRay function, so all I had to do was take the final position of the touch and convert it into a ray.  When I throw the stone, I take the direction of the ray multiplied by the velocity we calculated earlier.
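A sketch of the approach that stuck:

```csharp
// Throw in the direction of the ray under the finger's final position.
Vector3 ThrowVelocity(FlickGesture flick, Camera cam)
{
    Ray ray = cam.ScreenPointToRay(flick.endPos);
    return ray.direction * ThrowSpeed(flick);
}
```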

Initially I was concerned that this approach would feel awkward.  The user could potentially swipe from the bottom left towards the top right, but if she didn't cross the centre of the screen the stone would still fly leftwards.  It turns out I was dead wrong, for two reasons.  First, moves like these are incredibly unlikely; a user who wants to throw a stone to the right isn't likely to start from the bottom left of the screen and move only halfway across.  He's much more likely to start from the centre of the screen and move right.  Second, the control is direct enough that the player is likely to understand it immediately, so he'll never make some sort of awkward flick and be surprised by the result.

So, as you may have guessed, this is the scheme we stuck with.  In fact, it felt so good that I decided to revisit the velocity equation.  I was curious whether using world-space distance would feel better than screen-space distance for determining the velocity of the stone.  Although it turned out badly due to the perspective of our camera, I'll explain how it's done for those who want to try it.


Getting world space distance of the flick


Using our trusty ScreenPointToRay function, we can calculate the exact distance that the player's finger travelled in the game world.  To do this, turn the starting and ending positions of the flick into rays.  Then perform a raycast onto your terrain, or other relevant collider, using each of these rays.  If a raycast hits, you can extract the world-space coordinates of the collision from the RaycastHit object.  Finally, subtract one set of coordinates from the other and you've got a vector telling you the distance travelled in the game world.
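A sketch, assuming your terrain (or whatever else you care about) has a collider:

```csharp
// World-space travel of the flick, or null if either ray misses.
Vector3? WorldTravel(FlickGesture flick, Camera cam)
{
    RaycastHit startHit, endHit;
    bool hitStart = Physics.Raycast(cam.ScreenPointToRay(flick.startPos), out startHit);
    bool hitEnd = Physics.Raycast(cam.ScreenPointToRay(flick.endPos), out endHit);

    if (!hitStart || !hitEnd)
        return null;

    // hit.point holds the world-space coordinates of the collision.
    return endHit.point - startHit.point;
}
```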


Finishing touches


As a finishing touch, you'll probably want to add some flick gesture recognition.  For example, checking the time it took the player to perform the gesture will let you differentiate between a flick and a slow swipe.  If the direction of your flick matters, you may want to check that as well; for example, in our game we only want flicks that travel from bottom to top.  To determine the direction of the flick, I would use atan2 to get the angle of the flick vector and check that it's within your range of acceptable values.
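A sketch of those checks (the thresholds are illustrative, not from our game):

```csharp
bool IsUpwardFlick(FlickGesture flick)
{
    // Anything slower than this counts as a swipe, not a flick.
    const float maxFlickTime = 0.3f;
    if (flick.duration > maxFlickTime)
        return false;

    // Angle of the flick vector; 90 degrees is straight up.
    Vector2 delta = flick.endPos - flick.startPos;
    float angle = Mathf.Atan2(delta.y, delta.x) * Mathf.Rad2Deg;

    // Accept only flicks that travel roughly bottom to top.
    return angle > 45f && angle < 135f;
}
```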

The code I used while prototyping can be found here: https://gist.github.com/3723838.  Please leave any questions or suggestions in the comments.