Friday 19 October 2012

Skipping Stones first footage

Here's the first footage from the project Skipping Stones, which I'm currently dedicating most of my time to:

Friday 12 October 2012

The trouble with abstraction

Abstraction is good


Abstraction hides all the complicated details of a process so that the user focuses on the grand scheme instead of dealing with the nitty-gritty.  For example, to drive an automatic car you don't have to understand how an engine works; you only need to know how to use the pedals and the steering wheel, and how to read the gauges.  If your car is manual you have other nonsense to worry about, like the clutch, and knowing how the transmission works.  That's because an automatic transmission is one level of abstraction higher than a manual transmission, and as a result if your transmission is automatic then the details of how it works are hidden from you.  Furthermore, both of these abstract away the inner workings of the engine, so the driver can focus on the road and his destination rather than how to turn the wheels of the car.  Eventually driving will be abstracted even more and cars will drive themselves, so the only thing you'll have to think about is your destination.

Another example of abstraction is a restaurant.  A client in a restaurant is given a menu of options, tells somebody which ones he wants, and gets to eat them shortly after.  Restaurants abstract away cooking and creating recipes, so you only have to worry about how to get the food from your plate into your mouth.

Instructions in a higher-level programming language abstract away assembly language, which in turn abstracts away machine code.  If you're writing for a VM then there's at least one more step in there too.  This is a good thing, because code looks like this:


instead of this: 
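To make the contrast concrete, here's a hedged, illustrative sketch: one line of C# and, in comments, the rough register-level steps it stands in for.

```csharp
// One readable high-level statement...
static int Total(int price, int tax)
{
    return price + tax;
    // ...versus the rough shape of what actually executes:
    //   load  price -> register A
    //   load  tax   -> register B
    //   add   A, B  -> register A
    //   return A
}
```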


There are lots of benefits to using higher level languages, including readability, productivity, clarity, etc., that you can read about all over the place, but I'm writing an article about the problem with abstraction.

Where abstraction fails


As it turns out, programming is not the same as driving a car.  Cars are tools built with one goal in mind, whereas most programming languages are general-purpose tools that are used for building more tools.  The trouble is that it's easier to design a tool that does one thing really well than it is to design a tool that can do anything, including build more tools.  As a result, high-level languages will get the job done, but they may not do it the way you had hoped.  This isn't actually a problem for most applications, but it can be a pain when writing video games and other performance-critical applications.  All of a sudden, the nitty-gritty details that have been abstracted away become very important.  Another problem arises when the system breaks.  For example, if your car's brakes don't work, you can't fix them unless you understand the underlying system.  Programmers who don't understand how a compiler works will be confused by unfamiliar compiler errors, so they seek out somebody who understands the underlying system.

What to do about it?


There are multiple solutions.  You can build your own system of abstraction, so that you can control and understand every aspect of it.  This is equivalent to building your own car engine, or writing your own game engine in a language that gives you the necessary level of control, such as C or C++.  This is both an enriching and time-consuming experience, and when you finish you'll feel like a real champ.  However this is also a very difficult path, and without the right guidance you may never actually finish, in which case you'll feel like a trash can, question whether you just aren't as good as those who succeeded, go through an existential crisis and move into the woods to find yourself.  This is the bottom-up approach to understanding a system of abstraction.

You could also learn the system from the top down.  In other words, start using an existing system of abstraction, and as you become more comfortable with it, learn ways to coax it into doing what you want.  This has been a common approach for a long time.  For example, when C++ first trickled its way into game development, developers were concerned with making object-oriented design in C++ as performant as C, which is done by avoiding constructing and destroying too many objects.  The problem is that, depending on your code, the compiler may create lots of temporary objects without you realizing it.  That means you have to understand the underlying system of abstraction in order to coax the compiler into not doing that, which I think is a great example of a top-down approach to learning a system of abstraction.  By the way, this problem still exists, and in garbage-collected languages it can have a huge effect on performance.  As with the bottom-up approach, the more you understand the underlying system the more productive you'll be, but luckily the path to righteousness won't be quite as difficult.  Furthermore, the knowledge you gain from learning this system can be used later if you choose to build your own from the bottom up.
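To make that concrete, here's a small hedged C# example (the garbage-collected analogue of the C++ temporaries problem): an innocent-looking loop that quietly allocates a new object on every iteration.

```csharp
using System.Collections.Generic;

public static class HiddenGarbage
{
    // Looks harmless, but every iteration allocates a brand new string
    // (plus the string produced by converting the int), all of which the
    // garbage collector later has to clean up.
    public static string BuildLabel(List<int> scores)
    {
        string label = "";
        foreach (int score in scores)
        {
            label += score + ", "; // one or more fresh allocations per loop
        }
        return label;
    }
}
```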

You could do nothing.  That is, if you're lucky you'll find the right tools for the job and you won't have a problem, like a steering wheel which is perfectly suited to steer a car.

Why am I writing this?


I think it's important for creators to understand their tools, rather than view them as a mysterious black box.  Doing so will make you more effective when using them, and will help you avoid helplessness and frustration when something goes wrong.

In my experience no system is perfect, and this post represents my experience in dealing with a world of broken systems.

Friday 28 September 2012

Some more Unity editor scripting goodies

Last post I showed you how to make some boring additions to the editor; this post I'll step it up to slightly less than boring.

As I mentioned in a previous post, our game has triggers, and when you hit a trigger, sound happens.  Normally you might make these triggers into a prefab, and every time you wanted to make a new one you'd create a new instance of the prefab, attach your sound, place the trigger, and adjust the collider.  In order to simplify this process, I created a drag & drop area so you can drop in a sound file and it sets everything up such that you need only adjust the horizontal position and radius of the trigger.


Here is the example code for a drag and drop GUI in the inspector.  Luckily, it's not very complicated.  First, we create a new Rect, which will serve as our drag and drop area; in this example it's 50 units tall and expands to the width of the inspector.  Then we capture the current Event with Event.current.  This will tell us if the user is performing a drag operation.  There are a lot of EventTypes, but we're only concerned with DragPerform and DragUpdated.
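A hedged sketch of that block, as it might appear inside OnInspectorGUI of an editor script (the drop-area label is illustrative):

```csharp
Rect dropArea = GUILayoutUtility.GetRect(0f, 50f, GUILayout.ExpandWidth(true));
GUI.Box(dropArea, "Drag an AudioClip here");

Event evt = Event.current;
switch (evt.type)
{
    case EventType.DragUpdated:
    case EventType.DragPerform:
        // Ignore drags that aren't over our drop area.
        if (!dropArea.Contains(evt.mousePosition))
            break;

        // Change the cursor so the user knows this is a valid drop target.
        DragAndDrop.visualMode = DragAndDropVisualMode.Copy;

        if (evt.type == EventType.DragPerform)
        {
            DragAndDrop.AcceptDrag();
            // DragAndDrop.objectReferences now holds the dropped objects.
        }
        break;
}
```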

At this point we check if the user's drag is inside our Rect, otherwise we can ignore it.  This is done by calling Contains() on our Rect and passing it the mouse position.  Next I set the DragAndDrop.visualMode to DragAndDropVisualMode.Copy.  This little touch gives the user visual feedback that he's within the drag area bounds by changing the mouse cursor.  Finally, if the event is a DragPerform, as opposed to DragUpdated, then we call DragAndDrop.AcceptDrag(), which is probably important for something.

So now let's do something with our newly dragged object.
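A hedged sketch of that addition, slotted into the DragPerform branch above; the prefab path and the container field (a reference to our TriggerContainer) are assumptions to adapt to your own project:

```csharp
if (evt.type == EventType.DragPerform)
{
    DragAndDrop.AcceptDrag();

    foreach (Object dragged in DragAndDrop.objectReferences)
    {
        AudioClip clip = dragged as AudioClip;
        if (clip == null)
            continue;

        // Hypothetical path; adjust to wherever your Trigger prefab lives.
        GameObject prefab = (GameObject)AssetDatabase.LoadAssetAtPath(
            "Assets/Prefabs/Trigger.prefab", typeof(GameObject));

        // New Trigger, positioned at and parented to our TriggerContainer.
        GameObject trigger = (GameObject)Instantiate(prefab);
        trigger.transform.position = container.transform.position;
        trigger.transform.parent = container.transform;

        // Hook the dragged clip up to the trigger's AudioSource.
        trigger.GetComponent<AudioSource>().clip = clip;
    }
}
```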

In this snippet I added a block of code to our switch statement.  The idea is that for each object that the user dropped, we check if it's an audio clip.  If it is, we create a new Trigger, make it a child of our TriggerContainer, set its position to that of our container, and set its audio clip to the one dragged in.  To do this, first I load the prefab using AssetDatabase.LoadAssetAtPath() and pass it the path to our trigger relative to the root of the project.  Then, we simply Instantiate() a new one, set its position to that of our container, and set its parent to our container.  As you may notice, setting the parent/child relationship is done through the transform object, which isn't necessarily obvious.  Finally, I finish it off by grabbing the AudioSource object and setting its clip to the one that was dragged in.

When you're done, it should look something like this:

and dragging in new AudioClips will create new Triggers:

So there it is.  I covered making a drag and drop GUI, loading assets from an editor script, and instantiating new parent/child objects.

Friday 21 September 2012

How to (start) writing a custom inspector in Unity

Unity's editor extension documentation is quite sparse, so I'm sharing some of what I did this week to help confused developers.

Here is more or less a template for starting a custom inspector script:
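A minimal sketch of such a template, assuming a Trigger MonoBehaviour with a public int called radius:

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(Trigger))]
public class TriggerEditor : Editor
{
    SerializedObject serializedTrigger;
    SerializedProperty radius;

    void OnEnable()
    {
        // 'target' is the Trigger instance currently shown in the inspector.
        serializedTrigger = new SerializedObject(target);
        radius = serializedTrigger.FindProperty("radius");
    }

    public override void OnInspectorGUI()
    {
        // Filled in below.
    }
}
```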

What we have here is an inspector for a MonoBehaviour of type Trigger, as denoted by [CustomEditor(typeof(Trigger))], which you should adapt for your own objects.  Editor scripts inherit from the Editor class and have an OnEnable function, which works similarly to Start in a normal MonoBehaviour.

In the beginning


The first thing you'll notice is I convert target into a SerializedObject.  The reason you want to do this is explained in this video, which you'll probably want to watch anyway to learn more about editor scripting.  After that you'll probably want access to the variables in your monobehaviour.  My triggers are round, so they have a public int called radius, and I get access to it by calling FindProperty("radius") and assigning it to a SerializedProperty.

Showing stuff


Now you can access all the properties of your object, so you're going to want to show them to the user.  So let's take this party to the OnInspectorGUI() function.  First of all, you're going to want to call Update() on your serialized object.  You do this because of reasons.  Seriously, I don't know what this does, but apparently it's important even though nobody thought to mention it anywhere.  Next I added a label for posterity, and a property field.  PropertyField displays the default inspector control for the given property type, which in this case is an int.  Finish by calling ApplyModifiedProperties() on your object.  This applies all the changes you made, but also gives you the ability to undo and other goodies, which is clear from the documentation, right?  Anyway, at this point you've duplicated exactly what the default inspector view gives you, good work!  By the way, in case you actually wanted to draw the default inspector, you can call DrawDefaultInspector() as well.
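Putting that together, the OnInspectorGUI body of the hypothetical TriggerEditor above might look roughly like this:

```csharp
public override void OnInspectorGUI()
{
    // Pull the latest values from the inspected object into the serialized copy.
    serializedTrigger.Update();

    EditorGUILayout.LabelField("Trigger Settings");
    EditorGUILayout.PropertyField(radius);

    // Write any edits back to the object, with undo support.
    serializedTrigger.ApplyModifiedProperties();

    // Or, if you just want the stock view: DrawDefaultInspector();
}
```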

Showing cooler stuff


Let's make something more interesting, like a dropdown.  I gave my Triggers a priority variable, which is an int, but sometimes it's hard to remember whether a low number means low priority or high priority.  So instead I added a dropdown with the descriptive choices low, medium, and high.
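Continuing the hypothetical TriggerEditor sketch from above, the extra bits might look roughly like this (the "priority" property name and the label text are assumptions):

```csharp
// Added to the TriggerEditor sketch above.
SerializedProperty priority; // assigned in OnEnable with FindProperty("priority")
readonly string[] priorityNames = { "Low", "Medium", "High" };

public override void OnInspectorGUI()
{
    serializedTrigger.Update();

    EditorGUILayout.PropertyField(radius);

    // Draw the label and the dropdown on one line.
    EditorGUILayout.BeginHorizontal();
    GUILayout.Label("Priority");
    priority.intValue = EditorGUILayout.Popup(priority.intValue, priorityNames);
    EditorGUILayout.EndHorizontal();

    serializedTrigger.ApplyModifiedProperties();
}
```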

It's the same code as before, but with an added property, a list of words to fill the dropdown, and the code to draw it.  The dropdown, or "Popup", sits inside a horizontal layout block, as denoted by BeginHorizontal() and EndHorizontal().  Basically all this does is put all the GUI elements in the block next to each other.  The Popup() function is pretty straightforward: give it the current index and an array of strings to fill itself with, and it returns the selected index.  When you're done, it should look something like this:


So now you've gotten a taste of building a custom inspector.  Later I'll post about how to make a drag and drop area and other goodies, but if you want to get ahead you should watch this video.

My experience with Unity Editor Scripting

Once upon a time I decided I was going to extend the Unity editor in order to make the most awesomest adventure game maker of all time.  After a few weeks of struggling through Unity's extremely sparse editor extension documentation, I was able to make a barely useable turd of a tool.  At this point I decided it would be easier to make an in-game editor that could save and load level data.  Admittedly, this was pretty cool because anybody could tweak the levels while playing, but part of the exercise was to explore the editor scripting functionality, which was a great big failure.

This week, however, I discovered the Unite sessions, including intro to editor scripting and advanced editor scripting, which are a blessing for a budding editor scripter.  Working with the Unity editor used to feel like this

But after watching those videos it feels like this

So, big thanks to Shawn White, Tim Cooper and Yilmaz Kiymaz.

Friday 14 September 2012

Intuitive flick gestures in Unity


One thing I implemented was flick gestures for iOS in Unity.  This turned out to be a little trickier than anticipated, because it must be carefully designed to feel good.  In our project, flicking is the primary method of interaction, so it's crucial that it's comfortable.


The challenge


Our project has the player throwing objects from a first-person perspective.  We use flick gestures to control the throws, so we have to translate a 2D gesture into a 3D velocity. So to get started, we'll consider what we have to work with.

The 2D gesture has a start position and end position, both in screen coordinates, as well as the time it took to complete the gesture.  In other words, we have the distance, time, and direction of the swipe in screen space.


Getting velocity


The first thing I did was turn the screen-space velocity into a world-space velocity.  To do this, I took the swipe distance divided by the swipe time and multiplied it by a fudge value.  Although this strikes me as a fairly naive approach, it turned out to be surprisingly effective.  To finish, I spent some time tweaking the fudge value until the velocity of the thrown stones aligned with my flick gesture in a way that felt comfortable and intuitive.
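A minimal sketch of that calculation (the names and the fudge value are illustrative):

```csharp
using UnityEngine;

public static class FlickVelocity
{
    // Screen-space flick speed scaled by a hand-tuned fudge factor.
    public static float ThrowSpeed(Vector2 startPos, Vector2 endPos,
                                   float swipeTime, float fudgeFactor)
    {
        float swipeDistance = (endPos - startPos).magnitude; // pixels
        return (swipeDistance / swipeTime) * fudgeFactor;
    }
}
```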


Getting direction


First try


Translating the direction of the flick into a 3D direction took a little more effort.  The first thing I tried was getting the angle of the swipe, creating a rotation matrix, and applying it to the forward vector of the camera. At the time we were only concerned with the rotation of the stone around y, in other words the yaw.  This approach felt terrible.  As a player it was difficult to know where you'd throw your stone based on your flick gesture, which means it was completely unintuitive and deemed no good.
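For reference, the rejected approach looked roughly like this (a hedged sketch using a quaternion in place of an explicit rotation matrix; swipeDelta is a placeholder for the flick vector):

```csharp
// Rotate the camera's forward vector around y by the swipe angle (yaw only).
float swipeAngle = Mathf.Atan2(swipeDelta.y, swipeDelta.x) * Mathf.Rad2Deg;
Vector3 throwDirection = Quaternion.AngleAxis(swipeAngle, Vector3.up)
                         * Camera.main.transform.forward;
```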


Second try


The next attempt was to throw the stone in the direction that the player's finger stopped.  Luckily, Unity has a function ScreenPointToRay built into the camera, so all I had to do was take the final position of the touch and convert it into a ray.  So when I throw the stone, I take the direction of the ray multiplied by the velocity we calculated earlier.
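A hedged sketch of that, reusing the ThrowSpeed helper from earlier (touchEndPosition and stoneRigidbody are placeholders):

```csharp
// Throw along the ray under the finger's final position.
Ray ray = Camera.main.ScreenPointToRay(touchEndPosition);
Vector3 throwVelocity = ray.direction * throwSpeed;
stoneRigidbody.velocity = throwVelocity;
```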

Initially I was concerned that this approach would feel awkward.  The user could potentially swipe from the bottom left towards the top right, but if she didn't cross the centre of the screen the stone would still fly leftwards.  It turns out I was dead wrong for two reasons.  First, moves like these are incredibly unlikely.  For example, a user who wants to throw a stone right isn't likely to start from the bottom left of the screen and move only halfway across.  He's much more likely to start from the center of the screen and move right.  Second, control is so good that the player is likely to understand it immediately.  Thus he'll never make some sort of awkward flick and be surprised by the result.

So, as you may have guessed, this is the scheme we stuck with.  In fact, it felt so good that I decided to revisit the velocity equation.  I was curious whether using world-space distance would feel better than screen-space distance for determining the velocity of the stone.  Although it turned out badly due to the perspective of our camera, I'll explain how this is done for those who want to try it.


Getting world space distance of the flick


Using our trusty ScreenPointToRay function, we can calculate the exact distance that the player's finger travelled in the game world.  To do this, turn the starting position and ending position of the flick into rays.  Then, perform a raycast onto your terrain, or other relevant collider, using each of these rays.  If the raycast hits, you can extract the world space coordinates of the collision from the RaycastHit object.  Finally, subtract one set of coordinates from the other one and you've got a vector that'll tell you the distance travelled in the game world.
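A sketch of that, assuming the terrain (or other relevant collider) can be raycast against, with flickStartPos and flickEndPos as placeholders for the touch positions:

```csharp
// World-space distance covered by the flick.
Ray startRay = Camera.main.ScreenPointToRay(flickStartPos);
Ray endRay   = Camera.main.ScreenPointToRay(flickEndPos);

RaycastHit startHit, endHit;
if (Physics.Raycast(startRay, out startHit) && Physics.Raycast(endRay, out endHit))
{
    Vector3 worldDelta = endHit.point - startHit.point;
    float worldDistance = worldDelta.magnitude; // distance travelled in the game world
}
```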


Finishing touches


As a finishing touch you'll probably want to add some flick gesture recognition.  For example, checking the time it took the player to perform the gesture will allow you to differentiate between a flick and a slow swipe.  If the direction of your flick matters you may want to check that as well.  For example, in our game we only want flicks that travel from bottom to top.  To determine the direction of the flick, I would use atan2 to get the angle of the flick vector and decide if it's within your range of acceptable values.
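A small hedged example of both checks (the thresholds are placeholders to tune for your game):

```csharp
// Accept only quick, roughly bottom-to-top flicks.
bool IsUpwardFlick(Vector2 flickDelta, float swipeTime)
{
    if (swipeTime > 0.25f)
        return false; // too slow: a drag, not a flick

    float angle = Mathf.Atan2(flickDelta.y, flickDelta.x) * Mathf.Rad2Deg;
    return angle > 45f && angle < 135f; // pointing mostly upward on screen
}
```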

The code I used while prototyping can be found here: https://gist.github.com/3723838.  Please leave any questions or suggestions in the comments.

Monday 27 August 2012

Embedding Pure Data into a Unity iOS App

UPDATE: I wrote a sample project here.  It uses the most up-to-date version of libPd as of 03/08/2013.

So you're making a Unity-based iOS game, and you want to use Pure Data as your audio engine? Great! Let me show you how it's done.  Before starting, I suggest you read and understand this post on embedding libPd into an iOS app.

 

Step one: Create a wrapper for PdBase


The first thing you'll have to do is create a native wrapper for PdBase.  As mentioned in my previous post, PdBase is the main interface between your application and libPd.  So essentially what you're doing here is creating a way to send messages to PdBase, which in turn sends messages to your Pure Data patches.  You can find my example wrapper here; however, be warned that this is just a proof of concept: it is not fully tested, nor is it production-ready code.  I'll explain how these wrapper functions work with an example:

This function is used to send a symbol to a receiver in your Pd patch.  That means it needs to know two things: the symbol you're sending and the name of the receiver.  However, you'll notice the function takes four arguments.  That's because both symbols and receivers are strings of characters, and the most reliable way for our native C code to know when a string sent from Unity ends is to just tell it the length.  In other words, symLength is the length of the symbol string in bytes, and recLength is the length of the receiver string in bytes.  The reason we do it this way is that a C string is simply a pointer to the memory location of the first character of the string, whereas a Unity string is an object that contains the string and more.

The first thing you do with each of these strings is convert them into NSStrings, because that's what PdBase expects.  To do so, initialize a new NSString object with the data sent from Unity, and make sure to specify the string is encoded as NSUTF16LittleEndianStringEncoding because that's the encoding that Unity uses.

Next, you call the appropriate PdBase function with our newly created strings, and release those strings because you're done with them.

Step two: Create an interface in Unity


Now that you've wrapped all the PdBase functions you're going to use, it's time to interface with your wrapper from Unity.  My example interface can be seen here.  There isn't much to explain; this code can be used more or less as-is, but I have a few notes to mention.
First, lists of arguments are sent as one long string separated by colons.  For example, "arg1:arg2:arg3".
Second, I append %f to float arguments which are part of a list.  This is so our native interface can differentiate between float and string arguments.  I did this while debugging a particular issue, but I'm fairly sure it's unnecessary.
Finally, you'll notice that when I pass the length of a string to the native code, I multiply it by two.  This is because our C code is interested in the number of bytes in a string, not the number of characters, and since Unity strings are UTF-16 encoded, every character is made up of two bytes.
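To illustrate, here's a hedged C# sketch of the Unity side of that interface; the native function name, the UTF-16 marshalling attribute, and the argument-packing helper are assumptions based on the notes above, not the exact code from the example interface:

```csharp
using System.Runtime.InteropServices;

public static class PdInterface
{
    // Hypothetical native wrapper function: symbol, symbol length in bytes,
    // receiver name, receiver length in bytes.
    [DllImport("__Internal")]
    private static extern void _pdSendSymbol(
        [MarshalAs(UnmanagedType.LPWStr)] string symbol, int symLength,
        [MarshalAs(UnmanagedType.LPWStr)] string receiver, int recLength);

    public static void SendSymbol(string symbol, string receiver)
    {
        // Unity strings are UTF-16, so the byte length is twice the character count.
        _pdSendSymbol(symbol, symbol.Length * 2, receiver, receiver.Length * 2);
    }

    // Lists of arguments are packed as one colon-separated string, e.g. "arg1:arg2:arg3",
    // with "%f" appended to float arguments so the native side can tell them apart.
    public static string PackArgs(object[] args)
    {
        string[] parts = new string[args.Length];
        for (int i = 0; i < args.Length; i++)
            parts[i] = (args[i] is float) ? args[i] + "%f" : args[i].ToString();
        return string.Join(":", parts);
    }
}
```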

Step three: Embed Pd into an iOS App


When you build a Unity project for iOS, Unity creates a new XCode project with your game embedded in it.  At this point you'll embed Pd into your project, the process of which is described in this post.  Then you'll want to add your PdBase wrapper, the first file we created, to this project.

Step four: Dance Party!


At this point you should have a Unity project and libPd embedded into a single application running on your iOS device.  Try calling the functions from your Unity-to-native-code-interface and see if your patches react.  If they do, great!  Otherwise, leave a note in the comments so we can deduce what went wrong.

My DIY Controller at Montreal Maker Faire

I spent the weekend demoing my Play-Doh controller at Montreal's first-ever Maker Faire!  Check out the photos:




Embedding Pure Data into an iOS App

So you want to embed Pure Data into an iOS App?  This process is both very simple, and frustratingly complicated, depending on your Pure Data needs.  However you’re in luck, because I’ve run into (almost) every possible issue, and I’m counting on you, the reader, to bring any new issues to me so I can deal with those too.

Requirements

  • XCode
  • an iOS Device
  • Git

Step one: Get libPd for iOS

The first step is grabbing libPd from GitHub.  For more information on libPd, refer to this post.  This is most easily done by cloning the pd-for-ios repository with git.  Afterwards, follow the installation instructions on the pd-for-ios GitHub page.
At the time of writing, these instructions are:

‘After cloning the pd-for-ios repository, please make sure to cd into the
pd-for-ios folder and say
 git submodule init
 git submodule update
These two commands install the dependencies from libpd.  After the initial
setup, say
 git pull
 git submodule update
whenever you want to sync with the GitHub repositories.’

Step two: Compile the sample projects


The first thing you’re going to want to test is whether or not you can compile the sample project.  So, open up your newly acquired test project from the repository you just cloned, which is aptly named PdTest01, and try to compile and run.  If that worked, you have my permission to give yourself a nice pat on the back before moving on.
Open up the next sample project, PdTest02.  This project has a more complicated structure because libPd is a sub-project inside of PdTest02.  This means that you can update or modify libPd as much as you want, and it'll be recompiled every time you compile the PdTest02 master project.  This is the same structure that your final iOS application should adopt.  Make sure that you select the PdTest02 scheme (pictured below), then compile and run.



Step three: Add libPd as a sub-project to your project

The next thing you'll do is add libPd as a sub-project to your own project.  I assume you've already created your project.  Open up pd-for-ios > libpd in Finder.  Here you’ll see a libpd.xcodeproj file.  Drag this file into your project explorer.  It should look something like this:


Step four: Add libPd as a dependency to your project

Now select your project from the project explorer in XCode, select your app under Targets, and select the Build Phase tab.  

Here you're going to add libPd as a dependency for your project, as well as link libPd to your project.  Expand ‘Target Dependencies’ and click ‘+’ to add the libpd-ios dependency.  Next, expand ‘Link Binary With Libraries’ and click ‘+’ to add libpd-ios.a to link the library with your app.  When you're done, it should look something like this:


 Now when you compile your project, it will compile libpd-ios first and embed the library into your compiled project.

Step five: Compile and party!

Hurray! You’ve successfully embedded Pure Data into your iOS app.  Unfortunately, this is only half the battle.  There are two significant issues left to address.

How do I use libPd?

I recommend you explore the sample projects to get an idea of how to use libPd.  They will show you how to initialize Pure Data and open a Pd patch.  Also, PdTest02 shows how to set up the AppDelegate as a PdReceiverDelegate, which will allow you to receive messages from Pd.  If you want to learn more about interacting with libPd, explore the libPd source, with PdBase being especially useful.

How do I add extras?

Vanilla Pure Data contains only the minimal set of objects it needs to run, and this applies to libPd as well.  As a result, libPd is as lean as possible, which makes it perfect for embedding.  However, most (all) Pure Data programmers use objects that aren’t part of vanilla Pd/libPd.  The good news is that libPd supports these objects.  The bad news is that this can be a huge source of frustration.


PD Objects come in two formats: .PD files and .c files.

.Pd Files

FX is an audio programmer I work with.  FX uses Pd-Extended to create patches.  Pd-Extended contains a lot of extra objects that FX depends on, for example tof/sample_granule~ and tof/sample_play~.  In order to accommodate FX, I had to add these objects to libPd, and here’s how.

First, download and install the Pd-Extended application from the Pure Data website.  Right-click the app, and select “Show Package Contents”.  Under Contents > Resources you’ll find a folder called extra.  This is where all the fancy “Extended” objects are hiding.  

Now copy the “tof” folder (or whatever objects you’re looking for) into your project folder.  Pure Data expects a certain folder structure.  What this means is when you create a “tof/sample_play~” object in your PD patch, it expects “sample_play~” to be contained in the folder “tof”.  However, by default when you create an iOS app, all your resources get dumped into the resources directory, and don’t retain any folder structure.  To get around this, when you drag the “tof” folder into your project explorer in XCode, select the option “Create folder references for any added folders”.  

Congratulations! You can now use all the tof objects with your embedded libPd application. (Note: I haven’t been able to make sample_shifft~ work, if anybody has any information on this, please let me know).

.c Files

C extras are easy to add.  Simply add the desired .c files to your project; this can be done by dragging and dropping them into your project explorer in XCode, preferably into a group with a descriptive name like libpd extras.

Next, you’re going to initialize the new object before you use it.  Let’s pretend you’re trying to add the object lrshift~.  Every Pd object has a setup function called [name_of_object]_setup().  For example, lrshift~ has a setup function lrshift_tilde_setup().  You’re going to call this function when you initialize Pure Data; however, you have to tell the compiler that it exists first.  Open your AppDelegate.m file, and somewhere before your function definitions add extern void lrshift_tilde_setup(void);.  Now, before you initialize Pure Data, call lrshift_tilde_setup();.  Do this for any other C objects that you’re using.

Note about extras

I’ve noticed that Pure Data sometimes expects these objects to be contained in a folder.  For example, the PD file may have a line that says “cyclone/lrshift~”.  The problem is that any extras added to libPd with this method essentially become one with the base Pd library, so Pd will fail to initialize them.  To fix this, I simply run a search and replace on all my PD files, and replace any offending lines.  For example “cyclone/lrshift~” becomes “lrshift~”.  I recognize that this may not be the most convenient method and I’m listening for suggestions on how to fix it.

Where do I find extras?

All the extra objects included in Pd-Extended can be found inside the Pd-Extended app itself, in the Contents > Resources > extra folder described above.

On libPd

What is libPd?

libPD is the same Pure Data you know and love, but without a fancy UI.

What does that mean?

That means that libPD can do everything that Pure Data does, other than build patches.  That also means that libPD can be embedded into any software, which supercharges your application with Pure Data power.  Some ideas are:
  • games
  • audio software
  • browser-based applications (ie. Chrome apps, Light Table)

That's cool, but why bother?

The most obvious reason is that audio programmers can build audio patches in a way that’s comfortable to them.  That means less time is spent learning new tools.  
It also means that all Pure Data’s resources are available to you.
Most of all, it means that you have the power (and responsibility) to subvert massively overpriced audio software by building your own software with the power of pure data.

Wednesday 22 August 2012

DIY DDR Controller

I built this DIY Controller out of Play-Doh, hooked up to a Teensy++.  Touching the play-doh sends a signal to the Teensy, which acts as a USB Keyboard/Gamepad/Joystick plugged into a laptop.  The picture shows the play-doh controlling Stepmania, an open-source and cross-platform version of DDR.

Wednesday 30 May 2012

Lemaitre is excellent

I present to you Lemaitre, an electronic duo from Oslo.


Wednesday 16 May 2012

Thank you Moai

Are you looking for a cross-platform, open-source C++/Lua game engine?  I was, and I found it.  It's called Moai.  Thank you Moai.

Tuesday 1 May 2012

This struck a chord

"In science if you know what you are doing you should not be doing it.
In engineering if you do not know what you are doing you should not be doing it.
Of course, you seldom, if ever, see either pure state."

—Richard Hamming, The Art of Doing Science and Engineering

Monday 30 April 2012

50 minutes of concentrated wisdom

This talk, given by Richard Hamming circa 1986,  explains how one can become a first class scientist and why others who are so close to greatness never succeed.


Friday 13 April 2012