Saturday 29 October 2011

Terrainicles [WebGL & HaXe]



I have been playing with this thing, tweaking it and making changes for weeks. There are so many different things I want to add: different options, scenarios, optimisations and so on. I decided, however, to follow the 'release early and often' mantra and get it out now.

Before I go any further, check out what I'm talking about here:

http://mikecann.co.uk/projects/WebGLTerrainicles/
(You will need a WebGL-compatible browser, which means no IE)


It's a continuation of my earlier work on GPU particles using WebGL and haXe. I'm trying to emulate some work I did years ago in XNA on the LieroXNA project.

It uses the same techniques as my previous post for updating and rendering particles entirely on the GPU. This means it is possible to have millions of particles interacting, updating and rendering simultaneously, as all the operations are performed on the GPU.

What I have added this time is another texture for the particles to collide with as they move. I was originally working with the same dirt and grass texture as my XNA project, but as it was Halloween I thought I would get into the spirit a little ;)

There are several options on the right-hand side that can be used to tweak the properties of the simulation. I spent a lot of time playing around with these; there are some really cool effects you can achieve with just simple modifications.

There are so many things I could add to improve this. You can see some of them in a video I made years ago:



There we have some cool stuff going on, like bloom, forces and multiple rendering layers. It would be nice to get those into this sample too.

For now, however, I think I'm going to take a break from this sample. I have spent quite a few weeks getting to this point, and I need a break for a little bit so I can work on other things. I may come back to it soon though, if people are interested or if (more likely) I think of some 'cool new thing' that will 'only take 5 mins'.

I have uploaded the source for this sample to GitHub for people to look at or fork if they wish:

https://github.com/mikecann/WebGLTerrainicles

Enjoy!


Friday 21 October 2011

Why Developing for WebGL Sucks!



For some time now I have been working with WebGL and have developed a sort of love/hate relationship with it. I love the ability to instantly target millions of people with GPU-accelerated code without any plugins or barriers (excluding the targets that don't support it). As a developer, however, writing code that takes advantage of WebGL kinda sucks.

Procedural Based


First off is the way you have to structure your GL calls. For example, take a look at the following generic bit of WebGL harvested from the net:

[codesyntax lang="javascript" lines="normal"]
texture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 64, 64, 0,
              gl.RGB, gl.FLOAT, new Float32Array(pix));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

texture1 = gl.createTexture();
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, 64, 64, 0,
              gl.RGB, gl.FLOAT, new Float32Array(pix1));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

FBO = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, FBO);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture, 0);

FBO1 = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, FBO1);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture1, 0);

if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) != gl.FRAMEBUFFER_COMPLETE)
    alert("FLOAT is not supported as the color attachment to an FBO");

[/codesyntax]

All it does is create a couple of textures, set their starting values and create two framebuffers for rendering to. Yet it looks complicated and is difficult to understand.

GL works on a procedural basis: you tell GL that you are about to work on something by calling a function like bindTexture(), then on the next line you perform an operation on it, such as pixelStorei(). This may have made perfect sense back when we were writing everything in C, which is procedural anyway, but this is JavaScript (or haXe in my case), an object-based language, and code like this is difficult to understand and follow.

The procedural nature of WebGL means you have to be much more careful about unsetting things you have previously set. For example, if you bind a texture to perform some operation, you must then remember to unbind it, or you could inadvertently cause operations to be applied to it by subsequent calls elsewhere in your codebase. It is this 'hidden state' that has caused me a lot of headaches when developing my samples.
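To make the hidden-state problem concrete, here is an illustrative sketch in plain JavaScript. The `gl` here is a made-up mock (a real context needs a browser), but it mimics the bind-then-operate pattern: a forgotten unbind lets a later call silently mutate the wrong texture.

```javascript
// Mock of WebGL's bind-then-operate pattern: operations apply to whatever
// texture happens to be currently bound, not to an explicit argument.
function makeMockGL() {
  const gl = {
    TEXTURE_2D: 0x0DE1,
    _bound: null,
    createTexture() { return { params: {} }; },
    bindTexture(target, tex) { gl._bound = tex; },
    texParameteri(target, pname, value) {
      // Mutates hidden state: whichever texture is bound right now.
      gl._bound.params[pname] = value;
    },
  };
  return gl;
}

const gl = makeMockGL();
const particleTex = gl.createTexture();
const terrainTex = gl.createTexture();

gl.bindTexture(gl.TEXTURE_2D, particleTex);
gl.texParameteri(gl.TEXTURE_2D, "MIN_FILTER", "NEAREST"); // intended for particleTex
// ...forgot to unbind; much later, code elsewhere assumes terrainTex is bound...
gl.texParameteri(gl.TEXTURE_2D, "MAG_FILTER", "LINEAR");  // meant for terrainTex!

console.log(particleTex.params); // both settings silently landed here
console.log(terrainTex.params);  // {} - never touched
```

Nothing errors, nothing warns; the bug only shows up later when `terrainTex` renders with the wrong filtering.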

The principle behind WebGL was to provide a very low-level library upon which other developers can build more complex and abstracted libraries. And there are numerous libraries out there; I personally have tried several of them, including the very popular three.js. Three.js is great for doing common things like loading models and putting quads on the screen. I, however, encountered a problem with render targets that I struggled with for days before discovering that you had to set "needsUpdate" to true on your texture before using it. In the end I decided to drop three.js because of another issue I encountered, and instead attempted to reduce my complications by working with WebGL directly.

Flash 11's Stage3D has the same philosophy as WebGL: to provide a low-level API for other developers to build libraries upon. The thing is, Flash 11's low-level API makes more sense and is more readable. For example, the following is, to me, much more readable than its WebGL equivalent:

[codesyntax lang="actionscript3" lines="normal"]
texture = c.createTexture(logo.width, logo.height, Context3DTextureFormat.BGRA, false);
texture.uploadFromBitmapData(logo);

[/codesyntax]

The Stage3D API also uses language like "upload" to let you know when you are transferring data to the GPU; a newcomer to GL has no clue when things are going to the GPU. It's small things like this that reduce the "WTF?" factor when tackling the tricky world of hardware-accelerated 3D programming.

Cross-domain textures


This one cropped up around July this year and took me ages to work out. For some inexplicable reason (or so it seemed), my code one day stopped working. Demo code online all worked fine, yet when I downloaded it and ran it locally it didn't work either. I was getting errors like the following:

Uncaught Error: SECURITY_ERR: DOM Exception 18
Uncaught Error: SECURITY_ERR: DOM Exception 18
Uncaught Error: SECURITY_ERR: DOM Exception 18
Uncaught Error: SECURITY_ERR: DOM Exception 18
Uncaught Error: SECURITY_ERR: DOM Exception 18 

I was so baffled that I posted about it on the haXe mailing list asking for help, thinking it was something I was doing wrong with haXe. It turns out (after much wall-head-butting) that this was a change introduced in Chrome 13 and Firefox 5 to combat a security problem with shaders that use textures from a different domain to the one running the code:

http://blog.chromium.org/2011/07/using-cross-domain-images-in-webgl-and.html

Now, I have no problem with cross-domain restrictions; I'm used to them from Flash, where we have the same sort of setPixel() restrictions on cross-domain BitmapData objects. The thing is, this restriction applies when running code locally too. So if you are developing something on your local machine and trying to access a texture from disk, it throws the same security errors, because the browser thinks you are reaching across domains to access the image.

At the time, the only way around this was to run your own webserver on localhost to serve up the files. So I had to download Python in order to run a simple command-line webserver from my bin directory. What an effort! There may be easier ways to solve it these days, but at the time it really frustrated me and formed yet another barrier to developing WebGL.
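For the record, the one-liner was along these lines (Python 2 was what was around in 2011; the Python 3 equivalent is shown too):

```shell
# Serve the current directory at http://localhost:8000/
python -m SimpleHTTPServer 8000   # Python 2
python3 -m http.server 8000       # Python 3 equivalent
```

Run it from the folder containing your HTML and textures, and the browser no longer treats the files as cross-domain.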

No Error Messages


This is by far the most annoying thing about developing for WebGL. So many times I have been trying to write something that I know SHOULD work, but for some reason it doesn't. I don't get any error messages, nothing. It makes writing something from scratch nigh on impossible.

In my last post, "GPU State Preserving Particle Systems with WebGL & HaXe", I started with an idea and attempted to code it 'bottom-up': start with nothing, then add more code until I reached what I wanted. Unfortunately, having no error messages in WebGL makes this very difficult indeed. I would spend time writing something really simple, like trying to get a textured quad to render on the screen, only to get nothing. I would double-check my camera matrices, my vertex and texture buffers, my shader, and still nothing. Eventually I would find that I hadn't bound something before trying to operate on it *sigh*.

In the end I found the best way to get anywhere is to go from the other direction, a 'top-down' method: start with something similar to what you want, then cut bits out one line at a time until you get there. It's extremely time-consuming and frustrating, but less frustrating than going the other way.



There are tools out there that help with debugging what is going wrong. Namely, the WebGL Inspector (see above) is intended to provide gDEBugger / PIX-like debugging information about what is going on inside WebGL. It's a clever bit of tech: it lets you inspect buffers and traces each GL call. However, it suffers from the same underlying problem of having no errors. Set up a buffer incorrectly and all you get is "INVALID_VALUE", with no indication as to which of the values is invalid or which part of the call you messed up :(
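One low-tech mitigation is to wrap the context so that every call is immediately followed by getError(), turning silent failures into exceptions at the offending line. Here is a sketch of the idea, demonstrated against a hypothetical stub context (a real one needs a browser); the stub and its behaviour are made up for illustration:

```javascript
// Wrap a GL-like object so every function call is followed by getError();
// throws at the offending call instead of failing silently much later.
function makeDebugContext(gl) {
  const wrapped = {};
  for (const key in gl) {
    const value = gl[key];
    if (typeof value !== "function") { wrapped[key] = value; continue; }
    if (key === "getError") { wrapped[key] = value.bind(gl); continue; }
    wrapped[key] = function (...args) {
      const result = value.apply(gl, args);
      const err = gl.getError();
      if (err !== gl.NO_ERROR) {
        throw new Error(key + " failed with GL error 0x" + err.toString(16));
      }
      return result;
    };
  }
  return wrapped;
}

// Stub context standing in for the real thing:
const stubGL = {
  NO_ERROR: 0,
  INVALID_VALUE: 0x0501,
  _err: 0,
  getError() { const e = this._err; this._err = 0; return e; },
  // Simulates a call that flags INVALID_VALUE on a bad argument:
  texImage2D(width) { if (width < 0) this._err = this.INVALID_VALUE; },
};

const dgl = makeDebugContext(stubGL);
dgl.texImage2D(64);   // fine, no error
try {
  dgl.texImage2D(-1); // bad value: throws right here, naming the call
} catch (e) {
  console.log(e.message);
}
```

It still can't tell you *which* argument was invalid, but at least the failure is pinned to the exact call rather than surfacing as a blank canvas three frames later.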

Googling Doesn't Help


If you do happen to get an error message (unlikely), or you word your problem in a sufficiently succinct and googleable way, you then run into the next big problem with WebGL: there are very few people using it. Now, I know I am likely to be flamed for that comment, but it just seems that way to me. Whenever I googled my problem, or googled for what I was trying to achieve (because working bottom-up doesn't work), there would be a very sparse smattering of relevant posts. Usually the results were forum posts, related to OpenGL rather than WebGL, and from 5-10 years ago.

But..


Now, having just ranted on for several hundred words about why it sucks, I'm going to finish off by saying that I'm going to continue to develop for WebGL using haXe regardless. Why? Well, I just like making pretty things that run fast, and GPGPU programming appeals to me for some unknown (likely masochistic) reason.

Thursday 20 October 2011

GPU State Preserving Particle Systems with WebGL & HaXe



Well, this is the post I didn't think was going to happen. I have been struggling for weeks with this little bit of tech; I'll explain more about why it has been so difficult in another post. For now, I'll just talk about this sample.

So the idea was to build upon my previous work on stateless particle systems with WebGL and haXe. The intention from the start was to replicate some of my very early work (from 2007) on state-preserving particle systems in WebGL.

Before I go any further, you can check it out in action here:
http://mikecann.co.uk/projects/HaxeWebGLParticles/ 

First, a quick reminder. The difference between a stateless and a state-preserving particle simulation is that in the latter we store and update the positions, velocities and other properties of each particle every frame, allowing us to interact with and control the simulation. This differs from a stateless particle simulation (detailed in my previous post), where the position of each particle is calculated each frame from a fixed algorithm.

A fairly recent addition to WebGL made this possible, namely texture lookups in the vertex shader (aka Vertex Texture Fetch). I won't go over the exact details of how this makes state-preserving particle systems possible, as I have already documented it in my earlier work. Briefly: it allows you to use the fragment shader to update particle state stored in textures, then use the vertex shader to map those states to a point-sprite vertex buffer.

Basically, this means the entire particle simulation can be contained and updated on the GPU with no read-back, which allows simulations of millions of particles without too much difficulty (depending on the GPU, of course).
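To sketch the scheme without a GPU: the state lives in two buffers that are 'ping-ponged' every frame, just as the sample ping-pongs between two FBO-attached textures. Here the fragment shader's job is played by a plain JavaScript loop; the names and the toy update rule are made up for illustration:

```javascript
// CPU sketch of the GPU ping-pong scheme: each frame we read the particle
// state from one buffer (the "source texture"), write the updated state
// into the other (the "render target"), then swap the two.
const N = 4;                                  // particle count (millions on the GPU)
let read = [], write = [];
for (let i = 0; i < N; i++) {
  read.push({ x: 0, y: 0, vx: 1, vy: i });    // initial state "texture"
  write.push({ x: 0, y: 0, vx: 0, vy: 0 });
}

const GRAVITY = -0.1;

function step() {
  for (let i = 0; i < N; i++) {               // this loop is the "fragment shader"
    const p = read[i];
    write[i] = { x: p.x + p.vx, y: p.y + p.vy, vx: p.vx, vy: p.vy + GRAVITY };
  }
  [read, write] = [write, read];              // swap source and target
}

step();
step();
console.log(read[0]); // state persists frame to frame: { x: 2, y: -0.1, ... }
```

Because the new state is written somewhere other than where it was read from, nothing is ever overwritten mid-update, and the simulation carries its state forward frame after frame without the CPU ever touching it.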

I have uploaded the source for people to peruse at their leisure:
https://github.com/mikecann/HaxeWebGLParticles

As usual, it was written using the JS target of haXe, so it should be fairly easy to understand what's going on if you have written in any ECMAScript-like language. I'll detail this in my next post, but the code isn't the best I have ever written, as it's the result of a mish-mash of various samples and examples I found on the web. If anyone has any comments on things that are wrong or could be done better, I would be very happy to hear them.

Thursday 13 October 2011

Game of Life HaXe & NME on iOS

For the last few days I have been trying to get the Game of Life sample from my previous post working on the iPhone using haXe with NME.

In theory, NME does all the heavy lifting for you, so it should be as simple as running:

[code lang="text"]
haxelib run nme build nmebuild.nmml ios

[/code]

Unfortunately, when I ran this I got rather cryptic errors:

[code lang="text"]
Called from ? line 1
Called from InstallTool.hx line 384
Called from a C function
Called from InstallTool.hx line 70
Called from a C function
Called from installers/InstallerBase.hx line 61
Called from installers/InstallerBase.hx line 668
Called from installers/InstallerBase.hx line 762
Called from haxe/xml/Fast.hx line 59
Uncaught exception - icon is missing attribute name

[/code]

I had read on the NME documentation page that this may have been fixed in more recent versions of NME. So I downloaded the beta version (you could check it out from SVN too if you wish) and told haxelib that I'm going to be working with a development version of NME, with the following command:

[code lang="text"]
haxelib dev nme /Users/mikec/Documents/NME_3.1_Beta

[/code]

Now when I try to build for iOS, I get success!



From there it is a simple matter of opening the generated Xcode project, connecting my iPhone and hitting run:



I really like how easy the workflow is compared to the Adobe AIR packaging system. Generating the Xcode project makes things so much faster.

If I can get my hands on an Android phone next, I think I'm going to have a go at getting this sample working on there too!

Sunday 9 October 2011

Conway's Game of Life in haXe [NME & MassiveUnit]



The second day of try{harder} was dedicated to a single topic: test-driven development (TDD).

The group was split into pairs and given the task of using TDD to write a solver for the Game of Life in AS3. After an hour we threw away everything we had done, swapped partners and repeated the process.

This was extremely valuable for me as I had never written a unit test before. Seeing how different people tackled the same problem was fascinating and informative.

After repeating the process three times, Stray asked if I was interested in teaming up with another attendee of the conference, Alec McEachran, to investigate unit testing in haXe. It was a great idea, as we could both investigate how unit testing worked in haXe, and it would give me another code example for my talk the following day.

After a brief search we decided on Mike Stead's MassiveUnit for testing as the testing syntax looked similar to FlexUnit and it contained a toolchain for running the tests on multiple platforms.

An example of a test we wrote is:

[codesyntax lang="actionscript3" lines="normal"]
package;

import massive.munit.Assert;
import Grid;

/**
 * ...
 * @author MikeC & Alec McEachran
 */
class GridTest
{
    public var grid : Grid;

    @Before
    public function before():Void
    {
        grid = new Grid(3, 3);
    }

    @After
    public function after():Void
    {
        grid = null;
    }

    @Test
    public function initiallyThereAreNoLiveNeighbors():Void
    {
        var liveNeighbors = grid.getLiveNeighbors(1, 1);
        Assert.isTrue(liveNeighbors == 0);
    }

    @Test
    public function liveNeighborCountIsAccurate():Void
    {
        grid.set(0, 0, true);
        grid.set(1, 0, true);
        grid.set(2, 1, true);

        var liveNeighbors = grid.getLiveNeighbors(1, 1);
        Assert.isTrue(liveNeighbors == 3);
    }
}

[/codesyntax]

It should look fairly familiar to anyone who has used FlexUnit before: the metadata tags @Before, @After and @Test behave exactly as they do in FlexUnit. Another benefit of using munit over the built-in testing framework in haXe is that you are given a tool to run tests on all platforms simultaneously:

[codesyntax lang="text"]
haxelib run munit test test.hxml

[/codesyntax]

When executed you get something that looks like the following:



This presents a nice graphical representation of the tests that ran and which failed (if any).

Once built and tested, we decided to give the code a simple visual representation. We wanted to show off haXe's ability to target multiple platforms, so we went with NME, which I had been experimenting with recently.

NME is a library and toolchain for haXe designed to allow the developer to use the Flash API on multiple platforms. It achieves this by providing platform-targeted versions of the Flash API, which means that code such as the following:

[codesyntax lang="actionscript3" lines="no"]
package;

import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.display.MovieClip;
import flash.geom.Rectangle;

/**
 * ...
 * @author MikeC & Alec McEachran
 */
class Render
{
    private var _cellSize : Int;
    private var _renderTarget : BitmapData;
    private var _rect : Rectangle;

    public function new(container:MovieClip, cols:Int, rows:Int, cellSize:Int)
    {
        _cellSize = cellSize;
        _renderTarget = new BitmapData(cols * cellSize, rows * cellSize, false);
        container.addChild(new Bitmap(_renderTarget));

        _rect = new Rectangle(0, 0, _cellSize, _cellSize);
    }

    public inline function lock():Void
    {
        _renderTarget.lock();
        _renderTarget.fillRect(_renderTarget.rect, 0xff0000);
    }

    public inline function renderCell(x:Int, y:Int, isLive:Bool):Void
    {
        if (isLive)
        {
            _rect.x = x * _cellSize;
            _rect.y = y * _cellSize;
            _renderTarget.fillRect(_rect, 0);
        }
    }

    public inline function unlock():Void
    {
        _renderTarget.unlock();
    }
}

[/codesyntax]

will compile down to Flash, C++ and JavaScript! NME also includes packaging abilities for webOS, Android and iOS, so with a few scripted command lines you can target most app marketplaces:

[codesyntax lang="text"]
haxelib run nme test YourProject.nmml flash
haxelib run nme update YourProject.nmml ios
haxelib run nme test YourProject.nmml webos
haxelib run nme test YourProject.nmml android
haxelib run nme test YourProject.nmml cpp
haxelib run nme test YourProject.nmml cpp -64

[/codesyntax]

What this means for this project is that we could very quickly get a view of our Game of Life running in Flash, JS and on the native desktop.

To show just how easy it is I made the following video:



You can see the HTML5 build here: http://mikecann.co.uk/projects/gameoflife/Export/html5/bin/

And the flash build here: http://mikecann.co.uk/projects/gameoflife/Export/flash/bin/MyApplication.swf

I have uploaded the source for the project here: http://mikecann.co.uk/projects/gameoflife/gameoflife.zip

Saturday 8 October 2011

try {harder} - my haXe slides and code



This week was a week of firsts. It was the first meeting of the try {harder} conference, it was the first Flash conference I have ever attended, and it was my first time speaking in front of a group of my peers on a topic I feel passionate about.

The idea behind the conference was to introduce a smaller (16 people), more intimate conference environment where the key was to learn and inspire. In that goal it certainly succeeded.

The fact that everyone had to give a talk encouraged attentiveness and participation as you knew it was only a matter of time before you were in the same situation. It also reduced the stress that comes from speaking to so many very intelligent people.

When Stray contacted me to suggest that my talk's topic be haXe, I was very uncertain. I didn't feel I had enough experience with haXe to speak about it confidently, having worked with it for less than a year. It turned out, however, to be a good thing: it pushed me to find out more about haXe and to research some areas I had heard about but never investigated.

When constructing my slides I knew that I could never cover all of haXe, as it's such a large topic with so many interesting facets. I decided to concentrate on targeting the talk at my audience: very experienced ActionScript and Flex developers who had probably heard of haXe but never had any impetus to pick it up and use it.

With that in mind, I structured my slides around what I consider the big advantages of haXe over ActionScript or JavaScript. I also knew that my audience were all very experienced programmers, so I tried to give plenty of code samples and examples.

Anyway, at the end of the day I really enjoyed preparing and giving my first talk at try {harder}. It was well received, and I just hope I have succeeded in inspiring some more people to investigate and use haXe.

I wrote my slides in Google Docs, where they are viewable online: https://docs.google.com/present/view?id=dc6wvdg5_151frv985w7

The code samples and examples mentioned on slides are uploaded here: http://mikecann.co.uk/projects/TryHarder.zip


Saturday 1 October 2011

Windows Taskbar Monitor v0.3



I have just pushed a few small changes to one of my projects, Windows Taskbar Monitor.

One of the users of the app noted that they found the animated progress bars rather annoying, and asked whether it would be possible to add an option to disable them. I agreed that they could get a little distracting, so I added an option to disable the bars.

While I was there, I also added a couple of command-line arguments to the app so that you can configure each instance from a shortcut.

 

The options are:

-bars [true/false]

This tells the app whether to start up with the bars enabled; the default is true.

-type [cpu/mem/net]

This tells the app which of the three monitors to start up with: CPU, memory or network.
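For example, three shortcuts could combine the flags like this (the install path here is hypothetical; use wherever the .exe lives on your machine):

```text
"C:\Program Files\Windows Taskbar Monitor\WinTaskbarMon.exe" -type cpu
"C:\Program Files\Windows Taskbar Monitor\WinTaskbarMon.exe" -type mem -bars false
"C:\Program Files\Windows Taskbar Monitor\WinTaskbarMon.exe" -type net
```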

 

One way to get all three instances to start with different monitors when Windows starts is to use the Startup folder:

1) Click the Windows start icon, then right-click on "All Programs" and click Open:



2) Navigate to Programs > Startup, then make three shortcuts to Windows Taskbar Monitor v0.3. You can rename the shortcuts if you like:



3) For each shortcut, right click and select properties:



4) Now enter your command-line arguments in the Target field after the final .exe:



 

Et voilà, when Windows starts you should now have three different monitors opening!

I have pushed all the source code and the new binaries to my Google Code project: http://code.google.com/p/win7-taskbar-mon/

Enjoy!