Adobe Scout: Profiling Taken to the Next Level – Session video

Here is the recording from the Adobe Scout: Profiling Taken to the Next Level session from Adobe MAX 2013.

We also sneak peek, for the first time, an experiment we have been working on to support HTML/JS inside Scout. Check it out at 27:00 ;)

I recommend playing it fullscreen in HD. Thanks to all of you who attended in person. If you are watching it now, I hope you like it!

Posted on May 10, 2013 by Thibault Imbert · 0 comments
JavaScript WebGL

Introducing Starling.js

Almost 2 years ago, Starling was born: an open-source ActionScript 3 framework for game development powered by Stage3D in Flash. In the past year, the momentum we have seen around Starling has been fantastic. Companies like Zynga and Rovio, but also many small studios and indie developers, have been using Starling to create beautiful mobile and desktop games. The Starling community has kept growing, with today over 3,000 members of the Starling forum contributing extensions and improving the framework every day.

With the growing demand around mobile browser-based games, we thought it would be useful for developers who love Starling to be able to use the framework they know on top of web standards. Starling.js is based on canvas (WebGL in the longer term) and is developed with TypeScript, allowing developers to choose between the plain JavaScript version and the TypeScript version to leverage ES6 features and valuable things like optional static typing.

Here is a small class showing what it looks like:
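As a rough idea, a minimal class might look like the sketch below. The class and method names (Game, Quad, addChild) are modeled on the AS3 Starling API and are assumptions, not the shipped Starling.js API:

```javascript
// Hypothetical sketch modeled on the AS3 Starling display list
// (Sprite/Quad/addChild); actual Starling.js names may differ.
class Game {
  constructor() {
    this.children = [];
  }
  addChild(child) {
    this.children.push(child);
    return child;
  }
}

class Quad {
  constructor(width, height, color) {
    this.width = width;
    this.height = height;
    this.color = color;
    this.x = 0;
    this.y = 0;
  }
}

const game = new Game();
const quad = game.addChild(new Quad(128, 128, 0xff0000));
quad.x = 50;
quad.y = 50;
```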

We hope you will love Starling.js; we just posted a little demo here. Daniel posted some more details on his blog.

To get notified when Starling.js is available, you can subscribe here.

Posted on May 2, 2013 by Thibault Imbert · 3 comments
Adobe MAX 2013

Session at MAX: HTML5 for ActionScript 3.0 developers

The MAX conference is approaching, happening May 4-8 in Los Angeles, and I will be giving a talk entitled HTML5/JS for ActionScript 3 developers, which I am super excited about.

I will be covering the differences and similarities between JavaScript (today and tomorrow) and ActionScript 3, the browser APIs and solutions you want to use to power expressive content across desktop and mobile, and the options available to profile your content and optimize performance.

If you have been developing with ActionScript for years and you are diving into web standards to create expressive and interactive stuff, then I hope to see you there! For those of you who cannot attend MAX, the session will also be recorded and posted here.

Posted on April 10, 2013 by Thibault Imbert · 0 comments
JavaScript WebGL

The WebGL potential

With the rumors of WebGL becoming more broadly supported, especially in IE, the promise of a true cross-platform set of GPU APIs is almost here. I personally believe this will change a lot of things on the web, especially how people develop interactive content, but I also hope it will push the web forward.

WebGL for 2D too

WebGL is basically a low-level JavaScript interface to the OpenGL ES 2.0 feature set (using OpenGL/ES 2.0 or DirectX behind the scenes). A common misconception is that WebGL is for 3D only; that is actually not true. By leveraging low-level primitives, you can either recreate higher-level constructs for 3D content, like three.js does, or simply 2D primitives to power 2D. Two triangles form a quad, a texture can be applied to it, and there you have it: a GPU-accelerated 2D image. This is what most frameworks like Cocos2d-x or Pixi.js do today, with a fallback on canvas when WebGL is not available. You need filters or particle effects? Those are also reproducible with a programmable pipeline (shaders) like the one WebGL offers.
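To make the "two triangles form a quad" idea concrete, here is just the geometry data for a textured quad (a minimal sketch; buffer uploads, shaders, and the WebGL context itself are omitted):

```javascript
// A 2D quad is two triangles sharing an edge: four vertices, six indices.
// Each vertex carries a position (x, y) and a texture coordinate (u, v).
const quadVertices = new Float32Array([
  //  x,    y,    u,   v
  -1.0, -1.0,  0.0, 0.0, // bottom-left
   1.0, -1.0,  1.0, 0.0, // bottom-right
   1.0,  1.0,  1.0, 1.0, // top-right
  -1.0,  1.0,  0.0, 1.0  // top-left
]);

// Two triangles: (0, 1, 2) and (0, 2, 3).
const quadIndices = new Uint16Array([0, 1, 2, 0, 2, 3]);

// In a real WebGL setup, these arrays would be uploaded with
// gl.bufferData(), drawn with gl.drawElements(gl.TRIANGLES, 6,
// gl.UNSIGNED_SHORT, 0), and the texture sampled in the fragment
// shader using the (u, v) coordinates.
```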

When we developed Stage3D in Flash, we wanted to expose low-level primitives to allow people to develop the frameworks they need on top of it. Game developers especially want that level of granularity to control everything and get the best performance out of it. On the other hand, we knew that most developers would not use the low-level GPU primitives and would prefer a higher-level API exposing primitives they are familiar with. That's why we created Starling, which rapidly became the de facto framework for 2D content on top of Stage3D. With it, Rovio shipped Angry Birds on Facebook and, more recently, Zynga shipped Ruby Blast.

But WebGL will not only help gaming; it will also be useful to power interactive experiments or parts of digital marketing websites like we have seen for many years, traditionally delivered through Flash. It can also revolutionize the way developers build mobile applications. The smooth UI components you appreciate on your phone are all GPU accelerated, powered by OpenGL ES 2 (iOS, Android) or DirectX (Windows 8). I hope lots of UI frameworks will emerge to provide blazingly fast GPU-accelerated UIs on top of WebGL. Feathers is a good example of a framework that could be powered by WebGL.

Does this mean we will do everything through WebGL? No, it all depends on the content and use cases. A classic DOM styled with CSS works great for more text-based content, like a news website, a forum, or simple form applications, and the CPU is great at things like font rendering and vectors, among many other things. But the combination of the two (CPU- and GPU-based renderers) will be a very powerful mix.

Implicit vs explicit

To get performance right in browsers today, techniques are being developed to trick the system. It is pretty scary to see the conversations on Stack Overflow about performance tricks; you end up developing your content based on assumptions or, worse, side effects. There has been a lot of talk in the past years about some magical GPU acceleration of the DOM, but GPU accelerating a DOM efficiently is hard. Why? Because a display list allows so many things that are hard to implement efficiently on the GPU: complex nesting, masking, blending, all controllable through JavaScript or CSS. As the developer writing this "magic" code in the browser to GPU accelerate the DOM, you will probably make assumptions or exceptions and won't be able to cover the millions of possible use cases, and this is where it becomes really hard for developers to build content on top of this surface. Things you cannot control happen behind the scenes, and you end up developing your content with voodoo optimizations and chicken bones, with recommendations like:

Use this property, but make sure you don't nest anything, and by the way make sure you set the alpha to zero, but never remove the element from the DOM.

What you want is for low-level primitives (like WebGL) to be exposed, letting people develop higher-level frameworks on top; this is the most flexible model. You have a performance issue or a bug? You can fix it. You don't like the architecture of a framework? You replace it, because all that code is then content based, not browser based. That was the idea behind the DOM.js project.

One recent example is the use of translateZ (CSS3) to force GPU acceleration. Here again, the problem with that approach is that you are using a hack which will have side effects. Originally developed for 3D effects, translateZ is now used by developers for 2D effects without really knowing what implicitly happens behind the scenes. What happens is that the DOM element you want to accelerate is rasterized (this first step alone can be costly if the element has lots of nested children), then the bitmap is uploaded to the GPU behind the scenes (note that you never called any upload API) and blended onto the page. By the way, if the texture is too big, the device will probably run out of memory, leaving the developer in the dark.
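To show how implicit the hack really is, here is a minimal sketch of it (the element id is hypothetical; notice that nothing in this code mentions rasterization or texture uploads, yet both happen behind the scenes):

```javascript
// Setting a 3D transform promotes an element to its own compositing
// layer: the browser rasterizes it and uploads it to the GPU implicitly.
function forceGpuLayer(el) {
  el.style.webkitTransform = 'translateZ(0)'; // 2013-era vendor prefix
  el.style.transform = 'translateZ(0)';
  return el;
}

// In a browser: forceGpuLayer(document.getElementById('hero'));
// The costly steps (rasterization, implicit texture upload, compositing)
// never appear in your code, which is exactly the problem.
```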

Again, with lower-level primitives, you allow advanced developers to build higher-level frameworks (here, in JavaScript) and give them a chance to work with APIs that are explicit about what they do, with minimal side effects. The people who just want to use these high-level frameworks to get things done get a chance to see the implementation and figure out what is wrong and how things work. This is not something new: with Flash, developers came up with similar techniques to make things run faster. The cacheAsBitmap technique was used, but here again, it was a double-edged sword. It looks great on paper, but once you start using it, you realize how sneaky the API is, and it becomes really hard to build things efficiently on top of it. I wrote a blog post a long time ago on why cacheAsBitmap is evil and how similar results can be achieved using lower-level primitives with minimal side effects.

The importance of higher-level frameworks

Most developers will probably not dive into the details of GPU programming with WebGL. People will look for high-level frameworks exposing constructs they are familiar with, to get things done and get the best performance with good productivity. These frameworks will be essential, because WebGL is not a web-developer-friendly API; it is literally an API from the '90s (OpenGL) surfaced directly to JavaScript.

Not only can GPU programming be hard, but in addition, the WebGL API surface is large, and the way you script it or catch errors is quite verbose and complex, which will feel very unnatural to most web developers. The great news is that there are already tons of JavaScript frameworks on top of WebGL popping up like mushrooms, so there will be plenty of options for developers.

Is it still the web?

Just like the 2D canvas, WebGL renders everything inside a canvas, which is pretty much a black box. This model is acceptable for games or some interactive pieces, which are generally not SEO friendly or accessible, but if more 2D interactive content is powered by WebGL, like applications, such content will probably need to be accessible. That would not work today, but those are the limitations of today, and this could probably change. Why should only the classic DOM be accessible?

Here again, WebGL could introduce new ways of doing things which could help move the web forward.

Pushing the web forward

Before Stage3D in Flash, the main bottleneck was the graphics pipeline; ActionScript 3 was actually fast enough to power most display list or blitted content relying on BitmapData. But once Stage3D was introduced, the situation changed and ActionScript 3 became the limitation. Developers started building high-level frameworks, which moved lots of expensive code that would traditionally sit on the native side into ActionScript 3. Code like tree traversal, bounds or matrix calculations, and more was now happening at the ActionScript 3 level, pressuring the VM and maxing out the CPU.

To get the best performance, you want to make sure the CPU and the GPU talk to each other smoothly, in parallel. If the CPU happens to be maxed out, it won't feed the GPU fast enough, and performance will crawl. With WebGL, the same pressure will be exerted on JavaScript, hence why initiatives like asm.js are required; asm.js is already able to power something like the Unreal Engine, and I hope it will ultimately result in pushing JavaScript performance forward. You can like or dislike asm.js, but it tries to solve a problem vital to the next-generation content we will see on the web tomorrow.
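For a taste of what asm.js looks like, here is a tiny hand-written module (a sketch; real asm.js code is typically generated by compilers like Emscripten rather than written by hand, and the module name here is made up):

```javascript
// The "use asm" pragma tells a supporting VM that this module follows
// the asm.js subset: every value is coerced to a known type (| 0 marks
// 32-bit integers), so the code can be compiled ahead of time to fast
// native code. In any other VM, it simply runs as plain JavaScript.
function MathModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // coerce parameter to int32
    b = b | 0;
    return (a + b) | 0; // result is also an int32
  }
  return { add: add };
}

// stdlib/foreign are unused in this tiny example; the heap must be an
// ArrayBuffer whose size is a valid asm.js heap length.
var mod = MathModule(globalThis, {}, new ArrayBuffer(0x10000));
```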

If you are interested in hearing more about asm.js, John Resig just posted a great article about it.

Posted on April 3, 2013 by Thibault Imbert · 1 comment
Space Invaders

Intel 8080 CPU emulation in JavaScript

This weekend, I wanted to try a real-world project to play more with TypeScript. Why TypeScript? Because I wanted to leverage a few ES6 features but also type checking. Note that I did not use strong typing, but just relied on the type inference provided by the TypeScript compiler.

A few years ago, I wrote an Intel 8080 CPU emulator in ActionScript 3 and thought this would be a great fit for a TypeScript exercise. For context, the Intel 8080 is a 2 MHz 8-bit CPU. Through a Uint8Array (typed array), we can read each instruction (byte per byte) coming from a ROM and fully emulate the CPU.

So which game could we run to test the CPU? The Intel 8080 was used inside the famous Space Invaders arcade machine, so using the original Space Invaders ROM, we can emulate the whole arcade system entirely in JavaScript (CPU/RAM/input/screen). Check the different files in the GitHub repo. The CPU is the most important part, but I recommend checking out the other pieces too; it is really fun to see how things work and how the hardware is emulated.

The tricky thing is that because of the lack of a byte type (ActionScript 3 has the same limitation), CPU registers (which are originally 8-bit) use the Number type, which is 64-bit, so each register needs to be masked constantly (register & 0xFF) to avoid overflow.
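As a concrete sketch of the fetch-and-mask pattern at work (reduced to a single illustrative 8080 instruction; the two-byte "program" is made up for the example):

```javascript
// MVI A, d8 (opcode 0x3E) loads the next ROM byte into register A.
var rom = new Uint8Array([0x3E, 0x2A]); // illustrative two-byte program
var pc = 0;     // program counter
var regA = 0;   // 8-bit accumulator stored in a 64-bit Number

function step() {
  var opcode = rom[pc++];     // Uint8Array reads already come back as 0..255
  if (opcode === 0x3E) {
    regA = rom[pc++] & 0xFF;  // mask keeps the register in 8-bit range
  }
}

// Without masking, arithmetic on Numbers would silently exceed 8 bits:
function add8(a, b) {
  return (a + b) & 0xFF;      // 0xFF + 0x01 wraps to 0x00, like real hardware
}

step(); // regA is now 0x2A
```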

The game is playable here. It runs nicely in most browsers on desktop, and even runs nicely on mobile, except in UIWebView-based browsers, where the lack of JIT compilation seriously impacts performance.

(Space Invaders art by Alfimov)

Posted on March 18, 2013 by Thibault Imbert · 1 comment
JavaScript refresh

A JavaScript refresh


We will cover here some of the key concepts of JavaScript to get us started. If you have not checked JavaScript for the past few years or if you are new to JavaScript, I hope you find this useful.

We will start by covering the language basics like variables, functions, scope, and the different types, but we will not spend much time on the absolute basics like operators, or what a function or a variable is; you probably already know all that as a developer. We will discover JavaScript by going through simple examples and, for each of these, highlight specific behaviors, approaching the language from an interactive developer's standpoint, coming from other technologies like Flash (ActionScript 3), Java, C#, or simply native (C++).

Like other managed languages, JavaScript runs inside a virtual machine (the JavaScript VM). One key difference to note is that instead of executing bytecode, JavaScript VMs are source based, translating JavaScript source code directly to native code by using what is called a JIT (just-in-time) compiler when available. The JIT performs optimizations at runtime (just in time) to leverage platform-specific optimizations depending on the architecture the code is being run on. Of course, most browsers available today run JavaScript; the list below highlights the most popular JavaScript VMs in the industry today:

- V8 (Google Chrome)
- SpiderMonkey (Mozilla Firefox)
- JavaScriptCore, aka Nitro (Apple Safari)
- Chakra (Internet Explorer)

JavaScript can provide some serious advantages over low-level languages, like automatic memory allocation and collection through a garbage collector. This, however, comes at the cost of speed. But managed languages provide so much value in terms of productivity and platform reach that developers today tend to favor them over low-level languages, despite the loss of performance, because of the higher cost of low-level languages when it comes to targeting multiple platforms.

Posted on March 16, 2013 by Thibault Imbert · 31 comments