Note: Source code (FilterExample) available on GitHub

I was wondering how bitmap programming works on iOS. Just like with BitmapData in Flash, I wanted to perform simple painting operations, but I was also curious about how filters work. In AS3 you would use the applyFilter API on BitmapData; here is how things work on iOS/macOS with Swift and the Quartz/Core Image APIs.

Graphics Context and the Quartz drawing engine

In Flash, to perform drawing operations, you create a BitmapData object and use the pixel APIs defined on it. On iOS/macOS, things are different: you work with a graphics context (which is offscreen) that you manipulate through high-level functions like CGContextSetRGBFillColor. The CG prefix stands for Core Graphics, which leverages the powerful Quartz drawing engine behind the scenes.

To initiate the drawing, we create a context by specifying its size, whether it is opaque, and its scale factor:

UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)

Note that our bitmap will be 200 by 200 px and opaque. We pass 1 for the scale factor because the size of the bitmap in pixels is its width and height multiplied by the scale value. If we had passed 0, the scale factor of the device’s screen would have been used.
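
To see the effect of the scale parameter, here is a minimal sketch (the variable name is mine) that creates the same context with a scale of 2 and inspects the image it produces:

UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 2)
let retinaImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
// the size stays 200 by 200 points, but the scale is 2.0,
// so the underlying bitmap is 400 by 400 pixels
println("size: \(retinaImage.size), scale: \(retinaImage.scale)")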

We now have our context created offscreen, ready to receive drawing commands, but we still don’t have a reference to it. UIGraphicsBeginImageContextWithOptions creates the context but does not return it; for that, we call the UIGraphicsGetCurrentContext API:

let context = UIGraphicsGetCurrentContext()

OK, now we are ready to draw, so we use the high-level APIs for that; the names are pretty explicit about their purpose:

CGContextSetRGBFillColor(context, 1, 1, 0, 1)
CGContextFillRect(context, CGRectMake(0, 0, 200, 200))
CGContextSetRGBFillColor(context, 1, 0, 0, 1)
CGContextFillRect(context, CGRectMake(0, 0, 100, 100))
CGContextSetRGBFillColor(context, 1, 1, 0, 1)
CGContextFillRect(context, CGRectMake(0, 0, 50, 50))
CGContextSetRGBFillColor(context, 0, 0, 1, 0.5)
CGContextFillRect(context, CGRectMake(0, 0, 50, 100))

We are now drawing offscreen. Note that at this point we don’t yet have a bitmap that can be displayed; these are just raw pixels being painted. To display this on screen, we need a high-level wrapper, just like the relationship between Bitmap and BitmapData in Flash. So we use the UIGraphicsGetImageFromCurrentImageContext API, which basically takes a snapshot/raster of our drawing:

let image = UIGraphicsGetImageFromCurrentImageContext()
// we are done with the context, clean it up
UIGraphicsEndImageContext()

At this point, we could just display the UIImage object returned here. Because I am using SpriteKit for my experiments, we need to wrap the UIImage object into an SKSpriteNode object that holds an SKTexture object, which gives us:

// we create a texture, pass the UIImage
let texture = SKTexture(image: image)
// wrap it inside a sprite node
let sprite = SKSpriteNode(texture: texture)
// we scale it a bit
sprite.setScale(0.5)
// we position it
sprite.position = CGPoint(x: 510, y: 280)
// let's display it
self.addChild(sprite)

This is what you get:

Simple bitmap data

Here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */

        // we create the graphics context
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)

        // we retrieve it
        let context = UIGraphicsGetCurrentContext()

        // we issue drawing commands
        CGContextSetRGBFillColor(context, 1, 1, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 200, 200))
        CGContextSetRGBFillColor(context, 1, 0, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 100, 100))
        CGContextSetRGBFillColor(context, 1, 1, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 50, 50))
        CGContextSetRGBFillColor(context, 0, 0, 1, 0.5)
        CGContextFillRect(context, CGRectMake(0, 0, 50, 100))

        // we query an image from it
        let image = UIGraphicsGetImageFromCurrentImageContext()
        // we are done drawing, so we close the context
        UIGraphicsEndImageContext()

        // we create a texture, pass the UIImage
        let texture = SKTexture(image: image)
        // wrap it inside a sprite node
        let sprite = SKSpriteNode(texture: texture)
        // we scale it a bit
        sprite.setScale(0.5)
        // we position it
        sprite.position = CGPoint(x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}
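
Since the result is a regular SKSpriteNode, we can animate it like any other node. For example, a tiny sketch (the destination and duration are arbitrary) that slides the sprite across the scene:

// slide the sprite to a new position over two seconds
let move = SKAction.moveTo(CGPoint(x: 100, y: 100), duration: 2.0)
sprite.runAction(move)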

Pretty simple, right? As the little sketch above shows, you can now move your sprite, animate it, scale it, etc. But what if we have an existing image and we want to apply filters to it? In Flash, a loaded bitmap resource would give us a Bitmap object whose bitmapData property pointed to the bitmap data we could work with. How does that work here? This is where Core Image comes into play.

Core Image

This is where it gets really cool. If you need to apply filters or perform any image or video processing in real time, you use the powerful Core Image APIs. So let’s take the image below, unprocessed:

Ayden no filter

Now, let’s apply a filter with the code below. In this example we use CIPhotoEffectTransfer, which applies a nice Instagram-like effect. Take a look at all the filters available; the capabilities are pretty endless:

// we create Core Image context
let ciContext = CIContext(options: nil)
// we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
let coreImage = CIImage(image: image)
// we pick the filter we want
let filter = CIFilter(name: "CIPhotoEffectTransfer")
// we pass our image as input
filter.setValue(coreImage, forKey: kCIInputImageKey)
// we retrieve the processed image
let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
// returns a Quartz image from the Core Image context
let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
// this is our final UIImage ready to be displayed
let filteredImage = UIImage(CGImage: filteredImageRef)

This gives us the following result:

Ayden filtered

And here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */

        // we reference our image by absolute path (handy for quick tests in the simulator;
        // a real app would load the image from its bundle instead)
        let data = NSData(contentsOfFile: "/Users/timbert/Documents/Ayden.jpg")
        // we create a UIImage out of it
        let image = UIImage(data: data)

        // we create Core Image context
        let ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        let coreImage = CIImage(image: image)
        // we pick the filter we want
        let filter = CIFilter(name: "CIPhotoEffectTransfer")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we retrieve the processed image
        let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        let filteredImage = UIImage(CGImage: filteredImageRef)

        // we create a texture, pass the UIImage
        let texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        let sprite = SKSpriteNode(texture: texture)
        // we scale it a bit
        sprite.setScale(0.5)
        // we position it
        sprite.position = CGPoint(x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}
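
By the way, about that list of available filters: you can query it at runtime, which is handy for exploring what Core Image offers. A minimal sketch:

// list the names of every built-in Core Image filter
let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn) as [String]
for name in filterNames {
    println(name)
}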

We can also play with filter parameters to customize the effect, and we could even use shaders for more flexibility; more on that later :) In the code below, we apply a pinch distortion effect to our initial image, which gives us the following:

Simple distortion

And here is the full code:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */

        // we reference our image by absolute path (handy for quick tests in the simulator;
        // a real app would load the image from its bundle instead)
        let data = NSData(contentsOfFile: "/Users/timbert/Documents/Ayden.jpg")
        // we create a UIImage out of it
        let image = UIImage(data: data)

        // we create Core Image context
        let ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        let coreImage = CIImage(image: image)
        // we pick the filter we want
        let filter = CIFilter(name: "CIPinchDistortion")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we pass a custom value for the inputCenter parameter, note the use of the CIVector type here
        filter.setValue(CIVector(x: 300, y: 200), forKey: kCIInputCenterKey)
        // we retrieve the processed image
        let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        let filteredImage = UIImage(CGImage: filteredImageRef)

        // we create a texture, pass the UIImage
        let texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        let sprite = SKSpriteNode(texture: texture)
        // we scale it a bit
        sprite.setScale(0.5)
        // we position it
        sprite.position = CGPoint(x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}
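
Note that CIPinchDistortion exposes more than inputCenter: it also has inputRadius and inputScale parameters controlling how far the distortion reaches and how strong the pinch is. We could customize those too, right after setting the center in the listing above (the values here are purely illustrative):

// tweak the reach and the strength of the pinch (illustrative values)
filter.setValue(300, forKey: kCIInputRadiusKey)
filter.setValue(0.8, forKey: kCIInputScaleKey)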

Now, can we apply a filter to the first bitmap we created through drawing commands? Sure. Here is the code for a blur effect:

import SpriteKit

class GameScene: SKScene {
    override func didMoveToView(view: SKView) {
        /* Setup your scene here */

        // we create the graphics context
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 200, height: 200), true, 1)

        // we retrieve it
        let context = UIGraphicsGetCurrentContext()

        // we issue drawing commands
        CGContextSetRGBFillColor(context, 1, 1, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 200, 200))
        CGContextSetRGBFillColor(context, 1, 0, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 100, 100))
        CGContextSetRGBFillColor(context, 1, 1, 0, 1)
        CGContextFillRect(context, CGRectMake(0, 0, 50, 50))
        CGContextSetRGBFillColor(context, 0, 0, 1, 0.5)
        CGContextFillRect(context, CGRectMake(0, 0, 50, 100))

        // we query an image from it
        let image = UIGraphicsGetImageFromCurrentImageContext()
        // we are done drawing, so we close the context
        UIGraphicsEndImageContext()

        // we create Core Image context
        let ciContext = CIContext(options: nil)
        // we create a CIImage, think of a CIImage as image data for processing, nothing is displayed or can be displayed at this point
        let coreImage = CIImage(image: image)
        // we pick the filter we want
        let filter = CIFilter(name: "CIGaussianBlur")
        // we pass our image as input
        filter.setValue(coreImage, forKey: kCIInputImageKey)
        // we retrieve the processed image
        let filteredImageData = filter.valueForKey(kCIOutputImageKey) as CIImage
        // returns a Quartz image from the Core Image context
        let filteredImageRef = ciContext.createCGImage(filteredImageData, fromRect: filteredImageData.extent())
        // this is our final UIImage ready to be displayed
        let filteredImage = UIImage(CGImage: filteredImageRef)

        // we create a texture, pass the UIImage
        let texture = SKTexture(image: filteredImage)
        // wrap it inside a sprite node
        let sprite = SKSpriteNode(texture: texture)
        // we scale it a bit
        sprite.setScale(0.5)
        // we position it
        sprite.position = CGPoint(x: 510, y: 380)
        // let's display it
        self.addChild(sprite)
    }

    override func update(currentTime: CFTimeInterval) {
        /* Called before each frame is rendered */
    }
}

And here is the result:

Simple Bitmap data filtered
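
One gotcha with CIGaussianBlur: the blur samples past the edges of the source, so the output’s extent is larger than the input’s, and rendering with filteredImageData.extent() includes that soft border. If you would rather keep the original 200 by 200 size, render with the input image’s extent instead; a small tweak to the listing above:

// crop the blurred output back to the source image's extent
let croppedRef = ciContext.createCGImage(filteredImageData, fromRect: coreImage.extent())
let croppedImage = UIImage(CGImage: croppedRef)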

I hope you guys enjoyed it! Lots of possibilities, lots of fun with these APIs.