Welcome back to our 2-part tutorial series that teaches you how to use LiquidFun with Metal and Swift!
In the first part of the series, you learned how to integrate LiquidFun with Swift and used that knowledge to create an invisible liquid particle system.
In this second part of the series, you’ll learn how to render your LiquidFun particles onscreen using projection transformations, uniform data and shaders in Metal. You’ll also get to move them around in a simulated physics world for some water-splashing fun.
After all, you didn’t name your project LiquidMetal for nothing.
Getting Started
First, make sure you have a copy of the project from Part 1, either by going through the first tutorial or by downloading the finished project.
Before proceeding with Metal, I recommend going through the Introduction to Metal Tutorial if you haven’t already. To keep this part short, I’ll breeze through the basic setup of Metal and focus only on new concepts that aren’t in the other Metal tutorials on our site.
Create a Metal Layer
You first need to create a CAMetalLayer, which acts as the canvas upon which Metal renders content.
Inside ViewController.swift, add the following properties and new method:
```swift
var device: MTLDevice! = nil
var metalLayer: CAMetalLayer! = nil

func createMetalLayer() {
  device = MTLCreateSystemDefaultDevice()
  metalLayer = CAMetalLayer()
  metalLayer.device = device
  metalLayer.pixelFormat = .BGRA8Unorm
  metalLayer.framebufferOnly = true
  metalLayer.frame = view.layer.frame
  view.layer.addSublayer(metalLayer)
}
```
Now replace printParticleInfo() in viewDidLoad with a call to this new method:
```swift
createMetalLayer()
```
Inside createMetalLayer, you store a reference to an MTLDevice, which you’ll use later to create the other Metal objects that you’ll need. Next, you create a CAMetalLayer with default properties and add it as a sublayer to your current view’s main layer. You call createMetalLayer from viewDidLoad to ensure your Metal layer is set up along with the view.
Create a Vertex Buffer
The next step is to prepare a buffer that contains the positions of each particle in your LiquidFun world. Metal needs this information to know where to render your particles on the screen.
Still in ViewController.swift, add the following properties and new method:
```swift
var particleCount: Int = 0
var vertexBuffer: MTLBuffer! = nil

func refreshVertexBuffer() {
  particleCount = Int(LiquidFun.particleCountForSystem(particleSystem))
  let positions = LiquidFun.particlePositionsForSystem(particleSystem)
  let bufferSize = sizeof(Float) * particleCount * 2
  vertexBuffer = device.newBufferWithBytes(positions, length: bufferSize, options: nil)
}
```
Here you add two new properties: particleCount to keep track of how many particles you have, and vertexBuffer to store the MTLBuffer Metal requires to access the vertex positions.
Inside refreshVertexBuffer, you call LiquidFun.particleCountForSystem to get the number of particles in the system and store the result in particleCount. Next, you use the MTLDevice to create a vertex buffer, passing in the position array directly from LiquidFun.particlePositionsForSystem. Since each position has an x- and y-coordinate pair as float types, you multiply the size in bytes of two Floats by the number of particles in the system to get the size needed to create the buffer. For the nine particles you created in Part 1, that’s 4 × 2 × 9 = 72 bytes.
Call this method at the end of viewDidLoad:
```swift
refreshVertexBuffer()
```
Now that you’ve given Metal access to your particles, it’s time to create the vertex shader that will work with this data.
Create a Vertex Shader
The vertex shader is the program that takes in the vertex buffer you just created and determines the final position of each vertex onscreen. Since LiquidFun’s physics simulation calculates the particle positions for you, your vertex shader only needs to translate LiquidFun particle positions to Metal coordinates.
Right-click the LiquidMetal group in the Project Navigator and select New File…, then select the iOS\Source\Metal File template and click Next. Enter Shaders.metal for the filename and click Create.
First, add the following structs to Shaders.metal:
```metal
struct VertexOut {
  float4 position [[position]];
  float pointSize [[point_size]];
};

struct Uniforms {
  float4x4 ndcMatrix;
  float ptmRatio;
  float pointSize;
};
```
You’ve defined two structs:

- VertexOut contains data needed to render each vertex. The [[position]] qualifier indicates that float4 position contains the position of the vertex onscreen, while the [[point_size]] qualifier indicates that float pointSize contains the size of each vertex. Both of these are special keywords that Metal recognizes, so it knows exactly what each property is for.
- Uniforms contains properties common to all vertices. This includes the points-to-meters ratio you used for LiquidFun (ptmRatio), the radius of each particle in the particle system (pointSize) and the matrix that translates positions from screen points to normalized device coordinates (ndcMatrix). More on this later.
Next is the shader program itself. Still in Shaders.metal, add this function:
```metal
vertex VertexOut particle_vertex(const device packed_float2* vertex_array [[buffer(0)]],
                                 const device Uniforms& uniforms [[buffer(1)]],
                                 unsigned int vid [[vertex_id]]) {
  VertexOut vertexOut;
  float2 position = vertex_array[vid];
  vertexOut.position = uniforms.ndcMatrix * float4(position.x * uniforms.ptmRatio,
                                                   position.y * uniforms.ptmRatio, 0, 1);
  vertexOut.pointSize = uniforms.pointSize;
  return vertexOut;
}
```
The shader’s first parameter is a pointer to an array of packed_float2 data types—a packed vector of two floats, commonly containing x- and y-position coordinates. Packed vectors don’t contain the extra bytes commonly used to align data elements in a computer’s memory. (For example, a Metal float3 occupies 16 bytes because of alignment, while a packed_float3 occupies exactly the 12 bytes of its three floats.) You’ll read more about alignment a bit later.
The [[buffer(0)]] qualifier indicates that vertex_array will be populated by the first buffer of data that you send to your vertex shader.

The second parameter is a handle to the Uniforms structure. Similarly, the [[buffer(1)]] qualifier indicates that the second parameter is populated by the second buffer of data sent to the vertex shader.
The third parameter is the index of the current vertex inside the vertex array, and you use it to retrieve that particular vertex from the array. Remember, the GPU calls the vertex shader many times, once for each vertex to render. For this app, the vertex shader will be called once per water particle to render.
Inside the shader, you get the vertex’s position in LiquidFun’s coordinate system, then convert it to Metal’s coordinate system and output it via vertexOut.
To understand how the final position is computed, you have to be aware of the different coordinate systems with which you’re working. Between LiquidFun and Metal, there are three different coordinate systems:
- the physics world’s coordinate system;
- the regular screen coordinate system; and
- the normalized screen coordinate system.
Given a regular iPhone 5s screen (320 points wide by 568 points high), these translate to the following coordinate systems:
- The screen coordinate system (red) is the easiest to understand and is what you normally use when positioning objects onscreen. It starts from (0, 0) at the bottom-left corner and goes up to the screen’s width and height in points at the upper-right corner.
- The physics world coordinate system (blue) is how LiquidFun sees things. Since LiquidFun operates in smaller numbers, you use ptmRatio to convert screen coordinates to physics world coordinates and back.
- The normalized device coordinate system (green) is Metal’s default coordinate system and is the trickiest to work with. While the previous two coordinate systems both agree that the origin (0, 0) is at the lower-left corner, Metal’s coordinate system places it at the center of the screen. The coordinates are device agnostic, so no matter the size of the screen, (-1, -1) is the lower-left corner and (1, 1) is the upper-right corner.
Since the vertex buffer contains vertices in LiquidFun’s coordinate system, you need to convert them to normalized device coordinates so they come out at the right spot on the screen. This conversion happens in a single line:
```metal
vertexOut.position = uniforms.ndcMatrix * float4(position.x * uniforms.ptmRatio,
                                                 position.y * uniforms.ptmRatio, 0, 1);
```
You first convert the vertex to regular screen coordinates by multiplying the x- and y-positions by the points-to-meters ratio. You use these new values to create a float4 to represent XYZW coordinates. Finally, you multiply the XYZW coordinates by a “mathemagical” matrix that translates your coordinates to normalized screen coordinates using an orthographic projection. You’ll get acquainted with this matrix very soon.
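To see actual numbers, here’s a quick CPU-side sketch of the same conversion for a single particle. It’s an illustration only, assuming the 320×568-point screen from the diagram and a ptmRatio of 32; with left and bottom at 0, the matrix math reduces to the two one-liners at the end:

```swift
// Illustration only: the vertex shader does this on the GPU.
// Assumes a 320x568-point screen and a ptmRatio of 32.
let ptmRatio: Float = 32.0
let screenWidth: Float = 320.0
let screenHeight: Float = 568.0

// A particle at (5, 5) in physics-world meters...
let physicsX: Float = 5.0
let physicsY: Float = 5.0

// ...is at (160, 160) in screen points...
let pointX = physicsX * ptmRatio  // 160
let pointY = physicsY * ptmRatio  // 160

// ...which the orthographic matrix maps to normalized device coordinates.
let ndcX = 2.0 * pointX / screenWidth - 1.0   // 0.0: the horizontal center
let ndcY = 2.0 * pointY / screenHeight - 1.0  // about -0.44: below vertical center
```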
Note: I won’t explain in depth what the z- and w-components are for. As far as this tutorial goes, you need these components to do 3D matrix math.
The z-component specifies how far or near the object is from the camera, but this doesn’t matter much when dealing with a 2D coordinate space. You need the w-component because matrix multiplication formulas work on 4×4 matrices. Long story short, the x-, y-, and z-components are divided by the w-component to get the final 3D coordinates. In this case, w is 1 so that the x-, y-, and z-components don’t change.
If you wish to learn more, read about homogeneous coordinates on Wikipedia.
The NDC Projection Matrix
How you perceive and see objects on the screen depends on the type of projection transformation used to convert points to the normalized device coordinate system that Metal expects.
There are two general types of projection: perspective and orthographic.
Perspective projection gives you a realistic view of your objects because it scales them relative to their distance from a point of view. With this type of projection, the farther an object is from the viewer, the smaller it will appear.
As an example, take this perspective projection of four cubes and a square:
In the above graphic, you can tell that the two cubes at the top are farther away than the two cubes at the bottom, because they appear smaller. If these objects were rendered with an orthographic projection, you would get the following result:
The four cubes look identical to the flat square. This is because orthographic projection discards depth of view, defined by the z-component of the coordinate system, and gives you a flat view of the world.
An object in orthographic projection only moves along the x- and y-axes, and you only see it according to its true size. This is perfect for 2D, so the next step is to create an orthographic projection matrix for your device.
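In equation form, an orthographic projection simply rescales each axis linearly into the normalized range. For the x-axis (the y-axis works the same way, and z additionally flips sign because the projection looks down the negative z-axis):

x_ndc = 2(x - left) / (right - left) - 1

This maps x = left to -1 and x = right to 1, and it’s exactly the arithmetic the matrix you’re about to write encodes.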
Open ViewController.swift and add this method:
```swift
func makeOrthographicMatrix(#left: Float, right: Float, bottom: Float, top: Float,
                            near: Float, far: Float) -> [Float] {
  let ral = right + left
  let rsl = right - left
  let tab = top + bottom
  let tsb = top - bottom
  let fan = far + near
  let fsn = far - near

  return [2.0 / rsl, 0.0, 0.0, 0.0,
          0.0, 2.0 / tsb, 0.0, 0.0,
          0.0, 0.0, -2.0 / fsn, 0.0,
          -ral / rsl, -tab / tsb, -fan / fsn, 1.0]
}
```
This function is OpenGL’s way of creating an orthographic projection matrix. I simply copied this function out of the OpenGL library and converted it to Swift.
You don’t need to understand the math for this tutorial, just how to use it. This function takes as parameters the screen’s bounds in points and returns the corresponding orthographic projection matrix.
You plug in the following parameters:

- left and right: The left- and right-most x-coordinates of the screen. As you saw in the earlier diagram, these values will be 0 for left and the screen’s width in points for right.
- bottom and top: The bottom-most and top-most y-coordinates of the screen. Here bottom will be 0 and top will be the screen height in points.
- near and far: The nearest and farthest z-coordinates. These values affect which z-coordinates are visible on the screen. In OpenGL, any z-coordinate between near and far is visible, but it’s a slightly different case with Metal: OpenGL’s normalized z-coordinate system ranges from -1 to 1, while Metal’s is only 0 to 1. For Metal, you can compute the range of visible z-coordinates using this formula: -(far + near)/2 to -far. You’ll pass in near and far values of -1 and 1; plugging these in, the visible z-coordinates run from 0 to -1, which the matrix maps onto the 0-to-1 range that Metal expects.
Using these parameters, you generate a one-dimensional array that Metal will later use as a 4×4 matrix. That is, Metal will treat the array as values arranged in four rows of four columns each.
It’s important to note that when creating matrices, Metal populates each row of a column before moving on to the next column. The array you create in makeOrthographicMatrix is arranged in the following way in matrix form (abbreviating left, right, bottom, top, near and far as l, r, b, t, n and f):
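```
| 2/(r-l)   0         0          -(r+l)/(r-l) |
| 0         2/(t-b)   0          -(t+b)/(t-b) |
| 0         0        -2/(f-n)    -(f+n)/(f-n) |
| 0         0         0           1           |
```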
Starting from the top-most row of the left-most column, Metal populates each row of each column first before moving on to the next column.
Create a Uniform Buffer
You’re all set to create the uniform buffer that the vertex shader needs. It’s the third and last component that controls the fate of your particles onscreen.
Still in ViewController.swift, add the following property and new method:
```swift
var uniformBuffer: MTLBuffer! = nil

func refreshUniformBuffer() {
  // 1
  let screenSize: CGSize = UIScreen.mainScreen().bounds.size
  let screenWidth = Float(screenSize.width)
  let screenHeight = Float(screenSize.height)
  let ndcMatrix = makeOrthographicMatrix(left: 0, right: screenWidth,
                                         bottom: 0, top: screenHeight,
                                         near: -1, far: 1)
  var radius = particleRadius
  var ratio = ptmRatio

  // 2
  let floatSize = sizeof(Float)
  let float4x4ByteAlignment = floatSize * 4
  let float4x4Size = floatSize * 16
  let paddingBytesSize = float4x4ByteAlignment - floatSize * 2
  let uniformsStructSize = float4x4Size + floatSize * 2 + paddingBytesSize

  // 3
  uniformBuffer = device.newBufferWithLength(uniformsStructSize, options: nil)
  let bufferPointer = uniformBuffer.contents()
  memcpy(bufferPointer, ndcMatrix, UInt(float4x4Size))
  memcpy(bufferPointer + float4x4Size, &ratio, UInt(floatSize))
  memcpy(bufferPointer + float4x4Size + floatSize, &radius, UInt(floatSize))
}
```
Creating the uniform buffer is tricky, because your Swift code isn’t aware of the Uniforms structure that you created in your Metal shader code. What’s more, Swift doesn’t have a native equivalent for the float4x4 type that Metal uses. So the approach you take is to populate this structure, without knowing what the structure is, by copying the values into memory yourself.
- First, you create the orthographic projection matrix, ndcMatrix, by supplying the screen’s dimensions to makeOrthographicMatrix. You also copy the constants particleRadius and ptmRatio into local variables for use later.
- Next, you compute the size of the Uniforms struct in memory. As a refresher, take a second look at the structure’s definition:

```metal
struct Uniforms {
  float4x4 ndcMatrix;
  float ptmRatio;
  float pointSize;
};
```

Since float4x4 consists of 16 floats, you may think that the size of Uniforms is equal to the total size of 18 floats, but that isn’t the case. Because of structure alignment, the compiler inserts extra bytes as padding to optimally align data members in memory. The amount of padding inserted depends on the byte alignment of the data members: the float4x4 occupies 64 bytes and is 16-byte aligned, while each float occupies 4 bytes and is 4-byte aligned.

With structure alignment, the total size of the structure must be a multiple of the largest byte alignment, which in this case is 16 bytes. An easy way to get the amount of padding needed is to subtract the smaller byte alignments from the largest one: 16 - (4 + 4) = 8. With a padding of 8 bytes, you get a total of 80 bytes, and that’s what’s happening in this section of the code:

```swift
let floatSize = sizeof(Float)
let float4x4ByteAlignment = floatSize * 4
let float4x4Size = floatSize * 16
let paddingBytesSize = float4x4ByteAlignment - floatSize * 2
let uniformsStructSize = float4x4Size + floatSize * 2 + paddingBytesSize
```

You get the sizes of each data member and compute the padding size to get the total size of the struct.

- Finally, you create an appropriately sized empty buffer named uniformBuffer, and copy the contents of each data member one by one using memcpy.
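Laid out in memory, those 80 bytes look like this; the offsets match the memcpy calls above:

```
bytes  0-63   ndcMatrix  (float4x4: 16 floats, 64 bytes)
bytes 64-67   ptmRatio   (float, 4 bytes)
bytes 68-71   pointSize  (float, 4 bytes)
bytes 72-79   padding    (8 bytes, so the total is a multiple of 16)
```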
Before moving on, add a call to refreshUniformBuffer at the end of viewDidLoad:
```swift
refreshUniformBuffer()
```
With the vertex shader in place, Metal will know where your particles are, but you still need a fragment shader to draw them.
Create a Fragment Shader
While the vertex shader determines the position of each vertex, the fragment shader determines the color of each visible fragment on the screen. You don’t need any fancy colors yet for this tutorial, so you’ll use a very simple fragment shader.
Add the following code to the bottom of Shaders.metal:
```metal
fragment half4 basic_fragment() {
  return half4(1.0);
}
```
You create a fragment shader that simply returns the color white using RGBA values of (1, 1, 1, 1). Expect to see white particles soon—but they’ll behave like water, not snow!
Build a Render Pipeline
You’re almost there! The rest of the steps should be familiar to you from the Metal Tutorial for Beginners.
Open ViewController.swift and add the following properties and new method:
```swift
var pipelineState: MTLRenderPipelineState! = nil
var commandQueue: MTLCommandQueue! = nil

func buildRenderPipeline() {
  // 1
  let defaultLibrary = device.newDefaultLibrary()
  let fragmentProgram = defaultLibrary?.newFunctionWithName("basic_fragment")
  let vertexProgram = defaultLibrary?.newFunctionWithName("particle_vertex")

  // 2
  let pipelineDescriptor = MTLRenderPipelineDescriptor()
  pipelineDescriptor.vertexFunction = vertexProgram
  pipelineDescriptor.fragmentFunction = fragmentProgram
  pipelineDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm

  var pipelineError: NSError?
  pipelineState = device.newRenderPipelineStateWithDescriptor(pipelineDescriptor, error: &pipelineError)
  if pipelineState == nil {
    println("Error occurred when creating render pipeline state: \(pipelineError)")
  }

  // 3
  commandQueue = device.newCommandQueue()
}
```
And just like you did with the other setup methods, add a call to this new method at the end of viewDidLoad:
```swift
buildRenderPipeline()
```
Inside buildRenderPipeline, you do the following:

- You use the MTLDevice object you created earlier to access your shader programs. Notice you access them using their names as strings.
- You initialize an MTLRenderPipelineDescriptor with your shaders and a pixel format. Then you use that descriptor to initialize pipelineState.
- Finally, you create an MTLCommandQueue for use later. The command queue is the channel you’ll use to submit work to the GPU.
Render the Particles
The final step is to draw your particles onscreen.
Still in ViewController.swift, add the following method:
```swift
func render() {
  var drawable = metalLayer.nextDrawable()

  let renderPassDescriptor = MTLRenderPassDescriptor()
  renderPassDescriptor.colorAttachments[0].texture = drawable.texture
  renderPassDescriptor.colorAttachments[0].loadAction = .Clear
  renderPassDescriptor.colorAttachments[0].storeAction = .Store
  renderPassDescriptor.colorAttachments[0].clearColor =
    MTLClearColor(red: 0.0, green: 104.0/255.0, blue: 5.0/255.0, alpha: 1.0)

  let commandBuffer = commandQueue.commandBuffer()

  if let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor) {
    renderEncoder.setRenderPipelineState(pipelineState)
    renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0)
    renderEncoder.setVertexBuffer(uniformBuffer, offset: 0, atIndex: 1)
    renderEncoder.drawPrimitives(.Point, vertexStart: 0, vertexCount: particleCount, instanceCount: 1)
    renderEncoder.endEncoding()
  }

  commandBuffer.presentDrawable(drawable)
  commandBuffer.commit()
}
```
Here, you create a render pass descriptor to clear the screen and give it a fresh tint of green. Next, you create a render command encoder that tells the GPU to draw a set of points, set up using the pipeline state and vertex and uniform buffers you created previously. Finally, you use a command buffer to commit the transaction to send the task to the GPU.
Now call render at the end of viewDidLoad:
```swift
render()
```
Build and run the app on your device to see your particles onscreen for the first time:
Moving Water
Now is when most of your work from this tutorial and the last will pay off—getting to see the liquid simulation in action. Currently, you have nine water particles onscreen, but they’re not moving. To get them to move, you need to trigger the following events repeatedly:
- LiquidFun needs to update the physics simulation.
- Metal needs to update the screen.
Open LiquidFun.h and add this method declaration:
```objc
+ (void)worldStep:(CFTimeInterval)timeStep
    velocityIterations:(int)velocityIterations
    positionIterations:(int)positionIterations;
```
Switch to LiquidFun.mm and add this method definition:
```objc
+ (void)worldStep:(CFTimeInterval)timeStep
    velocityIterations:(int)velocityIterations
    positionIterations:(int)positionIterations {
  world->Step(timeStep, velocityIterations, positionIterations);
}
```
You’re adding another Objective-C pass-through method for your wrapper class, this time for the world object’s Step method. This method advances the physics simulation forward by a measure of time called the timeStep.

velocityIterations and positionIterations affect the accuracy and performance of the simulation. Higher values mean greater accuracy, but at a greater performance cost.
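For example, stepping the world forward by one frame at 60 FPS, with the same iteration counts you’ll use below, looks like this:

```swift
// Advance the physics simulation by one 60 FPS frame,
// using 8 velocity iterations and 3 position iterations.
LiquidFun.worldStep(1.0 / 60.0, velocityIterations: 8, positionIterations: 3)
```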
Open ViewController.swift and add the following new method:
```swift
func update(displayLink: CADisplayLink) {
  autoreleasepool {
    LiquidFun.worldStep(displayLink.duration, velocityIterations: 8, positionIterations: 3)
    self.refreshVertexBuffer()
    self.render()
  }
}
```
Next, add the following code at the end of viewDidLoad:
```swift
let displayLink = CADisplayLink(target: self, selector: Selector("update:"))
displayLink.frameInterval = 1
displayLink.addToRunLoop(NSRunLoop.currentRunLoop(), forMode: NSDefaultRunLoopMode)
```
You’re creating a CADisplayLink that calls your new update method every time the screen refreshes. Then in update, you do the following:

- You ask LiquidFun to step through the physics simulation using the time interval between the last execution of update and the current execution, as represented by displayLink.duration.
- You tell the physics simulation to do eight iterations of velocity and three iterations of position. You’re free to change these values depending on how accurate you want the simulation of your particles to be at every time step.
- After LiquidFun steps through the physics simulation, you expect all your particles to have different positions than before, so you call refreshVertexBuffer() to repopulate the vertex buffer with the new positions.
- You send this updated buffer to the render command encoder to show the new positions onscreen.
Build and run, and watch your particles fall off the bottom of the screen:
That’s not quite the effect you’re looking for. You can prevent the particles from falling off by adding walls to your physics world, and to keep things interesting, you’ll also move the particles using the device accelerometer.
Open LiquidFun.h and add these method declarations:
```objc
+ (void *)createEdgeBoxWithOrigin:(Vector2D)origin size:(Size2D)size;
+ (void)setGravity:(Vector2D)gravity;
```
Switch to LiquidFun.mm and add these methods:
```objc
+ (void *)createEdgeBoxWithOrigin:(Vector2D)origin size:(Size2D)size {
  // create the body
  b2BodyDef bodyDef;
  bodyDef.position.Set(origin.x, origin.y);
  b2Body *body = world->CreateBody(&bodyDef);

  // create the edges of the box
  b2EdgeShape shape;

  // bottom
  shape.Set(b2Vec2(0, 0), b2Vec2(size.width, 0));
  body->CreateFixture(&shape, 0);

  // top
  shape.Set(b2Vec2(0, size.height), b2Vec2(size.width, size.height));
  body->CreateFixture(&shape, 0);

  // left
  shape.Set(b2Vec2(0, size.height), b2Vec2(0, 0));
  body->CreateFixture(&shape, 0);

  // right
  shape.Set(b2Vec2(size.width, size.height), b2Vec2(size.width, 0));
  body->CreateFixture(&shape, 0);

  return body;
}

+ (void)setGravity:(Vector2D)gravity {
  world->SetGravity(b2Vec2(gravity.x, gravity.y));
}
```
createEdgeBoxWithOrigin creates a bounding box, given an origin (located at the lower-left corner) and a size. It creates a b2EdgeShape, defines the four edges of the box’s rectangle and attaches them to a new b2Body.

setGravity is another pass-through method for the world object’s SetGravity method. You use it to change the current world’s horizontal and vertical gravity.
Switch to ViewController.swift and add the following import:
```swift
import CoreMotion
```
You’re importing the CoreMotion framework because you need it to work with the accelerometer. Now add the following property:
```swift
let motionManager: CMMotionManager = CMMotionManager()
```
Here you create a CMMotionManager to report on the accelerometer’s state.
Now, create the world boundary by adding the following line inside viewDidLoad, before the call to createMetalLayer:
```swift
LiquidFun.createEdgeBoxWithOrigin(Vector2D(x: 0, y: 0),
    size: Size2D(width: screenWidth / ptmRatio, height: screenHeight / ptmRatio))
```
This should prevent particles from falling off the screen.
Finally, add the following code at the end of viewDidLoad:
```swift
motionManager.startAccelerometerUpdatesToQueue(NSOperationQueue(),
    withHandler: { (accelerometerData, error) -> Void in
  let acceleration = accelerometerData.acceleration
  let gravityX = self.gravity * Float(acceleration.x)
  let gravityY = self.gravity * Float(acceleration.y)
  LiquidFun.setGravity(Vector2D(x: gravityX, y: gravityY))
})
```
Here, you create a closure that receives updates from CMMotionManager whenever there are changes to the accelerometer. The accelerometer contains 3D data on the current device’s orientation. Since you’re only concerned with 2D space, you set the world’s gravity to the x- and y-values of the accelerometer.
Build and run, and tilt your device to move the particles:
The particles will slide around, and with a bit of imagination you can see them as water droplets!
Producing More Water
Now you can go crazy by adding more particles. But with a bounding box constraining your particles to the screen, you don’t want to add too many, or you’ll risk having an unstable simulation.
Open LiquidFun.h and declare the following method:
```objc
+ (void)setParticleLimitForSystem:(void *)particleSystem maxParticles:(int)maxParticles;
```
Switch to LiquidFun.mm and add the method’s implementation:
```objc
+ (void)setParticleLimitForSystem:(void *)particleSystem maxParticles:(int)maxParticles {
  ((b2ParticleSystem *)particleSystem)->SetDestructionByAge(true);
  ((b2ParticleSystem *)particleSystem)->SetMaxParticleCount(maxParticles);
}
```
This method sets a maximum particle limit for a particle system. You enable SetDestructionByAge so that the oldest particles get destroyed first when you exceed the particle limit you set with SetMaxParticleCount.
Next, switch to ViewController.swift and add the following line in viewDidLoad, right after the call to LiquidFun.createParticleSystemWithRadius:
```swift
LiquidFun.setParticleLimitForSystem(particleSystem, maxParticles: 1500)
```
This line sets a limit for the particle system you created so that you won’t be able to create more than 1500 particles at any given time.
Now add this method to handle touches:
```swift
override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
  for touch in touches {
    let touchLocation = touch.locationInView(view)
    let position = Vector2D(x: Float(touchLocation.x) / ptmRatio,
                            y: Float(view.bounds.height - touchLocation.y) / ptmRatio)
    let size = Size2D(width: 100 / ptmRatio, height: 100 / ptmRatio)
    LiquidFun.createParticleBoxForSystem(particleSystem, position: position, size: size)
  }
}
```
Here you implement touchesBegan, the method that gets called when the user taps the screen. When called, the method creates a particle box that is 100 points wide by 100 points high at the location of the touch.
Build and run, and tap the screen repeatedly to produce more particles:
Have fun!
Cleaning up After Yourself
Since LiquidFun runs on C++ and isn’t covered by Swift’s automatic reference counting, you have to make sure to clean up your world object after you’re done with it. This can be done in a few easy steps. As usual, you start with some LiquidFun wrapper methods.
Open LiquidFun.h and declare this method:
```objc
+ (void)destroyWorld;
```
Quickly switch to LiquidFun.mm and add the following code:
```objc
+ (void)destroyWorld {
  delete world;
  world = NULL;
}
```
You’re adding a method that deletes the world object you created in createWorldWithGravity. All accompanying particle systems and physics bodies inside the world will be deleted along with it.
You need to call this method the moment you no longer need your physics simulation to run. For now, since the whole simulation is running in ViewController, you’ll delete the world only when ViewController ceases to exist.
Open ViewController.swift and add this method:
```swift
deinit {
  LiquidFun.destroyWorld()
}
```
Swift automatically deallocates the ViewController instance when it’s no longer needed. When this happens, Swift calls its deinitializer method, deinit. This is the perfect place to do some additional cleanup yourself, so you destroy the physics world using the wrapper method you just added.
Where to Go From Here?
Here is the final sample project from this tutorial series.
Congratulations! You’ve just learned how to simulate water using LiquidFun. Not only that—you’ve learned how to render your own particle system using Metal and Swift. I’m not sure if you feel the same way, but I’m left with a thirst (pun intended) for more learning!
From here, you can work on improving your simulation further by implementing more of LiquidFun’s features. Here are some of the fun things you can still do:
- Implement and show particle colors and multiple particle systems.
- Mix the colors of particles that collide, similar to the behavior in LiquidSketch.
- Map textures to your particle system.
- Add advanced shader effects for better-looking water.
Do let me know if you’d like to see more tutorials in this series covering the advanced topics above. In the meantime, you can head over to the LiquidFun Programmer’s Guide to learn more.
If you have questions, comments or suggestions on the tutorial, please join the forum discussion below!