Update: This tutorial has been updated for Xcode 8.2 and Swift 3.
Welcome back to our Swift 3 Metal tutorial series!
In the first part of the series, you learned how to get started with Metal and render a simple 2D triangle.
In the second part of the series, you learned how to set up a series of transformations to move from a triangle to a full 3D cube.
In this third part of the series, you’ll learn how to add a texture to the cube. As you work through this tutorial, you’ll learn:
- How to reuse uniform buffers
- How to apply textures to a 3D model
- How to add touch input to your app
- How to debug Metal
Dust off your guitars — it’s time to rock Metal!
Getting Started
First, download the starter project. It’s very similar to the app at the end of part two, but with a few modifications as explained below.

Previously, ViewController was a heavy lifter. Even though you’d refactored it, it still had more than one responsibility. Now ViewController is split into two classes:

- MetalViewController: The base class that contains the generic Metal setup code.
- MySceneViewController: A subclass that contains the code specific to this app for creating and rendering the cube model.
The most important part to note is the new protocol MetalViewControllerDelegate:

```swift
protocol MetalViewControllerDelegate: class {
  func updateLogic(timeSinceLastUpdate: CFTimeInterval)
  func renderObjects(drawable: CAMetalDrawable)
}
```
This establishes callbacks from MetalViewController so that your app knows when to update logic and when to render.
In MySceneViewController, you set yourself as the delegate and then implement the MetalViewControllerDelegate methods, as sketched below. This is where all the cube rendering and updating action happens.
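To make that concrete, here’s a minimal sketch of what the conformance might look like; the metalViewControllerDelegate property name is an assumption, and the starter project may wire this up slightly differently:

```swift
import UIKit
import Metal
import QuartzCore

// A minimal sketch of adopting MetalViewControllerDelegate.
// `metalViewControllerDelegate` is an assumed property name on MetalViewController.
class MySceneViewController: MetalViewController, MetalViewControllerDelegate {

  override func viewDidLoad() {
    super.viewDidLoad()
    self.metalViewControllerDelegate = self  // register for the callbacks
  }

  func updateLogic(timeSinceLastUpdate: CFTimeInterval) {
    // advance the cube's rotation and other animation state here
  }

  func renderObjects(drawable: CAMetalDrawable) {
    // encode the cube's draw calls into the drawable here
  }
}
```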
Now that you’re up to speed on the changes from part two, it’s time to move forward and delve deeper into the world of Metal.
Reusing Uniform Buffers (optional)
In the previous part of this series, you learned about allocating new uniform buffers for every new frame — and you also learned that it’s not very efficient.
So, the time has come to change your ways and make Metal sing, like an epic hair-band guitar solo. But every great solution starts with identifying the actual problem.
The Problem
In the render method in Node.swift, find:
```swift
let uniformBuffer = device.makeBuffer(length: MemoryLayout<Float>.size * Matrix4.numberOfElements() * 2, options: [])
```
Take a good look at this monster! This method is called 60 times per second, and you create a new buffer each time it’s called.
Since this is a performance issue, you’ll want to compare stats before and after optimization.
Build and run the app, open the Debug Navigator tab and select the FPS row.
You should have numbers similar to these:
You’ll return to those numbers after optimization, so you may want to grab a screencap or simply jot down the stats before you move on.
The Solution
The solution is that instead of allocating a buffer each time, you’ll reuse a pool of buffers.
To keep your code clean, you’ll encapsulate all of the logic to create and reuse buffers into a helper class named BufferProvider.
You can visualize the class as follows:
BufferProvider will be responsible for creating a pool of buffers, and it will have a method to get the next available reusable buffer. This is kind of like UITableViewCell reuse!
Now it’s time to dig in and make some magic happen. Create a new Swift class named BufferProvider and make it a subclass of NSObject.
First import Metal at the top of the file:
```swift
import Metal
```
Now, add these properties to the class:
```swift
// 1
let inflightBuffersCount: Int
// 2
private var uniformsBuffers: [MTLBuffer]
// 3
private var avaliableBufferIndex: Int = 0
```
You’ll get some errors at the moment due to a missing initializer, but you’ll fix those shortly. For now, review each property you just added:
1. An Int that will store the number of buffers stored by BufferProvider. In the diagram above, this equals 3.
2. An array that will store the buffers themselves.
3. The index of the next available buffer. In your case, it will change like this: 0 -> 1 -> 2 -> 0 -> 1 -> 2 -> 0 -> …
Now add the following initializer:
```swift
init(device: MTLDevice, inflightBuffersCount: Int, sizeOfUniformsBuffer: Int) {
  self.inflightBuffersCount = inflightBuffersCount
  uniformsBuffers = [MTLBuffer]()

  for _ in 0..<inflightBuffersCount {
    let uniformsBuffer = device.makeBuffer(length: sizeOfUniformsBuffer, options: [])
    uniformsBuffers.append(uniformsBuffer)
  }
}
```
Here you create a number of buffers equal to the inflightBuffersCount parameter passed into this initializer, and append them to the array.
Now add a method to fetch the next available buffer and copy some data into it:
```swift
func nextUniformsBuffer(projectionMatrix: Matrix4, modelViewMatrix: Matrix4) -> MTLBuffer {
  // 1
  let buffer = uniformsBuffers[avaliableBufferIndex]
  // 2
  let bufferPointer = buffer.contents()
  // 3
  memcpy(bufferPointer, modelViewMatrix.raw(), MemoryLayout<Float>.size * Matrix4.numberOfElements())
  memcpy(bufferPointer + MemoryLayout<Float>.size * Matrix4.numberOfElements(), projectionMatrix.raw(), MemoryLayout<Float>.size * Matrix4.numberOfElements())
  // 4
  avaliableBufferIndex += 1
  if avaliableBufferIndex == inflightBuffersCount {
    avaliableBufferIndex = 0
  }
  return buffer
}
```
Reviewing each section in turn:
1. Fetch the MTLBuffer from the uniformsBuffers array at avaliableBufferIndex.
2. Get a void * pointer from the MTLBuffer.
3. Copy the passed-in matrix data into the buffer using memcpy.
4. Increment avaliableBufferIndex.
You’re almost done: you just need to set up the rest of the code to use this.
To do this, open Node.swift, and add this new property:
```swift
var bufferProvider: BufferProvider
```
Find init and add this at the end of the method:

```swift
self.bufferProvider = BufferProvider(device: device, inflightBuffersCount: 3, sizeOfUniformsBuffer: MemoryLayout<Float>.size * Matrix4.numberOfElements() * 2)
```
Finally, inside render, replace this code:

```swift
let uniformBuffer = device.makeBuffer(length: MemoryLayout<Float>.size * Matrix4.numberOfElements() * 2, options: [])
let bufferPointer = uniformBuffer.contents()
memcpy(bufferPointer, nodeModelMatrix.raw(), MemoryLayout<Float>.size * Matrix4.numberOfElements())
memcpy(bufferPointer + MemoryLayout<Float>.size * Matrix4.numberOfElements(), projectionMatrix.raw(), MemoryLayout<Float>.size * Matrix4.numberOfElements())
```
With this far more elegant code:
```swift
let uniformBuffer = bufferProvider.nextUniformsBuffer(projectionMatrix: projectionMatrix, modelViewMatrix: nodeModelMatrix)
```
Build and run. Everything should work just as well as it did before you added bufferProvider:
A Wild Race Condition Appears!
Things are running smoothly, but there is a problem that could cause you some major pain later.
Have a look at this graph (and the explanation below):
Currently, the CPU gets the “next available buffer”, fills it with data, and then sends it to the GPU for processing.
But since there’s no guarantee about how long the GPU takes to render each frame, there could be a situation where you’re filling buffers on the CPU faster than the GPU can deal with them. In that case, you could find yourself in a scenario where you need a buffer on the CPU, even though it’s in use on the GPU.
On the graph above, the CPU wants to encode the third frame while the GPU draws the first frame, but its uniform buffer is still in use.
So how do you fix this?
The easiest way is to increase the number of buffers in the reuse pool so that it’s unlikely for the CPU to be ahead of the GPU. This would probably fix it, but wouldn’t be 100% safe.
Patience. That’s what you need to solve this problem like a real Metal ninja.
Like A Ninja
Like an undisciplined ninja, the CPU lacks patience, and that’s the problem. It’s good that the CPU can encode commands so quickly, but it wouldn’t hurt the CPU to wait a bit to avoid this race condition.
Fortunately, it’s easy to “train” the CPU to wait when the buffer it wants is still in use.
For this task you’ll use semaphores, a low-level synchronization primitive. Basically, a semaphore lets you keep track of how many units of a limited resource are available, and it blocks when no resources are left.
Here’s how you’ll use a semaphore in this example:
- Initialize with the number of buffers. You’ll be using the semaphore to keep track of how many buffers are currently in use on the GPU, so you’ll initialize the semaphore with the number of buffers that are available (3 to start in this case).
- Wait before accessing a buffer. Every time you need to access a buffer, you’ll ask the semaphore to “wait”. If a buffer is available, you’ll continue running as usual (but decrement the count on the semaphore). If all buffers are in use, this will block the thread until one becomes available. This should be a very short wait in practice as the GPU is fast.
- Signal when done with a buffer. When the GPU is done with a buffer, you will “signal” the semaphore to track that it’s available again.
Note: To learn more about semaphores, check out this great explanation.
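If you haven’t used DispatchSemaphore before, here’s a tiny standalone sketch (not part of the project) of the wait/signal pattern, with three resources shared among four workers:

```swift
import Foundation

let semaphore = DispatchSemaphore(value: 3)    // three resources available

for i in 1...4 {
  DispatchQueue.global().async {
    semaphore.wait()                           // blocks if all three are taken
    print("worker \(i) acquired a resource")
    Thread.sleep(forTimeInterval: 0.1)         // pretend to use the resource
    semaphore.signal()                         // release it for the next worker
  }
}
```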
Now apply the same pattern in your project. Go to BufferProvider.swift and add the following property:
```swift
var avaliableResourcesSemaphore: DispatchSemaphore
```
Now add this to the top of init:

```swift
avaliableResourcesSemaphore = DispatchSemaphore(value: inflightBuffersCount)
```
Here you create your semaphore with an initial count equal to the number of available buffers.
Now open Node.swift and add this at the top of render:

```swift
_ = bufferProvider.avaliableResourcesSemaphore.wait(timeout: DispatchTime.distantFuture)
```
This will make the CPU wait in case bufferProvider.avaliableResourcesSemaphore has no free resources.
Now you need to signal the semaphore when the resource becomes available.
While you’re still in render, find:

```swift
let commandBuffer = commandQueue.makeCommandBuffer()
```
And add this below:
```swift
commandBuffer.addCompletedHandler { (_) in
  self.bufferProvider.avaliableResourcesSemaphore.signal()
}
```
When the GPU finishes rendering, this completion handler runs and signals the semaphore, bumping its count back up.
Also in BufferProvider.swift, add this method:
```swift
deinit {
  for _ in 0..<self.inflightBuffersCount {
    self.avaliableResourcesSemaphore.signal()
  }
}
```
deinit simply does a little cleanup before object deletion. Without it, your app could crash if BufferProvider were deallocated while a thread was still waiting on the semaphore.
Build and run. Everything should work as before — ninja style!
Performance Results
You must be eager to see if there’s been any performance improvement. As you did before, open the Debug Navigator tab and select the FPS row.
These are my stats: the CPU Frame Time decreased from 1.7ms to 1.2ms. It looks like a small win, but the more objects you draw, the bigger the payoff becomes. Note that your actual results will depend on the device you’re using.
Texturing
So, what are textures? Simply put, textures are 2D images that are typically mapped to 3D models.
Think about some real-life objects, such as an orange. How would the orange’s texture look in Metal? Probably something like this:

If you wanted to render an orange, you’d first create a sphere-like 3D model, then you’d use a texture similar to the one above, and Metal would map it onto the model.
Texture Coordinates
Unlike OpenGL, whose texture coordinates originate in the bottom-left corner, Metal’s textures originate in the top-left corner. Standards — aren’t they great?
Here’s a sneak peek of the texture you’ll use in this tutorial.
With 3D graphics, it’s typical to see the texture coordinate axis marked with letter s for horizontal and t for vertical, just like the image above.
To differentiate between iOS device pixels and texture pixels, you’ll refer to texture pixels as texels.
Your texture has 512×512 texels. In this tutorial, you’ll use normalized coordinates, which means that coordinates within the texture always fall in the range 0 to 1. Therefore:

- The top-left corner has the coordinates (0.0, 0.0)
- The top-right corner has the coordinates (1.0, 0.0)
- The bottom-left corner has the coordinates (0.0, 1.0)
- The bottom-right corner has the coordinates (1.0, 1.0)
When you map this texture to your cube, normalized coordinates will be important to understand.
Using normalized coordinates isn’t mandatory, but it has some advantages. For example, say you want to swap the texture for one with a resolution of 256×256 texels. If you use normalized coordinates, it’ll “just work”, as long as the new texture maps correctly.
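As a quick worked example, here’s how absolute texel positions map to normalized (s, t) values; the helper below is hypothetical, purely for illustration:

```swift
// Hypothetical helper: converts texel coordinates to normalized (s, t).
func normalized(texelX: Float, texelY: Float, textureSize: Float) -> (s: Float, t: Float) {
  return (s: texelX / textureSize, t: texelY / textureSize)
}

let coords = normalized(texelX: 256, texelY: 128, textureSize: 512)
// coords is (s: 0.5, t: 0.25). The same (0.5, 0.25) points at the same relative
// spot in a 256×256 texture, which is why swapping resolutions "just works".
```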
Using Textures in Metal
In Metal, a texture is represented by any object that conforms to the MTLTexture protocol. Metal supports many texture types, but for now all you need is a standard 2D texture (MTLTextureType.type2D).
Another important protocol is MTLSamplerState
. An object that conforms to this protocol basically instructs the GPU how to use the texture.
When you pass a texture, you’ll pass the sampler as well. When using multiple textures that need to be treated similarly, you use the same sampler.
Here is a small visual to help illustrate how you’ll work with textures:
For your convenience, the project contains a special, handcrafted class named MetalTexture that holds all the code to create an MTLTexture from an image file in the bundle.
Note: I’m not going to delve into it here, but if you want to learn how to create MTLTexture, refer to this post on MetalByExample.com.
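For the curious, the core of that process boils down to something like the sketch below, assuming you’ve already extracted raw RGBA bytes from the image (the function is hypothetical, not part of MetalTexture):

```swift
import Metal

// Hypothetical illustration: wrapping raw RGBA bytes in an MTLTexture.
func makeTexture(device: MTLDevice, bytes: UnsafeRawPointer, width: Int, height: Int) -> MTLTexture {
  let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm, width: width, height: height, mipmapped: true)
  let texture = device.makeTexture(descriptor: descriptor)

  // Copy the image bytes into mip level 0; the smaller levels can be generated
  // later with a blit command encoder's generateMipmaps(for:) call.
  let region = MTLRegionMake2D(0, 0, width, height)
  texture.replace(region: region, mipmapLevel: 0, withBytes: bytes, bytesPerRow: width * 4)
  return texture
}
```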
MetalTexture
Now that you understand how this will work, it’s time to bring this texture to life. Download and copy MetalTexture.swift to your project and open it.
There are two important methods in this file. The first is:
```swift
init(resourceName: String, ext: String, mipmaped: Bool)
```
Here you pass the name of the file and its extension, and you also indicate whether you want mipmaps.
But wait, what’s a mipmap?
When mipmaped is true, the texture loads an array of images instead of a single image, where each image in the array is half the size of the previous one. The GPU automatically selects the best mip level from which to read texels.
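For example, a 512×512 texture gets a ten-level mip chain: 512, 256, 128, 64, 32, 16, 8, 4, 2 and 1 texels per side. Here’s a quick, hypothetical helper that computes the level count for a square texture:

```swift
import Darwin

// Hypothetical helper: number of mip levels for a square texture.
func mipLevelCount(forSize size: Int) -> Int {
  return Int(log2(Float(size))) + 1
}

mipLevelCount(forSize: 512)  // 10
```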
The other method to note is this:
```swift
func loadTexture(device: MTLDevice, commandQ: MTLCommandQueue, flip: Bool)
```
This method is called when MetalTexture actually creates the MTLTexture. To create this object, you need a device object (similar to the way you create buffers). You also pass in an MTLCommandQueue, which is used when mipmap levels are generated. Textures are usually loaded upside down, so the method also has a flip parameter to deal with that.
Okay — it’s time to put it all together.
Open Node.swift, and add two new variables:
```swift
var texture: MTLTexture
lazy var samplerState: MTLSamplerState? = Node.defaultSampler(device: self.device)
```
For now, Node holds just one texture and one sampler.
Now add the following method to the end of the file:
```swift
class func defaultSampler(device: MTLDevice) -> MTLSamplerState {
  let sampler = MTLSamplerDescriptor()
  sampler.minFilter             = MTLSamplerMinMagFilter.nearest
  sampler.magFilter             = MTLSamplerMinMagFilter.nearest
  sampler.mipFilter             = MTLSamplerMipFilter.nearest
  sampler.maxAnisotropy         = 1
  sampler.sAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.tAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.rAddressMode          = MTLSamplerAddressMode.clampToEdge
  sampler.normalizedCoordinates = true
  sampler.lodMinClamp           = 0
  sampler.lodMaxClamp           = FLT_MAX
  return device.makeSamplerState(descriptor: sampler)
}
```
This method generates a simple texture sampler that basically holds a bunch of flags. Here you’ve enabled “nearest-neighbor” filtering, which is faster than “linear”, as well as “clamp to edge”, which instructs Metal how to deal with out-of-range values. You won’t have out-of-range values in this tutorial, but it’s always smart to code defensively.
Find the following code in render:

```swift
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, at: 0)
```
And add this below it:
```swift
renderEncoder.setFragmentTexture(texture, at: 0)
if let samplerState = samplerState {
  renderEncoder.setFragmentSamplerState(samplerState, at: 0)
}
```
This simply passes the texture and sampler to the shaders. It’s similar to what you did with vertex and uniform buffers, except that now you pass them to a fragment shader because you want to map texels to fragments.
Now you need to modify init. Change its declaration so it matches this:

```swift
init(name: String, vertices: Array<Vertex>, device: MTLDevice, texture: MTLTexture) {
```
Now find this:
```swift
vertexCount = vertices.count
```
And add this just below it:
```swift
self.texture = texture
```
Each vertex needs to map to some coordinates on the texture. So open Vertex.swift and replace its contents with the following:
```swift
struct Vertex {
  var x, y, z: Float     // position data
  var r, g, b, a: Float  // color data
  var s, t: Float        // texture coordinates

  func floatBuffer() -> [Float] {
    return [x, y, z, r, g, b, a, s, t]
  }
}
```
This adds two floats that hold texture coordinates.
Now open Cube.swift, and change init so it looks like this:

```swift
init(device: MTLDevice, commandQ: MTLCommandQueue) {
  // 1
  //Front
  let A = Vertex(x: -1.0, y:  1.0, z:  1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.25)
  let B = Vertex(x: -1.0, y: -1.0, z:  1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.50)
  let C = Vertex(x:  1.0, y: -1.0, z:  1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.50)
  let D = Vertex(x:  1.0, y:  1.0, z:  1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.25)

  //Left
  let E = Vertex(x: -1.0, y:  1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.00, t: 0.25)
  let F = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.00, t: 0.50)
  let G = Vertex(x: -1.0, y: -1.0, z:  1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.25, t: 0.50)
  let H = Vertex(x: -1.0, y:  1.0, z:  1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.25, t: 0.25)

  //Right
  let I = Vertex(x:  1.0, y:  1.0, z:  1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.50, t: 0.25)
  let J = Vertex(x:  1.0, y: -1.0, z:  1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.50, t: 0.50)
  let K = Vertex(x:  1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.75, t: 0.50)
  let L = Vertex(x:  1.0, y:  1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.75, t: 0.25)

  //Top
  let M = Vertex(x: -1.0, y:  1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.00)
  let N = Vertex(x: -1.0, y:  1.0, z:  1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.25)
  let O = Vertex(x:  1.0, y:  1.0, z:  1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.25)
  let P = Vertex(x:  1.0, y:  1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.00)

  //Bot
  let Q = Vertex(x: -1.0, y: -1.0, z:  1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.25, t: 0.50)
  let R = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.25, t: 0.75)
  let S = Vertex(x:  1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 0.50, t: 0.75)
  let T = Vertex(x:  1.0, y: -1.0, z:  1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 0.50, t: 0.50)

  //Back
  let U = Vertex(x:  1.0, y:  1.0, z: -1.0, r: 1.0, g: 0.0, b: 0.0, a: 1.0, s: 0.75, t: 0.25)
  let V = Vertex(x:  1.0, y: -1.0, z: -1.0, r: 0.0, g: 1.0, b: 0.0, a: 1.0, s: 0.75, t: 0.50)
  let W = Vertex(x: -1.0, y: -1.0, z: -1.0, r: 0.0, g: 0.0, b: 1.0, a: 1.0, s: 1.00, t: 0.50)
  let X = Vertex(x: -1.0, y:  1.0, z: -1.0, r: 0.1, g: 0.6, b: 0.4, a: 1.0, s: 1.00, t: 0.25)

  // 2
  let verticesArray: Array<Vertex> = [
    A,B,C ,A,C,D,   //Front
    E,F,G ,E,G,H,   //Left
    I,J,K ,I,K,L,   //Right
    M,N,O ,M,O,P,   //Top
    Q,R,S ,Q,S,T,   //Bot
    U,V,W ,U,W,X    //Back
  ]

  // 3
  let texture = MetalTexture(resourceName: "cube", ext: "png", mipmaped: true)
  texture.loadTexture(device: device, commandQ: commandQ, flip: true)

  super.init(name: "Cube", vertices: verticesArray, device: device, texture: texture.texture)
}
```
Taking each numbered comment in turn:
1. As you create each vertex, you also specify its texture coordinates. To understand this better, study the following image, and make sure you understand the s and t values of each vertex.

   Note that you also need to create vertices for each side of the cube individually, rather than reusing vertices between sides, because the texture coordinates might not match up otherwise. For example, the corner at (-1, -1, -1) appears three times with different coordinates: as F (s: 0.00, t: 0.50) on the left face, R (0.25, 0.75) on the bottom and W (1.00, 0.50) on the back. It’s okay if the process of adding extra vertices is a little confusing at this stage — your brain will grasp it soon enough.

2. Here you form triangles, just as you did in part two of this tutorial series.
3. You create and load the texture using the MetalTexture helper class.
Since you aren’t drawing triangles anymore, delete Triangle.swift.
Handling Texture on the GPU
At this point, you’re done working on the CPU side of things, and it’s all GPU from here.
Add this image to your project.
Open Shaders.metal and replace the entire file with the following:
```metal
#include <metal_stdlib>
using namespace metal;

// 1
struct VertexIn{
  packed_float3 position;
  packed_float4 color;
  packed_float2 texCoord;
};

struct VertexOut{
  float4 position [[position]];
  float4 color;
  float2 texCoord;
};

struct Uniforms{
  float4x4 modelMatrix;
  float4x4 projectionMatrix;
};

vertex VertexOut basic_vertex(
  const device VertexIn* vertex_array [[ buffer(0) ]],
  const device Uniforms& uniforms [[ buffer(1) ]],
  unsigned int vid [[ vertex_id ]]) {

  float4x4 mv_Matrix = uniforms.modelMatrix;
  float4x4 proj_Matrix = uniforms.projectionMatrix;

  VertexIn VertexIn = vertex_array[vid];

  VertexOut VertexOut;
  VertexOut.position = proj_Matrix * mv_Matrix * float4(VertexIn.position,1);
  VertexOut.color = VertexIn.color;
  // 2
  VertexOut.texCoord = VertexIn.texCoord;

  return VertexOut;
}

// 3
fragment float4 basic_fragment(VertexOut interpolated [[stage_in]],
                               texture2d<float> tex2D [[ texture(0) ]],
                               // 4
                               sampler sampler2D [[ sampler(0) ]]) {
  // 5
  float4 color = tex2D.sample(sampler2D, interpolated.texCoord);
  return color;
}
```
Here are all the things you changed:

1. The vertex structs now contain texture coordinates.
2. You now pass texture coordinates from VertexIn to VertexOut.
3. Here you receive the texture you passed in.
4. Here you receive the sampler.
5. You call sample() on the texture to get the color for the given texture coordinate, using the rules specified in the sampler.
Almost done! Open MySceneViewController.swift and replace this line:
```swift
objectToDraw = Cube(device: device)
```
With this:
```swift
objectToDraw = Cube(device: device, commandQ: commandQueue)
```
Build and run. Your cube should now be texturized!
Colorizing a Texture (Optional)
At this point, you’re ignoring the cube’s color values and simply using the color values from the texture. But what if you want to blend the texture with the object’s color, instead of covering it up?
In the fragment shader, replace this line:
```metal
float4 color = tex2D.sample(sampler2D, interpolated.texCoord);
```
With:
```metal
float4 color = interpolated.color * tex2D.sample(sampler2D, interpolated.texCoord);
```
You should get something like this:
You did this just to see how you can combine colors inside the fragment shader. And yes, it’s as simple as doing a little multiplication.
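The multiplication happens component-wise, as the small playground-style sketch below illustrates (it uses simd types purely for demonstration):

```swift
import simd

// Component-wise multiplication, just like the fragment shader performs:
let vertexColor = float4(1.0, 0.0, 0.0, 1.0)   // pure red
let texel       = float4(0.5, 0.5, 0.5, 1.0)   // mid-gray
let combined    = vertexColor * texel          // (0.5, 0.0, 0.0, 1.0): a darker red
```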
But don’t continue until you revert that last change — because it really doesn’t look that good. :]
Adding User Input
All this texturing is cool, but it’s rather static. Wouldn’t it be cool if you could rotate the cube with your finger and see your beautiful texturing work from every angle?
You can use a UIPanGestureRecognizer to detect user interactions.
Open MySceneViewController.swift, and add these two new properties:
```swift
let panSensivity: Float = 5.0
var lastPanLocation: CGPoint!
```
Now add two new methods:
```swift
//MARK: - Gesture related
// 1
func setupGestures() {
  let pan = UIPanGestureRecognizer(target: self, action: #selector(MySceneViewController.pan))
  self.view.addGestureRecognizer(pan)
}

// 2
func pan(panGesture: UIPanGestureRecognizer) {
  if panGesture.state == UIGestureRecognizerState.changed {
    let pointInView = panGesture.location(in: self.view)
    // 3
    let xDelta = Float((lastPanLocation.x - pointInView.x) / self.view.bounds.width) * panSensivity
    let yDelta = Float((lastPanLocation.y - pointInView.y) / self.view.bounds.height) * panSensivity
    // 4
    objectToDraw.rotationY -= xDelta
    objectToDraw.rotationX -= yDelta
    lastPanLocation = pointInView
  } else if panGesture.state == UIGestureRecognizerState.began {
    lastPanLocation = panGesture.location(in: self.view)
  }
}
```
Here’s what’s going on in the code above:
1. Create a pan gesture recognizer and add it to your view.
2. Check whether the touch moved.
3. When the touch moves, calculate how much it moved using normalized coordinates. You also apply panSensivity to control the rotation speed.
4. Apply the changes to the cube by setting the rotation properties.
Now add the following to the end of viewDidLoad():

```swift
setupGestures()
```
Build and run.
Hmmm, the cube spins all by itself. Why is that? Think through what you just did and see if you can identify the problem here and how to solve it. Open the spoiler to check if your assumption is correct.
Debugging Metal
Like any code, you’ll need to do a little debugging to make sure your work is free of errors. And if you look closely, you’ll notice that at some angles, the sides are a little “crispy”.
To fully understand the problem, you’ll need to debug. Fortunately, Metal comes with some stellar tools to help you.
While the app is running, press the Capture the GPU Frame button.
Pressing the button will automatically pause the app on a breakpoint; Xcode will then collect all values and states of this single frame.
Xcode may put you into assistant mode, meaning that it splits your main area into two. You don’t need all that, so feel free to return to regular mode. Also, select All MTL Objects in the debug area as shown in the screenshot:
In the left sidebar, select the final line (the commit) and at last, you have proof that you’re actually drawing in triangles, not squares!
In the debug area, find and open the Textures group.
Why do you have two textures? You only passed in one, remember?
One texture is the cube image you passed in; the other is the drawable’s texture, which the fragment shader renders into and which is then shown on the screen.

The weird part is that this second texture has non-Retina resolution. Ah-ha! The reason your cube was a bit crispy is that the non-Retina texture is stretched to fill the screen. You’ll fix this in a moment.
Fixing Drawable Texture Resizing
There is one more problem to debug and solve before you can officially declare your mastery of Metal. Run your app again and rotate the device into landscape mode.
Not the best view, eh?
The problem here is that when the device rotates, its bounds change. However, the displayed texture dimensions don’t have any reason to change.
Fortunately, it’s pretty easy to fix. Open MetalViewController.swift and take a look at this setup code in viewDidLoad:

```swift
device = MTLCreateSystemDefaultDevice()
metalLayer = CAMetalLayer()
metalLayer.device = device
metalLayer.pixelFormat = .bgra8Unorm
metalLayer.framebufferOnly = true
metalLayer.frame = view.layer.frame
view.layer.addSublayer(metalLayer)
```
The important line is metalLayer.frame = view.layer.frame, which sets the layer frame just once. You just need to update it when the device rotates.
Override viewDidLayoutSubviews like so:

```swift
//1
override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()

  if let window = view.window {
    let scale = window.screen.nativeScale
    let layerSize = view.bounds.size
    //2
    view.contentScaleFactor = scale
    metalLayer.frame = CGRect(x: 0, y: 0, width: layerSize.width, height: layerSize.height)
    metalLayer.drawableSize = CGSize(width: layerSize.width * scale, height: layerSize.height * scale)
  }
}
```
Here’s what the code is doing:
1. Gets the display’s nativeScale for the device (2 for the iPhone 5s, 6 and iPads; 3 for the iPhone 6 Plus).
2. Applies the scale to increase the drawable texture size, as the worked example below shows.
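To make the numbers concrete, here’s a hypothetical worked example for an iPhone 6, which has a 375×667-point view at nativeScale 2:

```swift
import UIKit

// The drawable should match the display's physical pixels.
let layerSize = CGSize(width: 375, height: 667)  // view bounds in points
let scale: CGFloat = 2.0                         // nativeScale on an iPhone 6
let drawableSize = CGSize(width: layerSize.width * scale,
                          height: layerSize.height * scale)
// drawableSize is 750×1334 pixels, the iPhone 6's native resolution
```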
Now delete the following line in viewDidLoad:

```swift
metalLayer.frame = view.layer.frame
```
Build and run. Here is a classic before-and-after comparison.
The difference is even more obvious when you’re on an iPhone 6+.
Now rotate to landscape — does it work?
It’s rather flat now, but at least the background is a rich green and the edges look far better.
If you repeat the steps from the debug section, you’ll see the texture’s dimensions are now correct. So, what’s the problem?

Think through what you just did and try to figure out what’s causing you pain. (Hint: the projection matrix is still created once with the launch-time aspect ratio; it needs to be recalculated when the view’s bounds change.)
Where To Go From Here?
Here is the final example project from this Swift 3 Metal Tutorial.
Nicely done! Take a moment to review what you’ve done in this tutorial.
- You created BufferProvider to cleverly reuse uniform buffers instead of creating new buffers every time.
- You added MetalTexture and loaded an MTLTexture with it.
- You modified the structure of Vertex so it stores corresponding texture coordinates from MTLTexture.
- You modified Cube so it contains 24 vertices, each with its own texture coordinates.
- You modified the shaders to receive texture coordinates of the fragments, and then you read the corresponding texel using sample().
- You added a cool rotation UI effect with UIPanGestureRecognizer.
- You debugged the Metal frame and identified why it rendered a subpar image.
- You resized the drawable texture in viewDidLayoutSubviews to fix the rotation issue and improve image quality.
Here are some great resources to deepen your understanding of Metal:
- Apple’s Metal for Developers page, which has tons of links to documentation, videos and sample code
- Apple’s Metal Programming Guide
- Apple’s Metal Shading Language Guide
- The Metal videos from WWDC 2014
- MetalByExample.com.
You also might enjoy the Beginning Metal course on our site, where we explain these same concepts in video form, but with even more detail.
Thank you for joining me for this tour through Metal. As you can see, it’s a powerful technology that’s relatively easy to implement once you understand how it works.
If you have questions, comments or Metal discoveries to share, please leave them in the comments below!