When people say "toon outlines", they are referring to any technique that renders lines around objects. Like cel shading, outlines can help your game look more stylized and give the impression that objects are drawn or inked. You can see examples of this in games such as Okami, Borderlands and Dragon Ball FighterZ.
In this tutorial, you will learn how to:
- Create outlines using an inverted mesh
- Create outlines using post processing and convolution
- Create and use material functions
- Sample neighboring pixels
If you are new to post process materials, you should go through our cel shading tutorial first. This tutorial will use some of the concepts presented in the cel shading tutorial.
Getting Started
Start by downloading the materials for this tutorial (you can find a link at the top or bottom of this tutorial). Unzip it, navigate to ToonOutlineStarter and open ToonOutline.uproject. You will see the following scene:
To start, you will create outlines by using an inverted mesh.
Inverted Mesh Outlines
The idea behind this method is to duplicate your target mesh. Then, make the duplicate a solid color (usually black) and expand it so that it is slightly larger than the original mesh. This will give you a silhouette.
If you use the duplicate as is, it will completely block the original.
To fix this, you can invert the normals of the duplicate. With backface culling enabled, you will see the inward faces instead of the outward faces.
This will allow the original to show through the duplicate. And because the duplicate is larger than the original, you will get an outline.
Advantages:
- You will always have clean lines since the outline is made up of polygons
- Appearance and thickness are easily adjustable by moving vertices
- Outlines shrink over distance. This can also be a disadvantage.
Disadvantages:
- Generally, does not outline details inside the mesh
- Since the outline consists of polygons, it is prone to clipping. You can see this in the example above, where the duplicate overlaps the ground.
- Possibly bad for performance. This depends on how many polygons your mesh has. Since you are using duplicates, you are basically doubling your polygon count.
- Works better on smooth and convex meshes. Hard edges and concave areas will create holes in the outline. You can see this in the image below.
Generally, you should create the inverted mesh in a modelling program. This will give you more control over the silhouette. If working with skeletal meshes, it will also allow you to skin the duplicate to the original skeleton. This will allow the duplicate to move with the original mesh.
For this tutorial, you will create the mesh in Unreal rather than a modelling program. The method is slightly different but the concept remains the same.
First, you need to create the material for the duplicate.
Creating the Inverted Mesh Material
For this method, you will mask the outward-facing polygons. This will leave you with the inward-facing polygons.
Navigate to the Materials folder and open M_Inverted. Afterwards, go to the Details panel and adjust the following settings:
- Blend Mode: Set this to Masked. This will allow you to mark areas as visible or invisible. You can adjust the threshold by editing Opacity Mask Clip Value.
- Shading Model: Set this to Unlit. This will make it so lights do not affect the mesh.
- Two Sided: Set this to enabled. By default, Unreal culls backfaces. Enabling this option disables backface culling. If you leave backface culling enabled, you will not be able to see the inward-facing polygons.
Next, create a Vector Parameter and name it OutlineColor. This will control the color of the outline. Connect it to Emissive Color.
To mask the outward-facing polygons, create a TwoSidedSign and multiply it by -1. Connect the result to Opacity Mask.
TwoSidedSign will output 1 for frontfaces and -1 for backfaces. This means frontfaces will be visible and backfaces will be invisible. However, you want the opposite effect. To do this, you reverse the signs by multiplying by -1. Now frontfaces will output -1 and backfaces will output 1.
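If it helps to see this outside the node graph, here is a minimal C++ sketch of the sign flip. The function name is purely illustrative; in the material it is just TwoSidedSign multiplied by -1.

```cpp
#include <cstdio>

// Stand-in for Unreal's TwoSidedSign output: +1 on front faces, -1 on back faces.
float OpacityMask(float twoSidedSign)
{
    return twoSidedSign * -1.0f; // flip the sign so only back faces pass the mask
}

int main()
{
    std::printf("front face: %g\n", OpacityMask(+1.0f)); // -1, below the clip value, masked out
    std::printf("back face:  %g\n", OpacityMask(-1.0f)); // +1, above the clip value, visible
}
```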
Finally, you need a way to control the outline thickness. To do this, add the highlighted nodes:
In Unreal, you can move the position of every vertex using World Position Offset. By multiplying the vertex normal by OutlineThickness, you are making the mesh thicker. Here is a demonstration using the original mesh:
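As a rough sketch of what happens per vertex (the Vec3 type and function name here are illustrative, not Unreal's API), the offset is just the vertex normal scaled by the thickness parameter:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// World Position Offset sketch: push each vertex out along its normal.
Vec3 OutlineOffset(const Vec3& vertexNormal, float outlineThickness)
{
    return { vertexNormal.x * outlineThickness,
             vertexNormal.y * outlineThickness,
             vertexNormal.z * outlineThickness };
}

int main()
{
    Vec3 offset = OutlineOffset({ 0.0f, 0.0f, 1.0f }, 2.0f);
    std::printf("offset: (%g, %g, %g)\n", offset.x, offset.y, offset.z); // (0, 0, 2)
}
```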
At this point, the material is complete. Click Apply and then close M_Inverted.
Now, you need to duplicate the mesh and apply the material you just created.
Duplicating the Mesh
Navigate to the Blueprints folder and open BP_Viking. Add a Static Mesh component as a child of Mesh and name it Outline.
Make sure you have Outline selected and set its Static Mesh to SM_Viking. Afterwards, set its material to MI_Inverted.
MI_Inverted is an instance of M_Inverted. This will allow you to adjust the OutlineColor and OutlineThickness parameters without recompiling.
Click Compile and then close BP_Viking. The viking will now have an outline. You can control the color and thickness by opening MI_Inverted and adjusting the parameters.
That’s it for this method! See if you can create an inverted mesh in your modelling program and then bring it into Unreal.
If you want to create outlines in a different way, you can use post processing instead.
Post Process Outlines
You can create post process outlines by using edge detection. This is a technique which detects discontinuities across regions in an image. Here are a few types of discontinuities you can look for:
Advantages:
- Can apply to the entire scene easily
- Fixed performance cost since the shader always runs for every pixel
- Line width stays the same at various distances. This can also be a disadvantage.
- Lines don’t clip into geometry since it is a post process effect
Disadvantages:
- Usually requires multiple edge detectors to catch all edges. This has an impact on performance.
- Prone to noise. This means edges will show up in areas with a lot of variance.
A common way to do edge detection is to perform convolution on each pixel.
What is Convolution?
In image processing, convolution is an operation on two groups of numbers to produce a single number. First, you take a grid of numbers (known as a kernel) and place the center over each pixel. Below is an example of a 3×3 kernel moving over the top two rows of an image:
For every pixel, multiply each kernel entry by its corresponding pixel. Let’s take the pixel from the top-left corner of the mouth for demonstration. We’ll also convert the image to grayscale to simplify the calculations.
First, place the kernel (we’ll use the same one from before) so that the target pixel is in the center. Afterwards, multiply each kernel element with the pixel it overlaps.
Finally, add all the results together. This will be the new value for the center pixel. In this case, the new value is 0.5 + 0.5 or 1. Here is the image after performing convolution on every pixel:
The kernel you use determines what effect you get. The kernel from the examples is used for edge detection. Here are a few examples of other kernels:
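To make the process concrete, here is a minimal C++ sketch of 2D convolution using the edge detection kernel from the example. This is plain CPU code for illustration, not what Unreal runs, and it clamps at the image border for simplicity.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Convolve a grayscale image (stored as a flat vector) with a 3x3 kernel.
std::vector<float> Convolve(const std::vector<float>& image, int width, int height,
                            const float kernel[3][3])
{
    std::vector<float> result(image.size(), 0.0f);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
            {
                for (int kx = -1; kx <= 1; ++kx)
                {
                    // Clamp to the border so the kernel never reads outside the image.
                    int sx = std::clamp(x + kx, 0, width - 1);
                    int sy = std::clamp(y + ky, 0, height - 1);
                    sum += kernel[ky + 1][kx + 1] * image[sy * width + sx];
                }
            }
            result[y * width + x] = sum;
        }
    }
    return result;
}

int main()
{
    // The edge detection (Laplacian) kernel used in the example above.
    const float laplacian[3][3] = { { 0, 1, 0 }, { 1, -4, 1 }, { 0, 1, 0 } };

    // A tiny 4x4 grayscale image with a bright square in the middle.
    std::vector<float> image = { 0, 0, 0, 0,
                                 0, 1, 1, 0,
                                 0, 1, 1, 0,
                                 0, 0, 0, 0 };

    std::vector<float> edges = Convolve(image, 4, 4, laplacian);
    for (int y = 0; y < 4; ++y)
    {
        for (int x = 0; x < 4; ++x) std::printf("% .0f ", edges[y * 4 + x]);
        std::printf("\n");
    }
}
```

Notice that some results come out negative; that detail matters later when you build the material.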
To detect edges in an image, you can use Laplacian edge detection.
Laplacian Edge Detection
First, what is the kernel for Laplacian edge detection? It’s actually the one you saw in the examples from the last section!
This kernel works for edge detection because the Laplacian measures the change in slope. Areas with greater change diverge from zero, indicating an edge.
To help you understand it, let’s look at the Laplacian in one dimension. The kernel for this would be:
First, place the kernel over an edge pixel and then perform convolution.
This will give you a value of 1 which indicates there was a large change. This means the target pixel is likely to be an edge.
Next, let’s convolve an area with less variance.
Even though the pixels have different values, the gradient is linear. This means there is no change in slope and indicates the target pixel is not an edge.
Below is the image after convolution and a graph with each value plotted. You can see that pixels on an edge are further away from zero.
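If you want to verify the arithmetic yourself, here is a tiny sketch of the 1D case, assuming the kernel shown above is [1, -2, 1]:

```cpp
#include <cstdio>

// 1D Laplacian: multiply the three pixels by [1, -2, 1] and sum the results.
float Laplacian1D(float left, float center, float right)
{
    return 1.0f * left + -2.0f * center + 1.0f * right;
}

int main()
{
    std::printf("edge:        %g\n", Laplacian1D(0.0f, 0.0f, 1.0f)); // 1 -> likely an edge
    std::printf("linear ramp: %g\n", Laplacian1D(0.2f, 0.4f, 0.6f)); // 0 -> not an edge
}
```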
Phew! That was a lot of theory but don’t worry — now comes the fun part. In the next section, you will build a post process material that performs Laplacian edge detection on the depth buffer.
Building the Laplacian Edge Detector
Navigate to the Maps folder and open PostProcess. You will see a black screen. This is because the map contains a Post Process Volume using an empty post process material.
This is the material you will edit to build the edge detector. The first step is to figure out how to sample neighboring pixels.
To get the position of the current pixel, you can use a TextureCoordinate. For example, if the current pixel is in the middle, it will return (0.5, 0.5). This two-component vector is called a UV.
To sample a different pixel, you just need to add an offset to the TextureCoordinate. In a 100×100 image, each pixel has a size of 0.01 in UV space. To sample a pixel to the right, you add 0.01 on the X-axis.
However, there is a problem with this. As the image resolution changes, the pixel size also changes. If you use the same offset (0.01, 0) in a 200×200 image, it will sample two pixels to the right.
To fix this, you can use the SceneTexelSize node which returns the pixel size. To use it, you do something like this:
Since you are going to be sampling multiple pixels, you would have to create this multiple times.
Obviously, this will quickly become messy. Fortunately, you can use material functions to keep your graph clean.
In the next section, you will put the duplicate nodes into the function and create an input for the offset.
Creating the Sample Pixel Function
First, navigate to the Materials\PostProcess folder. To create a material function, click Add New and select Materials & Textures\Material Function.
Rename it to MF_GetPixelDepth and then open it. The graph will have a single FunctionOutput. This is where you will connect the value of the sampled pixel.
First, you need to create an input that will accept an offset. To do this, create a FunctionInput.
This will show up as an input pin when you use the function later.
Now you need to specify a few settings for the input. Make sure you have the FunctionInput selected and then go to the Details panel. Adjust the following settings:
- InputName: Offset
- InputType: Function Input Vector 2. Since the depth buffer is a 2D image, the offset needs to be a Vector 2.
- Use Preview Value as Default: Enabled. If you don’t provide an input value, the function will use the value from Preview Value.
Next, you need to multiply the offset by the pixel size. Then, you need to add the result to the TextureCoordinate. To do this, add the highlighted nodes:
Finally, you need to sample the depth buffer using the provided UVs. Add a SceneDepth and connect everything like so:
Summary:
- Offset will take in a Vector 2 and multiply it by SceneTexelSize. This will give you an offset in UV space.
- Add the offset to TextureCoordinate to get a pixel that is (x, y) pixels away from the current pixel
- SceneDepth will use the provided UVs to sample the appropriate pixel and then output it
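In shader terms, the function boils down to a multiply-add followed by a depth lookup. Here is a rough C++ sketch of that logic, where GetTexelSize() and SampleSceneDepth() are hypothetical stand-ins for the SceneTexelSize and SceneDepth nodes:

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Hypothetical stand-ins for the SceneTexelSize and SceneDepth nodes.
Vec2  GetTexelSize()            { return { 1.0f / 1920.0f, 1.0f / 1080.0f }; }
float SampleSceneDepth(Vec2 uv) { return uv.x * 100.0f; } // fake depth, just for the demo

// Sketch of MF_GetPixelDepth: offset the current UV by a pixel offset, then sample depth.
float GetPixelDepth(Vec2 uv, Vec2 offsetInPixels)
{
    Vec2 texel = GetTexelSize();
    Vec2 sampleUV = { uv.x + offsetInPixels.x * texel.x,
                      uv.y + offsetInPixels.y * texel.y };
    return SampleSceneDepth(sampleUV);
}

int main()
{
    Vec2 uv = { 0.5f, 0.5f }; // TextureCoordinate of the current pixel
    std::printf("center depth: %g\n", GetPixelDepth(uv, { 0.0f, 0.0f }));
    std::printf("right depth:  %g\n", GetPixelDepth(uv, { 1.0f, 0.0f }));
}
```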
That’s it for the material function. Click Apply and then close MF_GetPixelDepth.
Next, you need to use the function to perform convolution on the depth buffer.
Performing Convolution
First, you need to create the offsets for each pixel. Since the corners of the kernel are always zero, you can skip them. This leaves you with the left, right, top and bottom pixels.
Open PP_Outline and create four Constant2Vector nodes. Set them to the following:
- (-1, 0)
- (1, 0)
- (0, -1)
- (0, 1)
Next, you need to sample the five pixels in the kernel. Create five MaterialFunctionCall nodes and set each to MF_GetPixelDepth. Afterwards, connect each offset to its own function.
This will give you the depth values for each pixel.
Next is the multiplication stage. Since the multiplier for neighboring pixels is 1, you can skip the multiplication. However, you still need to multiply the center pixel (bottom function) by -4.
Next, you need to sum up all the values. Create four Add nodes and connect them like so:
If you remember the graph of pixel values, you'll see that some of them are negative. If you use the material as is, the negative pixels will appear black because they are below zero. To fix this, you can get the absolute value, which converts any input to a positive value. Add an Abs and connect everything like so:
Summary:
- The MF_GetPixelDepth nodes will get the depth value for the center, left, right, top and bottom pixels
- Multiply each pixel by its corresponding kernel value. In this case, you only need to multiply the center pixel.
- Calculate the sum of all the pixels
- Get the absolute value of the sum. This will prevent pixels with negative values from appearing as black.
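Put together, the graph computes the following for each pixel. Here is a rough C++ sketch; the five depth values stand in for the outputs of the MF_GetPixelDepth calls.

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the convolution stage: corner kernel entries are 0,
// neighbor entries are 1 and the center entry is -4.
float LaplacianDepthEdge(float center, float left, float right, float top, float bottom)
{
    float sum = left + right + top + bottom + center * -4.0f;
    return std::fabs(sum); // Abs keeps negative results from rendering as black
}

int main()
{
    // A flat area produces 0, a depth discontinuity produces a large value.
    std::printf("flat: %g\n", LaplacianDepthEdge(100.0f, 100.0f, 100.0f,  100.0f, 100.0f)); // 0
    std::printf("edge: %g\n", LaplacianDepthEdge(100.0f, 100.0f, 5000.0f, 100.0f, 100.0f)); // 4900
}
```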
Click Apply and then go back to the main editor. The entire image will now have lines!
There are a few problems with this though. First, there are edges where there is only a slight depth difference. Second, the background has circular lines due to it being a sphere. This is not a problem if you are going to isolate the edge detection to meshes. However, if you want lines for your entire scene, the circles are undesirable.
To fix these, you can implement thresholding.
Implementing Thresholding
First, you will fix the lines that appear because of small depth differences. Go back to the material editor and create the setup below. Make sure you set Threshold to 4.
Later, you will connect the result from the edge detection to A. This will output 1 (indicating an edge) if the pixel’s value is higher than 4. Otherwise, it will output 0 (no edge).
Next, you will get rid of the lines in the background. Create the setup below. Make sure you set DepthCutoff to 9000.
This will output 0 (no edge) if the current pixel’s depth is greater than 9000. Otherwise, it will output the value from A < B.
Finally, connect everything like so:
Now, lines will only appear if the pixel value is above 4 (Threshold) and its depth is lower than 9000 (DepthCutoff).
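In code form, the two checks look roughly like this. Threshold and DepthCutoff mirror the material parameters; the C++ function is purely illustrative.

```cpp
#include <cstdio>

// Sketch of the thresholding stage: edgeValue is the result of the Laplacian
// convolution, pixelDepth is the depth of the current (center) pixel.
float ThresholdEdge(float edgeValue, float pixelDepth, float threshold, float depthCutoff)
{
    if (pixelDepth > depthCutoff)                 // too far away: never draw a line
        return 0.0f;
    return (edgeValue > threshold) ? 1.0f : 0.0f; // 1 = edge, 0 = no edge
}

int main()
{
    std::printf("strong edge, near: %g\n", ThresholdEdge(4900.0f, 100.0f,   4.0f, 9000.0f)); // 1
    std::printf("weak edge, near:   %g\n", ThresholdEdge(2.0f,    100.0f,   4.0f, 9000.0f)); // 0
    std::printf("strong edge, far:  %g\n", ThresholdEdge(4900.0f, 20000.0f, 4.0f, 9000.0f)); // 0
}
```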
Click Apply and then go back to the main editor. The small lines and background lines are now gone!
The edge detection is working pretty well. But what if you want thicker lines? To do this, you need a larger kernel size.
Creating Thicker Lines
Generally, larger kernel sizes have a greater impact on performance. This is because you have to sample more pixels. But what if there was a way to have larger kernels with the same performance as a 3×3 kernel? This is where dilated convolution comes in handy.
In dilated convolution, you simply space the offsets further apart. To do this, you multiply each offset by a scalar called the dilation rate. This defines the spacing between each kernel element.
As you can see, this allows you to increase the kernel size while sampling the same number of pixels.
Now let’s implement dilated convolution. Go back to the material editor and create a ScalarParameter called DilationRate. Set its value to 3. Afterwards, multiply each offset by DilationRate.
This will place each offset 3 pixels away from the center pixel.
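The dilation itself is just a scale applied to each offset before it feeds the sample functions. Here is a rough sketch, assuming the same four offsets as before:

```cpp
#include <cstdio>

struct Vec2 { float x, y; };

// Sketch of dilated offsets: scale each one-pixel offset by the dilation rate
// so the kernel covers a wider area without sampling extra pixels.
Vec2 Dilate(Vec2 offset, float dilationRate)
{
    return { offset.x * dilationRate, offset.y * dilationRate };
}

int main()
{
    const Vec2 offsets[4] = { { -1, 0 }, { 1, 0 }, { 0, -1 }, { 0, 1 } };
    const float dilationRate = 3.0f;
    for (const Vec2& o : offsets)
    {
        Vec2 d = Dilate(o, dilationRate);
        std::printf("(%g, %g) -> (%g, %g)\n", o.x, o.y, d.x, d.y);
    }
}
```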
Click Apply and then go back to the main editor. You will see that your lines are a lot thicker. Here is a comparison between multiple dilation rates:
Unless you’re going for a line art look, you probably want the original scene to show through. In the final section, you will add the lines to the original scene image.
Adding Lines to the Original Image
Go back to the material editor and create the setup below. Order is important here!
Next, connect everything like so:
Now, the Lerp will output the scene image if the alpha reaches zero (black). Otherwise, it will output LineColor.
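As a final sketch, the blend stage is a standard lerp between the scene color and the line color, driven by the thresholded edge value. The Vec3 type and function name are illustrative, not Unreal's API.

```cpp
#include <cstdio>

struct Vec3 { float r, g, b; };

// Sketch of the final blend: alpha is the thresholded edge value (0 or 1).
// alpha = 0 keeps the scene color, alpha = 1 shows the line color.
Vec3 BlendOutline(Vec3 sceneColor, Vec3 lineColor, float alpha)
{
    return { sceneColor.r + (lineColor.r - sceneColor.r) * alpha,
             sceneColor.g + (lineColor.g - sceneColor.g) * alpha,
             sceneColor.b + (lineColor.b - sceneColor.b) * alpha };
}

int main()
{
    Vec3 scene = { 0.6f, 0.4f, 0.2f };
    Vec3 line  = { 0.0f, 0.0f, 0.0f }; // black outline
    Vec3 noEdge = BlendOutline(scene, line, 0.0f);
    Vec3 edge   = BlendOutline(scene, line, 1.0f);
    std::printf("no edge: (%g, %g, %g)\n", noEdge.r, noEdge.g, noEdge.b); // scene color
    std::printf("edge:    (%g, %g, %g)\n", edge.r, edge.g, edge.b);       // line color
}
```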
Click Apply and then close PP_Outline. The original scene will now have outlines!
Where to Go From Here?
You can download the completed project using the link at the top or bottom of this tutorial.
If you’d like to do more with edge detection, try creating one that works on the normal buffer. This will give you some edges that don’t appear in a depth edge detector. You can then combine both types of edge detection together.
Convolution is a wide topic that has many uses, including artificial intelligence and audio processing. I encourage you to explore convolution by creating other effects such as sharpening and blurring. Some of these are as simple as changing the values in the kernel! Check out Image Kernels explained visually for an interactive explanation of convolution. It also contains the kernels for some other effects.
I also highly recommend you check out the GDC presentation on Guilty Gear Xrd’s art style. They also use the inverted mesh method for the outer lines. However, for the inner lines, they present a simple yet ingenious technique using textures and UV manipulation.
If there are any effects you’d like me to cover, let me know in the comments below!