
Unity Surface Shader Cartoon Renderer

In the newer versions of Unity, shaders use a surface function, as opposed to a vert/frag combo, to render everything, but they still allow a vertex function to be supplied if it’s needed.  The problem I was finding is that in the lower-level vert/frag shader format the vertex function works in view space, whereas the vertex function in a surface shader works in model space.  This makes it difficult to do things such as moving vertices along the XY plane while keeping their Z depth static.

It took me a little while, but I eventually managed to cobble together a function that achieves the cartoon look that I was looking for.  I still don’t quite understand why the normal has to be flipped, but it appears to work.  If anyone knows why I had to flip the normal, please let me know.

        void vert (inout appdata_full v)
        {
            // Move the vertex and normal into view (eye) space.
            float4 eyePosition = mul(UNITY_MATRIX_MV, v.vertex);
            float4x4 it_mv = UNITY_MATRIX_IT_MV;
            float3 norm = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal));

            // Push the vertex out along the normal in the view-space XY plane, scaled by depth.
            // eyePosition.z is negative in view space (the camera looks down -Z), which is
            // probably why the normal needs to be flipped here.
            float2 offset = -norm.xy;
            eyePosition.xy += offset * eyePosition.z * _Outline;

            // mul(vector, matrix) treats the vector as a row vector, so multiplying by the
            // inverse-transpose of MV takes the displaced vertex back into model space.
            v.vertex = mul(eyePosition, it_mv);
        }
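
For reference, here is a minimal sketch of how a vert function like this might be hooked into a surface shader.  The shader name, the _Outline property range, and the Lambert lighting model are just placeholder choices on my part; the important bit is the vertex:vert directive on the #pragma surface line.

Shader "Custom/CartoonOutline"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Outline ("Outline Width", Range(0.0, 0.1)) = 0.02
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }

        CGPROGRAM
        // vertex:vert tells Unity to run the custom vertex function before the surface function.
        #pragma surface surf Lambert vertex:vert

        sampler2D _MainTex;
        float _Outline;

        struct Input
        {
            float2 uv_MainTex;
        };

        void vert (inout appdata_full v)
        {
            // Same body as the function above.
            float4 eyePosition = mul(UNITY_MATRIX_MV, v.vertex);
            float3 norm = normalize(mul((float3x3)UNITY_MATRIX_IT_MV, v.normal));
            eyePosition.xy += -norm.xy * eyePosition.z * _Outline;
            v.vertex = mul(eyePosition, UNITY_MATRIX_IT_MV);
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    FallBack "Diffuse"
}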

I’ve released some new software

Over the past year I’ve been plugging away at a piece of software during my commute to my day job.  It’s finally done and up on the Unity Asset Store.  It’s called Get Me Started, and it’s a piece of middleware that provides all of the basic functionality that pretty much every game under the sun needs before it can be sold.  It handles things like message boxes, localization, writing data to XML files, sound volumes, and so on.  It also has various developer tools, such as a listbox of buttons that you can program to do anything you need while testing, plus a frame rate and safe zone display.

I encourage people to check it out and let me know if they think I can add any more features to make it more robust.

Photoshop select layers

This is just a small function for Photoshop’s JavaScript engine (ExtendScript) that selects the pixels of all of the layers in a folder by loading each layer’s transparency into the current selection.  It’s nothing big, but I wasn’t able to find any straightforward way to do it besides dropping down to Photoshop’s internal Action Manager calls, which is what this function does.

function SelectFolderPixels(folder)
{ 
    var idsetd = charIDToTypeID( "setd" );
    var desc28 = new ActionDescriptor();
    var idnull = charIDToTypeID( "null" );
        var ref27 = new ActionReference();
        var idChnl = charIDToTypeID( "Chnl" );
        var idfsel = charIDToTypeID( "fsel" );
        ref27.putProperty( idChnl, idfsel );
    desc28.putReference( idnull, ref27 );
    var idT = charIDToTypeID( "T   " );
        var ref28 = new ActionReference();
        var idChnl = charIDToTypeID( "Chnl" );
        var idTrsp = charIDToTypeID( "Trsp" );
        ref28.putEnumerated( idChnl, idChnl, idTrsp );
        var idLyr = charIDToTypeID( "Lyr " );
        ref28.putName( idLyr, folder.layers[0].name );
    desc28.putReference( idT, ref28 );
    executeAction( idsetd, desc28, DialogModes.NO );

    if (folder.layers.length > 1)
    {

        for (var i = 1; i < folder.layers.length; i++)
        {
            var idAdd = charIDToTypeID( "Add " );
            var desc35 = new ActionDescriptor();
            var idnull = charIDToTypeID( "null" );
                var ref37 = new ActionReference();
                var idChnl = charIDToTypeID( "Chnl" );
                var idTrsp = charIDToTypeID( "Trsp" );
                ref37.putEnumerated( idChnl, idChnl, idTrsp );
                var idLyr = charIDToTypeID( "Lyr " );
                ref37.putName( idLyr, folder.layers[i].name );
            desc35.putReference( idnull, ref37 );
            var idT = charIDToTypeID( "T   " );
                var ref38 = new ActionReference();
                var idChnl = charIDToTypeID( "Chnl" );
                var idfsel = charIDToTypeID( "fsel" );
                ref38.putProperty( idChnl, idfsel );
            desc35.putReference( idT, ref38 );
            executeAction( idAdd, desc35, DialogModes.NO );
        }
    }
}
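
Calling it is just a matter of handing it a layer group; for example (the group name “Characters” is made up for illustration):

// Load a selection from every layer inside a group called "Characters".
var group = app.activeDocument.layerSets.getByName("Characters");
SelectFolderPixels(group);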

Unlocking extra image information using shaders

4bit result

Sometimes when working on a game you just need a little more information stored in an image, but you don’t necessarily want to include an additional image.  Recently I wanted to cram five channels’ worth of information into the four available (Red, Green, Blue, Alpha).  What I put together was a couple of handy shader routines.  The idea is to bit-shift the data in the alpha channel to store more pixel data at the expense of colour depth.  One rather large problem with GLES is that there aren’t any bit-shifting operators available on the current iPhones, and shaders don’t normally work with integers, so you have to jump through a couple of hoops to do it.

Unfortunately, in the process of creating the texture-packing routines I discovered some drawbacks that made them unsuitable for the task at hand, though they may still be of use to someone else.  You cannot use linear interpolation on your textures, so they wind up looking blocky.  That was a show stopper, but I didn’t discover it until I had put together three different versions that use different bit depths.  Also, I’m good at writing shaders, but there are lots of people out there who are better, and knowing my luck there might be a one-line solution to the issue that I’m unaware of.  With that said, let’s get on with the show.
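
To give a taste of the workaround before getting into the individual formats, this is the kind of floor-and-divide arithmetic that stands in for a bit shift in GLES 2.0.  The function name splitByte is just for illustration; the routines below take a slightly different path but lean on the same idea.

// Sketch: recover the top and bottom 4 bits of an 8-bit value without bit operators.
highp vec2 splitByte(highp float f) // f is an 8-bit channel sampled as 0.0 .. 1.0
{
    highp float byteVal = floor(f * 255.0 + 0.5); // back into the 0 .. 255 range
    highp float high    = floor(byteVal / 16.0);  // integer divide stands in for >> 4
    highp float low     = byteVal - high * 16.0;  // the remainder stands in for & 15
    return vec2(high, low) / 15.0;                // normalise both halves to 0.0 .. 1.0
}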

1-Bit Channel

Steal one bit of information from the alpha channel to create a basic mask, while maintaining the majority of the bit depth so that it can still be used with very little visual loss.

1bit result

Painting this image in Photoshop is the easiest of all the methods.

  • Create a layer.  This will be the layer with the majority of the colour data on it.  So “Alpha” or “AO”.
  • Paint a black and white image with all the shades of grey that you want to use.  This could be the alpha channel of the object, or an ambient occlusion pass, etc.
  • Use the Output Levels portion of the Levels adjustment tool to make the brightest shade 127.  This will darken the image substantially and is where you lose the colour data but gain space for a new mask.
  • Create a new layer and fill it with pure black. Call this layer “Mask”
  • Select the Pencil tool.  Make sure that all anti-aliasing is off so the brush has hard jaggy edges.
  • Set the color to R: 128, G: 128, B: 128.
  • Paint your mask.
  • Set the layer’s blending mode to “Linear Dodge (Add)”.

If you take this image and put it into the Alpha channel of your texture, the shader will be able to have RGBA + a mask.

1bit / 1bit closeup

highp vec2 unpackColor1bit(highp float f)
{
    highp vec2 finalColour = vec2(0.0);
    bool shadow = (f >= 0.5);      // 1/2.  Divide the colour range into two.
    finalColour.r = float(shadow); // Set the red channel to 0.0 or 1.0 if there is a mask
    finalColour.g = f;
    if (shadow)                    // Remove the mask data if it needs to be removed
    {
        finalColour.g -= 0.5;
    }
    finalColour.g /= 0.5;          // Stretch it back out to 0.0 .. 1.0 range
    return finalColour;
}
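
Using it in a fragment shader looks something like this (Diffuse and texcoord0 match the names used in the full shader further down):

highp vec2 unpacked = unpackColor1bit(texture2D(Diffuse, texcoord0).a);
highp float mask  = unpacked.r; // 0.0 or 1.0
highp float value = unpacked.g; // the recovered grey, back in the 0.0 .. 1.0 range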

How it works

AddingColour

Imagine an image that is 1 pixel by 9 pixels in size.  In it you have a gradient from black to white, and the shades of grey go from (0, 0, 0) to (127, 127, 127).  That’s the bar graph on the left.  Then on a new layer you paint some of the pixels with a value of (128, 128, 128) and leave the rest at (0, 0, 0).  When you set the blending mode to Linear Dodge (Add), which is additive blending in shader speak, you’re adding the two shades of grey together.  What you wind up with is pictured on the right.  Some pixels are greater than or equal to 128 and others are below it.  That is how you can tell whether the mask is set for a pixel: if the value is greater than or equal to 128, the 1-bit mask is set, and if you then subtract 128 from the value you have recovered the shade of grey of the alpha channel (the blue bar).
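
If you were generating the packed texture in code rather than in Photoshop, the packing step is the same additive idea.  packColor1bit is a hypothetical helper, just the mirror image of unpackColor1bit above:

// Pack a grey value (0.0 .. 1.0) and a 1-bit mask into a single channel.
highp float packColor1bit(highp float grey, bool mask)
{
    highp float result = grey * 0.5; // squash the grey into the lower half of the range
    if (mask)
    {
        result += 0.5;               // the mask bit lives in the top half, just like the Linear Dodge layer
    }
    return result;
}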

The 2-bit and 4-bit shaders work in the exact same fashion, but they are more granular.  With a 2 bit mask, instead of dividing the colour space into 2 chunks, you divide it into 4.  That gives you 4 shades of grey to work with on one side and 64 colours on the other.  With the 4-bit mask you’re dividing it into 16 chunks and you have two images that contain 16 colours each.

2-Bit Channel

2bit result

This is very similar to the 1-Bit Channel, but two bits are used for the mask, which gives you four mask values (black plus three shades of grey).  This allows for a tiny amount of anti-aliasing on the edges.  On the low pass, use the Levels filter to limit the range from 0 to 63.  The shades of grey on the high pass increase by 64 each time starting at zero, so (0, 0, 0), (64, 64, 64), (128, 128, 128) and (192, 192, 192) are the values available to you.  This is still usable by an artist without any tools to convert images.  If you just make sure that you’re painting one of those four shades, everything should work out fine.

2bit / 2bit closeup

highp vec2 unpackColor2bit(highp float f)
{
    const highp float splitter = 0.25; // 1/4.  The 0.0 .. 1.0 range is divided into quarters.
    highp vec2 finalColour = vec2(0.0);

    // Unrolled loop that just sets the 4 mask colours explicitly
    if (f >= splitter * 3.0)
    {
        finalColour.r = 1.0;
        finalColour.g = f - splitter * 3.0; // Remove 0.75 from the colour
    }
    else if (f >= splitter * 2.0)
    {
        finalColour.r = 0.66;
        finalColour.g = f - splitter * 2.0; // Remove 0.50 from the colour
    }
    else if (f >= splitter)
    {
        finalColour.r = 0.33;
        finalColour.g = f - splitter;       // Remove 0.25 from the colour
    }
    else
    {
        finalColour.r = 0.0;
        finalColour.g = f;
    }

    finalColour.g /= splitter;    // Bring the green channel back into the 0.0 .. 1.0 range.
    return finalColour;
}

4-Bit Channel

4bit result

When creating a 4-Bit Channel you’re splitting the 8 bits that are normally used for the Alpha channel in half.  This limits both packed images to 16 shades of grey.  It also gets a little more difficult to hand paint in Photoshop, but the trick is to paint the High layer with greys that are multiples of 16: (0, 0, 0), (16, 16, 16), (32, 32, 32), …, (240, 240, 240).  The lower layer gets the colours (0, 0, 0), (1, 1, 1), (2, 2, 2), …, (15, 15, 15).  Set the higher layer’s blending mode to Linear Dodge (Add) and you should see a result similar to the attached screenshots.
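
The equivalent packing arithmetic, if you wanted to build the texture procedurally instead of by hand, would look something like this (packColor4bit is a made-up name mirroring the unpack routine below):

// Pack two 4-bit grey values (each 0.0 .. 1.0 with 16 usable steps) into one 8-bit channel.
highp float packColor4bit(highp float high, highp float low)
{
    highp float highSteps = floor(high * 15.0 + 0.5) * 16.0; // 0, 16, 32, ..., 240
    highp float lowSteps  = floor(low  * 15.0 + 0.5);        // 0, 1, 2, ..., 15
    return (highSteps + lowSteps) / 255.0;                   // back into the 0.0 .. 1.0 range
}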

4bit result / 4bit

Displaying just the lower layer looks like this:

4bit result pass2

Displaying just the higher layer looks like this:

4bit result pass1

You will notice the grid-like steps because each image is only 16 shades of grey and the interpolation is set to Nearest Neighbour.  The problem shows up when you change the filter mode to Linear.  Then you start to see interpolation errors between the pixels, and I’m not sure there is anything that can be done about it.

4bit problem

highp vec2 unpackColor4bit(highp float f)
{
    highp vec2 finalColour = vec2(0.0, f); // Start the green channel with the full colour, and the red channel with no colour.
    const highp float splitter = 0.0625;   // 1/16th
    while( f > splitter )
    {
        finalColour.r += splitter; // Increment the red channel by 1/16th
        finalColour.g -= splitter; // Decrement the green channel by 1/16th
        f -= splitter;             // Remove this 1/16th chunk of colour from the float containing all the colours and repeat if necessary.
    }
    finalColour.r /= (1.0 - splitter); // The red channel ends up in the 0 .. 15/16 range, so divide by 15/16 to bring it back into the 0.0 .. 1.0 range.
    finalColour.g /= splitter;         // Get the green channel back into the 0.0 .. 1.0 range.

    return finalColour;
}

All in one

Here is the final fragment shader done up nicely so that it can be used in your project.  There are helper functions to grab the bit depth that you want and one generalized function that does all the work.

uniform highp sampler2D Diffuse;
varying highp vec2 texcoord0;

// The generalized worker function.  It has to be defined before the helpers that call it.
highp vec2 unpackColourVariable(highp float f, highp float splitter)
{
    highp vec2 finalColour = vec2(0.0, f); // Start the green channel with the full colour, and the red channel with no colour.
    while( f > splitter )
    {
        finalColour.r += splitter; // Add a colour chunk to the red channel
        finalColour.g -= splitter; // Remove a colour chunk from the green channel
        f -= splitter;             // Remove this colour chunk from the float containing all the colours and repeat if necessary.
    }
    finalColour.r /= (1.0 - splitter); // Normalize the high bits back into the 0.0 .. 1.0 range.
    finalColour.g /= splitter;         // Get the green channel back into the 0.0 .. 1.0 range.

    return finalColour;
}

highp vec2 unpackColour1Bit(highp float f)
{
    const highp float splitter = 0.5; // 1/2
    return unpackColourVariable(f, splitter);
}

highp vec2 unpackColour2Bit(highp float f)
{
    const highp float splitter = 0.25; // 1/4
    return unpackColourVariable(f, splitter);
}

highp vec2 unpackColour4Bit(highp float f)
{
    const highp float splitter = 0.0625; // 1/16
    return unpackColourVariable(f, splitter);
}

void main()
{
    highp vec4 diffuse = texture2D(Diffuse, texcoord0);
    highp float packedColour = diffuse.a;
    highp vec2 extracted = unpackColour4Bit(packedColour);
    highp vec4 result = vec4(extracted.rg, 0.0, 1.0);
    gl_FragColor = result;
}
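
For completeness, here is a bare-bones GLES 2.0 vertex shader that would feed texcoord0 to the fragment shader above.  The attribute and uniform names are assumptions; rename them to whatever your engine binds.

attribute highp vec4 aPosition;  // object-space vertex position
attribute highp vec2 aTexCoord;  // UV for the packed texture

uniform highp mat4 uModelViewProjection;

varying highp vec2 texcoord0;    // must match the varying in the fragment shader

void main()
{
    texcoord0 = aTexCoord;
    gl_Position = uModelViewProjection * aPosition;
}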

Raw Code (Ignore this)

This is the raw code that I wrote before typing up this article.  It’s just here as a reference in case I screwed something up that I need to fix later.

uniform highp sampler2D Diffuse;

varying highp vec2 texcoord0;

// 24 bit colour
highp float packColor(highp vec3 color) {
    return color.r + color.g * 256.0 + color.b * 256.0 * 256.0;
}

// 24 bit colour
highp vec3 unpackColor(highp float f) {
    highp vec3 color;
    color.b = floor(f / 256.0 / 256.0);
    color.g = floor((f - color.b * 256.0 * 256.0) / 256.0);
    color.r = floor(f - color.b * 256.0 * 256.0 - color.g * 256.0);
    // now we have a vec3 with the 3 components in range [0..256]. Let's normalize it!
    return color / 256.0;
}

highp vec2 unpackColor4bit(highp float f)
{
    int fIn8Bit = int(f * 255.0);
    int val = fIn8Bit;
    val /= 16;
    highp vec2 col = vec2(0.0);
    col.r = float(val) / 16.0;
    int val2 = fIn8Bit - val * 16;
    col.g = float(val2) / 16.0;

    return col;
}

highp vec2 unpackColor4bitV2(highp float f)
{
    highp vec2 col = vec2(0.0, f);
    const highp float splitter = 0.25; // 0.0625; 1/16 1/4 1/2
    while( f > splitter )
    {
        col.r += splitter;
        col.g -= splitter;
        f -= splitter;
    }
    col.r /= (1.0 - splitter); // Normalize the high bits to 15 / 15.
    col.g /= splitter;

    return col;
}

highp vec2 unpackColor1bit(highp float f)
{
    highp vec2 col = vec2(0.0);
    bool shadow = (f >= 0.5);
    col.r = float(shadow);
    col.g = f;
    if (shadow)
    {
        col.g -= 0.5;
    }
    col.g /= 0.5;
    return col;
}

highp vec2 unpackColor2bit(highp float f)
{
    const highp float splitter = 0.25;
    highp vec2 col = vec2(0.0);
    if (f >= splitter * 3.0)
    {
        col.r = 1.0;
        col.g = f - splitter * 3.0;
    }
    else if (f >= splitter * 2.0)
    {
        col.r = 0.66;
        col.g = f - splitter * 2.0;
    }
    else if (f >= splitter)
    {
        col.r = 0.33;
        col.g = f - splitter;
    }
    else
    {
        col.r = 0.0;
        col.g = f;
    }

    col.g /= splitter;
    return col;
}

void main()
{
    highp vec4 diffuse = texture2D(Diffuse, texcoord0);
    //highp float upper = diffuse.r / 16.0;
    //highp vec4 result = vec4(upper, upper, upper, 1.0);
    highp float f = diffuse.r;
    highp vec2 x = unpackColor1bit(f);
    highp vec4 result = vec4(x.rg, 0.0, 1.0);
    gl_FragColor = result;
}

Stretchy Bones Re-visited

ScaleFK Viewport

I’ve needed to revisit the stretchy bone issue again because some game engines will export spline controllers if they’re parented into the hierarchy.  Plus, sometimes you need FK stretch.

ScaleFK Schematic

I’m going to create a simple arm rig.

  • Click Create -> Systems -> Bones.  Create three bones: bone L upper arm, bone L lower arm, and bone L hand.
  • Create three circle splines: control L upper arm, control L lower arm, and control L hand.
  • Align the splines to the bones.
  • Parent control L lower arm to control L upper arm, and parent control L hand to control L lower arm.
  • Animation -> Bone Tools
  • Select bone L upper arm and, in the Object Properties rollout, turn off Freeze Length.  This will allow the bone to stretch.  Do the same for bone L lower arm.
  • Select bone L upper arm.  Animation -> Constraints -> Position Constraint.  Click control L upper arm
  • Animation -> Constraints -> Orientation Constraint.  Click control L upper arm
  • Select bone L lower arm
  • Animation -> Constraints -> Orientation Constraint.  Click control L lower arm
  • Select bone L hand
  • Animation -> Constraints -> Orientation Constraint.  Click control L hand
  • Check that rotating the controllers rotates the bones and that moving control L upper arm moves the entire setup.
  • Create two Expose Transform objects by clicking Create -> Helpers -> ExposeTM.  Call one etm L upper arm and the other etm L lower arm.
  • Set the display size of the expose transforms to something appropriate to your model.
  • In etm L upper arm set the expose node to control L lower arm.  Uncheck Parent.  Set the local reference node to bone L upper arm.  This will calculate the distance between the shoulder and the elbow controller.
  • In etm L lower arm set the expose node to control L hand.  Uncheck Parent.  Set the local reference node to bone L lower arm.
  • Parent etm L upper arm to control L lower arm.  Change the move mode from View to Parent. Set etm L upper arm‘s position to 0.0, 0.0, 0.0.  This should snap it to the elbow.
  • Parent etm L lower arm to control L hand.  Set the position to 0.0, 0.0, 0.0 in parent space.
  • Select bone L lower arm
  • In the motion panel, open the Assign Controller rollout.  Select Transform -> Position -> X Position.  Click Assign Controller and assign a Float Script.
  • In the Expression Editor create a variable called TargetDistance.
  • Select TargetDistance and press Assign Track.
  • Find etm L upper arm, go into the Object section, select Distance, and press Ok.
  • In the Expression textbox type “TargetDistance”.  This will make the X position of the bone equal to the distance of the controller from the shoulder.  Click Evaluate to check for any errors and, if there are none, click Close.
  • Do the same for the lower arm.  Select bone L hand.  Assign a Float Script to the X Position.  Create a TargetDistance variable.  Associate it with the distance track of etm L lower arm.  Tell the bone to take TargetDistance as its X Position and close the dialog.

You should now have a bone that rotates normally, is not connected to the controllers in the Schematic View, and stretches when control L lower arm is moved along the X axis in the Parent coordinate system.

You could of course stop here and let the animator go nuts, but sometimes they like interfaces that are a bit easier to use.  The only way that I know how to do this is with MaxScript.

To start off, lock movement on control L lower arm and control L hand to stop people from erroneously moving the controllers outside of parent space.

  • Select control L lower arm
  • Click the Hierarchy tab and go to Link Info.
  • Turn on the Move X/Y/Z locks.
  • Do the same for control L hand.

Open the MAXScript Editor window.

(
    clearListener()

	-- Gets an existing Attribute holder, or creates one if it's needed.
    fn GetAttributeModifier obj =
    (
        for i = 1 to obj.modifiers.count do
		(
			if classof obj.modifiers[i] == EmptyModifier then
			(
				return obj.modifiers[i]
			)
		)

		emptyMod = emptyModifier()
		addModifier obj emptyMod
		return emptyMod
    )

	-- Deletes a rollout from the Attribute Holder modifier specified
	fn DeleteAttributeRollout emptymod name =
	(
		defs = custAttributes.getDefs emptymod
		-- print ("Defs " + defs as string)
		if defs != undefined and defs.count > 0 then
		(
			for i = defs.count to 1 by -1 do
			(
				-- print (defs[i].name)
				if defs[i].name == name then
				(
					custAttributes.delete emptymod defs[i]
				)
			)
		)
	)

	-- Adds the arm attributes to an Attribute holder on the shoulder.
    fn AddArmAttributes obj lengths emptymod =
    (
        attributeCA = attributes armData
        (
            parameters main rollout:paramsRollout
            (
                upperArmLength type:#float ui:upperArmLengthUI default:1.0
                lowerArmLength type:#float ui:lowerArmLengthUI default:1.0
                originalUpperArmLength type:#float default:17.0
				originalLowerArmLength type:#float default:17.0
				scaledUpperArmLength type:#float default:17.0
                scaledLowerArmLength type:#float default:17.0
            )

            rollout paramsRollout "Joint Scale"
            (
                spinner upperArmLengthUI "UArm Length" type:#float
                spinner lowerArmLengthUI "LArm Length" type:#float

                on upperArmLengthUI changed val do
                (
                    scaledX = val * originalUpperArmLength
                    scaledUpperArmLength = scaledX
                )

                on lowerArmLengthUI changed val do
                (
                    scaledX = val * originalLowerArmLength
                    scaledLowerArmLength = scaledX
                )

            )
        )

		-- Delete the old rollout if it exists
		DeleteAttributeRollout emptymod #armData

		-- Add the new version of the rollout
        custAttributes.add emptymod attributeCA BaseObject:false

		-- Set the proper default values to the original and scaled arm lengths.
		-- you can't set them in the parameters for some reason, so do it here 
		-- before we really need to use the data.
		emptymod.armData.originalUpperArmLength = lengths[1]
		emptymod.armData.originalLowerArmLength = lengths[2]
		emptymod.armData.scaledUpperArmLength = emptymod.armData.originalUpperArmLength * emptymod.armData.upperArmLength
		emptymod.armData.scaledLowerArmLength = emptymod.armData.originalLowerArmLength * emptymod.armData.lowerArmLength

		return ""
    )

	fn SetupLeftArm =
	(
		obj = $control_L_upper_arm
		lengths = #($etm_L_upper_arm.distance, $etm_L_lower_arm.distance)
		select obj
		attrib = GetAttributeModifier obj
		AddArmAttributes obj lengths attrib
	)

	SetupLeftArm()
)

This script, although it looks a little complex, just does some housekeeping: it grabs the Attribute Holder modifier if it exists, or creates one if it’s needed.  Then it deletes any old rollouts called armData and generates a new one.  It exposes just what the animator needs to animate and stores the rest of the variables in there, hidden.  This is something I haven’t been able to replicate without using MaxScript.

Now that the scale variables are all hooked up, the last step is using them to control the X position of the elbow and hand controllers that we locked earlier.

  • Select Control L Upper Arm.
  • Right click and select Wire Parameter
  • Modified Object -> Attribute Holder -> Arm Data -> ScaledUpperArmLength
  • Click on control L lower arm
  • Transform -> Position -> X Position
  • Click the right arrow and then connect.
  • Right click on Control L Upper Arm and select Wire Parameter again.
  • Modified Object -> Attribute Holder -> Arm Data -> ScaledLowerArmLength
  • Click on control L hand
  • Transform -> Position -> X Position
  • Click the right arrow and then connect.

Now when you select the upper arm controller, you can adjust the two spinners and get the arm bones to scale.


Sketch of the Day Week 23

Week23


Sketch of the Day Week 22

Week22