Shader graph: Rigid body animation using vertex animation textures

Using Unity 2019.2.1f1, LWRP 6.9.1 and Shader Graph 6.9.1. You can get the article’s code and shaders here.

I saw two YouTube talks (The Illusion of Motion: Making Magic with Textures in the Vertex Shader, Unleashing Houdini for AAA Mobile Games Production – MIGS) about using specially encoded textures in the vertex shader to animate meshes. Both talks use Houdini to generate the animations, and because I don’t have Houdini, I decided to do everything in Unity.

The whole castle is a single mesh with a recorded physics simulation.

Overview of example

Creating vertex animation consists of the following steps:

  1. Selecting the target
  2. Recording positions and rotations
  3. Combining meshes into a single mesh, saving pivots and mesh ids
  4. Encoding position and rotation textures
  5. Using special shader that decodes these textures

Selecting target

VATRecorder is the class responsible for recording the animation. It receives as Target a GameObject with several children that have renderers. The children can have Rigidbody components and colliders so that physics simulations can be recorded.

Recording positions and rotations

After VATRecorder.StartRecording is called, VATRecorder starts saving the positions and rotations of Target’s children in Update or FixedUpdate. The FramesPerSecond property can be used to change the recording rate.

To record a physics animation without glitches, VATRecorder has a FixedUpdate recording mode, in which positions and rotations are read from Rigidbody components in FixedUpdate.

Manual animation that happens in Update should be recorded in the Update recording mode, in which positions and rotations are read from Transform components in Update.

Combining meshes into single mesh

VATMeshGenerator generates a combined mesh from the meshes of VATRecorder.Target’s children.

Before combining, VATMeshGenerator processes all meshes. It records the position of each child into its mesh as vertex colours. For that mesh’s vertices, this position is the pivot position. In VATShader, the pivot positions are used to transform vertices into pivot space, in which they are rotated.

All vertex colour channels are 8-bit (from 0/255 to 255/255). Because of this, to increase precision, all positions are encoded in bounds space. This bounds space is created by combining the bounds of all of Target’s children, so all children fit inside the combined bounds.

Vector3 positionInBounds = targetRenderers[i].transform.position - startBounds.center;
positionInBounds = new Vector3(Mathf.InverseLerp(-startBounds.extents.x, startBounds.extents.x, positionInBounds.x),
Mathf.InverseLerp(-startBounds.extents.y, startBounds.extents.y, positionInBounds.y),
Mathf.InverseLerp(-startBounds.extents.z, startBounds.extents.z, positionInBounds.z));

Color encodedPosition = new Color(positionInBounds.x, positionInBounds.y, positionInBounds.z, 1);

The alpha channel of the vertex colour is not used by the pivot position, so it can be used to store the Id of a child. This Id corresponds to a column of the position and rotation textures.

float rendererID = i / (float)(targetRenderers.Length - 1);

But because alpha has only 8 bits, if there are more than 256 children, the children’s ids are encoded in UV3 instead (UV1 is used to sample the textures; UV2 is often used for lightmaps).

Functions for encoding/decoding a float into two 8-bit channels are taken from Unity’s built-in shader include UnityCG.cginc.

Vector2 rendererIDasRG = MathHelpers.EncodeFloatRG(i / (float)(targetRenderers.Length));

        /// Encoding [0..1) float into 8 bit/channel RG. Note that 1.0 will not be encoded properly.
        public static Vector2 EncodeFloatRG(float v)
        {
            Vector2 kEncodeMul = new Vector2(1.0f, 255.0f);
            float kEncodeBit = 1.0f / 255.0f;
            Vector2 enc = kEncodeMul * v;
            enc = new Vector2(enc.x - Mathf.Floor(enc.x), enc.y - Mathf.Floor(enc.y));
            enc.x -= enc.y * kEncodeBit;
            return enc;
        }
        /// Decodes a [0..1) float previously encoded into 8 bit/channel RG.
        public static float DecodeFloatRG(Vector2 enc)
        {
            Vector2 kDecodeDot = new Vector2(1.0f, 1.0f / 255.0f);
            return Vector2.Dot(enc, kDecodeDot);
        }
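To see how much precision the RG encoding buys over a single 8-bit channel, here is a small Python sketch of the same maths; the helper names (`encode_float_rg`, `quantize8`) are mine, not from the project:

```python
import math

def encode_float_rg(v):
    """Port of EncodeFloatRG: split a [0..1) float into two channels."""
    enc_x, enc_y = 1.0 * v, 255.0 * v
    enc_x, enc_y = enc_x - math.floor(enc_x), enc_y - math.floor(enc_y)
    return enc_x - enc_y / 255.0, enc_y

def decode_float_rg(r, g):
    """Port of DecodeFloatRG: dot(enc, float2(1, 1/255))."""
    return r + g / 255.0

def quantize8(x):
    """Simulate storing a value in an 8-bit texture channel."""
    return round(x * 255.0) / 255.0

v = 0.123456
r, g = encode_float_rg(v)
two_channel = decode_float_rg(quantize8(r), quantize8(g))
one_channel = quantize8(v)  # the same value stored in a single channel
```

After 8-bit quantization, the two-channel round trip is accurate to roughly 1/255², while a single channel is only accurate to roughly 1/255.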

Because all children’s rotations are stored in the rotation texture, the combined mesh must have all children’s meshes with zero rotation.

Matrix4x4 localToWorlds = targetRenderers[i].transform.localToWorldMatrix;

Vector3 position = localToWorlds.GetColumn(3);
Vector3 scale = new Vector3(localToWorlds.GetColumn(0).magnitude, localToWorlds.GetColumn(1).magnitude, localToWorlds.GetColumn(2).magnitude);

Matrix4x4 trsWithOutRotation = Matrix4x4.TRS(position, Quaternion.identity, scale);

Encoding position and rotation textures

VATGenerator generates the position and rotation textures and creates a VATAnimation (a ScriptableObject) that stores information about the animation.

VATShader samples these textures in the vertex stage, using the children’s Ids as the X coordinate (columns) and the current frame as the Y coordinate (rows).

Distortion of the textures is prevented by the following import settings:

textureImporter.sRGBTexture = false;
textureImporter.mipmapEnabled = false;
textureImporter.filterMode = FilterMode.Bilinear;
textureImporter.wrapMode = TextureWrapMode.Clamp;
textureImporter.npotScale = TextureImporterNPOTScale.None;
textureImporter.textureCompression = TextureImporterCompression.Uncompressed;
                
TextureImporterPlatformSettings textureImporterPlatformSettings = textureImporter.GetDefaultPlatformTextureSettings();
textureImporterPlatformSettings.format = TextureImporterFormat.RGBA32;

Rotation texture

The Target’s children’s rotations are saved to the texture as quaternions (Vector4 (x, y, z, w)). Because Unity uses normalized quaternions, all of a quaternion’s components lie between -1 and 1, so they can be remapped to the 8-bit range and fit nicely into one texture.

Color rotation = new Color(
renderersRotations[x][y].x.Remap(-1, 1, 0, 1), 
renderersRotations[x][y].y.Remap(-1, 1, 0, 1), 
renderersRotations[x][y].z.Remap(-1, 1, 0, 1), 
renderersRotations[x][y].w.Remap(-1, 1, 0, 1)
);
rotationTex.SetPixel(x, y, rotation);
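As a sanity check of this remapping, the Python sketch below (with illustrative names; the example quaternion, a 90-degree rotation around Y, is mine) quantizes a quaternion’s components to 8 bits and decodes them as the shader would:

```python
def remap(x, in_min, in_max, out_min, out_max):
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

def quantize8(x):
    """Simulate an 8-bit texture channel."""
    return round(x * 255.0) / 255.0

q = (0.0, 0.7071068, 0.0, 0.7071068)  # normalized quaternion (x, y, z, w)

stored = [quantize8(remap(c, -1, 1, 0, 1)) for c in q]   # what the texture holds
decoded = [remap(c, 0, 1, -1, 1) for c in stored]        # mirrors DecodeQuaternion
max_error = max(abs(a - b) for a, b in zip(q, decoded))
```

The worst-case per-component error is half an 8-bit step over the [-1, 1] range, i.e. 1/255, which is small enough for smooth-looking rotations.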

Position texture

Even if the bounds of the animation are small, the movement won’t look smooth, because the texture channels have only 8-bit precision. For example, if the bounds along some axis span 0 to 26 meters, the precision along that axis is 26/255 ≈ 0.1 m.
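The arithmetic can be verified in a couple of lines (Python, illustrative variable names):

```python
# Step size of one 8-bit channel over a 26 m axis vs the two-channel encoding.
axis_length = 26.0
one_texture_step = axis_length / 255           # about 0.102 m: visible stepping
two_texture_step = axis_length / (255 * 255)   # about 0.0004 m: smooth
```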

To make the animation smooth, the positions of Target’s children are encoded into two textures.
Position.X is split between textureA.r and textureB.r.
Position.Y is split between textureA.g and textureB.g.  
Position.Z is split between textureA.b and textureB.b.

Vector3 positionInBounds = renderersPositions[x][y] - bounds.center;
positionInBounds = new Vector3(
Mathf.InverseLerp(-bounds.extents.x, bounds.extents.x,positionInBounds.x),
Mathf.InverseLerp(-bounds.extents.y, bounds.extents.y,positionInBounds.y),
Mathf.InverseLerp(-bounds.extents.z, bounds.extents.z, positionInBounds.z)
);

Vector2 encodedX = EncodeFloatRG(positionInBounds.x);
Vector2 encodedY = EncodeFloatRG(positionInBounds.y);
Vector2 encodedZ = EncodeFloatRG(positionInBounds.z);

Color encodedPositionPartA = new Color(encodedX.x, encodedY.x, encodedZ.x, 1);
Color encodedPositionPartB = new Color(encodedX.y, encodedY.y, encodedZ.y, 1);
positionsTexA.SetPixel(x, y, encodedPositionPartA);
positionsTexB.SetPixel(x, y, encodedPositionPartB);
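Putting the pieces together for one axis, here is a hedged Python sketch of the whole encode/decode path. The bounds (±13 m) and position are hypothetical, and the helpers mirror Mathf.InverseLerp, EncodeFloatRG, DecodeFloatRG and DecodePositionInBounds:

```python
import math

def encode_float_rg(v):
    """Split a [0..1) float into two 8-bit-friendly channels."""
    g = 255.0 * v - math.floor(255.0 * v)
    r = (v - math.floor(v)) - g / 255.0
    return r, g

def quantize8(x):
    """Simulate storing a value in an 8-bit texture channel."""
    return round(x * 255.0) / 255.0

center, extents = 0.0, 13.0   # hypothetical bounds along one axis
position = 4.2                # a child's position on that axis

# Encode: InverseLerp into bounds space, then split across textures A and B.
t = (position - center + extents) / (2 * extents)
r, g = encode_float_rg(t)
tex_a, tex_b = quantize8(r), quantize8(g)

# Decode as the shader does: DecodeFloatRG, then lerp back out of bounds space.
decoded = center - extents + 2 * extents * (tex_a + tex_b / 255.0)

# For comparison: the same position stored in a single 8-bit channel.
single = center - extents + 2 * extents * quantize8(t)
```

The two-texture round trip recovers the position to well under a millimetre over a 26 m axis, while the single-texture version is an order of magnitude coarser.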

VATGenerator also supports writing positions into a single texture. This behaviour is controlled by VATRecorder.HighPrecisionPosition.

Vector3 positionInBounds = renderersPositions[x][y] - bounds.center;

positionInBounds = new Vector3(Mathf.InverseLerp(-bounds.extents.x, bounds.extents.x, positionInBounds.x),
Mathf.InverseLerp(-bounds.extents.y, bounds.extents.y, positionInBounds.y),
Mathf.InverseLerp(-bounds.extents.z, bounds.extents.z, positionInBounds.z));

Color encodedPosition = new Color(positionInBounds.x, positionInBounds.y, positionInBounds.z, 1);
positionsTex.SetPixel(x, y, encodedPosition);

Special shader that decodes textures

The animation is done using Shader Graph’s Custom Function CalculateVAT_float, which is declared in VATCustomNode.cginc. CalculateVAT_float outputs the new object-space position and the rotated normal in object space. The rotated normal needs to be converted to tangent space, as the PBR Master node’s Normal input requires. The vertex colour is used as the PBR Master’s Albedo input for testing purposes.

Click to view full graph

The custom function file VATCustomNode.cginc is composed of several parts: inputs, helper functions, and the main function.

Inputs

VATGPUPlayer takes data from VATAnimation, creates a material instance, populates it with the data, and assigns it to the renderer with the combined mesh. Using material instances allows the SRP Batcher to combine several VATGPUPlayers and render them in a single SRP batch, even if they have different _State values.

sampler2D _PositionsTex;
sampler2D _PositionsTexB;
sampler2D _RotationsTex;
float _State;
int _PartsCount;
float3 _BoundsCenter;
float3 _BoundsExtents;
float3 _StartBoundsCenter;
float3 _StartBoundsExtents;
int _HighPrecisionMode;
int _PartsIdsInUV3;

Decoding functions

These functions mirror the encoding functions in the C# code.

float3 DecodePositionInBounds(float3 encodedPosition, float3 boundsCenter, float3 boundsExtents)
{
    return boundsCenter + float3(
        lerp(-boundsExtents.x, boundsExtents.x, encodedPosition.x),
        lerp(-boundsExtents.y, boundsExtents.y, encodedPosition.y),
        lerp(-boundsExtents.z, boundsExtents.z, encodedPosition.z));
}

float4 DecodeQuaternion(float4 encodedRotation)
{
    return float4(
        lerp(-1, 1, encodedRotation.x),
        lerp(-1, 1, encodedRotation.y),
        lerp(-1, 1, encodedRotation.z),
        lerp(-1, 1, encodedRotation.w));
}

inline float DecodeFloatRG(float2 enc)
{
    float2 kDecodeDot = float2(1.0, 1 / 255.0);
    return dot(enc, kDecodeDot);
}

float Remap(float In, float2 InMinMax, float2 OutMinMax)
{
    return OutMinMax.x + (In - InMinMax.x) * (OutMinMax.y - OutMinMax.x) / (InMinMax.y - InMinMax.x);
}

Rotation Vector with Quaternion

To rotate a vertex using a quaternion, the method by Fabian Giesen (ryg of Farbrausch fame) is used. The method is described in this blog post by Stefan Reinalter. You can also read this blog post by Fabian Giesen, the author of the method, to understand why it works.

float3 RotateVectorUsingQuaternionFast(float4 q, float3 v)
{
    float3 t = 2 * cross(q.xyz, v);
    return v + q.w * t + cross(q.xyz, t);
}
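To convince yourself that this shortcut really is a quaternion rotation, here is a direct Python port applied to a known case: a 90-degree rotation around Z should map +X to +Y (the function and test values are mine):

```python
import math

def cross(a, b):
    """3D cross product of two tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def rotate_vector_fast(q, v):
    """t = 2 * cross(q.xyz, v); result = v + q.w * t + cross(q.xyz, t)."""
    qxyz = q[:3]
    t = tuple(2.0 * c for c in cross(qxyz, v))
    c2 = cross(qxyz, t)
    return tuple(v[i] + q[3] * t[i] + c2[i] for i in range(3))

# Quaternion (x, y, z, w) for a 90-degree rotation around Z: (0, 0, sin 45°, cos 45°).
half = math.pi / 4
q = (0.0, 0.0, math.sin(half), math.cos(half))
rotated = rotate_vector_fast(q, (1.0, 0.0, 0.0))
```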

Calculating position and normal

The main function uses the helper functions to calculate the object-space position and rotation of each vertex in the current frame of the animation.

The vertex normal needs to be rotated too; otherwise, the lighting won’t look correct.

void CalculateVAT_float(float3 inputObjectPosition, float3 inputObjectNormal, float4 vertexColor, float2 uv3, out float3 objectPosition, out float3 rotatedNormal)
{
    float encodedPartId;
    if (_PartsIdsInUV3 == 1)
    {
        encodedPartId = Remap(DecodeFloatRG(uv3), float2(0, 1 - 1.0 / (float)_PartsCount), float2(0, 1)); //needs to be remapped to [0,1], because 1.0 will not be encoded properly using FloatRG encoding
    }
    else
    {
        encodedPartId = vertexColor.a;
    }

    // To prevent Bilinear FilterMode from interpolating between mesh part ids, the sample along the X axis must be at the centre of a pixel.
    // Without this remap, some parts of the mesh could end up in wrong positions.
    // Something similar is described here: http://www.asawicki.info/news_1516_half-pixel_offset_in_directx_11.html
    float halfPixel = 1.0 / (_PartsCount * 2);
    float idOfMeshPart = Remap(encodedPartId, float2(0, 1), float2(halfPixel, 1 - halfPixel));

    float currentFrame = _State;
 
    float4 vatRotation = tex2Dlod(_RotationsTex, float4(idOfMeshPart, currentFrame, 0, 0));
    float4 decodedRotation = DecodeQuaternion(vatRotation);

    float3 pivot = vertexColor.xyz;
    float3 decodedPivot = DecodePositionInBounds(pivot, _StartBoundsCenter, _StartBoundsExtents);
    float3 offset = inputObjectPosition - decodedPivot;

    float3 rotated = RotateVectorUsingQuaternionFast(decodedRotation, offset);
    
    if (_HighPrecisionMode == 1)
    {
        float3 vatPosition = tex2Dlod(_PositionsTex, float4(idOfMeshPart, currentFrame, 0, 0)).xyz;
        float3 vatPositionB = tex2Dlod(_PositionsTexB, float4(idOfMeshPart, currentFrame, 0, 0)).xyz;
        float3 decodedPosition = float3(DecodeFloatRG(float2(vatPosition.x, vatPositionB.x)), DecodeFloatRG(float2(vatPosition.y, vatPositionB.y)), DecodeFloatRG(float2(vatPosition.z, vatPositionB.z)));
        objectPosition = rotated + DecodePositionInBounds(decodedPosition, _BoundsCenter, _BoundsExtents);
    }
    else
    {
        float3 vatPosition = tex2Dlod(_PositionsTex, float4(idOfMeshPart, currentFrame, 0, 0)).xyz;
        objectPosition = rotated + DecodePositionInBounds(vatPosition, _BoundsCenter, _BoundsExtents);
    }
    rotatedNormal = RotateVectorUsingQuaternionFast(decodedRotation, inputObjectNormal);
}
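The half-pixel remap inside CalculateVAT_float can be checked numerically. The Python sketch below assumes a hypothetical _PartsCount of 4 with ids spread evenly over [0, 1], and shows that the remapped sample coordinates land exactly on the pixel centres of a 4-pixel-wide texture:

```python
def remap(x, in_min, in_max, out_min, out_max):
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)

parts_count = 4
half_pixel = 1.0 / (parts_count * 2)

ids = [i / (parts_count - 1) for i in range(parts_count)]          # ids in [0, 1]
sample_x = [remap(v, 0, 1, half_pixel, 1 - half_pixel) for v in ids]
centres = [(i + 0.5) / parts_count for i in range(parts_count)]    # pixel centres
```

Sampling at pixel centres means bilinear filtering blends a pixel only with itself along X, so each vertex reads exactly its own part’s column.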

Result

One vs two position textures

The animation on the left uses two position textures. It looks smoother than the animation on the right, which uses only one position texture.

Physics vs VAT with two position textures

The animation on the right uses two position textures and looks as smooth as the physics simulation on the left.

VAT with two position textures recorded at 60 fps vs 5 fps

The textures use FilterMode.Bilinear, so positions and rotations are linearly interpolated between frames, and even with a small number of frames the animations look good. The animation on the left has only 5 frames per second and looks as smooth as the animation on the right, which has 60 frames per second.

Conclusion

Vertex animation textures are a fun thing to play with. They can be used to record a complex destruction and then play it back without using physics, or to make custom animation in Update with different meshes, then combine them into a single mesh to reduce draw calls while still being able to play the animation.

You can get the article’s code and shaders here.

2 thoughts on “Shader graph: Rigid body animation using vertex animation textures”

  1. Hello

    This is a very helpful tutorial. Good job!

    I would like to ask you about the comment line “To prevent Bilinear FilterMode from interpolating between idOfMeshParts, sample over X axis must be in the centre of pixel”

    How can you prevent Bilinear FilterMode from interpolating along the X axis in the shader? Isn’t the texture already bilinear filtered via the TextureImporter?

    Thank you in advance

    • Yes, the texture is in Bilinear FilterMode, because we need interpolation along the Y axis (position and rotation) of the texture; that is why the 5 fps animation looks as smooth as the 60 fps animation.

      But along the X axis of the texture we don’t need interpolation, because each column corresponds to a child of the mesh. To prevent interpolation, we sample from the centre of each pixel. Look at this picture: https://storyprogramming.com/wp-content/uploads/2020/11/HowToPreventInterpolation.png

      You can see 3 pixels, plus white and black dots; the dots are the sampling positions. The black dots are the sampling positions without the remap. As you can see, two of the black dots lie between pixels, so Bilinear FilterMode interpolates between the values of adjacent pixels.

      The white dots are the sampling positions with the remap applied. They are at the centre of each pixel, so Bilinear FilterMode interpolates between values of the same pixel, simply returning the pixel’s value.

      Sorry for the late reply.
