Temporal anti-aliasing and contrast adaptive sharpening (#1161)

First version of temporal anti-aliasing (TAA) and contrast adaptive sharpening (CAS) for GA. It works well in most cases but still has a few issues that will need additional time. This change includes only the passes and shaders, with no exposure in the editor; TAA and CAS can be turned on by enabling their respective passes in the pipeline.

All of the code has been previously reviewed in smaller PRs into the taa_staging branch:
aws-lumberyard-dev#29
aws-lumberyard-dev#53
aws-lumberyard-dev#73
aws-lumberyard-dev#79
aws-lumberyard-dev#84

Main issues:

- Bloom doesn't play nice with TAA and seems to greatly amplify any flickering.
- AuxGeom jitters with the camera, so TAA doesn't currently work well in the editor.
- Transparencies don't have correct motion vectors. History rectification keeps this from looking too bad, but it could still be improved.
- There is still more that could be done to inhibit flickering, which usually comes from specular aliasing.
- Motion vectors aren't correct on POM unless PDO is turned on, which can result in some blurring during motion.
- SSAO can contribute to flickering in its default half-res configuration. Changing it to full res mitigates the problem.
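For context, the core per-pixel history update the TAA pass performs each frame can be sketched as a scalar model. This is a hedged illustration only (`TaaBlend` is a hypothetical name, not part of this change); the real pass works per channel in YCoCg space in the Taa shader:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Scalar model of the per-pixel TAA update. The history sample is clamped to
// mean +/- clampGamma * sigma of the current 3x3 neighborhood (variance
// clipping), then blended with the filtered current-frame sample.
float TaaBlend(float current, float history,
               float neighborhoodMean, float neighborhoodSigma,
               float clampGamma = 1.0f, float currentFrameWeight = 0.1f)
{
    float lo = neighborhoodMean - clampGamma * neighborhoodSigma;
    float hi = neighborhoodMean + clampGamma * neighborhoodSigma;
    float clampedHistory = std::clamp(history, lo, hi);
    // Exponential moving average: a 0.1 current-frame weight keeps roughly
    // the last ten frames of history.
    return clampedHistory + (current - clampedHistory) * currentFrameWeight;
}
```

When the history lands far outside the neighborhood statistics (for example after a disocclusion), the clamp pulls it to the nearest edge of the expected range before blending, which is what limits ghosting.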

Squashed merge of the following:

* [ATOM-13987] Initial checkin of Taa pass.

* TAA pass setup WIP. (does not work yet due to pass configuration issues).

* Taa WIP - Camera motion vectors fixed and hooked up. TAA does simple reprojection and rejection based on depth.

* Small update to use lerp and add some comments.

* Fix issue with attachments not being set up on bindings at initialization. Fixing issue with half-pixel offsets in TAA shader

* - Motion vector passes now use the same output with mesh motion vectors overwriting camera motion vectors.
- Taa pass now works with multiple pipelines.
- Cleaned up TAA shader a bit.

* Fixes from PR review.

* Adding check for multiple attachments of the same name with different resources in Pass::ImportAttachments().

* Adding camera jitter with configurable position count. Updated TAA to blend in tonemapped space.

* Fixes from PR review. Fixing camera motion vectors for background (infinite distance)

* Updates to taa shader from PR review

* Adding a rcp input color size.

* Fix comment on PassAttachment::Update()

* Updates for PR review.

* Fixing missing const on the FrameAttachment* in Pass's call to FindAttachment()

* Taa WIP - Adding filtering to both the current pixel and history. Adding rectification based on variance clipping. Adding some basic anti-flickering. Removing rejection based on depth.

* Updates from PR code review. Mostly better commenting and naming.

* Adding contrast adaptive sharpening based on AMD FidelityFX CAS to help with the softness added by TAA.

* Changing to using luminance for sharpening instead of just green. Added some comments.

* Moving Taa's NaN check to a better location. Disabling TAA and sharpening in prep for check in.

* Updates from PR feedback.
Ken Pruiksma 5 years ago committed by GitHub
parent 3b60bcc0f1
commit 9df995dd26

@ -0,0 +1,75 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "ContrastAdaptiveSharpeningTemplate",
"PassClass": "ComputePass",
"Slots": [
{
"Name": "InputColor",
"SlotType": "Input",
"ShaderInputName": "m_inputColor",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "OutputColor",
"SlotType": "Output",
"ShaderInputName": "m_outputColor",
"ScopeAttachmentUsage": "Shader"
}
],
"ImageAttachments": [
{
"Name": "Output",
"FormatSource": {
"Pass": "This",
"Attachment": "InputColor"
},
"SizeSource": {
"Source": {
"Pass": "This",
"Attachment": "InputColor"
}
},
"ImageDescriptor": {
"Format": "R16G16B16A16_FLOAT",
"BindFlags": "3",
"SharedQueueMask": "1"
}
}
],
"Connections": [
{
"LocalSlot": "OutputColor",
"AttachmentRef": {
"Pass": "This",
"Attachment": "Output"
}
}
],
"FallbackConnections": [
{
"Input": "InputColor",
"Output": "OutputColor"
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
"FilePath": "Shaders/Postprocessing/ContrastAdaptiveSharpening.shader"
},
"Make Fullscreen Pass": true,
"ShaderDataMappings": {
"FloatMappings": [
{
"Name": "m_strength",
"Value": 0.25
}
]
}
}
}
}
}

@ -341,6 +341,13 @@
"Attachment": "Depth"
}
},
{
"LocalSlot": "MotionVectors",
"AttachmentRef": {
"Pass": "MotionVectorPass",
"Attachment": "MotionVectorOutput"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {

@ -13,22 +13,11 @@
"SlotType": "Input",
"ScopeAttachmentUsage": "InputAssembly"
},
// Outputs...
// Input/Output...
{
"Name": "Output",
"SlotType": "Output",
"ScopeAttachmentUsage": "RenderTarget",
"LoadStoreAction": {
"ClearValue": {
"Value": [
0.0,
0.0,
0.0,
0.0
]
},
"LoadAction": "Clear"
}
"Name": "MotionInputOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget"
},
{
"Name": "OutputDepthStencil",
@ -46,19 +35,6 @@
}
],
"ImageAttachments": [
{
"Name": "MotionBuffer",
"SizeSource": {
"Source": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
},
"ImageDescriptor": {
"Format": "R16G16_FLOAT",
"SharedQueueMask": "Graphics"
}
},
{
"Name": "DepthStencil",
"SizeSource": {
@ -74,13 +50,6 @@
}
],
"Connections": [
{
"LocalSlot": "Output",
"AttachmentRef": {
"Pass": "This",
"Attachment": "MotionBuffer"
}
},
{
"LocalSlot": "OutputDepthStencil",
"AttachmentRef": {

@ -20,6 +20,19 @@
{
"Name": "SwapChainOutput",
"SlotType": "InputOutput"
},
{
"Name": "MotionVectorOutput",
"SlotType": "Output"
}
],
"Connections": [
{
"LocalSlot": "MotionVectorOutput",
"AttachmentRef": {
"Pass": "MeshMotionVectorPass",
"Attachment": "MotionInputOutput"
}
}
],
"PassRequests": [
@ -50,6 +63,13 @@
"Pass": "Parent",
"Attachment": "SkinnedMeshes"
}
},
{
"LocalSlot": "MotionInputOutput",
"AttachmentRef": {
"Pass": "CameraMotionVectorPass",
"Attachment": "Output"
}
}
],
"PassData": {

@ -284,6 +284,14 @@
"Name": "SMAA1xApplyPerceptualColorTemplate",
"Path": "Passes/SMAA1xApplyPerceptualColor.pass"
},
{
"Name": "TaaTemplate",
"Path": "Passes/Taa.pass"
},
{
"Name": "ContrastAdaptiveSharpeningTemplate",
"Path": "Passes/ContrastAdaptiveSharpening.pass"
},
{
"Name": "SsaoParentTemplate",
"Path": "Passes/SsaoParent.pass"

@ -16,6 +16,10 @@
"Name": "Depth",
"SlotType": "Input"
},
{
"Name": "MotionVectors",
"SlotType": "Input"
},
// SwapChain here is only used to reference the frame height and format
{
"Name": "SwapChainOutput",
@ -40,8 +44,8 @@
{
"LocalSlot": "Output",
"AttachmentRef": {
"Pass": "LightAdaptation",
"Attachment": "Output"
"Pass": "ContrastAdaptiveSharpeningPass",
"Attachment": "OutputColor"
}
},
{
@ -80,6 +84,34 @@
}
]
},
{
"Name": "TaaPass",
"TemplateName": "TaaTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "InputColor",
"AttachmentRef": {
"Pass": "SMAA1xApplyLinearHDRColorPass",
"Attachment": "OutputColor"
}
},
{
"LocalSlot": "InputDepth",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "Depth"
}
},
{
"LocalSlot": "MotionVectors",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "MotionVectors"
}
}
]
},
{
"Name": "DepthOfFieldPass",
"TemplateName": "DepthOfFieldTemplate",
@ -88,7 +120,7 @@
{
"LocalSlot": "DoFColorInput",
"AttachmentRef": {
"Pass": "SMAA1xApplyLinearHDRColorPass",
"Pass": "TaaPass",
"Attachment": "OutputColor"
}
},
@ -134,6 +166,20 @@
}
}
]
},
{
"Name": "ContrastAdaptiveSharpeningPass",
"TemplateName": "ContrastAdaptiveSharpeningTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "InputColor",
"AttachmentRef": {
"Pass": "LightAdaptation",
"Attachment": "Output"
}
}
]
}
]
}

@ -40,6 +40,12 @@
}
}
],
"FallbackConnections": [
{
"Input": "InputColor",
"Output": "OutputColor"
}
],
"PassRequests": [
{
"Name": "SMAAConvertToPerceptualColor",

@ -0,0 +1,113 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "TaaTemplate",
"PassClass": "TaaPass",
"Slots": [
{
"Name": "InputColor",
"SlotType": "Input",
"ShaderInputName": "m_inputColor",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "InputDepth",
"SlotType": "Input",
"ShaderInputName": "m_inputDepth",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "MotionVectors",
"SlotType": "Input",
"ShaderInputName": "m_motionVectors",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "LastFrameAccumulation",
"SlotType": "Input",
"ShaderInputName": "m_lastFrameAccumulation",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "OutputColor",
"SlotType": "Output",
"ShaderInputName": "m_outputColor",
"ScopeAttachmentUsage": "Shader"
}
],
"ImageAttachments": [
{
"Name": "Accumulation1",
"Lifetime": "Imported",
"FormatSource": {
"Pass": "This",
"Attachment": "InputColor"
},
"SizeSource": {
"Source": {
"Pass": "This",
"Attachment": "InputColor"
}
},
"ImageDescriptor": {
"Format": "R16G16B16A16_FLOAT",
"BindFlags": "3",
"SharedQueueMask": "1"
}
},
{
"Name": "Accumulation2",
"Lifetime": "Imported",
"FormatSource": {
"Pass": "This",
"Attachment": "InputColor"
},
"SizeSource": {
"Source": {
"Pass": "This",
"Attachment": "InputColor"
}
},
"ImageDescriptor": {
"Format": "R16G16B16A16_FLOAT",
"BindFlags": "3",
"SharedQueueMask": "1"
}
}
],
"FallbackConnections": [
{
"Input": "InputColor",
"Output": "OutputColor"
}
],
"PassData": {
"$type": "TaaPassData",
"ShaderAsset": {
"FilePath": "Shaders/Postprocessing/Taa.shader"
},
"Make Fullscreen Pass": true,
"ShaderDataMappings": {
"FloatMappings": [
{
"Name": "m_currentFrameContribution",
"Value": 0.1
},
{
"Name": "m_clampGamma",
"Value": 1.0
},
{
"Name": "m_maxDeviationBeforeDampening",
"Value": 0.5
}
]
},
"NumJitterPositions": 16
}
}
}
}

@ -39,10 +39,27 @@ PSOutput MainPS(VSOutput IN)
PSOutput OUT;
float depth = PassSrg::m_depthStencil.Sample(PassSrg::LinearSampler, IN.m_texCoord).r;
// If depth is 0, that means depth is on the far plane. This should be treated as being infinitely far
// away, not actually on the far plane, because the infinitely far background shouldn't move as a result
// of camera translation. Tweaking the depth to -near/far distance makes that happen. Keep in mind near
// and far are inverted, so this is normally a very small value.
if (depth == 0.0)
{
depth = -ViewSrg::GetFarZ() / ViewSrg::GetNearZ();
}
float2 clipPos = float2(mad(IN.m_texCoord.x, 2.0, -1.0), mad(IN.m_texCoord.y, -2.0, 1.0));
float4 worldPos = mul(ViewSrg::m_viewProjectionInverseMatrix, float4(clipPos, depth, 1.0));
float4 clipPosPrev = mul(ViewSrg::m_viewProjectionPrevMatrix, float4((worldPos / worldPos.w).xyz, 1.0));
clipPosPrev = (clipPosPrev / clipPosPrev.w);
// Clip space is from -1.0 to 1.0, so the motion vectors are 2x as big as they should be
OUT.m_motion = (clipPos - clipPosPrev.xy) * 0.5;
// Flip y to line up with uv coordinates
OUT.m_motion.y = -OUT.m_motion.y;
return OUT;
}

@ -41,5 +41,9 @@ PSOutput MainPS(VSOutput IN)
float2 motion = (clipPos.xy / clipPos.w - clipPosPrev.xy / clipPosPrev.w) * 0.5;
OUT.m_motion = motion;
// Flip y to line up with uv coordinates
OUT.m_motion.y = -OUT.m_motion.y;
return OUT;
}

@ -0,0 +1,85 @@
/*
* All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
* its licensors.
*
* For complete copyright and license terms please see the LICENSE at the root of this
* distribution (the "License"). All use of this software is governed by the License,
* or, if provided, by the license below or the license accompanying this file. Do not
* remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
#include <scenesrg.srgi>
#include <Atom/Features/PostProcessing/PostProcessUtil.azsli>
#define TILE_DIM_X 16
#define TILE_DIM_Y 16
ShaderResourceGroup PassSrg : SRG_PerPass
{
Texture2D<float4> m_inputColor;
RWTexture2D<float4> m_outputColor;
float m_strength; // Strength of the sharpening effect. Range from 0 to 1.
}
// Contrast Adaptive Sharpening, based on AMD FidelityFX CAS - https://gpuopen.com/fidelityfx-cas/
// This shader sharpens the input based on the contrast of the local neighborhood
// so that only areas that need sharpening are sharpened, while high contrast areas
// are mostly left alone.
[numthreads(TILE_DIM_X, TILE_DIM_Y, 1)]
void MainCS(
uint3 dispatchThreadID : SV_DispatchThreadID,
uint3 groupID : SV_GroupID,
uint groupIndex : SV_GroupIndex)
{
uint2 pixelCoord = dispatchThreadID.xy;
// Fetch local neighborhood to determine sharpening weight.
// a
// b c d
// e
float3 sampleA = PassSrg::m_inputColor[pixelCoord + int2( 0, -1)].rgb;
float3 sampleB = PassSrg::m_inputColor[pixelCoord + int2(-1, 0)].rgb;
float3 sampleC = PassSrg::m_inputColor[pixelCoord + int2( 0, 0)].rgb;
float3 sampleD = PassSrg::m_inputColor[pixelCoord + int2( 1, 0)].rgb;
float3 sampleE = PassSrg::m_inputColor[pixelCoord + int2( 0, 1)].rgb;
float lumA = GetLuminance(sampleA);
float lumB = GetLuminance(sampleB);
float lumC = GetLuminance(sampleC);
float lumD = GetLuminance(sampleD);
float lumE = GetLuminance(sampleE);
// Get the min and max luminance of the neighborhood.
float minLum = min(min(lumA, lumB), min(lumC, min(lumD, lumE)));
float maxLum = max(max(lumA, lumB), max(lumC, max(lumD, lumE)));
float dMinLum = minLum; // Distance from 0 to minimum
float dMaxLum = 1.0 - maxLum; // Distance from 1 to the maximum
// baseSharpening is higher when local contrast is lower to avoid over-sharpening.
float baseSharpening = min(dMinLum, dMaxLum) / max(maxLum, 0.0001);
baseSharpening = sqrt(baseSharpening); // bias towards more sharpening
// Negative weights for sharpening effect, center pixel is always weighted 1.
float developerMaximum = lerp(-0.125, -0.2, PassSrg::m_strength);
float weight = baseSharpening * developerMaximum;
float totalWeight = weight * 4 + 1.0;
float3 output =
(
sampleA * weight +
sampleB * weight +
sampleC +
sampleD * weight +
sampleE * weight
) / totalWeight;
PassSrg::m_outputColor[pixelCoord] = float4(output, 1.0);
}

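The weight math in the CAS shader above can be modeled on the CPU for a single channel. This hedged C++ sketch (`CasSharpen` is a hypothetical helper name for illustration, not part of the change) mirrors the shader's baseSharpening and weight computation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Single-channel model of the CAS weight math. The real shader computes one
// weight from luminance and applies it to all of RGB.
float CasSharpen(float up, float left, float center, float right, float down,
                 float strength) // strength maps to m_strength, range 0 to 1
{
    float minLum = std::min({up, left, center, right, down});
    float maxLum = std::max({up, left, center, right, down});
    // More headroom (distance to 0 below the min and to 1 above the max)
    // allows more sharpening; high-contrast neighborhoods get less.
    float base = std::min(minLum, 1.0f - maxLum) / std::max(maxLum, 0.0001f);
    base = std::sqrt(std::max(base, 0.0f)); // bias towards more sharpening
    // lerp(-0.125, -0.2, strength): negative cross weights, center stays at 1.
    float weight = base * ((1.0f - strength) * -0.125f + strength * -0.2f);
    float totalWeight = 4.0f * weight + 1.0f;
    return ((up + left + right + down) * weight + center) / totalWeight;
}
```

A flat neighborhood passes through unchanged, while a center pixel brighter than its cross neighbors is pushed further away from them.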
@ -0,0 +1,11 @@
{
"Source": "ContrastAdaptiveSharpening",
"ProgramSettings": {
"EntryPoints": [
{
"name": "MainCS",
"type": "Compute"
}
]
}
}

@ -0,0 +1,271 @@
/*
* All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
* its licensors.
*
* For complete copyright and license terms please see the LICENSE at the root of this
* distribution (the "License"). All use of this software is governed by the License,
* or, if provided, by the license below or the license accompanying this file. Do not
* remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
#include <scenesrg.srgi>
#include <Atom/Features/PostProcessing/PostProcessUtil.azsli>
#define TILE_DIM_X 16
#define TILE_DIM_Y 16
ShaderResourceGroup PassSrg : SRG_PerPass
{
Texture2D<float4> m_inputColor;
Texture2D<float4> m_inputDepth;
Texture2D<float2> m_motionVectors;
Texture2D<float4> m_lastFrameAccumulation;
RWTexture2D<float4> m_outputColor;
Sampler LinearSampler
{
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
AddressW = Clamp;
};
// Current frame's default contribution to the history.
float m_currentFrameContribution;
// Increase this value for weaker clamping, decrease for stronger clamping, default 1.0.
float m_clampGamma;
// Default 0.5, used for flicker reduction. Any sample further than this many standard deviations outside the neighborhood
// will have its weight decreased. The further outside the max deviation, the more its weight is reduced.
float m_maxDeviationBeforeDampening;
struct Constants
{
uint2 m_inputColorSize;
float2 m_inputColorRcpSize;
// 3x3 filter weights
// 8 2 6
// 3 0 1
// 7 4 5
float4 m_weights1; // 0 1 2 3
float4 m_weights2; // 4 5 6 7
float4 m_weights3; // 8 x x x
};
Constants m_constantData;
}
static const int2 offsets[9] =
{
// Center
int2(0, 0),
// Cross
int2( 1, 0),
int2( 0,-1),
int2(-1, 0),
int2( 0, 1),
// Diagonals
int2( 1,-1),
int2( 1, 1),
int2(-1,-1),
int2(-1, 1),
};
float3 RgbToYCoCg(float3 rgb)
{
const float3x3 conversionMatrix =
{
0.25, 0.50, 0.25,
0.50, 0.00, -0.50,
-0.25, 0.50, -0.25
};
return mul(conversionMatrix, rgb);
}
float3 YCoCgToRgb(float3 yCoCg)
{
const float3x3 conversionMatrix =
{
1.0, 1.0, -1.0,
1.0, 0.0, 1.0,
1.0, -1.0, -1.0
};
return mul(conversionMatrix, yCoCg);
}
// Sample a texture with a 5 tap Catmull-Rom. Consider ripping this out and putting in a more general location.
// This function samples a 4x4 neighborhood around the uv. By taking advantage of bilinear filtering this can be
// done with only 9 taps on the edges between pixels. The cost is further reduced by dropping the 4 diagonal
// samples as their influence is negligible.
float4 SampleCatmullRom5Tap(Texture2D<float4> texture, SamplerState linearSampler, float2 uv, float2 textureSize, float2 rcpTextureSize, float sharpness)
{
// Think of sample locations in the 4x4 neighborhood as having a top left coordinate of 0,0 and
// a bottom right coordinate of 3,3.
// Find the position in texture space then round it to get the center of the 1,1 pixel (tc1)
float2 texelPos = uv * textureSize;
float2 tc1= floor(texelPos - 0.5) + 0.5;
// Offset from center position to texel
float2 f = texelPos - tc1;
// Compute Catmull-Rom weights based on the offset and sharpness
float c = sharpness;
float2 w0 = f * (-c + f * (2.0 * c - c * f));
float2 w1 = 1.0 + f * f * (c -3.0 + (2.0 - c) * f);
float2 w2 = f * (c + f * ((3.0 - 2.0 * c) - (2.0 - c) * f));
float2 w3 = f * f * (c * f - c);
float2 w12 = w1 + w2;
// Compute uv coordinates for sampling the texture
float2 tc0 = (tc1 - 1.0f) * rcpTextureSize;
float2 tc3 = (tc1 + 2.0f) * rcpTextureSize;
float2 tc12 = (tc1 + w2 / w12) * rcpTextureSize;
// Compute sample weights
float sw0 = w12.x * w0.y;
float sw1 = w0.x * w12.y;
float sw2 = w12.x * w12.y;
float sw3 = w3.x * w12.y;
float sw4 = w12.x * w3.y;
// total weight of samples to normalize result.
float totalWeight = sw0 + sw1 + sw2 + sw3 + sw4;
float4 result = 0.0f;
result += texture.SampleLevel(linearSampler, float2(tc12.x, tc0.y), 0.0) * sw0;
result += texture.SampleLevel(linearSampler, float2( tc0.x, tc12.y), 0.0) * sw1;
result += texture.SampleLevel(linearSampler, float2(tc12.x, tc12.y), 0.0) * sw2;
result += texture.SampleLevel(linearSampler, float2( tc3.x, tc12.y), 0.0) * sw3;
result += texture.SampleLevel(linearSampler, float2(tc12.x, tc3.y), 0.0) * sw4;
return result / totalWeight;
}
[numthreads(TILE_DIM_X, TILE_DIM_Y, 1)]
void MainCS(
uint3 dispatchThreadID : SV_DispatchThreadID,
uint3 groupID : SV_GroupID,
uint groupIndex : SV_GroupIndex)
{
uint2 pixelCoord = dispatchThreadID.xy;
const float filterWeights[9] =
{
PassSrg::m_constantData.m_weights1.x,
PassSrg::m_constantData.m_weights1.y,
PassSrg::m_constantData.m_weights1.z,
PassSrg::m_constantData.m_weights1.w,
PassSrg::m_constantData.m_weights2.x,
PassSrg::m_constantData.m_weights2.y,
PassSrg::m_constantData.m_weights2.z,
PassSrg::m_constantData.m_weights2.w,
PassSrg::m_constantData.m_weights3.x,
};
float3 sum = 0.0;
float3 sumOfSquares = 0.0;
float nearestDepth = 1.0;
uint2 nearestDepthPixelCoord;
float3 thisFrameColor = float3(0.0, 0.0, 0.0);
// Sample the neighborhood to filter the current pixel, gather statistics about
// its neighbors, and find the closest neighbor to choose a motion vector.
[unroll] for (int i = 0; i < 9; ++i)
{
uint2 neighborhoodPixelCoord = pixelCoord + offsets[i];
float3 neighborhoodColor = PassSrg::m_inputColor[neighborhoodPixelCoord].rgb;
// Convert to YCoCg space for better clipping.
neighborhoodColor = RgbToYCoCg(neighborhoodColor);
sum += neighborhoodColor;
sumOfSquares += neighborhoodColor * neighborhoodColor;
thisFrameColor += neighborhoodColor * filterWeights[i];
// Find the coordinate of the nearest depth
float neighborhoodDepth = PassSrg::m_inputDepth[neighborhoodPixelCoord].r;
if (neighborhoodDepth < nearestDepth)
{
nearestDepth = neighborhoodDepth;
nearestDepthPixelCoord = neighborhoodPixelCoord;
}
}
// Variance clipping, see http://developer.download.nvidia.com/gameworks/events/GDC2016/msalvi_temporal_supersampling.pdf
float3 mean = sum / 9.0;
float3 standardDeviation = max(0.0, sqrt(sumOfSquares / 9.0 - mean * mean));
standardDeviation *= PassSrg::m_clampGamma;
// Grab the motion vector from the closest pixel in the 3x3 neighborhood. This is done so that motion vectors correctly
// track edges. For instance, if a pixel lies on the edge of a moving object, where the color is a blend of the
// foreground and background, it's possible for the pixel center to hit the (not moving) background. However, the correct
// history for this pixel will be the location this edge was the previous frame. By choosing the motion of the nearest
// pixel in the neighborhood that edge will be correctly tracked.
// Motion vectors store the direction of movement, so to look up where things were in the previous frame, it's negated.
float2 previousPositionOffset = -PassSrg::m_motionVectors[nearestDepthPixelCoord];
// Get the uv coordinate for the previous frame.
float2 rcpSize = PassSrg::m_constantData.m_inputColorRcpSize;
float2 uvCoord = (pixelCoord + 0.5f) * rcpSize;
float2 uvOld = uvCoord + previousPositionOffset;
float2 previousPositionOffsetInPixels = float2(PassSrg::m_constantData.m_inputColorSize) * previousPositionOffset;
// Sample the last frame using a 5-tap Catmull-Rom
float3 lastFrameColor = SampleCatmullRom5Tap(PassSrg::m_lastFrameAccumulation, PassSrg::LinearSampler, uvOld, PassSrg::m_constantData.m_inputColorSize, PassSrg::m_constantData.m_inputColorRcpSize, 0.5).rgb;
lastFrameColor = RgbToYCoCg(lastFrameColor);
// Last frame color relative to mean
float3 centerColorOffset = lastFrameColor - mean;
float3 colorOffsetStandardDeviationRatio = abs(standardDeviation / centerColorOffset);
// Clamp the color by the aabb of the standardDeviation. Can never be greater than 1, so will always be inside or on the bounds of the aabb.
float clampedColorLength = min(min(min(1, colorOffsetStandardDeviationRatio.x), colorOffsetStandardDeviationRatio.y), colorOffsetStandardDeviationRatio.z);
// Calculate the true clamped color by offsetting it back from the mean.
float3 lastFrameClampedColor = mean + centerColorOffset * clampedColorLength;
// Anti-flickering - Reduce current frame weight the more it deviates from the history based on the standard deviation of the neighborhood.
// Start reducing weight at differences greater than m_maxDeviationBeforeDampening standard deviations in luminance.
float standardDeviationWeight = standardDeviation.r * PassSrg::m_maxDeviationBeforeDampening;
float3 sdFromLastFrame = standardDeviationWeight / abs(lastFrameClampedColor.r - thisFrameColor.r);
float currentFrameWeight = PassSrg::m_currentFrameContribution;
currentFrameWeight *= saturate(sdFromLastFrame * sdFromLastFrame);
// Back to Rgb space
thisFrameColor = YCoCgToRgb(thisFrameColor);
lastFrameClampedColor = YCoCgToRgb(lastFrameClampedColor);
// Out of bounds protection.
if (any(uvOld > 1.0) || any(uvOld < 0.0))
{
currentFrameWeight = 1.0f;
}
// Blend should be in perceptual space, so tonemap first
float luminance = GetLuminance(thisFrameColor);
thisFrameColor = thisFrameColor / (1 + luminance);
lastFrameClampedColor = lastFrameClampedColor / (1 + luminance);
// Blend color with history
float3 color = lerp(lastFrameClampedColor, thisFrameColor, currentFrameWeight);
// Un-tonemap color
color = color * (1.0 + luminance);
// NaN protection (without this NaNs could get in the history buffer and quickly consume the frame)
color = max(0.0, color);
PassSrg::m_outputColor[pixelCoord].rgb = color;
}

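The `RgbToYCoCg` and `YCoCgToRgb` matrices used for history rectification above are exact inverses; a small C++ port makes that easy to check (the `Color` struct is illustrative, standing in for the shader's float3):

```cpp
#include <cassert>
#include <cmath>

// C++ port of the RgbToYCoCg / YCoCgToRgb matrices from the TAA shader.
struct Color { float r, g, b; };

Color RgbToYCoCg(Color c)
{
    return {  0.25f * c.r + 0.50f * c.g + 0.25f * c.b,   // Y  (luma)
              0.50f * c.r               - 0.50f * c.b,   // Co
             -0.25f * c.r + 0.50f * c.g - 0.25f * c.b }; // Cg
}

Color YCoCgToRgb(Color c) // here c holds (Y, Co, Cg) in its three fields
{
    return { c.r + c.g - c.b,
             c.r       + c.b,
             c.r - c.g - c.b };
}
```

Because the round trip is lossless (up to float precision), clipping in YCoCg does not tint the color, it only limits how far the history can drift from the neighborhood.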
@ -0,0 +1,11 @@
{
"Source": "Taa",
"ProgramSettings": {
"EntryPoints": [
{
"name": "MainCS",
"type": "Compute"
}
]
}
}

@ -89,6 +89,7 @@ set(FILES
Passes/CascadedShadowmaps.pass
Passes/CheckerboardResolveColor.pass
Passes/CheckerboardResolveDepth.pass
Passes/ContrastAdaptiveSharpening.pass
Passes/ConvertToAcescg.pass
Passes/DebugOverlayParent.pass
Passes/DeferredFog.pass
@ -207,6 +208,7 @@ set(FILES
Passes/SsaoHalfRes.pass
Passes/SsaoParent.pass
Passes/SubsurfaceScattering.pass
Passes/Taa.pass
Passes/Transparent.pass
Passes/TransparentParent.pass
Passes/UI.pass

@ -64,6 +64,7 @@
#include <PostProcessing/SMAANeighborhoodBlendingPass.h>
#include <PostProcessing/SsaoPasses.h>
#include <PostProcessing/SubsurfaceScatteringPass.h>
#include <PostProcessing/TaaPass.h>
#include <PostProcessing/BloomDownsamplePass.h>
#include <PostProcessing/BloomBlurPass.h>
#include <PostProcessing/BloomCompositePass.h>
@ -133,6 +134,7 @@ namespace AZ
PostProcessFeatureProcessor::Reflect(context);
ImGuiPassData::Reflect(context);
RayTracingPassData::Reflect(context);
TaaPassData::Reflect(context);
LightingPreset::Reflect(context);
ModelPreset::Reflect(context);
@ -231,6 +233,9 @@ namespace AZ
// Add Depth Downsample/Upsample passes
passSystem->AddPassCreator(Name("DepthUpsamplePass"), &DepthUpsamplePass::Create);
// Add Taa Pass
passSystem->AddPassCreator(Name("TaaPass"), &TaaPass::Create);
// Add DepthOfField pass
passSystem->AddPassCreator(Name("DepthOfFieldCompositePass"), &DepthOfFieldCompositePass::Create);
passSystem->AddPassCreator(Name("DepthOfFieldBokehBlurPass"), &DepthOfFieldBokehBlurPass::Create);

@ -0,0 +1,247 @@
/*
* All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
* its licensors.
*
* For complete copyright and license terms please see the LICENSE at the root of this
* distribution (the "License"). All use of this software is governed by the License,
* or, if provided, by the license below or the license accompanying this file. Do not
* remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
#include <PostProcessing/TaaPass.h>
#include <AzCore/Math/Random.h>
#include <Atom/RPI.Public/Image/AttachmentImagePool.h>
#include <Atom/RPI.Public/Image/ImageSystemInterface.h>
#include <Atom/RPI.Public/Pass/PassUtils.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RPI.Public/View.h>
#include <Atom/RPI.Reflect/Pass/PassName.h>
namespace AZ::Render
{
RPI::Ptr<TaaPass> TaaPass::Create(const RPI::PassDescriptor& descriptor)
{
RPI::Ptr<TaaPass> pass = aznew TaaPass(descriptor);
return pass;
}
TaaPass::TaaPass(const RPI::PassDescriptor& descriptor)
: Base(descriptor)
{
uint32_t numJitterPositions = 8;
const TaaPassData* taaPassData = RPI::PassUtils::GetPassData<TaaPassData>(descriptor);
if (taaPassData)
{
numJitterPositions = taaPassData->m_numJitterPositions;
}
// The coprimes 2, 3 are commonly used for halton sequences because they have an even distribution even for
// few samples. With larger primes you need to offset by some amount between each prime to have the same
// effect. We could allow this to be configurable in the future.
SetupSubPixelOffsets(2, 3, numJitterPositions);
}
void TaaPass::CompileResources(const RHI::FrameGraphCompileContext& context)
{
struct TaaConstants
{
AZStd::array<uint32_t, 2> m_size = { 1, 1 };
AZStd::array<float, 2> m_rcpSize = { 0.0, 0.0 };
AZStd::array<float, 4> m_weights1 = { 0.0 };
AZStd::array<float, 4> m_weights2 = { 0.0 };
AZStd::array<float, 4> m_weights3 = { 0.0 };
};
TaaConstants cb;
RHI::Size inputSize = m_lastFrameAccumulationBinding->m_attachment->m_descriptor.m_image.m_size;
cb.m_size[0] = inputSize.m_width;
cb.m_size[1] = inputSize.m_height;
cb.m_rcpSize[0] = 1.0f / inputSize.m_width;
cb.m_rcpSize[1] = 1.0f / inputSize.m_height;
Offset jitterOffset = m_subPixelOffsets.at(m_offsetIndex);
GenerateFilterWeights(Vector2(jitterOffset.m_xOffset, jitterOffset.m_yOffset));
cb.m_weights1 = { m_filterWeights[0], m_filterWeights[1], m_filterWeights[2], m_filterWeights[3] };
cb.m_weights2 = { m_filterWeights[4], m_filterWeights[5], m_filterWeights[6], m_filterWeights[7] };
cb.m_weights3 = { m_filterWeights[8], 0.0f, 0.0f, 0.0f };
m_shaderResourceGroup->SetConstant(m_constantDataIndex, cb);
Base::CompileResources(context);
}
void TaaPass::FrameBeginInternal(FramePrepareParams params)
{
RHI::Size inputSize = m_inputColorBinding->m_attachment->m_descriptor.m_image.m_size;
Vector2 rcpInputSize = Vector2(1.0 / inputSize.m_width, 1.0 / inputSize.m_height);
RPI::ViewPtr view = GetRenderPipeline()->GetDefaultView();
m_offsetIndex = (m_offsetIndex + 1) % m_subPixelOffsets.size();
Offset offset = m_subPixelOffsets.at(m_offsetIndex);
view->SetClipSpaceOffset(offset.m_xOffset * rcpInputSize.GetX(), offset.m_yOffset * rcpInputSize.GetY());
m_lastFrameAccumulationBinding->SetAttachment(m_accumulationAttachments[m_accumulationOuptutIndex]);
m_accumulationOuptutIndex ^= 1; // swap which attachment is the output and last frame
UpdateAttachmentImage(m_accumulationAttachments[m_accumulationOuptutIndex]);
m_outputColorBinding->SetAttachment(m_accumulationAttachments[m_accumulationOuptutIndex]);
Base::FrameBeginInternal(params);
}
void TaaPass::ResetInternal()
{
m_accumulationAttachments[0].reset();
m_accumulationAttachments[1].reset();
m_inputColorBinding = nullptr;
m_lastFrameAccumulationBinding = nullptr;
m_outputColorBinding = nullptr;
Base::ResetInternal();
}
void TaaPass::BuildAttachmentsInternal()
{
m_accumulationAttachments[0] = FindAttachment(Name("Accumulation1"));
m_accumulationAttachments[1] = FindAttachment(Name("Accumulation2"));
bool hasAttachments = m_accumulationAttachments[0] && m_accumulationAttachments[1];
AZ_Error("TaaPass", hasAttachments, "TaaPass must have Accumulation1 and Accumulation2 ImageAttachments defined.");
if (hasAttachments)
{
// Make sure the attachments have images when the pass first loads.
for (auto i : { 0, 1 })
{
if (!m_accumulationAttachments[i]->m_importedResource)
{
UpdateAttachmentImage(m_accumulationAttachments[i]);
}
}
}
m_inputColorBinding = FindAttachmentBinding(Name("InputColor"));
AZ_Error("TaaPass", m_inputColorBinding, "TaaPass requires a slot for InputColor.");
m_lastFrameAccumulationBinding = FindAttachmentBinding(Name("LastFrameAccumulation"));
AZ_Error("TaaPass", m_lastFrameAccumulationBinding, "TaaPass requires a slot for LastFrameAccumulation.");
m_outputColorBinding = FindAttachmentBinding(Name("OutputColor"));
AZ_Error("TaaPass", m_outputColorBinding, "TaaPass requires a slot for OutputColor.");
// Set up the attachment for last frame accumulation and output color if it's never been done to
// ensure SRG indices are set up correctly by the pass system.
if (m_lastFrameAccumulationBinding->m_attachment == nullptr)
{
m_lastFrameAccumulationBinding->SetAttachment(m_accumulationAttachments[0]);
m_outputColorBinding->SetAttachment(m_accumulationAttachments[1]);
}
Base::BuildAttachmentsInternal();
}
void TaaPass::UpdateAttachmentImage(RPI::Ptr<RPI::PassAttachment>& attachment)
{
if (!attachment)
{
return;
}
// update the image attachment descriptor to sync up size and format
attachment->Update(true);
RHI::ImageDescriptor& imageDesc = attachment->m_descriptor.m_image;
RPI::AttachmentImage* currentImage = azrtti_cast<RPI::AttachmentImage*>(attachment->m_importedResource.get());
if (attachment->m_importedResource && imageDesc.m_size == currentImage->GetDescriptor().m_size)
{
// If there's a resource already and the size didn't change, just keep using the old AttachmentImage.
return;
}
Data::Instance<RPI::AttachmentImagePool> pool = RPI::ImageSystemInterface::Get()->GetSystemAttachmentPool();
// set the bind flags
imageDesc.m_bindFlags |= RHI::ImageBindFlags::Color | RHI::ImageBindFlags::ShaderReadWrite;
// The ImageViewDescriptor must be specified to make sure the frame graph compiler doesn't treat this as a transient image.
RHI::ImageViewDescriptor viewDesc = RHI::ImageViewDescriptor::Create(imageDesc.m_format, 0, 0);
viewDesc.m_aspectFlags = RHI::ImageAspectFlags::Color;
viewDesc.m_overrideBindFlags = RHI::ImageBindFlags::ShaderReadWrite;
// The full path name is needed for the attachment image so it's not deduplicated from accumulation images in different pipelines.
AZStd::string imageName = RPI::ConcatPassString(GetPathName(), attachment->m_path);
auto attachmentImage = RPI::AttachmentImage::Create(*pool.get(), imageDesc, Name(imageName), nullptr, &viewDesc);
attachment->m_path = attachmentImage->GetAttachmentId();
attachment->m_importedResource = attachmentImage;
}
void TaaPass::SetupSubPixelOffsets(uint32_t haltonX, uint32_t haltonY, uint32_t length)
{
m_subPixelOffsets.resize(length);
HaltonSequence<2> sequence = HaltonSequence<2>({haltonX, haltonY});
sequence.FillHaltonSequence(m_subPixelOffsets.begin(), m_subPixelOffsets.end());
// Remap to the -1.0 to 1.0 range. The view needs offsets in clip space,
// and doing it here saves a calculation in FrameBeginInternal().
AZStd::for_each(m_subPixelOffsets.begin(), m_subPixelOffsets.end(),
[](Offset& offset)
{
offset.m_xOffset = 2.0f * offset.m_xOffset - 1.0f;
offset.m_yOffset = 2.0f * offset.m_yOffset - 1.0f;
}
);
}
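`HaltonSequence<2>` is an engine utility whose implementation isn't part of this diff. As a rough standalone sketch of the values the remap above produces (`RadicalInverse` and `MakeOffsets` are hypothetical names here, and the engine's sequence may start at a different index):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Radical inverse of 'index' in the given base: digit-reverse the index
// across the radix point. Halton(base) is this evaluated at 1, 2, 3, ...
static float RadicalInverse(uint32_t index, uint32_t base)
{
    float result = 0.0f;
    float fraction = 1.0f / static_cast<float>(base);
    while (index > 0)
    {
        result += static_cast<float>(index % base) * fraction;
        index /= base;
        fraction /= static_cast<float>(base);
    }
    return result;
}

// Produce 'length' jitter offsets from the (2, 3) Halton pair and remap
// each component from [0, 1) to [-1, 1), as SetupSubPixelOffsets() does.
static std::vector<std::pair<float, float>> MakeOffsets(uint32_t length)
{
    std::vector<std::pair<float, float>> offsets;
    offsets.reserve(length);
    for (uint32_t i = 1; i <= length; ++i)
    {
        offsets.emplace_back(
            2.0f * RadicalInverse(i, 2) - 1.0f,
            2.0f * RadicalInverse(i, 3) - 1.0f);
    }
    return offsets;
}
```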
// Approximation of a Blackman Harris window function of width 3.3.
// https://en.wikipedia.org/wiki/Window_function#Blackman%E2%80%93Harris_window
static float BlackmanHarris(AZ::Vector2 uv)
{
return expf(-2.29f * (uv.GetX() * uv.GetX() + uv.GetY() * uv.GetY()));
}
// Generates filter weights for the 3x3 neighborhood of a pixel. Since jitter positions are the
// same for every pixel we can calculate this once here and upload to the SRG.
// Jitter weights are based on a window function centered at the pixel center (we use Blackman-Harris).
// As the jitter position moves around, some neighborhood locations decrease in weight, and others
// increase in weight based on their distance from the center of the pixel.
void TaaPass::GenerateFilterWeights(AZ::Vector2 jitterOffset)
{
static const AZStd::array<Vector2, 9> pixelOffsets =
{
// Center
Vector2(0.0f, 0.0f),
// Cross
Vector2( 1.0f, 0.0f),
Vector2( 0.0f, 1.0f),
Vector2(-1.0f, 0.0f),
Vector2( 0.0f, -1.0f),
// Diagonals
Vector2( 1.0f, 1.0f),
Vector2( 1.0f, -1.0f),
Vector2(-1.0f, 1.0f),
Vector2(-1.0f, -1.0f),
};
float sum = 0.0f;
for (uint32_t i = 0; i < 9; ++i)
{
m_filterWeights[i] = BlackmanHarris(pixelOffsets[i] + jitterOffset);
sum += m_filterWeights[i];
}
// Normalize the weights so they sum to 1.0.
float normalization = 1.0f / sum;
for (uint32_t i = 0; i < 9; ++i)
{
m_filterWeights[i] *= normalization;
}
}
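A minimal standalone version of the weight generation above, useful for sanity-checking the normalization (`BlackmanHarrisApprox` and `ComputeFilterWeights` are hypothetical stand-ins for the pass members, using the same Gaussian fit and tap order):

```cpp
#include <array>
#include <cmath>

// Approximate Blackman-Harris window, same Gaussian fit as BlackmanHarris() above.
static float BlackmanHarrisApprox(float x, float y)
{
    return std::exp(-2.29f * (x * x + y * y));
}

// Normalized 3x3 filter weights for a given jitter offset, mirroring
// TaaPass::GenerateFilterWeights(): center, cross, then diagonal taps.
static std::array<float, 9> ComputeFilterWeights(float jitterX, float jitterY)
{
    static const float offsets[9][2] = {
        {0.0f, 0.0f},                                              // center
        {1.0f, 0.0f}, {0.0f, 1.0f}, {-1.0f, 0.0f}, {0.0f, -1.0f}, // cross
        {1.0f, 1.0f}, {1.0f, -1.0f}, {-1.0f, 1.0f}, {-1.0f, -1.0f}}; // diagonals
    std::array<float, 9> weights{};
    float sum = 0.0f;
    for (int i = 0; i < 9; ++i)
    {
        weights[i] = BlackmanHarrisApprox(offsets[i][0] + jitterX, offsets[i][1] + jitterY);
        sum += weights[i];
    }
    for (float& w : weights)
    {
        w /= sum; // normalize so the weights sum to 1.0
    }
    return weights;
}
```

With zero jitter the center tap dominates and opposite taps are symmetric; a nonzero jitter skews weight toward the taps the sample moved closer to.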
} // namespace AZ::Render

@ -0,0 +1,105 @@
/*
* All or portions of this file Copyright (c) Amazon.com, Inc. or its affiliates or
* its licensors.
*
* For complete copyright and license terms please see the LICENSE at the root of this
* distribution (the "License"). All use of this software is governed by the License,
* or, if provided, by the license below or the license accompanying this file. Do not
* remove or modify any license notices. This file is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
*
*/
#pragma once
#include <Atom/RPI.Public/Pass/ComputePass.h>
#include <Atom/RPI.Reflect/Pass/ComputePassData.h>
namespace AZ::Render
{
//! Custom data for the Taa Pass.
struct TaaPassData
: public RPI::ComputePassData
{
AZ_RTTI(TaaPassData, "{BCDF5C7D-7A78-4C69-A460-FA6899C3B960}", ComputePassData);
AZ_CLASS_ALLOCATOR(TaaPassData, SystemAllocator, 0);
TaaPassData() = default;
virtual ~TaaPassData() = default;
static void Reflect(ReflectContext* context)
{
if (auto* serializeContext = azrtti_cast<SerializeContext*>(context))
{
serializeContext->Class<TaaPassData, RPI::ComputePassData>()
->Version(1)
->Field("NumJitterPositions", &TaaPassData::m_numJitterPositions)
;
}
}
uint32_t m_numJitterPositions = 8;
};
class TaaPass : public RPI::ComputePass
{
using Base = RPI::ComputePass;
AZ_RPI_PASS(TaaPass);
public:
AZ_RTTI(AZ::Render::TaaPass, "{AB3BD4EA-33D7-477F-82B4-21DDFB517499}", Base);
AZ_CLASS_ALLOCATOR(TaaPass, SystemAllocator, 0);
virtual ~TaaPass() = default;
/// Creates a TaaPass
static RPI::Ptr<TaaPass> Create(const RPI::PassDescriptor& descriptor);
private:
TaaPass(const RPI::PassDescriptor& descriptor);
// Scope producer functions...
void CompileResources(const RHI::FrameGraphCompileContext& context) override;
// Pass behavior overrides...
void FrameBeginInternal(FramePrepareParams params) override;
void ResetInternal() override;
void BuildAttachmentsInternal() override;
void UpdateAttachmentImage(RPI::Ptr<RPI::PassAttachment>& attachment);
void SetupSubPixelOffsets(uint32_t haltonX, uint32_t haltonY, uint32_t length);
void GenerateFilterWeights(AZ::Vector2 jitterOffset);
RHI::ShaderInputNameIndex m_outputIndex = "m_output";
RHI::ShaderInputNameIndex m_lastFrameAccumulationIndex = "m_lastFrameAccumulation";
RHI::ShaderInputNameIndex m_constantDataIndex = "m_constantData";
Data::Instance<RPI::PassAttachment> m_accumulationAttachments[2];
RPI::PassAttachmentBinding* m_inputColorBinding = nullptr;
RPI::PassAttachmentBinding* m_lastFrameAccumulationBinding = nullptr;
RPI::PassAttachmentBinding* m_outputColorBinding = nullptr;
struct Offset
{
Offset() = default;
// Constructor for implicit conversion from array output by HaltonSequence.
Offset(AZStd::array<float, 2> offsets)
: m_xOffset(offsets[0])
, m_yOffset(offsets[1])
{};
float m_xOffset = 0.0f;
float m_yOffset = 0.0f;
};
AZStd::array<float, 9> m_filterWeights = { 0.0f };
AZStd::vector<Offset> m_subPixelOffsets;
uint32_t m_offsetIndex = 0;
uint8_t m_accumulationOutputIndex = 0;
};
} // namespace AZ::Render

@ -252,6 +252,8 @@ set(FILES
Source/PostProcessing/SsaoPasses.h
Source/PostProcessing/SubsurfaceScatteringPass.cpp
Source/PostProcessing/SubsurfaceScatteringPass.h
Source/PostProcessing/TaaPass.h
Source/PostProcessing/TaaPass.cpp
Source/RayTracing/RayTracingFeatureProcessor.h
Source/RayTracing/RayTracingFeatureProcessor.cpp
Source/RayTracing/RayTracingAccelerationStructurePass.cpp

@ -89,6 +89,12 @@ namespace AZ
return m_attachmentDatabase.IsAttachmentValid(attachmentId);
}
//! Returns the FrameAttachment for a given AttachmentId, or nullptr if not found.
const FrameAttachment* FindAttachment(const AttachmentId& attachmentId) const
{
return m_attachmentDatabase.FindAttachment(attachmentId);
}
//! Resolves an attachment id to a buffer descriptor. This is useful when accessing buffer information for
//! an attachment that was declared in a different scope.
//! \param attachmentId The attachment id used to lookup the descriptors.

@ -184,8 +184,8 @@ namespace AZ
//! Collect all different view tags from this pass
virtual void GetPipelineViewTags(SortedPipelineViewTags& outTags) const;
//! Adds this pass' DrawListTags to the outDrawListMask.
virtual void GetViewDrawListInfo(RHI::DrawListMask& outDrawListMask, PassesByDrawList& outPassesByDrawList, const PipelineViewTag& viewTag) const;
//! Check if the pass has a DrawListTag. Pass' DrawListTag can be used to filter draw items.
virtual RHI::DrawListTag GetDrawListTag() const;

@ -52,7 +52,8 @@ namespace AZ
const RHI::TransientBufferDescriptor GetTransientBufferDescriptor() const;
//! Updates the size and format of this attachment using the sources below if specified
void Update();
//! @param updateImportedAttachments - Imported attachments will only be updated if this is true.
void Update(bool updateImportedAttachments = false);
//! Sets all formats to the nearest device-supported formats and warns if changes were made
void ValidateDeviceFormats(const AZStd::vector<RHI::Format>& formatFallbacks, RHI::FormatCapabilities capabilities = RHI::FormatCapabilities::None);

@ -88,6 +88,9 @@ namespace AZ
//! Sets the viewToClip matrix and recalculates the other matrices
void SetViewToClipMatrix(const AZ::Matrix4x4& viewToClip);
//! Sets a pixel offset on the view, usually used for jittering the camera for anti-aliasing techniques.
void SetClipSpaceOffset(float xOffset, float yOffset);
const AZ::Matrix4x4& GetWorldToViewMatrix() const;
//! Use GetViewToWorldMatrix().GetTranslation() to get the camera's position.
const AZ::Matrix4x4& GetViewToWorldMatrix() const;
@ -173,7 +176,6 @@ namespace AZ
Matrix4x4 m_worldToViewMatrix;
Matrix4x4 m_viewToWorldMatrix;
Matrix4x4 m_viewToClipMatrix;
Matrix4x4 m_clipToViewMatrix;
Matrix4x4 m_clipToWorldMatrix;
// View's position in world space
@ -188,17 +190,15 @@ namespace AZ
// Cached matrix to transform from world space to clip space
Matrix4x4 m_worldToClipMatrix;
Matrix4x4 m_worldToClipPrevMatrix;
Matrix4x4 m_worldToViewPrevMatrix;
Matrix4x4 m_viewToClipPrevMatrix;
// Clip space offset for camera jitter with TAA
Vector2 m_clipSpaceOffset = Vector2(0.0f, 0.0f);
// Flags whether view matrices are dirty which requires rebuild srg
bool m_needBuildSrg = true;
// The following two bools form a delay circuit that updates the next frame's
// history when the view-projection matrix changes during the current frame.
// This is required because the View class has no subroutine that runs at the
// end of each frame.
bool m_worldToClipMatrixChanged = true;
bool m_worldToClipPrevMatrixNeedsUpdate = false;
MatrixChangedEvent m_onWorldToClipMatrixChange;
MatrixChangedEvent m_onWorldToViewMatrixChange;

@ -914,23 +914,40 @@ namespace AZ
{
// make sure to only import the resource one time
RHI::AttachmentId attachmentId = attachment->GetAttachmentId();
if (!attachmentDatabase.IsAttachmentValid(attachmentId))
const RHI::FrameAttachment* currentAttachment = attachmentDatabase.FindAttachment(attachmentId);
if (azrtti_istypeof<Image>(attachment->m_importedResource.get()))
{
if (azrtti_istypeof<Image>(attachment->m_importedResource.get()))
Image* image = static_cast<Image*>(attachment->m_importedResource.get());
if (currentAttachment == nullptr)
{
Image* image = static_cast<Image*>(attachment->m_importedResource.get());
attachmentDatabase.ImportImage(attachmentId, image->GetRHIImage());
}
else if (azrtti_istypeof<Buffer>(attachment->m_importedResource.get()))
else
{
AZ_Assert(currentAttachment->GetResource() == image->GetRHIImage(),
"Importing image attachment named \"%s\" but a different attachment with the "
"same name already exists in the database.\n", attachmentId.GetCStr());
}
}
else if (azrtti_istypeof<Buffer>(attachment->m_importedResource.get()))
{
Buffer* buffer = static_cast<Buffer*>(attachment->m_importedResource.get());
if (currentAttachment == nullptr)
{
Buffer* buffer = static_cast<Buffer*>(attachment->m_importedResource.get());
attachmentDatabase.ImportBuffer(attachmentId, buffer->GetRHIBuffer());
}
else
{
AZ_RPI_PASS_ERROR(false, "Can't import unknown resource type");
AZ_Assert(currentAttachment->GetResource() == buffer->GetRHIBuffer(),
"Importing buffer attachment named \"%s\" but a different attachment with the "
"same name already exists in the database.\n", attachmentId.GetCStr());
}
}
else
{
AZ_RPI_PASS_ERROR(false, "Can't import unknown resource type");
}
}
}
}
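The import logic above boils down to one rule: a second import under the same attachment id is legal only when it names the identical resource. A minimal sketch of that rule with a hypothetical `AttachmentDb` type (the real database tracks RHI images and buffers, not raw pointers):

```cpp
#include <map>
#include <string>

// Hypothetical stand-in for the frame graph attachment database: importing
// the same id twice is fine only if it refers to the same underlying resource.
class AttachmentDb
{
public:
    // Returns true on success; false if the id is taken by a different resource.
    bool Import(const std::string& attachmentId, const void* resource)
    {
        auto it = m_attachments.find(attachmentId);
        if (it == m_attachments.end())
        {
            m_attachments.emplace(attachmentId, resource);
            return true;
        }
        return it->second == resource; // re-import of the identical resource is a no-op
    }

private:
    std::map<std::string, const void*> m_attachments;
};
```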

@ -114,9 +114,9 @@ namespace AZ
return RHI::TransientBufferDescriptor(GetAttachmentId(), m_descriptor.m_buffer);
}
void PassAttachment::Update()
void PassAttachment::Update(bool updateImportedAttachments)
{
if (m_descriptor.m_type == RHI::AttachmentType::Image && m_lifetime == RHI::AttachmentLifetimeType::Transient)
if (m_descriptor.m_type == RHI::AttachmentType::Image && (m_lifetime == RHI::AttachmentLifetimeType::Transient || updateImportedAttachments == true))
{
if (m_settingFlags.m_getFormatFromPipeline && m_renderPipelineSource)
{

@ -127,7 +127,6 @@ namespace AZ
m_worldToViewMatrix = worldToView;
m_worldToClipMatrix = m_viewToClipMatrix * m_worldToViewMatrix;
m_worldToClipMatrixChanged = true;
m_onWorldToViewMatrixChange.Signal(m_worldToViewMatrix);
m_onWorldToClipMatrixChange.Signal(m_worldToClipMatrix);
@ -166,8 +165,6 @@ namespace AZ
m_worldToViewMatrix = m_viewToWorldMatrix.GetInverseFast();
m_worldToClipMatrix = m_viewToClipMatrix * m_worldToViewMatrix;
m_clipToWorldMatrix = m_viewToWorldMatrix * m_clipToViewMatrix;
m_worldToClipMatrixChanged = true;
m_onWorldToViewMatrixChange.Signal(m_worldToViewMatrix);
m_onWorldToClipMatrixChange.Signal(m_worldToClipMatrix);
@ -178,12 +175,8 @@ namespace AZ
void View::SetViewToClipMatrix(const AZ::Matrix4x4& viewToClip)
{
m_viewToClipMatrix = viewToClip;
m_clipToViewMatrix = viewToClip.GetInverseFull();
m_worldToClipMatrix = m_viewToClipMatrix * m_worldToViewMatrix;
m_worldToClipMatrixChanged = true;
m_clipToWorldMatrix = m_viewToWorldMatrix * m_clipToViewMatrix;
// Update z depth constant simultaneously
// zNear -> n, zFar -> f
@ -211,6 +204,12 @@ namespace AZ
InvalidateSrg();
}
void View::SetClipSpaceOffset(float xOffset, float yOffset)
{
m_clipSpaceOffset.Set(xOffset, yOffset);
InvalidateSrg();
}
const AZ::Matrix4x4& View::GetWorldToViewMatrix() const
{
return m_worldToViewMatrix;
@ -368,36 +367,56 @@ namespace AZ
void View::UpdateSrg()
{
if (m_worldToClipPrevMatrixNeedsUpdate)
if (m_needBuildSrg)
{
m_shaderResourceGroup->SetConstant(m_worldToClipPrevMatrixConstantIndex, m_worldToClipPrevMatrix);
m_worldToClipPrevMatrixNeedsUpdate = false;
}
if (m_clipSpaceOffset.IsZero())
{
Matrix4x4 worldToClipPrevMatrix = m_viewToClipPrevMatrix * m_worldToViewPrevMatrix;
m_shaderResourceGroup->SetConstant(m_worldToClipPrevMatrixConstantIndex, worldToClipPrevMatrix);
m_shaderResourceGroup->SetConstant(m_viewProjectionMatrixConstantIndex, m_worldToClipMatrix);
m_shaderResourceGroup->SetConstant(m_projectionMatrixConstantIndex, m_viewToClipMatrix);
m_shaderResourceGroup->SetConstant(m_clipToWorldMatrixConstantIndex, m_clipToWorldMatrix);
m_shaderResourceGroup->SetConstant(m_projectionMatrixInverseConstantIndex, m_viewToClipMatrix.GetInverseFull());
}
else
{
// Offset the current- and previous-frame clip matrices
Matrix4x4 offsetViewToClipMatrix = m_viewToClipMatrix;
offsetViewToClipMatrix.SetElement(0, 2, m_clipSpaceOffset.GetX());
offsetViewToClipMatrix.SetElement(1, 2, m_clipSpaceOffset.GetY());
Matrix4x4 offsetViewToClipPrevMatrix = m_viewToClipPrevMatrix;
offsetViewToClipPrevMatrix.SetElement(0, 2, m_clipSpaceOffset.GetX());
offsetViewToClipPrevMatrix.SetElement(1, 2, m_clipSpaceOffset.GetY());
// Build the other matrices that depend on the view-to-clip matrices
Matrix4x4 offsetWorldToClipMatrix = offsetViewToClipMatrix * m_worldToViewMatrix;
Matrix4x4 offsetWorldToClipPrevMatrix = offsetViewToClipPrevMatrix * m_worldToViewPrevMatrix;
Matrix4x4 offsetClipToViewMatrix = offsetViewToClipMatrix.GetInverseFull();
Matrix4x4 offsetClipToWorldMatrix = m_viewToWorldMatrix * offsetClipToViewMatrix;
m_shaderResourceGroup->SetConstant(m_worldToClipPrevMatrixConstantIndex, offsetWorldToClipPrevMatrix);
m_shaderResourceGroup->SetConstant(m_viewProjectionMatrixConstantIndex, offsetWorldToClipMatrix);
m_shaderResourceGroup->SetConstant(m_projectionMatrixConstantIndex, offsetViewToClipMatrix);
m_shaderResourceGroup->SetConstant(m_clipToWorldMatrixConstantIndex, offsetClipToWorldMatrix);
m_shaderResourceGroup->SetConstant(m_projectionMatrixInverseConstantIndex, offsetViewToClipMatrix.GetInverseFull());
}
if (m_worldToClipMatrixChanged)
{
m_worldToClipPrevMatrix = m_worldToClipMatrix;
m_worldToClipPrevMatrixNeedsUpdate = true;
m_worldToClipMatrixChanged = false;
}
m_shaderResourceGroup->SetConstant(m_worldPositionConstantIndex, m_position);
m_shaderResourceGroup->SetConstant(m_viewMatrixConstantIndex, m_worldToViewMatrix);
m_shaderResourceGroup->SetConstant(m_viewMatrixInverseConstantIndex, m_worldToViewMatrix.GetInverseFull());
m_shaderResourceGroup->SetConstant(m_zConstantsConstantIndex, m_nearZ_farZ_farZTimesNearZ_farZMinusNearZ);
m_shaderResourceGroup->SetConstant(m_unprojectionConstantsIndex, m_unprojectionConstants);
if (!m_needBuildSrg)
{
return;
m_shaderResourceGroup->Compile();
m_needBuildSrg = false;
}
m_shaderResourceGroup->SetConstant(m_worldPositionConstantIndex, m_position);
m_shaderResourceGroup->SetConstant(m_viewProjectionMatrixConstantIndex, m_worldToClipMatrix);
m_shaderResourceGroup->SetConstant(m_viewMatrixConstantIndex, m_worldToViewMatrix);
m_shaderResourceGroup->SetConstant(m_viewMatrixInverseConstantIndex, m_worldToViewMatrix.GetInverseFull());
m_shaderResourceGroup->SetConstant(m_projectionMatrixConstantIndex, m_viewToClipMatrix);
m_shaderResourceGroup->SetConstant(m_projectionMatrixInverseConstantIndex, m_viewToClipMatrix.GetInverseFull());
m_shaderResourceGroup->SetConstant(m_zConstantsConstantIndex, m_nearZ_farZ_farZTimesNearZ_farZMinusNearZ);
m_shaderResourceGroup->SetConstant(m_clipToWorldMatrixConstantIndex, m_clipToWorldMatrix);
m_shaderResourceGroup->SetConstant(m_unprojectionConstantsIndex, m_unprojectionConstants);
m_shaderResourceGroup->Compile();
m_needBuildSrg = false;
m_viewToClipPrevMatrix = m_viewToClipMatrix;
m_worldToViewPrevMatrix = m_worldToViewMatrix;
m_clipSpaceOffset.Set(0);
}
void View::BeginCulling()
