Hair - Hair Gem introduction to O3DE from Atom/barlev/AtomTressFX_DevRebase (#4439)

* Hair
- First introduction of the Hair gem to Atom and O3DE
- The hair technology is based on TressFX 4.1
- These are some of the areas in which we enhanced the original TressFX implementation:
   - The lighting model was replaced and we now use a modified Marschner model (see the lobe sketch after the remarks below)
   - Blending is done directly with the back buffer, removing the silhouette of the original implementation
   - Hair depth / thickness is now calculated to remove incorrect back lighting (TT lobe in the Marschner model)
   - Thickness is corrected to handle hair gaps, hence introducing better light passage for the TT lobe
   - The hair is fully integrated into the Atom pipeline and structure design
   - Usage of a single shared buffer for the compute buffers reduces barrier sync overhead

Remarks:
- Collisions via SDF compute are to be introduced soon
- Improved ShortCut rendering method (a la Eidos Montreal) to be introduced soon
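
For context, a minimal sketch of the Marschner-style decomposition the new lighting model builds on (this is the textbook form, not necessarily the exact math in the gem's HairLightingEquations.azsli): the specular response is split into R, TT and TRT lobes, each with a longitudinal term built from the normalized Gaussian that this change adds to Math.azsli:

    S(\theta_h, \theta_d, \phi) = \sum_{X \in \{R,\,TT,\,TRT\}} \frac{M_X(\theta_h)\, N_X(\theta_d, \phi)}{\cos^2 \theta_d}, \qquad M_X(\theta_h) = g(\theta_h - \alpha_X,\ \beta_X)

    g(x, \beta) = \frac{1}{\beta \sqrt{2\pi}} \exp\left(-\frac{x^2}{2\beta^2}\right)

where \alpha_X is the per-lobe cuticle tilt, \beta_X the per-lobe longitudinal roughness, and g corresponds to the GaussianNormalized helper added to Math.azsli in this change.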

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - code clean pass

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - EMFX Actor visibility implementation

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - Connecting hair passes to Atom's MainPipeline.pass

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - adding dedicated thumbnail pipeline that does not include the hair gem

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - changed Atom shader files to allow hooking the hair to the lighting data structures

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - fixed a few headers to have the latest O3DE license + verification fixes

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - enabling the editor component only when the tool pipeline is built + default texture added
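
A minimal sketch of the pattern in this commit (class names and the compile definition are placeholders, not the gem's actual symbols): the gem module registers the runtime system component unconditionally and adds the editor component descriptor only when the tools/editor part of the gem is built.

#include <AzCore/Module/Module.h>

// Sketch only: HairModuleSketch, HairSystemComponent, EditorHairComponent and
// HAIR_EDITOR_BUILD are illustrative names, not the gem's real ones.
class HairModuleSketch
    : public AZ::Module
{
public:
    HairModuleSketch()
        : AZ::Module()
    {
        // The runtime component is always available.
        m_descriptors.push_back(HairSystemComponent::CreateDescriptor());
#if defined(HAIR_EDITOR_BUILD) // assumed define supplied by the Tools/Editor build target
        // The editor component only exists when the tool pipeline is built.
        m_descriptors.push_back(EditorHairComponent::CreateDescriptor());
#endif
    }
};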

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - fixing Linux and Android compilation builds

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - another file change to make Linux compile

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - more Linux and Android build fixes

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair

- Adding usage of the fallback white texture (a sketch of the idea follows this list)
- Removing invalid null assignments into vectors
- Removing a redundant mutex that prevented deletion on some platforms
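
A minimal sketch of the fallback-white-texture idea from the first bullet above, assuming Atom's RPI system-image API (the gem's actual lookup code may differ):

#include <Atom/RPI.Public/Image/ImageSystemInterface.h>
#include <Atom/RPI.Public/Image/Image.h>

// Sketch: return the loaded hair texture if it is valid, otherwise the RPI system
// white image, so the material SRG never ends up sampling a null view.
AZ::Data::Instance<AZ::RPI::Image> GetTextureOrWhiteFallback(
    AZ::Data::Instance<AZ::RPI::Image> loadedTexture)
{
    if (loadedTexture)
    {
        return loadedTexture;
    }
    return AZ::RPI::ImageSystemInterface::Get()->GetSystemImage(AZ::RPI::SystemImage::White);
}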

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair

- Shame: removed a forgotten #pragma optimize
- Added a header that the Android build complained about

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - removing the Hair Gem connection from the active project.

- This submission removes the connection to the active project, allowing it to run without the Gem. Enable the passes in MainPipeline.pass and declare them again when you want to use the Gem.

Remark: the gem file PassTemplates.azasset was renamed and will be connected via code in the future, to avoid the need to declare it in the global pass template.
Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - registering gem pass templates through the gem templates file (#198)

* Hair - registering gem pass templates through the gem templates file

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - adding handler disconnect for the pass template registration.
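
A minimal sketch of the registration pattern this commit completes, assuming the RPI pass system's "ready to load templates" event (the exact type and method names are approximations of the RPI API, and the .azasset path is a placeholder): the handler that mounts the gem's pass template mappings is connected on activation, and this commit adds the matching disconnect on deactivation.

#include <Atom/RPI.Public/Pass/PassSystemInterface.h>

// Sketch only: connect a handler that loads the gem's pass templates when the pass
// system is ready, and disconnect it on shutdown so nothing dangles after the gem unloads.
class HairPassTemplatesMounter // hypothetical helper name
{
public:
    void Activate()
    {
        m_loadTemplatesHandler = AZ::RPI::PassSystemInterface::OnReadyLoadTemplatesEvent::Handler(
            []()
            {
                AZ::RPI::PassSystemInterface::Get()->LoadPassTemplateMappings(
                    "Passes/HairPassTemplates.azasset"); // placeholder path
            });
        AZ::RPI::PassSystemInterface::Get()->ConnectEvent(m_loadTemplatesHandler);
    }

    void Deactivate()
    {
        // The piece this commit adds: without it the handler outlives the gem.
        m_loadTemplatesHandler.Disconnect();
    }

private:
    AZ::RPI::PassSystemInterface::OnReadyLoadTemplatesEvent::Handler m_loadTemplatesHandler;
};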

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - PPLLIndexCounter buffer going data-driven via the pass declarations (#202)

* Hair - PPLLIndexCounter buffer going data-driven via the pass declaration
- Moving PPLLIndexCounter from code allocation and attachment to being data-driven
- Fixed an RPI typo bug that could prevent using buffers this way

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - fixing UI Editor (LYShine) crash (#209)

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* [Hair] - resolved the multi-pipeline mismatches and crashes + cleaned initialization & leftovers (#222)

* [Hair] - multiple render pipelines handling

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* [Hair] - Shutdown order is handled so that the hair feature processor is deregistered only after the bootstrap component has disabled it
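
Part of how this ordering is enforced is through component service dependencies rather than explicit teardown calls: BootstrapSystemComponent lists "HairService" as a dependent service (see the first hunk below), so it activates after, and deactivates before, whichever hair component provides that service. A minimal sketch of the providing side (placeholder class name, RTTI/reflection boilerplate omitted):

#include <AzCore/Component/Component.h>
#include <AzCore/Math/Crc.h>

// Sketch: advertising "HairService" makes the bootstrap component shut down first,
// so the hair feature processor is deregistered only after bootstrap has disabled it.
class HairSystemComponent
    : public AZ::Component
{
public:
    static void GetProvidedServices(AZ::ComponentDescriptor::DependencyArrayType& provided)
    {
        provided.push_back(AZ_CRC_CE("HairService"));
    }

    void Activate() override { /* register and enable the hair feature processor */ }
    void Deactivate() override { /* disable and deregister the hair feature processor */ }
};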

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* [Hair] - minor cleanups

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* [Hair] - followups from review nits

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - code fixes based on the CR remarks for the Hair merge to Dev (#248)

* Hair - last fixes based on CR remarks

Signed-off-by: Adi-Amazon <barlev@amazon.com>

* Hair - fixing AR

Signed-off-by: Adi-Amazon <barlev@amazon.com>

@ -88,6 +88,7 @@ namespace AZ
dependent.push_back(AZ_CRC("CoreLightsService", 0x91932ef6));
dependent.push_back(AZ_CRC("DynamicDrawService", 0x023c1673));
dependent.push_back(AZ_CRC("CommonService", 0x6398eec4));
dependent.push_back(AZ_CRC_CE("HairService"));
}
void BootstrapSystemComponent::GetIncompatibleServices(ComponentDescriptor::DependencyArrayType& incompatible)

@ -468,6 +468,14 @@
"Name": "OpaqueParentTemplate",
"Path": "Passes/OpaqueParent.pass"
},
{
"Name": "ThumbnailPipeline",
"Path": "Passes/ThumbnailPipeline.pass"
},
{
"Name": "ThumbnailPipelineRenderToTexture",
"Path": "Passes/ThumbnailPipelineRenderToTexture.pass"
},
{
"Name": "TransparentParentTemplate",
"Path": "Passes/TransparentParent.pass"

@ -0,0 +1,463 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "ThumbnailPipeline",
"PassClass": "ParentPass",
"Slots": [
{
"Name": "SwapChainOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget"
}
],
"PassRequests": [
{
"Name": "MorphTargetPass",
"TemplateName": "MorphTargetPassTemplate"
},
{
"Name": "SkinningPass",
"TemplateName": "SkinningPassTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshOutputStream",
"AttachmentRef": {
"Pass": "MorphTargetPass",
"Attachment": "MorphTargetDeltaOutput"
}
}
]
},
{
"Name": "RayTracingAccelerationStructurePass",
"TemplateName": "RayTracingAccelerationStructurePassTemplate"
},
{
"Name": "DiffuseProbeGridUpdatePass",
"TemplateName": "DiffuseProbeGridUpdatePassTemplate",
"ExecuteAfter": [
"RayTracingAccelerationStructurePass"
]
},
{
"Name": "DepthPrePass",
"TemplateName": "DepthMSAAParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "MotionVectorPass",
"TemplateName": "MotionVectorParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "LightCullingPass",
"TemplateName": "LightCullingParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "DepthMSAA",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthMSAA"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "ShadowPass",
"TemplateName": "ShadowParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "OpaquePass",
"TemplateName": "OpaqueParentTemplate",
"Connections": [
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedESM"
}
},
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "LightListRemapped"
}
},
{
"LocalSlot": "DepthLinear",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "DepthStencil",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthMSAA"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "TransparentPass",
"TemplateName": "TransparentParentTemplate",
"Connections": [
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedESM"
}
},
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "LightListRemapped"
}
},
{
"LocalSlot": "InputLinearDepth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "DepthStencil",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "OpaquePass",
"Attachment": "Output"
}
}
]
},
{
"Name": "DeferredFogPass",
"TemplateName": "DeferredFogPassTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "InputLinearDepth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "InputDepthStencil",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "RenderTargetInputOutput",
"AttachmentRef": {
"Pass": "TransparentPass",
"Attachment": "InputOutput"
}
}
],
"PassData": {
"$type": "FullscreenTrianglePassData",
"ShaderAsset": {
"FilePath": "Shaders/ScreenSpace/DeferredFog.shader"
},
"PipelineViewTag": "MainCamera"
}
},
{
"Name": "ReflectionCopyFrameBufferPass",
"TemplateName": "ReflectionCopyFrameBufferPassTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "DeferredFogPass",
"Attachment": "RenderTargetInputOutput"
}
}
]
},
{
"Name": "PostProcessPass",
"TemplateName": "PostProcessParentTemplate",
"Connections": [
{
"LocalSlot": "LightingInput",
"AttachmentRef": {
"Pass": "DeferredFogPass",
"Attachment": "RenderTargetInputOutput"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "MotionVectors",
"AttachmentRef": {
"Pass": "MotionVectorPass",
"Attachment": "MotionVectorOutput"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "AuxGeomPass",
"TemplateName": "AuxGeomPassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "ColorInputOutput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "Output"
}
},
{
"LocalSlot": "DepthInputOutput",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
}
],
"PassData": {
"$type": "RasterPassData",
"DrawListTag": "auxgeom",
"PipelineViewTag": "MainCamera"
}
},
{
"Name": "DebugOverlayPass",
"TemplateName": "DebugOverlayParentTemplate",
"Connections": [
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "RawLightingInput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "RawLightingOutput"
}
},
{
"LocalSlot": "LuminanceMipChainInput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "LuminanceMipChainOutput"
}
},
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "AuxGeomPass",
"Attachment": "ColorInputOutput"
}
}
]
},
{
"Name": "UIPass",
"TemplateName": "UIParentTemplate",
"Connections": [
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "DebugOverlayPass",
"Attachment": "InputOutput"
}
},
{
"LocalSlot": "DepthInputOutput",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
}
]
},
{
"Name": "CopyToSwapChain",
"TemplateName": "FullscreenCopyTemplate",
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "UIPass",
"Attachment": "InputOutput"
}
},
{
"LocalSlot": "Output",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
}
]
}
}
}

@ -0,0 +1,32 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "ThumbnailPipelineRenderToTexture",
"PassClass": "RenderToTexturePass",
"PassData": {
"$type": "RenderToTexturePassData",
"OutputWidth": 512,
"OutputHeight": 512,
"OutputFormat": "R8G8B8A8_UNORM"
},
"PassRequests": [
{
"Name": "Pipeline",
"TemplateName": "ThumbnailPipeline",
"Connections": [
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "Output"
}
}
]
}
]
}
}
}

@ -8,14 +8,14 @@
#pragma once
option bool o_specularF0_enableMultiScatterCompensation;
option bool o_specularF0_enableMultiScatterCompensation = true;
option bool o_enableShadows = true;
option bool o_enableDirectionalLights = true;
option bool o_enablePunctualLights = true;
option bool o_enableAreaLights = true;
option bool o_enableIBL = true;
option bool o_enableSubsurfaceScattering;
option bool o_clearCoat_feature_enabled;
option bool o_enableSubsurfaceScattering = false;
option bool o_clearCoat_feature_enabled = false;
option enum class TransmissionMode {None, ThickObject, ThinObject} o_transmission_mode;
option bool o_meshUseForwardPassIBLSpecular = false;
option bool o_materialUseForwardPassIBLSpecular = false;

@ -8,6 +8,7 @@
#pragma once
#include <Atom/Features/PBR/ForwardPassSrg.azsli>
#include <Atom/Features/PBR/Lights/CapsuleLight.azsli>
#include <Atom/Features/PBR/Lights/DirectionalLight.azsli>
#include <Atom/Features/PBR/Lights/DiskLight.azsli>

@ -8,7 +8,7 @@
#pragma once
#include <Atom/Features/PBR/Lights/LightTypesCommon.azsli>
#include <Atom/Features/LightCulling/LightCullingTileIterator.azsli>
void ApplySimplePointLight(ViewSrg::SimplePointLight light, Surface surface, inout LightingData lightingData)
{

@ -10,7 +10,6 @@
#include <scenesrg.srgi>
#include <viewsrg.srgi>
#include <Atom/Features/PBR/ForwardPassSrg.azsli>
#include <Atom/Features/Shadow/ShadowmapAtlasLib.azsli>
#include <Atom/RPI/Math.azsli>
#include "BicubicPcfFilters.azsli"

@ -160,6 +160,8 @@ set(FILES
Passes/LuminanceHistogramGenerator.pass
Passes/MainPipeline.pass
Passes/MainPipelineRenderToTexture.pass
Passes/ThumbnailPipeline.pass
Passes/ThumbnailPipelineRenderToTexture.pass
Passes/MeshMotionVector.pass
Passes/ModulateTexture.pass
Passes/MorphTarget.pass

@ -31,6 +31,23 @@ void swap(inout float a, inout float b)
b = c;
}
float Pow2(float x)
{
return x * x;
}
float Pow3(float x)
{
return x * x * x;
}
float Pow4(float x)
{
x *= x;
return x * x;
}
float Pow5(float x)
{
return x * Pow4(x);
}
// ---------- Intersection -----------
// a simple ray sphere intersection function, didn't take limited precision
@ -250,3 +267,19 @@ float LerpInverse(float a, float b, float value)
return (value - a) / (b - a);
}
}
//-------- Gaussian distribution functions ---------
// https://en.wikipedia.org/wiki/Gaussian_function
// Parameters:
// amplitude - the height of the curve's peak
// median - the position of the Mean / Median - the center of the "bell"
// standardDeviation - controls the width of the "bell"
float Gaussian(float x, float amplitude, float median, float standardDeviation)
{
float exponent = -0.5 * Pow2((x - median) / standardDeviation);
return amplitude * exp(exponent);
}
float GaussianNormalized(float x, float expectedValue, float standardDeviation)
{
float normalizedAmplitude = 1.0f / (standardDeviation * sqrt(TWO_PI));
return Gaussian(x, normalizedAmplitude, expectedValue, standardDeviation);
}

@ -410,7 +410,7 @@ namespace AZ
{
RHI::Format format = slot.m_bufferViewDesc->m_elementFormat;
AZStd::string formatLocation = AZStd::string::format("BufferViewDescriptor on Slot [%s] in PassTemplate [%s]", slot.m_name.GetCStr(), passTemplate->m_name.GetCStr());
RHI::FormatCapabilities capabilities = RHI::GetCapabilities(slot.m_scopeAttachmentUsage, slot.GetAttachmentAccess(), RHI::AttachmentType::Image);
RHI::FormatCapabilities capabilities = RHI::GetCapabilities(slot.m_scopeAttachmentUsage, slot.GetAttachmentAccess(), RHI::AttachmentType::Buffer);
slot.m_bufferViewDesc->m_elementFormat = RHI::ValidateFormat(format, formatLocation.c_str(), slot.m_formatFallbacks, capabilities);
}
}

@ -86,7 +86,7 @@ namespace AZ
RPI::RenderPipelineDescriptor pipelineDesc;
pipelineDesc.m_mainViewTagName = "MainCamera";
pipelineDesc.m_name = data->m_pipelineName;
pipelineDesc.m_rootPassTemplate = "MainPipelineRenderToTexture";
pipelineDesc.m_rootPassTemplate = "ThumbnailPipelineRenderToTexture";
// We have to set the samples to 4 to match the pipeline passes' setting, otherwise it may lead to device lost issue
// [GFX TODO] [ATOM-13551] Default value sand validation required to prevent pipeline crash and device lost
pipelineDesc.m_renderSettings.m_multisampleState.m_samples = 4;

@ -0,0 +1,4 @@
*.ptx filter=lfs diff=lfs merge=lfs -text
*.tfx filter=lfs diff=lfs merge=lfs -text
*.tfxbone filter=lfs diff=lfs merge=lfs -text
*.tfxmesh filter=lfs diff=lfs merge=lfs -text

@ -0,0 +1,4 @@
.maya_data
.mayaSwatches
.ma.swatches
*.pyc

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d7310efe7e71a8de16f32daf3688984e7d7ad3761690bd7af948965c9b5ecd2e
size 132

@ -0,0 +1,552 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "MainPipeline",
"PassClass": "ParentPass",
"Slots": [
{
"Name": "SwapChainOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget"
}
],
"PassRequests": [
{
"Name": "MorphTargetPass",
"TemplateName": "MorphTargetPassTemplate"
},
{
"Name": "SkinningPass",
"TemplateName": "SkinningPassTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshOutputStream",
"AttachmentRef": {
"Pass": "MorphTargetPass",
"Attachment": "MorphTargetDeltaOutput"
}
}
]
},
{
"Name": "RayTracingAccelerationStructurePass",
"TemplateName": "RayTracingAccelerationStructurePassTemplate"
},
{
"Name": "DiffuseProbeGridUpdatePass",
"TemplateName": "DiffuseProbeGridUpdatePassTemplate",
"ExecuteAfter": [
"RayTracingAccelerationStructurePass"
]
},
{
"Name": "DepthPrePass",
"TemplateName": "DepthMSAAParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "MotionVectorPass",
"TemplateName": "MotionVectorParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "LightCullingPass",
"TemplateName": "LightCullingParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "DepthMSAA",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthMSAA"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "ShadowPass",
"TemplateName": "ShadowParentTemplate",
"Connections": [
{
"LocalSlot": "SkinnedMeshes",
"AttachmentRef": {
"Pass": "SkinningPass",
"Attachment": "SkinnedMeshOutputStream"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "OpaquePass",
"TemplateName": "OpaqueParentTemplate",
"Connections": [
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedESM"
}
},
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "LightListRemapped"
}
},
{
"LocalSlot": "DepthLinear",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "DepthStencil",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthMSAA"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
// NOTE: HairParentPass does not write into Depth MSAA from the Opaque Pass. If new passes downstream
// of HairParentPass need to use Depth MSAA, HairParentPass will need to be updated to use Depth MSAA
// instead of regular Depth as DepthStencil. Specifically, HairResolvePPLL.pass and the associated
// .azsl file will need to be updated.
"Name": "HairParentPass",
"TemplateName": "HairParentPassTemplate",
"Enabled": true,
"Connections": [
// Critical to keep DepthLinear as input - used to set the size of the Head PPLL image buffer.
// If DepthLinear is not available - connect to another viewport (non-MSAA) image.
{
"LocalSlot": "DepthLinearInput",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "DepthPrePass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "RenderTargetInputOutput",
"AttachmentRef": {
"Pass": "OpaquePass",
"Attachment": "Output"
}
},
{
"LocalSlot": "RenderTargetInputOnly",
"AttachmentRef": {
"Pass": "OpaquePass",
"Attachment": "Output"
}
},
// Shadows resources
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedESM"
}
},
// Lighting Resources
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "LightListRemapped"
}
}
]
},
{
"Name": "TransparentPass",
"TemplateName": "TransparentParentTemplate",
"Connections": [
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "ShadowPass",
"Attachment": "ProjectedESM"
}
},
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "LightListRemapped"
}
},
{
"LocalSlot": "InputLinearDepth",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "DepthStencil",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "RenderTargetInputOutput"
}
}
]
},
{
"Name": "DeferredFogPass",
"TemplateName": "DeferredFogPassTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "InputLinearDepth",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "DepthLinear"
}
},
{
"LocalSlot": "InputDepthStencil",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "RenderTargetInputOutput",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "RenderTargetInputOutput"
}
}
],
"PassData": {
"$type": "FullscreenTrianglePassData",
"ShaderAsset": {
"FilePath": "Shaders/ScreenSpace/DeferredFog.shader"
},
"PipelineViewTag": "MainCamera"
}
},
{
"Name": "ReflectionCopyFrameBufferPass",
"TemplateName": "ReflectionCopyFrameBufferPassTemplate",
"Enabled": false,
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "DeferredFogPass",
"Attachment": "RenderTargetInputOutput"
}
}
]
},
{
"Name": "PostProcessPass",
"TemplateName": "PostProcessParentTemplate",
"Connections": [
{
"LocalSlot": "LightingInput",
"AttachmentRef": {
"Pass": "DeferredFogPass",
"Attachment": "RenderTargetInputOutput"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "Depth"
}
},
{
"LocalSlot": "MotionVectors",
"AttachmentRef": {
"Pass": "MotionVectorPass",
"Attachment": "MotionVectorOutput"
}
},
{
"LocalSlot": "SwapChainOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
},
{
"Name": "AuxGeomPass",
"TemplateName": "AuxGeomPassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "ColorInputOutput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "Output"
}
},
{
"LocalSlot": "DepthInputOutput",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "Depth"
}
}
],
"PassData": {
"$type": "RasterPassData",
"DrawListTag": "auxgeom",
"PipelineViewTag": "MainCamera"
}
},
{
"Name": "DebugOverlayPass",
"TemplateName": "DebugOverlayParentTemplate",
"Connections": [
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "LightCullingPass",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "RawLightingInput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "RawLightingOutput"
}
},
{
"LocalSlot": "LuminanceMipChainInput",
"AttachmentRef": {
"Pass": "PostProcessPass",
"Attachment": "LuminanceMipChainOutput"
}
},
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "AuxGeomPass",
"Attachment": "ColorInputOutput"
}
}
]
},
{
"Name": "UIPass",
"TemplateName": "UIParentTemplate",
"Connections": [
{
"LocalSlot": "InputOutput",
"AttachmentRef": {
"Pass": "DebugOverlayPass",
"Attachment": "InputOutput"
}
},
{
"LocalSlot": "DepthInputOutput",
"AttachmentRef": {
"Pass": "HairParentPass",
"Attachment": "Depth"
}
}
]
},
{
"Name": "CopyToSwapChain",
"TemplateName": "FullscreenCopyTemplate",
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "UIPass",
"Attachment": "InputOutput"
}
},
{
"LocalSlot": "Output",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "SwapChainOutput"
}
}
]
}
]
}
}
}

@ -0,0 +1,45 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "AssetAliasesSourceData",
"ClassData": {
"AssetPaths": [
{
"Name": "HairParentPassTemplate",
"Path": "Passes/HairParentPass.pass"
},
{
"Name": "HairGlobalShapeConstraintsComputePassTemplate",
"Path": "Passes/HairGlobalShapeConstraintsCompute.pass"
},
{
"Name": "HairCalculateStrandLevelDataComputePassTemplate",
"Path": "Passes/HairCalculateStrandLevelDataCompute.pass"
},
{
"Name": "HairVelocityShockPropagationComputePassTemplate",
"Path": "Passes/HairVelocityShockPropagationCompute.pass"
},
{
"Name": "HairLocalShapeConstraintsComputePassTemplate",
"Path": "Passes/HairLocalShapeConstraintsCompute.pass"
},
{
"Name": "HairLengthConstraintsWindAndCollisionComputePassTemplate",
"Path": "Passes/HairLengthConstraintsWindAndCollisionCompute.pass"
},
{
"Name": "HairUpdateFollowHairComputePassTemplate",
"Path": "Passes/HairUpdateFollowHairCompute.pass"
},
{
"Name": "HairPPLLRasterPassTemplate",
"Path": "Passes/HairFillPPLL.pass"
},
{
"Name": "HairPPLLResolvePassTemplate",
"Path": "Passes/HairResolvePPLL.pass"
}
]
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairCalculateStrandLevelDataComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairCalculateStrandLevelDataCompute.shader"
}
}
}
}
}

@ -0,0 +1,130 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairPPLLRasterPassTemplate",
"PassClass": "HairPPLLRasterPass",
"Slots": [
// Input/Outputs...
{
// Super important to keep this as it is used to set the size of the Head PPLL image buffer.
// If DepthLinear is not available - connect to another viewport (non-MSAA) image.
"Name": "DepthLinear",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "Shader"
},
{ // The depth buffer will be used to write the closest hair fragment depth
"Name": "Depth",
"SlotType": "Input",
"ScopeAttachmentUsage": "DepthStencil"
},
{
"Name": "RenderTargetInputOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
},
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "PPLLIndexCounter",
"ShaderInputName": "m_linkedListCounter",
"SlotType": "Output",
"ScopeAttachmentUsage": "Shader",
"BufferViewDesc": {
"m_elementOffset": "0",
"m_elementCount": "1",
"m_elementSize": "4",
"m_elementFormat": "R32_UINT" // Unknown is not accpeted and the pass compilation failsd
},
"LoadStoreAction": {
"ClearValue": {
"Value": [ 0, 0, 0, 0 ]
},
"LoadAction": "Clear"
}
},
{
"Name": "PerPixelListHead",
"ShaderInputName": "m_fragmentListHead",
"SlotType": "Output",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"ClearValue": {
"Value": [
0,
0,
0,
0
]
},
"LoadAction": "Clear",
"StoreAction": "Store"
}
},
{
"Name": "PerPixelLinkedList",
"ShaderInputName": "m_linkedListNodes",
"SlotType": "Output",
"ScopeAttachmentUsage": "Shader"
}
],
"BufferAttachments": [
{
"Name": "PPLLIndexCounter",
"BufferDescriptor": {
"m_bindFlags": "ShaderReadWrite",
"m_byteCount": "4",
"m_alignment": "0" // or size of the buffer element
}
}
],
"ImageAttachments": [
{
"Name": "PerPixelListHead",
"SizeSource": {
"Source": {
"Pass": "This",
"Attachment": "DepthLinear"
}
},
"ImageDescriptor": {
"Format": "R32_UINT",
"SharedQueueMask": "Graphics",
"BindFlags": "ShaderReadWrite"
}
}
],
"Connections": [
{
"LocalSlot": "PerPixelListHead",
"AttachmentRef": {
"Pass": "This",
"Attachment": "PerPixelListHead"
}
},
{
"LocalSlot": "PPLLIndexCounter",
"AttachmentRef": {
"Pass": "This",
"Attachment": "PPLLIndexCounter"
}
}
],
"PassData": {
"$type": "RasterPassData",
"DrawListTag": "hairFillPass",
"PipelineViewTag": "MainCamera"
}
}
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairGlobalShapeConstraintsComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "Output",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairGlobalShapeConstraintsCompute.shader"
}
}
}
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairLengthConstraintsWindAndCollisionComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairLengthConstraintsWindAndCollisionCompute.shader"
}
}
}
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairLocalShapeConstraintsComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairLocalShapeConstraintsCompute.shader"
}
}
}
}
}

@ -0,0 +1,347 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairParentPassTemplate",
"PassClass": "ParentPass",
"Slots": [
{
"Name": "RenderTargetInputOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget"
},
{
"Name": "RenderTargetInputOnly",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader"
},
// This is the depth stencil buffer that is to be used by the fill pass
// to reject pixels early by depth, and by the resolve pass to write
// the hair depth
{
"Name": "Depth",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "DepthStencil"
},
// Keep DepthLinear as input - used to set the size of the Head PPLL image buffer.
// If DepthLinear is not available - connect to another viewport (non-MSAA) image.
{
"Name": "DepthLinearInput",
"SlotType": "Input"
},
{
"Name": "DepthLinear",
"SlotType": "Output"
},
// Lights & Shadows resources
{
"Name": "DirectionalShadowmap",
"SlotType": "Input"
},
{
"Name": "DirectionalESM",
"SlotType": "Input"
},
{
"Name": "ProjectedShadowmap",
"SlotType": "Input"
},
{
"Name": "ProjectedESM",
"SlotType": "Input"
},
{
"Name": "TileLightData",
"SlotType": "Input"
},
{
"Name": "LightListRemapped",
"SlotType": "Input"
}
],
"Connections": [
{
"LocalSlot": "DepthLinear",
"AttachmentRef": {
"Pass": "DepthToDepthLinearPass",
"Attachment": "Output"
}
}
],
"PassRequests": [
{
"Name": "HairGlobalShapeConstraintsComputePass",
"TemplateName": "HairGlobalShapeConstraintsComputePassTemplate",
"Enabled": true
},
{
"Name": "HairCalculateStrandLevelDataComputePass",
"TemplateName": "HairCalculateStrandLevelDataComputePassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairGlobalShapeConstraintsComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
}
]
},
{
"Name": "HairVelocityShockPropagationComputePass",
"TemplateName": "HairVelocityShockPropagationComputePassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairCalculateStrandLevelDataComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
}
]
},
{
"Name": "HairLocalShapeConstraintsComputePass",
"TemplateName": "HairLocalShapeConstraintsComputePassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairVelocityShockPropagationComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
}
]
},
{
"Name": "HairLengthConstraintsWindAndCollisionComputePass",
"TemplateName": "HairLengthConstraintsWindAndCollisionComputePassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairLocalShapeConstraintsComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
}
]
},
{
"Name": "HairUpdateFollowHairComputePass",
"TemplateName": "HairUpdateFollowHairComputePassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairLengthConstraintsWindAndCollisionComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
}
]
},
{
"Name": "HairPPLLRasterPass",
"TemplateName": "HairPPLLRasterPassTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "SkinnedHairSharedBuffer",
"AttachmentRef": {
"Pass": "HairUpdateFollowHairComputePass",
"Attachment": "SkinnedHairSharedBuffer"
}
},
// Keep DepthLinear as input - used to set the size of the Head PPLL image buffer.
// If DepthLinear is not available - connect to another viewport (non-MSAA) image.
{
"LocalSlot": "DepthLinear",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "DepthLinearInput"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "Depth"
}
},
{
"LocalSlot": "RenderTargetInputOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "RenderTargetInputOutput"
}
}
]
},
{
"Name": "RenderTargetCopyPass",
"TemplateName": "FullscreenCopyTemplate",
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "RenderTargetInputOnly"
}
},
{
"LocalSlot": "Output",
"AttachmentRef": {
"Pass": "This",
"Attachment": "Output"
}
}
],
"ImageAttachments": [
{
"Name": "Output",
"SizeSource": {
"Source": {
"Pass": "This",
"Attachment": "Input"
}
},
"FormatSource": {
"Pass": "This",
"Attachment": "Input"
},
"GenerateFullMipChain": false
}
]
},
{
"Name": "HairPPLLResolvePass",
"TemplateName": "HairPPLLResolvePassTemplate",
"Enabled": true,
"Connections": [
// General + Render Target resources
{
"LocalSlot": "RenderTargetInputOutput",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "RenderTargetInputOutput"
}
},
{
"LocalSlot": "RenderTargetCopy",
"AttachmentRef": {
"Pass": "RenderTargetCopyPass",
"Attachment": "Output"
}
},
{
"LocalSlot": "Depth",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "Depth"
}
},
{
"LocalSlot": "DepthLinear",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "DepthLinearInput"
}
},
// Shadows resources
{
"LocalSlot": "DirectionalShadowmap",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "DirectionalShadowmap"
}
},
{
"LocalSlot": "DirectionalESM",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "DirectionalESM"
}
},
{
"LocalSlot": "ProjectedShadowmap",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "ProjectedShadowmap"
}
},
{
"LocalSlot": "ProjectedESM",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "ProjectedESM"
}
},
// Lights Resources
{
"LocalSlot": "TileLightData",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "TileLightData"
}
},
{
"LocalSlot": "LightListRemapped",
"AttachmentRef": {
"Pass": "Parent",
"Attachment": "LightListRemapped"
}
},
// PPLL Resources
{
"LocalSlot": "PerPixelListHead",
"AttachmentRef": {
"Pass": "HairPPLLRasterPass",
"Attachment": "PerPixelListHead"
}
},
{
"LocalSlot": "PerPixelLinkedList",
"AttachmentRef": {
"Pass": "HairPPLLRasterPass",
"Attachment": "PerPixelLinkedList"
}
}
]
},
{
// This pass copies the updated depth buffer (which now contains hair depth) to the linear depth
// texture for downstream passes to use. This can be optimized further by marking in the stencil
// buffer the pixels touched by HairPPLLResolvePass, so that the depth update is only done
// where there is hair.
"Name": "DepthToDepthLinearPass",
"TemplateName": "DepthToLinearTemplate",
"Enabled": true,
"Connections": [
{
"LocalSlot": "Input",
"AttachmentRef": {
"Pass": "HairPPLLResolvePass",
"Attachment": "Depth"
}
}
]
}
]
}
}
}

@ -0,0 +1,150 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairPPLLResolvePassTemplate",
"PassClass": "HairPPLLResolvePass",
"Slots": [
//------ General Input/Output resources and Render Target ------
{
"Name": "RenderTargetInputOutput",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "RenderTarget",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
},
{ // Will be used to get the background color for the hair blending
"Name": "RenderTargetCopy",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"ShaderInputName": "m_frameBuffer"
},
{ // Used to get the transform from screen space to world space.
"Name": "DepthLinear",
"SlotType": "Input",
"ShaderInputName": "m_linearDepth",
"ScopeAttachmentUsage": "Shader"
},
{ // The depth buffer will be used to write the closest hair fragment depth
"Name": "Depth",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "DepthStencil",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
},
//------------- Shadowing Resources -------------
{
"Name": "DirectionalShadowmap",
"ShaderInputName": "m_directionalLightShadowmap",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"ImageViewDesc": {
"IsArray": 1
}
},
{
"Name": "DirectionalESM",
"ShaderInputName": "m_directionalLightExponentialShadowmap",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"ImageViewDesc": {
"IsArray": 1
}
},
{
"Name": "ProjectedShadowmap",
"ShaderInputName": "m_projectedShadowmaps",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"ImageViewDesc": {
"IsArray": 1
}
},
{
"Name": "ProjectedESM",
"ShaderInputName": "m_projectedExponentialShadowmap",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"ImageViewDesc": {
"IsArray": 1
}
},
//------------- Lighting Resources -------------
{
"Name": "BRDFTextureInput",
"ShaderInputName": "m_brdfMap",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "TileLightData",
"SlotType": "Input",
"ShaderInputName": "m_tileLightData",
"ScopeAttachmentUsage": "Shader"
},
{
"Name": "LightListRemapped",
"SlotType": "Input",
"ShaderInputName": "m_lightListRemapped",
"ScopeAttachmentUsage": "Shader"
},
//---------- PPLL Resources -----------
{
"Name": "PerPixelListHead",
"ShaderInputName": "m_fragmentListHead",
"SlotType": "Input", // Read only - the reset is done before the fill pass
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "DontCare"
}
},
{
"Name": "PerPixelLinkedList",
"ShaderInputName": "m_linkedListNodes",
"SlotType": "Input",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "DontCare"
}
}
],
"ImageAttachments": [
{
"Name": "BRDFTexture",
"Lifetime": "Imported",
"AssetRef": {
"FilePath": "Textures/BRDFTexture.attimage"
}
}
],
"Connections": [
{
"LocalSlot": "BRDFTextureInput",
"AttachmentRef": {
"Pass": "This",
"Attachment": "BRDFTexture"
}
}
],
"PassData": {
"$type": "FullscreenTrianglePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairRenderingResolvePPLL.shader"
}
}
}
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairUpdateFollowHairComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "inputOutput",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairUpdateFollowHairCompute.shader"
}
}
}
}
}

@ -0,0 +1,30 @@
{
"Type": "JsonSerialization",
"Version": 1,
"ClassName": "PassAsset",
"ClassData": {
"PassTemplate": {
"Name": "HairVelocityShockPropagationComputePassTemplate",
"PassClass": "HairSkinningComputePass",
"Slots": [
{
"Name": "SkinnedHairSharedBuffer",
"ShaderInputName": "m_skinnedHairSharedBuffer",
"SlotType": "InputOutput",
"ScopeAttachmentUsage": "Shader",
"LoadStoreAction": {
"LoadAction": "Load",
"StoreAction": "Store"
}
}
],
"PassData": {
"$type": "ComputePassData",
"ShaderAsset": {
// Looking for it in the Shaders directory relative to the Assets directory
"FilePath": "Shaders/HairVelocityShockPropagationCompute.shader"
}
}
}
}
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "CalculateStrandLevelData",
"type": "Compute"
}
]
}
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "IntegrationAndGlobalShapeConstraints",
"type": "Compute"
}
]
}
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "LengthConstriantsWindAndCollision",
"type": "Compute"
}
]
}
}

@ -0,0 +1,495 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//------------------------------------------------------------------------------
// Shader code related to lighting and shadowing for TressFX
//------------------------------------------------------------------------------
#pragma once
//==============================================================================
// Atom Lighting & Light Types
//==============================================================================
// Include options first
#include <Atom/Features/PBR/LightingOptions.azsli>
#include <HairSurface.azsli>
#include <HairUtilities.azsli>
#include <Atom/Features/PBR/Lighting/LightingData.azsli>
#include <Atom/Features/PBR/LightingUtils.azsli>
#include <Atom/Features/PBR/Microfacet/Brdf.azsli>
#include <Atom/Features/PBR/BackLighting.azsli>
//------------------------------------------------------------------------------
#include <Atom/Features/PBR/Decals.azsli>
#include <HairLightingEquations.azsli>
//===================== Hair Lighting Shader Options ===========================
enum class HairLightingModel {GGX, Marschner, Kajiya};
option HairLightingModel o_hairLightingModel = HairLightingModel::Marschner;
//==============================================================================
float3 GetSpecularLighting(Surface surface, LightingData lightingData, const float3 lightIntensity, const float3 dirToLight)
{
float3 specular = float3(1, 0, 1); // purple - error color
if (o_hairLightingModel == HairLightingModel::GGX)
{
specular = SpecularGGX(lightingData.dirToCamera, dirToLight, surface.normal, surface.specularF0, lightingData.NdotV, surface.roughnessA2, lightingData.multiScatterCompensation);
}
else if(o_hairLightingModel == HairLightingModel::Marschner)
{
specular = HairMarschnerBSDF(surface, lightingData, dirToLight);
}
// [To Do] - add the Kajiya-Kay lighting model option here in order to connect to Atom lighting loop
return specular * lightIntensity;
}
float3 GetHairBackLighting(Surface surface, LightingData lightingData, float3 lightIntensity, float3 dirToLight, float shadowRatio)
{
if (o_hairLightingModel == HairLightingModel::GGX)
{
float3 result = float3(0.0, 0.0, 0.0);
float thickness = 0.0;
float4 transmissionParams = surface.transmission.transmissionParams;
// Thin object mode, using thin-film assumption proposed by Jimenez J. et al, 2010, "Real-Time Realistic Skin Translucency"
// http://www.iryoku.com/translucency/downloads/Real-Time-Realistic-Skin-Translucency.pdf
result = shadowRatio ?
float3(0.0, 0.0, 0.0) :
TransmissionKernel(surface.transmission.thickness * transmissionParams.w, rcp(transmissionParams.xyz)) *
saturate(dot(-surface.normal, dirToLight)) * lightIntensity * shadowRatio;
return result;
}
else // if ((o_hairLightingModel == HairLightingModel::Marschner) || (o_hairLightingModel == HairLightingModel::Kajiya))
{
return float3(0.0f, 0.0f, 0.0f);
}
return float3(1.0f, 0.0f, 0.0f);
}
//! Simple Lambertian BRDF
float3 HairDiffuseLambertian(float3 albedo, float3 normal, float3 dirToLight)
{
float NdotL = saturate(dot(normal, dirToLight));
return albedo * NdotL * INV_PI;
}
// Replacing the generic Diffuse and Specular methods in StandardLighting.azsli
// and removing the regular usage of clear coat second lobe energy distribution.
float3 GetDiffuseLighting(Surface surface, LightingData lightingData, float3 lightIntensity, float3 dirToLight)
{
float3 diffuse = float3(0, 1, 0); // Green - error color
if (o_hairLightingModel == HairLightingModel::GGX)
{
// Notice the addition of the diffuse response (1 - F) here
diffuse = HairDiffuseLambertian(surface.albedo, surface.normal, dirToLight) * lightingData.diffuseResponse;
}
else if (o_hairLightingModel == HairLightingModel::Marschner)
{
return float3(0.0f, 0.0f, 0.0f);
}
// [To Do] - add the Kajiya-Kay lighting model option here in order to connect to Atom lighting loop
diffuse *= lightIntensity;
return diffuse;
}
void UpdateLightingParameters(
inout LightingData lightingData,
float3 positionWS, float3 normal, float roughnessLinear)
{
lightingData.dirToCamera = normalize(ViewSrg::m_worldPosition.xyz - positionWS);
// sample BRDF map (indexed by smoothness values rather than roughness)
lightingData.NdotV = saturate(dot(normal, lightingData.dirToCamera));
float2 brdfUV = float2(lightingData.NdotV, (1.0f - roughnessLinear));
lightingData.brdf = PassSrg::m_brdfMap.Sample(PassSrg::LinearSampler, brdfUV).rg;
}
void SetNormalAndUpdateLightingParams(
in float3 tangent, in float3 dirToLight,
inout Surface surface,
inout LightingData lightingData)
{
float3 biNormal;
if (o_hairLightingModel == HairLightingModel::GGX)
{ // Towards half vector but never cross fully (more weight to camera direction)
float3 halfDir = normalize( dirToLight + 1.2 * lightingData.dirToCamera);
biNormal = normalize(cross(tangent, halfDir));
}
else
{ // Face forward towards the camera
biNormal = normalize(cross(tangent, lightingData.dirToCamera));
}
float3 projectedNormal = cross(biNormal, tangent);
surface.normal = normalize(projectedNormal); // the normalization might be redundant
// Next is important in order to set NdotV and other PBR settings - needs to be set once per light
UpdateLightingParameters(lightingData, surface.position, surface.normal, surface.roughnessLinear);
// Diffuse and Specular response
lightingData.specularResponse = FresnelSchlickWithRoughness(lightingData.NdotV, surface.specularF0, surface.roughnessLinear);
lightingData.diffuseResponse = 1.0f - lightingData.specularResponse;
}
//==============================================================================
#include <Atom/Features/PBR/Lights/LightTypesCommon.azsli>
//==============================================================================
//==============================================================================
// Simple Point Light
//==============================================================================
void ApplySimplePointLight(ViewSrg::SimplePointLight light, Surface surface, inout LightingData lightingData)
{
float3 posToLight = light.m_position - surface.position;
float d2 = dot(posToLight, posToLight); // light distance squared
float falloff = d2 * light.m_invAttenuationRadiusSquared;
// Only calculate shading if light is in range
if (falloff < 1.0f)
{
// Smoothly adjusts the light intensity so it reaches 0 at light.m_attenuationRadius distance
float radiusAttenuation = 1.0 - (falloff * falloff);
radiusAttenuation = radiusAttenuation * radiusAttenuation;
// Standard quadratic falloff
d2 = max(0.001 * 0.001, d2); // clamp the light to at least 1mm away to avoid extreme values.
float3 lightIntensity = (light.m_rgbIntensityCandelas / d2) * radiusAttenuation;
float3 dirToLight = normalize(posToLight);
SetNormalAndUpdateLightingParams(surface.tangent, dirToLight, surface, lightingData);
// Diffuse contribution
lightingData.diffuseLighting += GetDiffuseLighting(surface, lightingData, lightIntensity, dirToLight);
// Specular contribution
lightingData.specularLighting += GetSpecularLighting(surface, lightingData, lightIntensity, dirToLight);
}
}
// The following function is currently not used and is written here for future testing.
// The culling scheme requires some more preparation and testing in HairLighting.azsli
// and once done, all light types can use the culling.
// Note that the current culling scheme assumes a fixed order of the light-type lists and
// breaks if some light types are not used or are out of order (on Atom's To Do list)
void ApplyCulledSimplePointLights(Surface surface, inout LightingData lightingData)
{
lightingData.tileIterator.LoadAdvance();
int lightCount = 0;
while( !lightingData.tileIterator.IsDone() && lightCount<1)
{
uint currLightIndex = lightingData.tileIterator.GetValue();
lightingData.tileIterator.LoadAdvance();
ViewSrg::SimplePointLight light = ViewSrg::m_simplePointLights[currLightIndex];
ApplySimplePointLight(light, surface, lightingData);
++lightCount;
}
}
void ApplyAllSimplePointLights(Surface surface, inout LightingData lightingData)
{
for (int l = 0; l < ViewSrg::m_simplePointLightCount; ++l)
{
ViewSrg::SimplePointLight light = ViewSrg::m_simplePointLights[l];
ApplySimplePointLight(light, surface, lightingData);
}
}
//==============================================================================
// Simple Spot Light
//==============================================================================
void ApplySimpleSpotLight(ViewSrg::SimpleSpotLight light, Surface surface, inout LightingData lightingData)
{
float3 posToLight = light.m_position - surface.position;
float3 dirToLight = normalize(posToLight);
float dotWithDirection = dot(dirToLight, -normalize(light.m_direction));
// If outside the outer cone angle return.
if (dotWithDirection < light.m_cosOuterConeAngle)
{
return;
}
float d2 = dot(posToLight, posToLight); // light distance squared
float falloff = d2 * light.m_invAttenuationRadiusSquared;
// Only calculate shading if light is in range
if (falloff < 1.0f)
{
// Smoothly adjusts the light intensity so it reaches 0 at light.m_attenuationRadius distance
float radiusAttenuation = 1.0 - (falloff * falloff);
radiusAttenuation = radiusAttenuation * radiusAttenuation;
// Standard quadratic falloff
d2 = max(0.001 * 0.001, d2); // clamp the light to at least 1mm away to avoid extreme values.
float3 lightIntensity = (light.m_rgbIntensityCandelas / d2) * radiusAttenuation;
if (dotWithDirection < light.m_cosInnerConeAngle) // in penumbra
{
// Normalize into 0.0 - 1.0 space.
float penumbraMask = (dotWithDirection - light.m_cosOuterConeAngle) / (light.m_cosInnerConeAngle - light.m_cosOuterConeAngle);
// Apply smoothstep
penumbraMask = penumbraMask * penumbraMask * (3.0 - 2.0 * penumbraMask);
lightIntensity *= penumbraMask;
}
SetNormalAndUpdateLightingParams(surface.tangent, dirToLight, surface, lightingData);
// Transmission contribution
lightingData.translucentBackLighting += GetHairBackLighting(surface, lightingData, lightIntensity, dirToLight, 1.0);
// Diffuse contribution
lightingData.diffuseLighting += GetDiffuseLighting(surface, lightingData, lightIntensity, dirToLight);
// Specular contribution
lightingData.specularLighting += GetSpecularLighting(surface, lightingData, lightIntensity, dirToLight);
}
}
void ApplyAllSimpleSpotLights(Surface surface, inout LightingData lightingData)
{
for (int l = 0; l < ViewSrg::m_simpleSpotLightCount; ++l)
{
ViewSrg::SimpleSpotLight light = ViewSrg::m_simpleSpotLights[l];
ApplySimpleSpotLight(light, surface, lightingData);
}
}
//==============================================================================
// Point Light
//==============================================================================
void ApplyPointLight(ViewSrg::PointLight light, Surface surface, inout LightingData lightingData)
{
float3 posToLight = light.m_position - surface.position;
float d2 = dot(posToLight, posToLight); // light distance squared
float falloff = d2 * light.m_invAttenuationRadiusSquared;
// Only calculate shading if light is in range
if (falloff < 1.0f)
{
// Smoothly adjusts the light intensity so it reaches 0 at light.m_attenuationRadius distance
float radiusAttenuation = 1.0 - (falloff * falloff);
radiusAttenuation = radiusAttenuation * radiusAttenuation;
// Standard quadratic falloff
d2 = max(0.001 * 0.001, d2); // clamp the light to at least 1mm away to avoid extreme values.
float3 lightIntensity = (light.m_rgbIntensityCandelas / d2) * radiusAttenuation;
float3 dirToLight = normalize(posToLight);
SetNormalAndUpdateLightingParams(surface.tangent, dirToLight, surface, lightingData);
// Diffuse contribution
lightingData.diffuseLighting += GetDiffuseLighting(surface, lightingData, lightIntensity, dirToLight);
// Transmission contribution
lightingData.translucentBackLighting += GetHairBackLighting(surface, lightingData, lightIntensity, dirToLight, 1.0);
// Adjust the light direction for specular based on bulb size
// Calculate the reflection off the normal from the view direction
float3 reflectionDir = reflect(-lightingData.dirToCamera, surface.normal);
// Calculate a vector from the reflection vector to the light
float3 reflectionPosToLight = posToLight - dot(posToLight, reflectionDir) * reflectionDir;
// Adjust the direction to light based on the bulb size
posToLight -= reflectionPosToLight * saturate(light.m_bulbRadius / length(reflectionPosToLight));
// Adjust the intensity of the light based on the bulb size to conserve energy
float sphereIntensityNormalization = GetIntensityAdjustedByRadiusAndRoughness(surface.roughnessA, light.m_bulbRadius, d2);
// Specular contribution
lightingData.specularLighting += sphereIntensityNormalization * GetSpecularLighting(surface, lightingData, lightIntensity, normalize(posToLight));
}
}
void ApplyAllPointLights(Surface surface, inout LightingData lightingData)
{
for (int l = 0; l < ViewSrg::m_pointLightCount; ++l)
{
ViewSrg::PointLight light = ViewSrg::m_pointLights[l];
ApplyPointLight(light, surface, lightingData);
}
}
//==============================================================================
// Disk Lights
//==============================================================================
#include <Atom/Features/PBR/Lights/DiskLight.azsli>
void ApplyAllDiskLights(Surface surface, inout LightingData lightingData)
{
SetNormalAndUpdateLightingParams(surface.tangent, lightingData.dirToCamera, surface, lightingData);
for (int l = 0; l < ViewSrg::m_diskLightCount; ++l)
{
ViewSrg::DiskLight light = ViewSrg::m_diskLights[l];
ApplyDiskLight(light, surface, lightingData);
}
}
//==============================================================================
// Directional Lights
//==============================================================================
void ApplyDirectionalLights(Surface surface, inout LightingData lightingData)
{
DirectionalLightShadow::DebugInfo debugInfo = {0, false};
// Shadowed check
const uint shadowIndex = ViewSrg::m_shadowIndexDirectionalLight;
float litRatio = 1.0f;
float backShadowRatio = 0.0f;
SetNormalAndUpdateLightingParams(surface.tangent, -lightingData.dirToCamera, surface, lightingData);
if (o_enableShadows && shadowIndex < SceneSrg::m_directionalLightCount)
{
litRatio = DirectionalLightShadow::GetVisibility(
shadowIndex,
lightingData.shadowCoords,
surface.normal,
debugInfo);
}
// Add the lighting contribution for each directional light
for (int index = 0; index < SceneSrg::m_directionalLightCount; index++)
{
SceneSrg::DirectionalLight light = SceneSrg::m_directionalLights[index];
float3 dirToLight = normalize(-light.m_direction);
// Adjust the direction of the light based on its angular diameter.
float3 reflectionDir = reflect(-lightingData.dirToCamera, surface.normal);
float3 lightDirToReflectionDir = reflectionDir - dirToLight;
float lightDirToReflectionDirLen = length(lightDirToReflectionDir);
lightDirToReflectionDir = lightDirToReflectionDir / lightDirToReflectionDirLen; // normalize the length
lightDirToReflectionDirLen = min(light.m_angularRadius, lightDirToReflectionDirLen);
dirToLight += lightDirToReflectionDir * lightDirToReflectionDirLen;
dirToLight = normalize(dirToLight);
float currentLitRatio = 1.0f;
float currentBackShadowRatio = 1.0f;
if (o_enableShadows)
{
currentLitRatio = (index == shadowIndex) ? litRatio : 1.0f;
currentBackShadowRatio = 1.0 - currentLitRatio;
if (o_transmission_mode == TransmissionMode::ThickObject)
{
currentBackShadowRatio = (index == shadowIndex) ? backShadowRatio : 0.;
}
}
lightingData.diffuseLighting += GetDiffuseLighting(surface, lightingData, light.m_rgbIntensityLux, dirToLight) * currentLitRatio;
lightingData.specularLighting += GetSpecularLighting(surface, lightingData, light.m_rgbIntensityLux, dirToLight) * currentLitRatio;
lightingData.translucentBackLighting += GetHairBackLighting(surface, lightingData, light.m_rgbIntensityLux, dirToLight, currentBackShadowRatio);
}
}
//==============================================================================
// IBL - GI and Reflections
//==============================================================================
// Adding diffuse contribution from IBL to the hair - this part still requires
// multiple passes and improvements for both diffuse and specular IBL components.
// The immediate things to improve:
// - Virtual direction to 'light'
// - Absorption function based on hair accumulated thickness (back to front) and
// reverse thickness (front to back)
// - Diffuse contribution elements / scaling
float3 ApplyIblDiffuse(Surface surface, LightingData lightingData)
{
float3 irradianceDir = MultiplyVectorQuaternion(surface.normal, SceneSrg::m_iblOrientation);
float3 diffuseSample = SceneSrg::m_diffuseEnvMap.Sample(SceneSrg::m_samplerEnv, GetCubemapCoords(irradianceDir)).rgb;
// float3 diffuseLighting = HairDiffuseLambertian(surface.albedo, surface.normal, lightingData.dirToCamera) * lightingData.diffuseResponse * diffuseSample;
// float3 diffuseLighting = GetDiffuseLighting(surface, lightingData, diffuseSample, surface.normal);
// Notice the multiplication with inverse thickness used as a measure of occlusion
return lightingData.diffuseResponse * surface.albedo * diffuseSample * (1.0f - surface.thickness);
}
float3 ApplyIblSpecular(Surface surface, LightingData lightingData)
{
float3 reflectDir = reflect(-lightingData.dirToCamera, surface.normal);
reflectDir = MultiplyVectorQuaternion(reflectDir, SceneSrg::m_iblOrientation);
// global
float3 specularSample = SceneSrg::m_specularEnvMap.SampleLevel(
SceneSrg::m_samplerEnv, GetCubemapCoords(reflectDir),
GetRoughnessMip(surface.roughnessLinear)).rgb;
float3 specularLighting = GetSpecularLighting(surface, lightingData, specularSample, reflectDir);
return specularLighting;
}
// Remark: IBL is still WIP and this part will change in the near future
void ApplyIBL(Surface surface, inout LightingData lightingData)
{
// float3 normal = normalize(float3(surface.tangent.z, -surface.tangent.x, surface.tangent.y));
// SetNormalAndUpdateLightingParams(surface.tangent, normal, surface, lightingData);
SetNormalAndUpdateLightingParams(surface.tangent, lightingData.dirToCamera, surface, lightingData);
float3 iblDiffuse = ApplyIblDiffuse(surface, lightingData);
float3 iblSpecular = ApplyIblSpecular(surface, lightingData);
// Adjust IBL lighting by exposure.
float iblExposureFactor = pow(2.0, SceneSrg::m_iblExposure);
lightingData.diffuseLighting += (iblDiffuse * iblExposureFactor * lightingData.diffuseAmbientOcclusion);
lightingData.specularLighting += (iblSpecular * iblExposureFactor);
}
//==============================================================================
//
// Light Types Application
//
//==============================================================================
void ApplyLighting(inout Surface surface, inout LightingData lightingData)
{
// Shadow coordinates generation for the directional light
const uint shadowIndex = ViewSrg::m_shadowIndexDirectionalLight;
if (o_enableShadows && shadowIndex < SceneSrg::m_directionalLightCount)
{
DirectionalLightShadow::GetShadowCoords(shadowIndex, surface.position, lightingData.shadowCoords);
}
// Light loops application.
// If culling is used, the order of the calls must match the light types list order
// ApplyDecals(lightingData.tileIterator, surface);
if (o_enableDirectionalLights)
{
ApplyDirectionalLights(surface, lightingData);
}
if (o_enablePunctualLights)
{
ApplyAllSimplePointLights(surface, lightingData);
ApplyAllSimpleSpotLights(surface, lightingData);
}
if (o_enableAreaLights)
{
ApplyAllPointLights(surface, lightingData);
ApplyAllDiskLights(surface, lightingData);
}
if (o_enableIBL)
{
ApplyIBL(surface, lightingData);
}
}

@ -0,0 +1,275 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//------------------------------------------------------------------------------
// Shader code related to lighting and shadowing for TressFX
//------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include <HairUtilities.azsli>
//------------------------------------------------------------------------------
struct HairShadeParams
{
float3 m_color;
float m_hairShadowAlpha;
float m_fiberRadius;
float m_fiberSpacing;
// Original TressFX Kajiya lighting model parameters
float m_Ka;
float m_Kd;
float m_Ks1;
float m_Ex1;
float m_Ks2;
float m_Ex2;
// Marschner lighting model parameters
float m_cuticleTilt;
float m_roughness;
};
//! Original TressFX enhanced Kajiya-Kay lighting model code
//!
//! Returns a float3 which is the scale for diffuse, spec term, and colored spec term.
//!
//! The diffuse term is from Kajiya.
//!
//! The spec term is what Marschner refers to as "R", reflecting directly off the surface
//! of the hair, taking the color of the light like a dielectric specular term. This
//! highlight is shifted towards the root of the hair.
//!
//! The colored spec term is caused by light passing through the hair, bouncing off the
//! back, and coming back out. It therefore picks up the color of the light.
//! Marschner refers to this term as the "TRT" term. This highlight is shifted towards the
//! tip of the hair.
//!
//! vEyeDir, vLightDir and vTangentDir are all pointing out.
//! coneAngleRadians explained below.
//!
//! Hair has a tilted-conical shape along its length. Sort of like the following.
//!
//! \ /
//! \ /
//! \ /
//! \ /
//!
//! The angle of the cone is the last argument, in radians.
//! It's typically in the range of 5 to 10 degrees
float3 ComputeDiffuseSpecFactors(
float3 vEyeDir, float3 vLightDir, float3 vTangentDir, HairShadeParams params,
float coneAngleRadians = 10 * AMD_PI / 180)
{
// In Kajiya's model the diffuse component is sin(T, L)
float cosTL = (dot(vTangentDir, vLightDir));
float sinTL = sqrt(1 - cosTL * cosTL);
float diffuse = sinTL;
float cosTRL = -cosTL;
float sinTRL = sinTL;
float cosTE = (dot(vTangentDir, vEyeDir));
float sinTE = sqrt(1 - cosTE * cosTE);
// Primary highlight: reflected direction shifted towards the root (2 * coneAngleRadians)
float cosTRL_root = cosTRL * cos(2 * coneAngleRadians) - sinTRL * sin(2 * coneAngleRadians);
float sinTRL_root = sqrt(1 - cosTRL_root * cosTRL_root);
float specular_root = max(0, cosTRL_root * cosTE + sinTRL_root * sinTE);
// Secondary highlight: reflected direction shifted toward tip (3*coneAngleRadians)
float cosTRL_tip = cosTRL * cos(-3 * coneAngleRadians) - sinTRL * sin(-3 * coneAngleRadians);
float sinTRL_tip = sqrt(1 - cosTRL_tip * cosTRL_tip);
float specular_tip = max(0, cosTRL_tip * cosTE + sinTRL_tip * sinTE);
return float3(
params.m_Kd * diffuse,
params.m_Ks1 * pow(specular_root, params.m_Ex1),
params.m_Ks2 * pow(specular_tip, params.m_Ex2));
}
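// Typical use of the factors returned by ComputeDiffuseSpecFactors (see SimplifiedHairLighting below):
//   float3 factors = ComputeDiffuseSpecFactors(V, L, T, params);
//   float3 color   = factors.x * baseColor     // Kajiya diffuse, hair colored
//                  + factors.y                 // R highlight, light colored
//                  + factors.z * baseColor;    // TRT highlight, hair colored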
float LinearizeDepth(float depthNDC, float fNear, float fFar)
{
return fNear * fFar / (fFar - depthNDC * (fFar - fNear));
}
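// For example, LinearizeDepth with fNear = 0.1 and fFar = 100.0 maps depthNDC = 0.0 to 0.1 (near plane),
// depthNDC = 1.0 to 100.0 (far plane), and depthNDC = 0.5 to roughly 0.2 due to the hyperbolic depth distribution.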
//! The following code is for reference only and should be removed once the
//! Kajiya-Kay original TressFX lighting model is connected to Atom as was
//! done for the Marschner lighting model.
#define DEMO_NUMBER_OF_LIGHTS 3
float3 SimplifiedHairLighting(float3 vTangent, float3 vPositionWS, float3 vViewDirWS, in HairShadeParams params, float3 vNDC)
{
// Initialize information needed for all lights
float3 V = normalize(vViewDirWS);
float3 T = normalize(vTangent);
float3 accumulatedHairColor = float3(0.0, 0.0, 0.0);
float4 lightPosWSVec[DEMO_NUMBER_OF_LIGHTS] = {
float4(3, 0, 3, 1.5f), // Sun
float4(-.5, 0, 0.5, .5f),
float4(.5, 0, 0.5, .5f),
};
float3 lightColorVec[DEMO_NUMBER_OF_LIGHTS] = {
float3(1,1,.95f), // Sun
float3(1,1,1),
float3(1,1,1)
};
// Static lights loop for reference - not connected to Atom
// [To Do] - connect to Atom lighting ala HairLightTypes loop
for (int l = 0; l < DEMO_NUMBER_OF_LIGHTS ; l++)
{
float3 lightPosWS = lightPosWSVec[l].xyz;
float3 LightVector = normalize( vPositionWS - lightPosWS );
float lightIntensity = lightPosWSVec[l].w;
float3 LightColor = lightColorVec[l];
float3 L = LightVector;
// Reference usage of shadow
// float shadowTerm = ComputeLightShadow(l, vPositionWS, params);
// if (shadowTerm <= 0.f)
// continue;
float3 lightSurfaceCoeffs = ComputeDiffuseSpecFactors(V, L, T, params);
// The diffuse coefficient here is a rough approximation as per the Kajiya model
float3 diffuseComponent = lightSurfaceCoeffs.x * params.m_color;
// This is the approximation to Marschner R but azimuthal only
float3 specularAtPos = lightSurfaceCoeffs.y;
// This is the approximation to Marschner TRT but azimuthal only
// Notice the base color is gathered due to the transmittance within the hair
float3 specularAtBase = lightSurfaceCoeffs.z * params.m_color;
// Final result
float3 lightContribution = (diffuseComponent + specularAtPos + specularAtBase) * lightIntensity * LightColor; // * shadowTerm;
accumulatedHairColor += max(float3(0, 0, 0), lightContribution );
}
return accumulatedHairColor;
}
//==============================================================================
// Atom Lighting
//==============================================================================
#include <Atom/Features/Shadow/DirectionalLightShadow.azsli>
#include <HairLightTypes.azsli>
//------------------------------------------------------------------------------
float3 CalculateLighting(
float4 screenCoords, // XY - screen coords 0..max pix res, Z - depth 0..1
float3 vPositionWS, float3 vViewDirWS, float3 vTangent,
float thickness, in HairShadeParams material )
{
//-------- Surface init --------
Surface surface;
const float specularF0Factor = 0.04f; // default dielectric specular F0
surface.position = vPositionWS;
surface.tangent = vTangent; // Redundant - will be calculated per light
surface.normal = float3(0, 0, 0); // Will fail lights that did not initialize properly.
surface.roughnessLinear = material.m_roughness;
surface.cuticleTilt = material.m_cuticleTilt;
surface.thickness = thickness;
surface.CalculateRoughnessA();
surface.SetAlbedoAndSpecularF0( material.m_color, specularF0Factor);
// The transmission / back lighting does not seem to work yet!
surface.transmission.InitializeToZero(); // Assuming thin layer
surface.transmission.tint = material.m_color;
surface.transmission.thickness = 0.001; // 1 mm settings
surface.transmission.transmissionParams = float4(1.0, 1.0, 1.0, 32.0); // for a thin surface: XYZ are partial factors and W is the exponent multiplier
//------- LightingData init -------
LightingData lightingData;
float4 screenPositionForLighting = mul(ViewSrg::m_viewProjectionMatrix, float4(vPositionWS, 1.0));
uint2 dimensions;
PassSrg::m_linearDepth.GetDimensions(dimensions.x, dimensions.y);
screenPositionForLighting.y = 1.0 - screenPositionForLighting.y;
screenPositionForLighting.xy = (screenPositionForLighting.xy * 0.5 + 0.5) * dimensions;
// Light iterator - required for the init but the culling is not used yet.
lightingData.tileIterator.Init(screenCoords, PassSrg::m_lightListRemapped, PassSrg::m_tileLightData);
// The normal assignment will be overridden afterwards per light
lightingData.Init(surface.position, surface.normal, surface.roughnessLinear);
ApplyLighting(surface, lightingData);
return lightingData.diffuseLighting + lightingData.specularLighting;
}
float3 TressFXShading(float2 pixelCoord, float depth, float3 vTangentCoverage, float3 baseColor, float thickness, int shaderParamIndex)
{
float3 vNDC; // normalized device / screen coordinates: [-1..1, -1..1, 0..1]
float3 vPositionWS = ScreenPosToWorldPos(PassSrg::m_linearDepth, pixelCoord, depth, vNDC);
// [To Do] the following two lines are a hack to make the tile lighting work for now
#define _BIG_HACK_FOR_TESTING_ 1//32
float4 screenCoords = float4( _BIG_HACK_FOR_TESTING_ * pixelCoord, depth, depth); // screen space position - XY in pixels - ZW are depth 0..1
float3 vViewDirWS = g_vEye - vPositionWS;
// Need to expand the tangent that was compressed to store in the buffer
float3 vTangent = normalize(vTangentCoverage.xyz * 2.f - 1.f);
//---- TressFX original lighting params setting ----
HairShadeParams params;
params.m_color = baseColor;
params.m_hairShadowAlpha = HairParams[shaderParamIndex].m_shadowAlpha;
params.m_fiberRadius = HairParams[shaderParamIndex].m_fiberRadius;
params.m_fiberSpacing = HairParams[shaderParamIndex].m_fiberSpacing;
params.m_Ka = HairParams[shaderParamIndex].m_matKValue.x;
params.m_Kd = HairParams[shaderParamIndex].m_matKValue.y;
params.m_Ks1 = HairParams[shaderParamIndex].m_matKValue.z;
params.m_Ex1 = HairParams[shaderParamIndex].m_matKValue.w;
params.m_Ks2 = HairParams[shaderParamIndex].m_hairKs2;
params.m_Ex2 = HairParams[shaderParamIndex].m_hairEx2;
params.m_cuticleTilt = HairParams[shaderParamIndex].m_cuticleTilt;
params.m_roughness = HairParams[shaderParamIndex].m_roughness;
//---------------------------------------------------
float3 accumulatedLight = float3(0, 0, 1);
if (o_hairLightingModel == HairLightingModel::Kajiya)
{ // This option should be removed and the Kajiya-Kay model should be operated from within
// the Atom lighting loop.
accumulatedLight = SimplifiedHairLighting(vTangent, vPositionWS, vViewDirWS, params, vNDC);
}
else
{
accumulatedLight = CalculateLighting(screenCoords, vPositionWS, vViewDirWS, vTangent, thickness, params);
}
return accumulatedLight;
}

@ -0,0 +1,244 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
#pragma once
// --- Overview ---
//
// This is a modified Marschner Lighting Model for hair based on the following papers:
//
// The original Marschner Siggraph paper defining the fundamental hair lighting model.
// http://www.graphics.stanford.edu/papers/hair/hair-sg03final.pdf
//
// An Energy-Conserving Hair Reflectance Model
// http://www.eugenedeon.com/project/an-energy-conserving-hair-reflectance-model/
// http://www.eugenedeon.com/wp-content/uploads/2014/04/egsrhair.pdf
//
// Physically Based Hair Shading in Unreal - specifically adapted in our shader
// https://blog.selfshadow.com/publications/s2016-shading-course/karis/s2016_pbs_epic_hair.pptx
//
// Strand-based Hair Rendering in Frostbite for reference
// https://advances.realtimerendering.com/s2019/hair_presentation_final.pdf
//
//
// Path Notations
// --------------
// The Marschner model divides hair rendering into three light paths: R, TT and TRT
// R - the light bounces straight off the hair fiber (Reflection)
// TT - the light penetrates the hair fiber and exits at the other side (Transmittance - Transmittance)
// TRT - the light penetrates the hair fiber, bounces inside and exits (Transmittance - Reflection - Transmittance)
//
// The calculations for each path are divided into longitude (M) and azimuth (N) terms:
// Longitude: M_R, M_TT, M_TRT
// Azimuth: N_R, N_TT, N_TRT
//
// Other notations
// ---------------
// Wi - incoming light vector
// Wr - reflected light vector
// L - angles with respect to the Longitude
// O - angles with respect to the azimuth
// Li and Lr are the longitudinal angles with respect to incoming/reflected light, i.e. the angle between
// those vectors and the normal plane (the plane perpendicular to the hair)
// Oi and Or are the azimuthal angles, i.e. the angles contained by the normal plane
// Lh and Oh are the averages, i.e. Lh = (Lr + Li) / 2 and Oh = (Or + Oi) / 2
// Ld is the difference angle, i.e. Ld = (Lr - Li) / 2
// O denotes the relative azimuth, simply O = (Or - Oi)
#include <Atom/RPI/Math.azsli>
//------------------------------------------------------------------------------
option bool o_enableMarschner_R = true;
option bool o_enableMarschner_TRT = true;
option bool o_enableMarschner_TT = true;
option bool o_enableDiffuseLobe = true;
option bool o_enableSpecularLobe = true;
option bool o_enableTransmittanceLobe = true;
//------------------------------------------------------------------------------
// Longitudinal functions (M_R, M_TT, M_TRT)
// Note that the tilt and roughness multipliers are set a priori, following Epic's artistic choices
float M_R(Surface surface, float Lh, float sinLiPlusSinLr)
{
float a = 1.0f * surface.cuticleTilt; // Tilt is translated into the mean offset
float b = 0.5 * surface.roughnessA2; // Roughness is used as the standard deviation
// return GaussianNormalized(sinLiPlusSinLr, a, b); // reference
return GaussianNormalized(Lh, a, b);
}
float M_TT(Surface surface, float Lh, float sinLiPlusSinLr)
{
float a = 1.0 * surface.cuticleTilt;
float b = 0.5 * surface.roughnessA2;
// return GaussianNormalized(sinLiPlusSinLr, a, b); // reference
return GaussianNormalized(Lh, a, b);
}
float M_TRT(Surface surface, float Lh, float sinLiPlusSinLr)
{
float a = 1.5 * surface.cuticleTilt;
float b = 1.0 * surface.roughnessA2;
// return GaussianNormalized(sinLiPlusSinLr, a, b); // reference
return GaussianNormalized(Lh, a, b);
}
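// GaussianNormalized() is provided by Atom/RPI/Math.azsli. For reference, a normalized Gaussian
// with mean 'a' and standard deviation 'b' has the following form (a sketch assuming the standard
// definition - the exact Atom implementation may differ):
//   float GaussianNormalized(float x, float a, float b)
//   {
//       return exp(-0.5 * Pow2((x - a) / b)) / (b * sqrt(2.0 * PI));
//   }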
// Azimuth functions (N_R, N_TT, N_TRT)
float N_R(Surface surface, float cos_O2, float3 Wi, float3 Wr, float f0)
{
// Fresnel part of the attenuation term (A in the papers)
float fresnel = FresnelSchlick( sqrt(0.5 * dot(Wi, Wr) + 0.5) , f0);
// Distribution term
float distribution = 0.25 * cos_O2;
// No absorption term since this is the reflected light path
return fresnel * distribution;
}
// Light passes through the hair and exits at the back - the most dominant effect
// will be from lights behind the hair when the hair is thin and not concealed
float3 N_TT(Surface surface, float n2, float cos_O, float cos_O2, float3 cos_Ld, float f0)
{
// Helper variables (see papers)
float a = rcp(n2);
float h = (1 + a * (0.6 - (0.8 * cos_O))) * cos_O2;
// Fresnel part of the attenuation term (A in the papers)
float fresnel = FresnelSchlick(cos_Ld * sqrt( 1 - (h*h) ), f0);
fresnel = Pow2(1 - fresnel);
// The absorption part of the attenuation term (A in the papers).
float3 absorption = pow(surface.albedo, sqrt( 1 - (h*h*a*a) ) / (2 * cos_Ld) );
// Distribution term
float distribution = exp(-3.65 * cos_O - 3.98);
return absorption * (fresnel * distribution);
}
float3 N_TRT(Surface surface, float cos_O, float3 cos_Ld, float f0)
{
// Helper variables (see papers)
float h = sqrt(3.0f) * 0.5f;
// Fresnel part of the attenuation term (A in the papers)
float fresnel = FresnelSchlick(cos_Ld * sqrt( 1 - (h*h) ), f0);
fresnel = Pow2(1 - fresnel) * fresnel;
// How much light the hair will absorb. Part of the attenuation term (A in the papers)
float3 absorption = pow(surface.albedo, 0.8f / max(cos_Ld, 0.001) );
// Distribution term
float distribution = exp(17.0f * cos_O - 16.78f);
return absorption * (fresnel * distribution);
}
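// Note: h = sqrt(3)/2 is the fixed azimuthal offset commonly used for the TRT path
// (as in the Epic hair shading presentation referenced above) instead of integrating
// over the fiber width.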
//------------------------------------------------------------------------------
// The BSDF lighting function used by Hair
//------------------------------------------------------------------------------
float3 HairMarschnerBSDF(Surface surface, LightingData lightingData, const float3 dirToLight)
{
//-------------- Lighting Parameters Calculations ---------------
// Incoming and outgoing light directions
float3 Wi = normalize(dirToLight); // Incident light direction
float3 Wr = normalize(lightingData.dirToCamera); // Reflected light measurement direction AKA reflection
float3 T = normalize(surface.tangent); // Hair tangent
// Incident light and reflection direction projected along the tangent
float3 Ti = T * dot(T, Wi);
float3 Tr = T * dot(T, Wr);
// The light and reflection vectors projected along the normal plane to the hair.
// This plane splits the azimuth and longitude angle and vector contributions.
float3 NPi = normalize(Wi - Ti);
float3 NPr = normalize(Wr - Tr);
// Azimuthal angle between the incident light vector and the reflection
// direction (the direction at which we measure the light scattering)
// float O = acos(dot(NPi, NPr)) <- Unused, for reference only
float cos_O = dot(NPi, NPr);
// cosine(O / 2)
float cos_O2 = sqrt(0.5 * cos_O + 0.5); // <- trigonometric formula for calculating cos(x/2) given cos(x)
// Longitude angles
float Li = acos(clamp(dot(Wi, NPi),-1.0f, 1.0f));
float Lr = acos(clamp(dot(Wr, NPr),-1.0f, 1.0f));
float Lh = (Lr + Li) * 0.5f;
float Ld = (Lr - Li) * 0.5f;
// The following is according to the original article - reference only
// float sinLiPlusSinLr = dot(Wi, NPi) * dot(Wr, NPr);// sin(Li) + sin(Lr);
float sinLiPlusSinLr = sin(Li) + sin(Lr);
// Refraction index
const float n = 1.55f;
float cos_Ld = cos(Ld);
float n2 = (1.19f / cos_Ld) + 0.36f * cos_Ld;
// Fresnel F0
float f0 = Pow2( (1.0f - n) / (1.0f + n) );
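// For n = 1.55 this gives f0 = ((1 - 1.55) / (1 + 1.55))^2 ~= 0.046, the usual dielectric
// Fresnel reflectance used for hair.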
//--------------- Lighting accumulation per lobe ------------------
float3 lighting = float3(0.0f, 0.0f, 0.0f);
// R Path - single reflection from the hair towards the eye.
if (o_enableMarschner_R)
{
float lighting_R = o_enableDiffuseLobe ? M_R(surface, Lh, sinLiPlusSinLr) : 1.0f;
if (o_enableSpecularLobe)
lighting_R *= N_R(surface, cos_O2, Wi, Wr, f0);
// The following lines are a cheap method to get occluded reflection by accounting
// for the thickness when the reflected light passes through the hair.
// A reminder for this approximation - this is the R and not the TT lobe.
float lightToEye = saturate(-dot(Wi, Wr));
float selfOcclude = lightToEye * surface.thickness;
float lightTransferRatio = 1.0f - selfOcclude;
lightTransferRatio *= lightTransferRatio;
lighting_R *= lightTransferRatio;
lighting += float3(lighting_R, lighting_R, lighting_R);
}
// TT Path - ray passes through the hair.
// The ray from the eye is refracted into the hair, then refracted again through the
// back of the hair. The main contribution here will be for thin hair areas from lights
// behind the hair that are not concealed. For thicker hair this contribution should
// be blocked by the head concealing the back light and by the thickness of the hair
// that absorbs this energy over thick areas, hence reducing the energy passed based
// on the average thickness.
if (o_enableMarschner_TT)
{
float3 lighting_TT = o_enableDiffuseLobe ? M_TT(surface, Lh, sinLiPlusSinLr) : float3(1.0f, 1.0f, 1.0f);
if (o_enableSpecularLobe)
lighting_TT *= N_TT(surface, n2, cos_O, cos_O2, cos_Ld, f0);
// Reduce back transmittance based on the thickness of the hair
lighting_TT *= (1.0f - surface.thickness);
lighting += lighting_TT;
}
// TRT Path - ray refracted into the hair, reflected back inside and exits (refracted)
// the hair towards the eye.
if (o_enableMarschner_TRT)
{
float3 lighting_TRT = o_enableDiffuseLobe ? M_TRT(surface, Lh, sinLiPlusSinLr) : float3(1.0f, 1.0f, 1.0f);
if (o_enableSpecularLobe)
lighting_TRT *= N_TRT(surface, cos_O, cos_Ld, f0);
lighting += lighting_TRT;
}
return lighting;
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "LocalShapeConstraints",
"type": "Compute"
}
]
}
}

@ -0,0 +1,233 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code related to per-pixel linked lists.
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//==============================================================================
#include <Atom/Features/SrgSemantics.azsli>
#include <HairRenderingSrgs.azsli>
//!------------------------------ SRG Structure --------------------------------
//! Per pass SRG that holds the dynamic shared read-write buffer shared
//! across all dispatches and draw calls. It is used for all the dynamic buffers
//! that can change between passes due to the application of skinning, simulation
//! and physics effects, and is then read by the rendering shaders.
ShaderResourceGroup PassSrg : SRG_PerPass_WithFallback
{ //! This shared buffer needs to match the SharedBuffer structure
//! shared between all draw calls / dispatches for the hair skinning
StructuredBuffer<int> m_skinnedHairSharedBuffer;
//! Per Pixel Linked List data used by the render raster pass to generate per pixel
//! hair OIT data and shade it in the full screen resolve pass.
//! Originally used space3 for raster pass linked lists and space0 for the resolve pass.
RWTexture2D<uint> m_fragmentListHead;
RWStructuredBuffer<PPLL_STRUCT> m_linkedListNodes;
RWBuffer<uint> m_linkedListCounter;
// Linear depth is used for getting the screen to world transform
Texture2D<float> m_linearDepth;
}
//------------------------------------------------------------------------------
// Originally marked for the TressFX raster pass at space3
#define RWFragmentListHead PassSrg::m_fragmentListHead
#define LinkedListUAV PassSrg::m_linkedListNodes
#define LinkedListCounter PassSrg::m_linkedListCounter
//------------------------------------------------------------------------------
//!=============================================================================
//!
//! Per Instance Space 1 - Dynamic Buffers for Hair Skinning and Simulation
//!
//! ----------------------------------------------------------------------------
struct StrandLevelData
{
float4 skinningQuat;
float4 vspQuat;
float4 vspTranslation;
};
//!------------------------------ SRG Structure --------------------------------
//! Per instance/draw SRG representing dynamic read-write set of buffers
//! that are unique per instance and are shared and changed between passes due
//! to the application of skinning, simulation and physics effects.
//! It is then also read by the rendering shaders.
//! This Srg is NOT shared by the passes since that would require barriers between
//! passes and draw calls. Instead, all buffers are allocated from a single
//! shared buffer (through BufferViews) and that buffer is then shared between
//! the passes via the PerPass Srg frequency.
ShaderResourceGroup HairDynamicDataSrg : SRG_PerObject // space 1 - per instance / object
{
Buffer<float4> m_hairVertexPositions;
Buffer<float4> m_hairVertexTangents;
//! Per hair object offset to the start location of each buffer within
//! 'm_skinnedHairSharedBuffer'. The offset is in bytes!
uint m_positionBufferOffset;
uint m_tangentBufferOffset;
};
//------------------------------------------------------------------------------
// Allow for the code to run with minimal changes - skinning / simulation compute passes
// Usage of per-instance buffer
#define g_GuideHairVertexPositions HairDynamicDataSrg::m_hairVertexPositions
#define g_GuideHairVertexTangents HairDynamicDataSrg::m_hairVertexTangents
//==============================================================================
#include <HairStrands.azsli>
//==============================================================================
//! Hair input structure to Pixel shaders
struct PS_INPUT_HAIR
{
float4 Position : SV_POSITION;
float4 Tangent : Tangent;
float4 p0p1 : TEXCOORD0;
float4 StrandColor : TEXCOORD1;
};
//! Hair Render VS
PS_INPUT_HAIR RenderHairVS(uint vertexId : SV_VertexID)
{
// uint2 scrSize;
// PassSrg::m_linearDepth.GetDimensions(scrSize.x, scrSize.y);
// TressFXVertex tressfxVert = GetExpandedTressFXVert(vertexId, g_vEye.xyz, float2(scrSize), g_mVP);
// [To Do] Hair: the above code should replace the existing one but requires modifications to
// the function GetExpandedTressFXVert.
// Note that in Atom g_vViewport is aspect ratio and NOT size.
TressFXVertex tressfxVert = GetExpandedTressFXVert(vertexId, g_vEye.xyz, g_vViewport.zw, g_mVP);
PS_INPUT_HAIR Output;
Output.Position = tressfxVert.Position;
Output.Tangent = tressfxVert.Tangent;
Output.p0p1 = tressfxVert.p0p1;
Output.StrandColor = tressfxVert.StrandColor;
return Output;
}
// Allocate a new fragment location in fragment color, depth, and link buffers
int AllocateFragment(int2 vScreenAddress)
{
uint newAddress;
InterlockedAdd(LinkedListCounter[0], 1, newAddress);
if (newAddress < 0 || newAddress >= NodePoolSize)
newAddress = FRAGMENT_LIST_NULL;
return newAddress;
}
// Insert a new fragment at the head of the list. The old list head becomes the
// second fragment in the list and so on. Return the address of the *old* head.
int MakeFragmentLink(int2 vScreenAddress, int nNewHeadAddress)
{
int nOldHeadAddress;
InterlockedExchange(RWFragmentListHead[vScreenAddress], nNewHeadAddress, nOldHeadAddress);
return nOldHeadAddress;
}
// Write fragment attributes to list location.
void WriteFragmentAttributes(int nAddress, int nPreviousLink, float4 vData, float3 vColor3, float fDepth)
{
PPLL_STRUCT element;
element.data = PackFloat4IntoUint(vData);
element.color = PackFloat3ByteIntoUint(vColor3, RenderParamsIndex);
element.depth = asuint(saturate(fDepth));
element.uNext = nPreviousLink;
LinkedListUAV[nAddress] = element;
}
// Use the following structure for debug purposes when outputting test results to an RT.
// You can use depth, color or both when outputting and testing the calculation.
struct PSOutput
{
float4 m_color : SV_Target0;
float m_depth : SV_Depth;
};
//////////////////////////////////////////////////////////////
// PPLL Fill PS
// First pass of PPLL implementation
// Builds up the linked list of hair fragments
[earlydepthstencil]
void PPLLFillPS(PS_INPUT_HAIR input)
{
// The strand color read in is either BaseMatColor, or BaseMatColor modulated in the vertex
// shader by a color read from texture, along with modulation by the tip color
float4 strandColor = float4(input.StrandColor.rgb, MatBaseColor.a);
// If we are supporting strand UV texturing, further blend in the texture color/alpha
// Do this while computing NDC and coverage to hide latency from texture lookup
if (EnableStrandUV)
{
// Grab the uv in case we need it
float2 uv = float2(input.Tangent.w, input.StrandColor.w);
// Apply StrandUVTiling
float2 strandUV = float2(uv.x, (uv.y * StrandUVTilingFactor) - floor(uv.y * StrandUVTilingFactor));
strandColor.rgb *= StrandAlbedoTexture.Sample(LinearWrapSampler, strandUV).rgb;
}
//////////////////////////////////////////////////////////////////////
// [To Do] Hair: anti aliasing via coverage requires work and is disabled for now
float3 vNDC = ScreenPosToNDC(PassSrg::m_linearDepth, input.Position.xy, input.Position.z);
uint2 dimensions;
PassSrg::m_linearDepth.GetDimensions(dimensions.x, dimensions.y);
// float coverage = ComputeCoverage(input.p0p1.xy, input.p0p1.zw, vNDC.xy, float2(dimensions.x, dimensions.y));
float coverage = 1.0;
/////////////////////////////////////////////////////////////////////
// Update the alpha to have proper value (accounting for coverage, base alpha, and strand alpha)
float alpha = coverage * strandColor.w;
// Early out
if (alpha < 1.f / 255.f)
{
discard;
}
// Get the screen address
int2 vScreenAddress = int2(input.Position.xy);
// Allocate a new fragment
int nNewFragmentAddress = AllocateFragment(vScreenAddress);
int nOldFragmentAddress = MakeFragmentLink(vScreenAddress, nNewFragmentAddress);
WriteFragmentAttributes(nNewFragmentAddress, nOldFragmentAddress,
float4(input.Tangent.xyz * 0.5 + float3(0.5, 0.5, 0.5), alpha),
strandColor.xyz,
input.Position.z
);
}

@ -0,0 +1,43 @@
{
"Source" : "HairRenderingFillPPLL.azsl",
"DrawList" : "hairFillPass",
"DepthStencilState" :
{
"Depth" :
{
"Enable" : true,
"CompareFunc" : "GreaterEqual"
// Originally in TressFX this is LessEqual - Atom is using reverse sort
},
"Stencil" :
{
"Enable" : false
}
},
"RasterState" :
{
"CullMode" : "None"
},
"BlendState" :
{
"Enable" : false
},
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "RenderHairVS",
"type": "Vertex"
},
{
"name": "PPLLFillPS",
"type": "Fragment"
}
]
}
}

@ -0,0 +1,394 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code related to per-pixel linked lists.
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//==============================================================================
#include <Atom/Features/SrgSemantics.azsli>
#include <HairRenderingSrgs.azsli>
#define AMD_TRESSFX_MAX_HAIR_GROUP_RENDER 16
//!------------------------------ SRG Structure --------------------------------
//! Per pass SRG that holds the dynamic shared read-write buffer shared
//! across all dispatches and draw calls. It is used for all the dynamic buffers
//! that can change between passes due to the application of skinning, simulation
//! and physics effects.
//! Once the compute passes are done, it is read by the rendering shaders.
ShaderResourceGroup PassSrg : SRG_PerPass_WithFallback
{
//! Per Pixel Linked List data used by the render raster pass to generate per pixel
//! hair OIT data and shade it in the full screen resolve pass.
//! Originally used space3 for raster pass linked lists and space0 for the resolve pass.
Texture2D<uint> m_fragmentListHead;
StructuredBuffer<PPLL_STRUCT> m_linkedListNodes;
//! Per hair object material array used by the PPLL resolve pass
//! Originally in TressFXRendering.hlsl this is space 0
HairObjectShadeParams m_hairParams[AMD_TRESSFX_MAX_HAIR_GROUP_RENDER];
// Used as the base color to blend with the furthest hair strand - it is the first
// in the OIT process.
// It can also be used to avoid the HW blend done at the end of the pixel
// shader stage but HW blend might be cheaper than additional PS blend.
Texture2D<float4> m_frameBuffer; // The merged MSAA output
// Linear depth is used for getting the screen to world transform
Texture2D<float> m_linearDepth;
//------------------------------
// Lighting Data
//------------------------------
Sampler LinearSampler
{ // Required by LightingData.azsli
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
AddressU = Clamp;
AddressV = Clamp;
AddressW = Clamp;
};
Texture2DArray<float> m_directionalLightShadowmap;
Texture2DArray<float> m_directionalLightExponentialShadowmap;
Texture2DArray<float> m_projectedShadowmaps;
Texture2DArray<float> m_projectedExponentialShadowmap;
Texture2D m_brdfMap;
Texture2D<uint4> m_tileLightData;
StructuredBuffer<uint> m_lightListRemapped;
}
//------------------------------------------------------------------------------
// Originally defined for the TressFX resolve pass at space0
#define FragmentListHead PassSrg::m_fragmentListHead
#define LinkedListNodes PassSrg::m_linkedListNodes
//! The hair objects' material array buffer used by the rendering resolve pass
#define HairParams PassSrg::m_hairParams
//==============================================================================
#include <HairLighting.azsli>
#include <Atom/Features/PostProcessing/FullscreenVertexInfo.azsli>
#include <Atom/Features/PostProcessing/FullscreenVertexUtil.azsli>
// Generates a fullscreen triangle from pipeline provided vertex id
VSOutput FullScreenVS(VSInput input)
{
VSOutput OUT;
float4 posTex = GetVertexPositionAndTexCoords(input.m_vertexID);
OUT.m_texCoord = float2(posTex.z, posTex.w);
OUT.m_position = float4(posTex.x, posTex.y, 0.0, 1.0);
return OUT;
}
//////////////////////////////////////////////////////////////
// Bind data for PPLLResolvePS
#define NODE_DATA(x) LinkedListNodes[x].data
#define NODE_NEXT(x) LinkedListNodes[x].uNext
#define NODE_DEPTH(x) LinkedListNodes[x].depth
#define NODE_COLOR(x) LinkedListNodes[x].color
#define GET_DEPTH_AT_INDEX(uIndex) kBuffer[uIndex].x
#define GET_DATA_AT_INDEX(uIndex) kBuffer[uIndex].y
#define GET_COLOR_AT_INDEX(uIndex) kBuffer[uIndex].z
#define STORE_DEPTH_AT_INDEX(uIndex, uValue) kBuffer[uIndex].x = uValue
#define STORE_DATA_AT_INDEX( uIndex, uValue) kBuffer[uIndex].y = uValue
#define STORE_COLOR_AT_INDEX( uIndex, uValue ) kBuffer[uIndex].z = uValue
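// kBuffer layout per entry (only xyz of the uint4 are used):
//   x = depth  (asuint of the fragment depth)
//   y = data   (packed tangent.xyz and alpha)
//   z = color  (packed color and material/render params index)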
float GetLinearDepth(float zDepth)
{
return abs(((ViewSrg::GetFarZTimesNearZ()) / (ViewSrg::GetFarZMinusNearZ() * zDepth - ViewSrg::GetFarZ())));
}
float4 GatherLinkedList(float2 vfScreenAddress, float2 screenUV, inout float outDepth )
{
uint2 vScreenAddress = uint2(vfScreenAddress);
uint pointer = FragmentListHead[vScreenAddress];
if (pointer == FRAGMENT_LIST_NULL) // [To Do] Skips the very first hair if reset value is 0
{
discard;
}
uint4 kBuffer[KBUFFER_SIZE];
// Init kbuffer to far depth values (reverse depth - 0 is the furthest)
[unroll]
for (int t = 0; t < KBUFFER_SIZE; ++t)
{
STORE_DEPTH_AT_INDEX(t, asuint(0.0));
STORE_DATA_AT_INDEX(t, 0);
}
// Get first K elements from the top (top to bottom)
// And store them in the kbuffer for later
for (int p = 0; p < KBUFFER_SIZE; ++p)
{
if (pointer != FRAGMENT_LIST_NULL)
{
STORE_DEPTH_AT_INDEX(p, NODE_DEPTH(pointer));
STORE_DATA_AT_INDEX(p, NODE_DATA(pointer));
STORE_COLOR_AT_INDEX(p, NODE_COLOR(pointer));
pointer = NODE_NEXT(pointer);
}
}
// float4 fcolor = float4(1, 1, 1, 1); // Blend alpha and inverse alpha
// float4 fcolor = float4(1, 1, 1, 0); // Blend one and inverse alpha
// The very first color taken is the background render target pixel color.
// Alpha should be 1 for Alpha blend alpha and 0 for alpha One, depending on your
// alpha blending method of choice
// When using the render target as the input, alpha blending mode should be disabled!
float4 backgroundColor = float4(PassSrg::m_frameBuffer.Sample(PassSrg::LinearSampler, screenUV).xyz, 0); // Blend one and inverse alpha
float4 fcolor = backgroundColor;
float backgroundLinearDepth = PassSrg::m_linearDepth.Sample(PassSrg::LinearSampler, screenUV).x;
float previousLinearDepth = backgroundLinearDepth;
// Go through the remaining layers of hair
[allow_uav_condition]
for (int iFragment = 0; iFragment < MAX_FRAGMENTS && pointer != FRAGMENT_LIST_NULL ; ++iFragment)
{
int id = 0;
float minDepth = 1.0;
// Find the current furthest sample in the KBuffer
for (int i = 0; i < KBUFFER_SIZE; i++)
{
float fDepth = asfloat(GET_DEPTH_AT_INDEX(i));
if (minDepth > fDepth)
{
minDepth = fDepth;
id = i;
}
}
// Fetch the node data
uint data = NODE_DATA(pointer);
uint color = NODE_COLOR(pointer);
uint nodeDepth = NODE_DEPTH(pointer);
float fNodeDepth = asfloat(nodeDepth);
// If the node in the linked list is nearer than the furthest one in the local array, exchange the node
// in the local array for the one in the linked list.
if (minDepth < fNodeDepth)
{
uint tmp = GET_DEPTH_AT_INDEX(id);
STORE_DEPTH_AT_INDEX(id, nodeDepth);
fNodeDepth = asfloat(tmp);
tmp = GET_DATA_AT_INDEX(id);
STORE_DATA_AT_INDEX(id, data);
data = tmp;
tmp = GET_COLOR_AT_INDEX(id);
STORE_COLOR_AT_INDEX(id, color);
color = tmp;
}
// Calculate color contribution from whatever sample we are using
float4 vData = UnpackUintIntoFloat4(data);
float alpha = vData.w;
uint shadeParamIndex; // So we know what settings to shade with
float3 fragmentColor = UnpackUintIntoFloat3Byte(color, shadeParamIndex);
// Cheap back layers - the bottom hair layers use background and scalp base color
// The first layer blends the image buffer and the rest of the hairs are blended
// on top.
// These layers are also used as the blocking factor for the TT lobe in the Marschner
// lighting model by accumulating the depth.
fcolor.xyz = fcolor.xyz * (1.f - alpha) + fragmentColor * alpha;
fcolor.w += alpha * (1.0f - fcolor.w);
pointer = NODE_NEXT(pointer);
}
// Make sure we are blending the correct number of strands (don't blend more than we have)
float maxAlpha = 0;
float minDepth = 1.0; // reverse depth: initialized to the nearest value, minimized below to find the furthest fragment
const float closeRangeTH = 0.01f; // Lying on the skin - block lights from the back
const float gapRangeTH = 0.05f; // Far enough from the previous hair - allow for TT lobe to pass
bool isCloseToObject = false;
// Blend the top-most entries
for (int j = 0; j < KBUFFER_SIZE; j++)
{
int id = 0;
minDepth = 1.0;
// find the furthest node in the array
for (int i = 0; i < KBUFFER_SIZE; i++)
{
float fDepth = asfloat(GET_DEPTH_AT_INDEX(i));
if (minDepth > fDepth)
{
minDepth = fDepth;
id = i;
}
}
// read this node's values before taking it out of the next search
uint nodeDepth = GET_DEPTH_AT_INDEX(id);
uint data = GET_DATA_AT_INDEX(id);
uint color = GET_COLOR_AT_INDEX(id);
// take this node out of the next search
STORE_DEPTH_AT_INDEX(id, asuint(1.0));
// Use high quality shading for the nearest k fragments
float fDepth = asfloat(nodeDepth);
float currentLinearDepth = GetLinearDepth(fDepth);
// Light should not pass through hair if the back of the hair is too close to the
// background object. In this case mark the hair as thick to prevent TT lobe from
// transmitting light and cancel light accumulation passed so far.
bool currentIsCloseToObject = (backgroundLinearDepth - currentLinearDepth < closeRangeTH) ? true : false;
if (!isCloseToObject && currentIsCloseToObject)
{ // Indicate that the object behind is very close - need to prevent any light passage.
// Food for Thought: should the color be reset to background color / other?
isCloseToObject = true; // TT should be blocked
// fcolor.xyz = backgroundColor; // remove the accumulated lighting so far
fcolor.w = 1.0; // Mark hair as thick / blocked from behind.
}
// When the front hair strands are separated from the back, creating a large
// gap, we should only count the front hair group's thickness (restart counting).
bool hairHasGap = (previousLinearDepth - currentLinearDepth > gapRangeTH) ? true : false;
if (!currentIsCloseToObject && hairHasGap)
{ // There is a gap to the previous strand group - restart depth blocking
fcolor.w = 0.0f; // Reset the hair thickness - large gap.
}
previousLinearDepth = currentLinearDepth;
float4 vData = UnpackUintIntoFloat4(data);
float3 vTangent = vData.xyz;
float alpha = vData.w; // Alpha will be used to determine light pass
uint shadeParamIndex; // So we know what settings to shade with
float3 vColor = UnpackUintIntoFloat3Byte(color, shadeParamIndex);
float3 fragmentColor = TressFXShading(vfScreenAddress, fDepth, vTangent, vColor, fcolor.w, shadeParamIndex);
// Blend in the fragment color
fcolor.xyz = fcolor.xyz * (1.f - alpha) + fragmentColor * alpha;
// No HW alpha blending - the first layer blends the image buffer and
// the rest of the hairs are blended on top. However, this might be used
// as the blocking factor for the TT lobe in the Marschner lighting model
// to gradually block light passing through the hair strands from the back.
fcolor.w += alpha * (1.0f - fcolor.w);
}
outDepth = minDepth; // Output closest hair depth
return fcolor;
}
//!-----------------------------------------------------------------------------
//! This is a testing method that displays only the closest hair strand,
//! providing a clear way to test the lighting elements, the depth and the
//! blending of a single hair strand.
//!-----------------------------------------------------------------------------
float4 GetClosestFragment(float2 vfScreenAddress, float2 screenUV, inout float closestDepth)
{
uint2 vScreenAddress = uint2(vfScreenAddress);
uint pointer = FragmentListHead[vScreenAddress];
if (pointer == FRAGMENT_LIST_NULL)
{
discard;
}
float4 fcolor = float4(PassSrg::m_frameBuffer.Sample(PassSrg::LinearSampler, screenUV).xyz, 0); // Blend one and inverse alpha
float maxDepth = -999.0f;
float minDepth = 999.0f;
uint curColor, curData;
for ( ; (pointer!=FRAGMENT_LIST_NULL) ; )
{
float depth = asfloat(NODE_DEPTH(pointer));
if (depth > maxDepth)
{
maxDepth = depth;
curColor = NODE_COLOR(pointer);
curData = NODE_DATA(pointer);
}
if (depth < minDepth)
{
minDepth = depth;
}
pointer = NODE_NEXT(pointer);
}
float curDepth = closestDepth = maxDepth;
float4 vData = UnpackUintIntoFloat4(curData);
float3 vTangent = vData.xyz;
float alpha = 1.0;
uint shadeParamIndex; // the material index
float3 vColor = UnpackUintIntoFloat3Byte(curColor, shadeParamIndex);
float3 fragmentColor = TressFXShading(vfScreenAddress, curDepth, vTangent, vColor, fcolor.w, shadeParamIndex);
// Blend in the fragment color
fcolor.xyz = fcolor.xyz * (1.f - alpha) + (fragmentColor * alpha);
fcolor.w = saturate(fcolor.w + alpha); // Blend alpha and inverse alpha
/*--------------------
// Depth Testing - this block will draw [closest hair depth, furthest hair depth, background depth]
float backgroundLinearDepth = PassSrg::m_linearDepth.Sample(PassSrg::LinearSampler, screenUV).x;
float minLinearDepth = GetLinearDepth(minDepth);
float maxLinearDepth = GetLinearDepth(maxDepth);
fcolor = float4( minLinearDepth * 0.1f, maxLinearDepth * 0.1f, backgroundLinearDepth * 0.1f, 1.0f);
//------------------- */
return fcolor;
}
struct PSColorDepthOutput
{
float4 m_color : SV_Target;
float m_depth : SV_Depth;
};
// The resolve will combine the base color driven by the farther fragments while
// the actual shading will be combined on top based on the shaded closest fragments.
// The closest depth will be returned and written in the depth buffer.
PSColorDepthOutput PPLLResolvePS(VSOutput input)
{
PSColorDepthOutput pixOut;
// GetClosestFragment is a reference method for testing the closest hair strand alone
// pixOut.m_color = GetClosestFragment(input.m_position.xy, input.m_texCoord, tangent, pixOut.m_depth);
pixOut.m_color = GatherLinkedList(input.m_position.xy, input.m_texCoord, pixOut.m_depth);
return pixOut;
}

@ -0,0 +1,42 @@
{
"Source" : "HairRenderingResolvePPLL.azsl",
"DepthStencilState" :
{
"Depth" :
{
"Enable" : true, // The resolve will write the closest hair depth
// Pixels that don't belong to the hair are discarded; any fragment that made it
// into the list has already passed the depth test in the fill pass
"CompareFunc" : "Always"
},
"Stencil" :
{
"Enable" : false
}
},
// Note that in the original TressFX 4.1 a blend operation is used with source One and destination AlphaSource.
// In our current implementation alpha blending is not required as the backbuffer is being sampled to create
// the proper background color. This prevents the slight silhouette that the original
// implementation sometimes had.
"BlendState" :
{
"Enable" : false
},
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "FullScreenVS",
"type": "Vertex"
},
{
"name": "PPLLResolvePS",
"type": "Fragment"
}
]
}
}

@ -0,0 +1,203 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//------------------------------------------------------------------------------
// Shader code related to lighting and shadowing for TressFX
//------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
// originated from: TressFXRendering.hlsl
#pragma once
#include <viewsrg.srgi>
//------------------------------------------------------------------------------
#define g_mVP ViewSrg::m_viewProjectionMatrix
#define g_vEye ViewSrg::m_worldPosition.xyz
#define g_vViewport ViewSrg::m_unprojectionConstants // xy = normalized scale, zw = offset in 0..1 scale
#define cInvViewProjMatrix ViewSrg::m_viewProjectionInverseMatrix
//------------------------------------------------------------------------------
#define AMD_PI 3.1415926
#define AMD_e 2.71828183
#define AMD_TRESSFX_KERNEL_SIZE 5
#define KBUFFER_SIZE 8
#define MAX_FRAGMENTS 512
// Using the value 0 as the null indicator means the first hair fragment slot is skipped,
// since it cannot be distinguished from the cleared head value (0).
// If Atom allows a different clear value (0xffffffff), this can be avoided.
#define FRAGMENT_LIST_NULL 0
//!=============================================================================
//!
//! Render Pass only
//!
//!=============================================================================
// If you change this, you MUST also change TressFXRenderParams in TressFXConstantBuffers.h
// originally: cbuffer TressFXParameters : register(b3, space0)
struct TressFXRenderParameters
{
// General information
float m_hairFiberRadius;
// For deep approximated shadow lookup
float m_shadowAlpha;
float m_fiberSpacing;
// Original TressFX Kajiya lighting model parameters
float m_hairKs2;
float m_hairEx2;
float3 m_fPadding0;
float4 m_matKValue;
int m_maxShadowFibers;
// Marschner lighting model parameters
float m_roughness;
float m_cuticleTilt;
float m_fPadding1;
};
// Separate strand params from pixel render params (so we can index for PPLL)
// If you change this, you MUST change TressFXStrandParams in TressFXConstantBuffers.h
// originally: cbuffer TressFXStrandParameters : register(b4, space0)
struct TressFXStrandParameters
{
float4 m_matBaseColor;
float4 m_matTipColor;
float m_tipPercentage;
float m_strandUVTilingFactor;
float m_fiberRatio;
float m_fiberRadius;
int m_numVerticesPerStrand;
int m_enableThinTip;
int m_nodePoolSize;
int m_renderParamsIndex; // Material index in the hair material array
// Other params
int m_enableStrandUV;
int m_enableStrandTangent;
int2 m_iPadding1;
};
//!------------------------------ SRG Structure --------------------------------
//! Used by the rendering raster pass only - in the future it might be possible to
//! harness the material pipeline / tooling.
//! This is the per draw Srg containing the specific per hair object parameters
//! for the physics simulation and material rendering.
//! Originally at space0 in TressFXRendering.hlsl
//!-----------------------------------------------------------------------------
ShaderResourceGroup HairRenderingMaterialSrg : SRG_PerMaterial
{
// Generated
Buffer<float> m_hairThicknessCoeffs; // Does not seem to be used!
Buffer<float2> m_hairStrandTexCd;
//! The hair textures defining the color of the hair at its base
Texture2D<float4> m_baseAlbedoTexture;
Texture2D<float4> m_strandAlbedoTexture;
//! The hair render material properties (combined with the above textures)
TressFXRenderParameters m_tressFXRenderParameters;
//! The hair object physical material properties.
TressFXStrandParameters m_tressFXStrandParameters;
Sampler LinearWrapSampler
{
MinFilter = Linear;
MagFilter = Linear;
MipFilter = Linear;
AddressU = Wrap;
AddressV = Wrap;
AddressW = Wrap;
};
};
//------------------------------------------------------------------------------
#define g_HairThicknessCoeffs HairRenderingMaterialSrg::m_hairThicknessCoeffs
#define g_HairStrandTexCd HairRenderingMaterialSrg::m_hairStrandTexCd
#define BaseAlbedoTexture HairRenderingMaterialSrg::m_baseAlbedoTexture
#define StrandAlbedoTexture HairRenderingMaterialSrg::m_strandAlbedoTexture
#define LinearWrapSampler HairRenderingMaterialSrg::LinearWrapSampler
#define HairFiberRadius HairRenderingMaterialSrg::m_tressFXRenderParameters.m_hairFiberRadius
#define ShadowAlpha HairRenderingMaterialSrg::m_tressFXRenderParameters.m_shadowAlpha
#define FiberSpacing HairRenderingMaterialSrg::m_tressFXRenderParameters.m_fiberSpacing
#define HairKs2 HairRenderingMaterialSrg::m_tressFXRenderParameters.m_hairKs2
#define HairEx2 HairRenderingMaterialSrg::m_tressFXRenderParameters.m_hairEx2
#define MatKValue HairRenderingMaterialSrg::m_tressFXRenderParameters.m_matKValue
#define MaxShadowFibers HairRenderingMaterialSrg::m_tressFXRenderParameters.m_maxShadowFibers
#define MatBaseColor HairRenderingMaterialSrg::m_tressFXStrandParameters.m_matBaseColor
#define MatTipColor HairRenderingMaterialSrg::m_tressFXStrandParameters.m_matTipColor
#define TipPercentage HairRenderingMaterialSrg::m_tressFXStrandParameters.m_tipPercentage
#define StrandUVTilingFactor HairRenderingMaterialSrg::m_tressFXStrandParameters.m_strandUVTilingFactor
#define FiberRatio HairRenderingMaterialSrg::m_tressFXStrandParameters.m_fiberRatio
#define FiberRadius HairRenderingMaterialSrg::m_tressFXStrandParameters.m_fiberRadius
#define NumVerticesPerStrand HairRenderingMaterialSrg::m_tressFXStrandParameters.m_numVerticesPerStrand
#define EnableThinTip HairRenderingMaterialSrg::m_tressFXStrandParameters.m_enableThinTip
#define NodePoolSize HairRenderingMaterialSrg::m_tressFXStrandParameters.m_nodePoolSize
#define RenderParamsIndex HairRenderingMaterialSrg::m_tressFXStrandParameters.m_renderParamsIndex
#define EnableStrandUV HairRenderingMaterialSrg::m_tressFXStrandParameters.m_enableStrandUV
#define EnableStrandTangent HairRenderingMaterialSrg::m_tressFXStrandParameters.m_enableStrandTangent
//------------------------------------------------------------------------------
//------------------------------------------------------------------------------
//! If you change this, you MUST also change TressFXShadeParams in TressFXConstantBuffers.h
//! and ShadeParams in TressFXShortcut.hlsl
struct HairObjectShadeParams
{
// General information
float m_fiberRadius;
// For deep approximated shadow lookup
float m_shadowAlpha;
float m_fiberSpacing;
// Original TressFX Kajiya lighting model parameters
float m_hairEx2;
float4 m_matKValue; // KAmbient, KDiffuse, KSpec1, Exp1
float m_hairKs2;
// Marschner lighting model parameters
float m_roughness;
float m_cuticleTilt;
float fPadding0;
};
//------------------------------------------------------------------------------
struct PPLL_STRUCT
{
uint depth;
uint data;
uint color;
uint uNext;
};
//------------------------------------------------------------------------------

@ -0,0 +1,444 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code related to simulating hair strands in compute.
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//--------------------------------------------------------------------------------------
// File: HairSimulation.azsl
//
// Physics simulation of hair using compute shaders
//--------------------------------------------------------------------------------------
#pragma once
#define USE_MESH_BASED_HAIR_TRANSFORM 0
// If you change the value below, you must change it in TressFXAsset.h as well.
#ifndef THREAD_GROUP_SIZE
#define THREAD_GROUP_SIZE 64
#endif
groupshared float4 sharedPos[THREAD_GROUP_SIZE];
groupshared float4 sharedTangent[THREAD_GROUP_SIZE];
groupshared float sharedLength[THREAD_GROUP_SIZE];
//--------------------------------------------------------------------------------------
//
// Helper Functions for the main simulation shaders
//
//--------------------------------------------------------------------------------------
bool IsMovable(float4 particle)
{
if ( particle.w > 0 )
return true;
return false;
}
float2 ConstraintMultiplier(float4 particle0, float4 particle1)
{
if (IsMovable(particle0))
{
if (IsMovable(particle1))
return float2(0.5, 0.5);
else
return float2(1, 0);
}
else
{
if (IsMovable(particle1))
return float2(0, 1);
else
return float2(0, 0);
}
}
float4 MakeQuaternion(float angle_radian, float3 axis)
{
// create quaternion using angle and rotation axis
float4 quaternion;
float halfAngle = 0.5f * angle_radian;
float sinHalf = sin(halfAngle);
quaternion.w = cos(halfAngle);
quaternion.xyz = sinHalf * axis.xyz;
return quaternion;
}
// Makes a quaternion from a 4x4 column-major rigid transform matrix. Rigid transform means that the rotational 3x3 sub-matrix is orthonormal.
// Note that this function does not check the orthonormality.
float4 MakeQuaternion(column_major float4x4 m)
{
float4 q;
float trace = m[0][0] + m[1][1] + m[2][2];
if (trace > 0.0f)
{
float r = sqrt(trace + 1.0f);
q.w = 0.5 * r;
r = 0.5 / r;
q.x = (m[1][2] - m[2][1])*r;
q.y = (m[2][0] - m[0][2])*r;
q.z = (m[0][1] - m[1][0])*r;
}
else
{
int i = 0, j = 1, k = 2;
if (m[1][1] > m[0][0])
{
i = 1; j = 2; k = 0;
}
if (m[2][2] > m[i][i])
{
i = 2; j = 0; k = 1;
}
float r = sqrt(m[i][i] - m[j][j] - m[k][k] + 1.0f);
float qq[4];
qq[i] = 0.5f * r;
r = 0.5f / r;
q.w = (m[j][k] - m[k][j])*r;
qq[j] = (m[j][i] + m[i][j])*r;
qq[k] = (m[k][i] + m[i][k])*r;
q.x = qq[0]; q.y = qq[1]; q.z = qq[2];
}
return q;
}
float4 InverseQuaternion(float4 q)
{
float lengthSqr = q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w;
if ( lengthSqr < 0.001 )
return float4(0, 0, 0, 1.0f);
q.x = -q.x / lengthSqr;
q.y = -q.y / lengthSqr;
q.z = -q.z / lengthSqr;
q.w = q.w / lengthSqr;
return q;
}
float3 MultQuaternionAndVector(float4 q, float3 v)
{
float3 uv, uuv;
float3 qvec = float3(q.x, q.y, q.z);
uv = cross(qvec, v);
uuv = cross(qvec, uv);
uv *= (2.0f * q.w);
uuv *= 2.0f;
return v + uv + uuv;
}
float4 MultQuaternionAndQuaternion(float4 qA, float4 qB)
{
float4 q;
q.w = qA.w * qB.w - qA.x * qB.x - qA.y * qB.y - qA.z * qB.z;
q.x = qA.w * qB.x + qA.x * qB.w + qA.y * qB.z - qA.z * qB.y;
q.y = qA.w * qB.y + qA.y * qB.w + qA.z * qB.x - qA.x * qB.z;
q.z = qA.w * qB.z + qA.z * qB.w + qA.x * qB.y - qA.y * qB.x;
return q;
}
float4 NormalizeQuaternion(float4 q)
{
float4 qq = q;
float n = q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w;
if (n < 1e-10f)
{
qq.w = 1;
return qq;
}
qq *= 1.0f / sqrt(n);
return qq;
}
// Compute a quaternion which rotates u to v. u and v must be unit vectors.
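// Uses the identity q ~ ( cross(u, v), 1 + dot(u, v) ): after normalization this is exactly
// ( sin(angle/2) * axis, cos(angle/2) ), the rotation by the full angle from u to v.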
float4 QuatFromTwoUnitVectors(float3 u, float3 v)
{
float r = 1.f + dot(u, v);
float3 n;
// if u and v are (nearly) opposite, i.e. dot(u, v) is close to -1
if (r < 1e-7)
{
r = 0.0f;
n = abs(u.x) > abs(u.z) ? float3(-u.y, u.x, 0.f) : float3(0.f, -u.z, u.y);
}
else
{
n = cross(u, v);
}
float4 q = float4(n.x, n.y, n.z, r);
return NormalizeQuaternion(q);
}
// Create a 4x4 identity matrix
float4x4 MakeIdentity()
{
float4x4 m;
m._m00 = 1; m._m01 = 0; m._m02 = 0; m._m03 = 0;
m._m10 = 0; m._m11 = 1; m._m12 = 0; m._m13 = 0;
m._m20 = 0; m._m21 = 0; m._m22 = 1; m._m23 = 0;
m._m30 = 0; m._m31 = 0; m._m32 = 0; m._m33 = 1;
return m;
}
void ApplyDistanceConstraint(inout float4 pos0, inout float4 pos1, float targetDistance, float stiffness = 1.0)
{
float3 delta = pos1.xyz - pos0.xyz;
float distance = max(length(delta), 1e-7);
float stretching = 1 - targetDistance / distance;
delta = stretching * delta;
float2 multiplier = ConstraintMultiplier(pos0, pos1);
pos0.xyz += multiplier[0] * delta * stiffness;
pos1.xyz -= multiplier[1] * delta * stiffness;
}
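// The index-calculation helpers below map a thread's group-local id onto strand and vertex
// indices. For example (illustrative numbers): with THREAD_GROUP_SIZE = 64 and
// g_NumOfStrandsPerThreadGroup = 4, each strand in the group has 64 / 4 = 16 vertices,
// localStrandIndex = local_id % 4 and localVertexIndex = local_id / 4.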
void CalcIndicesInVertexLevelTotal(uint local_id, uint group_id, inout uint globalStrandIndex, inout uint localStrandIndex, inout uint globalVertexIndex, inout uint localVertexIndex, inout uint numVerticesInTheStrand, inout uint indexForSharedMem, inout uint strandType)
{
indexForSharedMem = local_id;
numVerticesInTheStrand = (THREAD_GROUP_SIZE / g_NumOfStrandsPerThreadGroup);
localStrandIndex = local_id % g_NumOfStrandsPerThreadGroup;
globalStrandIndex = group_id * g_NumOfStrandsPerThreadGroup + localStrandIndex;
localVertexIndex = (local_id - localStrandIndex) / g_NumOfStrandsPerThreadGroup;
strandType = GetStrandType(globalStrandIndex);
globalVertexIndex = globalStrandIndex * numVerticesInTheStrand + localVertexIndex;
}
void CalcIndicesInVertexLevelMaster(uint local_id, uint group_id, inout uint globalStrandIndex, inout uint localStrandIndex, inout uint globalVertexIndex, inout uint localVertexIndex, inout uint numVerticesInTheStrand, inout uint indexForSharedMem, inout uint strandType)
{
indexForSharedMem = local_id;
numVerticesInTheStrand = (THREAD_GROUP_SIZE / g_NumOfStrandsPerThreadGroup);
localStrandIndex = local_id % g_NumOfStrandsPerThreadGroup;
globalStrandIndex = group_id * g_NumOfStrandsPerThreadGroup + localStrandIndex;
globalStrandIndex *= (g_NumFollowHairsPerGuideHair+1);
localVertexIndex = (local_id - localStrandIndex) / g_NumOfStrandsPerThreadGroup;
strandType = GetStrandType(globalStrandIndex);
globalVertexIndex = globalStrandIndex * numVerticesInTheStrand + localVertexIndex;
}
void CalcIndicesInStrandLevelTotal(uint local_id, uint group_id, inout uint globalStrandIndex, inout uint numVerticesInTheStrand, inout uint globalRootVertexIndex, inout uint strandType)
{
globalStrandIndex = THREAD_GROUP_SIZE*group_id + local_id;
numVerticesInTheStrand = (THREAD_GROUP_SIZE / g_NumOfStrandsPerThreadGroup);
strandType = GetStrandType(globalStrandIndex);
globalRootVertexIndex = globalStrandIndex * numVerticesInTheStrand;
}
void CalcIndicesInStrandLevelMaster(uint local_id, uint group_id, inout uint globalStrandIndex, inout uint numVerticesInTheStrand, inout uint globalRootVertexIndex, inout uint strandType)
{
globalStrandIndex = THREAD_GROUP_SIZE*group_id + local_id;
globalStrandIndex *= (g_NumFollowHairsPerGuideHair+1);
numVerticesInTheStrand = (THREAD_GROUP_SIZE / g_NumOfStrandsPerThreadGroup);
strandType = GetStrandType(globalStrandIndex);
globalRootVertexIndex = globalStrandIndex * numVerticesInTheStrand;
}
//--------------------------------------------------------------------------------------
//
// Integrate
//
// Verlet integration for calculating the new position based on exponential decay to move
// from the current position towards an approximated extrapolation point based
// on the velocity between the two last points and influenced by gravity force.
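// A sketch of the update performed below, per step of length g_TimeStep (the decay is
// referenced to a 60 Hz rate):
//   decay = exp(-dampingCoeff * g_TimeStep * 60)
//   x_new = x + decay * (x - x_prev) + gravityForce * g_TimeStep^2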
//--------------------------------------------------------------------------------------
float3 Integrate(float3 curPosition, float3 oldPosition, float3 initialPos, float dampingCoeff = 1.0f)
{
float3 force = g_GravityMagnitude * float3(0, 0, -1.0f);
// float decay = exp(-g_TimeStep/decayTime)
float decay = exp(-dampingCoeff * g_TimeStep * 60.0f);
return curPosition + decay * (curPosition - oldPosition) + force * g_TimeStep * g_TimeStep;
}
struct CollisionCapsule
{
float4 p0; // xyz = position of capsule 0, w = radius 0
float4 p1; // xyz = position of capsule 1, w = radius 1
};
//--------------------------------------------------------------------------------------
//
// CapsuleCollision
//
// Moves the position based on collision with capsule
//
//--------------------------------------------------------------------------------------
bool CapsuleCollision(float4 curPosition, float4 oldPosition, inout float3 newPosition, CollisionCapsule cc, float friction = 0.4f)
{
const float radius0 = cc.p0.w;
const float radius1 = cc.p1.w;
newPosition = curPosition.xyz;
if ( !IsMovable(curPosition) )
return false;
float3 segment = cc.p1.xyz - cc.p0.xyz;
float3 delta0 = curPosition.xyz - cc.p0.xyz;
float3 delta1 = cc.p1.xyz - curPosition.xyz;
float dist0 = dot(delta0, segment);
float dist1 = dot(delta1, segment);
// colliding with sphere 1
if (dist0 < 0.f )
{
if ( dot(delta0, delta0) < radius0 * radius0)
{
float3 n = normalize(delta0);
newPosition = radius0 * n + cc.p0.xyz;
return true;
}
return false;
}
// colliding with sphere 2
if (dist1 < 0.f )
{
if ( dot(delta1, delta1) < radius1 * radius1)
{
float3 n = normalize(-delta1);
newPosition = radius1 * n + cc.p1.xyz;
return true;
}
return false;
}
// colliding with middle cylinder
float3 x = (dist0 * cc.p1.xyz + dist1 * cc.p0.xyz) / (dist0 + dist1);
float3 delta = curPosition.xyz - x;
float radius_at_x = (dist0 * radius1 + dist1 * radius0) / (dist0 + dist1);
if ( dot(delta, delta) < radius_at_x * radius_at_x)
{
float3 n = normalize(delta);
float3 vec = curPosition.xyz - oldPosition.xyz;
float3 segN = normalize(segment);
float3 vecTangent = dot(vec, segN) * segN;
float3 vecNormal = vec - vecTangent;
newPosition = oldPosition.xyz + friction * vecTangent + (vecNormal + radius_at_x * n - delta);
return true;
}
return false;
}
float3 ApplyVertexBoneSkinning(float3 vertexPos, BoneSkinningData skinningData, inout float4 bone_quat)
{
float3 newVertexPos;
#if TRESSFX_DQ
{
// weighted rotation part of dual quaternion
float4 nq = g_BoneSkinningDQ[skinningData.boneIndex.x * 2] * skinningData.boneWeight.x +
g_BoneSkinningDQ[skinningData.boneIndex.y * 2] * skinningData.boneWeight.y +
g_BoneSkinningDQ[skinningData.boneIndex.z * 2] * skinningData.boneWeight.z +
g_BoneSkinningDQ[skinningData.boneIndex.w * 2] * skinningData.boneWeight.w;
// weighted translation part of dual quaternion
float4 dq = g_BoneSkinningDQ[skinningData.boneIndex.x * 2 + 1] * skinningData.boneWeight.x +
g_BoneSkinningDQ[skinningData.boneIndex.y * 2 + 1] * skinningData.boneWeight.y +
g_BoneSkinningDQ[skinningData.boneIndex.z * 2 + 1] * skinningData.boneWeight.z +
g_BoneSkinningDQ[skinningData.boneIndex.w * 2 + 1] * skinningData.boneWeight.w;
float len = rsqrt(dot(nq, nq));
nq *= len;
dq *= len;
bone_quat = nq;
//convert translation part of dual quaternion to translation vector:
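// (this is the vector part of 2 * dualPart * conjugate(realPart), the standard
// dual-quaternion translation extraction)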
float3 translation = (nq.w*dq.xyz - dq.w*nq.xyz + cross(nq.xyz, dq.xyz)) * 2;
newVertexPos = MultQuaternionAndVector(nq, vertexPos) + translation.xyz;
}
#else
{
// Interpolate world space bone matrices using weights.
row_major float4x4 bone_matrix = g_BoneSkinningMatrix[skinningData.boneIndex[0]] * skinningData.boneWeight[0];
float weight_sum = skinningData.boneWeight[0];
for (int i = 1; i < 4; i++)
{
if (skinningData.boneWeight[i] > 0)
{
bone_matrix += g_BoneSkinningMatrix[skinningData.boneIndex[i]] * skinningData.boneWeight[i];
weight_sum += skinningData.boneWeight[i];
}
}
bone_matrix /= weight_sum;
bone_quat = MakeQuaternion(bone_matrix);
newVertexPos = mul(float4(vertexPos, 1), bone_matrix).xyz;
}
#endif
return newVertexPos;
}
//--------------------------------------------------------------------------------------
//
// UpdateFinalVertexPositions
//
// Updates the hair vertex positions based on the physics simulation
//
//--------------------------------------------------------------------------------------
void UpdateFinalVertexPositions(float4 oldPosition, float4 newPosition, int globalVertexIndex)
{
SetSharedPrevPrevPosition(globalVertexIndex, GetSharedPrevPosition(globalVertexIndex));
SetSharedPrevPosition(globalVertexIndex, oldPosition);
SetSharedPosition(globalVertexIndex, newPosition);
}
//--------------------------------------------------------------------------------------

@ -0,0 +1,437 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code related to simulating hair strands in compute.
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//--------------------------------------------------------------------------------------
#include <HairSimulationSRGs.azsli>
#include <HairSimulationCommon.azsli>
//--------------------------------------------------------------------------------------
//
// IntegrationAndGlobalShapeConstraints
//
// Compute shader to simulate the gravitational force with integration and to maintain the
// global shape constraints.
//
// One thread computes one vertex.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void IntegrationAndGlobalShapeConstraints(
uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType;
CalcIndicesInVertexLevelMaster(GIndex, GId.x, globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType);
// Copy the initial (rest) position data that will be used to update the shared memory
float4 initialPos = float4(CM_TO_METERS,CM_TO_METERS,CM_TO_METERS,1.0) * g_InitialHairPositions[globalVertexIndex]; // rest position
// Apply bone skinning to initial position
BoneSkinningData skinningData = g_BoneSkinningData[globalStrandIndex];
float4 bone_quat;
initialPos.xyz = ApplyVertexBoneSkinning(initialPos.xyz, skinningData, bone_quat);
// position when this step starts. In other words, a position from the last step.
sharedPos[indexForSharedMem] = GetSharedPosition(globalVertexIndex);
float4 currentPos = sharedPos[indexForSharedMem];
// float4 currentPos = sharedPos[indexForSharedMem] = g_HairVertexPositions[globalVertexIndex];
GroupMemoryBarrierWithGroupSync();
// Integrate
float dampingCoeff = GetDamping(strandType);
float4 oldPos = g_HairVertexPositionsPrev[globalVertexIndex];
// reset if we got teleported
if (g_ResetPositions != 0.0f)
{ // In the original TressFX code part of the data here became NaN because the code wrote all
// vertices, including the follow hairs, even though the shader accounts for them, hence
// memory was overwritten. In our implementation all the memory resides within a single
// shared buffer, so such writes would actively overwrite the rest of the buffer and
// destroy its original content.
currentPos = initialPos;
g_HairVertexPositions[globalVertexIndex] = initialPos;
g_HairVertexPositionsPrev[globalVertexIndex] = initialPos;
g_HairVertexPositionsPrevPrev[globalVertexIndex] = initialPos;
oldPos = initialPos;
}
// skipping all the physics simulation in between
if ( IsMovable(currentPos) )
sharedPos[indexForSharedMem].xyz = Integrate(currentPos.xyz, oldPos.xyz, initialPos.xyz, dampingCoeff);
else
sharedPos[indexForSharedMem] = initialPos;
// Global Shape Constraints
float stiffnessForGlobalShapeMatching = GetGlobalStiffness(strandType);
float globalShapeMatchingEffectiveRange = GetGlobalRange(strandType);
if ( stiffnessForGlobalShapeMatching > 0 && globalShapeMatchingEffectiveRange )
{
if ( IsMovable(sharedPos[indexForSharedMem]) )
{
if ( (float)localVertexIndex < globalShapeMatchingEffectiveRange * (float)numVerticesInTheStrand )
{
float factor = stiffnessForGlobalShapeMatching;
float3 del = factor * (initialPos - sharedPos[indexForSharedMem]).xyz;
sharedPos[indexForSharedMem].xyz += del;
}
}
}
// update global position buffers
UpdateFinalVertexPositions(currentPos, sharedPos[indexForSharedMem], globalVertexIndex);
}
//--------------------------------------------------------------------------------------
//
// Calculate Strand Level Data
//
// Compute the per-strand data (VSP transform and skinning quaternion) used to propagate
// velocity shock resulting from the attached base mesh
//
// One thread computes two vertices within a strand.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void CalculateStrandLevelData(
uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint local_id, group_id, globalStrandIndex, numVerticesInTheStrand, globalRootVertexIndex, strandType;
CalcIndicesInStrandLevelMaster(GIndex, GId.x, globalStrandIndex, numVerticesInTheStrand, globalRootVertexIndex, strandType);
// Accounting for the right and left side of the strand.
float4 pos_old_old[2]; // previous previous positions for vertex 0 (root) and vertex 1.
float4 pos_old[2]; // previous positions for vertex 0 (root) and vertex 1.
float4 pos_new[2]; // current positions for vertex 0 (root) and vertex 1.
pos_old_old[0] = g_HairVertexPositionsPrevPrev[globalRootVertexIndex];
pos_old_old[1] = g_HairVertexPositionsPrevPrev[globalRootVertexIndex + 1];
pos_old[0] = g_HairVertexPositionsPrev[globalRootVertexIndex];
pos_old[1] = g_HairVertexPositionsPrev[globalRootVertexIndex + 1];
pos_new[0] = g_HairVertexPositions[globalRootVertexIndex];
pos_new[1] = g_HairVertexPositions[globalRootVertexIndex + 1];
float3 u = normalize(pos_old[1].xyz - pos_old[0].xyz);
float3 v = normalize(pos_new[1].xyz - pos_new[0].xyz);
// Compute rotation and translation which transform pos_old to pos_new.
// Since the first two vertices are immovable, we can assume that there is no scaling during the transform.
float4 rot = QuatFromTwoUnitVectors(u, v);
float3 trans = pos_new[0].xyz - MultQuaternionAndVector(rot, pos_old[0].xyz);
float vspCoeff = GetVelocityShockPropogation();
float restLength0 = g_HairRestLengthSRV[globalRootVertexIndex];
float vspAccelThreshold = GetVSPAccelThreshold();
// Increase the VSP coefficient by checking pseudo-acceleration to handle over-stretching when the character moves very fast
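// (the pseudo-acceleration below is the second finite difference of vertex 1's position,
// x_n - 2*x_(n-1) + x_(n-2), left unnormalized by the squared time step)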
float accel = length(pos_new[1] - 2.0 * pos_old[1] + pos_old_old[1]);
if (accel > vspAccelThreshold)
vspCoeff = 1.0f;
g_StrandLevelData[globalStrandIndex].vspQuat = rot;
g_StrandLevelData[globalStrandIndex].vspTranslation = float4(trans, vspCoeff);
// Skinning
// Copy the initial (rest) position data that will be used to update the shared memory
float4 initialPos = float4(CM_TO_METERS,CM_TO_METERS,CM_TO_METERS,1.0) * g_InitialHairPositions[globalRootVertexIndex]; // rest position
// Apply bone skinning to initial position
BoneSkinningData skinningData = g_BoneSkinningData[globalStrandIndex];
float4 bone_quat;
initialPos.xyz = ApplyVertexBoneSkinning(initialPos.xyz, skinningData, bone_quat);
g_StrandLevelData[globalStrandIndex].skinningQuat = bone_quat;
}
//--------------------------------------------------------------------------------------
//
// VelocityShockPropagation
//
// Propagate velocity shock resulting from the attached base mesh
//
// One thread computes a vertex in a strand.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void VelocityShockPropagation(
uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType;
CalcIndicesInVertexLevelMaster(GIndex, GId.x, globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType);
// The first two vertices are the ones attached to the skin
if (localVertexIndex < 2)
return;
float4 vspQuat = g_StrandLevelData[globalStrandIndex].vspQuat;
float4 vspTrans = g_StrandLevelData[globalStrandIndex].vspTranslation;
float vspCoeff = vspTrans.w;
float4 pos_new_n = g_HairVertexPositions[globalVertexIndex];
float4 pos_old_n = g_HairVertexPositionsPrev[globalVertexIndex];
pos_new_n.xyz = (1.f - vspCoeff) * pos_new_n.xyz + vspCoeff * (MultQuaternionAndVector(vspQuat, pos_new_n.xyz) + vspTrans.xyz);
pos_old_n.xyz = (1.f - vspCoeff) * pos_old_n.xyz + vspCoeff * (MultQuaternionAndVector(vspQuat, pos_old_n.xyz) + vspTrans.xyz);
g_HairVertexPositions[globalVertexIndex].xyz = pos_new_n.xyz;
g_HairVertexPositionsPrev[globalVertexIndex].xyz = pos_old_n.xyz;
}
//--------------------------------------------------------------------------------------
//
// LocalShapeConstraints
//
// Compute shader to maintain the local shape constraints.
//
// One thread computes one strand.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void LocalShapeConstraints(
uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint local_id, group_id, globalStrandIndex, numVerticesInTheStrand, globalRootVertexIndex, strandType;
CalcIndicesInStrandLevelMaster(GIndex, GId.x, globalStrandIndex, numVerticesInTheStrand, globalRootVertexIndex, strandType);
// stiffness for local shape constraints
float stiffnessForLocalShapeMatching = GetLocalStiffness(strandType);
// Going beyond this threshold reduces the stability of the convergence
const float stabilityTH = 0.95f;
stiffnessForLocalShapeMatching = 0.5f * min(stiffnessForLocalShapeMatching, stabilityTH);
//--------------------------------------------
// Local shape constraint for bending/twisting
//--------------------------------------------
{
float4 boneQuat = g_StrandLevelData[globalStrandIndex].skinningQuat;
// vertex 1 through n-1
for (uint localVertexIndex = 1; localVertexIndex < numVerticesInTheStrand - 1; localVertexIndex++)
{
uint globalVertexIndex = globalRootVertexIndex + localVertexIndex;
float4 pos = g_HairVertexPositions[globalVertexIndex];
float4 pos_plus_one = g_HairVertexPositions[globalVertexIndex + 1];
float4 pos_minus_one = g_HairVertexPositions[globalVertexIndex - 1];
float3 bindPos = MultQuaternionAndVector(boneQuat, g_InitialHairPositions[globalVertexIndex].xyz * CM_TO_METERS);
float3 bindPos_plus_one = MultQuaternionAndVector(boneQuat, g_InitialHairPositions[globalVertexIndex + 1].xyz * CM_TO_METERS);
float3 bindPos_minus_one = MultQuaternionAndVector(boneQuat, g_InitialHairPositions[globalVertexIndex - 1].xyz * CM_TO_METERS);
float3 lastVec = pos.xyz - pos_minus_one.xyz;
float3 vecBindPose = bindPos_plus_one - bindPos;
float3 lastVecBindPose = bindPos - bindPos_minus_one;
float4 rotGlobal = QuatFromTwoUnitVectors(normalize(lastVecBindPose), normalize(lastVec));
float3 orgPos_i_plus_1_InGlobalFrame = MultQuaternionAndVector(rotGlobal, vecBindPose) + pos.xyz;
float3 del = stiffnessForLocalShapeMatching * (orgPos_i_plus_1_InGlobalFrame - pos_plus_one.xyz);
if (IsMovable(pos))
pos.xyz -= del.xyz;
if (IsMovable(pos_plus_one))
pos_plus_one.xyz += del.xyz;
g_HairVertexPositions[globalVertexIndex].xyz = pos.xyz;
g_HairVertexPositions[globalVertexIndex + 1].xyz = pos_plus_one.xyz;
}
}
}
//--------------------------------------------------------------------------------------
//
// LengthConstriantsWindAndCollision
//
// Compute shader to move the vertex positions based on wind, maintain the length constraints,
// and handle collisions.
//
// One thread computes one vertex.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void LengthConstriantsWindAndCollision(uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType;
CalcIndicesInVertexLevelMaster(GIndex, GId.x, globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType);
uint numOfStrandsPerThreadGroup = g_NumOfStrandsPerThreadGroup;
//------------------------------
// Copy data into shared memory
//------------------------------
sharedPos[indexForSharedMem] = g_HairVertexPositions[globalVertexIndex];
sharedLength[indexForSharedMem] = g_HairRestLengthSRV[globalVertexIndex] * CM_TO_METERS;
GroupMemoryBarrierWithGroupSync();
/*
//------------
// Wind - does not work yet and requires some LTC
//------------
if (any(g_Wind.xyz)) // g_Wind.w is the current frame
{
float4 force = float4(0, 0, 0, 0);
if ( localVertexIndex >= 2 && localVertexIndex < numVerticesInTheStrand-1 )
{
// combining four winds.
float a = ((float)(globalStrandIndex % 20))/20.0f;
float3 w = a* g_Wind.xyz + (1.0f - a) * g_Wind1.xyz + a * g_Wind2.xyz + (1.0f - a) * g_Wind3.xyz;
// float3 w = float3(5.2, 0, 0);
uint sharedIndex = localVertexIndex * numOfStrandsPerThreadGroup + localStrandIndex;
float3 v = sharedPos[sharedIndex].xyz - sharedPos[sharedIndex+numOfStrandsPerThreadGroup].xyz;
float3 force = -cross(cross(v, w), v);
sharedPos[sharedIndex].xyz += force*g_TimeStep*g_TimeStep;
}
}
GroupMemoryBarrierWithGroupSync();
*/
//----------------------------
// Enforce length constraints
//----------------------------
uint a = numVerticesInTheStrand / 2;
uint b = (numVerticesInTheStrand - 1) / 2;
int lengthConstraintIterations = GetLengthConstraintIterations();
for ( int iterationE=0; iterationE < lengthConstraintIterations; iterationE++ )
{
uint sharedIndex = 2 * localVertexIndex * numOfStrandsPerThreadGroup + localStrandIndex;
// Note that each thread handles the two segments on either side of its central control
// point, and each distance constraint extends towards its own side only.
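// For example (illustrative numbers): with 16 vertices per strand, a = 8 and b = 7, so
// thread i first constrains the segment (2i, 2i+1) and then the segment (2i+1, 2i+2),
// covering every segment once per iteration while no two threads touch the same vertex
// within a phase.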
if( localVertexIndex < a )
ApplyDistanceConstraint(sharedPos[sharedIndex], sharedPos[sharedIndex+numOfStrandsPerThreadGroup], sharedLength[sharedIndex].x);
GroupMemoryBarrierWithGroupSync();
if( localVertexIndex < b )
ApplyDistanceConstraint(sharedPos[sharedIndex+numOfStrandsPerThreadGroup], sharedPos[sharedIndex+numOfStrandsPerThreadGroup*2], sharedLength[sharedIndex+numOfStrandsPerThreadGroup].x);
GroupMemoryBarrierWithGroupSync();
}
//------------------------------------------
// Collision handling with capsule objects
//------------------------------------------
float4 oldPos = g_HairVertexPositionsPrev[globalVertexIndex];
bool bAnyColDetected = false; // Adi
// bool bAnyColDetected = ResolveCapsuleCollisions(sharedPos[indexForSharedMem], oldPos);
GroupMemoryBarrierWithGroupSync();
//-------------------
// Compute tangent
//-------------------
// If this is the last vertex in the strand there is no next vertex to subtract from, so use the previous vertex to the current one instead
uint indexForTangent = (localVertexIndex == numVerticesInTheStrand - 1) ? indexForSharedMem - numOfStrandsPerThreadGroup : indexForSharedMem;
float3 tangent = sharedPos[indexForTangent + numOfStrandsPerThreadGroup].xyz - sharedPos[indexForTangent].xyz;
g_HairVertexTangents[globalVertexIndex].xyz = normalize(tangent);
//---------------------------------------
// clamp velocities, rewrite history
//---------------------------------------
float3 positionDelta = sharedPos[indexForSharedMem].xyz - oldPos.xyz;
float speedSqr = dot(positionDelta, positionDelta);
if (speedSqr > g_ClampPositionDelta * g_ClampPositionDelta) {
positionDelta *= g_ClampPositionDelta * g_ClampPositionDelta / speedSqr;
g_HairVertexPositionsPrev[globalVertexIndex].xyz = sharedPos[indexForSharedMem].xyz - positionDelta;
}
//---------------------------------------
// update global position buffers
//---------------------------------------
g_HairVertexPositions[globalVertexIndex] = sharedPos[indexForSharedMem];
if (bAnyColDetected)
g_HairVertexPositionsPrev[globalVertexIndex] = sharedPos[indexForSharedMem];
return;
}
//--------------------------------------------------------------------------------------
//
// UpdateFollowHairVertices
//
// Last stage: update the follow hairs to track their guide hair
//
// One thread computes one vertex.
//
//--------------------------------------------------------------------------------------
[numthreads(THREAD_GROUP_SIZE, 1, 1)]
void UpdateFollowHairVertices(
uint GIndex : SV_GroupIndex,
uint3 GId : SV_GroupID,
uint3 DTid : SV_DispatchThreadID)
{
uint globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType;
CalcIndicesInVertexLevelMaster(GIndex, GId.x, globalStrandIndex, localStrandIndex, globalVertexIndex, localVertexIndex, numVerticesInTheStrand, indexForSharedMem, strandType);
sharedPos[indexForSharedMem] = GetSharedPosition(globalVertexIndex); // g_HairVertexPositions[globalVertexIndex];
sharedTangent[indexForSharedMem].xyz = GetSharedTangent(globalVertexIndex); // g_HairVertexTangents[globalVertexIndex];
GroupMemoryBarrierWithGroupSync();
for ( uint i = 0; i < g_NumFollowHairsPerGuideHair; i++ )
{
int globalFollowVertexIndex = globalVertexIndex + numVerticesInTheStrand * (i + 1);
int globalFollowStrandIndex = globalStrandIndex + i + 1;
float factor = g_TipSeparationFactor*((float)localVertexIndex / (float)numVerticesInTheStrand) + 1.0f;
float3 followPos = sharedPos[indexForSharedMem].xyz + factor * CM_TO_METERS * g_FollowHairRootOffset[globalFollowStrandIndex].xyz;
SetSharedPosition3(globalFollowVertexIndex, followPos);
// g_HairVertexPositions[globalFollowVertexIndex].xyz = followPos;
//-----------------------
// SetSharedTangent(globalFollowVertexIndex, sharedTangent[indexForSharedMem]);
g_HairVertexTangents[globalFollowVertexIndex] = sharedTangent[indexForSharedMem];
}
return;
}

@ -0,0 +1,179 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//-----------------------------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//------------------------------------------------------------------------------
// File: HairSRGs.azsli
//
// Declarations of SRGs used by the hair shaders.
//------------------------------------------------------------------------------
#pragma once
#include <HairSrgs.azsli>
//!-----------------------------------------------------------------------------
//!
//! Skinning / Simulation Render Usage
//! Per Objects Space 0 - Static Buffers for Hair Generation
//!
//! ----------------------------------------------------------------------------
struct TressFXSimulationParams
{
float4 m_Wind;
float4 m_Wind1;
float4 m_Wind2;
float4 m_Wind3;
float4 m_Shape; // damping, local stiffness, global stiffness, global range.
float4 m_GravTimeTip; // x = gravity magnitude (applied along negative Z here), y = time step, z = tip separation factor.
int4 m_SimInts; // Length iterations, local iterations, collision flag.
int4 m_Counts; // num strands per thread group, num follow hairs per guide hair, num verts per strand.
float4 m_VSP; // VSP parameters - control how Velocity Shock Propagation dictates how fast
// velocities are handled and how they can be compensated using the hair root velocity
float m_ResetPositions;
float m_ClampPositionDelta;
float m_pad1;
float m_pad2;
#if TRESSFX_DQ // this option is currently not functional
float4 m_BoneSkinningDQ[AMD_TRESSFX_MAX_NUM_BONES * 2];
#else
row_major float4x4 m_BoneSkinningMatrix[AMD_TRESSFX_MAX_NUM_BONES];
#endif
};
struct BoneSkinningData
{
float4 boneIndex; // x, y, z and w component are four bone indices per strand
float4 boneWeight; // x, y, z and w component are four bone weights per strand
};
//!------------------------------ SRG Structure --------------------------------
//! This is the static Srg required for the hair generation per hair object draw.
//! The data is used only for skinning / simulation and doesn't change between
//! the object's passes.
//! To match the original TressFX naming, follow the global defines below.
ShaderResourceGroup HairGenerationSrg : SRG_PerDraw
{
// Buffers containing hair generation properties.
Buffer<float4> m_initialHairPositions;
Buffer<float> m_hairRestLengthSRV;
Buffer<float> m_hairStrandType;
Buffer<float4> m_followHairRootOffset;
StructuredBuffer<BoneSkinningData> m_boneSkinningData;
// Constant buffer structure reflected in code as 'TressFXSimulationParams'
TressFXSimulationParams m_tressfxSimParameters;
};
//------------------------------------------------------------------------------
// Allow for the code to run with minimal changes - compute passes usage
#define g_InitialHairPositions HairGenerationSrg::m_initialHairPositions
#define g_HairRestLengthSRV HairGenerationSrg::m_hairRestLengthSRV
#define g_HairStrandType HairGenerationSrg::m_hairStrandType
#define g_FollowHairRootOffset HairGenerationSrg::m_followHairRootOffset
#define g_BoneSkinningData HairGenerationSrg::m_boneSkinningData
#define g_NumOfStrandsPerThreadGroup HairGenerationSrg::m_tressfxSimParameters.m_Counts.x
#define g_NumFollowHairsPerGuideHair HairGenerationSrg::m_tressfxSimParameters.m_Counts.y
#define g_NumVerticesPerStrand HairGenerationSrg::m_tressfxSimParameters.m_Counts.z
#define g_NumLocalShapeMatchingIterations HairGenerationSrg::m_tressfxSimParameters.m_SimInts.y
#define g_GravityMagnitude HairGenerationSrg::m_tressfxSimParameters.m_GravTimeTip.x
#define g_TimeStep HairGenerationSrg::m_tressfxSimParameters.m_GravTimeTip.y
#define g_TipSeparationFactor HairGenerationSrg::m_tressfxSimParameters.m_GravTimeTip.z
#define g_Wind HairGenerationSrg::m_tressfxSimParameters.m_Wind
#define g_Wind1 HairGenerationSrg::m_tressfxSimParameters.m_Wind1
#define g_Wind2 HairGenerationSrg::m_tressfxSimParameters.m_Wind2
#define g_Wind3 HairGenerationSrg::m_tressfxSimParameters.m_Wind3
#define g_ResetPositions HairGenerationSrg::m_tressfxSimParameters.m_ResetPositions
#define g_ClampPositionDelta HairGenerationSrg::m_tressfxSimParameters.m_ClampPositionDelta
#define g_BoneSkinningDQ HairGenerationSrg::m_tressfxSimParameters.m_BoneSkinningDQ
#define g_BoneSkinningMatrix HairGenerationSrg::m_tressfxSimParameters.m_BoneSkinningMatrix
// We no longer support groups (indirection).
int GetStrandType(int globalThreadIndex)
{
return 0;
}
float GetDamping(int strandType)
{
// strand type unused.
// In the future, we may create an array and use indirection.
return HairGenerationSrg::m_tressfxSimParameters.m_Shape.x;
}
float GetLocalStiffness(int strandType)
{
// strand type unused.
// In the future, we may create an array and use indirection.
return HairGenerationSrg::m_tressfxSimParameters.m_Shape.y;
}
float GetGlobalStiffness(int strandType)
{
// strand type unused.
// In the future, we may create an array and use indirection.
return HairGenerationSrg::m_tressfxSimParameters.m_Shape.z;
}
float GetGlobalRange(int strandType)
{
// strand type unused.
// In the future, we may create an array and use indirection.
return HairGenerationSrg::m_tressfxSimParameters.m_Shape.w;
}
float GetVelocityShockPropogation()
{
return HairGenerationSrg::m_tressfxSimParameters.m_VSP.x;
}
float GetVSPAccelThreshold()
{
return HairGenerationSrg::m_tressfxSimParameters.m_VSP.y;
}
int GetLocalConstraintIterations()
{
return (int)HairGenerationSrg::m_tressfxSimParameters.m_SimInts.y;
}
int GetLengthConstraintIterations()
{
return (int)HairGenerationSrg::m_tressfxSimParameters.m_SimInts.x;
}
//------------------------------------------------------------------------------

@ -0,0 +1,217 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//-----------------------------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
//------------------------------------------------------------------------------
// File: HairSRGs.azsli
//
// Declarations of SRGs used by the hair shaders.
//------------------------------------------------------------------------------
#pragma once
#include <Atom/Features/SrgSemantics.azsli>
// Whether bones are specified by dual quaternion.
// This option is not currently functional.
#define TRESSFX_DQ 0
//! notice - the following constants need to match what appears in AMD_TressFX.h
#define AMD_TRESSFX_MAX_HAIR_GROUP_RENDER 16
#define AMD_TRESSFX_MAX_NUM_BONES 512
#define CM_TO_METERS 1.0
#define METERS_TO_CM 1.0
//#define CM_TO_METERS 0.01
//#define METERS_TO_CM 100.0
// The following macro is not being used yet due to a limitation of the C preprocessor
// (mcpp) that causes a shader compilation fault when expanding the macro.
#define BYTE_OFFSET(index,baseOffset) ((baseOffset >> 2) + (index << 2))
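// Worked example (illustrative numbers): BYTE_OFFSET(3, 256) = (256 >> 2) + (3 << 2) = 76,
// i.e. the four ints at indices 76..79 of the shared int buffer hold the float4 of
// vertex 3 in a buffer that starts at byte offset 256.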
//!------------------------------ SRG Structure --------------------------------
//! Per pass SRG that holds the dynamic read-write buffer shared across all
//! dispatches and draw calls, used as the memory pool for all the dynamic
//! buffers that can change between passes due to the application of skinning,
//! simulation and physics effects, and that is read by the rendering shaders.
ShaderResourceGroup PassSrg : SRG_PerPass
{
RWStructuredBuffer<int> m_skinnedHairSharedBuffer;
};
//!=============================================================================
//!
//! Per Instance Space 1 - Dynamic Buffers for Hair Skinning and Simulation
//!
//! ----------------------------------------------------------------------------
struct StrandLevelData
{
float4 skinningQuat;
float4 vspQuat;
float4 vspTranslation;
};
//!------------------------------ SRG Structure --------------------------------
//! Per instance/draw SRG representing a dynamic read-write set of buffers
//! that are unique per instance and are shared and changed between passes due
//! to the application of skinning, simulation and physics effects.
//! It is then also read by the rendering shaders.
//! This Srg is NOT shared between the passes since that would require barriers between
//! both passes and draw calls. Instead, all buffers are allocated from a single
//! shared buffer (through BufferViews) and that buffer is then shared between
//! the passes via the PerPass Srg frequency.
ShaderResourceGroup HairDynamicDataSrg : SRG_PerObject // space 1 - per instance / object
{
RWBuffer<float4> m_hairVertexPositions;
RWBuffer<float4> m_hairVertexPositionsPrev;
RWBuffer<float4> m_hairVertexPositionsPrevPrev;
RWBuffer<float4> m_hairVertexTangents;
RWStructuredBuffer<StrandLevelData> m_strandLevelData;
//! Per hair object offset to the start location of each buffer within
//! 'm_skinnedHairSharedBuffer'. The offset is in bytes!
uint m_positionBufferOffset;
uint m_positionPrevBufferOffset;
uint m_positionPrevPrevBufferOffset;
uint m_tangentBufferOffset;
uint m_strandLevelDataOffset;
};
//------------------------------------------------------------------------------
// Allow for the code to run with minimal changes - skinning / simulation compute passes
// Usage of per-instance buffer
#define g_HairVertexPositions HairDynamicDataSrg::m_hairVertexPositions
#define g_HairVertexPositionsPrev HairDynamicDataSrg::m_hairVertexPositionsPrev
#define g_HairVertexPositionsPrevPrev HairDynamicDataSrg::m_hairVertexPositionsPrevPrev
#define g_HairVertexTangents HairDynamicDataSrg::m_hairVertexTangents
#define g_StrandLevelData HairDynamicDataSrg::m_strandLevelData
//------------------------------------------------------------------------------
float3 GetSharedVector3(int offset)
{
return float3(
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 1]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 2])
);// *CM_TO_METERS; // convert to meters when using
}
void SetSharedVector3(int offset, float3 pos)
{
// pos.xyz *= METERS_TO_CM; // convert to cm when storing
PassSrg::m_skinnedHairSharedBuffer[offset] = asint(pos.x);
PassSrg::m_skinnedHairSharedBuffer[offset+1] = asint(pos.y);
PassSrg::m_skinnedHairSharedBuffer[offset+2] = asint(pos.z);
}
float4 GetSharedVector4(int offset)
{
return float4(
float3(
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 1]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 2])
),// * CM_TO_METERS, // convert to meters when using
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 3])
);
}
void SetSharedVector4(int offset, float4 pos)
{
// pos.xyz *= METERS_TO_CM; // convert to cm when storing
PassSrg::m_skinnedHairSharedBuffer[offset] = asint(pos.x);
PassSrg::m_skinnedHairSharedBuffer[offset+1] = asint(pos.y);
PassSrg::m_skinnedHairSharedBuffer[offset+2] = asint(pos.z);
PassSrg::m_skinnedHairSharedBuffer[offset+3] = asint(pos.w);
}
//------------------------------------------------------------------------------
//! Getters/setters for position / tangent in the global shared buffer, based on the
//! per-instance offset of each instance's buffers within the global shared buffer
void SetSharedPosition3(int vertexIndex, float3 position)
{
int vertexOffset = (HairDynamicDataSrg::m_positionBufferOffset >> 2) + (vertexIndex << 2);
SetSharedVector3(vertexOffset, position);
}
void SetSharedPosition(int vertexIndex, float4 position)
{
int vertexOffset = (HairDynamicDataSrg::m_positionBufferOffset >> 2) + (vertexIndex << 2);
SetSharedVector4(vertexOffset, position);
}
float4 GetSharedPosition(int vertexIndex)
{
int vertexOffset = (HairDynamicDataSrg::m_positionBufferOffset >> 2) + (vertexIndex << 2);
return GetSharedVector4(vertexOffset);
}
void SetSharedPrevPosition(int vertexIndex, float4 position)
{
int vertexOffset = (HairDynamicDataSrg::m_positionPrevBufferOffset >> 2) + (vertexIndex << 2);
SetSharedVector4(vertexOffset, position);
}
float4 GetSharedPrevPosition(int vertexIndex)
{
int vertexOffset = (HairDynamicDataSrg::m_positionPrevBufferOffset >> 2) + (vertexIndex << 2);
return GetSharedVector4(vertexOffset);
}
void SetSharedPrevPrevPosition(int vertexIndex, float4 position)
{
int vertexOffset = (HairDynamicDataSrg::m_positionPrevPrevBufferOffset >> 2) + (vertexIndex << 2);
SetSharedVector4(vertexOffset, position);
}
float4 GetSharedPrevPrevPosition(int vertexIndex)
{
int vertexOffset = (HairDynamicDataSrg::m_positionPrevPrevBufferOffset >> 2) + (vertexIndex << 2);
return GetSharedVector4(vertexOffset);
}
void SetSharedTangent(int tangentIndex, float3 currentTangent)
{
int tangentOffset = (HairDynamicDataSrg::m_tangentBufferOffset >> 2) + (tangentIndex << 2);
PassSrg::m_skinnedHairSharedBuffer[tangentOffset] = asint(currentTangent.x);
PassSrg::m_skinnedHairSharedBuffer[tangentOffset+1] = asint(currentTangent.y);
PassSrg::m_skinnedHairSharedBuffer[tangentOffset+2] = asint(currentTangent.z);
}
float3 GetSharedTangent(int tangentIndex)
{
int tangentOffset = (HairDynamicDataSrg::m_tangentBufferOffset >> 2) + (tangentIndex << 2);
return float3(
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset + 1]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset + 2])
);
}

@ -0,0 +1,182 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code related to hair strands in the graphics pipeline.
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include <HairUtilities.azsli>
#define CM_TO_METERS_RENDER 0.01
float4 GetSharedVector4(int offset)
{
return float4(
float3(
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 1]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 2])
),// * CM_TO_METERS, // convert to meters when using
asfloat(PassSrg::m_skinnedHairSharedBuffer[offset + 3])
);
}
float4 GetSharedPosition(int vertexIndex)
{
int vertexOffset = (HairDynamicDataSrg::m_positionBufferOffset >> 2) + (vertexIndex << 2);
return GetSharedVector4(vertexOffset);
}
float3 GetSharedTangent(int tangentIndex)
{
int tangentOffset = (HairDynamicDataSrg::m_tangentBufferOffset >> 2) + (tangentIndex << 2);
return float3(
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset + 1]),
asfloat(PassSrg::m_skinnedHairSharedBuffer[tangentOffset + 2])
);
}
struct TressFXVertex
{
float4 Position;
float4 Tangent;
float4 p0p1;
float4 StrandColor;
};
float3 GetStrandColor(int index, float fractionOfStrand)
{
float3 rootColor;
float3 tipColor;
float2 texCd = g_HairStrandTexCd[(float) index / NumVerticesPerStrand].xy;
rootColor = BaseAlbedoTexture.SampleLevel(LinearWrapSampler, texCd, 0).rgb;
tipColor = MatTipColor.rgb;
// Multiply with Base Material color
rootColor *= MatBaseColor.rgb;
// Update the color based on position along the strand (vertex level) and lerp between tip and root if within the tipPercentage requested
float rootRange = 1.f - TipPercentage;
return (fractionOfStrand > rootRange) ? lerp(rootColor, tipColor, (fractionOfStrand - rootRange) / TipPercentage) : rootColor;
}
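// Illustrative example of the blend above: with TipPercentage = 0.3 the last 30% of the strand
// blends linearly from rootColor to tipColor, so a vertex at fractionOfStrand = 0.85 uses
// lerp(rootColor, tipColor, 0.5); any vertex below fractionOfStrand = 0.7 keeps rootColor.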
TressFXVertex GetExpandedTressFXVert(uint vertexId, float3 eye, float2 winSize, float4x4 viewProj)
{
// Access the current line / curve segment - remember that the mesh is built around
// the center line / curve that is expanded as the vertices.
uint index = vertexId / 2; // vertexId is the indexed vertex id when indexed triangles are used
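// Each simulated strand vertex is expanded into two render vertices, one on each side of the
// center curve; the low bit of vertexId selects the side further below.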
// Get updated positions and tangents from simulation result
// float3 v = GetSharedPosition(index).xyz;
float3 v = g_GuideHairVertexPositions[index].xyz;
// Both approaches (offset to shared buffer or BufferView) will work!
// float3 t = GetSharedTangent(index);
float3 t = g_GuideHairVertexTangents[index].xyz;
// Get hair strand thickness
uint indexInStrand = index % NumVerticesPerStrand;
float fractionOfStrand = (float)indexInStrand / (NumVerticesPerStrand - 1);
float ratio = (EnableThinTip > 0) ? lerp(1.0, FiberRatio, fractionOfStrand) : 1.0; // need length of full strand vs the length of this point on the strand.
// Calculate right and projected right vectors
float3 right = Safe_normalize(cross(t, Safe_normalize(v - eye)));
float2 proj_right = Safe_normalize(MatrixMult(viewProj, float4(right, 0)).xy);
// We always expand for faster hair AA; we may want to make this adjustable
float expandPixels = 0.71 * CM_TO_METERS_RENDER;
// Calculate the negative and positive offset screenspace positions
float4 hairEdgePositions[2]; // 0 is negative, 1 is positive
hairEdgePositions[0] = float4(v - right * ratio * FiberRadius, 1.0);
hairEdgePositions[1] = float4(v + right * ratio * FiberRadius, 1.0);
hairEdgePositions[0] = MatrixMult(viewProj, hairEdgePositions[0]);
hairEdgePositions[1] = MatrixMult(viewProj, hairEdgePositions[1]);
// Hijack Tangent.w (unused) and add a .w component to the strand color to store the strand UV
float2 strandUV;
strandUV.x = (vertexId & 0x01) ? 0.f : 1.f;
strandUV.y = fractionOfStrand;
// Write output data
TressFXVertex Output = (TressFXVertex)0;
float fDirIndex = (vertexId & 0x01) ? -1.0 : 1.0;
Output.Position = ((vertexId & 0x01) ? hairEdgePositions[0] : hairEdgePositions[1])
// [To Do] Hair: remove the scale
+ CM_TO_METERS_RENDER * fDirIndex * float4(proj_right * expandPixels / winSize.y, 0.0f, 0.0f)
* ((vertexId & 0x01) ? hairEdgePositions[0].w : hairEdgePositions[1].w);
Output.Tangent = float4(t, strandUV.x);
Output.p0p1 = float4(hairEdgePositions[0].xy / max(hairEdgePositions[0].w, TRESSFX_FLOAT_EPSILON), hairEdgePositions[1].xy / max(hairEdgePositions[1].w, TRESSFX_FLOAT_EPSILON));
Output.StrandColor = float4(GetStrandColor(index, fractionOfStrand), strandUV.y);
return Output;
}
TressFXVertex GetExpandedTressFXShadowVert(uint vertexId, float3 eye, float2 winSize, float4x4 viewProj)
{
// Access the current line segment
uint index = vertexId / 2; // vertexId is actually the indexed vertex id when indexed triangles are used
// Get updated positions and tangents from simulation result
// float3 v = GetSharedPosition(index).xyz;
float3 v = g_GuideHairVertexPositions[index].xyz;
// float3 t = GetSharedTangent(index); // Both approaches (offset to shared buffer or BufferView) will work
float3 t = g_GuideHairVertexTangents[index].xyz;
// Get hair strand thickness
uint indexInStrand = index % NumVerticesPerStrand;
float fractionOfStrand = (float)indexInStrand / (NumVerticesPerStrand - 1);
float ratio = (EnableThinTip > 0) ? lerp(1.0, FiberRatio, fractionOfStrand) : 1.0; //need length of full strand vs the length of this point on the strand.
// Calculate right and projected right vectors
float3 right = Safe_normalize(cross(t, Safe_normalize(v - eye)));
float2 proj_right = Safe_normalize(MatrixMult(viewProj, float4(right, 0)).xy);
// We always expand for faster hair AA; we may want to make this adjustable
float expandPixels = 1.f * CM_TO_METERS_RENDER; // Disable for shadows 0.71;
// Calculate the negative and positive offset screenspace positions
float4 hairEdgePositions[2]; // 0 is negative, 1 is positive
hairEdgePositions[0] = float4(v + -1.0 * right * ratio * FiberRadius * CM_TO_METERS_RENDER, 1.0);
hairEdgePositions[1] = float4(v + 1.0 * right * ratio * FiberRadius * CM_TO_METERS_RENDER, 1.0);
hairEdgePositions[0] = MatrixMult(viewProj, hairEdgePositions[0]);
hairEdgePositions[1] = MatrixMult(viewProj, hairEdgePositions[1]);
// Write output data
TressFXVertex Output = (TressFXVertex)0;
float fDirIndex = (vertexId & 0x01) ? -1.0 : 1.0;
Output.Position = ((vertexId & 0x01) ? hairEdgePositions[0] : hairEdgePositions[1]) + fDirIndex * float4(proj_right * expandPixels / winSize.y, 0.0f, 0.0f) * ((vertexId & 0x01) ? hairEdgePositions[0].w : hairEdgePositions[1].w);
return Output;
}
// EndHLSL

@ -0,0 +1,80 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <Atom/Features/PBR/Surfaces/BasePbrSurfaceData.azsli>
#include <Atom/Features/PBR/Surfaces/TransmissionSurfaceData.azsli>
class Surface
{
TransmissionSurfaceData transmission;
// ------- BasePbrSurfaceData -------
float3 position; //!< Position in world-space
float3 normal; //!< Normal in world-space
float3 albedo; //!< Albedo color of the non-metallic material, will be multiplied against the diffuse lighting value
float3 specularF0; //!< Fresnel f0 spectral value of the surface
float roughnessLinear; //!< Perceptually linear roughness value authored by artists. Must be remapped to roughnessA before use
float roughnessA; //!< Actual roughness value ( a.k.a. "alpha roughness") to be used in microfacet calculations
float roughnessA2; //!< Alpha roughness ^ 2 (i.e. roughnessA * roughnessA), used in GGX, cached here for performance
// ------- Hair Surface Data -------
float3 tangent;
float3 cuticleTilt;
float thickness;
//! Applies specular anti-aliasing to roughnessA2
void ApplySpecularAA();
//! Calculates roughnessA and roughnessA2 after roughness has been set
void CalculateRoughnessA();
//! Sets albedo and specularF0 using metallic workflow
void SetAlbedoAndSpecularF0(float3 baseColor, float specularF0Factor);
};
// Specular Anti-Aliasing technique from this paper:
// http://www.jp.square-enix.com/tech/library/pdf/ImprovedGeometricSpecularAA.pdf
void Surface::ApplySpecularAA()
{
// Constants for formula below
const float screenVariance = 0.25f;
const float varianceThresh = 0.18f;
// Specular Anti-Aliasing
float3 dndu = ddx_fine( normal );
float3 dndv = ddy_fine( normal );
float variance = screenVariance * (dot( dndu , dndu ) + dot( dndv , dndv ));
float kernelRoughnessA2 = min(2.0 * variance , varianceThresh );
float filteredRoughnessA2 = saturate ( roughnessA2 + kernelRoughnessA2 );
roughnessA2 = filteredRoughnessA2;
}
void Surface::CalculateRoughnessA()
{
// The roughness value in microfacet calculations (called "alpha" in the literature) does not give perceptually
// linear results. Disney found that squaring the roughness value before using it in microfacet equations causes
// the user-provided roughness parameter to be more perceptually linear. We keep both values available as some
// equations need roughnessLinear (i.e. IBL sampling) while others need roughnessA (i.e. GGX equations).
// See Burley's Disney PBR: https://pdfs.semanticscholar.org/eeee/3b125c09044d3e2f58ed0e4b1b66a677886d.pdf
roughnessA = max(roughnessLinear * roughnessLinear, MinRoughnessA);
roughnessA2 = roughnessA * roughnessA;
if(o_applySpecularAA)
{
ApplySpecularAA();
}
}
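// Illustrative numbers for the mapping above: roughnessLinear = 0.5 yields roughnessA = 0.25 and
// roughnessA2 = 0.0625 (assuming MinRoughnessA is below 0.25), which is the "alpha" value the GGX
// equations consume.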
void Surface::SetAlbedoAndSpecularF0(float3 baseColor, float specularF0Factor)
{
albedo = baseColor;
specularF0 = MaxDielectricSpecularF0 * specularF0Factor;
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "UpdateFollowHairVertices",
"type": "Compute"
}
]
}
}

@ -0,0 +1,167 @@
/*
* Modifications Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: (Apache-2.0 OR MIT) AND MIT
*
*/
//---------------------------------------------------------------------------------------
// Shader code utilities for TressFX
//-------------------------------------------------------------------------------------
//
// Copyright (c) 2019 Advanced Micro Devices, Inc. All rights reserved.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
//
#pragma once
#include <viewsrg.srgi>
// Cutoff to not render hair
#define SHORTCUT_MIN_ALPHA 0.02
#define TRESSFX_FLOAT_EPSILON 1e-7
//--------------------------------------------------------------------------------------
//
// Controls whether you do mul(M,v) or mul(v,M)
// i.e., row major vs column major
//
//--------------------------------------------------------------------------------------
float4 MatrixMult(float4x4 m, float4 v)
{
return mul(m, v);
}
// Given the depth buffer depth of the current pixel and the fragment XY position,
// reconstruct the NDC.
// screenCoords - pixel coordinates in the range 0..screen dimension
// screenTexture - screen buffer texture representing the resolution we work in
// depth - the depth buffer depth at the fragment location
// NDC - Normalized Device Coordinates = warped screen space (-1..1, -1..1, 0..1)
float3 ScreenPosToNDC( Texture2D<float> screenTexture, float2 screenCoords, float depth )
{
uint2 dimensions;
screenTexture.GetDimensions(dimensions.x, dimensions.y);
float2 UV = saturate(screenCoords / dimensions.xy);
float x = UV.x * 2.0f - 1.0f;
float y = (1.0f - UV.y) * 2.0f - 1.0f;
float3 NDC = float3(x, y, depth);
return NDC;
}
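// Illustrative example: a fragment at the center of a 1920x1080 target gives UV = (0.5, 0.5)
// and therefore NDC.xy = (0, 0), with NDC.z carrying the depth value unchanged.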
// Given the depth buffer depth of the current pixel and the fragment XY position,
// reconstruct the world space position
float3 ScreenPosToWorldPos(
Texture2D<float> screenTexture, float2 screenCoords, float depth,
inout float3 screenPosNDC )
{
screenPosNDC = ScreenPosToNDC(screenTexture, screenCoords, depth);
float4 projectedPos = float4(screenPosNDC, 1.0f); // warped projected space [0..1]
float4 positionVS = mul(ViewSrg::m_projectionMatrixInverse, projectedPos);
positionVS /= positionVS.w; // notice the normalization factor - crucial!
float4 positionWS = mul(ViewSrg::m_viewMatrixInverse, positionVS);
return positionWS.xyz;
}
// Pack a float4 into a uint
uint PackFloat4IntoUint(float4 vValue)
{
return (((uint)(vValue.x * 255)) << 24) | (((uint)(vValue.y * 255)) << 16) | (((uint)(vValue.z * 255)) << 8) | (uint)(vValue.w * 255);
}
// Unpack a uint into a float4 value
float4 UnpackUintIntoFloat4(uint uValue)
{
return float4(((uValue & 0xFF000000) >> 24) / 255.0, ((uValue & 0x00FF0000) >> 16) / 255.0, ((uValue & 0x0000FF00) >> 8) / 255.0, ((uValue & 0x000000FF)) / 255.0);
}
// Pack a float3 and a uint8 into a uint
uint PackFloat3ByteIntoUint(float3 vValue, uint uByteValue)
{
return (((uint)(vValue.x * 255)) << 24) | (((uint)(vValue.y * 255)) << 16) | (((uint)(vValue.z * 255)) << 8) | uByteValue;
}
// Unpack a uint into a float3 and a uint8 value
float3 UnpackUintIntoFloat3Byte(uint uValue, out uint uByteValue)
{
uByteValue = uValue & 0x000000FF;
return float3(((uValue & 0xFF000000) >> 24) / 255.0, ((uValue & 0x00FF0000) >> 16) / 255.0, ((uValue & 0x0000FF00) >> 8) / 255.0);
}
//--------------------------------------------------------------------------------------
//
// Safe_normalize-float2
//
//--------------------------------------------------------------------------------------
float2 Safe_normalize(float2 vec)
{
float len = length(vec);
return len >= TRESSFX_FLOAT_EPSILON ? (vec * rcp(len)) : float2(0, 0);
}
//--------------------------------------------------------------------------------------
//
// Safe_normalize-float3
//
//--------------------------------------------------------------------------------------
float3 Safe_normalize(float3 vec)
{
float len = length(vec);
return len >= TRESSFX_FLOAT_EPSILON ? (vec * rcp(len)) : float3(0, 0, 0);
}
//--------------------------------------------------------------------------------------
// ComputeCoverage
//
// Calculate the pixel coverage of a hair strand by computing the hair width
//--------------------------------------------------------------------------------------
float ComputeCoverage(float2 p0, float2 p1, float2 pixelLoc, float2 winSize)
{
// p0, p1, pixelLoc are in d3d clip space (-1 to 1)x(-1 to 1)
// Scale positions so 1.f = half pixel width
p0 *= winSize;
p1 *= winSize;
pixelLoc *= winSize;
float p0dist = length(p0 - pixelLoc);
float p1dist = length(p1 - pixelLoc);
float hairWidth = length(p0 - p1);
// will be 1.f if pixel outside hair, 0.f if pixel inside hair
float outside = any(float2(step(hairWidth, p0dist), step(hairWidth, p1dist)));
// if outside, set sign to -1, else set sign to 1
float sign = outside > 0.f ? -1.f : 1.f;
// signed distance (positive if inside hair, negative if outside hair)
float relDist = sign * saturate(min(p0dist, p1dist));
// returns coverage based on the relative distance
// 0, if completely outside hair edge
// 1, if completely inside hair edge
return (relDist + 1.f) * 0.5f;
}

@ -0,0 +1,14 @@
{
"Source" : "HairSimulationCompute.azsl",
"ProgramSettings":
{
"EntryPoints":
[
{
"name": "VelocityShockPropagation",
"type": "Compute"
}
]
}
}

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6a0d9d2461d2317b21e0fbd9bf8385bfb978f4dfd0db8f50450c3482faee85ed
size 4116085

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ea292419a5d2ebc9c33eaddb1b69a5d7b21055bcce2edc703dc32cfcb3a45cb
size 209056

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4856b6163507f326804fd98e1d3d5d0cbfc08139068ceea6480dae542a2ddfb3
size 15098332

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:092278e1a27dbd90685f4e7f0da0fc7c2600f5c9e5217bdcef523ece6e8900b4
size 1934040

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ae42c22c58c6daef42dbe9db16dd827f44ca66e8d1bb1761dfcb1f5890396dc2
size 134009

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9cbc84f42d4acdaf748dfdc76102796e705a5c95bc3e73acf1deca3db57b53a9
size 221701

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:42c7f063e1e87aef6293e08b0b9e6bdf205fc2f79a8fb2cb70db03524ac547d3
size 143015

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df138ab3b88ce7678370ae589c8e28741e33f6f9aa9fba9f6c28705b0f3a02e7
size 3626

@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:df138ab3b88ce7678370ae589c8e28741e33f6f9aa9fba9f6c28705b0f3a02e7
size 3626

@ -5,3 +5,157 @@
# SPDX-License-Identifier: Apache-2.0 OR MIT
#
#
if(PAL_TRAIT_BUILD_HOST_TOOLS)
# Declare ATOMTRESSFX_EDITOR only when the tools pipeline is built.
# This allows AzToolsFramework to link properly on platforms that support
# editor mode, while the remaining platforms skip this module and avoid
# compiling the editor components.
set_source_files_properties(
Code/HairModule.cpp
Code/Components/EditorHairComponent.h
Code/Components/EditorHairComponent.cpp
PROPERTIES
COMPILE_DEFINITIONS
ATOMTRESSFX_EDITOR
)
ly_add_target(
NAME AtomTressFX.Static STATIC
NAMESPACE Gem
FILES_CMAKE
Hair_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzCore
AZ::AzToolsFramework
Gem::LmbrCentral
Gem::Atom_RHI.Public
Gem::Atom_RPI.Public
Gem::Atom_Feature_Common.Static
Gem::AtomLyIntegration_CommonFeatures.Static
Gem::EMotionFXStaticLib
)
ly_add_target(
NAME AtomTressFX ${PAL_TRAIT_MONOLITHIC_DRIVEN_MODULE_TYPE}
NAMESPACE Gem
FILES_CMAKE
Hair_shared_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code/AssetPipeline
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzCore
AZ::AzToolsFramework
Gem::Atom_Utils.Static
Gem::EMotionFXStaticLib
Gem::AtomTressFX.Static
)
else()
ly_add_target(
NAME AtomTressFX.Static STATIC
NAMESPACE Gem
FILES_CMAKE
Hair_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzCore
Gem::LmbrCentral
Gem::Atom_RHI.Public
Gem::Atom_RPI.Public
Gem::Atom_Feature_Common.Static
Gem::AtomLyIntegration_CommonFeatures.Static
Gem::EMotionFXStaticLib
)
ly_add_target(
NAME AtomTressFX ${PAL_TRAIT_MONOLITHIC_DRIVEN_MODULE_TYPE}
NAMESPACE Gem
FILES_CMAKE
Hair_shared_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code/AssetPipeline
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PRIVATE
AZ::AzCore
Gem::Atom_Utils.Static
Gem::EMotionFXStaticLib
Gem::AtomTressFX.Static
)
endif()
# Clients and servers use the AtomTressFX module
ly_create_alias(NAME AtomTressFX.Clients NAMESPACE Gem TARGETS Gem::AtomTressFX)
ly_create_alias(NAME AtomTressFX.Servers NAMESPACE Gem TARGETS Gem::AtomTressFX)
if(PAL_TRAIT_BUILD_HOST_TOOLS)
ly_add_target(
NAME AtomTressFX.Builders.Static STATIC
NAMESPACE Gem
FILES_CMAKE
Hair_builders_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PUBLIC
AZ::AssetBuilderSDK
Gem::AtomTressFX.Static
)
ly_add_target(
NAME AtomTressFX.Builders GEM_MODULE
NAMESPACE Gem
FILES_CMAKE
Hair_builders_shared_files.cmake
INCLUDE_DIRECTORIES
PRIVATE
Code
External
External/Code/src
PUBLIC
Code/Include
BUILD_DEPENDENCIES
PRIVATE
Gem::AtomTressFX.Builders.Static
)
# builders and tools use the AtomTressFX.Builders and AtomTressFX modules
ly_create_alias(NAME AtomTressFX.Tools NAMESPACE Gem TARGETS Gem::AtomTressFX Gem::AtomTressFX.Builders)
endif()

@ -0,0 +1,40 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Assets/HairAsset.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairAssetHandler::HairAssetHandler()
: AzFramework::GenericAssetHandler<HairAsset>(HairAsset::DisplayName, HairAsset::Group, HairAsset::Extension)
{
}
Data::AssetHandler::LoadResult HairAssetHandler::LoadAssetData(
const Data::Asset<Data::AssetData>& asset, AZStd::shared_ptr<Data::AssetDataStream> stream,
[[maybe_unused]]const Data::AssetFilterCB& assetLoadFilterCB)
{
HairAsset* assetData = asset.GetAs<HairAsset>();
assetData->m_tressFXAsset.reset(new AMD::TressFXAsset());
if(assetData->m_tressFXAsset->LoadCombinedHairData(stream.get()))
{
return Data::AssetHandler::LoadResult::LoadComplete;
}
else
{
assetData->m_tressFXAsset.reset(); // free the partially loaded data on failure instead of leaking it
}
return Data::AssetHandler::LoadResult::Error;
}
}
} // namespace Render
} // namespace AZ

@ -0,0 +1,57 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Asset/AssetCommon.h>
#include <AzFramework/Asset/GenericAssetHandler.h>
#include <TressFX/TressFXAsset.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
//! HairAsset is a simple AssetData wrapper around the TressFXAsset used by the AP.
//! It comprises the hair vertices data file, the hair bone skinning information file
//! and the collision data file.
//! The plan is to separate the collision data since the relation can be 1:1, 1:N or N:1,
//! meaning that a single hair can use multiple collision meshes (not only a single mesh), and
//! conversely multiple hairs can share the same collision data (hairdo and fur, for example).
class HairAsset final
: public AZ::Data::AssetData
{
public:
static constexpr inline const char* DisplayName = "HairAsset";
static constexpr inline const char* Extension = "tfxhair";
static constexpr inline const char* Group = "Hair";
AZ_RTTI(HairAsset, "{52842B73-8F75-4620-8231-31EBCC74DD85}", AZ::Data::AssetData);
AZ_CLASS_ALLOCATOR(HairAsset, AZ::SystemAllocator, 0);
AZStd::unique_ptr<AMD::TressFXAsset> m_tressFXAsset;
};
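// Typical usage sketch (illustrative assumption, not prescribed by this header): a component holds a
// Data::Asset<HairAsset> member, calls QueueLoad() on it and connects to Data::AssetBus to be
// notified through OnAssetReady once the combined .tfxhair data has been loaded by the handler
// below; see HairComponentController::OnHairAssetChanged for the pattern used in this Gem.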
// HairAssetHandler
// This handler class helps load the .tfxhair file (which is a combined file of .tfx, .tfxbone and .tfxmesh)
// from an AssetDataStream.
class HairAssetHandler final
: public AzFramework::GenericAssetHandler<HairAsset>
{
public:
HairAssetHandler();
private:
Data::AssetHandler::LoadResult LoadAssetData(
const Data::Asset<Data::AssetData>& asset, AZStd::shared_ptr<Data::AssetDataStream> stream,
const Data::AssetFilterCB& assetLoadFilterCB) override;
};
}
}
}

@ -0,0 +1,173 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Assets/HairAsset.h>
#include <Builders/HairAssetBuilder.h>
#include <AssetBuilderSDK/SerializationDependencies.h>
#include <AzCore/IO/FileIO.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
void HairAssetBuilder::RegisterBuilder()
{
// Setup the builder descriptor
AssetBuilderSDK::AssetBuilderDesc builderDesc;
builderDesc.m_name = "HairAssetBuilder";
builderDesc.m_patterns.emplace_back(AssetBuilderSDK::AssetBuilderPattern(
AZStd::string::format("*.%s", AMD::TFXFileExtension), AssetBuilderSDK::AssetBuilderPattern::PatternType::Wildcard));
builderDesc.m_busId = azrtti_typeid<HairAssetBuilder>();
builderDesc.m_version = 3;
builderDesc.m_createJobFunction =
AZStd::bind(&HairAssetBuilder::CreateJobs, this, AZStd::placeholders::_1, AZStd::placeholders::_2);
builderDesc.m_processJobFunction =
AZStd::bind(&HairAssetBuilder::ProcessJob, this, AZStd::placeholders::_1, AZStd::placeholders::_2);
BusConnect(builderDesc.m_busId);
AssetBuilderSDK::AssetBuilderBus::Broadcast(
&AssetBuilderSDK::AssetBuilderBusTraits::RegisterBuilderInformation, builderDesc);
}
void HairAssetBuilder::ShutDown()
{
m_isShuttingDown = true;
}
void HairAssetBuilder::CreateJobs(
const AssetBuilderSDK::CreateJobsRequest& request, AssetBuilderSDK::CreateJobsResponse& response)
{
if (m_isShuttingDown)
{
response.m_result = AssetBuilderSDK::CreateJobsResultCode::ShuttingDown;
return;
}
for (const AssetBuilderSDK::PlatformInfo& info : request.m_enabledPlatforms)
{
AssetBuilderSDK::JobDescriptor descriptor;
descriptor.m_jobKey = AMD::TFXFileExtension;
descriptor.m_critical = false;
descriptor.SetPlatformIdentifier(info.m_identifier.c_str());
response.m_createJobOutputs.push_back(descriptor);
}
// Set the .tfxbone and .tfxmesh files as source dependencies. This way, when the .tfxbone or
// .tfxmesh file is reloaded, it will also trigger a rebuild of the hair asset.
AssetBuilderSDK::SourceFileDependency sourceFileDependency;
sourceFileDependency.m_sourceFileDependencyPath = request.m_sourceFile;
AZ::StringFunc::Path::ReplaceExtension(sourceFileDependency.m_sourceFileDependencyPath, AMD::TFXBoneFileExtension);
response.m_sourceFileDependencyList.push_back(sourceFileDependency);
sourceFileDependency.m_sourceFileDependencyPath = request.m_sourceFile;
AZ::StringFunc::Path::ReplaceExtension(sourceFileDependency.m_sourceFileDependencyPath, AMD::TFXMeshFileExtension);
response.m_sourceFileDependencyList.push_back(sourceFileDependency);
response.m_result = AssetBuilderSDK::CreateJobsResultCode::Success;
}
void HairAssetBuilder::ProcessJob(
const AssetBuilderSDK::ProcessJobRequest& request, AssetBuilderSDK::ProcessJobResponse& response)
{
AZ_TracePrintf(AssetBuilderSDK::InfoWindow, "HairAssetBuilder Starting Job for %s.\n", request.m_fullPath.c_str());
if (m_isShuttingDown)
{
AZ_TracePrintf(
AssetBuilderSDK::WarningWindow, "Cancelled job %s because shutdown was requested.\n", request.m_fullPath.c_str());
response.m_resultCode = AssetBuilderSDK::ProcessJobResult_Cancelled;
return;
}
// There are three source files for a TressFX asset: .tfx, .tfxbone and .tfxmesh.
// We read all three source files and combine them into a single .tfxhair output file in the cache.
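// The resulting .tfxhair layout is: a TressFXCombinedHairFileHeader (with offsets to each section),
// followed by the raw .tfx data, the raw .tfxbone data and, when present, the raw .tfxmesh data,
// in that order.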
AZStd::string tfxFileName;
AzFramework::StringFunc::Path::GetFullFileName(request.m_fullPath.c_str(), tfxFileName);
// Create the path to the result .tfxhair file inside the tempDirPath.
AZStd::string destPath;
AzFramework::StringFunc::Path::ConstructFull(request.m_tempDirPath.c_str(), tfxFileName.c_str(), destPath, true);
AZ::StringFunc::Path::ReplaceExtension(destPath, AMD::TFXCombinedFileExtension);
// Create and open the .tfxhair we are writing to.
AZ::IO::FileIOStream outStream(destPath.data(), AZ::IO::OpenMode::ModeWrite | AZ::IO::OpenMode::ModeCreatePath);
if (!outStream.IsOpen())
{
AZ_TracePrintf(
AssetBuilderSDK::ErrorWindow, "Error: Failed job %s because .tfxhair file cannot be created.\n", request.m_fullPath.c_str());
response.m_resultCode = AssetBuilderSDK::ProcessJobResult_Failed;
return;
}
// Write the .tfxhair file header as a placeholder
AMD::TressFXCombinedHairFileHeader header;
outStream.Write(sizeof(AMD::TressFXCombinedHairFileHeader), &header);
// Combine .tfx, .tfxbone, .tfxmesh file into .tfxhair file
auto writeToStream = [](const AZStd::string& fullpath, AZ::IO::FileIOStream& outStream, bool required) {
IO::FileIOStream inStream;
if (!inStream.Open(fullpath.c_str(), IO::OpenMode::ModeRead))
{
if (required)
{
AZ_TracePrintf(AssetBuilderSDK::ErrorWindow,
"Error: Failed job %s because the file is either missing or cannot be opened.\n", fullpath.c_str());
}
return (AZ::IO::SizeType)0;
}
const AZ::IO::SizeType dataSize = inStream.GetLength();
AZStd::vector<AZ::u8> fileBuffer(dataSize);
inStream.Read(dataSize, fileBuffer.data());
return outStream.Write(dataSize, fileBuffer.data());
};
// Write .tfx file to the combined .tfxhair file.
AZStd::string sourcePath = request.m_fullPath;
const AZ::IO::SizeType tfxSize = writeToStream(sourcePath, outStream, true);
// Move on to .tfxbone file.
AZ::StringFunc::Path::ReplaceExtension(sourcePath, AMD::TFXBoneFileExtension);
const AZ::IO::SizeType tfxBoneSize = writeToStream(sourcePath, outStream, true);
// Move on to .tfxmesh file.
AZ::StringFunc::Path::ReplaceExtension(sourcePath, AMD::TFXMeshFileExtension);
writeToStream(sourcePath, outStream, false);
if (tfxSize == 0 || tfxBoneSize == 0)
{
// Fail the job if the .tfx file or the .tfxbone file is missing.
AZ_TracePrintf(AssetBuilderSDK::ErrorWindow,
"Error: Failed job %s because tfxSize=%llu or tfxBoneSize=%llu.\n",
request.m_fullPath.c_str(), static_cast<unsigned long long>(tfxSize), static_cast<unsigned long long>(tfxBoneSize));
response.m_resultCode = AssetBuilderSDK::ProcessJobResult_Failed;
return;
}
// Write the header file with correct data
header.offsetTFX = sizeof(AMD::TressFXCombinedHairFileHeader);
header.offsetTFXBone = header.offsetTFX + tfxSize;
header.offsetTFXMesh = header.offsetTFXBone + tfxBoneSize;
outStream.Seek(0, AZ::IO::FileIOStream::SeekMode::ST_SEEK_BEGIN);
outStream.Write(sizeof(AMD::TressFXCombinedHairFileHeader), &header);
// Send the .tfxhair as the final job product.
AssetBuilderSDK::JobProduct jobProduct(destPath, azrtti_typeid<HairAsset>(), 0);
jobProduct.m_dependenciesHandled = true;
response.m_outputProducts.push_back(jobProduct);
response.m_resultCode = AssetBuilderSDK::ProcessJobResult_Success;
AZ_TracePrintf(AssetBuilderSDK::InfoWindow, "HairAssetBuilder successfully finished Job for %s.\n", request.m_fullPath.c_str());
}
}
} // namespace Render
} // namespace AZ

@ -0,0 +1,39 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AssetBuilderSDK/AssetBuilderSDK.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairAssetBuilder final
: public AssetBuilderSDK::AssetBuilderCommandBus::Handler
{
public:
AZ_RTTI(HairAssetBuilder, "{7D77A133-115E-4A14-860D-C1DB9422C190}");
void RegisterBuilder();
//! AssetBuilderSDK::AssetBuilderCommandBus interface
void ShutDown() override;
void CreateJobs(const AssetBuilderSDK::CreateJobsRequest& request, AssetBuilderSDK::CreateJobsResponse& response);
void ProcessJob(const AssetBuilderSDK::ProcessJobRequest& request, AssetBuilderSDK::ProcessJobResponse& response);
private:
bool m_isShuttingDown = false;
};
}
} // namespace Render
} // namespace AZ

@ -0,0 +1,67 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/Serialization/SerializeContext.h>
#include <AzCore/Serialization/EditContextConstants.inl>
#include <Assets/HairAsset.h>
#include <Builders/HairBuilderComponent.h>
#include <Builders/HairAssetBuilder.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
void HairBuilderComponent::Reflect(ReflectContext* context)
{
if (SerializeContext* serialize = azrtti_cast<SerializeContext*>(context))
{
serialize->Class<HairBuilderComponent, Component>()
->Version(1)->Attribute(
AZ::Edit::Attributes::SystemComponentTags,
AZStd::vector<AZ::Crc32>({AssetBuilderSDK::ComponentTags::AssetBuilder}));
;
}
}
void HairBuilderComponent::GetProvidedServices(ComponentDescriptor::DependencyArrayType& provided)
{
provided.push_back(AZ_CRC_CE("HairBuilderService"));
}
void HairBuilderComponent::GetIncompatibleServices(ComponentDescriptor::DependencyArrayType& incompatible)
{
incompatible.push_back(AZ_CRC_CE("HairBuilderService"));
}
void HairBuilderComponent::Activate()
{
m_hairAssetBuilder.RegisterBuilder();
m_hairAssetHandler.Register();
// Add asset types and extensions to AssetCatalog.
auto assetCatalog = AZ::Data::AssetCatalogRequestBus::FindFirstHandler();
if (assetCatalog)
{
assetCatalog->EnableCatalogForAsset(azrtti_typeid<HairAsset>());
assetCatalog->AddExtension(AMD::TFXCombinedFileExtension);
}
}
void HairBuilderComponent::Deactivate()
{
m_hairAssetBuilder.BusDisconnect();
m_hairAssetHandler.Unregister();
}
} // namespace Hair
} // End Render namespace
} // End AZ namespace

@ -0,0 +1,51 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Component/Component.h>
#include <AzCore/Asset/AssetCommon.h>
#include <AzCore/Asset/AssetManager.h>
#include <AssetBuilderSDK/AssetBuilderBusses.h>
#include <Assets/HairAsset.h>
#include <Builders/HairAssetBuilder.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairBuilderComponent final
: public Component
{
public:
AZ_COMPONENT(HairBuilderComponent, "{88233F79-98DA-4DC6-A60B-0405BD810479}");
HairBuilderComponent() = default;
~HairBuilderComponent() = default;
static void Reflect(ReflectContext* context);
static void GetProvidedServices(ComponentDescriptor::DependencyArrayType& provided);
static void GetIncompatibleServices(ComponentDescriptor::DependencyArrayType& incompatible);
private:
HairBuilderComponent(const HairBuilderComponent&) = delete;
////////////////////////////////////////////////////////////////////////
// Component interface implementation
void Activate() override;
void Deactivate() override;
////////////////////////////////////////////////////////////////////////
HairAssetBuilder m_hairAssetBuilder;
HairAssetHandler m_hairAssetHandler;
};
} // namespace Hair
} // End Render namespace
} // End AZ namespace

@ -0,0 +1,31 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Builders/HairBuilderComponent.h>
#include <Builders/HairBuilderModule.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairBuilderModule::HairBuilderModule()
: AZ::Module()
{
m_descriptors.push_back(HairBuilderComponent::CreateDescriptor());
}
AZ::ComponentTypeList HairBuilderModule::GetRequiredSystemComponents() const
{
return AZ::ComponentTypeList{azrtti_typeid<HairBuilderComponent>()};
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,35 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Memory/SystemAllocator.h>
#include <AzCore/Module/Module.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairBuilderModule : public AZ::Module
{
public:
AZ_RTTI(HairBuilderModule, "{44440BE8-48AC-46AA-9643-2BD866709E27}", AZ::Module);
AZ_CLASS_ALLOCATOR(HairBuilderModule, AZ::SystemAllocator, 0);
HairBuilderModule();
//! Add required SystemComponents to the SystemEntity.
AZ::ComponentTypeList GetRequiredSystemComponents() const override;
};
} // namespace Hair
} // namespace Render
} // namespace AZ
AZ_DECLARE_MODULE_CLASS(Gem_AtomTressFX_Builder, AZ::Render::Hair::HairBuilderModule)

@ -0,0 +1,149 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#if defined (ATOMTRESSFX_EDITOR)
#include <AzCore/RTTI/BehaviorContext.h>
#include <Components/EditorHairComponent.h>
#include <Rendering/HairRenderObject.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
void EditorHairComponent::Reflect(AZ::ReflectContext* context)
{
BaseClass::Reflect(context);
if (AZ::SerializeContext* serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
{
serializeContext->Class<EditorHairComponent, BaseClass>()
->Version(1);
if (AZ::EditContext* editContext = serializeContext->GetEditContext())
{
editContext->Class<EditorHairComponent>(
"Atom Hair", "Controls Hair Properties")
->ClassElement(Edit::ClassElements::EditorData, "")
->Attribute(Edit::Attributes::Category, "Atom")
->Attribute(AZ::Edit::Attributes::Icon, "Editor/Icons/Components/Component_Placeholder.svg")
->Attribute(AZ::Edit::Attributes::ViewportIcon, "editor/icons/components/viewport/component_placeholder.png")
->Attribute(Edit::Attributes::AppearsInAddComponentMenu, AZ_CRC("Game", 0x232b318c))
->Attribute(Edit::Attributes::AutoExpand, true)
->Attribute(Edit::Attributes::HelpPageURL, "https://o3de.org/docs/user-guide/gems/reference/rendering/amd/atom-tressfx/")
;
editContext->Class<HairComponentController>(
"HairComponentController", "")
->ClassElement(AZ::Edit::ClassElements::EditorData, "")
->Attribute(AZ::Edit::Attributes::AutoExpand, true)
->DataElement(AZ::Edit::UIHandlers::Default, &HairComponentController::m_configuration, "Configuration", "")
->Attribute(AZ::Edit::Attributes::Visibility, AZ::Edit::PropertyVisibility::ShowChildrenOnly)
;
editContext->Class<HairComponentConfig>(
"HairComponentConfig", "")
->ClassElement(AZ::Edit::ClassElements::EditorData, "")
->DataElement(
AZ::Edit::UIHandlers::Default, &HairComponentConfig::m_hairAsset, "Hair Asset",
"TressFX asset to be assigned to this entity.")
->DataElement(
AZ::Edit::UIHandlers::Default, &HairComponentConfig::m_simulationSettings, "TressFX Sim Settings",
"TressFX simulation settings to be applied on this entity.")
->DataElement(
AZ::Edit::UIHandlers::Default, &HairComponentConfig::m_renderingSettings, "TressFX Render Settings",
"TressFX rendering settings to be applied on this entity.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairComponentConfig::m_hairGlobalSettings)
->Attribute(AZ::Edit::Attributes::ChangeNotify, &HairComponentConfig::OnHairGlobalSettingsChanged)
;
}
}
if (auto behaviorContext = azrtti_cast<BehaviorContext*>(context))
{
behaviorContext->Class<EditorHairComponent>()->RequestBus("HairRequestsBus");
behaviorContext->ConstantProperty("EditorHairComponentTypeId", BehaviorConstant(Uuid(Hair::EditorHairComponentTypeId)))
->Attribute(AZ::Script::Attributes::Module, "render")
->Attribute(AZ::Script::Attributes::Scope, AZ::Script::Attributes::ScopeFlags::Automation);
}
}
EditorHairComponent::EditorHairComponent(const HairComponentConfig& config)
: BaseClass(config)
{
m_prevHairAssetId = config.m_hairAsset.GetId();
}
void EditorHairComponent::Activate()
{
BaseClass::Activate();
AzFramework::EntityDebugDisplayEventBus::Handler::BusConnect(GetEntityId());
}
void EditorHairComponent::Deactivate()
{
BaseClass::Deactivate();
AzFramework::EntityDebugDisplayEventBus::Handler::BusDisconnect();
}
u32 EditorHairComponent::OnConfigurationChanged()
{
// Since both a hair config change and a hair asset change trigger this call, we use the previously
// loaded hair assetId to determine which one actually changed.
// This is because an asset change is a heavy operation and we don't want to trigger it when it's not needed.
if (m_prevHairAssetId == m_controller.GetConfiguration().m_hairAsset.GetId())
{
m_controller.OnHairConfigChanged();
}
else
{
m_controller.OnHairAssetChanged();
m_prevHairAssetId = m_controller.GetConfiguration().m_hairAsset.GetId();
}
return Edit::PropertyRefreshLevels::AttributesAndValues;
}
void EditorHairComponent::DisplayEntityViewport(
[[maybe_unused]] const AzFramework::ViewportInfo& viewportInfo, AzFramework::DebugDisplayRequests& debugDisplay)
{
// Only render debug information when selected.
if (!IsSelected())
{
return;
}
// Only render debug information after render object got created.
if (!m_controller.m_renderObject)
{
return;
}
float x = 40.0;
float y = 20.0f;
float size = 1.00f;
bool center = false;
AZStd::string debugString = AZStd::string::format(
"Hair component stats:\n"
" Total number of hairs: %d\n"
" Total number of guide hairs: %d\n"
" Amount of follow hair per guide hair: %d\n",
m_controller.m_renderObject->GetNumTotalHairStrands(),
m_controller.m_renderObject->GetNumGuideHairs(),
m_controller.m_renderObject->GetNumFollowHairsPerGuideHair());
debugDisplay.Draw2dTextLabel(x, y, size, debugString.c_str(), center);
}
} // namespace Hair
} // namespace Render
} // namespace AZ
#endif // ATOMTRESSFX_EDITOR

@ -0,0 +1,67 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#if defined (ATOMTRESSFX_EDITOR)
#include <AzToolsFramework/ToolsComponents/EditorComponentAdapter.h>
#include <AzFramework/Entity/EntityDebugDisplayBus.h>
#include <Components/HairComponent.h>
#include <Components/HairComponentConfig.h>
#include <Components/HairComponentController.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
//! Visual editor representation of the hair that can be created for an entity
//! that has an Actor component.
//! The config data itself is held by the 'HairComponentConfig' that reflects the 'TressFXSettings'
//! and by the 'HairGlobalSettings' that mainly controls the shader options.
//! The hair data is held by the 'HairRenderObject' and the connection between the component
//! and the handling of the data is done by the 'HairComponentController'.
static constexpr const char* const EditorHairComponentTypeId =
"{822A8253-4662-41B1-8623-7B2D047A4D68}";
class EditorHairComponent final
: public AzToolsFramework::Components::EditorComponentAdapter<HairComponentController, HairComponent, HairComponentConfig>
, private AzFramework::EntityDebugDisplayEventBus::Handler
{
public:
using BaseClass = AzToolsFramework::Components::EditorComponentAdapter<HairComponentController, HairComponent, HairComponentConfig>;
AZ_EDITOR_COMPONENT(AZ::Render::Hair::EditorHairComponent, Hair::EditorHairComponentTypeId, BaseClass);
static void Reflect(AZ::ReflectContext* context);
EditorHairComponent() = default;
EditorHairComponent(const HairComponentConfig& config);
void Activate() override;
void Deactivate() override;
// AzFramework::DebugDisplayRequestBus::Handler interface
void DisplayEntityViewport(
const AzFramework::ViewportInfo& viewportInfo, AzFramework::DebugDisplayRequests& debugDisplay) override;
private:
//! EditorRenderComponentAdapter overrides...
AZ::u32 OnConfigurationChanged() override;
Data::AssetId m_prevHairAssetId; // Previous loaded hair asset id.
};
} // namespace Hair
} // namespace Render
} // namespace AZ
#endif // ATOMTRESSFX_EDITOR

@ -0,0 +1,36 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Component/Component.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairRequests
: public ComponentBus
{
public:
AZ_RTTI(AZ::Render::HairRequests, "{923D6B94-C6AD-4B03-B8CC-DB7E708FB9F4}");
/// Overrides the default AZ::EBusTraits handler policy to allow one listener only.
static const EBusHandlerPolicy HandlerPolicy = EBusHandlerPolicy::Single;
virtual ~HairRequests() {}
// Add required getter and setter functions - matching the interface methods
};
typedef AZ::EBus<HairRequests> HairRequestsBus;
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,60 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/RTTI/BehaviorContext.h>
#include <AzFramework/Components/TransformComponent.h>
#include <Components/HairComponent.h>
#include <Rendering/HairFeatureProcessor.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairComponent::HairComponent(const HairComponentConfig& config)
: BaseClass(config)
{
}
HairComponent::~HairComponent()
{
}
void HairComponent::Reflect(AZ::ReflectContext* context)
{
BaseClass::Reflect(context);
if (auto serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
{
serializeContext->Class<HairComponent, BaseClass>();
}
if (auto behaviorContext = azrtti_cast<BehaviorContext*>(context))
{
behaviorContext->Class<HairComponent>()->RequestBus("HairRequestsBus");
behaviorContext->ConstantProperty("HairComponentTypeId", BehaviorConstant(Uuid(Hair::HairComponentTypeId)))
->Attribute(AZ::Script::Attributes::Module, "render")
->Attribute(AZ::Script::Attributes::Scope, AZ::Script::Attributes::ScopeFlags::Common);
}
}
void HairComponent::Activate()
{
BaseClass::Activate();
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,47 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Component/Component.h>
#include <AzFramework/Components/ComponentAdapter.h>
// Hair specific
#include <Components/HairComponentConfig.h>
#include <Components/HairComponentController.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairRenderObject;
static constexpr const char* const HairComponentTypeId = "{9556883B-6F3C-4010-BB3F-EBB480515D68}";
//! Parallel to the 'EditorHairComponent' this class is used in game mode.
class HairComponent final
: public AzFramework::Components::ComponentAdapter<HairComponentController, HairComponentConfig>
{
public:
using BaseClass = AzFramework::Components::ComponentAdapter<HairComponentController, HairComponentConfig>;
AZ_COMPONENT(AZ::Render::Hair::HairComponent, Hair::HairComponentTypeId, BaseClass);
HairComponent() = default;
HairComponent(const HairComponentConfig& config);
~HairComponent();
static void Reflect(AZ::ReflectContext* context);
void Activate() override;
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,47 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/Serialization/SerializeContext.h>
#include <AzCore/Serialization/EditContext.h>
#include <Components/HairComponentConfig.h>
#include <Rendering/HairGlobalSettingsBus.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
void HairComponentConfig::Reflect(ReflectContext* context)
{
AMD::TressFXSimulationSettings::Reflect(context);
AMD::TressFXRenderingSettings::Reflect(context);
if (auto serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
{
serializeContext->Class<HairComponentConfig, ComponentConfig>()
->Version(4)
->Field("HairAsset", &HairComponentConfig::m_hairAsset)
->Field("SimulationSettings", &HairComponentConfig::m_simulationSettings)
->Field("RenderingSettings", &HairComponentConfig::m_renderingSettings)
->Field("HairGlobalSettings", &HairComponentConfig::m_hairGlobalSettings)
;
}
}
void HairComponentConfig::OnHairGlobalSettingsChanged()
{
HairGlobalSettingsRequestBus::Broadcast(&HairGlobalSettingsRequests::SetHairGlobalSettings, m_hairGlobalSettings);
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,62 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Component/Component.h>
#include <TressFX/TressFXSettings.h>
#include <Assets/HairAsset.h>
#include <Rendering/HairGlobalSettings.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class EditorHairComponent;
//! Reflects the TressFX settings and configuration data of the current hair object.
class HairComponentConfig final :
public ComponentConfig
{
friend class EditorHairComponent;
public:
AZ_RTTI(AZ::Render::HairComponentConfig, "{AF2C2F26-0C01-4EAD-A81C-4304BD751EDF}", AZ::ComponentConfig);
static void Reflect(ReflectContext* context);
void OnHairGlobalSettingsChanged();
void SetEnabled(bool value)
{
m_enabled = value;
}
bool GetIsEnabled()
{
return m_enabled;
}
// TressFX settings
AMD::TressFXSimulationSettings m_simulationSettings;
AMD::TressFXRenderingSettings m_renderingSettings;
Data::Asset<HairAsset> m_hairAsset;
HairGlobalSettings m_hairGlobalSettings;
private:
bool m_enabled = true;
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@ -0,0 +1,374 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/RTTI/BehaviorContext.h>
#include <Atom/RPI.Public/Scene.h>
#include <AzFramework/Components/TransformComponent.h>
#include <Integration/Components/ActorComponent.h>
#include <EMotionFX/Source/TransformData.h>
#include <EMotionFX/Source/ActorInstance.h>
#include <EMotionFX/Source/Node.h>
// Hair Specific
#include <TressFX/TressFXAsset.h>
#include <TressFX/TressFXSettings.h>
#include <Rendering/HairFeatureProcessor.h>
#include <Components/HairComponentController.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairComponentController::~HairComponentController()
{
RemoveHairObject();
}
void HairComponentController::Reflect(ReflectContext* context)
{
HairComponentConfig::Reflect(context);
if (auto* serializeContext = azrtti_cast<SerializeContext*>(context))
{
serializeContext->Class<HairComponentController>()
->Version(2)
->Field("Configuration", &HairComponentController::m_configuration)
;
}
if (AZ::BehaviorContext* behaviorContext = azrtti_cast<AZ::BehaviorContext*>(context))
{
behaviorContext->EBus<HairRequestsBus>("HairRequestsBus")
->Attribute(AZ::Script::Attributes::Module, "render")
->Attribute(AZ::Script::Attributes::Scope, AZ::Script::Attributes::ScopeFlags::Common)
// Insert auto-gen behavior context here...
;
}
}
void HairComponentController::GetProvidedServices(AZ::ComponentDescriptor::DependencyArrayType& provided)
{
provided.push_back(AZ_CRC_CE("HairService"));
}
void HairComponentController::GetIncompatibleServices(AZ::ComponentDescriptor::DependencyArrayType& incompatible)
{
incompatible.push_back(AZ_CRC_CE("HairService"));
}
void HairComponentController::GetRequiredServices([[maybe_unused]] AZ::ComponentDescriptor::DependencyArrayType& required)
{
// Dependency in the Actor due to the need to get the bone / joint matrices
required.push_back(AZ_CRC_CE("EMotionFXActorService"));
}
HairComponentController::HairComponentController(const HairComponentConfig& config)
: m_configuration(config)
{
}
void HairComponentController::Activate(EntityId entityId)
{
m_entityId = entityId;
m_featureProcessor = RPI::Scene::GetFeatureProcessorForEntity<Hair::HairFeatureProcessor>(m_entityId);
if (m_featureProcessor)
{
m_featureProcessor->SetHairGlobalSettings(m_configuration.m_hairGlobalSettings);
if (!m_renderObject)
{
// Call this function if object doesn't exist to trigger the load of the existing asset
OnHairAssetChanged();
}
}
EMotionFX::Integration::ActorComponentNotificationBus::Handler::BusConnect(m_entityId);
HairRequestsBus::Handler::BusConnect(m_entityId);
TickBus::Handler::BusConnect();
HairGlobalSettingsNotificationBus::Handler::BusConnect();
}
void HairComponentController::Deactivate()
{
HairRequestsBus::Handler::BusDisconnect(m_entityId);
EMotionFX::Integration::ActorComponentNotificationBus::Handler::BusDisconnect(m_entityId);
Data::AssetBus::MultiHandler::BusDisconnect();
TickBus::Handler::BusDisconnect();
HairGlobalSettingsNotificationBus::Handler::BusDisconnect();
RemoveHairObject();
m_entityId.SetInvalid();
}
void HairComponentController::SetConfiguration(const HairComponentConfig& config)
{
m_configuration = config;
OnHairConfigChanged();
}
const HairComponentConfig& HairComponentController::GetConfiguration() const
{
return m_configuration;
}
void HairComponentController::OnHairAssetChanged()
{
Data::AssetBus::MultiHandler::BusDisconnect();
if (m_configuration.m_hairAsset.GetId().IsValid())
{
Data::AssetBus::MultiHandler::BusConnect(m_configuration.m_hairAsset.GetId());
m_configuration.m_hairAsset.QueueLoad();
}
else
{
RemoveHairObject();
}
}
void HairComponentController::OnHairGlobalSettingsChanged(const HairGlobalSettings& hairGlobalSettings)
{
m_configuration.m_hairGlobalSettings = hairGlobalSettings;
}
void HairComponentController::RemoveHairObject()
{
if (m_featureProcessor)
{
m_featureProcessor->RemoveHairRenderObject(m_renderObject);
}
m_renderObject.reset();
}
void HairComponentController::OnHairConfigChanged()
{
// The actual config change to the render object happens in the OnTick function. We do this to make sure it
// always happens pre-rendering. There is no need to do it before the render object is created, because the
// object will always be created with the updated configuration.
if (m_renderObject)
{
m_configChanged = true;
}
}
void HairComponentController::OnAssetReady(Data::Asset<Data::AssetData> asset)
{
if (asset.GetId() == m_configuration.m_hairAsset.GetId())
{
m_configuration.m_hairAsset = asset;
CreateHairObject();
}
}
void HairComponentController::OnAssetReloaded(Data::Asset<Data::AssetData> asset)
{
OnAssetReady(asset);
}
void HairComponentController::OnActorInstanceCreated([[maybe_unused]]EMotionFX::ActorInstance* actorInstance)
{
CreateHairObject();
}
void HairComponentController::OnActorInstanceDestroyed([[maybe_unused]]EMotionFX::ActorInstance* actorInstance)
{
RemoveHairObject();
}
void HairComponentController::OnTick([[maybe_unused]]float deltaTime, [[maybe_unused]]AZ::ScriptTimePoint time)
{
if (!m_renderObject)
{
return;
}
// Config change to the renderObject happens in OnTick, so we know it occurs before the render update.
if (m_configChanged)
{
const float MAX_SIMULATION_TIME_STEP = 0.033f; // Assuming a minimum of 30 fps
float currentDeltaTime = AZStd::min(deltaTime, MAX_SIMULATION_TIME_STEP);
m_renderObject->UpdateSimulationParameters(&m_configuration.m_simulationSettings, currentDeltaTime);
// [To Do] Hair - Allow update of the following settings to control dynamic LOD
const float distanceFromCamera = 1.0f;
const bool updateShadows = false;
m_renderObject->UpdateRenderingParameters(
&m_configuration.m_renderingSettings, RESERVED_PIXELS_FOR_OIT, distanceFromCamera, updateShadows);
m_configChanged = false;
// Only load the image asset when the dirty flag has been set on the settings.
if (m_configuration.m_renderingSettings.m_imgDirty)
{
m_renderObject->LoadImageAsset(&m_configuration.m_renderingSettings);
m_configuration.m_renderingSettings.m_imgDirty = false;
}
}
// Update the enable flag for hair render object
// The enable flag depends on the visibility of render actor instance and the flag of hair configuration.
bool actorVisible = false;
EMotionFX::Integration::ActorComponentRequestBus::EventResult(
actorVisible, m_entityId, &EMotionFX::Integration::ActorComponentRequestBus::Events::GetRenderActorVisible);
m_renderObject->SetEnabled(actorVisible);
UpdateActorMatrices();
}
int HairComponentController::GetTickOrder()
{
return AZ::TICK_PRE_RENDER;
}
bool HairComponentController::UpdateActorMatrices()
{
if (!m_renderObject->IsEnabled())
{
return false;
}
EMotionFX::ActorInstance* actorInstance = nullptr;
EMotionFX::Integration::ActorComponentRequestBus::EventResult(
actorInstance, m_entityId, &EMotionFX::Integration::ActorComponentRequestBus::Events::GetActorInstance);
if (!actorInstance)
{
return false;
}
const EMotionFX::TransformData* transformData = actorInstance->GetTransformData();
if (!transformData)
{
AZ_WarningOnce("Hair Gem", false, "Error getting the transformData from the actorInstance.");
return false;
}
// In EMotionFX the skinning matrices are stored as 3x4. The conversion to 4x4 matrices happens in the update bone matrices function.
// Here we use the bone index lookup to find the correct EMotionFX bone index (which also serves as the global bone index), and pass the
// matrices of those bones to the hair render object. We do this for both hair and collision bone matrices.
const AZ::Matrix3x4* matrices = transformData->GetSkinningMatrices();
for (AZ::u32 tressFXBoneIndex = 0; tressFXBoneIndex < m_cachedHairBoneMatrices.size(); ++tressFXBoneIndex)
{
const AZ::u32 emfxBoneIndex = m_hairBoneIndexLookup[tressFXBoneIndex];
m_cachedHairBoneMatrices[tressFXBoneIndex] = matrices[emfxBoneIndex];
}
for (AZ::u32 tressFXBoneIndex = 0; tressFXBoneIndex < m_cachedCollisionBoneMatrices.size(); ++tressFXBoneIndex)
{
const AZ::u32 emfxBoneIndex = m_collisionBoneIndexLookup[tressFXBoneIndex];
m_cachedCollisionBoneMatrices[tressFXBoneIndex] = matrices[emfxBoneIndex];
}
m_entityWorldMatrix = Matrix3x4::CreateFromTransform(actorInstance->GetWorldSpaceTransform().ToAZTransform());
m_renderObject->UpdateBoneMatrices(m_entityWorldMatrix, m_cachedHairBoneMatrices);
return true;
}
bool HairComponentController::GenerateLocalToGlobalBoneIndex(
EMotionFX::ActorInstance* actorInstance, AMD::TressFXAsset* hairAsset)
{
// Generate local TressFX to global EMFX bone index lookup.
AMD::BoneNameToIndexMap globalNameToIndexMap;
const EMotionFX::Skeleton* skeleton = actorInstance->GetActor()->GetSkeleton();
if (!skeleton)
{
AZ_Error("Hair Gem", false, "Actor could not retrieve his skeleton.");
return false;
}
const uint32_t numBones = uint32_t(skeleton->GetNumNodes());
globalNameToIndexMap.reserve(size_t(numBones));
for (uint32_t i = 0; i < numBones; ++i)
{
const char* boneName = skeleton->GetNode(i)->GetName();
globalNameToIndexMap[boneName] = i;
}
if (!hairAsset->GenerateLocaltoGlobalHairBoneIndexLookup(globalNameToIndexMap, m_hairBoneIndexLookup) ||
!hairAsset->GenerateLocaltoGlobalCollisionBoneIndexLookup(globalNameToIndexMap, m_collisionBoneIndexLookup))
{
AZ_Error("Hair Gem", false, "Cannot convert local bone index to global bone index. The hair asset may not be compatible with the actor.");
return false;
}
return true;
}
// The hair object will only be created if both conditions are met:
// 1. The hair asset is loaded
// 2. The actor instance is created
bool HairComponentController::CreateHairObject()
{
// Do not create a hairRenderObject when the actor instance hasn't been created yet.
EMotionFX::ActorInstance* actorInstance = nullptr;
EMotionFX::Integration::ActorComponentRequestBus::EventResult(
actorInstance, m_entityId, &EMotionFX::Integration::ActorComponentRequestBus::Events::GetActorInstance);
if (!actorInstance)
{
return false;
}
if (!m_featureProcessor)
{
AZ_Error("Hair Gem", false, "Required feature processor does not exist yet");
return false;
}
if (!m_configuration.m_hairAsset.GetId().IsValid() || !m_configuration.m_hairAsset.IsReady())
{
AZ_Warning("Hair Gem", false, "Hair Asset was not ready - second attempt will be made when ready");
return false;
}
AMD::TressFXAsset* hairAsset = m_configuration.m_hairAsset.Get()->m_tressFXAsset.get();
if (!hairAsset)
{
AZ_Error("Hair Gem", false, "Hair asset could not be loaded");
return false;
}
if (!GenerateLocalToGlobalBoneIndex(actorInstance, hairAsset))
{
return false;
}
// First remove the existing hair object - this can happen if the configuration
// or the hair asset selected changes.
RemoveHairObject();
// Create a new instance - resetting also releases any previous instance.
m_renderObject.reset(new HairRenderObject());
AZStd::string hairName;
AzFramework::StringFunc::Path::GetFileName(m_configuration.m_hairAsset.GetHint().c_str(), hairName);
if (!m_renderObject->Init( m_featureProcessor, hairName.c_str(), hairAsset,
&m_configuration.m_simulationSettings, &m_configuration.m_renderingSettings))
{
AZ_Warning("Hair Gem", false, "Hair object was not initialize succesfully");
m_renderObject.reset(); // no instancing yet - remove manually
return false;
}
// Resize the bone matrices arrays. The size should equal the number of bones in the TressFX asset.
m_cachedHairBoneMatrices.resize(m_hairBoneIndexLookup.size());
m_cachedCollisionBoneMatrices.resize(m_collisionBoneIndexLookup.size());
// Feature processor registration that will hold an instance.
// Remark: DO NOT remove the TressFX asset - its data might be required for
// more hair object instances.
m_featureProcessor->AddHairRenderObject(m_renderObject);
return true;
}
} // namespace Hair
} // namespace Render
} // namespace AZ
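As a minimal, self-contained sketch of the local-to-global bone remapping performed by GenerateLocalToGlobalBoneIndex and consumed by UpdateActorMatrices above: a name-to-index map is built from the skeleton, and each local (hair asset) bone name is resolved to its global skeleton index. The container types and the helper name below are illustrative stand-ins, not the gem's actual API.

// Sketch only (assumed types, not the gem's API): build a lookup that maps each local hair-asset
// bone index to the skeleton's global bone index, using the bone names as the common key.
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

using BoneNameToIndexMap = std::unordered_map<std::string, uint32_t>;

static bool BuildLocalToGlobalLookup(
    const std::vector<std::string>& localBoneNames,   // bone names referenced by the hair asset
    const BoneNameToIndexMap& globalNameToIndex,      // built from the actor's skeleton nodes
    std::vector<uint32_t>& outLookup)
{
    outLookup.clear();
    outLookup.reserve(localBoneNames.size());
    for (const std::string& name : localBoneNames)
    {
        auto it = globalNameToIndex.find(name);
        if (it == globalNameToIndex.end())
        {
            return false; // the hair asset references a bone the actor does not have
        }
        // Per frame the controller then copies skinningMatrices[outLookup[i]] into its cached array.
        outLookup.push_back(it->second);
    }
    return true;
}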

@@ -0,0 +1,124 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Math/Vector2.h>
#include <AzCore/Math/Vector3.h>
#include <AzCore/Component/Component.h>
#include <AzCore/Component/TickBus.h>
// Hair specific
#include <Components/HairBus.h>
#include <Rendering/HairGlobalSettingsBus.h>
#include <Components/HairComponentConfig.h>
#include <Rendering/HairRenderObject.h>
// EMotionFX
#include <Integration/ActorComponentBus.h>
namespace AMD
{
class TressFXSimulationSettings;
class TressFXRenderingSettings;
class TressFXAsset;
}
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairFeatureProcessor;
//! This is the controller class for both the EditorComponent and the in-game Component.
//! It is responsible for the creation and activation of the hair object itself
//! and the update and synchronization of any changed configuration.
//! It is also responsible for the connection with the entity's Actor to which the hair
//! is associated, and for getting the skinning matrices and visibility.
class HairComponentController final
: public HairRequestsBus::Handler
, public HairGlobalSettingsNotificationBus::Handler
, private AZ::Data::AssetBus::MultiHandler
, private AZ::TickBus::Handler
, private EMotionFX::Integration::ActorComponentNotificationBus::Handler
{
public:
friend class EditorHairComponent;
AZ_TYPE_INFO(AZ::Render::HairComponentController, "{81D3EA93-7EAC-44B7-B8CB-0B573DD8D634}");
static void Reflect(AZ::ReflectContext* context);
static void GetProvidedServices(AZ::ComponentDescriptor::DependencyArrayType& provided);
static void GetIncompatibleServices(AZ::ComponentDescriptor::DependencyArrayType& incompatible);
static void GetRequiredServices(AZ::ComponentDescriptor::DependencyArrayType& required);
HairComponentController() = default;
HairComponentController(const HairComponentConfig& config);
~HairComponentController();
void Activate(EntityId entityId);
void Deactivate();
void SetConfiguration(const HairComponentConfig& config);
const HairComponentConfig& GetConfiguration() const;
HairFeatureProcessor* GetFeatureProcessor() { return m_featureProcessor; }
private:
AZ_DISABLE_COPY(HairComponentController);
void OnHairConfigChanged();
void OnHairAssetChanged();
// AZ::Render::Hair::HairGlobalSettingsNotificationBus Overrides
void OnHairGlobalSettingsChanged(const HairGlobalSettings& hairGlobalSettings) override;
// AZ::Data::AssetBus::Handler
void OnAssetReady(AZ::Data::Asset<AZ::Data::AssetData> asset) override;
void OnAssetReloaded(AZ::Data::Asset<AZ::Data::AssetData> asset) override;
// EMotionFX::Integration::ActorComponentNotificationBus::Handler
void OnActorInstanceCreated(EMotionFX::ActorInstance* actorInstance) override;
void OnActorInstanceDestroyed(EMotionFX::ActorInstance* actorInstance) override;
// AZ::TickBus::Handler
void OnTick(float deltaTime, AZ::ScriptTimePoint time) override;
int GetTickOrder() override;
bool GenerateLocalToGlobalBoneIndex( EMotionFX::ActorInstance* actorInstance, AMD::TressFXAsset* hairAsset);
bool CreateHairObject();
void RemoveHairObject();
// Extract actor matrix from the actor instance.
bool UpdateActorMatrices();
HairFeatureProcessor* m_featureProcessor = nullptr;
bool m_configChanged = false; // Flag used to defer the configuration change to onTick.
HairComponentConfig m_configuration; // Settings per hair component
//! Hair render object for connecting to the skeleton and connecting to the feature processor.
Data::Instance<HairRenderObject> m_renderObject; // unique to this component - this is the data source.
EntityId m_entityId;
// Store a cache of the bone index lookup we generated during the creation of hair object.
AMD::LocalToGlobalBoneIndexLookup m_hairBoneIndexLookup;
AMD::LocalToGlobalBoneIndexLookup m_collisionBoneIndexLookup;
// Cache the bone matrices array to avoid frequent allocation.
AZStd::vector<AZ::Matrix3x4> m_cachedHairBoneMatrices;
AZStd::vector<AZ::Matrix3x4> m_cachedCollisionBoneMatrices;
AZ::Matrix3x4 m_entityWorldMatrix;
};
} // namespace Hair
}
}

@@ -0,0 +1,95 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/Serialization/SerializeContext.h>
#include <Atom/RPI.Public/Pass/PassSystemInterface.h>
#include <Atom/RPI.Public/FeatureProcessorFactory.h>
#include <Components/HairSystemComponent.h>
#include <Components/HairComponentController.h>
#include <Rendering/HairFeatureProcessor.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairSystemComponent::HairSystemComponent() = default;
HairSystemComponent::~HairSystemComponent() = default;
void HairSystemComponent::Reflect(ReflectContext* context)
{
if (SerializeContext* serialize = azrtti_cast<SerializeContext*>(context))
{
serialize->Class<HairSystemComponent, Component>()
->Version(0)
;
}
Hair::HairFeatureProcessor::Reflect(context);
}
void HairSystemComponent::GetProvidedServices(ComponentDescriptor::DependencyArrayType& provided)
{
provided.push_back(AZ_CRC_CE("HairService"));
}
void HairSystemComponent::GetIncompatibleServices(ComponentDescriptor::DependencyArrayType& incompatible)
{
incompatible.push_back(AZ_CRC_CE("HairService"));
}
void HairSystemComponent::GetRequiredServices([[maybe_unused]] ComponentDescriptor::DependencyArrayType& required)
{
required.push_back(AZ_CRC("ActorSystemService", 0x5e493d6c));
required.push_back(AZ_CRC("EMotionFXAnimationService", 0x3f8a6369));
}
void HairSystemComponent::LoadPassTemplateMappings()
{
auto* passSystem = RPI::PassSystemInterface::Get();
AZ_Assert(passSystem, "Cannot get the pass system.");
const char* passTemplatesFile = "Passes/AtomTressFX_PassTemplates.azasset";
passSystem->LoadPassTemplateMappings(passTemplatesFile);
}
void HairSystemComponent::Init()
{
}
void HairSystemComponent::Activate()
{
// Feature processor
AZ::RPI::FeatureProcessorFactory::Get()->RegisterFeatureProcessor<Hair::HairFeatureProcessor>();
auto* passSystem = RPI::PassSystemInterface::Get();
AZ_Assert(passSystem, "Cannot get the pass system.");
// Setup handler for load pass templates mappings
m_loadTemplatesHandler = RPI::PassSystemInterface::OnReadyLoadTemplatesEvent::Handler([this]() { this->LoadPassTemplateMappings(); });
passSystem->ConnectEvent(m_loadTemplatesHandler);
// Load the AtomTressFX pass classes
passSystem->AddPassCreator(Name("HairSkinningComputePass"), &HairSkinningComputePass::Create);
passSystem->AddPassCreator(Name("HairPPLLRasterPass"), &HairPPLLRasterPass::Create);
passSystem->AddPassCreator(Name("HairPPLLResolvePass"), &HairPPLLResolvePass::Create);
}
void HairSystemComponent::Deactivate()
{
AZ::RPI::FeatureProcessorFactory::Get()->UnregisterFeatureProcessor<Hair::HairFeatureProcessor>();
m_loadTemplatesHandler.Disconnect();
}
} // namespace Hair
} // End Render namespace
} // End AZ namespace

@@ -0,0 +1,54 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Component/Component.h>
#include <AzCore/Asset/AssetCommon.h>
#include <Atom/RPI.Public/Pass/PassSystemInterface.h>
namespace AZ
{
namespace Render
{
class SharedBuffer;
namespace Hair
{
class HairSystemComponent final
: public Component
{
public:
AZ_COMPONENT(HairSystemComponent, "{F3A56326-1D2F-462D-A9E8-0405A76601A5}");
HairSystemComponent();
~HairSystemComponent();
static void Reflect(ReflectContext* context);
static void GetProvidedServices(ComponentDescriptor::DependencyArrayType& provided);
static void GetIncompatibleServices(ComponentDescriptor::DependencyArrayType& incompatible);
static void GetRequiredServices(ComponentDescriptor::DependencyArrayType& required);
private:
//! Loads the pass templates mapping file
void LoadPassTemplateMappings();
////////////////////////////////////////////////////////////////////////
// Component interface implementation
void Init() override;
void Activate() override;
void Deactivate() override;
////////////////////////////////////////////////////////////////////////
//! Used for loading the pass templates of the hair gem.
RPI::PassSystemInterface::OnReadyLoadTemplatesEvent::Handler m_loadTemplatesHandler;
};
} // namespace Hair
} // End Render namespace
} // End AZ namespace

@@ -0,0 +1,47 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <HairModule.h>
#include <Components/HairSystemComponent.h>
#include <Components/HairComponent.h>
#if defined (ATOMTRESSFX_EDITOR)
#include <Components/EditorHairComponent.h>
#endif // ATOMTRESSFX_EDITOR
namespace AZ
{
namespace Render
{
namespace Hair
{
HairModule::HairModule()
: AZ::Module()
{
m_descriptors.insert(
m_descriptors.end(),
{
HairSystemComponent::CreateDescriptor(),
HairComponent::CreateDescriptor(),
#if defined (ATOMTRESSFX_EDITOR)
// Prevent adding the editor component when the editor and tools are not built
EditorHairComponent::CreateDescriptor(),
#endif // ATOMTRESSFX_EDITOR
});
}
AZ::ComponentTypeList HairModule::GetRequiredSystemComponents() const
{
// add required SystemComponents to the SystemEntity
return AZ::ComponentTypeList{
azrtti_typeid<HairSystemComponent>()
};
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,37 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Memory/SystemAllocator.h>
#include <AzCore/Module/Module.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairModule
: public AZ::Module
{
public:
AZ_RTTI(HairModule, "{0EF06CF0-8011-4668-A31F-A6851583EBDC}", AZ::Module);
AZ_CLASS_ALLOCATOR(HairModule, AZ::SystemAllocator, 0);
HairModule();
//! Add required SystemComponents to the SystemEntity.
AZ::ComponentTypeList GetRequiredSystemComponents() const override;
};
} // namespace Hair
} // namespace Render
} // namespace AZ
AZ_DECLARE_MODULE_CLASS(Gem_AtomTressFX, AZ::Render::Hair::HairModule)

@@ -0,0 +1,11 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/Module/Module.h>
//AZ_DECLARE_MODULE_CLASS(Gem_AtomTressFX, AZ::Module)

@@ -0,0 +1,320 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
//#include <Atom/RHI/CommandList.h>
#include <Atom/RHI/RHISystemInterface.h>
#include <Atom/RHI/DrawPacketBuilder.h>
#include <Atom/RHI/PipelineState.h>
#include <Atom/RPI.Public/View.h>
#include <Atom/RPI.Public/RPIUtils.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RPI.Public/RPISystemInterface.h>
#include <Atom/RPI.Public/Pass/PassUtils.h>
#include <Atom/RPI.Public/Scene.h>
#include <Atom/RPI.Reflect/Asset/AssetUtils.h>
#include <Atom/RPI.Reflect/Pass/RasterPassData.h>
#include <Atom/RPI.Reflect/Pass/PassTemplate.h>
#include <Atom/RPI.Reflect/Shader/ShaderAsset.h>
#include <Passes/HairPPLLRasterPass.h>
#include <Rendering/HairRenderObject.h>
#include <Rendering/HairFeatureProcessor.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
// --- Creation & Initialization ---
RPI::Ptr<HairPPLLRasterPass> HairPPLLRasterPass::Create(const RPI::PassDescriptor& descriptor)
{
RPI::Ptr<HairPPLLRasterPass> pass = aznew HairPPLLRasterPass(descriptor);
return pass;
}
HairPPLLRasterPass::HairPPLLRasterPass(const RPI::PassDescriptor& descriptor)
: RasterPass(descriptor),
m_passDescriptor(descriptor)
{
}
HairPPLLRasterPass::~HairPPLLRasterPass()
{
}
bool HairPPLLRasterPass::AcquireFeatureProcessor()
{
if (m_featureProcessor)
{
return true;
}
RPI::Scene* scene = GetScene();
if (scene)
{
m_featureProcessor = scene->GetFeatureProcessor<HairFeatureProcessor>();
}
else
{
return false;
}
if (!m_featureProcessor)
{
AZ_Warning("Hair Gem", false,
"HairPPLLRasterPass [%s] - Failed to retrieve Hair feature processor from the scene",
GetName().GetCStr());
return false;
}
return true;
}
void HairPPLLRasterPass::InitializeInternal()
{
if (GetScene())
{
RasterPass::InitializeInternal();
}
}
void HairPPLLRasterPass::BuildInternal()
{
RasterPass::BuildInternal();
if (!AcquireFeatureProcessor())
{
return;
}
if (!LoadShaderAndPipelineState())
{
return;
}
// Output
AttachBufferToSlot(Name{ "PerPixelLinkedList" }, m_featureProcessor->GetPerPixelListBuffer());
}
bool HairPPLLRasterPass::IsEnabled() const
{
return RPI::RasterPass::IsEnabled() && m_initialized;
}
bool HairPPLLRasterPass::LoadShaderAndPipelineState()
{
RPI::ShaderReloadNotificationBus::Handler::BusDisconnect();
const RPI::RasterPassData* passData = RPI::PassUtils::GetPassData<RPI::RasterPassData>(m_passDescriptor);
// If we successfully retrieved our custom data, use it to set the DrawListTag
if (!passData)
{
AZ_Error("Hair Gem", false, "Missing pass raster data");
return false;
}
// Load Shader
const char* shaderFilePath = "Shaders/hairrenderingfillppll.azshader";
Data::Asset<RPI::ShaderAsset> shaderAsset =
RPI::AssetUtils::LoadAssetByProductPath<RPI::ShaderAsset>(shaderFilePath, RPI::AssetUtils::TraceLevel::Error);
if (!shaderAsset.GetId().IsValid())
{
AZ_Error("Hair Gem", false, "Invalid shader asset for shader '%s'!", shaderFilePath);
return false;
}
m_shader = RPI::Shader::FindOrCreate(shaderAsset);
if (m_shader == nullptr)
{
AZ_Error("Hair Gem", false, "Pass failed to load shader '%s'!", shaderFilePath);
return false;
}
// Per Pass Srg
{
// Using the 'PassSrg' name since RasterPass currently assumes that the pass Srg is always named 'PassSrg'.
// [To Do] - RasterPass should use the srg slot index and not the name - currently changing this will
// result in a crash in one of Atom's existing MSAA passes and requires further investigation.
// m_shaderResourceGroup = UtilityClass::CreateShaderResourceGroup(m_shader, "HairPerPassSrg", "Hair Gem");
m_shaderResourceGroup = UtilityClass::CreateShaderResourceGroup(m_shader, "PassSrg", "Hair Gem");
if (!m_shaderResourceGroup)
{
AZ_Error("Hair Gem", false, "Failed to create the per pass srg");
return false;
}
}
const RPI::ShaderVariant& shaderVariant = m_shader->GetVariant(RPI::ShaderAsset::RootShaderVariantStableId);
RHI::PipelineStateDescriptorForDraw pipelineStateDescriptor;
shaderVariant.ConfigurePipelineState(pipelineStateDescriptor);
RPI::Scene* scene = GetScene();
if (!scene)
{
AZ_Error("Hair Gem", false, "Scene could not be acquired" );
return false;
}
RHI::DrawListTag drawListTag = m_shader->GetDrawListTag();
scene->ConfigurePipelineState(drawListTag, pipelineStateDescriptor);
pipelineStateDescriptor.m_renderAttachmentConfiguration = GetRenderAttachmentConfiguration();
pipelineStateDescriptor.m_inputStreamLayout.SetTopology(AZ::RHI::PrimitiveTopology::TriangleList);
pipelineStateDescriptor.m_inputStreamLayout.Finalize();
m_pipelineState = m_shader->AcquirePipelineState(pipelineStateDescriptor);
if (!m_pipelineState)
{
AZ_Error("Hair Gem", false, "Pipeline state could not be acquired");
return false;
}
RPI::ShaderReloadNotificationBus::Handler::BusConnect(shaderAsset.GetId());
m_initialized = true;
return true;
}
void HairPPLLRasterPass::SchedulePacketBuild(HairRenderObject* hairObject)
{
m_newRenderObjects.insert(hairObject);
}
bool HairPPLLRasterPass::BuildDrawPacket(HairRenderObject* hairObject)
{
if (!m_initialized)
{
return false;
}
RHI::DrawPacketBuilder::DrawRequest drawRequest;
drawRequest.m_listTag = m_drawListTag;
drawRequest.m_pipelineState = m_pipelineState;
// drawRequest.m_streamBufferViews = // no explicit vertex buffer. shader is using the srg buffers
drawRequest.m_stencilRef = 0;
drawRequest.m_sortKey = 0;
// Seems that the PerView and PerScene are gathered through RenderPass::CollectSrgs()
// The PerPass is gathered through the RasterPass::m_shaderResourceGroup
AZStd::lock_guard<AZStd::mutex> lock(m_mutex);
return hairObject->BuildPPLLDrawPacket(drawRequest);
}
bool HairPPLLRasterPass::AddDrawPackets(AZStd::list<Data::Instance<HairRenderObject>>& hairRenderObjects)
{
bool overallSuccess = true;
if (!m_currentView &&
(!(m_currentView = GetView()) || !m_currentView->HasDrawListTag(m_drawListTag)))
{
m_currentView = nullptr; // set it to nullptr to prevent further attempts this frame
AZ_Warning("Hair Gem", false, "HairPPLLRasterPass failed to acquire or match the DrawListTag - check that your pass and shader tag name match");
return false;
}
for (auto& renderObject : hairRenderObjects)
{
const RHI::DrawPacket* drawPacket = renderObject->GetFillDrawPacket();
if (!drawPacket)
{ // might not be an error - the object might have just been added and the DrawPacket is
// scheduled to be built when the render frame begins
AZ_Warning("Hair Gem", !m_newRenderObjects.empty(), "HairPPLLRasterPass - DrawPacket wasn't built");
overallSuccess = false;
continue;
}
m_currentView->AddDrawPacket(drawPacket);
}
return overallSuccess;
}
void HairPPLLRasterPass::FrameBeginInternal(FramePrepareParams params)
{
{
AZStd::lock_guard<AZStd::mutex> lock(m_mutex);
if (!m_initialized && AcquireFeatureProcessor())
{
LoadShaderAndPipelineState();
m_featureProcessor->ForceRebuildRenderData();
}
}
if (!m_initialized)
{
return;
}
// Bind the Per Object resources and trigger the RHI validation that will use the attachments
// for its validation. The attachments are invalidated outside the render begin/end frame.
for (HairRenderObject* newObject : m_newRenderObjects)
{
newObject->BindPerObjectSrgForRaster();
BuildDrawPacket(newObject);
}
// Clear the newly added objects - BuildDrawPacket should only be carried out once per
// object/shader lifetime
m_newRenderObjects.clear();
// Refresh current view every frame
if (!(m_currentView = GetView()) || !m_currentView->HasDrawListTag(m_drawListTag))
{
m_currentView = nullptr; // set it to null if view exists but no tag match
AZ_Warning("Hair Gem", false, "HairPPLLRasterPass failed to acquire or match the DrawListTag - check that your pass and shader tag name match");
return;
}
RPI::RasterPass::FrameBeginInternal(params);
}
void HairPPLLRasterPass::CompileResources(const RHI::FrameGraphCompileContext& context)
{
AZ_PROFILE_FUNCTION(AzRender);
if (!m_featureProcessor)
{
return;
}
// Compilation of remaining srgs will be done by the parent class
RPI::RasterPass::CompileResources(context);
}
void HairPPLLRasterPass::BuildShaderAndRenderData()
{
AZStd::lock_guard<AZStd::mutex> lock(m_mutex);
m_initialized = false; // make sure initialization happens again even if it does not happen in this frame
if (AcquireFeatureProcessor())
{
LoadShaderAndPipelineState();
m_featureProcessor->ForceRebuildRenderData();
}
}
void HairPPLLRasterPass::OnShaderReinitialized([[maybe_unused]] const RPI::Shader & shader)
{
BuildShaderAndRenderData();
}
void HairPPLLRasterPass::OnShaderAssetReinitialized([[maybe_unused]] const Data::Asset<RPI::ShaderAsset>& shaderAsset)
{
BuildShaderAndRenderData();
}
void HairPPLLRasterPass::OnShaderVariantReinitialized([[maybe_unused]] const AZ::RPI::ShaderVariant& shaderVariant)
{
BuildShaderAndRenderData();
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,116 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Memory/SystemAllocator.h>
#include <Atom/RHI.Reflect/Size.h>
#include <Atom/RPI.Public/Pass/RasterPass.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
#include <Atom/RPI.Public/Shader/ShaderReloadNotificationBus.h>
namespace AZ
{
namespace RHI
{
struct DrawItem;
}
namespace Render
{
namespace Hair
{
class HairRenderObject;
class HairFeatureProcessor;
//! A HairPPLLRasterPass is used for the hair fragments fill render after the data
//! went through the skinning and simulation passes.
//! The output of this pass is the general list of fragment data that can now be
//! traversed for depth resolve and lighting.
//! The Fill pass uses the following Srgs:
//! - PerPassSrg shared by all hair passes for the shared dynamic buffer and the PPLL buffers
//! - PerMaterialSrg - used solely by this pass to alter the vertices and apply the visual
//! hair properties to each fragment.
//! - HairDynamicDataSrg (PerObjectSrg) - shared buffers views for this hair object only.
//! - PerViewSrg and PerSceneSrg - as per the data from Atom.
class HairPPLLRasterPass
: public RPI::RasterPass
, private RPI::ShaderReloadNotificationBus::Handler
{
AZ_RPI_PASS(HairPPLLRasterPass);
public:
AZ_RTTI(HairPPLLRasterPass, "{6614D7DD-24EE-4A2B-B314-7C035E2FB3C4}", RasterPass);
AZ_CLASS_ALLOCATOR(HairPPLLRasterPass, SystemAllocator, 0);
virtual ~HairPPLLRasterPass();
//! Creates a HairPPLLRasterPass
static RPI::Ptr<HairPPLLRasterPass> Create(const RPI::PassDescriptor& descriptor);
bool AddDrawPackets(AZStd::list<Data::Instance<HairRenderObject>>& hairObjects);
//! The following will be called when an object was added or shader has been compiled
void SchedulePacketBuild(HairRenderObject* hairObject);
Data::Instance<RPI::Shader> GetShader() { return m_shader; }
void SetFeatureProcessor(HairFeatureProcessor* featureProcessor)
{
m_featureProcessor = featureProcessor;
}
virtual bool IsEnabled() const override;
protected:
// ShaderReloadNotificationBus::Handler overrides...
void OnShaderReinitialized(const RPI::Shader& shader) override;
void OnShaderAssetReinitialized(const Data::Asset<RPI::ShaderAsset>& shaderAsset) override;
void OnShaderVariantReinitialized(const AZ::RPI::ShaderVariant& shaderVariant) override;
private:
explicit HairPPLLRasterPass(const RPI::PassDescriptor& descriptor);
bool LoadShaderAndPipelineState();
bool AcquireFeatureProcessor();
void BuildShaderAndRenderData();
bool BuildDrawPacket(HairRenderObject* hairObject);
// Pass behavior overrides
void InitializeInternal() override;
void BuildInternal() override;
void FrameBeginInternal(FramePrepareParams params) override;
// Scope producer functions...
void CompileResources(const RHI::FrameGraphCompileContext& context) override;
private:
HairFeatureProcessor* m_featureProcessor = nullptr;
// The shader that will be used by the pass
Data::Instance<RPI::Shader> m_shader = nullptr;
// To help create the pipeline state
RPI::PassDescriptor m_passDescriptor;
const RHI::PipelineState* m_pipelineState = nullptr;
RPI::ViewPtr m_currentView = nullptr;
AZStd::mutex m_mutex;
//! List of new render objects introduced this frame whose Per Object (dynamic)
//! Srg should be bound to the resources.
//! Done once for every new object introduced or requiring an update.
AZStd::unordered_set<HairRenderObject*> m_newRenderObjects;
bool m_initialized = false;
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,142 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Atom/RPI.Public/Pass/PassUtils.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RHI/RHISystemInterface.h>
#include <Atom/RPI.Public/Shader/ShaderSystem.h>
#include <Atom/RPI.Public/RPIUtils.h>
#include <Atom/RPI.Public/Scene.h>
#include <Atom/RPI.Reflect/Pass/PassTemplate.h>
#include <Atom/RPI.Reflect/Shader/ShaderAsset.h>
#include <Passes/HairPPLLResolvePass.h>
#include <Rendering/HairFeatureProcessor.h>
#include <Rendering/HairLightingModels.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairPPLLResolvePass::HairPPLLResolvePass(const RPI::PassDescriptor& descriptor)
: RPI::FullscreenTrianglePass(descriptor)
{
}
void HairPPLLResolvePass::UpdateGlobalShaderOptions()
{
RPI::ShaderOptionGroup shaderOption = m_shader->CreateShaderOptionGroup();
m_featureProcessor->GetHairGlobalSettings(m_hairGlobalSettings);
shaderOption.SetValue(AZ::Name("o_enableShadows"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableShadows });
shaderOption.SetValue(AZ::Name("o_enableDirectionalLights"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableDirectionalLights });
shaderOption.SetValue(AZ::Name("o_enablePunctualLights"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enablePunctualLights });
shaderOption.SetValue(AZ::Name("o_enableAreaLights"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableAreaLights });
shaderOption.SetValue(AZ::Name("o_enableIBL"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableIBL });
shaderOption.SetValue(AZ::Name("o_hairLightingModel"), AZ::Name{ "HairLightingModel::" + AZStd::string(HairLightingModelNamespace::ToString(m_hairGlobalSettings.m_hairLightingModel)) });
shaderOption.SetValue(AZ::Name("o_enableMarschner_R"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableMarschner_R });
shaderOption.SetValue(AZ::Name("o_enableMarschner_TRT"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableMarschner_TRT });
shaderOption.SetValue(AZ::Name("o_enableMarschner_TT"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableMarschner_TT });
shaderOption.SetValue(AZ::Name("o_enableDiffuseLobe"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableDiffuseLobe });
shaderOption.SetValue(AZ::Name("o_enableSpecularLobe"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableSpecularLobe });
shaderOption.SetValue(AZ::Name("o_enableTransmittanceLobe"), AZ::RPI::ShaderOptionValue{ m_hairGlobalSettings.m_enableTransmittanceLobe });
m_shaderOptions = shaderOption.GetShaderVariantKeyFallbackValue();
}
RPI::Ptr<HairPPLLResolvePass> HairPPLLResolvePass::Create(const RPI::PassDescriptor& descriptor)
{
RPI::Ptr<HairPPLLResolvePass> pass = aznew HairPPLLResolvePass(descriptor);
return pass;
}
void HairPPLLResolvePass::InitializeInternal()
{
if (GetScene())
{
FullscreenTrianglePass::InitializeInternal();
}
}
bool HairPPLLResolvePass::AcquireFeatureProcessor()
{
if (m_featureProcessor)
{
return true;
}
RPI::Scene* scene = GetScene();
if (scene)
{
m_featureProcessor = scene->GetFeatureProcessor<HairFeatureProcessor>();
}
else
{
return false;
}
if (!m_featureProcessor || !m_featureProcessor->IsInitialized())
{
AZ_Warning("Hair Gem", m_featureProcessor,
"HairPPLLResolvePass [%s] - Failed to retrieve Hair feature processor from the scene",
GetName().GetCStr());
m_featureProcessor = nullptr; // set it as null if not initialized to repeat this check.
return false;
}
return true;
}
void HairPPLLResolvePass::CompileResources(const RHI::FrameGraphCompileContext& context)
{
if (!m_shaderResourceGroup || !AcquireFeatureProcessor())
{
AZ_Error("Hair Gem", m_shaderResourceGroup, "HairPPLLResolvePass: PPLL list data was not bound - missing Srg");
return; // if only the feature processor is missing, no error is reported - initialization is not complete yet, wait for the next frame
}
UpdateGlobalShaderOptions();
if (m_shaderResourceGroup->HasShaderVariantKeyFallbackEntry())
{
m_shaderResourceGroup->SetShaderVariantKeyFallbackValue(m_shaderOptions);
}
SrgBufferDescriptor descriptor = SrgBufferDescriptor(
RPI::CommonBufferPoolType::ReadWrite, RHI::Format::Unknown,
PPLL_NODE_SIZE, RESERVED_PIXELS_FOR_OIT,
Name{ "LinkedListNodesPPLL" }, Name{ "m_linkedListNodes" }, 0, 0
);
if (!UtilityClass::BindBufferToSrg("Hair Gem", m_featureProcessor->GetPerPixelListBuffer(), descriptor, m_shaderResourceGroup))
{
AZ_Error("Hair Gem", false, "HairPPLLResolvePass: PPLL list data could not be bound.");
}
// Update the material array constant buffer within the per pass srg
descriptor = SrgBufferDescriptor(
RPI::CommonBufferPoolType::Constant, RHI::Format::Unknown,
sizeof(AMD::TressFXShadeParams), 1,
Name{ "HairMaterialsArray" }, Name{ "m_hairParams" }, 0, 0
);
m_featureProcessor->GetMaterialsArray().UpdateGPUData(m_shaderResourceGroup, descriptor);
// All remaining srgs should compile here
FullscreenTrianglePass::CompileResources(context);
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,73 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Memory/SystemAllocator.h>
#include <Atom/RPI.Public/Pass/FullscreenTrianglePass.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
#include <Atom/RPI.Public/Shader/ShaderReloadNotificationBus.h>
#include <TressFX/TressFXConstantBuffers.h>
#include <Rendering/HairCommon.h>
#include <Rendering/HairGlobalSettings.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairFeatureProcessor;
//! The hair PPLL resolve pass is a full screen pass that runs over the hair fragments list
//! that were computed in the raster fill pass and resolves their depth order, transparency
//! and lighting values to be output to display.
//! Each pixel on the screen will be processed only once and will iterate through the fragments
//! list associated with the pixel's location.
//! The full screen resolve pass is using the following Srgs:
//! - PerPassSrg: hair vertex data, PPLL buffers and material array
//! shared by all passes.
class HairPPLLResolvePass final
: public RPI::FullscreenTrianglePass
{
AZ_RPI_PASS(HairPPLLResolvePass);
public:
AZ_RTTI(HairPPLLResolvePass, "{240940C1-4A47-480D-8B16-176FF3359B01}", RPI::FullscreenTrianglePass);
AZ_CLASS_ALLOCATOR(HairPPLLResolvePass, SystemAllocator, 0);
~HairPPLLResolvePass() = default;
static RPI::Ptr<HairPPLLResolvePass> Create(const RPI::PassDescriptor& descriptor);
bool AcquireFeatureProcessor();
void SetFeatureProcessor(HairFeatureProcessor* featureProcessor)
{
m_featureProcessor = featureProcessor;
}
//! Override pass behavior methods
void InitializeInternal() override;
void CompileResources(const RHI::FrameGraphCompileContext& context) override;
private:
HairPPLLResolvePass(const RPI::PassDescriptor& descriptor);
private:
void UpdateGlobalShaderOptions();
HairGlobalSettings m_hairGlobalSettings;
HairFeatureProcessor* m_featureProcessor = nullptr;
AZ::RPI::ShaderVariantKey m_shaderOptions;
};
} // namespace Hair
} // namespace Render
} // namespace AZ
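As a conceptual aid for the resolve described above, here is a minimal CPU-side sketch of the per-pixel linked-list traversal: the fill pass appends hair fragments to a shared node buffer and records a per-pixel head index, and the resolve walks that list, sorts the fragments by depth and accumulates their contribution. The node layout and names are illustrative only and do not match the gem's actual PPLL node format.

// Illustrative PPLL resolve for a single pixel (assumed node layout, not the gem's format).
#include <algorithm>
#include <cstdint>
#include <vector>

struct PPLLNode
{
    float depth;     // fragment depth
    uint32_t color;  // packed RGBA of the shaded hair fragment
    uint32_t next;   // index of the next node for this pixel, or kEndOfList
};
constexpr uint32_t kEndOfList = 0xFFFFFFFFu;

// headIndex comes from the per-pixel head image; nodes is the shared linked-list buffer.
float ResolvePixelCoverage(uint32_t headIndex, const std::vector<PPLLNode>& nodes)
{
    std::vector<const PPLLNode*> fragments;
    for (uint32_t i = headIndex; i != kEndOfList; i = nodes[i].next)
    {
        fragments.push_back(&nodes[i]);
    }
    // Sort front-to-back by depth, then accumulate coverage (the actual shader also blends color).
    std::sort(fragments.begin(), fragments.end(),
        [](const PPLLNode* a, const PPLLNode* b) { return a->depth < b->depth; });
    float coverage = 0.0f;
    for (const PPLLNode* fragment : fragments)
    {
        const float alpha = ((fragment->color >> 24) & 0xFF) / 255.0f;
        coverage += (1.0f - coverage) * alpha;
    }
    return coverage;
}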

@@ -0,0 +1,55 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Atom/RHI/DrawListTagRegistry.h>
#include <Atom/RHI/RHISystemInterface.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RPI.Reflect/Pass/RasterPassData.h>
namespace AZ
{
namespace Render
{
HairParentPass::HairParentPass(const RPI::PassDescriptor& descriptor)
: Base(descriptor)
{
}
HairParentPass::~HairParentPass()
{
}
RPI::Ptr<HairParentPass> HairParentPass::Create(const RPI::PassDescriptor& descriptor)
{
return aznew HairParentPass(descriptor);
}
// The following two methods are here as the means to allow usage of different hair passes in
// the active pipeline based on which hair options are activated.
// For example - one might want to use the ShortCut resolve render method instead of the complete
// full-buffers method used currently (which costs much more memory), or to enable/disable collisions
// by removing the collision passes.
// [To Do] - The parent pass class can be removed if this is not done.
void HairParentPass::UpdateChildren()
{
if (!m_updateChildren)
{
return;
}
m_updateChildren = false;
}
void HairParentPass::BuildAttachmentsInternal()
{
UpdateChildren();
Base::BuildAttachmentsInternal();
}
} // namespace Render
} // namespace AZ

@@ -0,0 +1,47 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <Atom/RPI.Public/Pass/ParentPass.h>
#include <AtomCore/std/containers/array_view.h>
#include <AzCore/std/containers/fixed_vector.h>
namespace AZ
{
namespace Render
{
//! HairParentPass owns the hair passes.
//! Currently they are all defined via the pipeline configuration, making the HairParentPass
//! class somewhat redundant, but moving forward, such a class can be required to control
//! pass activation and usage based on user activation options.
class HairParentPass final
: public RPI::ParentPass
{
using Base = RPI::ParentPass;
AZ_RPI_PASS(HairParentPass);
public:
AZ_RTTI(HairParentPass, "{80C7E869-2513-4201-8C1E-D2E39DDE1244}", Base);
AZ_CLASS_ALLOCATOR(HairParentPass, SystemAllocator, 0);
virtual ~HairParentPass();
static RPI::Ptr<HairParentPass> Create(const RPI::PassDescriptor& descriptor);
private:
HairParentPass() = delete;
explicit HairParentPass(const RPI::PassDescriptor& descriptor);
// RPI::Pass overrides...
void BuildAttachmentsInternal() override;
void UpdateChildren();
bool m_updateChildren = true;
};
} // namespace Render
} // namespace AZ

@@ -0,0 +1,269 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Atom/RHI/CommandList.h>
#include <Atom/RHI/Factory.h>
#include <Atom/RHI/FrameGraphAttachmentInterface.h>
#include <Atom/RHI/FrameGraphInterface.h>
#include <Atom/RHI/PipelineState.h>
#include <Atom/RPI.Public/Base.h>
#include <Atom/RPI.Public/Pass/PassUtils.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RHI/RHISystemInterface.h>
#include <Atom/RPI.Public/RPIUtils.h>
#include <Atom/RPI.Public/Scene.h>
#include <Atom/RPI.Public/View.h>
#include <Atom/RPI.Reflect/Pass/PassTemplate.h>
#include <Atom/RPI.Reflect/Shader/ShaderAsset.h>
// Hair Specific
#include <Rendering/HairCommon.h>
#include <Rendering/HairSharedBufferInterface.h>
#include <Rendering/HairDispatchItem.h>
#include <Rendering/HairFeatureProcessor.h>
#include <Rendering/HairRenderObject.h>
#include <Passes/HairSkinningComputePass.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
Data::Instance<RPI::Shader> HairSkinningComputePass::GetShader()
{
return m_shader;
}
bool HairSkinningComputePass::AcquireFeatureProcessor()
{
if (m_featureProcessor)
{
return true;
}
RPI::Scene* scene = GetScene();
if (scene)
{
m_featureProcessor = scene->GetFeatureProcessor<HairFeatureProcessor>();
}
else
{
return false;
}
if (!m_featureProcessor)
{
AZ_Warning("Hair Gem", false,
"HairSkinningComputePass [%s] - Failed to retrieve Hair feature processor from the scene",
GetName().GetCStr());
return false;
}
return true;
}
RPI::Ptr<HairSkinningComputePass> HairSkinningComputePass::Create(const RPI::PassDescriptor& descriptor)
{
RPI::Ptr<HairSkinningComputePass> pass = aznew HairSkinningComputePass(descriptor);
return pass;
}
HairSkinningComputePass::HairSkinningComputePass(const RPI::PassDescriptor& descriptor)
: RPI::ComputePass(descriptor)
{
}
void HairSkinningComputePass::InitializeInternal()
{
if (GetScene())
{
ComputePass::InitializeInternal();
}
}
void HairSkinningComputePass::BuildInternal()
{
ComputePass::BuildInternal();
if (!AcquireFeatureProcessor())
{
return;
}
// Output
// This is the buffer that is shared between all objects and dispatches and contains
// the dynamic data that can be changed between passes.
Name bufferName = Name{ "SkinnedHairSharedBuffer" };
RPI::PassAttachmentBinding* localBinding = FindAttachmentBinding(bufferName);
if (localBinding && !localBinding->m_attachment)
{
AttachBufferToSlot(bufferName, HairSharedBufferInterface::Get()->GetBuffer());
}
}
void HairSkinningComputePass::FrameBeginInternal(FramePrepareParams params)
{
if (m_buildShaderAndData)
{ // Shader rebuild is required - the async callback did not succeed (missing FP?)
if (AcquireFeatureProcessor())
{ // FP exists or can be acquired
LoadShader(); // this will happen in this frame
// Flag the FP not to add any more dispatches until shader rebuild was done
m_featureProcessor->SetAddDispatchEnable(false);
// The following will force rebuild in the next frame keeping this frame clean.
m_featureProcessor->ForceRebuildRenderData();
m_buildShaderAndData = false;
}
// Clear the dispatch items, they will be re-populated next frame
m_dispatchItems.clear();
}
// This will bind the Per Object resources since the binding triggers
// the RHI validation that will use the attachments for its validation. The attachments
// are invalidated outside the render begin / end frame.
for (HairRenderObject* newObject : m_newRenderObjects)
{
newObject->BindPerObjectSrgForCompute();
}
// Clear the objects so this is only done once per object/shader lifetime
m_newRenderObjects.clear();
RPI::ComputePass::FrameBeginInternal(params);
}
void HairSkinningComputePass::CompileResources([[maybe_unused]] const RHI::FrameGraphCompileContext& context)
{
if (!m_featureProcessor)
{
return;
}
// DON'T call ComputePass::CompileResources as it will try to compile the perDraw srg
// under the assumption that this is a single dispatch compute pass. Here we have a dispatch
// per hair object and each has its own perDraw srg.
if (m_shaderResourceGroup != nullptr)
{
BindPassSrg(context, m_shaderResourceGroup);
m_shaderResourceGroup->Compile();
}
}
bool HairSkinningComputePass::IsEnabled() const
{
return RPI::ComputePass::IsEnabled();// (m_dispatchItems.size() ? true : false);
}
bool HairSkinningComputePass::BuildDispatchItem(HairRenderObject* hairObject, DispatchLevel dispatchLevel)
{
m_newRenderObjects.insert(hairObject);
return hairObject->BuildDispatchItem(m_shader.get(), dispatchLevel);
}
void HairSkinningComputePass::AddDispatchItems(AZStd::list<Data::Instance<HairRenderObject>>& hairRenderObjects)
{
// The following mutex is used to block the shader switch when a hot reload occurs, hence ensuring
// that the shader exists and that the same shader, data and dispatch items are used across all hair
// objects during this frame.
// Several cases exist:
// 1. Hot reload was invoked first - it either finished before this method or the mutex in this method
// is waited upon. The hot reload flag was already set, resulting in an exit without adding dispatches.
// 2. Hot reload was invoked after this method - BuildCommandListInternal will test for the flag and
// clear the dispatches if required.
// 3. Hot reload was invoked after the send to the GPU (BuildCommandListInternal) - the data sent is valid
// and it is safe to change the shader and create new dispatches.
// Remark: BuildCommandListInternal does not need to be synchronized as well since, if the data was already
// inserted, it is consistent and valid, using the existing shader and data with instance counting.
AZStd::lock_guard<AZStd::mutex> lock(m_mutex);
if (m_buildShaderAndData)
{ // mutex was held by the hot reload and released - abort render until done. List is empty.
return;
}
for (auto& renderObject : hairRenderObjects)
{
if (!renderObject->IsEnabled())
{
continue;
}
const RHI::DispatchItem* dispatchItem = renderObject->GetDispatchItem(m_shader.get());
if (!dispatchItem)
{
continue;
}
uint32_t iterations = m_allowSimIterations ? AZ::GetMax(renderObject->GetCPULocalShapeIterations(), 1) : 1;
for (uint32_t j = 0; j < iterations; ++j)
{
m_dispatchItems.insert(dispatchItem);
}
}
}
void HairSkinningComputePass::BuildCommandListInternal(const RHI::FrameGraphExecuteContext& context)
{
if (m_buildShaderAndData)
{ // Protect against shader and data async change that were not carried out
m_dispatchItems.clear();
return;
}
RHI::CommandList* commandList = context.GetCommandList();
// The following will bind all registered Srgs set in m_shaderResourceGroupsToBind
// and sends them to the command list ahead of the dispatch.
// This includes the PerView, PerScene and PerPass srgs (what about per draw?)
SetSrgsForDispatch(commandList);
for (const RHI::DispatchItem* dispatchItem : m_dispatchItems)
{
commandList->Submit(*dispatchItem);
}
// Clear the dispatch items. They will need to be re-populated next frame
m_dispatchItems.clear();
}
void HairSkinningComputePass::BuildShaderAndRenderData()
{
AZStd::lock_guard<AZStd::mutex> lock(m_mutex);
m_buildShaderAndData = true;
if (AcquireFeatureProcessor())
{
// Flag the FP not to add any more dispatches until shader rebuild was done
m_featureProcessor->SetAddDispatchEnable(false);
}
}
// Before reloading shaders, we want to wait for existing dispatches to finish
// so shader reloading does not interfere in any way. Because AP reloads are async, there might
// be a case where dispatch resources are destructed, which will most certainly cause a GPU crash.
// If we flag the need for rebuild, the build will be made at the start of the next frame - at
// this stage the dispatch items should have been cleared - now we can load the shader and data.
void HairSkinningComputePass::OnShaderReinitialized([[maybe_unused]] const AZ::RPI::Shader& shader)
{
BuildShaderAndRenderData();
}
void HairSkinningComputePass::OnShaderAssetReinitialized([[maybe_unused]] const Data::Asset<AZ::RPI::ShaderAsset>& shaderAsset)
{
BuildShaderAndRenderData();
}
void HairSkinningComputePass::OnShaderVariantReinitialized([[maybe_unused]] const AZ::RPI::ShaderVariant& shaderVariant)
{
BuildShaderAndRenderData();
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,113 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Memory/SystemAllocator.h>
#include <Atom/RHI.Reflect/Size.h>
#include <Atom/RPI.Public/Pass/ComputePass.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
namespace AZ
{
namespace RHI
{
struct DispatchItem;
}
namespace Render
{
namespace Hair
{
class HairDispatchItem;
class HairFeatureProcessor;
class HairRenderObject;
enum class DispatchLevel;
//! This pass class serves all the skinning and simulation hair compute passes.
//! The Skinning Compute passes are all using the following Srgs via the dispatchItem:
//! - PerPassSrg: shared by all hair passes for the shared dynamic buffer and the PPLL buffers
//! - HairGenerationSrg: dictates how to construct the hair vertices and skinning
//! - HairSimSrg: defines vertices and tangent data shared between all passes
class HairSkinningComputePass final
: public RPI::ComputePass
{
AZ_RPI_PASS(HairSkinningComputePass);
public:
AZ_RTTI(HairSkinningComputePass, "{DC8D323E-41FF-4FED-89C6-A254FD6809FC}", RPI::ComputePass);
AZ_CLASS_ALLOCATOR(HairSkinningComputePass, SystemAllocator, 0);
~HairSkinningComputePass() = default;
// Creates a HairSkinningComputePass
static RPI::Ptr<HairSkinningComputePass> Create(const RPI::PassDescriptor& descriptor);
bool BuildDispatchItem(HairRenderObject* hairObject, DispatchLevel dispatchLevel );
//! Thread-safe function for adding the frame's dispatch items
void AddDispatchItems(AZStd::list<Data::Instance<HairRenderObject>>& renderObjects);
// Pass behavior overrides
void CompileResources(const RHI::FrameGraphCompileContext& context) override;
virtual bool IsEnabled() const override;
//! Returns the shader held by the ComputePass
Data::Instance<RPI::Shader> GetShader();
void SetFeatureProcessor(HairFeatureProcessor* featureProcessor)
{
m_featureProcessor = featureProcessor;
}
void SetAllowIterations(bool allowIterations)
{
m_allowSimIterations = allowIterations;
}
protected:
HairSkinningComputePass(const RPI::PassDescriptor& descriptor);
void BuildCommandListInternal(const RHI::FrameGraphExecuteContext& context) override;
// Attach here all the pass buffers
void InitializeInternal() override;
void BuildInternal() override;
void FrameBeginInternal(FramePrepareParams params) override;
// ShaderReloadNotificationBus::Handler overrides...
void OnShaderReinitialized(const AZ::RPI::Shader& shader) override;
void OnShaderAssetReinitialized(const Data::Asset<AZ::RPI::ShaderAsset>& shaderAsset) override;
void OnShaderVariantReinitialized(const AZ::RPI::ShaderVariant& shaderVariant) override;
bool AcquireFeatureProcessor();
void BuildShaderAndRenderData();
private:
HairFeatureProcessor* m_featureProcessor = nullptr;
bool m_allowSimIterations = false;
bool m_initialized = false;
bool m_buildShaderAndData = false; // If shader is updated, mark it for build
AZStd::mutex m_mutex;
//! List of dispatch items, each representing a single hair object that
//! will be used by the skinning compute shader.
AZStd::unordered_set<const RHI::DispatchItem*> m_dispatchItems;
//! List of new render objects whose Per Object (dynamic) Srg should be bound
//! to the resources. Done once per pass per object only.
AZStd::unordered_set<HairRenderObject*> m_newRenderObjects;
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,57 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
namespace AZ
{
namespace RPI
{
class BufferAsset;
class Buffer;
}
namespace Render
{
namespace Hair
{
enum class HairDynamicBuffersSemantics : uint8_t
{
Position = 0,
PositionsPrev,
PositionsPrevPrev,
Tangent,
StrandLevelData,
NumBufferStreams
};
enum class HairGenerationBuffersSemantics : uint8_t
{
InitialHairPositions = 0,
HairRestLengthSRV,
HairStrandType,
FollowHairRootOffset,
BoneSkinningData,
TressFXSimulationConstantBuffer,
NumBufferStreams
};
enum class HairRenderBuffersSemantics : uint8_t
{
HairVertexRenderParams = 0,
HairTexCoords,
BaseAlbedo,
StrandAlbedo,
RenderCB,
StrandCB,
NumBufferStreams
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,195 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Atom/RHI/Factory.h>
#include <Atom/RHI/RHIUtils.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Image/StreamingImage.h>
#include <Atom/RPI.Reflect/Buffer/BufferAssetView.h>
// Hair specific
#include <Rendering/HairCommon.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
//!=====================================================================================
//!
//! Utility Functions
//!
//!=====================================================================================
// [To Do] examine if most of these functions can become global in RPI
//! Utility function to generate the Srg given the shader and the desired Srg name to associate it with.
//! If several shaders share the same Srg (for example perView, perScene), it is enough to
//! create the Srg by associating it with a single shader: since the GPU signature and the data
//! refer to the same shared description (preferably set in an [SrgDeclaration].azsli file),
//! the association with all shaders will work properly.
Data::Instance<RPI::ShaderResourceGroup> UtilityClass::CreateShaderResourceGroup(
Data::Instance<RPI::Shader> shader,
const char* shaderResourceGroupId,
const char* moduleName)
{
Data::Instance<RPI::ShaderResourceGroup> srg = RPI::ShaderResourceGroup::Create(shader->GetAsset(), AZ::Name{ shaderResourceGroupId });
if (!srg)
{
AZ_Error(moduleName, false, "Failed to create shader resource group");
return nullptr;
}
return srg;
}
//! If srg is nullptr the index handle will NOT be set.
//! This can be useful when creating a constant buffer or an image.
Data::Instance<RPI::Buffer> UtilityClass::CreateBuffer(
const char* warningHeader,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg)
{
// If srg is provided, match the shader Srg bind index (returned via the descriptor)
if (srg)
{ // We don't always want to associate an Srg when creating a buffer
bufferDesc.m_resourceShaderIndex = srg->FindShaderInputBufferIndex(bufferDesc.m_paramNameInSrg).GetIndex();
if (bufferDesc.m_resourceShaderIndex == uint32_t(-1))
{
AZ_Error(warningHeader, false, "Failed to find shader input index for [%s] in the SRG.",
bufferDesc.m_paramNameInSrg.GetCStr());
return nullptr;
}
}
// Descriptor setting
RPI::CommonBufferDescriptor desc;
desc.m_elementFormat = bufferDesc.m_elementFormat;
desc.m_poolType = bufferDesc.m_poolType;
desc.m_elementSize = bufferDesc.m_elementSize;
desc.m_bufferName = bufferDesc.m_bufferName.GetCStr();
desc.m_byteCount = (uint64_t)bufferDesc.m_elementCount * bufferDesc.m_elementSize;
desc.m_bufferData = nullptr; // set during asset load - use Update
// Buffer creation
return RPI::BufferSystemInterface::Get()->CreateBufferFromCommonPool(desc);
}
bool UtilityClass::BindBufferToSrg(
const char* warningHeader,
Data::Instance<RPI::Buffer> buffer,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg)
{
if (!buffer)
{
AZ_Error(warningHeader, false, "Trying to bind a null buffer");
return false;
}
RHI::ShaderInputBufferIndex bufferIndex = srg->FindShaderInputBufferIndex(bufferDesc.m_paramNameInSrg);
if (!bufferIndex.IsValid())
{
AZ_Error(warningHeader, false, "Failed to find shader input index for [%s] in the SRG.",
bufferDesc.m_paramNameInSrg.GetCStr());
return false;
}
if (!srg->SetBufferView(bufferIndex, buffer->GetBufferView()))
{
AZ_Error(warningHeader, false, "Failed to bind buffer view for [%s]", bufferDesc.m_bufferName.GetCStr());
return false;
}
return true;
}
Data::Instance<RPI::Buffer> UtilityClass::CreateBufferAndBindToSrg(
const char* warningHeader,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg)
{
// Buffer creation
Data::Instance<RPI::Buffer> buffer = CreateBuffer(warningHeader, bufferDesc, srg);
if (!BindBufferToSrg(warningHeader, buffer, bufferDesc, srg))
{
return nullptr;
}
return buffer;
}
Data::Instance<RHI::ImagePool> UtilityClass::CreateImagePool(RHI::ImagePoolDescriptor& imagePoolDesc)
{
RHI::Ptr<RHI::Device> device = RHI::GetRHIDevice();
Data::Instance<RHI::ImagePool> imagePool = RHI::Factory::Get().CreateImagePool();
RHI::ResultCode result = imagePool->Init(*device, imagePoolDesc);
if (result != RHI::ResultCode::Success)
{
AZ_Error("CreateImagePool", false, "Failed to create or initialize image pool");
return nullptr;
}
return imagePool;
}
Data::Instance<RHI::Image> UtilityClass::CreateImage2D(RHI::ImagePool* imagePool, RHI::ImageDescriptor& imageDesc)
{
Data::Instance<RHI::Image> rhiImage = RHI::Factory::Get().CreateImage();
RHI::ImageInitRequest request;
request.m_image = rhiImage.get();
request.m_descriptor = imageDesc;
RHI::ResultCode result = imagePool->InitImage(request);
if (result != RHI::ResultCode::Success)
{
AZ_Error("CreateImage2D", false, "Failed to create or initialize image");
return nullptr;
}
return rhiImage;
}
Data::Instance<RPI::StreamingImage> UtilityClass::LoadStreamingImage(
const char* textureFilePath, [[maybe_unused]] const char* sampleName)
{
Data::AssetId streamingImageAssetId;
Data::AssetCatalogRequestBus::BroadcastResult(
streamingImageAssetId, &Data::AssetCatalogRequestBus::Events::GetAssetIdByPath,
textureFilePath, azrtti_typeid<RPI::StreamingImageAsset>(), false);
if (!streamingImageAssetId.IsValid())
{
AZ_Error(sampleName, false, "Failed to get streaming image asset id with path %s", textureFilePath);
return nullptr;
}
auto streamingImageAsset = Data::AssetManager::Instance().GetAsset<RPI::StreamingImageAsset>(
streamingImageAssetId,
Data::AssetLoadBehavior::PreLoad);
streamingImageAsset.BlockUntilLoadComplete();
if (!streamingImageAsset.IsReady())
{
AZ_Error(sampleName, false, "Failed to get streaming image asset '%s'", textureFilePath);
return nullptr;
}
auto image = RPI::StreamingImage::FindOrCreate(streamingImageAsset);
if (!image)
{
AZ_Error(sampleName, false, "Failed to find or create an image instance from image asset '%s'", textureFilePath);
return nullptr;
}
return image;
}
} // namespace Hair
} // namespace Render
} // namespace AZ
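For orientation, here is a minimal usage sketch of the utility above. It is not part of the submission: the descriptor arguments follow the SrgBufferDescriptor constructor used elsewhere in this change (pool type, element format, element size, element count, buffer name, SRG parameter name, and two index fields), while passSrg, vertexCount, and the m_skinningOutput parameter name are hypothetical and used only for illustration.

    // Hedged sketch: create a structured read-write buffer and bind it to an existing pass SRG.
    // "m_skinningOutput" would have to match a Buffer entry declared in that SRG's source.
    SrgBufferDescriptor bufferDesc = SrgBufferDescriptor(
        RPI::CommonBufferPoolType::ReadWrite, RHI::Format::Unknown,
        sizeof(float) * 4, vertexCount,                         // element size and count (assumed)
        Name{ "HairSkinningOutput" }, Name{ "m_skinningOutput" }, 0, 0);

    Data::Instance<RPI::Buffer> buffer =
        UtilityClass::CreateBufferAndBindToSrg("Hair Gem", bufferDesc, passSrg);
    if (!buffer)
    {
        // Either the SRG does not expose m_skinningOutput or the pool allocation failed.
    }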

@@ -0,0 +1,208 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AtomCore/Instance/InstanceData.h>
#include <Atom/RHI/ImagePool.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
#include <Atom/RPI.Public/Image/StreamingImage.h>
#include <Atom/RPI.Public/Image/AttachmentImage.h>
#include <Atom/RPI.Public/Shader/Shader.h>
// Hair specific
#include <Rendering/SharedBuffer.h>
namespace AZ
{
namespace RHI
{
class BufferAssetView;
}
namespace RPI
{
class Buffer;
}
namespace Data
{
/**
* This is an alias of intrusive_ptr designed for any class which inherits from
* InstanceData. You're not required to use Instance<> over AZStd::intrusive_ptr<>,
* but it provides symmetry with Asset<>.
*/
template <typename T>
using Instance = AZStd::intrusive_ptr<T>;
}
namespace Render
{
namespace Hair
{
class UtilityClass
{
public:
UtilityClass() = default;
static Data::Instance<RPI::ShaderResourceGroup> CreateShaderResourceGroup(
Data::Instance<RPI::Shader> shader,
const char* shaderResourceGroupId,
const char* moduleName
);
static Data::Instance<RPI::Buffer> CreateBuffer(
const char* warningHeader,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg
);
static Data::Instance<RPI::Buffer> CreateBufferAndBindToSrg(
const char* warningHeader,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg
);
static bool BindBufferToSrg(
const char* warningHeader,
Data::Instance<RPI::Buffer> buffer,
SrgBufferDescriptor& bufferDesc,
Data::Instance<RPI::ShaderResourceGroup> srg=nullptr
);
static Data::Instance<RPI::StreamingImage> LoadStreamingImage(
const char* textureFilePath, [[maybe_unused]] const char* sampleName
);
static Data::Instance<RHI::ImagePool> CreateImagePool(
RHI::ImagePoolDescriptor& imagePoolDesc);
static Data::Instance<RHI::Image> CreateImage2D(
RHI::ImagePool* imagePool, RHI::ImageDescriptor& imageDesc);
};
//! The following class maps a CPU-side constant buffer structure to its counterpart
//! on the GPU. It is the Atom equivalent of TressFXUniformBuffer.
template<class TYPE>
class HairUniformBuffer
{
public:
TYPE* operator->()
{
return &m_cpuBuffer;
}
TYPE* get()
{
return &m_cpuBuffer;
}
Render::SrgBufferDescriptor& GetBufferDescriptor() { return m_bufferDesc; }
//! Use this method only if the buffer will be attached to a single Srg.
//! If this is not the case, use the UpdateGPUData overload that receives the Srg per call.
bool InitForUniqueSrg(
Data::Instance<RPI::ShaderResourceGroup> srg,
Render::SrgBufferDescriptor& srgDesc)
{
m_srg = srg;
m_bufferDesc = srgDesc;
AZ_Error("HairUniformBuffer", m_srg, "Critical Error - no Srg was provided to bind buffer to [%s]",
m_bufferDesc.m_bufferName.GetCStr());
RHI::ShaderInputConstantIndex indexHandle = m_srg->FindShaderInputConstantIndex(m_bufferDesc.m_paramNameInSrg);
if (indexHandle.IsNull())
{
AZ_Error("HairUniformBuffer", false,
"Failed to find shader constant buffer index for [%s] in the SRG.",
m_bufferDesc.m_paramNameInSrg.GetCStr());
return false;
}
m_bufferDesc.m_resourceShaderIndex = indexHandle.GetIndex();
return true;
}
//! Updates and binds the data to the Srg and copies it to the GPU side.
//! Assumes that the buffer is uniquely attached to the saved srg.
bool UpdateGPUData()
{
if (!m_srg.get())
{
AZ_Error("HairUniformBuffer", false, "Critical Error - no Srg was provided to bind buffer to [%s]",
m_bufferDesc.m_bufferName.GetCStr());
return false;
}
RHI::ShaderInputConstantIndex constantIndex = RHI::ShaderInputConstantIndex(m_bufferDesc.m_resourceShaderIndex);
if (constantIndex.IsNull())
{
AZ_Error("HairUniformBuffer", false, "Critical Error - Srg index is not valide for [%s]",
m_bufferDesc.m_paramNameInSrg.GetCStr());
return false;
}
if (!m_srg->SetConstantRaw(constantIndex, &m_cpuBuffer, m_bufferDesc.m_elementSize))
{
AZ_Error("HairUniformBuffer", false, "Failed to bind Constant Buffer for [%s]", m_bufferDesc.m_bufferName.GetCStr());
return false;
}
return true;
}
//! Updates and binds the data to the Srg and copies it to the GPU side.
//! Assumes that the buffer can be associated with various srgs.
bool UpdateGPUData(
Data::Instance<RPI::ShaderResourceGroup> srg,
Render::SrgBufferDescriptor& srgDesc)
{
if (!srg)
{
AZ_Error("HairUniformBuffer", srg, "Critical Error - no Srg was provided to bind buffer to [%s]",
srgDesc.m_bufferName.GetCStr());
return false;
}
RHI::ShaderInputConstantIndex indexHandle = srg->FindShaderInputConstantIndex(srgDesc.m_paramNameInSrg);
if (indexHandle.IsNull())
{
AZ_Error("HairUniformBuffer", false,
"Failed to find shader constant buffer index for [%s] in the SRG.",
srgDesc.m_paramNameInSrg.GetCStr());
return false;
}
if (!srg->SetConstantRaw(indexHandle, &m_cpuBuffer, srgDesc.m_elementSize))
{
AZ_Error("HairUniformBuffer", false, "Failed to bind Constant Buffer for [%s]", srgDesc.m_bufferName.GetCStr());
return false;
}
return true;
}
private:
TYPE m_cpuBuffer;
//! When this srg is set as nullptr, we assume that the buffer can be shared
//! between several passes (as done for PerView and PerScene).
Data::Instance<RPI::ShaderResourceGroup> m_srg = nullptr;
Render::SrgBufferDescriptor m_bufferDesc = Render::SrgBufferDescriptor(
RPI::CommonBufferPoolType::Constant,
RHI::Format::Unknown, sizeof(TYPE), 1,
Name{"BufferNameUndefined"}, Name{"BufferNameUndefined"}, 0, 0
);
};
} // namespace Hair
} // namespace Render
} // namespace AZ
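A short, hedged example of the HairUniformBuffer flow described above. It is illustrative only: passSrg is an assumed, already-created pass SRG and m_hairParams is a hypothetical constant name; the shade-params field access mirrors FillHairMaterialsArray in HairFeatureProcessor.cpp.

    // Sketch: tie a CPU-side TressFX shade-params struct to a constant declared in the pass SRG.
    HairUniformBuffer<AMD::TressFXShadeParams> materialsCB;
    Render::SrgBufferDescriptor cbDesc = Render::SrgBufferDescriptor(
        RPI::CommonBufferPoolType::Constant, RHI::Format::Unknown,
        sizeof(AMD::TressFXShadeParams), 1,
        Name{ "HairMaterialsCB" }, Name{ "m_hairParams" }, 0, 0);

    if (materialsCB.InitForUniqueSrg(passSrg, cbDesc))
    {
        materialsCB->HairShadeParams[0].FiberRadius = 0.002f;  // CPU-side write through operator->
        materialsCB.UpdateGPUData();                           // copies the struct into the SRG constant
    }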

@@ -0,0 +1,61 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Code/src/TressFX/TressFXCommon.h>
#include <Rendering/HairDispatchItem.h>
#include <Rendering/HairRenderObject.h>
#include <Passes/HairSkinningComputePass.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Buffer/Buffer.h>
#include <Atom/RHI/Factory.h>
#include <Atom/RHI/BufferView.h>
#include <limits>
namespace AZ
{
namespace Render
{
namespace Hair
{
HairDispatchItem::~HairDispatchItem()
{
}
// The calling code handles the different possible dispatch types;
// this method targets the per-vertex dispatches for now.
void HairDispatchItem::InitSkinningDispatch(
RPI::Shader* shader,
RPI::ShaderResourceGroup* hairGenerationSrg,
RPI::ShaderResourceGroup* hairSimSrg,
uint32_t elementsAmount )
{
m_shader = shader;
RHI::DispatchDirect dispatchArgs(
elementsAmount, 1, 1,
TRESSFX_SIM_THREAD_GROUP_SIZE, 1, 1
);
m_dispatchItem.m_arguments = dispatchArgs;
RHI::PipelineStateDescriptorForDispatch pipelineDesc;
m_shader->GetVariant(RPI::ShaderAsset::RootShaderVariantStableId).ConfigurePipelineState(pipelineDesc);
m_dispatchItem.m_pipelineState = m_shader->AcquirePipelineState(pipelineDesc);
m_dispatchItem.m_shaderResourceGroupCount = 2; // the PerPass srg will be added by each pass itself.
m_dispatchItem.m_shaderResourceGroups = {
hairGenerationSrg->GetRHIShaderResourceGroup(), // Static generation data
hairSimSrg->GetRHIShaderResourceGroup() // Dynamic data changed between passes
};
}
} // namespace Hair
} // namespace Render
} // namespace AZ
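As a quick sanity check of the DispatchDirect arguments above, here is the thread-group arithmetic they imply. This is a sketch under the assumption that the RHI derives group counts from total threads and threads-per-group; the vertex count is hypothetical and TRESSFX_SIM_THREAD_GROUP_SIZE is typically 64 in stock TressFX.

    const uint32_t elementsAmount = 36000;                        // e.g. total guide-hair vertices (assumed)
    const uint32_t groupSize = TRESSFX_SIM_THREAD_GROUP_SIZE;     // simulation thread group size
    const uint32_t groupCount = (elementsAmount + groupSize - 1) / groupSize; // ceil division -> 563 groups for groupSize 64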

@@ -0,0 +1,74 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <Atom/RHI/DispatchItem.h>
#include <AtomCore/Instance/InstanceData.h>
namespace AZ
{
namespace RHI
{
class BufferView;
class PipelineState;
}
namespace RPI
{
class Buffer;
class ModelLod;
class Shader;
class ShaderResourceGroup;
}
namespace Render
{
namespace Hair
{
class HairSkinningComputePass;
class HairRenderObject;
enum class DispatchLevel
{
DISPATCHLEVEL_VERTEX,
DISPATCHLEVEL_STRAND
};
//! Holds and manages an RHI DispatchItem for a specific skinned mesh, and the resources that are
//! needed to build and maintain it.
class HairDispatchItem
: public Data::InstanceData
{
public:
AZ_CLASS_ALLOCATOR(HairDispatchItem, AZ::SystemAllocator, 0);
HairDispatchItem() = default;
//! One dispatch item per hair object per compute pass.
//! The number of dispatches depends on the number of vertices that need to be processed.
~HairDispatchItem();
void InitSkinningDispatch(
RPI::Shader* shader,
RPI::ShaderResourceGroup* hairGenerationSrg,
RPI::ShaderResourceGroup* hairSimSrg,
uint32_t elementsAmount
);
RHI::DispatchItem* GetDispatchItem() { return &m_dispatchItem; }
private:
RHI::DispatchItem m_dispatchItem;
RPI::Shader* m_shader;
};
} // namespace Hair
} // namespace Render
} // namespace AZ
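To show where InitSkinningDispatch fits, here is a hedged caller sketch. The shader and SRG instances are assumed to already be resolved by the compute pass, and totalGuideVertices is a hypothetical per-object count used only for illustration.

    HairDispatchItem* dispatchItem = aznew HairDispatchItem();
    dispatchItem->InitSkinningDispatch(
        skinningShader.get(),        // RPI::Shader* owned by the compute pass
        hairGenerationSrg.get(),     // static, per-object generation data
        hairSimSrg.get(),            // dynamic data updated between passes
        totalGuideVertices);         // one thread per vertex for the per-vertex passes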

@@ -0,0 +1,532 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <AzCore/Jobs/JobCompletion.h>
#include <AzCore/Jobs/JobFunction.h>
#include <AzCore/RTTI/TypeInfo.h>
#include <AzCore/Serialization/SerializeContext.h>
#include <AzCore/Debug/EventTrace.h>
#include <Atom/RHI/Factory.h>
#include <Atom/RHI/RHIUtils.h>
#include <Atom/RHI/ImagePool.h>
#include <Atom/RHI/CpuProfiler.h>
#include <Atom/RHI/RHISystemInterface.h>
#include <Atom/RPI.Public/View.h>
#include <Atom/RPI.Public/Scene.h>
#include <Atom/RPI.Public/RenderPipeline.h>
#include <Atom/RPI.Public/Pass/PassSystemInterface.h>
#include <Atom/RPI.Public/RPIUtils.h>
#include <Atom/RPI.Public/Shader/Shader.h>
#include <Atom/RPI.Public/Shader/ShaderResourceGroup.h>
#include <Atom/RPI.Public/Image/ImageSystemInterface.h>
#include <Atom/RPI.Public/Image/AttachmentImagePool.h>
#include <Atom/RPI.Reflect/Buffer/BufferAssetView.h>
#include <Atom/RPI.Reflect/Asset/AssetUtils.h>
// Hair specific
#include <Rendering/HairGlobalSettings.h>
#include <Rendering/HairFeatureProcessor.h>
#include <Rendering/HairBuffersSemantics.h>
#include <Rendering/HairRenderObject.h>
#include <Passes/HairSkinningComputePass.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
uint32_t HairFeatureProcessor::s_instanceCount = 0;
HairFeatureProcessor::HairFeatureProcessor()
{
HairParentPassName = Name{ "HairParentPass" };
HairPPLLRasterPassName = Name{ "HairPPLLRasterPass" };
HairPPLLResolvePassName = Name{ "HairPPLLResolvePass" };
GlobalShapeConstraintsPassName = Name{ "HairGlobalShapeConstraintsComputePass" };
CalculateStrandDataPassName = Name{ "HairCalculateStrandLevelDataComputePass" };
VelocityShockPropagationPassName = Name{ "HairVelocityShockPropagationComputePass" };
LocalShapeConstraintsPassName = Name{ "HairLocalShapeConstraintsComputePass" };
LengthConstriantsWindAndCollisionPassName = Name{ "HairLengthConstraintsWindAndCollisionComputePass" };
UpdateFollowHairPassName = Name{ "HairUpdateFollowHairComputePass" };
++s_instanceCount;
if (!CreatePerPassResources())
{ // this might not be an error - if the pass system is still empty / minimal
// and these passes are not part of the minimal pipeline, they will not
// be created.
AZ_Error("Hair Gem", false, "Failed to create the hair shared buffer resource");
}
}
HairFeatureProcessor::~HairFeatureProcessor()
{
m_linkedListNodesBuffer.reset();
m_sharedDynamicBuffer.reset();
}
void HairFeatureProcessor::Reflect(ReflectContext* context)
{
HairGlobalSettings::Reflect(context);
if (auto* serializeContext = azrtti_cast<SerializeContext*>(context))
{
serializeContext
->Class<HairFeatureProcessor, RPI::FeatureProcessor>()
->Version(0);
}
}
void HairFeatureProcessor::Activate()
{
m_hairFeatureProcessorRegistryName = { "AZ::Render::Hair::HairFeatureProcessor" };
EnableSceneNotification();
TickBus::Handler::BusConnect();
HairGlobalSettingsRequestBus::Handler::BusConnect();
}
void HairFeatureProcessor::Deactivate()
{
DisableSceneNotification();
TickBus::Handler::BusDisconnect();
HairGlobalSettingsRequestBus::Handler::BusDisconnect();
}
void HairFeatureProcessor::OnTick(float deltaTime, [[maybe_unused]] AZ::ScriptTimePoint time)
{
const float MAX_SIMULATION_TIME_STEP = 0.033f; // Assuming a minimum of 30 fps
m_currentDeltaTime = AZStd::min(deltaTime, MAX_SIMULATION_TIME_STEP);
for (auto& object : m_hairRenderObjects)
{
object->SetFrameDeltaTime(m_currentDeltaTime);
}
}
int HairFeatureProcessor::GetTickOrder()
{
return AZ::TICK_PRE_RENDER;
}
void HairFeatureProcessor::AddHairRenderObject(Data::Instance<HairRenderObject> renderObject)
{
if (!m_initialized)
{
Init(m_renderPipeline);
}
m_hairRenderObjects.push_back(renderObject);
BuildDispatchAndDrawItems(renderObject);
EnablePasses(true);
}
void HairFeatureProcessor::EnablePasses(bool enable)
{
for (auto& [passName, pass] : m_computePasses)
{
pass->SetEnabled(enable);
}
m_hairPPLLRasterPass->SetEnabled(enable);
m_hairPPLLResolvePass->SetEnabled(enable);
}
bool HairFeatureProcessor::RemoveHairRenderObject(Data::Instance<HairRenderObject> renderObject)
{
for (auto objIter = m_hairRenderObjects.begin(); objIter != m_hairRenderObjects.end(); ++objIter)
{
if (objIter->get() == renderObject)
{
m_hairRenderObjects.erase(objIter);
if (m_hairRenderObjects.empty())
{
EnablePasses(false);
}
return true;
}
}
return false;
}
void HairFeatureProcessor::UpdateHairSkinning()
{
// Copying the CPU side m_SimCB content to the GPU buffer (matrices, wind parameters, etc.)
for (auto objIter = m_hairRenderObjects.begin(); objIter != m_hairRenderObjects.end(); ++objIter)
{
if (!objIter->get()->IsEnabled())
{
return;
}
objIter->get()->Update();
}
}
//! Assumption: the hair is updated per object before this method is called, therefore
//! the parameters that were calculated per object can be copied directly without
//! recalculating them as in the original code.
//! Make sure there are no more than (currently) 16 hair objects, or update the dynamic handling.
//! This DOES NOT do the srg binding since the srg can be shared; the binding is done
//! by the pass itself when compiling its resources.
void HairFeatureProcessor::FillHairMaterialsArray(std::vector<const AMD::TressFXRenderParams*>& renderSettings)
{
// Update Render Parameters
for (int i = 0; i < renderSettings.size(); ++i)
{
AMD::ShadeParams& hairMaterial = m_hairObjectsMaterialsCB->HairShadeParams[i];
hairMaterial.FiberRadius = renderSettings[i]->FiberRadius;
hairMaterial.ShadowAlpha = renderSettings[i]->ShadowAlpha;
hairMaterial.FiberSpacing = renderSettings[i]->FiberSpacing;
hairMaterial.HairEx2 = renderSettings[i]->HairEx2;
hairMaterial.HairKs2 = renderSettings[i]->HairKs2;
hairMaterial.MatKValue = renderSettings[i]->MatKValue;
hairMaterial.Roughness = renderSettings[i]->Roughness;
hairMaterial.CuticleTilt = renderSettings[i]->CuticleTilt;
}
}
//==========================================================================================
void HairFeatureProcessor::Simulate(const FeatureProcessor::SimulatePacket& packet)
{
AZ_PROFILE_FUNCTION(AzRender);
AZ_ATOM_PROFILE_FUNCTION("Hair", "HairFeatureProcessor: Simulate");
AZ_UNUSED(packet);
if (m_hairRenderObjects.empty())
{ // there might be no render objects yet, indicating that the scene data might not be
// ready for initialization just yet.
return;
}
if (m_forceRebuildRenderData)
{
for (auto& hairRenderObject : m_hairRenderObjects)
{
BuildDispatchAndDrawItems(hairRenderObject);
}
m_forceRebuildRenderData = false;
m_addDispatchEnabled = true;
}
// Prepare materials array for the per pass srg
std::vector<const AMD::TressFXRenderParams*> hairObjectsRenderMaterials;
uint32_t obj = 0;
for (auto objIter = m_hairRenderObjects.begin(); objIter != m_hairRenderObjects.end(); ++objIter, ++obj)
{
HairRenderObject* renderObject = objIter->get();
if (!renderObject->IsEnabled())
{
continue;
}
renderObject->Update();
// [To Do] Hair - update the following parameters for dynamic LOD control when the camera
// distance changes or when parameters are changed on the editor side.
// float Distance = sqrtf( m_activeScene.scene->GetCameraPos().x * m_activeScene.scene->GetCameraPos().x +
// m_activeScene.scene->GetCameraPos().y * m_activeScene.scene->GetCameraPos().y +
// m_activeScene.scene->GetCameraPos().z * m_activeScene.scene->GetCameraPos().z);
// objIter->get()->UpdateRenderingParameters(
// renderingSettings, m_nScreenWidth * m_nScreenHeight * AVE_FRAGS_PER_PIXEL, m_deltaTime, Distance);
// this will be used for the constant buffer
hairObjectsRenderMaterials.push_back(renderObject->GetHairRenderParams());
}
FillHairMaterialsArray(hairObjectsRenderMaterials);
}
void HairFeatureProcessor::Render([[maybe_unused]] const FeatureProcessor::RenderPacket& packet)
{
AZ_PROFILE_FUNCTION(AzRender);
AZ_ATOM_PROFILE_FUNCTION("Hair", "HairFeatureProcessor: Render");
if (!m_initialized || !m_addDispatchEnabled)
{ // Skip adding dispatches / Draw packets for this frame until initialized and the shaders are ready
return;
}
// [To Do] - no culling scheme applied yet.
// Possibly setup the hair culling work group to be re-used for each view.
// See SkinnedMeshFeatureProcessor::Render for more details
// Add dispatch per hair object per Compute passes
for (auto& [passName, pass] : m_computePasses)
{
pass->AddDispatchItems(m_hairRenderObjects);
}
// Add all hair objects to the Render / Raster Pass
m_hairPPLLRasterPass->AddDrawPackets(m_hairRenderObjects);
}
void HairFeatureProcessor::ClearPasses()
{
m_initialized = false; // Avoid simulation or render
m_computePasses.clear();
m_hairPPLLRasterPass = nullptr;
m_hairPPLLResolvePass = nullptr;
// Mark for all passes to evacuate their render data and recreate it.
m_forceRebuildRenderData = true;
m_forceClearRenderData = true;
}
void HairFeatureProcessor::OnRenderPipelineAdded(RPI::RenderPipelinePtr renderPipeline)
{
// Proceed only if this is the main pipeline that contains the parent pass
if (!renderPipeline.get()->GetRootPass()->FindPassByNameRecursive(HairParentPassName))
{
return;
}
Init(renderPipeline.get());
// Mark for all passes to evacuate their render data and recreate it.
m_forceRebuildRenderData = true;
}
void HairFeatureProcessor::OnRenderPipelineRemoved(RPI::RenderPipeline* renderPipeline)
{
// Proceed only if this is the main pipeline that contains the parent pass
if (!renderPipeline->GetRootPass()->FindPassByNameRecursive(HairParentPassName))
{
return;
}
m_renderPipeline = nullptr;
ClearPasses();
}
void HairFeatureProcessor::OnRenderPipelinePassesChanged(RPI::RenderPipeline* renderPipeline)
{
// Proceed only if this is the main pipeline that contains the parent pass
if (!renderPipeline->GetRootPass()->FindPassByNameRecursive(HairParentPassName))
{
return;
}
Init(renderPipeline);
// Mark for all passes to evacuate their render data and recreate it.
m_forceRebuildRenderData = true;
}
bool HairFeatureProcessor::Init(RPI::RenderPipeline* renderPipeline)
{
m_renderPipeline = renderPipeline;
ClearPasses();
// Compute Passes - populate the passes map
bool resultSuccess = InitComputePass(GlobalShapeConstraintsPassName);
resultSuccess &= InitComputePass(CalculateStrandDataPassName);
resultSuccess &= InitComputePass(VelocityShockPropagationPassName);
resultSuccess &= InitComputePass(LocalShapeConstraintsPassName, true); // restore shape over several iterations
resultSuccess &= InitComputePass(LengthConstriantsWindAndCollisionPassName);
resultSuccess &= InitComputePass(UpdateFollowHairPassName);
// Rendering Passes
resultSuccess &= InitPPLLFillPass();
resultSuccess &= InitPPLLResolvePass();
// Don't enable passes if no hair object was added yet (depending on activation order)
if (m_hairRenderObjects.empty())
{
EnablePasses(false);
}
m_initialized = resultSuccess;
// this might not be an error - if the pass system is still empty / minimal
// and these passes are not part of the minimal pipeline, they will not
// be created.
AZ_Error("Hair Gem", resultSuccess, "Passes could not be retrieved.");
return m_initialized;
}
bool HairFeatureProcessor::CreatePerPassResources()
{
if (m_sharedResourcesCreated)
{
return true;
}
SrgBufferDescriptor descriptor;
AZStd::string instanceNumber = AZStd::to_string(s_instanceCount);
// Shared buffer - this is a persistent buffer that needs to be created manually.
{
AZStd::vector<SrgBufferDescriptor> hairDynamicDescriptors;
DynamicHairData::PrepareSrgDescriptors(hairDynamicDescriptors, 1, 1);
Name sharedBufferName = Name{ "HairSharedDynamicBuffer" + instanceNumber };
if (!HairSharedBufferInterface::Get())
{ // Since there can be several pipelines, allocate the shared buffer only for the
// first one and from that moment on it will be used through its interface
m_sharedDynamicBuffer = AZStd::make_unique<SharedBuffer>(sharedBufferName.GetCStr(), hairDynamicDescriptors);
}
}
// PPLL nodes buffer
{
descriptor = SrgBufferDescriptor(
RPI::CommonBufferPoolType::ReadWrite, RHI::Format::Unknown,
PPLL_NODE_SIZE, RESERVED_PIXELS_FOR_OIT,
Name{ "LinkedListNodesPPLL" + instanceNumber }, Name{ "m_linkedListNodes" }, 0, 0
);
m_linkedListNodesBuffer = UtilityClass::CreateBuffer("Hair Gem", descriptor, nullptr);
if (!m_linkedListNodesBuffer)
{
AZ_Error("Hair Gem", false, "Failed to bind buffer view for [%s]", descriptor.m_bufferName.GetCStr());
return false;
}
}
m_sharedResourcesCreated = true;
return true;
}
void HairFeatureProcessor::GetHairGlobalSettings(HairGlobalSettings& hairGlobalSettings)
{
AZStd::lock_guard<AZStd::mutex> lock(m_hairGlobalSettingsMutex);
hairGlobalSettings = m_hairGlobalSettings;
}
void HairFeatureProcessor::SetHairGlobalSettings(const HairGlobalSettings& hairGlobalSettings)
{
{
AZStd::lock_guard<AZStd::mutex> lock(m_hairGlobalSettingsMutex);
m_hairGlobalSettings = hairGlobalSettings;
}
HairGlobalSettingsNotificationBus::Broadcast(&HairGlobalSettingsNotifications::OnHairGlobalSettingsChanged, m_hairGlobalSettings);
}
bool HairFeatureProcessor::InitComputePass(const Name& passName, bool allowIterations)
{
m_computePasses[passName] = nullptr;
if (!m_renderPipeline)
{
AZ_Error("Hair Gem", false, "%s does NOT have render pipeline set yet", passName.GetCStr());
return false;
}
RPI::Ptr<RPI::Pass> desiredPass = m_renderPipeline->GetRootPass()->FindPassByNameRecursive(passName);
if (desiredPass)
{
m_computePasses[passName] = static_cast<HairSkinningComputePass*>(desiredPass.get());
m_computePasses[passName]->SetFeatureProcessor(this);
m_computePasses[passName]->SetAllowIterations(allowIterations);
}
else
{
AZ_Error("Hair Gem", false,
"%s does not exist in this pipeline. Check your game project's .pass assets.",
passName.GetCStr());
return false;
}
return true;
}
bool HairFeatureProcessor::InitPPLLFillPass()
{
m_hairPPLLRasterPass = nullptr; // reset it to null, just in case it fails to load the assets properly
if (!m_renderPipeline)
{
AZ_Error("Hair Gem", false, "Hair Fill Pass does NOT have render pipeline set yet");
return false;
}
RPI::Ptr<RPI::Pass> desiredPass = m_renderPipeline->GetRootPass()->FindPassByNameRecursive(HairPPLLRasterPassName);
if (desiredPass)
{
m_hairPPLLRasterPass = static_cast<HairPPLLRasterPass*>(desiredPass.get());
m_hairPPLLRasterPass->SetFeatureProcessor(this);
}
else
{
AZ_Error("Hair Gem", false, "HairPPLLRasterPass does not have any valid passes. Check your game project's .pass assets.");
return false;
}
return true;
}
bool HairFeatureProcessor::InitPPLLResolvePass()
{
m_hairPPLLResolvePass = nullptr; // reset it to null, just in case it fails to load the assets properly
if (!m_renderPipeline)
{
AZ_Error("Hair Gem", false, "Hair Fill Pass does NOT have render pipeline set yet");
return false;
}
RPI::Ptr<RPI::Pass> desiredPass = m_renderPipeline->GetRootPass()->FindPassByNameRecursive(HairPPLLResolvePassName);
if (desiredPass)
{
m_hairPPLLResolvePass = static_cast<HairPPLLResolvePass*>(desiredPass.get());
m_hairPPLLResolvePass->SetFeatureProcessor(this);
}
else
{
AZ_Error("Hair Gem", false, "HairPPLLResolvePassTemplate does not have valid passes. Check your game project's .pass assets.");
return false;
}
return true;
}
void HairFeatureProcessor::BuildDispatchAndDrawItems(Data::Instance<HairRenderObject> renderObject)
{
HairRenderObject* renderObjectPtr = renderObject.get();
// Dispatches for Compute passes
m_computePasses[GlobalShapeConstraintsPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_VERTEX);
m_computePasses[CalculateStrandDataPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_STRAND);
m_computePasses[VelocityShockPropagationPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_VERTEX);
m_computePasses[LocalShapeConstraintsPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_STRAND);
m_computePasses[LengthConstriantsWindAndCollisionPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_VERTEX);
m_computePasses[UpdateFollowHairPassName]->BuildDispatchItem(
renderObjectPtr, DispatchLevel::DISPATCHLEVEL_VERTEX);
// Render / Raster pass - adding the object will schedule Srgs binding
// and DrawItem build.
m_hairPPLLRasterPass->SchedulePacketBuild(renderObjectPtr);
}
Data::Instance<HairSkinningComputePass> HairFeatureProcessor::GetHairSkinningComputegPass()
{
if (!m_computePasses[GlobalShapeConstraintsPassName])
{
Init(m_renderPipeline);
}
return m_computePasses[GlobalShapeConstraintsPassName];
}
Data::Instance<HairPPLLRasterPass> HairFeatureProcessor::GetHairPPLLRasterPass()
{
if (!m_hairPPLLRasterPass)
{
Init(m_renderPipeline);
}
return m_hairPPLLRasterPass;
}
} // namespace Hair
} // namespace Render
} // namespace AZ
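For context, here is a hedged sketch of how a hair component would hand its render object to the feature processor. The accessor follows the usual Atom feature-processor pattern; scene and hairRenderObject are assumed to exist on the component side and are not part of this file.

    auto* featureProcessor = scene->GetFeatureProcessor<AZ::Render::Hair::HairFeatureProcessor>();
    if (featureProcessor)
    {
        // The first object triggers Init(), builds the dispatch/draw items, and enables the passes.
        featureProcessor->AddHairRenderObject(hairRenderObject);
    }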

@@ -0,0 +1,206 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/base.h>
#include <AzCore/std/containers/map.h>
#include <AzCore/std/containers/list.h>
#include <AzCore/Component/TickBus.h>
#include <AtomCore/Instance/Instance.h>
#include <Atom/RPI.Public/FeatureProcessor.h>
#include <Atom/RPI.Public/Image/AttachmentImage.h>
// Hair specific
#include <TressFX/TressFXConstantBuffers.h>
#include <Passes/HairSkinningComputePass.h>
#include <Passes/HairPPLLRasterPass.h>
#include <Passes/HairPPLLResolvePass.h>
#include <Rendering/HairRenderObject.h>
#include <Rendering/SharedBuffer.h>
#include <Rendering/HairCommon.h>
#include <Rendering/HairGlobalSettingsBus.h>
namespace AZ
{
namespace RHI
{
struct ImageDescriptor;
}
namespace Render
{
namespace Hair
{
//! The following lines dictate the overall memory consumption reserved for the
//! PPLL fragments. The memory consumption of this technique is quite large (it can
//! grow far above 1GB of GPU/CPU data, and in extreme cases of zooming in on dense hair
//! it might still not be enough). For this reason it is recommended to utilize the
//! approximated lighting scheme originally suggested by Eidos Montreal and do OIT using
//! several frame buffer layers for storing the closest fragments' data.
//! Using the approximated technique, the OIT buffers will consume roughly 256MB for 4K
//! resolution render with 4 OIT layers.
static const size_t PPLL_NODE_SIZE = 16;
static const size_t AVE_FRAGS_PER_PIXEL = 24;
static const size_t SCREEN_WIDTH = 1920;
static const size_t SCREEN_HEIGHT = 1080;
static const size_t RESERVED_PIXELS_FOR_OIT = SCREEN_WIDTH * SCREEN_HEIGHT * AVE_FRAGS_PER_PIXEL;
class HairSkinningPass;
class HairRenderObject;
//! The HairFeatureProcessor (FP) is the glue between the various hair components / entities in
//! the scene and their passes / shaders.
//! The FP will keep track of all active hair objects, will run their skinning update iteration
//! and will then populate them into each of the passes to be computed and rendered.
//! The overall process involves update, skinning, collision, and simulation compute, fragment
//! raster fill, and final frame buffer OIT resolve.
//! The last part can be switched to the smaller-footprint pass version that, instead of
//! a fragment list (PPLL), uses full-screen buffers to approximate the OIT layer resolve.
class HairFeatureProcessor final
: public RPI::FeatureProcessor
, public HairGlobalSettingsRequestBus::Handler
, private AZ::TickBus::Handler
{
Name HairParentPassName;
Name HairPPLLRasterPassName;
Name HairPPLLResolvePassName;
Name GlobalShapeConstraintsPassName;
Name CalculateStrandDataPassName;
Name VelocityShockPropagationPassName;
Name LocalShapeConstraintsPassName;
Name LengthConstriantsWindAndCollisionPassName;
Name UpdateFollowHairPassName;
public:
AZ_RTTI(AZ::Render::Hair::HairFeatureProcessor, "{5F9DDA81-B43F-4E30-9E56-C7C3DC517A4C}", RPI::FeatureProcessor);
static void Reflect(AZ::ReflectContext* context);
HairFeatureProcessor();
virtual ~HairFeatureProcessor();
void UpdateHairSkinning();
bool Init(RPI::RenderPipeline* pipeline);
bool IsInitialized() { return m_initialized; }
// FeatureProcessor overrides ...
void Activate() override;
void Deactivate() override;
void Simulate(const FeatureProcessor::SimulatePacket& packet) override;
void Render(const FeatureProcessor::RenderPacket& packet) override;
// AZ::TickBus::Handler overrides
void OnTick(float deltaTime, AZ::ScriptTimePoint time) override;
int GetTickOrder() override;
void AddHairRenderObject(Data::Instance<HairRenderObject> renderObject);
bool RemoveHairRenderObject(Data::Instance<HairRenderObject> renderObject);
// RPI::SceneNotificationBus overrides ...
void OnRenderPipelineAdded(RPI::RenderPipelinePtr renderPipeline) override;
void OnRenderPipelineRemoved(RPI::RenderPipeline* renderPipeline) override;
void OnRenderPipelinePassesChanged(RPI::RenderPipeline* renderPipeline) override;
Data::Instance<HairSkinningComputePass> GetHairSkinningComputegPass();
Data::Instance<HairPPLLRasterPass> GetHairPPLLRasterPass();
//! Update the hair objects materials array.
void FillHairMaterialsArray(std::vector<const AMD::TressFXRenderParams*>& renderSettings);
Data::Instance<RPI::Buffer> GetPerPixelListBuffer() { return m_linkedListNodesBuffer; }
HairUniformBuffer<AMD::TressFXShadeParams>& GetMaterialsArray() { return m_hairObjectsMaterialsCB; }
void ForceRebuildRenderData() { m_forceRebuildRenderData = true; }
void SetAddDispatchEnable(bool enable) { m_addDispatchEnabled = enable; }
void SetEnable(bool enable)
{
m_isEnabled = enable;
EnablePasses(enable);
}
bool CreatePerPassResources();
void GetHairGlobalSettings(HairGlobalSettings& hairGlobalSettings) override;
void SetHairGlobalSettings(const HairGlobalSettings& hairGlobalSettings) override;
private:
AZ_DISABLE_COPY_MOVE(HairFeatureProcessor);
void ClearPasses();
bool InitPPLLFillPass();
bool InitPPLLResolvePass();
bool InitComputePass(const Name& passName, bool allowIterations = false);
void BuildDispatchAndDrawItems(Data::Instance<HairRenderObject> renderObject);
void EnablePasses(bool enable);
//! The following will serve to register the FP in the Thumbnail system
AZStd::vector<AZStd::string> m_hairFeatureProcessorRegistryName;
//! The render pipeline is acquired and set when a pipeline is created or changed
//! and accordingly the passes and the feature processor are associated.
//! Notice that a scene can contain several pipelines, all using the same feature
//! processor. On the pass side, each pass acquires the scene and requests the FP,
//! but on the FP side only the latest pass is associated, hence such a case
//! might still be a problem. If needed, this can be resolved by keeping a map of
//! passes per pipeline.
RPI::RenderPipeline* m_renderPipeline = nullptr;
//! The Hair Objects in the scene (one per hair component)
AZStd::list<Data::Instance<HairRenderObject>> m_hairRenderObjects;
//! Simulation Compute Passes
AZStd::unordered_map<Name, Data::Instance<HairSkinningComputePass> > m_computePasses;
// Render Passes
Data::Instance<HairPPLLRasterPass> m_hairPPLLRasterPass = nullptr;
Data::Instance<HairPPLLResolvePass> m_hairPPLLResolvePass = nullptr;
//--------------------------------------------------------------
// Per Pass Resources
//--------------------------------------------------------------
//! The shared buffer used by all dynamic buffer views for the hair skinning / simulation
AZStd::unique_ptr<SharedBuffer> m_sharedDynamicBuffer; // used for the hair data changed between passes.
//! The constant buffer structure containing an array of all hair objects materials
//! to be used by the full screen resolve pass.
HairUniformBuffer<AMD::TressFXShadeParams> m_hairObjectsMaterialsCB;
//! PPLL single buffer containing all the PPLL elements
Data::Instance<RPI::Buffer> m_linkedListNodesBuffer = nullptr;
//--------------------------------------------------------------
//! Per frame delta time for the physics simulation - updated every frame
float m_currentDeltaTime = 0.02f;
//! flag to disable/enable feature processor adding dispatch calls to compute passes.
bool m_addDispatchEnabled = true;
bool m_sharedResourcesCreated = false;
//! Reload / pipeline changes force a rebuild of the dispatches and render items
bool m_forceRebuildRenderData = false;
bool m_forceClearRenderData = false;
bool m_initialized = false;
bool m_isEnabled = true;
static uint32_t s_instanceCount;
HairGlobalSettings m_hairGlobalSettings;
AZStd::mutex m_hairGlobalSettingsMutex;
};
} // namespace Hair
} // namespace Render
} // namespace AZ
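A back-of-the-envelope check of the PPLL reservation constants declared above; this is only the arithmetic the header comment refers to, not code from the submission.

    constexpr size_t ppllBytes = PPLL_NODE_SIZE * RESERVED_PIXELS_FOR_OIT;
    // 16 B * (1920 * 1080 * 24) = 796,262,400 B, roughly 0.8 GB for the 1080p reservation alone,
    // which is why the comment recommends the layered OIT approximation for high resolutions.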

@@ -0,0 +1,63 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#include <Rendering/HairGlobalSettings.h>
#include <AzCore/Serialization/SerializeContext.h>
#include <AzCore/Serialization/EditContext.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
void HairGlobalSettings::Reflect(ReflectContext* context)
{
if (auto serializeContext = azrtti_cast<AZ::SerializeContext*>(context))
{
serializeContext->Class<HairGlobalSettings>()
->Version(2)
->Field("EnableShadows", &HairGlobalSettings::m_enableShadows)
->Field("EnableDirectionalLights", &HairGlobalSettings::m_enableDirectionalLights)
->Field("EnablePunctualLights", &HairGlobalSettings::m_enablePunctualLights)
->Field("EnableAreaLights", &HairGlobalSettings::m_enableAreaLights)
->Field("EnableIBL", &HairGlobalSettings::m_enableIBL)
->Field("HairLightingModel", &HairGlobalSettings::m_hairLightingModel)
->Field("EnableMarschner_R", &HairGlobalSettings::m_enableMarschner_R)
->Field("EnableMarschner_TRT", &HairGlobalSettings::m_enableMarschner_TRT)
->Field("EnableMarschner_TT", &HairGlobalSettings::m_enableMarschner_TT)
->Field("EnableDiffuseLobe", &HairGlobalSettings::m_enableDiffuseLobe)
->Field("EnableSpecularLobe", &HairGlobalSettings::m_enableSpecularLobe)
->Field("EnableTransmittanceLobe", &HairGlobalSettings::m_enableTransmittanceLobe)
;
if (auto editContext = serializeContext->GetEditContext())
{
editContext->Class<HairGlobalSettings>("Hair Global Settings", "Shared settings across all hair components")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableShadows, "Enable Shadows", "Enable shadows for hair.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableDirectionalLights, "Enable Directional Lights", "Enable directional lights for hair.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enablePunctualLights, "Enable Punctual Lights", "Enable punctual lights for hair.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableAreaLights, "Enable Area Lights", "Enable area lights for hair.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableIBL, "Enable IBL", "Enable imaged-based lighting for hair.")
->DataElement(AZ::Edit::UIHandlers::ComboBox, &HairGlobalSettings::m_hairLightingModel, "Hair Lighting Model", "Determines which lighting equation to use")
->EnumAttribute(Hair::HairLightingModel::GGX, "GGX")
->EnumAttribute(Hair::HairLightingModel::Marschner, "Marschner")
->EnumAttribute(Hair::HairLightingModel::Kajiya, "Kajiya")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableMarschner_R, "Enable Marschner R", "Enable Marschner R.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableMarschner_TRT, "Enable Marschner TRT", "Enable Marschner TRT.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableMarschner_TT, "Enable Marschner TT", "Enable Marschner TT.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableDiffuseLobe, "Enable Diffuse Lobe", "Enable Diffuse Lobe.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableSpecularLobe, "Enable Specular Lobe", "Enable Specular Lobe.")
->DataElement(AZ::Edit::UIHandlers::Default, &HairGlobalSettings::m_enableTransmittanceLobe, "Enable Transmittance Lobe", "Enable Transmittance Lobe.")
;
}
}
}
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,44 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <Rendering/HairLightingModels.h>
#include <AzCore/RTTI/ReflectContext.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
//! Used by all hair components to control the shader options flags used by the hair
//! rendering for lighting and various display features such as the Marschner lighting
//! model components.
struct HairGlobalSettings
{
AZ_TYPE_INFO(AZ::Render::Hair::HairGlobalSettings, "{B4175C42-9F4D-4824-9563-457A84C4983D}");
static void Reflect(ReflectContext* context);
bool m_enableShadows = true;
bool m_enableDirectionalLights = true;
bool m_enablePunctualLights = true;
bool m_enableAreaLights = true;
bool m_enableIBL = true;
HairLightingModel m_hairLightingModel = HairLightingModel::Marschner;
bool m_enableMarschner_R = true;
bool m_enableMarschner_TRT = true;
bool m_enableMarschner_TT = true;
bool m_enableDiffuseLobe = true;
bool m_enableSpecularLobe = true;
bool m_enableTransmittanceLobe = true;
};
} // namespace Hair
} // namespace Render
} // namespace AZ

@@ -0,0 +1,48 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/EBus/EBus.h>
#include <Rendering/HairGlobalSettings.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
class HairGlobalSettingsNotifications
: public AZ::EBusTraits
{
public:
// EBusTraits
static const AZ::EBusHandlerPolicy HandlerPolicy = AZ::EBusHandlerPolicy::Multiple;
static const AZ::EBusAddressPolicy AddressPolicy = AZ::EBusAddressPolicy::Single;
virtual void OnHairGlobalSettingsChanged(const HairGlobalSettings& hairGlobalSettings) = 0;
};
typedef AZ::EBus<HairGlobalSettingsNotifications> HairGlobalSettingsNotificationBus;
class HairGlobalSettingsRequests
: public AZ::EBusTraits
{
public:
// EBusTraits
static const AZ::EBusHandlerPolicy HandlerPolicy = AZ::EBusHandlerPolicy::Multiple;
static const AZ::EBusAddressPolicy AddressPolicy = AZ::EBusAddressPolicy::Single;
virtual void GetHairGlobalSettings(HairGlobalSettings& hairGlobalSettings) = 0;
virtual void SetHairGlobalSettings(const HairGlobalSettings& hairGlobalSettings) = 0;
};
typedef AZ::EBus<HairGlobalSettingsRequests> HairGlobalSettingsRequestBus;
} //namespace Hair
} // namespace Render
} // namespace AZ
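A hedged usage sketch of the buses above, e.g. debug or editor code that toggles the Marschner TT lobe globally. HairFeatureProcessor handles the request bus and re-broadcasts the change on the notification bus; the call site below is hypothetical.

    HairGlobalSettings settings;
    HairGlobalSettingsRequestBus::Broadcast(&HairGlobalSettingsRequests::GetHairGlobalSettings, settings);
    settings.m_enableMarschner_TT = false;  // disable the transmission (TT) lobe
    HairGlobalSettingsRequestBus::Broadcast(&HairGlobalSettingsRequests::SetHairGlobalSettings, settings);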

@@ -0,0 +1,25 @@
/*
* Copyright (c) Contributors to the Open 3D Engine Project.
* For complete copyright and license terms please see the LICENSE at the root of this distribution.
*
* SPDX-License-Identifier: Apache-2.0 OR MIT
*
*/
#pragma once
#include <AzCore/Preprocessor/Enum.h>
namespace AZ
{
namespace Render
{
namespace Hair
{
AZ_ENUM(HairLightingModel,
GGX,
Marschner,
Kajiya);
} // namespace Hair
} // namespace Render
} // namespace AZ
