Darkhold of Niagara

Spent many deep nights over the past month writing down all my tricks and sorceries for fluid simulation.
80lv interview: https://80.lv/articles/working-with-niagara-fluids-to-create-water-simulations

If you don’t read it, someone else who did can instakill you.

2. Let’s start by introducing Niagara Fluids. Could you explain the toolkit to beginners? What does it do? What are the main features?

For starters, Niagara Fluids includes templates for fire, smoke, pools of water, splashes and shallow water. It’s UE5’s answer to fluid simulation. 

Niagara is the most robust, artist-friendly GPU programming framework for video games (or for anything, really). Niagara Fluids is basically a DLC that makes fluid-related stuff much easier.

Main features:

Simulation:

  1. FLIP solver for water (2D and 3D)
  2. Shallow water implementation
  3. Gas simulation for both 2D and 3D grid
  4. A series of Niagara System showcases

Rendering:

  1. SDF / jumpflood renderer
  2. Material to represent SDF using Single Layer Water
  3. (coming soon) sphere rasterizer that supplements SDF
  4. Lighting injection Interfaces for gas

Gameplay / Interaction:

  1. 3D Collision Interfaces
  2. 2.5D Collision (affect 2D sims with 3D world objects)
  3. Character interaction

And more! It’s critical to download the Content Examples project and open the Niagara_Fluids map yourself. You can wander through the workflow demos and see what’s available. All Content Examples maps are constantly updated to reflect UE’s latest features.

3. How did you get started with your fluid simulation experiment? What were the first steps inside the engine?

Engineered art and artistic tools have always been a curiosity of mine, and I’ve done many experiments to see how far I could go. There were plenty of skill trees I had to master before I could unlock the fluid simulation skill tree.

It’s a long story, but I’d like to mention a couple of toys I made that helped me understand volumetric effects.

Firstly, clouds! Stacking 3D noises together to create beautiful clouds was extremely satisfying. Painting them was just a very natural next step.

The process trained me to visually feel the fluffy and crispy shapes of 3D noises. It’s very similar to water splashes, foam and bubbles, in terms of technique and mood.

(Here is my breakdown article if you are interested)

After that, the uncharted domain of fluidsim (for artists) caught my attention. Niagara Sim Stage was rapidly maturing at that time and I gave it a go. I read papers. I got totally lost after a couple pages. But I kept pushing. In a couple months my system started to take shape. To this day I still can’t believe it worked.

Eventually, my OC character Barrelhead was born.

Around that time I had to create most of the modules and materials from scratch: SPH solver, rasterizer, collision. There weren’t good ways to communicate with Secondary Emitters (more on this below), so I had to ‘morph’ a small percentage of SPH particles into splash sprites.

With the advancement of Niagara, we now have robust pre-made alternative modules, with showcases to help you learn and make well-informed decisions. I think 2022 is a really good year to get into it.

> “What were the first steps inside the engine”

If you are new to procedural VFX in general, check out the base knowledge first. From there, your second stop is Epic’s Learning Library: search ‘Niagara’ and give it a go. There are dozens of extremely well-constructed examples crafted by our engine dev team, tech writers and evangelists. A lot of questions you have, and a lot of questions you don’t know you should have, will be answered while learning to replicate these cool toys.

For the fluidsim learning path, I personally recommend brute-forcing the Content Examples. Problem solving is the best way to learn.

Deconstructing and reconstructing demo assets in this order has worked best for me:

  1. Check out all the examples and play with the parameters. Check out the official Niagara Fluids intro videos.
  2. Strip out the renderer ‘beauty’ components and everything not essential to the simulation,
    until you’re left with a minimal ‘barely working’ system.
  3. Dissect further with the assistance of the Debug Tools and the Attribute Spreadsheet.
  4. Learn to create new custom modules to make stuff happen.

4. Could you discuss setting up your system? How did you control fluids? How did you tweak the parameters and collisions? Please share your settings.

I love how these questions naturally split the system into parts. Similar to Q2, I’ll divide it into 3 parts: Simulation, Collision and Rendering.

Simulation

So for the simulation part: all the 3D water simulations in Niagara use a PIC/FLIP module. As mentioned above, you can find these examples in the Niagara_Fluids map:

From a high level overview, there isn’t too much to tweak here – which is good. Because water is water, you don’t typically want different kinds of water.

To get a feel for the parameters, here’s a brief explanation of the stuff that matters:

Collision Velocity Mult – Used for collision interaction. E.g. consider pouring water out of a bowl: if this is 0, you can’t pour the water no matter how fast you try. The water will simply flow down with gravity.

Geometry Collection Collisions – Works with Chaos fracture assets! Still WIP tho.

Static Mesh Collision – The best boi. It samples individual static mesh distance fields (NOT the global distance field), so you have the position, normal and velocity of the nearest surface point at your disposal. It also doesn’t require global DF generation, so it won’t affect Niagara System tick order – aka it can be used with Opaque materials without a 1-frame delay.

Num Cells Max Axis – Pick the longest axis of your bounding box; divide its length by this number and you get your simulation voxel size. Just a convenient way to set & tweak resolution for everything.

Particles Per Cell – Utility parameter to fill a tank of water on sim start.

Physics Collisions – Character collision using the Physics Asset DI. Tutorial: https://dev.epicgames.com/community/learning/tutorials/8kkP/scene-interactions-with-niagara-fluids

Pressure Iterations – We can bind a dynamic number to determine how many times a simulation stage iterates. For water systems, this determines the Solve Pressure sim stage iteration count.

PIC FLIP Ratio – 0.0 means 100% PIC simulation: stable, less accurate. 1.0 means 100% FLIP simulation: accurate, less stable. A value in between mixes the good things of both. Usually 0.75 – 0.95 works well depending on your use case (e.g. fish tank vs. running river). See the sketch right below.
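To build intuition for that last one, here is a minimal sketch (not the shipped module code) of where the ratio enters during the grid-to-particle transfer; all variable names are stand-ins:

// Minimal PIC/FLIP blend sketch, run per particle after the pressure solve.
// GridVelocityNew:   post-solve grid velocity sampled at the particle position
// GridVelocityDelta: post-solve minus pre-solve grid velocity, sampled the same way
float3 PICVelocity  = GridVelocityNew;                        // overwrite: stable but dissipative
float3 FLIPVelocity = Particle.Velocity + GridVelocityDelta;  // increment: lively but noisier
Particle.Velocity   = lerp(PICVelocity, FLIPVelocity, PICFLIPRatio); // 0 = pure PIC, 1 = pure FLIP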

Collision

Static meshes are amazing. Their collision is accurate and we can pre-generate mesh distance fields for them. With Geometry Cache collision still in an experimental state, Static Mesh collision is the best way to stir up interesting fluid behavior.

The river in the Niagara_Fluids map uses the Static Mesh Collisions DI, which is what I’d recommend.

This interface takes the Mesh Distance Field (not the Global Distance Field) of all tagged static meshes. As a result you get view-independent collision, normal and velocity reads.

The downside is it gets heavier the more meshes you mark for collision.

An alternative is Global Distance Field collision. Because the global SDF is constantly generated as a whole at runtime, the cost is always the same.

The downside is it’s view-dependent. Your water may pop a little when the camera gets nearer or further away. And it doesn’t support mesh velocity.

There are also other collision types for Landscape and Skeletal Mesh. 
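To show the shape of what these interfaces give you, here is a rough sketch of a generic SDF collision response. The function name is a placeholder (not the real DI API), and the Collision Velocity Mult usage is one plausible reading of that parameter:

// Sketch of an SDF-driven collision response (placeholder names, not the actual DI API).
float Distance; float3 SurfaceNormal; float3 SurfaceVelocity;
SampleTaggedMeshDistanceFields(Particle.Position, Distance, SurfaceNormal, SurfaceVelocity);

if (Distance < ParticleRadius) // touching or inside a collider
{
    // Push the particle back onto the surface...
    Particle.Position += SurfaceNormal * (ParticleRadius - Distance);

    // ...and kill the velocity component that points into the surface.
    // The collider velocity is scaled by Collision Velocity Mult: at 0 the
    // collider's own motion is ignored, which is why you can't pour the bowl.
    float3 RelativeVelocity = Particle.Velocity - SurfaceVelocity * CollisionVelocityMult;
    float IntoSurface = min(0, dot(RelativeVelocity, SurfaceNormal));
    Particle.Velocity -= IntoSurface * SurfaceNormal;
}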

Rendering:

My river demo and all the Content Example 3D fluids use Single Layer Water shading model. It’s basically a 3D box masked out to match the shape of the water body. 

Because we know the water surface depth from SDF, we can ‘push’ the SLW material pixel onto the correct position using Pixel Depth Offset. Water surface normal is also extracted from SDF. 
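A minimal sketch of that idea, assuming the SDF raymarch returns the camera-to-surface distance (names illustrative):

// Push the box pixel onto the raymarched water surface.
// BoxPixelDistance:     camera-to-pixel distance on the bounding box geometry
// WaterSurfaceDistance: camera-to-SDF-surface distance along the same ray
float PixelDepthOffset = max(0, WaterSurfaceDistance - BoxPixelDistance);
// Feed this into the material's Pixel Depth Offset output;
// the normalized SDF gradient at the hit point drives the Normal output.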

With all these combined, we can render the volume of the water. But how do we get more art directed elements to make the water prettier? 

5. How did you add foam here? How does it work? How did you set up the rules?

  • The water seems to have many layers of interesting behaviors. How did you tweak each one?

For video game water, foam or ‘white water’ is often a generalized name for 3 parts: Splash, Surface foam and Bubbles.

Note: Artistically and technically, all of these also more or less apply to 2D water simulation, shallow water or even traditional mesh/flowmap based water effects. Pick what’s useful to you.

Splash

In video games, water splash is almost always presented as flipbook sprites. For a 2D water surface, we can decide where and when to spawn splashes using a 3D geometry representation and the water velocity. Consider boulders sitting in the middle of ocean waves, or the player interacting with a river – we can tell which part of them is underwater simply by comparing their height with the water surface height.

From Ryan Brucks

For 3D simulation, not much has changed for the 3D geometry representation (of things to collide with); however, we do need to refer to the Grid3D to find out where to spawn the sprites. That’s where the Secondary Emitter in the NiagaraFluids plugin chimes in (you can find it in the examples).

The Secondary Emitter will check these conditions for all SimGrid voxel positions:

  1. Distance Field – Is this point inside water? 
  2. Grid Velocity – Is water here moving fast enough?
  3. Grid Vorticity – Is water here volatile enough?

If all are satisfied for one voxel, secondary particles will spawn at the position of that voxel (with a little jittering on top to smooth things out).
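In sketch form, the per-voxel test looks roughly like this (illustrative names, not the shipped module code):

// Per-voxel spawn test for the Secondary Emitter (sketch).
float  SignedDistance = SampleSDF(VoxelPos);       // negative = inside the water
float3 GridVelocity   = SampleVelocity(VoxelPos);
float  GridVorticity  = SampleVorticity(VoxelPos);

bool bInsideWater    = SignedDistance < 0;
bool bFastEnough     = length(GridVelocity) > VelocityThreshold;
bool bVolatileEnough = GridVorticity > VorticityThreshold;

if (bInsideWater && bFastEnough && bVolatileEnough)
{
    // Spawn at the voxel center, jittered so the sprites don't look grid-aligned.
    float3 SpawnPos = VoxelPos + (Random3() - 0.5f) * VoxelSize; // Random3: placeholder rand in [0,1]^3
    SpawnSplashParticle(SpawnPos, GridVelocity);                 // placeholder spawn hook
}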

Left to right: water surface only / with splash sprites / sprites visualization

Within the Secondary Emitter you have nice control over how and when the sprites should spawn:

The secondary particles are rendered as flipbook sprites, and the animation starts playing the moment they spawn. I prefer dithered Masked flipbooks for splashes instead of Translucent, because they are cheaper, can sort against each other, and can have pixel normals to react to environment light. Translucent sprites can have pixel normals too if you want, but when you have a lot of secondary particles, the pixel details tend to blur each other out.

Bubbles

Masked materials write to the depth buffer, which also means they affect underwater light scattering for the Single Layer Water material. Here is an exaggerated example with the water darkened, so you can see the underwater sprite color behavior more easily:

When the splash sprites are underwater, the scattering totally makes them look like bubbles. And I just used that, with a little opacity tweak. Of course you can do more fancy tricks in the pixel shader to make it look even more interesting.

Foam

Attention adventurers, for water foams, we are traveling to the “this will be in the next UE release” city.

Mainly because we’ll need a few additional modules. Rasterization and DualRestPosition are two of them. 

For now I’ll go over the stuff I did and provide code samples (attached at the end of this section). If you are eager, it’s a good opportunity to dive into the HLSL craziness.

To be clear: I use the SDF approach to render the water surface because I like the look better. But all particle-carried attributes are extracted using rasterization (written as Niagara modules). See below for details.

Dual Rest Field

(See end of section for code example)

So, the Dual Rest Position Field (sometimes it’s easier to refer to the result – advected textures) is similar to flowmaps. However, instead of predefined flow directions on a 2D texture, the direction data is carried on discrete particles and rasterized in realtime:

See below for code example

This is, of course, better explained in cat memes:

Rasterize Foam Intensity to ScreenSpace

(See end of section for code example)

But how do we know where the foam is, and communicate that to the material? 

Similar to the Secondary Emitter, we calculate Foam Intensity on each particle, and we rasterize it to a screenspace RT.

Foam Intensity RT in screenspace, visualized as heatmap

And we blur the Foam Intensity RT using Render Target 2D – Mip Map Generation.

Finally, we can multiply that advected foam texture (again, cat meme texture) with the foam intensity RT.
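The material-side combine is then roughly the following sketch (the SmoothStep bounds are illustrative – see the second foam tip below):

// Combine step in the surface material (sketch).
// FoamIntensity: sampled from the blurred screenspace RT – keep it smooth.
// AdvectedFoam:  the dual-rest advected foam texture (the cat meme).
float FoamMask  = smoothstep(0.1f, 0.6f, FoamIntensity); // gentle remap, no harsh contrast
float FinalFoam = FoamMask * AdvectedFoam;               // physics decides where, texture decides how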

And now we’ve got ourselves the incredibly punchy and detailed advected foam!

That still feels too ‘dry’ for some reason, like pouring baby powder into the river. 

There is only so much we can do on a surface. It lacks volume. That’s where the splashes come to the rescue:

They add essential volumetric feelz on top of the surface. The dithered splash also ‘smears’ the pixels between water and foam, which yields a really nice soft feel.

Now, both the foam and the surface detail normal texture are added to the water surface using this technique. Foam drastically modifies pixel basecolor, roughness, opacity, specular and normal, while the detail normal texture simply applies to the water surface normal (where foam is absent). More on this later.

Foam Tip: 

Focus on using a large advected foam texture for punch. Then add a small foam texture for detail.

If you only focus on detail, you’ll get something like this. Looks nice but doesn’t feel natural; the flow feels forced.

Foam Tip: 

In the surface material, keep foam intensity RT (from the simulation) smooth and untouched. Don’t tweak its contrast. A simple SmoothStep is enough. 

To create contrast, manipulate the advected foam textures (cat meme) instead.

I don’t have pictures for this one because it’s more of a feel. Basically you want to art direct the foam texture, but not the physics. Foam intensity RT is the physics.

Phase Function

I kind of brushed over the detail normal layer in the last section. The reason is that once you understand the foam part, hooking up the detail normal texture is trivial.

Detail normal is great for still or slowly moving water surface. However for chaotic running water, your mileage may vary. Personally, when using translucency with TAA/TSR, I found it hard to keep the fine details intact.

But it’s still an amazing layer of important detail. We can use it to interact with lighting.

Firstly, the detail normal twists the underlying caustics in a nice way and ‘pushes’ the caustics forward:

Secondly, you may have noticed the sparks are a little more interesting and reactive as a result. This is because the sparks come from the reflection of the sun, and the reflection takes the water normal, including the detail normal, as input.

Finally, another important technique is using a phase function to add fake light scattering.

The directionality input of the phase function is taken from the detail normal texture, and it’s an excellent way to add fidelity to volatile water, as well as a great opportunity to add some color variation. Notice the scattering light added in the right gif is a little greener than the water color.
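One common choice for a phase function here is Henyey–Greenstein. A minimal sketch, with illustrative parameter values:

// Henyey-Greenstein phase function – a common choice for fake scattering.
// G in (-1, 1) controls directionality: > 0 favors forward scattering.
float HenyeyGreenstein(float CosTheta, float G)
{
    float G2 = G * G;
    return (1.0f - G2) / (4.0f * 3.14159265f * pow(1.0f + G2 - 2.0f * G * CosTheta, 1.5f));
}

// CosTheta comes from the view/light geometry, with the detail normal bending it.
float  Phase          = HenyeyGreenstein(CosTheta, 0.6f);  // 0.6: illustrative G
float3 FakeScattering = Phase * ScatterTint;               // e.g. the water color nudged toward green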

Code Examples: Extract particle attributes to screenspace

RasterizeParticlesAsSpheres:

After we get the depth RT, we can do the sphere trace again to rasterize ExecIndex onto another RT.

WritingExecIndexByComparingSphereDepth:

for(int i = -RadiusIndexExtent.x; i <= RadiusIndexExtent.x; i++)
{
    for(int j = -RadiusIndexExtent.y; j <= RadiusIndexExtent.y; j++)
    {
        // Grid cell covered by this particle's screenspace radius
        int2 CurIndex = ParticleIndex + int2(i, j);
        if(CurIndex.x >= 0 && CurIndex.y >= 0 && CurIndex.x < NumCellsX && CurIndex.y < NumCellsY)
        {
            float2 VectorFromCenter = ((float2)CurIndex + float2(.5f, .5f)) / float2(NumCellsX, NumCellsY) - ParticleUV;
            float2 OffsetWS = VectorFromCenter / RadiusInUV;
            // Treating camera as orthographic, but shouldn't be noticeable since spheres are small
            OffsetWS *= OffsetWS;
            float t = 1 - OffsetWS.x - OffsetWS.y;
            // Inside sphere mask
            if(t > 0)
            {
                // Depth of this sphere's surface at the current cell
                float DepthOffset = sqrt(t) * RadiusWS;
                float OriginalValue;
                float ThisDepth = ParticleClip.w - DepthOffset;
                // Compare against the depth RT written in the first rasterization pass:
                // only the closest (visible) sphere gets to write its ExecIndex
                RasterGrid.GetFloatGridValue(CurIndex.x, CurIndex.y, 0, 0, OriginalValue);
                if(abs(ThisDepth - OriginalValue) < .1f)
                {
                    ExecIndexGrid.SetFloatValue(CurIndex.x, CurIndex.y, ExecIndex);
                }
            }
        }
    }
}

Yes, I’m doing the sphere trace twice. It’s not the best, but at the moment RasterGrid can’t carry additional attributes, so we have to do some tricks.

(Packing ExecIndex into the RasterGrid (Depth) can’t solve this: ExecIndex can reach > 1 million, leaving no bits to represent depth properly.)

This process does have room to improve. Either way, to boost performance, it’s important to define a strategy to keep track of particles that are ‘too deep’ under the surface, and cull them from the rasterization.

In the case of SPH sim we have the luxury of knowing each particle’s neighbors. But for Eulerian simulations or Eulerian/Lagrangian hybrid simulations (e.g. FLIP), we don’t know how many particles are nearby. 

So how do we know which particles are ‘too deep’? For my demo, since SDF is the field we use to generate the water surface, it contains the best information to make that call.

Current frame SDF can be used to cull current frame particles from rasterization.

Previous frame SDF can be used to cull current frame particles from SDF generation.
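A minimal sketch of that cull, with illustrative names:

// Cull particles that are 'too deep' to ever show on the surface (sketch).
// Use the current frame's SDF before rasterization,
// and the previous frame's SDF before SDF generation.
float SignedDistance = SampleSDF(Particle.Position); // negative inside the water
bool  bTooDeep = SignedDistance < -CullDepth;        // CullDepth: e.g. a few voxel sizes
if (!bTooDeep)
{
    RasterizeParticle(Particle); // placeholder for the rasterization pass
}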

From there, it’s easy to extract any attribute you want from the particles.

Grid2D_VorticityFromExecIndex:

The rest of the code examples take too much space, so I uploaded them to my discord server. You can download the zip here:

https://discord.com/channels/742761452225691709/743102189009895465/999810191744827484

Code Example: Dual Rest Position Field

🍙DualRestTimeline: Goes into Emitter Update

🍙DualRestCapturePosition: Goes into Particle Update

🍙Grid2D_PixelDualRestPosition: As a Simulation Stage. This takes a ScreenSpace depth grid as input to calculate the offset between surface pixels and particle rest positions.
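Since the full modules live in the zip above, here is just the core idea in sketch form: two rest-position UV sets are captured half a period apart and cross-faded, so each set can snap back to its rest state while it’s fully faded out (names illustrative):

// Dual rest position cross-fade (sketch of the pixel-side sample).
// RestUV_A / RestUV_B: rest-position UVs rasterized from the particles,
// captured half a period apart. Phase01: the timeline from DualRestTimeline.
float Blend = abs(frac(Phase01) * 2.0f - 1.0f); // triangle wave in [0, 1]
// At Blend == 1 only B shows, so A can re-capture its rest position; at 0, vice versa.
float FoamA = Texture2DSample(FoamTex, FoamTexSampler, RestUV_A).r;
float FoamB = Texture2DSample(FoamTex, FoamTexSampler, RestUV_B).r;
float AdvectedFoam = lerp(FoamA, FoamB, Blend);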

Code Example: SDF raymarch

🍙Grid3D_RaymarchSDF: You can feed the rasterized sphere depth directly as input for Grid2D_PixelDualRestPosition. But it’s much nicer to have the SDF surface as input to get more accurate and smoother results. 

The raymarched result can also be used directly for rendering (in the material). The Content Example does the raymarching step inside the material.
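For reference, the core of such a raymarch / sphere trace is short. A sketch with illustrative names (the real module also has to handle the grid bounds):

// Sphere-trace the simulation SDF along one camera ray (sketch).
float3 Pos = RayOrigin;
float  Traveled = 0;
for (int Step = 0; Step < MaxSteps; Step++)
{
    float Dist = SampleSDF(Pos);   // distance to the water surface, negative inside
    if (Dist < SurfaceEpsilon)
        break;                     // hit: Pos sits on (or just above) the surface
    Pos += RayDir * Dist;          // safe to step exactly this far without overshooting
    Traveled += Dist;
    if (Traveled > MaxDistance)
        break;                     // miss: left the simulation volume
}
// Traveled (or Pos) is what you feed to Grid2D_PixelDualRestPosition,
// or convert to Pixel Depth Offset when raymarching in the material.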

6. Please tell us about the opacity and colors here. How did you achieve the desired effect? What were the challenges?

For colors, apart from the techniques already mentioned, it’s also important to understand the science behind UE’s water surface material, namely the Absorption and Scattering Coefficients.

In short, as of 2022 all video games still use RGB values to represent color. Spectral rendering is a luxury we are yet to have.

The benefit is, for any light calculation, we only have 3 channels to worry about. 

In the case of water absorption, think of the primary colors – Red, Green and Blue – as three types of light energy we have in the game world. When light goes underwater, the water takes some energy away from each primary color before the light can hit our eyes (the camera).

Obviously water absorbs red more than blue. That’s why water is blue.

But how much of each primary color is absorbed? To answer this question, first let’s check wikipedia for the wavelength of each RGB channel.

So approximately:

Red: ~700nm

Green: ~550nm

Blue: ~450nm

No need to be too precise here. We are artists, we can do whatever we want.

Next, let’s look up water absorption coefficient:

https://en.wikipedia.org/wiki/Electromagnetic_absorption_by_water

This is more #Outside science than video games, but we only need to understand a small part of it.

So for the red channel, the wavelength is around 700nm, and from the chart we get an absorbance coefficient of ~0.6/m.

This means when light goes into water, its Red energy will be reduced to 1/e of its original value after the light travels 1/0.6 ≈ 1.667m.

Similarly, Green energy will be reduced to 1/e after the light travels 1/0.05 = 20m.

Blue energy will be reduced to 1/e after the light travels 1/0.005 = 200m.
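If you prefer it in code, this is just Beer–Lambert with the chart values above (note UE distances are in centimeters, so convert accordingly if you compute the depth yourself):

// Beer-Lambert: light remaining after traveling WaterDepthMeters through water.
float3 AbsorptionPerMeter = float3(0.6f, 0.05f, 0.005f); // R, G, B from the chart
float3 Transmittance = exp(-AbsorptionPerMeter * WaterDepthMeters);
// Sanity check: at 1.667m, Transmittance.r = exp(-1) ≈ 0.37 – red is down to 1/e.
// Green takes 20m and blue 200m to fade by the same amount, hence deep water looks blue.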

And the math ends here, well done! Now we only need to fill the absorbance values into the Single Layer Water material.

There you go – if you got this far, you’ve nailed the most important part of making a beautiful water surface material. The water shading model will take the water depth and handle the absorption calculation.

I’d also recommend checking out Ryan Brucks’ “Water And Volumetrics – Inside Unreal” talk for a deeper dive into this topic:

Picture from the Single Layer Water documentation page

For the fluidsim project, I kept the Scattering at almost zero because I’m putting another layer of foam on top of the water, and I wanted high contrast between the clear water and the foam layer. The foam material was made into a Material Attribute and blended with the water surface material inside the same SLW material.

Last but not least, caustics are an essential part. They add detail, and most importantly they give you a way to control underwater brightness without messing with the water’s light response. As mentioned, the animated caustics patterns also play very nicely with refraction from moving waves.

7. How can users control the resolution of their simulations? What parameters should they tweak? Please share some tips.

8. As the toolkit is still in development, should users be aware of some things? What aspects should one consider when using Niagara Fluids?

Apparently when we talk about an ‘experimental’ tool, the first thing that pops into our heads is – bugs! Or that it still can’t do the so, so obvious thing you want it to do yet.

Personally speaking, while it’s not production ready, it’s simply fun to jump in and learn how future video game magic will work.

Profession-wise, it’s always a ping-pong between technology and creative space. Bugs are artificial forms of the unknown; if you don’t fiddle with the darkness, you’ll only be able to do the stuff that everyone has already done.

The process of problem solving and banging my head on the wall always led to a deeper understanding of what I wanted to do and how to turn that creative space into reality.

That being said, if you follow question 3 here and the showcases, I believe you’ll have a kinda smooth experience. The system is pretty robust for what we have already tested.

Be aware of the cost: any fluidsim effect is likely going to eat a big chunk of your game’s render budget. Profile in Standalone play mode often and plan ahead.

The good news is all the modules are going to improve under the hood over time. So what costs you 12ms now has a good chance of costing much less in the future. But how much? We’ll only know after the changes land and we do extensive profiling.

Regarding putting the new systems into your game: if your game is in pre-production or in early stages with enough time budget for R&D, it’s always good to push for more novelty. Set up scenes to profile simulation performance under different scalability settings (particle count, grid resolution, rendering etc.). Focus on art-directed supplemental techniques instead of relying on simulation resolution. This will go a long way.

If your game is deep in production and you are wondering if the new fancy thing is good for the next milestone... unless you are absolutely sure about what you are doing, and have an abundant margin for error, please don’t. The crunch is not worth it. So many unfamiliar and strange things can go wrong. All my friends, especially our beloved producers, will hate you and warn people about you. As a game company we should know better than to risk crunching our teams. This might sound harsh... but I’ve seen so many horror stories.

9. Thank you for sharing your experience! Where can Unreal users learn more about your work and Niagara Fluids? 

Thank you for having me. I mostly use twitter to promote my creations and techniques. It’s weird to think twitter is widely adopted for academic purposes... Also, my artstation always has the best quality videos.

I do try to post everything on multiple platforms – pick your poison 💚:

Blog http://asher.gg/

Twitter @Vuthric
Discord 🍙http://discord.gg/7NUVDWwbK4
Youtube 🍙https://youtube.com/c/AsherZhu
Artstation 🍙https://artstation.com/vuth
Instagram 🍙@vuthric 
