# Report

## Motivational Image

Our motivational image, "Hourglass," captures the in-between moment of nature's transformation into the urban landscape. The upper part depicts vanishing trees, while the lower part showcases a cityscape. The sand in the middle symbolizes the passage of time and the shift from nature to urbanization.

Inspired by this image, we aim to create a realistic rendering of a scene that also uses an hourglass as its central element. The upper part of the hourglass will contain a melting iceberg, and the lower part will be a modern cityscape. The melting ice water will flow through the hourglass, and the striking contrast between a stormy industrial scene outside the window and a tranquil wooden interior will embody the in-between state of nature's retreat and the rise of urbanization.

*Motivational image: "Hourglass" by Bangbing Pan (Xiaohongshu)*

# Qi Ma

ID | Short Name | Points | Features (if required) & Comments
---|------------|--------|-----------------------------------
5.10.2 | Simple Extra Emitters | 5 | directional light
5.16.1 | Anisotropic Phase Function | 5 | Henyey-Greenstein
10.3 | Simple Denoising | 10 |
10.9 | Different distance sampling and transmittance estimation methods: ray marching, delta tracking and ratio tracking | 10 |
30.1 | Heterogeneous Participating Media | 30 |
Total | | 60 |

## Simple Extra Emitters: Directional Light (5 Points)

### Updated Files

- `include/nori/emitter.h`
- `src/directional.cpp`
- `src/path_mats.cpp`
- `src/path_mis.h`
- `src/vol_path_mis.cpp`

### Implementation

A directional light is a light source that emits light in a single direction with uniform intensity and no attenuation over distance. We implemented it in a new `DirectionalEmitter` class in `directional.cpp`. The class inherits from `Emitter` and stores the light's intensity and direction.

The core functionality is in the `sample` method, which projects the distance from the reference point to the light source projection center onto the light direction. The projection center is estimated from the scene's world center and world radius, or it can be specified in the scene XML file. The method also sets up a shadow ray for visibility testing and fills in the key fields of the `EmitterQueryRecord`, including the light position, direction, and normal. The `eval` and `pdf` methods return fixed values (`0` and `1`, respectively), reflecting the delta nature of the light source. A short sketch of this sampling logic is included after the comparisons below.

### Validation

We compared our directional light against the equivalent emitter in Mitsuba, varying the light direction and radiance to validate the correctness of the implementation.

#### Comparison 1: Uniform Intensity

- Intensity: `(5, 5, 5)`
- Direction: `(0.401387, -0.444440, -0.800851)`
- Samples Per Pixel (SPP): 256
Mitsuba Our (Path MIS)
#### Comparison 2: Non-Uniform Intensity (Color)

- Intensity: `(5, 4, 3)`
- Direction: `(0.401387, -0.444440, -0.800851)`
- SPP: 256
Mitsuba Our (Path MIS)
#### Comparison 3: Change Direction

- Intensity: `(5, 5, 5)`
- Direction: `(0.849596, -0.343943, 0.399863)`
- SPP: 256
Mitsuba Our (Path MIS)
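For reference, here is a minimal sketch of the `sample` logic described in the Implementation section above. It is illustrative only: the member names (`m_irradiance`, `m_direction`, `m_worldRadius`) and the exact placement of the virtual light position are assumptions rather than our exact code.

```cpp
// Illustrative sketch of DirectionalEmitter::sample() (names are assumptions).
// The emitter returns constant radiance from a fixed direction; the "position"
// of the light is placed outside the scene's bounding sphere along -m_direction.
Color3f sample(EmitterQueryRecord &lRec, const Point2f &sample) const {
    // Move the virtual light position outside the scene so the shadow ray
    // covers the whole scene.
    float dist = 2.f * m_worldRadius;              // assumed scene bounding radius
    lRec.wi  = -m_direction;                       // direction towards the light
    lRec.p   = lRec.ref - dist * m_direction;      // virtual light position
    lRec.n   = Normal3f(m_direction);              // light "normal" = emission direction
    lRec.pdf = 1.f;                                // delta distribution

    // Shadow ray from the surface point towards the virtual light position
    lRec.shadowRay = Ray3f(lRec.ref, lRec.wi, Epsilon, dist - Epsilon);

    // Uniform irradiance, no distance falloff
    return m_irradiance;
}
```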
#### Conclusion

In all test cases, our directional light produces results consistent with the reference images generated by Mitsuba, validating the correctness of our implementation.

## Simple Denoising: NL-Means Denoising Using Pixel Variance Estimates (10 Points)

### Updated Files

- `src/render.cpp`
- `src/nl_means_denoise.py`

### Implementation

The NL-means denoising algorithm relies on a per-pixel variance estimate, which we compute during rendering. We modified `render.cpp` to accumulate the sample mean variance during independent sampling and write it out as `filename_variance.exr`. The sample mean variance is

$$
\text{Var}[p] = \left( \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2 \right) / n
$$

where:

- \(n\) is the number of samples,
- \(x_i\) is the sample value,
- \(\bar{x}\) is the sample mean.

The NL-means filter itself is implemented in `nl_means_denoise.py` using NumPy. Its inputs are the noisy image and a variance map, where each pixel's variance is its sample mean variance. The squared distance between two pixels \(p\) and \(q\) is

$$
d^2(p, q) = \frac{(u(p) - u(q))^2 - (\text{Var}[p] + \min(\text{Var}[p], \text{Var}[q]))}{\epsilon + k^2(\text{Var}[p] + \text{Var}[q])}
$$

Once the distances are computed, weights are obtained with an exponential decay function, and the denoised pixel value is the weighted average of the pixel values within a search window (a pointwise sketch of this computation is included after the comparison below). We apply a box filter to smooth the distances and weights, which avoids artifacts and improves the denoising quality.

Both stratified sampling and NL-means denoising are in our feature list, but they conflict with each other: under stratified sampling, the sample mean variance is no longer a reliable error estimate, as noted in [Rousselle et al., 2012](https://www.cs.umd.edu/~zwicker/publications/AdaptiveRenderingNLM-SIGA12.pdf). Following the TA's suggestion, we validate NL-means denoising with independent sampling, and turn NL-means denoising off when validating stratified sampling.

To use NL-means denoising, install NumPy and opencv-python and run:

```bash
pip install numpy opencv-python
python src/nl_means_denoise.py scenes/project/final/final.exr
```

The output is saved as `final_denoised.png` in the same directory.

### Validation

To validate the NL-means denoising algorithm, we rendered a noisy image of a simple Cornell box scene with the `path_mis` integrator at 256 SPP and applied the denoiser to it. As a reference, we also rendered the same scene at 4096 SPP.

#### Comparison: NL-Means Denoising
Path MIS 256 SPP NL-Means Denoised Path MIS 4096 SPP
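As a pointwise illustration of the distance and weight computation described above: the actual denoiser is the Python/NumPy script, so the C++ helpers below, including their names and the value of `k`, are assumptions for illustration only.

```cpp
#include <algorithm>
#include <cmath>

// Squared, variance-normalized distance between pixel values u(p) and u(q),
// following the formula above (epsilon avoids division by zero, k controls
// the filter strength).
inline float nlMeansDistance(float up, float uq, float varP, float varQ,
                             float k = 0.45f, float epsilon = 1e-10f) {
    float diff2     = (up - uq) * (up - uq);
    float varCancel = varP + std::min(varP, varQ);  // cancels the expected noise difference
    return (diff2 - varCancel) / (epsilon + k * k * (varP + varQ));
}

// Weight with exponential falloff; negative distances are clamped to zero.
inline float nlMeansWeight(float d2) {
    return std::exp(-std::max(0.f, d2));
}
```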
#### Time Cost Comparison

| Task               | Time Cost (s) |
|--------------------|---------------|
| 256 SPP Rendering  | 46.0          |
| Denoising          | 10.0          |
| 4096 SPP Rendering | 1014.0        |

This test shows that NL-means denoising accelerates the rendering process by approximately 18× while maintaining quality similar to the 4096 SPP rendering.

## Heterogeneous Participating Media (30 Points)

### Updated Files

- `include/nori/medium.h`
- `include/nori/phase.h`
- `include/nori/volume.h`
- `include/nori/common.h`
- `include/nori/object.h`
- `include/nori/scene.h`
- `include/nori/shape.h`
- `include/nori/scene.cpp`
- `src/parser.cpp`
- `src/homogeneous.cpp`
- `src/heterogeneous.cpp`
- `src/isotropic.cpp`
- `src/henyey_greenstein.cpp`
- `src/vol_path_mis.cpp`
- `src/transparent.cpp`
- `src/vdb_volume_nano.cpp`

This feature includes both homogeneous (15 points) and heterogeneous (15 points) participating media. It also serves as the basis for:

- the Henyey-Greenstein phase function (5 points), and
- the different distance sampling and transmittance estimation methods (10 points).

We first present our implementation of homogeneous and heterogeneous participating media, followed by validation results for both. The anisotropic phase function (Henyey-Greenstein) and the distance sampling/transmittance estimation methods (ray marching, delta tracking, ratio tracking) are covered in the following sections.

### Implementation

We followed the PBRT approach and created the following classes:

- **`Medium`**: Samples a new free path inside the medium and evaluates the transmittance along a path.
- **`PhaseFunction`**: Samples a new direction for a scattering event and evaluates the phase function value for given directions.
- **`HomogeneousMedium`**: Inherits from `Medium` and implements homogeneous participating media with constant density.
- **`HeterogeneousMedium`**: Inherits from `Medium` and implements heterogeneous participating media with grid-based density.
- **`Volume`**: Handles the grid-based density of the heterogeneous medium.
- **`MediumShape`**: Allows rendering a medium within a shape.
- **`VolPathMISIntegrator`**: Handles sampling and evaluation of participating media.

#### Medium Class

Following the PBRT design, the `Medium` class has two pure virtual methods, `sample` and `Tr` (a sketch of this interface is shown after the homogeneous comparisons below):

- `sample`: samples a new free path inside the medium.
- `Tr`: evaluates the transmittance along a path.

For homogeneous media, we use the delta tracking algorithm instead of Beer's law to align with Mitsuba's results. For heterogeneous media, we support three distance sampling and transmittance estimation methods: ray marching, delta tracking, and ratio tracking. Detailed implementations are covered in the corresponding section below.

Key parameters:

- `sigma_a`: absorption coefficient.
- `sigma_s`: scattering coefficient.
- `sigma_t`: attenuation coefficient (`sigma_t = sigma_a + sigma_s`).
- `albedo`: ratio of `sigma_s` to `sigma_t`.

#### Volume Class and NanoVDB

The homogeneous medium has uniform density throughout, specified in the scene file. In contrast, the heterogeneous medium has varying density across regions. We introduced a `VDBVolume` class to handle grid-based density, which can:

- load density data from `nvdb` files (NanoVDB format), and
- query the density at a given position.

Although most density files use the `vdb` format, they can be converted to `nvdb` using NanoVDB tools.
[NanoVDB](https://developer.nvidia.com/nanovdb) is a lightweight version of OpenVDB, designed for efficient storage and retrieval of heterogeneous density data.

#### Medium in Shapes

To support media within shapes, we added the `MediumShape` class. This class:

- defines a bounding box for the medium within the shape, and
- maps the shape's bounding box to the `nvdb` file's bounding box for density queries.

#### Volumetric Path Tracing

The `VolPathMISIntegrator` class, derived from `PathMISIntegrator`, was modified to support participating media. In the `Li` method:

- Randomly select a medium and sample a new free path inside it.
- If the ray interacts with the medium, choose a random emitter and check its visibility from the interaction point.
- If visible, compute the emitter's contribution, taking the transmittance along the path into account.
- Use the phase function to sample a new ray, check for emitter intersections, and update `w_mat`.
- If there is no medium interaction, the process is the same as in the original `PathMISIntegrator`, except that the medium transmittance along the path is also accounted for.

### Validation

#### Homogeneous Medium

**Comparison: Homogeneous Medium**

We compare our homogeneous medium implementation with Mitsuba. First, we evaluate two extreme cases: full absorption (albedo = 0) and full scattering (albedo = 1).
Mitsuba Homogeneous (Full Absorption) Our Implementation (Full Absorption) Mitsuba Homogeneous (Full Scattering) Our Implementation (Full Scattering)
Next, we compare homogeneous medium results within different shapes (cube and sphere) and with various colors (uniform and non-uniform radiance).

- **Cube parameters:**
  - \( \sigma_a \): (2.0825, 1.6675, 1.25)
  - \( \sigma_s \): (0.4175, 0.8325, 1.25)
  - \( \sigma_t \): (2.5, 2.5, 2.5)
  - Albedo: (0.167, 0.333, 0.5)
- **Sphere parameters:**
  - \( \sigma_a \): (2, 2, 2)
  - \( \sigma_s \): (0.5, 0.5, 0.5)
  - \( \sigma_t \): (2.5, 2.5, 2.5)
  - Albedo: (0.2, 0.2, 0.2)
Mitsuba Homogeneous (Isotropic) Our Implementation (Isotropic)
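As referenced above, here is a minimal sketch of the `Medium` interface described in the Implementation section. The member names and exact signatures are assumptions; the actual `include/nori/medium.h` may differ.

```cpp
// Illustrative sketch of the PBRT-style Medium interface described above.
struct MediumInteraction {
    Point3f p;             // sampled scattering location
    bool    valid = false; // true if a scattering event was sampled before ray.maxt
};

class Medium : public NoriObject {
public:
    // Sample a free-flight distance along `ray`; fills `mi` if a scattering
    // event occurs before ray.maxt and returns the associated sampling weight.
    virtual Color3f sample(const Ray3f &ray, Sampler *sampler,
                           MediumInteraction &mi) const = 0;

    // Estimate the transmittance along `ray` between its origin and ray.maxt.
    virtual Color3f Tr(const Ray3f &ray, Sampler *sampler) const = 0;

    const PhaseFunction *getPhaseFunction() const { return m_phase; }

protected:
    Color3f m_sigma_a, m_sigma_s, m_sigma_t;   // absorption, scattering, extinction
    Color3f m_albedo;                          // sigma_s / sigma_t
    const PhaseFunction *m_phase = nullptr;    // associated phase function
};
```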
#### Heterogeneous Medium

We use `.nvdb` grid density files, which are compatible with PBRT-v4. Mitsuba, however, uses `.vol` files. Despite attempting to convert `.nvdb` or `.vdb` files to `.vol` using the [Mitsuba2 VDB Converter](https://github.com/mitsuba-renderer/mitsuba2-vdb-converter), we encountered issues. Therefore, we validate our heterogeneous medium implementation against PBRT-v4.

We start with the Bunny Cloud `.nvdb` file from [PBRT-v4 Scenes](https://github.com/mmp/pbrt-v4-scenes), placing it inside a Cornell box. The original parameter settings are:

- \( \sigma_a \): 0.5
- \( \sigma_s \): 10.0

PBRT-v4 specifies wavelength-dependent values, while our implementation assumes uniform \( \sigma_a \) and \( \sigma_s \) for all wavelengths:

```pbrt
"spectrum sigma_s" [200 10 900 10]
"spectrum sigma_a" [200 .5 900 .5]
```

Below are results comparing PBRT-v4, our Henyey-Greenstein (g = 0.5), and isotropic phase functions. All images are rendered with 128 SPP and independent sampling.
PBRT Our Implementation (HG g=0.5) Our Implementation (Isotropic)
Our Henyey-Greenstein results are close to PBRT-v4's output and reveal more detail in the cloud. Slight color differences may arise from PBRT-v4's use of the null-scattering path integral formulation of [Miller et al.](https://dl.acm.org/doi/10.1145/3306346.3323025). In addition, we do not know what value PBRT-v4 chooses for the Henyey-Greenstein asymmetry parameter, so we simply use g = 0.5, which also introduces some difference. Nevertheless, the results show that our implementation captures the detailed shape and density of the bunny cloud.

Next, we show our results for the full-absorption and full-scattering cases.
Mine (full absorption) Mine (full scattering)
Next, we demonstrate rendering a medium within a shape: the medium is automatically scaled to fit the shape's bounding box, and the shape can be placed at any position and rotation. The images below show a scaled, rotated, and re-positioned box containing the bunny cloud medium; the medium is automatically transformed to fit the box's bounding box.
Mine (original) Mine (scaled 0.5) Mine (rotated) Mine (re-positioned)
To show that we can render the medium in different shapes, we use a sphere to hold the bunny cloud medium. The medium outside the sphere is not rendered.
Mine (original) Mine (in sphere) Mine (sphere shape)
Then we show our results with different colors.
Mine (original) Mine (blue) Mine (orange)
#### Conclusion

From the validation results, we conclude that our implementation of homogeneous and heterogeneous participating media is consistent with the reference images generated by Mitsuba and PBRT-v4. Our implementation also supports rendering media within shapes and with different colors. The Henyey-Greenstein phase function and the distance sampling/transmittance estimation methods are validated in the next sections.

## Anisotropic Phase Function: Henyey-Greenstein Function (5 Points)

### Updated Files

- `include/nori/phase.h`
- `include/nori/warp.h`
- `src/henyey_greenstein.cpp`
- `src/isotropic.cpp`
- `src/warp.cpp`
- `src/warptest.cpp`

### Implementation

We implemented both the naive isotropic phase function and the Henyey-Greenstein phase function. The Henyey-Greenstein phase function is an anisotropic phase function that models preferential scattering of light in a particular direction:

$$
P(\theta) = \frac{1}{4\pi} \frac{1 - g^2}{(1 + g^2 - 2g \cos\theta)^{3/2}}
$$

where:

- \( g \) is the asymmetry parameter, controlling forward or backward scattering. Its value ranges from -1 (backward scattering) to 1 (forward scattering).
- \( \theta \) is the angle between the incident and scattered directions.

To implement it, we created a `PhaseFunction` base class and a derived `HenyeyGreenstein` class. The `HenyeyGreenstein` class receives the asymmetry parameter \( g \) as input. Its core functionality is in the `sample` method, which samples a new direction according to the Henyey-Greenstein distribution; the `pdf` method returns the corresponding probability density. A short sketch of the sampling step is included at the end of this section.

### Validation

We compared the Henyey-Greenstein phase function with the isotropic phase function in a simple Cornell box scene. The results show that the Henyey-Greenstein phase function improves rendering quality in a heterogeneous medium. Examining the light-facing and backlit sides of the bunny cloud, we observe that when \( g > 0 \) the light-facing side appears brighter than the backlit side. This matches expectations, as the Henyey-Greenstein phase function scatters more light forward when \( g > 0 \).
Mine Homogeneous (Isotropic) Mine Homogeneous HG (g=0.5) Mine Heterogeneous (Isotropic) Mine Heterogeneous HG (g=0.5)
#### Comparison with Mitsuba Results

We compared our results with those generated using Mitsuba. Because of the limitations in converting the VDB file format to the VOL format, we could not perform a direct comparison in a heterogeneous medium with the Henyey-Greenstein phase function. Instead, we evaluated results in a homogeneous medium with \( g = 0.5 \) and \( g = -0.5 \) at 256 samples per pixel (SPP).
Mitsuba Homogeneous HG (g=0.5) Mine Homogeneous HG (g=0.5) Mitsuba Homogeneous HG (g=-0.5) Mine Homogeneous HG (g=-0.5)
Our results are slightly brighter than those from Mitsuba when \( g \neq 0 \), though we have not identified the exact cause of this discrepancy. We believe our implementation faithfully follows the Henyey-Greenstein formula and adheres to the implementation details given in the [PBR book](https://pbr-book.org/3ed-2018/Volume_Scattering/Phase_Functions).

#### Validation with Sampling Warptest

To validate the correctness of our Henyey-Greenstein sampling, we ran the warptest. In the video below, as \( g \) transitions from -1 to 1, the sampled points shift from a backward-centered distribution to a forward-centered distribution, consistent with the expected behavior.
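As referenced above, here is a minimal sketch of Henyey-Greenstein direction sampling using the standard CDF inversion from the PBR book. The function names and the local-frame convention (the +z axis is the incoming propagation direction) are illustrative; our `warp.cpp`/`henyey_greenstein.cpp` may differ.

```cpp
#include <cmath>

static const float kPi = 3.14159265358979323846f;

// PDF of the Henyey-Greenstein phase function over the sphere,
// parameterized by cos(theta) relative to the forward direction.
inline float henyeyGreensteinPdf(float cosTheta, float g) {
    float denom = 1.f + g * g - 2.f * g * cosTheta;
    return (1.f - g * g) / (4.f * kPi * denom * std::sqrt(denom));
}

// Sample a direction from the HG distribution given a uniform 2D sample u.
// The returned vector is expressed in a frame whose +z axis is the incoming
// propagation direction, so g > 0 concentrates samples around +z (forward).
inline Vector3f squareToHenyeyGreenstein(const Point2f &u, float g) {
    float cosTheta;
    if (std::abs(g) < 1e-3f) {
        cosTheta = 1.f - 2.f * u.x();  // isotropic limit
    } else {
        float sqrTerm = (1.f - g * g) / (1.f - g + 2.f * g * u.x());
        cosTheta = (1.f + g * g - sqrTerm * sqrTerm) / (2.f * g);
    }
    float sinTheta = std::sqrt(std::max(0.f, 1.f - cosTheta * cosTheta));
    float phi = 2.f * kPi * u.y();
    return Vector3f(sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta);
}
```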
## Different Distance Sampling and Transmittance Estimation Methods: Ray Marching, Delta Tracking and Ratio Tracking (10 Points)

### Updated Files

- `src/heterogeneous.cpp`

### Implementation

We implemented three distance sampling and transmittance estimation methods for heterogeneous participating media: ray marching, delta tracking, and ratio tracking. A sketch of the two unbiased estimators is included at the end of this section.

#### Ray Marching

Ray marching samples the medium at regular intervals along the ray. We followed the PBR book's [ray marching formula](https://pbr-book.org/3ed-2018/Light_Transport_II_Volume_Rendering/Sampling_Volume_Scattering#eq:heterogeneous-medium-pmed):

$$
p_t(t) = \sigma_t(t)\, e^{-\int_0^t \sigma_t(t')\, dt'}
$$

The transmittance is computed by numerically integrating the attenuation coefficient along the ray. This method has a `stepSize` parameter to control the sampling interval.

#### Delta Tracking and Ratio Tracking

Unlike ray marching, delta tracking and ratio tracking are unbiased methods. Delta tracking uses rejection sampling: it adds fictitious particles that do not affect transport, making the combined (real + fictitious) volume homogeneous, and then probabilistically accepts or rejects tentative collisions based on the local concentrations of real and fictitious particles.

Ratio tracking offers an alternative that improves the accuracy of transmittance estimation. Instead of terminating the estimate at the first accepted collision, it accumulates transmittance along the entire path by multiplying, at each tentative collision, the ratio of the null-collision coefficient to the majorant extinction coefficient, i.e. \( 1 - \sigma_t(x)/\bar{\sigma} \).

### Validation

We compared the three distance sampling and transmittance estimation methods in a simple Cornell box scene with a cloud from the [OpenVDB models](https://www.openvdb.org/download/) collection.

- sigma_a: 1.0 1.0 1.0
- sigma_s: 4.0 4.0 4.0
- spp: 256
delta tracking ratio tracking ray marching
- sigma_a: 2.0 2.0 2.0
- sigma_s: 3.0 3.0 3.0
- spp: 128
delta tracking ray marching ratio tracking
From the images above, we can see that the results of ratio tracking are slightly brighter than those of delta tracking, especially in the bunny cloud scene. This may be because delta tracking terminates the transmittance estimate at a random point determined by a Russian-roulette-style test, which can lead to an underestimated transmittance in practice. The results of ray marching are similar to those of ratio tracking but look unnatural at the edges of the cloud, since ray marching is a brute-force approach and the step size is not small enough to capture the fine details of the cloud.

We also measured the time cost of the three methods in the bunny cloud scene:

| Method         | Time Cost (minutes) |
|----------------|---------------------|
| Delta Tracking | 16.3                |
| Ray Marching   | 11.4                |
| Ratio Tracking | 15.9                |

The time costs of the three methods are comparable, but ratio tracking gives slightly better results than delta tracking. Although ray marching is faster than the other two methods in this case, this may not always hold, as its cost depends on the step size: reducing the step size increases the time cost. We also compared different step sizes for ray marching. As shown in the images below, the results become more natural as the step size decreases.
delta tracking ray marching step 1 ray marching step 10 ray marching step 30
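As referenced above, here is a minimal sketch of delta-tracking distance sampling and ratio-tracking transmittance estimation for a heterogeneous medium. `density(p)` stands for the grid lookup through `VDBVolume`, `sigmaMaj` is the majorant extinction coefficient, and `m_sigma_t` is assumed to be a scalar extinction scale; all names are assumptions, not our exact code.

```cpp
#include <cmath>

// Delta tracking: sample a free-flight distance by rejection sampling against
// a homogenized (majorant) medium. sigma_t(x) = density(x) * m_sigma_t.
float sampleDistanceDeltaTracking(const Ray3f &ray, Sampler *sampler,
                                  float sigmaMaj, bool &scattered) const {
    float t = ray.mint;
    while (true) {
        // Tentative collision with the homogenized medium
        t -= std::log(1.f - sampler->next1D()) / sigmaMaj;
        if (t >= ray.maxt) { scattered = false; return ray.maxt; }  // left the medium
        // Accept as a real collision with probability sigma_t(x) / sigmaMaj
        if (sampler->next1D() < density(ray(t)) * m_sigma_t / sigmaMaj) {
            scattered = true;
            return t;
        }
        // Otherwise it was a null collision: keep marching
    }
}

// Ratio tracking: instead of stopping at the first accepted collision,
// accumulate the null-collision probability at every tentative collision.
float TrRatioTracking(const Ray3f &ray, Sampler *sampler, float sigmaMaj) const {
    float Tr = 1.f, t = ray.mint;
    while (true) {
        t -= std::log(1.f - sampler->next1D()) / sigmaMaj;
        if (t >= ray.maxt)
            break;
        Tr *= 1.f - density(ray(t)) * m_sigma_t / sigmaMaj;
    }
    return Tr;
}
```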
# Jiayi Sun

ID | Short Name | Points | Features (if required) & Comments
---|------------|--------|-----------------------------------
5.3 | Images as Textures | 5 |
5.20 | Modeling Meshes | 5 |
10.4 | Stratified Sampling | 10 |
10.12 | Object Instancing | 10 |
15.3 | Environment Map Emitter | 15 |
15.5 | Disney BSDF | 15 | subsurface, metallic, roughness, sheen, clearcoat
Total | | 60 |

## Images as Textures (5 points)

### Updated Files

- `include/lodepng.h`
- `src/lodepng.cpp`
- `src/imagetexture.cpp`

### Implementation

The `ImageTexture` class implements texture mapping from image files, supporting both nearest-neighbor and bilinear interpolation. We load the texture from a `.png` file using the lodepng library and store it as an array of pixel values. We support two wrap modes, repeat and clamp, to handle out-of-bound UV coordinates. We also apply inverse gamma correction to the pixel values to obtain proper colors:

\[
I_{\text{out}} =
\begin{cases}
\frac{I_{\text{in}}}{12.92} & \text{if } I_{\text{in}} \leq 0.04045 \\
\left( \frac{I_{\text{in}} + 0.055}{1.055} \right)^{2.4} & \text{if } I_{\text{in}} > 0.04045
\end{cases}
\]

This formula undoes the non-linear gamma encoding typically used in image formats, so that rendering calculations happen in linear color space. A short sketch of this conversion is included at the end of this subsection.

### Validation

We compare our result with Mitsuba3, using the same image textures for the mugs and the ground. We also show a non-textured version for comparison. We then scale the texture to check that the texture mapping is correct; here we use repeat mode for better visualization.
Mitsuba3 Mine with texture Mine without texture
Expand Shrink
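As referenced above, here is a minimal sketch of the sRGB-to-linear conversion applied to each loaded pixel, plus the two wrap modes; the helper names are illustrative and not necessarily those used in `imagetexture.cpp`.

```cpp
#include <cmath>

// Convert an sRGB-encoded channel value in [0, 1] to linear radiance,
// following the formula above.
inline float inverseGammaCorrect(float srgb) {
    if (srgb <= 0.04045f)
        return srgb / 12.92f;
    return std::pow((srgb + 0.055f) / 1.055f, 2.4f);
}

// Repeat and clamp wrap modes for a UV coordinate.
inline float wrapRepeat(float u) { return u - std::floor(u); }
inline float wrapClamp(float u)  { return std::fmin(std::fmax(u, 0.f), 1.f); }
```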
## Modeling Meshes (5 points)

To render our final image, we modeled meshes for multiple objects, including the window frame, table, hourglass, iceberg, and the water beneath the iceberg. We used an excellent indoor scene from Blendswap as the background. Most of our mesh modeling was done by editing existing models.

For the table and window frame, some normals were inverted, causing black shading during rendering. We enabled the face-orientation overlay in Blender and flipped the normals of the problematic faces.
Face Orientation
For the hourglass model, we separated its components into the frame, inner glass, outer glass, and sand, since they would receive different BSDFs or textures. The sand was then deleted to obtain an empty hourglass for placing the iceberg and buildings.
Hourglass Separation
We extracted the iceberg from an Arctic scene and filled in its bottom with smooth faces. For the water beneath the iceberg, we selected the lower half of the hourglass's inner glass, duplicated the faces, and closed the surface using extrusion, scaling, and filling operations. We then adjusted the scale and position to ensure no clipping occurs in the final scene.
Water Modelling from Hourglass
We also scaled and moved multiple objects in Blender to achieve better angles. Then we exported them as OBJ files. This process also included adding the camera and lighting in Blender to refine the scene setup. This method freed us from calculating transformation matrices manually in the XML file.
Scale and Position
For the environment map emitter, we used world lighting in Blender and mapped texture coordinates to the HDRI map. By rotating the HDRI, we positioned the crane to appear right outside the window. The resulting angle was then applied in the Nori XML configuration.
Environment Map Rotation
We also installed the Mitsuba Blender Add-on plugin to export the current scene configuration as a Mitsuba XML file. Since Mitsuba's XML format is quite similar to Nori's, this allowed us to quickly generate a rough configuration file for further adjustments.

## Stratified Sampling (10 points)

### Updated Files

- `src/stratified.cpp`
- `src/render.cpp`

### Implementation

Stratified sampling is implemented following PBRT 7.3 (Stratified Sampling). We divide each pixel into a grid of sub-pixels and draw one sample inside each sub-pixel. The `Stratified` class inherits from the `Sampler` class. The `m_sampleCount` parameter holds the total number of samples, adjusted to fit the grid resolution. We also add an `m_maxDim` parameter, which controls the maximum number of dimensions for the precomputed sample arrays, and `m_jitter`, which controls whether samples within each cell are perturbed. If the sampling dimension exceeds `m_maxDim`, the stratified sampler falls back to purely random samples. A short sketch of the sample generation is included at the end of this section.

### Validation

We compare the rendering results of stratified sampling with the original independent sampling, using a modified Cornell box scene and four integrators: `path_mats`, `path_mis`, `vol_path_mats`, and `vol_path_mis`, each at 32 and 512 spp.

For `path_mats` and `vol_path_mats`, stratified sampling clearly reduces noise both at 32 spp, where Monte Carlo integration has not yet converged, and at 512 spp, where the rendering is nearly converged. For `path_mis` and `vol_path_mis`, importance sampling already provides efficient sampling and lower noise, but the benefit of stratified sampling is still noticeable: in `path_mis`, the specular highlight below the dielectric sphere appears sharper with stratified sampling, and in `vol_path_mis`, stratified sampling further reduces noise on the walls surrounding the cloud as well as in the shaded region on the ground.

32 spp for path integrators:
independent, mats independent, mis stratified, mats stratified, mis
512 spp for path integrators:
independent, mats independent, mis stratified, mats stratified, mis
32 spp for volume path integrators:
independent, mats independent, mis stratified, mats stratified, mis
512 spp for volume path integrators:
independent, mats independent, mis stratified, mats stratified, mis
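As referenced above, here is a minimal sketch of jittered stratified 2D sample generation over a res x res grid in the PBRT style. The function is self-contained and illustrative; the actual `Stratified` sampler is structured differently around Nori's `Sampler` interface.

```cpp
#include <algorithm>
#include <random>
#include <utility>
#include <vector>

// Generate res*res jittered 2D samples, one per grid cell, then shuffle them
// so that consecutive samples are not correlated across dimensions.
std::vector<std::pair<float, float>> stratifiedSample2D(int res, bool jitter,
                                                        std::mt19937 &rng) {
    std::uniform_real_distribution<float> uniform(0.f, 1.f);
    std::vector<std::pair<float, float>> samples;
    samples.reserve(res * res);
    float invRes = 1.f / res;
    for (int y = 0; y < res; ++y) {
        for (int x = 0; x < res; ++x) {
            // Place the sample at the cell center, or jitter it within the cell
            float jx = jitter ? uniform(rng) : 0.5f;
            float jy = jitter ? uniform(rng) : 0.5f;
            samples.emplace_back((x + jx) * invRes, (y + jy) * invRes);
        }
    }
    std::shuffle(samples.begin(), samples.end(), rng);  // decorrelate ordering
    return samples;
}
```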
## Object Instancing (10 points)

### Updated Files

- `include/nori/instance.h`
- `include/nori/reference.h`
- `src/instance.cpp`
- `src/reference.cpp`
- `src/scene.cpp`

### Implementation

I implemented object instancing that supports single-object instancing based on a reference shape. The `Instance` class uses a `Reference` object as the base geometry and applies transformations only at the instance level. Geometry operations such as intersection tests and surface sampling are delegated to the reference object, with rays and points transformed into the instance's local frame for the computation and then back to world space. Each instance shares the same reference geometry but can have an independent transformation. A short sketch of the intersection logic is included at the end of this section.

### Validation

For validation, I compare the result of object instancing against three separately defined spheres, ensuring that both cases produce identical output images. To further verify the correctness of the transformations, I apply textures to the spheres and confirm that the rotation and alignment are accurate, as the textures appear correctly oriented on the instanced spheres.
no instance instance instance with texture
I also tested with 20 spheres and observed minimal differences in render time (from 6.8 min to 6.6 min) and in the memory usage of the whole Nori process (2.8 GB in both cases). This suggests that the performance benefits of instancing become noticeable only in more complex scenes.
no instance instance
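As referenced above, here is a minimal sketch of how an instance delegates ray intersection to its reference shape. The transform members (`m_toLocal`, `m_toWorld`) are assumptions, and non-uniform scaling would need extra care for the ray extents; this is not the exact code.

```cpp
// Illustrative Instance::rayIntersect: transform the ray into the reference
// shape's local frame, intersect there, and map the hit back to world space.
bool rayIntersect(const Ray3f &ray, Intersection &its) const {
    // World-space ray -> reference (local) space
    Ray3f localRay(m_toLocal * ray.o, m_toLocal * ray.d, ray.mint, ray.maxt);

    if (!m_reference->rayIntersect(localRay, its))
        return false;

    // Map the hit point and shading/geometric normals back to world space
    its.p          = m_toWorld * its.p;
    its.shFrame.n  = (m_toWorld * its.shFrame.n).normalized();
    its.geoFrame.n = (m_toWorld * its.geoFrame.n).normalized();
    return true;
}
```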
## Environment Map Emitter (15 points)

### Updated Files

- `include/nori/emitter.h`
- `src/envmap.cpp`
- `src/direct_mis.cpp`
- `src/direct_mats.cpp`
- `src/path_mis.cpp`
- `src/path_mats.cpp`
- `src/vol_path_mis.cpp`

### Implementation

The environment map emitter simulates infinite-area lighting using HDR images. It loads the HDR image, precomputes luminance-based importance sampling distributions, and supports transformations for flexible orientation in the scene. For a given direction, the emitter evaluates radiance by mapping the direction to the HDR image and looking up the corresponding pixel; a short sketch of this mapping is included after the comparison below. Importance sampling ensures that brighter regions of the map contribute more to the lighting, reducing noise in the Monte Carlo integration. The implementation follows PBRT 12.5 (Infinite Area Lights).

### Validation

We compare our results with Mitsuba3 on an outdoor scene containing spheres with different materials (from left to right: dielectric, diffuse with red albedo, and mirror). The scene is lit only by the environment map emitter. Compared to Mitsuba3, our implementation shows slightly more noise, though the difference is not significant. Mitsuba uses a `Hierarchical2D` structure for sampling environment light directions, which performs hierarchical importance sampling to improve sampling efficiency.
Mitsuba3 Mine
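As referenced above, here is a minimal sketch of the direction-to-texel mapping used when evaluating the environment map, assuming a latitude-longitude parameterization with z as the up axis; the actual `envmap.cpp` may apply a world-to-light transform first, and the helper name is illustrative.

```cpp
#include <cmath>

static const float kPi = 3.14159265358979323846f;

// Map a world-space direction (dx, dy, dz) to (u, v) texture coordinates of a
// latitude-longitude environment map.
inline void directionToUV(float dx, float dy, float dz, float &u, float &v) {
    float phi = std::atan2(dy, dx);            // angle around the up axis, [-pi, pi]
    if (phi < 0.f)
        phi += 2.f * kPi;                      // remap to [0, 2*pi)
    float cosTheta = std::fmin(std::fmax(dz, -1.f), 1.f);
    float theta = std::acos(cosTheta);         // angle from the pole, [0, pi]

    u = phi / (2.f * kPi);
    v = theta / kPi;
}
```

Radiance evaluation then interpolates the HDR image at (u, v), and the importance-sampling distribution is built from per-texel luminance weighted by sin(theta) to account for the distortion of this mapping.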
## Disney BSDF (15 points)

### Updated Files

- `include/nori/warp.h`
- `src/warp.cpp`
- `src/disney.cpp`

### Implementation

The Disney BSDF uses a combination of physically-based shading terms to represent a wide variety of materials. The five graded parameters we implement are subsurface, metallic, roughness, sheen, and clearcoat; we also implement specular, specular tint, sheen tint, and clearcoat gloss. We follow the 2012 paper as well as the UCSD tutorial, which provides a more detailed explanation of the Disney BSDF model.

- The **diffuse** component uses Lambertian reflection combined with a roughness-dependent correction term, where `F_D90()` and `F_D()` compute reflectance based on Schlick's Fresnel approximation; subsurface scattering is modeled with `F_SS()` and blended in with `lerp()` to approximate light penetration.
- The **metallic** component is a microfacet BRDF with roughness, using the GTR2 distribution in `D_m()`, the Smith masking-shadowing function (`Lambda_w()` and `G_m()`), and a Fresnel term (`F_m_hat()`) that blends the base color between dielectric and metallic behavior. The specular term is embedded in the metallic and dielectric reflectance calculation through Fresnel's approximation.
- The **sheen** component adds soft highlights for fabric-like surfaces, using Schlick's weight function and a sheen tint derived from the base color.
- The **clearcoat** component introduces a glossy top layer modeled with the GTR1 distribution in `D_c()`, combined with masking-shadowing (`G_c()`) and a fixed-index Fresnel term (`F_c()`).

Sampling blends three distributions (diffuse, metal, and clearcoat) with weights derived from the material parameters; importance sampling is handled by `Warp::squareToCosineHemisphere`, `Warp::squareToGTR2`, and `Warp::squareToGTR1` for the respective components. The final BSDF combines these terms into a single model, balancing the diffuse, metallic, sheen, and clearcoat contributions based on the user-defined parameters (a short sketch of this combination is included at the end of this section):

\[
\begin{aligned}
f_{\text{disney}} = & \, (1 - m_{\text{metallic}}) \cdot f_{\text{diffuse}} \\
& + (1 - m_{\text{metallic}}) \cdot m_{\text{sheen}} \cdot f_{\text{sheen}} \\
& + f_{\text{metal}} \\
& + 0.25 \cdot m_{\text{clearcoat}} \cdot f_{\text{clearcoat}}
\end{aligned}
\]

### Validation

We display five rows of spheres, where each row tests one parameter of the Disney BSDF; from top to bottom the parameters are metallic, subsurface, roughness, sheen, and clearcoat. The chosen parameter is linearly interpolated from 0 to 1 across the spheres from left to right, while all other parameters are held at their default values. The scene is illuminated by an area light positioned above the center (6th) sphere, ensuring consistent and focused lighting.
Metallic
Subsurface
Roughness
Sheen
Clearcoat
The difference caused by the **sheen** parameter is very subtle in the initial validation setup. We provide additional comparisons between `sheen = 0` and `sheen = 1` under identical conditions. It can be seen that when `sheen = 1`, the micro highlights become more prominent and spread out, giving the surface a softer, fabric-like appearance.
Sheen = 0 Sheen = 1
The effect of the **clearcoat** parameter is also subtle, due to the 0.25 scaling factor applied to its term, so we provide the same kind of comparison for it. When `clearcoat = 1`, the surface becomes glossier and more reflective.
Clearcoat = 0 Clearcoat = 1
The metallic and clearcoat lobes use the GTR2 and GTR1 distributions, respectively, in their D terms. We tested these distributions with the warptest program to ensure that the sampling procedures match their PDFs. Screenshots from warptest confirm accurate sampling for both GTR1 and GTR2; here we use `alpha = 0.1` and `alpha = 0.9` for both distributions.

**GTR1**

alpha = 0.1:
alpha = 0.9:
**GTR2**

alpha = 0.1:
alpha = 0.9:
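As referenced above, here is a minimal sketch of how the lobes are combined following the mixing formula. The lobe evaluation helpers (`evalDiffuse`, `evalSheen`, `evalMetal`, `evalClearcoat`) and the member names are placeholders for the functions described in the Implementation section; this is a sketch, not the exact code in `disney.cpp`.

```cpp
// Illustrative combination of the Disney lobes following the formula above.
Color3f evalDisney(const BSDFQueryRecord &bRec) const {
    Color3f diffuse   = evalDiffuse(bRec);    // Lambert + retro-reflection + subsurface blend
    Color3f sheen     = evalSheen(bRec);      // Schlick-weight based sheen lobe
    Color3f metal     = evalMetal(bRec);      // GTR2 microfacet lobe with blended Fresnel
    Color3f clearcoat = evalClearcoat(bRec);  // GTR1 microfacet lobe with fixed-index Fresnel

    return (1.f - m_metallic) * diffuse
         + (1.f - m_metallic) * m_sheen * sheen
         + metal
         + 0.25f * m_clearcoat * clearcoat;
}
```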
# Final Image

The image captures an iceberg gradually transforming into urban skyscrapers within an hourglass, together with the striking contrast between a stormy industrial scene outside and a tranquil wooden interior, embodying the concept of "in between": a balance and transition between nature and modernization, the past and the future.

The image was rendered with 256 samples per pixel (spp) and post-processed with NL-means denoising. The hourglass frame uses the Disney BSDF to simulate a metallic material. The meshes for the iceberg, melting water, and urban buildings were modeled by hand (mesh modeling), with image textures applied to add surface detail. The ice water droplets in the center of the hourglass were created using object instancing. The crane visible outside the window is lit by an environment map emitter, while the dark clouds are rendered as heterogeneous participating media. Additionally, a directional light provides extra illumination for the scene.
Final Image
Resources for the project scene:

- environment map: Sergej Majboroda @ Poly Haven
- urban buildings: Herminio Nieves @ 2013 city model
- iceberg: taobao
- cloud: OpenVDB
- hourglass: • Less • @ Sketchfab
- indoor scene (Blender): Wig42 @ Blendswap