Hikari.BXDF_REFLECTION Constant
BxDFReflTransFlags - Flags for controlling reflection/transmission sampling.

Hikari.D65_ILLUMINANT_VALUES Constant
D65_ILLUMINANT_VALUES
D65 illuminant spectral power distribution values (normalized to 100 at 560nm).

Hikari.D65_ILLUMINANT_WAVELENGTHS Constant
D65_ILLUMINANT_WAVELENGTHS
Wavelength sample points for the D65 illuminant spectrum (300-830nm, 5nm intervals).

Hikari.PRIMES Constant
First 1023 prime numbers (omitting 2). Used in the radical_inverse function.
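The radical inverse that consumes this prime table can be illustrated with a short, self-contained Python sketch (illustrative only, not the package's Julia implementation): it mirrors the base-b digits of an integer around the radix point, yielding low-discrepancy points in [0, 1).

```python
def radical_inverse(base: int, a: int) -> float:
    """Van der Corput radical inverse of the integer a in the given prime base."""
    inv_base = 1.0 / base
    reversed_digits = 0
    inv_base_n = 1.0
    while a > 0:
        # peel off the lowest digit and append it to the reversed number
        a, digit = divmod(a, base)
        reversed_digits = reversed_digits * base + digit
        inv_base_n *= inv_base
    return reversed_digits * inv_base_n
```

For example, in base 2 the sequence for a = 1, 2, 3 is 0.5, 0.25, 0.75.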
Hikari.AliasTable Type
AliasTable{V1,V2}
Walker's alias method for O(1) sampling from a discrete distribution. Each bin stores:
- p: The PMF (probability) for this index
- q: The threshold for choosing this index vs the alias
- alias: The aliased index if u > q
Following pbrt-v4's implementation in util/sampling.h. Parameterized to work with both CPU (Vector) and GPU arrays (CLArray, etc.).

Hikari.AliasTable Method
AliasTable(weights::AbstractVector{Float32})
Construct an alias table from weights (need not sum to 1).
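The construction can be sketched in plain Python using the same p/q/alias layout described above (function names here are illustrative, not the package API): overfull bins donate their excess probability to underfull bins until every bin holds exactly 1/n of the total mass.

```python
def build_alias_table(weights):
    """Walker/Vose alias-table construction from unnormalized weights."""
    n = len(weights)
    total = sum(weights)
    p = [w / total for w in weights]      # normalized PMF
    q = [n * pi for pi in p]              # per-bin mass, scaled so "full" == 1
    alias = list(range(n))
    small = [i for i, qi in enumerate(q) if qi < 1.0]
    large = [i for i, qi in enumerate(q) if qi >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                      # bin s tops up from bin l
        q[l] = (q[l] + q[s]) - 1.0        # l gave away (1 - q[s]) mass
        (small if q[l] < 1.0 else large).append(l)
    # leftover bins keep q >= 1 (or ~1 up to fp error), so they never alias
    return p, q, alias

def sample_alias(p, q, alias, u):
    """O(1) sample: pick a bin uniformly, then bin vs. alias by threshold q."""
    n = len(q)
    x = u * n
    i = min(int(x), n - 1)
    frac = x - i
    return i if frac < q[i] else alias[i]
```

With weights [1, 3], stratified uniform inputs produce index 1 exactly 75% of the time.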
Hikari.AmbientLight Method
AmbientLight(rgb::RGB{Float32})
Create an AmbientLight from an RGB color with automatic spectral conversion and photometric normalization, matching pbrt-v4's UniformInfiniteLight creation: scale = 1 / SpectrumToPhotometric(spectrum).
Example
light = AmbientLight(RGB{Float32}(0.1f0, 0.1f0, 0.1f0))

Hikari.BVHLightSampler Type
BVHLightSampler
Spatially-aware light sampler using a BVH over bounded lights. The default light sampler in pbrt-v4 (lightsamplers.h:259-404).
Construction builds the BVH on the CPU from light bounds. The resulting node array and bit trail array are uploaded to the GPU for kernel use.
Hikari.BVHLightSampler Method
BVHLightSampler(lights::Raycore.MultiTypeSet; scene_radius=10f0)
Build a BVHLightSampler from a MultiTypeSet of lights. Separates infinite and bounded lights, then builds the BVH over the bounded lights.

Hikari.BoxFilter Type
BoxFilter(radius=Point2f(0.5, 0.5))
Simple box filter with constant weight 1.0 within the support region. This is the simplest filter: uniform sampling within the pixel.

Hikari.CIEXYZTable Type
CIEXYZTable{V <: AbstractVector{Float32}}
GPU-compatible table for the CIE XYZ color matching functions. Use to_gpu(ArrayType, table) to convert to GPU arrays.
Hikari.CIEXYZTable Method
CIEXYZTable() -> CIEXYZTable{Vector{Float32}}
Create the default CPU CIE XYZ table from built-in data.

Hikari.CloudVolume Type
CloudVolume
A 3D grid representing cloud density (e.g., liquid water content in g/m³). The volume is axis-aligned and defined by its bounding box.
This is both a data structure and a material type: when a ray hits geometry with a CloudVolume material, volumetric rendering is performed.

Hikari.CloudVolume Method
CloudVolume(qˡ_data::AbstractArray{<:Real,3}, grid_extent::Tuple; ...)
Create a CloudVolume from raw LWC data with physical grid info.
Arguments
- qˡ_data: 3D array of liquid water specific humidity (kg/kg)
- grid_extent: Tuple (Lx, Ly, Lz) of domain size in meters
- r_eff: Effective droplet radius in meters (default: 10μm)
- ρ_air: Air density in kg/m³ (default: 1.0)
- scale: Additional scaling factor for visualization (default: 1.0)
Hikari.CoatedConductorMaterial Type
CoatedConductorMaterial
A layered material with a dielectric coating over a conductor (metal) base. This implements pbrt-v4's coatedconductor material using random walk sampling between the layers (the LayeredBxDF algorithm).
Fields
Interface (coating) layer
- interface_u_roughness: U roughness for the dielectric coating
- interface_v_roughness: V roughness for the dielectric coating
- interface_eta: Index of refraction of the dielectric coating
Conductor (base) layer
- conductor_eta: Complex index of refraction (real part) - OR use reflectance
- conductor_k: Complex index of refraction (imaginary part)
- reflectance: Alternative to eta/k - artist-friendly reflectance color
- conductor_u_roughness: U roughness for the conductor
- conductor_v_roughness: V roughness for the conductor
Volumetric scattering (between layers)
- thickness: Thickness of the coating layer (affects absorption)
- albedo: Single-scattering albedo of the medium between layers (0 = no absorption)
- g: Henyey-Greenstein asymmetry parameter for medium scattering
Random walk parameters
- max_depth: Maximum random walk depth
- n_samples: Number of samples for estimating the BSDF
- remap_roughness: Whether to remap roughness to microfacet alpha
Notes
- Critical: conductor eta/k are scaled by the interface IOR: ce /= ieta, ck /= ieta
- If conductor_eta is nothing, the reflectance-based approach is used
Hikari.CoatedConductorMaterial Method
CoatedConductorMaterial(; interface_roughness=0.0, interface_eta=1.5, ...)
Create a coated conductor material with keyword arguments.
Arguments
Interface (coating)
- interface_roughness: Coating roughness (scalar or (u,v) tuple), default 0
- interface_eta: Coating IOR (default 1.5)
Conductor (base) - use EITHER eta/k OR reflectance
- conductor_eta: Complex IOR real part (RGBSpectrum, tuple, or Texture)
- conductor_k: Complex IOR imaginary part
- reflectance: Alternative artist-friendly color (if eta/k not specified)
- conductor_roughness: Conductor roughness (scalar or (u,v) tuple), default 0.01
Volumetric
- thickness: Coating thickness (default 0.01)
- albedo: Medium albedo (default 0 = no medium)
- g: HG asymmetry (default 0 = isotropic)
Algorithm
- max_depth: Max random walk depth (default 10)
- n_samples: Number of samples (default 1)
- remap_roughness: Remap roughness to alpha (default true)
Examples
# Glossy coated gold
CoatedConductorMaterial(
interface_roughness=0.05,
conductor_eta=(0.143, 0.374, 1.442), # Gold
conductor_k=(3.983, 2.385, 1.603)
)
# Coated copper using reflectance
CoatedConductorMaterial(
reflectance=(0.95, 0.64, 0.54), # Copper-like
conductor_roughness=0.1
)
# Car paint effect (rough coating over smooth metal)
CoatedConductorMaterial(
interface_roughness=0.3,
conductor_roughness=0.01,
reflectance=(0.9, 0.1, 0.1) # Red metallic
)

Hikari.CoatedDiffuseMaterial Type
CoatedDiffuseMaterial
A layered material with a dielectric coating over a diffuse base. This implements pbrt-v4's coateddiffuse material using random walk sampling between the layers (the LayeredBxDF algorithm).
Fields
- reflectance: Diffuse reflectance of the base layer (RGB color)
- u_roughness: Roughness in the U direction for the dielectric coating
- v_roughness: Roughness in the V direction for the dielectric coating
- thickness: Thickness of the coating layer (affects absorption)
- eta: Index of refraction of the dielectric coating
- albedo: Single-scattering albedo of the medium between layers (0 = no absorption)
- g: Henyey-Greenstein asymmetry parameter for medium scattering
- max_depth: Maximum random walk depth
- n_samples: Number of samples for estimating the BSDF
- remap_roughness: Whether to remap roughness to microfacet alpha
Hikari.CoatedDiffuseMaterial Method
CoatedDiffuseMaterial(; reflectance, roughness=0, thickness=0.01, eta=1.5, ...)
Create a coated diffuse material with keyword arguments.
Arguments
- reflectance: Diffuse color (RGBSpectrum, tuple, or Texture)
- roughness: Surface roughness (scalar or (u,v) tuple)
- thickness: Coating thickness (default 0.01)
- eta: Index of refraction (default 1.5 for a typical dielectric)
- albedo: Medium albedo for absorption (default 0 = no medium)
- g: HG asymmetry parameter (default 0 = isotropic)
- max_depth: Max random walk depth (default 10)
- n_samples: Number of samples (default 1)
- remap_roughness: Remap roughness to alpha (default true)
Examples
# Simple coated diffuse (glossy plastic-like)
CoatedDiffuseMaterial(reflectance=(0.4, 0.45, 0.35), roughness=0)
# Rough coating
CoatedDiffuseMaterial(reflectance=(0.8, 0.2, 0.2), roughness=0.3)
# With absorbing medium
CoatedDiffuseMaterial(reflectance=(0.9, 0.9, 0.9), albedo=(0.8, 0.4, 0.2))

Hikari.ConductorMaterial Type
ConductorMaterial{EtaTex, KTex, RoughTex, ReflTex}
A metal/conductor material with wavelength-dependent complex index of refraction.
Metals reflect light based on the Fresnel equations for conductors, characterized by:
- η (eta): Real part of the complex IOR (PiecewiseLinearSpectrum for presets, or RGB texture)
- k: Imaginary part (extinction coefficient)
- roughness: Surface roughness for the microfacet model
Fields
- eta: Real part of the complex index of refraction (PiecewiseLinearSpectrum or texture)
- k: Extinction coefficient (PiecewiseLinearSpectrum or texture)
- roughness: Surface roughness
- reflectance: Color multiplier for the Fresnel reflectance (for tinting)
- remap_roughness: Whether to remap roughness to alpha
Hikari.ConductorMaterial Method
ConductorMaterial(; eta=(0.2, 0.2, 0.2), k=(3.9, 3.9, 3.9), roughness=0.1, remap_roughness=true)
Create a metal/conductor material with Fresnel reflectance.
Arguments
- eta: Real part of the complex IOR - PiecewiseLinearSpectrum, (r,g,b) tuple, RGBSpectrum, or Texture
- k: Extinction coefficient - PiecewiseLinearSpectrum, (r,g,b) tuple, RGBSpectrum, or Texture
- roughness: Surface roughness (0 = mirror-like, higher = more diffuse)
- reflectance: Color multiplier for tinting the metal (default white = no tint)
- remap_roughness: Whether to remap roughness to microfacet alpha
Presets
Use the provided metal constants for realistic materials: METAL_COPPER, METAL_GOLD, METAL_SILVER, METAL_ALUMINUM
Examples
ConductorMaterial() # Generic metal
ConductorMaterial(; METAL_COPPER..., roughness=0.05) # Polished copper
ConductorMaterial(eta=(0.2, 0.8, 0.2), k=(3, 3, 3)) # Custom green-tinted metal

Hikari.DDAMajorantIterator Type
DDAMajorantIterator
3D DDA iterator for traversing voxels along a ray through a majorant grid. Returns per-voxel majorant bounds for tight delta tracking.
Following pbrt-v4's DDAMajorantIterator implementation. The iterator state is immutable: dda_next returns a new iterator with updated state.
Note: uses NTuple instead of Vec types for GPU compatibility.
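The traversal itself is the classic Amanatides-Woo 3D DDA. A Python sketch over a grid spanning the unit cube (an illustration under that assumption, not the GPU iterator; the real iterator also returns per-voxel majorants):

```python
import math

def dda_voxels(o, d, res, t_max):
    """List (voxel_index, t_enter, t_exit) along a ray through a [0,1]^3 grid.
    Assumes the ray origin o is inside the grid."""
    voxel = [min(int(o[k] * res[k]), res[k] - 1) for k in range(3)]
    step, t_delta, t_next = [0] * 3, [0.0] * 3, [0.0] * 3
    for k in range(3):
        if d[k] == 0.0:
            step[k], t_delta[k], t_next[k] = 0, math.inf, math.inf
        else:
            step[k] = 1 if d[k] > 0 else -1
            t_delta[k] = (1.0 / res[k]) / abs(d[k])   # t to cross one voxel
            next_bound = (voxel[k] + (1 if d[k] > 0 else 0)) / res[k]
            t_next[k] = (next_bound - o[k]) / d[k]    # t of next grid plane
    t, out = 0.0, []
    while t < t_max:
        axis = min(range(3), key=lambda k: t_next[k])  # nearest crossing
        out.append((tuple(voxel), t, min(t_next[axis], t_max)))
        t = t_next[axis]
        voxel[axis] += step[axis]
        if not (0 <= voxel[axis] < res[axis]):
            break                                      # left the grid
        t_next[axis] += t_delta[axis]
    return out
```

An axis-aligned ray through a 4³ grid visits exactly four voxels, each segment 0.25 long.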
Hikari.DDAMajorantIterator Method
Create an empty/invalid DDA iterator (for rays that miss the grid).

Hikari.DenoiseConfig Type
DenoiseConfig
Configuration parameters for the à-trous wavelet denoiser.
Fields
- iterations: Number of filter passes (each doubles the filter radius)
- sigma_color: Color edge-stopping threshold (luminance sensitivity)
- sigma_normal: Normal edge-stopping threshold (angular sensitivity)
- sigma_depth: Depth edge-stopping threshold (distance sensitivity)
- use_variance: Whether to use per-pixel variance to guide filtering
Hikari.DenoiseConfig Method
DenoiseConfig(; iterations=5, sigma_color=4.0, sigma_normal=128.0, sigma_depth=1.0, use_variance=true)
Create a denoiser configuration with sensible defaults.

Hikari.DiffuseAreaLight Type
DiffuseAreaLight{LeTex}
Per-triangle area light for emissive surfaces. Following pbrt-v4's DiffuseAreaLight, stores triangle geometry for sampling and an emission texture/color for evaluation.
Each emissive triangle creates one DiffuseAreaLight. The light sampler (PowerLightSampler) includes these for importance sampling, enabling MIS with BSDF sampling.
Fields
- vertices: Triangle vertex positions (for uniform sampling)
- normal: Geometric normal of the triangle
- area: Triangle area (precomputed)
- uv: Vertex UV coordinates (for texture evaluation via barycentric interpolation)
- Le: Emitted radiance (RGBSpectrum for constant, TextureRef for textured emission)
- scale: Intensity multiplier applied to Le
- two_sided: If true, emits from both sides of the surface
Hikari.DiffuseTransmission Type
Type alias: DiffuseTransmission is the same as DiffuseTransmissionMaterial.

Hikari.DiffuseTransmissionMaterial Type
DiffuseTransmissionMaterial{RTex, TTex}
A material that diffusely reflects and transmits light.
Models surfaces like paper, thin fabric, or leaves, where light scatters diffusely on both sides. The reflection and transmission are independent Lambertian distributions.
Fields
- reflectance: Diffuse reflectance color (same hemisphere as incident)
- transmittance: Diffuse transmittance color (opposite hemisphere)
- scale: Intensity multiplier applied to both R and T
Physics
- Reflection: f = R/π (same hemisphere)
- Transmission: f = T/π (opposite hemisphere)
- Sampling: probability proportional to max(R) and max(T)
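The lobe-selection rule above can be sketched in Python (illustrative names, not the package API; a full sampler would additionally draw a cosine-weighted direction in the chosen hemisphere):

```python
import math

def sample_diffuse_transmission(R, T, u_choice):
    """Pick reflection vs. transmission with probability proportional to the
    max components of R and T; returns (lobe, f, selection_probability)."""
    pr, pt = max(R), max(T)
    if pr + pt == 0.0:
        return None                                 # black material, no sample
    p_reflect = pr / (pr + pt)
    if u_choice < p_reflect:
        f = tuple(c / math.pi for c in R)           # Lambertian, same hemisphere
        return ("reflect", f, p_reflect)
    f = tuple(c / math.pi for c in T)               # Lambertian, opposite hemisphere
    return ("transmit", f, 1.0 - p_reflect)
```

With R = 0.8 and T = 0.2, reflection is chosen 80% of the time.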
Usage
# Thin white paper (equal reflection and transmission)
paper = DiffuseTransmission(reflectance=(0.8, 0.8, 0.8), transmittance=(0.5, 0.5, 0.5))
# Green leaf (green transmission, less reflection)
leaf = DiffuseTransmission(reflectance=(0.2, 0.3, 0.1), transmittance=(0.1, 0.5, 0.1))

Hikari.DiffuseTransmissionMaterial Method
DiffuseTransmissionMaterial(; reflectance, transmittance, scale=1.0)
Create a diffuse transmission material with keyword arguments.
Arguments
- reflectance: Diffuse reflection color (RGBSpectrum, tuple, or Texture)
- transmittance: Diffuse transmission color (RGBSpectrum, tuple, or Texture)
- scale: Intensity multiplier (default 1.0)
Examples
# Thin translucent material
DiffuseTransmission(reflectance=(0.5, 0.5, 0.5), transmittance=(0.3, 0.3, 0.3))
# Pure transmission (no reflection)
DiffuseTransmission(reflectance=(0, 0, 0), transmittance=(1, 1, 1))

Hikari.DirectionCone Type
DirectionCone
Represents a cone of directions with axis w and half-angle cosine cosθ.
- cosθ = 1: a single direction (degenerate cone)
- cosθ = -1: the entire sphere
- cosθ = Inf32: the empty cone (no directions)
Following pbrt-v4's DirectionCone (vecmath.h:1784-1850).

Hikari.DirectionalLight Type
A directional light takes no medium interface: the only reasonable medium for it is vacuum, since the light is infinitely far away and any other medium would absorb all of its light.
Hikari.DirectionalLight Method
DirectionalLight(rgb::RGB{Float32}, direction; illuminance=nothing)
Create a DirectionalLight from an RGB color with automatic spectral conversion and photometric normalization, matching pbrt-v4's light creation pattern.
Arguments
- rgb: RGB color (intensity encoded in the color values)
- direction: Direction the light travels (away from the source)
- illuminance: Optional target illuminance in lux. If specified, overrides the RGB intensity.
Example
# Directional light traveling in -Y direction (sunlight from above)
light = DirectionalLight(RGB{Float32}(1f0, 1f0, 1f0), Vec3f(0, -1, 0))

Hikari.Distribution2D Type
Distribution2D{V<:AbstractVector{Float32}, M<:AbstractMatrix{Float32}}
GPU-compatible 2D distribution that stores all data in flat arrays/matrices. Avoids nested device arrays, which cause SPIR-V validation errors on OpenCL.
The conditional distribution data is stored as 2D matrices where each column represents one conditional distribution:
- conditional_func[i, v] = func value at index i for row v
- conditional_cdf[i, v] = cdf value at index i for row v
- conditional_func_int[v] = func_int for row v
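Sampling such a distribution is a marginal draw over rows followed by a conditional draw within the chosen row. A plain-Python sketch of that two-step discrete inversion (list-based for clarity rather than the flat GPU layout; names are illustrative):

```python
def build_cdf(f):
    """Normalized CDF of a nonnegative list with positive total; cdf[0] = 0."""
    cdf = [0.0]
    for v in f:
        cdf.append(cdf[-1] + v)
    total = cdf[-1]
    return [c / total for c in cdf], total

def sample_discrete_cdf(cdf, u):
    """Largest index i with cdf[i] <= u, via binary search."""
    lo, hi = 0, len(cdf) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cdf[mid] <= u:
            lo = mid
        else:
            hi = mid
    return lo

def sample_2d(func, u1, u2):
    """Sample (i, v) with probability proportional to func[v][i]."""
    row_ints = [sum(row) for row in func]
    marg_cdf, _ = build_cdf(row_ints)
    v = sample_discrete_cdf(marg_cdf, u1)     # marginal over rows
    cond_cdf, _ = build_cdf(func[v])
    i = sample_discrete_cdf(cond_cdf, u2)     # conditional within row v
    return i, v
```

The flat-matrix layout above is exactly this scheme with one precomputed conditional CDF per column.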
Hikari.Distribution2D Method
Distribution2D(func::Matrix{Float32})
Construct a GPU-friendly 2D distribution directly from a function matrix. The matrix has dimensions (nv, nu), where nv is the height (rows) and nu is the width (columns).

Hikari.Emissive Type
Emissive{LeTex}
Emission data for area lights. Always used inside MediumInterface.arealight to add light emission to surfaces. A surface with an arealight both reflects light (via the MediumInterface's BSDF material) AND emits light.
Fields
- Le: Emitted radiance (color/intensity texture or TextureRef)
- scale: Intensity multiplier applied to Le
- two_sided: If true, emits from both sides of the surface
Usage
# Surface that reflects AND glows:
MediumInterface(MatteMaterial(Kd=diffuse_tex);
arealight=Emissive(Le=glow_color, scale=10))
# Pure emitter (no reflection):
MediumInterface(Emissive(Le=bright_tex, scale=50))

Hikari.Emissive Method
Emissive(; Le=RGBSpectrum(1), scale=1.0, two_sided=false)
Create emission data for use in MediumInterface.arealight.
Examples
# Diffuse surface with warm glow
MediumInterface(MatteMaterial(Kd=wood_tex);
arealight=Emissive(Le=RGBSpectrum(15, 12, 8), scale=5.0, two_sided=true))
# Pure area light
MediumInterface(Emissive(Le=(1, 1, 1), scale=100.0))

Hikari.EnvironmentLight Type
Environment light that illuminates the scene from all directions using an HDR environment map. Uses equirectangular (lat-long) mapping.

Hikari.EnvironmentLight Method
Convenience constructor that loads an environment map from a file.
- rotation: Mat3f rotation matrix (use rotation_matrix(angle_deg, axis) to create one)

Hikari.EnvironmentMap Type
Environment map texture for HDR image-based lighting. Supports sampling by direction vector (for environment lights). Includes an importance sampling distribution based on luminance.
Uses equal-area (octahedral) mapping like pbrt-v4's ImageInfiniteLight. Expects SQUARE images in equal-area format.
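The square-to-sphere correspondence can be illustrated with the basic octahedral mapping below (a Python sketch; pbrt-v4's equal-area variant additionally warps the square so solid angle is preserved exactly, but the fold/unfold structure is the same):

```python
import math

def _sign(x):
    return 1.0 if x >= 0 else -1.0

def dir_to_oct(d):
    """Map a unit direction to [0,1]^2 by projecting onto the octahedron."""
    x, y, z = d
    a = abs(x) + abs(y) + abs(z)
    x, y = x / a, y / a
    if z < 0:  # fold the lower hemisphere over the square's corners
        x, y = (1 - abs(y)) * _sign(x), (1 - abs(x)) * _sign(y)
    return ((x + 1) * 0.5, (y + 1) * 0.5)

def oct_to_dir(u, v):
    """Inverse mapping: a point in [0,1]^2 back to a unit direction."""
    x, y = 2 * u - 1, 2 * v - 1
    z = 1 - abs(x) - abs(y)
    if z < 0:  # unfold the lower hemisphere
        x, y = (1 - abs(y)) * _sign(x), (1 - abs(x)) * _sign(y)
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

The round trip is exact up to floating-point error, which is why the map is convenient for texture lookup by direction.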
Hikari.EnvironmentMap Method
Sample the environment map by direction vector. The textures parameter is used to dereference TextureRef fields when the EnvironmentMap is stored in a MultiTypeSet.

Hikari.FastMaterialProps Type
Simple material properties extracted from Hikari materials for fast shading.

Hikari.Film Method
- resolution: full resolution of the image in pixels
- crop_bounds: subset of the image to render, in the [0, 1] range
- diagonal: length of the diagonal of the film's physical area in mm
- scale: scale factor applied to the samples when writing the image
Hikari.FilmSensor Type
FilmSensor
Film sensor parameters for physically-based image formation. Matches pbrt-v4's film sensor simulation.
Fields
- iso: ISO sensitivity (default 100). Higher = brighter.
- exposure_time: Exposure time in seconds (default 1.0).
- white_balance: Color temperature in Kelvin for white balance (default 0 = disabled). When set, applies Bradford chromatic adaptation from the illuminant to D65.
The imaging ratio is computed as imagingRatio = exposure_time * iso / 100, matching pbrt-v4's PixelSensor behavior.
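The imaging-ratio formula is simple enough to state as a one-liner (a Python illustration of the arithmetic above, not the package API); note that ISO 100 at a 1-second exposure is the identity scale:

```python
def imaging_ratio(iso: float, exposure_time: float) -> float:
    """Scale applied to radiance samples: exposure_time * iso / 100."""
    return exposure_time * iso / 100.0
```

So halving the exposure time while doubling the ISO leaves the image brightness unchanged.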
Example
# Nikon D850 settings from pbrt bunny-cloud scene
sensor = FilmSensor(iso=90, exposure_time=1.0, white_balance=5000)
postprocess!(film; sensor=sensor, tonemap=:aces)

Hikari.FilterSample Type
FilterSample
Result of sampling a filter. Contains the offset position and the weight for this sample (filter value / pdf).

Hikari.FilterSampler Type
FilterSampler{F<:AbstractFilter}
Importance sampler for filters that don't have analytical sampling. Tabulates the filter function and uses PiecewiseConstant2D for sampling.
For BoxFilter and TriangleFilter, direct analytical sampling is used instead.

Hikari.GPUFilterParams Type
GPUFilterParams
GPU-compatible filter parameters for kernel use. For filters requiring tabulated importance sampling (Gaussian, Mitchell, Lanczos), the table data must be passed separately to the kernel.

Hikari.GPUFilterSamplerData Type
GPUFilterSamplerData
GPU-compatible tabulated data for importance-sampling filters. Stores the distribution data needed for sampling Gaussian/Mitchell/Lanczos filters.
This matches pbrt-v4's FilterSampler, which uses PiecewiseConstant2D.

Hikari.GPUFilterSamplerData Method
Build GPU-compatible filter sampler data from a filter. Returns nothing for Box/Triangle filters (they use analytical sampling).

Hikari.GaussianFilter Type
GaussianFilter(radius=Point2f(1.5, 1.5), sigma=0.5)
Gaussian filter with configurable sigma. The filter is truncated at the radius and normalized by subtracting the value at the boundary.

Hikari.GlassMaterial Type
GlassMaterial(Kr, Kt, u_roughness, v_roughness, index, remap_roughness)
Glass/dielectric material with reflection and transmission.
- Kr: Spectral reflectance (Texture or TextureRef)
- Kt: Spectral transmittance (Texture or TextureRef)
- u_roughness: Roughness in the u direction (0 = perfect specular)
- v_roughness: Roughness in the v direction (0 = perfect specular)
- index: Index of refraction
- remap_roughness: Whether to remap roughness to alpha
Hikari.GlassMaterial Method
GlassMaterial(; Kr=RGBSpectrum(1), Kt=RGBSpectrum(1), roughness=0, index=1.5, remap_roughness=true)
Create a glass/dielectric material with reflection and transmission.
Arguments
- Kr: Reflectance color
- Kt: Transmittance color
- roughness: Surface roughness (0 = perfect specular; can be a single value or a (u,v) tuple)
- index: Index of refraction (1.5 for glass, 1.33 for water, 2.4 for diamond)
- remap_roughness: Whether to remap roughness to microfacet alpha
Examples
GlassMaterial() # Clear glass
GlassMaterial(Kt=(1, 0.9, 0.8), index=1.5) # Amber tinted
GlassMaterial(roughness=0.1) # Frosted glass
GlassMaterial(roughness=(0.1, 0.05)) # Anisotropic roughness

Hikari.GridMedium Type
GridMedium
A participating medium with spatially varying density. Uses a 3D grid for density and a coarser majorant grid for efficient sampling.
Note: uses Vec{3, Int32} for density_res for GPU compatibility.

Hikari.HGPhaseFunction Type
HGPhaseFunction
Henyey-Greenstein phase function for anisotropic scattering.
- g > 0: forward scattering (clouds typically g ≈ 0.85)
- g = 0: isotropic scattering
- g < 0: backward scattering
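The textbook Henyey-Greenstein density is p(cosθ) = (1 - g²) / (4π (1 + g² - 2g cosθ)^{3/2}), which integrates to 1 over the sphere. A Python sketch (illustrative, not the package's implementation; this convention peaks at cosθ = 1 for g > 0, matching "forward scattering"):

```python
import math

def hg_phase(g: float, cos_theta: float) -> float:
    """Henyey-Greenstein phase function value for scattering angle theta."""
    denom = 1.0 + g * g - 2.0 * g * cos_theta
    return (1.0 - g * g) / (4.0 * math.pi * denom * math.sqrt(denom))
```

For g = 0 this reduces to the isotropic value 1/(4π), and for cloud-like g = 0.85 it is sharply peaked in the forward direction.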
Hikari.HomogeneousMajorantIterator Type
HomogeneousMajorantIterator
Simple iterator that returns a single majorant segment for homogeneous media. Provides an iterator interface consistent with DDAMajorantIterator.

Hikari.HomogeneousMedium Type
HomogeneousMedium
A participating medium with constant properties throughout. The simplest medium type: the majorant equals the actual extinction everywhere.

Hikari.LanczosSincFilter Type
LanczosSincFilter(radius=Point2f(4.0, 4.0), tau=3.0)
Lanczos-windowed sinc filter for high-quality reconstruction. The sinc function is windowed by another sinc to reduce ringing.

Hikari.LayeredBSDFSample Type
LayeredBSDFSample - internal sample result for LayeredBxDF interfaces
Contains the sampled direction, BSDF value, pdf, and flags indicating whether the sample is reflection/transmission and specular/glossy.

Hikari.LightBVHNode Type
LightBVHNode
BVH node for the light sampler. Stores unquantized LightBounds fields plus tree connectivity. Interior nodes store the child1 index; child0 is always at node_idx + 1 (the left child immediately follows its parent in the array).
Simplification vs. pbrt-v4: we skip CompactLightBounds quantization. A node is ~64 bytes instead of 32, but this avoids OctahedralVector/bit-packing complexity. With 37k lights this is ~5 MB, negligible on GPU.

Hikari.LightBounds Type
LightBounds
Stores spatial and directional bounds for a light or group of lights. Used by BVHLightSampler for importance-based light selection.
Following pbrt-v4's LightBounds (lights.h:104-125).

Hikari.LightSamplerData Type
LightSamplerData
GPU-compatible light sampler data structure. Stores the alias table as flat arrays that can be uploaded to the GPU.
Hikari.MajorantGrid Type
MajorantGrid
Coarse grid storing maximum extinction coefficients for DDA traversal. Used to provide tight majorant bounds in heterogeneous media.
Note: uses Vec{3, Int32} instead of Vec3i (Int64) for GPU compatibility.
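Majorants drive delta tracking: tentative collision distances are sampled from the majorant, and each is accepted as a real collision with probability σ_t/σ_maj. A homogeneous 1D Python sketch (illustrative, not the grid-based implementation; a tighter majorant only reduces the number of rejected "null" collisions, never the answer):

```python
import math
import random

def delta_track(sigma_t, sigma_maj, t_max, rng):
    """Distance to a real collision, or None if the ray escapes past t_max.
    Requires sigma_maj >= sigma_t > 0 in a homogeneous medium."""
    t = 0.0
    while True:
        t -= math.log(1.0 - rng.random()) / sigma_maj   # tentative collision
        if t >= t_max:
            return None                                  # escaped the segment
        if rng.random() < sigma_t / sigma_maj:           # real vs. null collision
            return t
```

The escape probability equals the true transmittance exp(-σ_t t_max) regardless of how loose the majorant is.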
Hikari.MatrixCamera Type
MatrixCamera <: Camera
A camera defined by view and projection matrices, matching the convention used by Makie's Camera struct (OpenGL-style: the camera looks along -z in camera space).
This is a fallback camera for scenes that don't use Camera3D (e.g. Axis3, which uses its own camera type but still sets view/projection matrices on the scene camera).
The view matrix is world-to-camera, and the projection matrix is camera-to-clip (NDC). Both use OpenGL conventions (right-handed, camera looks along -z, clip z in [-1, 1]).

Hikari.MatrixCamera Method
MatrixCamera(view, projection, resolution, screen_window)
Construct a MatrixCamera with a custom screen window (NDC sub-region). Used for rendering a cropped viewport where film pixels correspond to a sub-region of the full projection.
- screen_window: NDC bounds for the visible region (default full: [-1,-1] to [1,1])

Hikari.MatrixCamera Method
MatrixCamera(view::Mat4f, projection::Mat4f, resolution::Point2f)
Construct a MatrixCamera from Makie-style view and projection matrices.
- view: world-to-camera matrix (OpenGL convention, camera looks along -z)
- projection: camera-to-clip matrix (OpenGL perspective projection)
- resolution: film resolution as Point2f(width, height)
Hikari.MatteMaterial Type
MatteMaterial(Kd::Texture, σ::Texture)
Matte (diffuse) material with a Lambertian or Oren-Nayar BRDF.
- Kd: Spectral diffuse reflection (color texture or TextureRef)
- σ: Scalar roughness for the Oren-Nayar model (0 = Lambertian)

Hikari.MatteMaterial Method
MatteMaterial(; Kd=RGBSpectrum(0.5), σ=0.0)
Create a matte (diffuse) material with optional Oren-Nayar roughness.
Arguments
- Kd: Diffuse color (RGBSpectrum, (r,g,b) tuple, or Texture)
- σ: Roughness angle in degrees (0 = Lambertian, >0 = Oren-Nayar)
Examples
MatteMaterial(Kd=RGBSpectrum(0.8, 0.2, 0.2)) # Red matte
MatteMaterial(Kd=(0.8, 0.2, 0.2), σ=20) # Red with roughness
MatteMaterial(Kd=my_texture) # Textured

Hikari.MediumInteraction Type
MediumInteraction
Represents an interaction point within a participating medium. Created when a real scattering event occurs during delta tracking.

Hikari.MediumInterface Type
MediumInterface{M<:Material, I, O}
Material wrapper that combines a surface BSDF with medium boundaries. Surfaces define boundaries between media.
Emission is handled separately by DiffuseAreaLight (registered per-triangle in scene.lights, not on the material).
Fields
- material: The underlying BSDF material (e.g., MatteMaterial, GlassMaterial)
- inside: Medium for the inside of the surface, or nothing for vacuum
- outside: Medium for the outside of the surface, or nothing for vacuum
Usage
# Glass with fog inside
MediumInterface(GlassMaterial(...); inside=fog)
# Simple material with no medium transition
MediumInterface(MatteMaterial(Kd, σ))

Hikari.MediumInterfaceIdx Type
MediumInterfaceIdx
Internal index wrapper with material and medium indices for GPU-compatible dispatch. Created during scene building from MediumInterface.
Fields (all SetKey indices)
- material: SetKey into the materials container (BSDF material)
- inside: SetKey for the inside medium (invalid = vacuum)
- outside: SetKey for the outside medium (invalid = vacuum)

Hikari.MediumProperties Type
MediumProperties
Properties of a participating medium at a specific point. Returned by medium sampling functions.
Hikari.MetalMaterial Type
Type alias: MetalMaterial is the same as ConductorMaterial (legacy alias).

Hikari.MirrorMaterial Type
MirrorMaterial(Kr::Texture)
Perfect mirror (specular reflection) material.
- Kr: Spectral reflectance (color texture or TextureRef)

Hikari.MirrorMaterial Method
MirrorMaterial(; Kr=RGBSpectrum(0.9))
Create a perfect mirror (specular reflection) material.
Arguments
- Kr: Reflectance color (RGBSpectrum, (r,g,b) tuple, or Texture)
Examples
MirrorMaterial() # Default silver mirror
MirrorMaterial(Kr=RGBSpectrum(0.95, 0.93, 0.88)) # Gold-tinted
MirrorMaterial(Kr=(0.9, 0.9, 0.9)) # Using tuple

Hikari.MitchellFilter Type
MitchellFilter(radius=Point2f(2.0, 2.0), B=1/3, C=1/3)
Mitchell-Netravali filter: a family of cubic filters parameterized by B and C. The default B=C=1/3 gives a good balance between ringing and blurring.

Hikari.MixMaterial Type
MixMaterial{M1, M2, AmountTex}
A material that stochastically blends between two sub-materials based on a mixing amount.
Following pbrt-v4, the material selection is deterministic, based on:
- the intersection position and viewing direction
- a hash function to generate deterministic randomness
- the amount texture value at the hit point
Fields
- material1: First material (selected when amount → 0)
- material2: Second material (selected when amount → 1)
- amount: Texture controlling the blend ratio (0 = material1, 1 = material2)
- material1_idx: Index of material1 in the materials tuple
- material2_idx: Index of material2 in the materials tuple
Usage
MixMaterial is resolved at intersection time, before material evaluation. The integrator should call choose_material() to get the actual material index to use for the hit point, then proceed with normal material evaluation.
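The hash-based selection can be illustrated with a toy stand-in for choose_material() (Python; mix_bits and the seed derivation are illustrative assumptions, not the package's hash): mix the hit data into a uniform value, then compare against the amount.

```python
def mix_bits(x: int) -> int:
    """64-bit finalizer in the MurmurHash3 style: maps ints to well-mixed bits."""
    x ^= x >> 33
    x = (x * 0xFF51AFD7ED558CCD) & 0xFFFFFFFFFFFFFFFF
    x ^= x >> 33
    x = (x * 0xC4CEB9FE1A85EC53) & 0xFFFFFFFFFFFFFFFF
    return x ^ (x >> 33)

def choose_material(seed: int, amount: float) -> int:
    """Deterministically pick material 1 or 2; same seed always gives the
    same choice, and the fraction choosing 2 approaches `amount`."""
    u = mix_bits(seed) / 2.0**64          # deterministic uniform in [0, 1)
    return 2 if u < amount else 1
```

Determinism matters here: re-evaluating the same hit point (e.g. for MIS or shadow rays) must select the same sub-material.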
Hikari.MixMaterial Method
MixMaterial(; materials, amount, material_indices)
Create a MixMaterial with keyword arguments.
Arguments
- materials: Tuple of two materials (material1, material2)
- amount: Mixing amount (0-1 scalar, texture, or image path)
- material_indices: Tuple of SetKey for each material
Examples
# Simple 50-50 blend
MixMaterial(
materials=(gold_material, red_diffuse),
amount=0.5,
material_indices=(gold_idx, diffuse_idx)
)
# Texture-based blend (e.g., mask texture)
MixMaterial(
materials=(gold_material, red_diffuse),
amount=mask_texture,
material_indices=(gold_idx, diffuse_idx)
)

Hikari.MultiMaterialQueue Type
MultiMaterialQueue{N}
Container holding N separate work queues, one per material type. Items are routed to the correct queue based on SetKey.material_type.
Following pbrt-v4's MultiWorkQueue pattern:
- push routes to the correct sub-queue based on type
- each sub-queue is processed with a type-specialized kernel
Hikari.NanoVDBMedium Type
NanoVDBMedium
Direct NanoVDB volume sampling medium that keeps the buffer in native format. Matches pbrt-v4's NanoVDBMedium implementation exactly.
The buffer contains the entire decompressed NanoVDB grid data. Tree traversal is done on the fly using byte offsets.

Hikari.NanoVDBMedium Method
NanoVDBMedium(data::Array{Float32,3}; bounds, σ_a, σ_s, g, majorant_res, buffer_alloc)
Construct a NanoVDBMedium from a dense 3D array by building a sparse NanoVDB tree. Only leaf blocks (8³) containing non-zero voxels are stored, giving significant memory savings for sparse volumetric data like cloud fields.
Arguments
- data: Dense 3D Float32 array of density/opacity values
- bounds: World-space bounding box (Bounds3)
- σ_a: Absorption coefficient (default: 0 for pure-scattering clouds)
- σ_s: Scattering coefficient (default: 1)
- g: Henyey-Greenstein asymmetry parameter
- majorant_res: Resolution of the majorant grid (default: 64³)
- buffer_alloc: Array constructor for the target device (default: Vector{UInt8})
Hikari.NanoVDBMedium Method
NanoVDBMedium(filepath::String; σ_a, σ_s, g, transform, majorant_res)
Construct a NanoVDBMedium from a NanoVDB file.
Arguments
- filepath: Path to the .nvdb file
- σ_a: Absorption coefficient (RGBSpectrum)
- σ_s: Scattering coefficient (RGBSpectrum)
- g: Henyey-Greenstein asymmetry parameter
- transform: Optional 3x3 rotation matrix (medium-to-world, like pbrt's Rotate)
- majorant_res: Resolution of the majorant grid (default 64³)
For the pbrt bunny-cloud scene, use: transform = RotZ(π) * RotX(π/2) # rotate 180° around Z, then 90° around X

Hikari.PCG32State Type
PCG32State
GPU-compatible immutable PCG32 random number generator state. Uses a tuple-like struct for stack allocation on the GPU.
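PCG32 (the XSH-RR variant) advances a 64-bit LCG state and derives each 32-bit output from the pre-advance state via an xorshift and a data-dependent rotate. A Python sketch of one step in the same immutable style as the struct above (illustrative; seeding conventions are omitted):

```python
MULT = 6364136223846793005
MASK = (1 << 64) - 1

def pcg32_next(state: int, inc: int):
    """One PCG32 XSH-RR step: returns (new_state, 32-bit output)."""
    new_state = (state * MULT + (inc | 1)) & MASK   # LCG advance (inc must be odd)
    xorshifted = (((state >> 18) ^ state) >> 27) & 0xFFFFFFFF
    rot = state >> 59                               # top 5 bits select rotation
    out = ((xorshifted >> rot) | (xorshifted << ((-rot) & 31))) & 0xFFFFFFFF
    return new_state, out
```

Returning the new state instead of mutating it mirrors the GPU-friendly immutable design: each kernel thread threads its own state value through the calls.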
sourceHikari.PWDirectLightingResult Type
PWDirectLightingResultResult of direct lighting calculation for one light sample.
sourceHikari.PWEscapedRayWorkItem Type
PWEscapedRayWorkItemA ray that escaped the scene (missed all geometry). Used to compute contribution from environment/infinite lights.
sourceHikari.PWHitAreaLightWorkItem Type
PWHitAreaLightWorkItemA ray that hit an emissive surface (area light). Contains info needed to compute emission contribution with MIS.
sourceHikari.PWLightSample Type
PWLightSampleResult of sampling a light source with spectral radiance.
sourceHikari.PWLightSampleContext Type
PWLightSampleContextContext for sampling lights - position, geometric normal, shading normal. Used for MIS weight computation.
sourceHikari.PWMISContext Type
PWMISContextContext needed for computing MIS weights between light and BSDF sampling.
sourceHikari.PWMaterialEvalResult Type
PWMaterialEvalResultResult of evaluating a material for the wavefront pipeline.
sourceHikari.PWMaterialEvalWorkItem Type
PWMaterialEvalWorkItemWork item for material/BSDF evaluation after ray intersection. Contains all intersection data needed for shading.
sourceHikari.PWPixelSampleState Type
PWPixelSampleStatePer-pixel state accumulated during spectral path tracing.
sourceHikari.PWRaySamples Type
PWRaySamplesRandom samples used for direct and indirect lighting at each path vertex.
sourceHikari.PWRayWorkItem Type
PWRayWorkItemA ray to be traced, along with full path state. This is the main work item flowing through the ray queue.
sourceHikari.PWRayWorkItem Method
PWRayWorkItem(ray, lambda, pixel_index)Create a new camera ray work item with default path state.
sourceHikari.PWShadowRayWorkItem Type
PWShadowRayWorkItemA shadow ray for direct lighting visibility test.
sourceHikari.PiecewiseConstant2D Type
PiecewiseConstant2D2D piecewise constant distribution for importance sampling. Stores a 2D function tabulated on a grid and allows efficient sampling proportional to the function values.
sourceHikari.PiecewiseConstant2D Method
Build a PiecewiseConstant2D from a 2D function array. func[y, x] = function value at grid cell (x, y).
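The sampling strategy can be illustrated with its 1D building block: invert the discrete CDF of the tabulated values, then remap the sample within the chosen cell. This is a self-contained sketch of the principle, not Hikari's internal code (the function name and layout are illustrative):

```julia
# Illustrative sketch: 1D piecewise-constant sampling via CDF inversion,
# the building block that PiecewiseConstant2D applies along each axis.
function sample_piecewise_constant(func::Vector{Float32}, u::Float32)
    n = length(func)
    total = sum(func)
    cdf = cumsum(func) ./ total          # normalized CDF at cell right edges
    i = min(searchsortedfirst(cdf, u), n)  # cell containing u
    prev = i == 1 ? 0f0 : cdf[i-1]
    du = (u - prev) / (cdf[i] - prev)    # offset within the cell
    x = (Float32(i - 1) + du) / n        # sample in [0, 1)
    pdf = func[i] * n / total            # piecewise-constant PDF at x
    return x, pdf
end
```

The 2D version applies the same inversion to the marginal distribution over rows, then to the conditional distribution within the selected row.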
sourceHikari.PixelSample Type
PixelSampleHolds all sample values needed for a single pixel sample in path tracing. This is a stateless struct computed deterministically from (pixel, sample_index).
sourceHikari.PointLight Method
PointLight(rgb::RGB{Float32}, position; power=nothing)
PointLight(rgb::RGB, position; power=nothing)
Create a PointLight from RGB color with automatic spectral conversion and photometric normalization, matching pbrt-v4's light creation pattern.
Converts RGB to RGBIlluminantSpectrum and applies photometric normalization: scale = 1 / SpectrumToPhotometric(spectrum) where SpectrumToPhotometric extracts the D65 illuminant component.
Arguments
rgb: RGB color (intensity encoded in color values, e.g., RGB(50,50,50) for bright white)
position: World-space position of the light
power: Optional radiant power in Watts. If specified, overrides the RGB intensity.
Example
# Equivalent to pbrt-v4's: LightSource "point" "rgb I" [50 50 50]
light = PointLight(RGB{Float32}(50f0, 50f0, 50f0), Vec3f(10, 10, 10))
# Or with Makie's RGBf:
light = PointLight(RGBf(50, 50, 50), Vec3f(10, 10, 10))
Hikari.PowerLightSampler Type
PowerLightSamplerSample lights proportional to their total power using an alias table. O(1) sampling with O(N) construction. Good for scenes with varying light intensities.
Following pbrt-v4's PowerLightSampler which uses light.Phi() to estimate power.
sourceHikari.RGBAlbedoSpectrum Type
RGBAlbedoSpectrumBounded [0,1] spectrum for surface reflectance/albedo.
sourceHikari.RGBGridMedium Type
RGBGridMediumA participating medium with spatially varying per-voxel RGB absorption and scattering coefficients. Following pbrt-v4's RGBGridMedium implementation exactly.
Key design (matching pbrt-v4):
Optional σ_a grid: 3D array of RGBSpectrum absorption coefficients (if absent, defaults to 1.0)
Optional σ_s grid: 3D array of RGBSpectrum scattering coefficients (if absent, defaults to 1.0)
Optional Le grid: 3D array of RGBSpectrum emission coefficients (for emissive volumes)
sigma_scale: Global multiplier for σ_a and σ_s grids
Le_scale: Global multiplier for Le grid
The majorant grid stores sigma_scale * max(σ_a + σ_s) per coarse voxel. In SampleRay, a unit σ_t is used since the scaling is already folded into the majorant grid.
Hikari.RGBGridMedium Method
RGBGridMedium(; σ_a_grid, σ_s_grid, Le_grid, sigma_scale, Le_scale, g, bounds, transform, majorant_res)Create an RGBGridMedium following pbrt-v4's design exactly.
At least one of σ_a_grid or σ_s_grid must be provided. The grids contain per-voxel RGBSpectrum values that are multiplied by sigma_scale (or Le_scale for emission).
Arguments
σ_a_grid: Optional 3D array of RGBSpectrum absorption coefficients
σ_s_grid: Optional 3D array of RGBSpectrum scattering coefficients
Le_grid: Optional 3D array of RGBSpectrum emission coefficients (requires σ_a_grid)
sigma_scale: Global multiplier for σ_a and σ_s grids (default: 1.0)
Le_scale: Global multiplier for Le grid (default: 0.0)
g: Henyey-Greenstein asymmetry parameter (default: 0.0)
bounds: Volume bounds in medium space (default: unit cube)
transform: Transform from medium to render space (default: identity)
majorant_res: Resolution of the majorant grid (default: 16³)
Hikari.RGBIlluminantSpectrum Type
RGBIlluminantSpectrumIlluminant spectrum for light sources, matching pbrt-v4's RGBIlluminantSpectrum. Stores RGB coefficients and multiplies by D65 illuminant when sampled.
Fields:
poly::RGBSigmoidPolynomial - Sigmoid polynomial for spectral shape
scale::Float32 - Scale factor (2 * max(r,g,b) from construction)
When sampled at wavelength λ: returns scale * poly(λ) * D65(λ)
Hikari.RGBSigmoidPolynomial Type
RGBSigmoidPolynomialRepresents a smooth spectrum using a sigmoid-wrapped polynomial. The spectrum value at wavelength λ is sigmoid(c₀λ² + c₁λ + c₂), where sigmoid(x) = 0.5 + x / (2*sqrt(1 + x²)).
This provides a smooth, bounded [0,1] spectrum from just 3 coefficients.
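A minimal sketch of the evaluation described above (the coefficient names c0, c1, c2 are illustrative, not necessarily Hikari's field names):

```julia
# Sigmoid that maps the quadratic polynomial's output into [0, 1];
# the infinity guard matches the limit values of the formula.
sigmoid(x::Float32) = isinf(x) ? (x > 0 ? 1f0 : 0f0) : 0.5f0 + x / (2f0 * sqrt(1f0 + x^2))

# Spectrum value at wavelength λ (nm) for quadratic coefficients (c0, c1, c2)
eval_sigmoid_poly(c0, c1, c2, λ) = sigmoid(c0 * λ^2 + c1 * λ + c2)
```

A zero polynomial yields a flat spectrum of value 0.5; large positive or negative polynomial values saturate toward 1 or 0.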
sourceHikari.RGBToSpectrumTable Type
RGBToSpectrumTable{V1, V5}Precomputed lookup table for converting RGB colors to RGBSigmoidPolynomial coefficients. Uses trilinear interpolation for smooth results.
Parameterized by array types for GPU compatibility:
V1: 1D array type for scale (AbstractVector{Float32})
V5: 5D array type for coeffs (AbstractArray{Float32, 5})
Use to_gpu(ArrayType, table) to convert to GPU-compatible arrays.
Hikari.RGBUnboundedSpectrum Type
RGBUnboundedSpectrumUnbounded spectrum for illumination/emission. Scales the sigmoid polynomial to match the maximum RGB component.
sourceHikari.RayMajorantIterator Type
RayMajorantIteratorUnified majorant iterator that can represent either homogeneous or DDA iteration. This avoids Union types which cause GPU compilation issues.
Following pbrt-v4's RayMajorantIterator which is a TaggedPointer variant.
Mode:
mode = 0: Invalid/exhausted iterator
mode = 1: Homogeneous mode (single segment)
mode = 2: DDA mode (voxel traversal)
Hikari.RayMajorantIterator Method
Create a homogeneous mode iterator (from HomogeneousMajorantIterator)
sourceHikari.RayMajorantSegment Type
RayMajorantSegmentA segment along a ray with a majorant (upper bound) extinction coefficient. Used for delta tracking in heterogeneous media.
sourceHikari.SampledSpectrum Type
SampledSpectrum{N}A spectrum sampled at N wavelengths. Default is 4 for GPU efficiency (matches Float4). Used by PhysicalWavefront for spectral path tracing.
sourceHikari.SampledWavelengths Type
SampledWavelengths{N}Represents N sampled wavelengths and their PDFs for hero wavelength sampling.
sourceHikari.Scene Method
Scene(; backend=KA.CPU())Create an empty mutable Scene for incremental construction.
Example
scene = Scene()
push!(scene, mesh, material) # Add geometry
push!(scene, PointLight(...)) # Add lights
push!(scene, AmbientLight(...))
sync!(scene) # Build acceleration structure
Hikari.SmitsSpectralBasis Type
SmitsSpectralBasisPrecomputed basis spectra for Smits' RGB to spectrum conversion. Contains white, cyan, magenta, yellow, red, green, blue basis spectra.
sourceHikari.SobolRNG Type
SobolRNG{M}GPU-compatible Sobol random number generator. Holds the Sobol matrices and precomputed parameters for ZSobol sampling.
This struct can be passed directly to GPU kernels via Adapt.jl integration. All sampling state is computed on-the-fly from pixel coordinates and sample index, making it stateless and thread-safe.
Fields
matrices::M: GPU array of Sobol generator matrices (UInt32)
log2_spp::Int32: log2 of samples per pixel
n_base4_digits::Int32: number of base-4 digits for Morton encoding
seed::UInt32: scrambling seed
width::Int32: image width (for pixel coordinate recovery)
Hikari.SobolRNG Method
SobolRNG(backend, seed::UInt32, width::Integer, height::Integer, samples_per_pixel::Integer)Create a SobolRNG for the given render settings. Allocates Sobol matrices on the specified backend (CPU/GPU).
Arguments
backend: KernelAbstractions backend (e.g., CPU(), CUDABackend(), OpenCLBackend())
seed: Scrambling seed for decorrelation
width: Image width in pixels
height: Image height in pixels
samples_per_pixel: Number of samples per pixel
Hikari.SpectralBSDFSample Type
SpectralBSDFSampleResult of sampling a BSDF with spectral wavelengths. Used by PhysicalWavefront for spectral path tracing.
sourceHikari.SpotLight Method
SpotLight(rgb::RGB{Float32}, position, target, total_width, falloff_start; power=nothing)Create a SpotLight from RGB color with automatic spectral conversion and photometric normalization, matching pbrt-v4's light creation pattern.
Arguments
rgb: RGB color (intensity encoded in color values)
position::Point3f: World-space position of the spotlight
target::Point3f: Point the spotlight is aimed at
total_width::Float32: Total cone angle in degrees
falloff_start::Float32: Angle where intensity falloff begins (degrees)
power: Optional radiant power in Watts. If specified, overrides the RGB intensity.
Example
# Spotlight with RGB color
light = SpotLight(RGB{Float32}(100f0, 100f0, 100f0), Point3f(0, 5, 0), Point3f(0, 0, 0), 30f0, 25f0)
Hikari.SpotLight Method
SpotLight(position, target, intensity, cone_angle, falloff_angle; scale=1f0)Convenience constructor for SpotLight that takes position and target points.
Arguments
position::Point3f: World-space position of the spotlight
target::Point3f: Point the spotlight is aimed at
intensity::Spectrum: Light intensity/color
cone_angle::Float32: Total cone angle in degrees
falloff_angle::Float32: Angle where intensity falloff begins (degrees)
scale::Float32: Photometric scale factor (default 1.0)
Example
# Spotlight at (0, 5, 0) pointing at origin with 30° cone
light = SpotLight(Point3f(0, 5, 0), Point3f(0, 0, 0), RGBSpectrum(100f0), 30f0, 25f0)
Hikari.SunLight Type
SunLight - A directional light with additional parameters for volumetric cloud rendering.
Extends DirectionalLight with angular diameter and corona falloff parameters useful for rendering sun disks and atmospheric scattering in clouds.
sourceHikari.SunLight Method
SunLight(rgb::RGB{Float32}, direction; angular_diameter=0.00933f0, corona_falloff=8f0)Create a SunLight from RGB color with automatic spectral conversion and photometric normalization, matching pbrt-v4's light creation pattern.
Arguments
rgb: RGB color (intensity encoded in color values)
direction: Direction the light travels (away from source)
angular_diameter: Angular size of sun disk in radians (default ~0.53°, real sun)
corona_falloff: How quickly corona fades around sun disk
Example
# Sun light traveling in -Y direction (sunlight from above)
light = SunLight(RGB{Float32}(1f0, 1f0, 0.9f0), Vec3f(0, -1, 0))
Hikari.TextureFilterContext Type
TextureFilterContextContext for texture filtering, containing UV coordinates and screen-space derivatives. Following pbrt-v4's TextureEvalContext pattern.
Materials receive this context and pass it to eval_tex for proper texture filtering. The UV derivatives (dudx, dudy, dvdx, dvdy) are computed from:
Surface partial derivatives (dpdu, dpdv) at intersection
Screen-space position derivatives (dpdx, dpdy) from camera
These enable mipmap level selection and anisotropic filtering.
sourceHikari.ThinDielectricMaterial Type
ThinDielectricMaterialA thin dielectric material for surfaces like window glass.
Unlike regular dielectric materials which refract light according to Snell's law, thin dielectric materials transmit light straight through (wi = -wo) while accounting for multiple internal reflections within the thin layer.
Fields
eta: Index of refraction of the dielectric
Physics
For a thin dielectric layer:
Single-surface Fresnel: R₀ = FrDielectric(cos_θ, eta)
Multiple-bounce reflectance: R = R₀ + T₀²R₀/(1 - R₀²) where T₀ = 1 - R₀
Transmittance: T = 1 - R
Transmitted direction: wi = -wo (straight through, no bend)
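The formulas above can be sketched as follows; fr_dielectric here inlines a simplified unpolarized single-surface Fresnel for illustration, so this is a sketch of the physics rather than Hikari's implementation:

```julia
# Unpolarized single-surface Fresnel reflectance for light entering a
# medium with relative IOR eta at incidence angle cosθ (illustrative).
function fr_dielectric(cosθ::Float32, eta::Float32)
    sin2θt = (1 - cosθ^2) / eta^2
    sin2θt >= 1 && return 1f0                 # total internal reflection
    cosθt = sqrt(1 - sin2θt)
    r_par  = (eta * cosθ - cosθt) / (eta * cosθ + cosθt)
    r_perp = (cosθ - eta * cosθt) / (cosθ + eta * cosθt)
    return (r_par^2 + r_perp^2) / 2
end

# Thin-layer reflectance/transmittance: sum the geometric series of
# internal bounces, R = R₀ + T₀²R₀/(1 - R₀²), T = 1 - R.
function thin_dielectric_R(cosθ::Float32, eta::Float32)
    R0 = fr_dielectric(cosθ, eta)
    T0 = 1 - R0
    R = R0 + T0^2 * R0 / (1 - R0^2)
    return R, 1 - R
end
```

At normal incidence with eta = 1.5, R₀ = 0.04 and the series sums to R = 2R₀/(1+R₀) ≈ 0.077, noticeably brighter than a single surface.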
Usage
# Thin glass window
window = ThinDielectric(eta=1.5)
# Thin plastic film
film = ThinDielectric(eta=1.4)
Hikari.TransformMapping3D Type
3D mapping that takes the world-space coordinates of a point and applies a linear transformation to them. This is often a transformation that takes the point back to the primitive's object space.
Because a linear mapping is used, the differential change in texture coordinates can be found by applying the same transformation to the partial derivatives of the position.
sourceHikari.TriangleFilter Type
TriangleFilter(radius=Point2f(2.0, 2.0))Triangle (tent) filter - linear falloff from center to edges. Can be importance sampled analytically with weight = 1.0.
sourceHikari.TrowbridgeReitzDistribution Type
Microfacet distribution function based on a Gaussian distribution of microfacet slopes. The distribution has higher tails: it falls off to zero more slowly for directions far from the surface normal.
sourceHikari.UVMapping2D Type
The simplest mapping uses the (u, v) coordinates from the SurfaceInteraction to compute texture coordinates; they can be offset and scaled with user-supplied values in each dimension.
Hikari.UniformLightSampler Type
UniformLightSamplerSimple uniform light selection. O(1) but ignores light importance. This is the baseline sampler equivalent to the current Hikari behavior.
sourceHikari.VPEscapedRayWorkItem Type
VPEscapedRayWorkItemRay that escaped the scene (for environment lighting).
sourceHikari.VPHitSurfaceWorkItem Type
VPHitSurfaceWorkItemIntermediate work item when ray hits a surface. Used to separate intersection from material evaluation. Stores surface partial derivatives for texture filtering (pbrt-v4 style).
sourceHikari.VPMaterialEvalWorkItem Type
VPMaterialEvalWorkItemSurface interaction work item for VolPath. Similar to PWMaterialEvalWorkItem but includes medium transition info.
Following pbrt-v4's MaterialEvalWorkItem, stores surface partial derivatives for texture filtering. Screen-space derivatives (dudx, dudy, dvdx, dvdy) are computed on-the-fly during material evaluation using camera.Approximate_dp_dxy().
sourceHikari.VPMediumSampleWorkItem Type
VPMediumSampleWorkItemWork item for rays in medium that need delta tracking. Follows pbrt-v4's approach: intersection is found FIRST, then delta tracking runs with the known t_max distance.
If the ray reaches t_max without scattering/absorption, it processes the surface hit normally. This allows proper bounded medium traversal.
sourceHikari.VPMediumScatterWorkItem Type
VPMediumScatterWorkItemWork item for a real scattering event in a participating medium. Created when delta tracking samples a real scatter (not null scatter).
sourceHikari.VPRaySamples Type
VPRaySamplesPre-computed Sobol samples for a single bounce, matching pbrt-v4's RaySamples. These are generated once per bounce and stored in PixelSampleState.
sourceHikari.VPRayWorkItem Type
VPRayWorkItemRay work item for volumetric path tracing. Includes medium index to track which medium the ray is currently in.
sourceHikari.VPShadowRayWorkItem Type
VPShadowRayWorkItemShadow ray for direct lighting (works for both surface and volume scattering).
sourceHikari.VolPath Type
VolPath <: IntegratorVolumetric path tracer using hero wavelength sampling and wavefront architecture.
Extends path tracing with support for participating media (fog, smoke, clouds). Uses delta tracking with null scattering for efficient medium traversal.
The main loop follows pbrt-v4's structure:
- If ray is in medium: run delta tracking
  - Absorption → terminate
  - Real scatter → sample direct light + phase function → continue
  - Null scatter → continue tracking
  - Escape medium → proceed to intersection
- Intersect with scene
- Handle surface (emission + BSDF) or environment light
Hikari.VolPath Method
(vp::VolPath)(scene, film, camera)Render using volumetric spectral wavefront path tracing.
Media are automatically extracted from MediumInterface materials during scene construction and stored in scene.media.
The render loop follows pbrt-v4's VolPathIntegrator:
Generate camera rays (possibly in a medium)
For each bounce:
  a. Delta tracking for rays in media → scatter or pass through
  b. Trace rays to find intersections
  c. Handle escaped rays (environment light)
  d. Handle surface hits (emission + direct lighting + BSDF sampling)
Accumulate spectral samples to RGB
Hikari.VolPath Method
VolPath(; max_depth=8, samples=64, russian_roulette_depth=3, regularize=true,
material_coherence=:none, max_component_value=10, filter=GaussianFilter(),
accumulation_eltype=Float32)Create a VolPath integrator for volumetric path tracing.
Arguments
max_depth: Maximum path depth
samples: Number of samples per pixel
russian_roulette_depth: Depth at which to start Russian roulette
regularize: Apply BSDF regularization to reduce fireflies (default: true). When enabled, near-specular BSDFs are roughened after the first non-specular bounce, following pbrt-v4's approach.
material_coherence: Material evaluation strategy for GPU coherence (default: :none):
  :none: Standard evaluation (baseline)
  :sorted: Sort work items by material type before evaluation
  :per_type: Launch separate kernels per material type (pbrt-v4 style)
max_component_value: Maximum RGB component value before clamping (default: 10). When set to a finite value, RGB values are scaled down if any component exceeds this. This is pbrt-v4's firefly suppression mechanism. Try values like 10.0 or 100.0.
filter: Pixel reconstruction filter (default: GaussianFilter with radius 1.5, sigma 0.5). Supports BoxFilter, TriangleFilter, GaussianFilter, MitchellFilter, LanczosSincFilter. All filters use importance sampling with weight ≈ 1 (pbrt-v4 compatible).
accumulation_eltype: Element type for RGB/weight accumulators (default: Float32). Use Float64 on backends that support double precision for higher accumulation precision.
Note: Sensor simulation (ISO, exposure_time, white_balance) is handled in postprocessing via FilmSensor, not in the integrator. This matches pbrt-v4's architecture where the film stores raw linear HDR values and sensor conversion happens at output time.
Hikari.VolPathState Type
VolPathStateContains all work queues and buffers for VolPath wavefront rendering.
sourceHikari.WorkQueue Type
WorkQueue{T, V, S}A GPU-compatible work queue that stores items of type T. Uses atomic operations for thread-safe push operations.
The queue stores items in a pre-allocated buffer and uses an atomic counter to track the current size. Supports both AOS and SOA layouts.
Fields
items::V: Pre-allocated array of work items
size::S: Single-element array for atomic counter
capacity::Int32: Maximum number of items
Example
# Create a queue on GPU backend
queue = WorkQueue{MyWorkItem}(backend, 1024)
# In a kernel, push items atomically
idx = push!(queue, item)
# Check if push succeeded (within capacity)
if idx <= length(queue.items)
# item was stored at queue.items[idx]
end
Hikari.WorkQueue Method
WorkQueue{T}(backend, capacity; soa=false)Create a new work queue with the given capacity on the specified backend.
Arguments
backend: KernelAbstractions backend (e.g., CPU(), CUDABackend(), ROCBackend())
capacity: Maximum number of items the queue can hold
soa: If true, use Structure-of-Arrays layout for better GPU memory coalescing
Adapt.adapt_structure Method
Adapt.adapt_structure(to, data::GPUFilterSamplerData)Adapt GPUFilterSamplerData for use inside GPU kernels. This converts the GPU arrays (e.g., ROCArray) to device-compatible representations (e.g., ROCDeviceArray) that can be used inside kernels.
sourceAdapt.adapt_structure Method
Adapt.adapt_structure(backend, queue::WorkQueue)Adapt WorkQueue for use inside GPU kernels. This converts the host-side arrays (e.g., CLArray, CuArray) to device-compatible representations (e.g., CLDeviceArray, CuDeviceArray) that can be used inside kernels.
This allows passing the entire WorkQueue to a kernel instead of passing items and size arrays separately.
sourceBase.close Method
Base.close(integrator::FastWavefront)Release GPU resources. Alias for cleanup!().
sourceBase.map Method
Given surface interaction si at the shading point, return (s, t) texture coordinates and estimated changes in (s, t) w.r.t. pixel x & y coordinates.
Parameters:
m::UVMapping2D: UVMapping with offset & scale parameters.
si::SurfaceInteraction: SurfaceInteraction at the shading point.
Returns:
Tuple{Point2f, Vec2f, Vec2f}: Texture coordinates at the shading point, and estimated changes in (s, t) w.r.t. pixel x & y coordinates.
Base.union Method
Base.union(a::DirectionCone, b::DirectionCone) -> DirectionConeFind the minimal cone containing both input cones. Following pbrt-v4 (vecmath.cpp:56-83).
sourceBase.union Method
Base.union(a::LightBounds, b::LightBounds) -> LightBoundsMerge two LightBounds. Following pbrt-v4 (lights.h:137-153).
sourceHikari.Aluminum Method
Aluminum(; roughness=0.0, reflectance=(1,1,1), remap_roughness=true)Create an aluminum conductor material with measured spectral IOR data.
Examples
Aluminum() # Polished aluminum
Aluminum(roughness=0.1) # Brushed aluminum
Hikari.Brass Method
Brass(; roughness=0.0, reflectance=(1,1,1), remap_roughness=true)Create a brass (CuZn) conductor material with measured spectral IOR data.
Examples
Brass() # Polished brass
Brass(roughness=0.15) # Brushed brass
Hikari.Coffee Method
Coffee(; scale=1.0, g=0.0) -> HomogeneousMediumCreate an espresso coffee medium with realistic scattering properties.
Examples
Coffee() # Espresso
Coffee(scale=0.3) # Diluted coffee (americano-like)
Hikari.Copper Method
Copper(; roughness=0.0, reflectance=(1,1,1), remap_roughness=true)Create a copper conductor material with measured spectral IOR data.
Examples
Copper() # Polished copper
Copper(roughness=0.2) # Weathered copper
Hikari.D Method
Distribution function, which gives the differential area of microfacets with the surface normal w.
Hikari.Filter Method
Filter(type::Symbol; kwargs...)Create a filter by type name.
Arguments
type: One of :box, :triangle, :gaussian, :mitchell, :lanczos
kwargs: Filter-specific parameters
Examples
Filter(:box) # Default box filter
Filter(:triangle, radius=Point2f(2)) # Triangle with custom radius
Filter(:gaussian, sigma=0.5) # Gaussian filter
Filter(:mitchell, B=1/3, C=1/3) # Mitchell filter
Filter(:lanczos, tau=3) # Lanczos sinc filter
Hikari.Fog Method
Fog(; density=0.1, g=0.0) -> HomogeneousMediumCreate a fog medium. Fog is very light, highly scattering, and nearly non-absorbing (appears white).
Arguments
density: Fog density (lower = more transparent)
g: Henyey-Greenstein asymmetry (0 = isotropic, typical for fog)
Examples
Fog() # Light fog
Fog(density=0.3) # Dense fog
Hikari.Gold Method
Gold(; roughness=0.0, reflectance=(1,1,1), remap_roughness=true)Create a gold conductor material with measured spectral IOR data.
Examples
Gold() # Polished gold
Gold(roughness=0.1) # Brushed gold
Gold(roughness=0.3) # Matte gold
Hikari.Juice Method
Juice(name::Symbol; scale=1.0, g=0.0) -> HomogeneousMediumCreate a juice medium from presets.
Available names
:apple, :cranberry, :grape, :grapefruit
Examples
Juice(:apple) # Apple juice
Juice(:grape, scale=0.5) # Diluted grape juice
Hikari.Milk Method
Milk(; scale=1.0, g=0.0) -> HomogeneousMediumCreate a whole milk medium with realistic scattering properties. The scale parameter allows adjusting the density (for diluted milk use scale < 1).
Examples
Milk() # Standard whole milk
Milk(scale=0.5) # Diluted milk
Milk(g=0.8) # Forward-scattering milk
Hikari.PlasticMaterial Method
PlasticMaterial(; Kd=RGBSpectrum(0.5), Ks=RGBSpectrum(0.5), roughness=0.1, remap_roughness=true, eta=1.5)Create a plastic material with diffuse base and dielectric coating.
This is an alias for CoatedDiffuseMaterial matching pbrt-v4's behavior where "plastic" materials are implemented as coated diffuse with a dielectric coating.
Arguments
Kd: Diffuse color (reflectance of the base layer)
Ks: Specular color (ignored - kept for API compatibility; Fresnel controls the specular response)
roughness: Surface roughness of the coating (lower = sharper highlights)
remap_roughness: Whether to remap roughness to microfacet alpha
eta: Index of refraction of the coating (default 1.5 for typical plastic)
Examples
PlasticMaterial(Kd=(0.8, 0.2, 0.6)) # Magenta plastic
PlasticMaterial(Kd=(0.1, 0.1, 0.8), roughness=0.05) # Shiny blue
PlasticMaterial(Kd=wood_texture, roughness=0.3) # Textured
Hikari.RayMajorantIterator_homogeneous Method
Create a homogeneous mode iterator directly (for HomogeneousMedium)
sourceHikari.Sample Method
Sample(s::RGBIlluminantSpectrum, lambda::Wavelengths) -> SpectralRadianceSample the illuminant spectrum at multiple wavelengths. Matches pbrt-v4's RGBIlluminantSpectrum::Sample(const SampledWavelengths &lambda).
Returns: scale * rsp(λ) * D65(λ) for each wavelength.
sourceHikari.Sample Method
Sample(::RGBToSpectrumTable, s::RGBIlluminantSpectrum, lambda::Wavelengths) -> SpectralRadianceSample an RGBIlluminantSpectrum at multiple wavelengths. The table argument is ignored since the polynomial is already baked in.
sourceHikari.Sample Method
Sample(table::RGBToSpectrumTable, rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadianceSample an RGBSpectrum as an illuminant at multiple wavelengths. This provides a unified interface for light sampling - RGBSpectrum uses uplift_rgb_illuminant while RGBIlluminantSpectrum uses its baked-in polynomial.
sourceHikari.Silver Method
Silver(; roughness=0.0, reflectance=(1,1,1), remap_roughness=true)Create a silver conductor material with measured spectral IOR data.
Examples
Silver() # Polished silver
Silver(roughness=0.05) # Slightly brushed
Hikari.Smoke Method
Smoke(; density=0.5, albedo=0.9, g=0.0) -> HomogeneousMediumCreate a smoke/fog medium. Smoke is characterized by high scattering and low absorption (high albedo = scattering / extinction).
Arguments
density: Overall density multiplier (higher = thicker smoke)
albedo: Single-scattering albedo (0-1, higher = more scattering, whiter smoke)
g: Henyey-Greenstein asymmetry parameter (-1 to 1, 0 = isotropic)
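One plausible mapping from these parameters to medium coefficients, following the definition albedo = scattering / extinction (whether Hikari uses exactly this decomposition internally is an assumption):

```julia
# Decompose (density, albedo) into absorption and scattering coefficients,
# assuming σ_t = density and σ_s = albedo * σ_t (illustrative sketch).
function smoke_coefficients(; density=0.5f0, albedo=0.9f0)
    σ_t = density          # extinction (total attenuation)
    σ_s = albedo * σ_t     # scattering part
    σ_a = σ_t - σ_s        # absorption part
    return σ_a, σ_s
end
```

With the defaults, most extinction is scattering (white smoke); lowering albedo shifts extinction toward absorption (darker smoke), matching the parameter descriptions above.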
Examples
Smoke() # Light gray smoke
Smoke(density=2.0) # Dense smoke
Smoke(albedo=0.5) # Darker, more absorbing smoke
Smoke(g=0.6) # Forward-scattering (typical for smoke)
Hikari.SubsurfaceMedium Method
SubsurfaceMedium(name::String; scale=1.0, g=0.0) -> HomogeneousMediumCreate a medium from any of the available presets by name.
Examples
SubsurfaceMedium("Marble")
SubsurfaceMedium("Skin1", scale=2.0)
SubsurfaceMedium("Ketchup")
See get_medium_preset for available preset names.
Hikari.Wine Method
Wine(name::Symbol; scale=1.0, g=0.0) -> HomogeneousMediumCreate a wine medium from presets.
Available names
:chardonnay, :zinfandel, :merlot
Examples
Wine(:merlot) # Red merlot wine
Wine(:chardonnay) # White wine
Hikari._accumulate_volume_scatter Method
_accumulate_volume_scatter(acc, light, cloud, pos, ray_d, time, shadow_steps, scene)Accumulator function for reduce_unrolled over lights in volume scattering. Computes single light contribution and adds to accumulator.
sourceHikari._build_bvh! Method
Recursive BVH construction with SAH splitting. Following pbrt-v4 buildBVH (lightsamplers.cpp:135-238).
sourceHikari._copy_multi_to_standard! Method
DEBUG helper: Copy items from multi_queue to standard material_queue for testing.
sourceHikari._detect_camera_medium_kernel! Method
_detect_camera_medium_kernel!(result, accel, media_interfaces, camera_pos)Single-workitem kernel that traces a ray from the camera position to determine which medium the camera is inside. Writes a SetKey to result[1].
sourceHikari._evaluate_cost Method
SAH cost for splitting a LightBounds along dimension dim. Following pbrt-v4 EvaluateCost (lightsamplers.h:383-396).
Hikari._evaluate_typed_material! Method
Inner evaluation for a typed material (no dispatch needed).
Uses pre-computed Sobol samples from pixel_samples (pbrt-v4 RaySamples style).
sourceHikari._get_rgb2spec_table Method
Get the global sRGB to spectrum table (loads on first access)
sourceHikari._ratio_tracking_dda Method
Ratio tracking using DDA majorant iterator segments. Iterates over per-voxel majorant bounds, doing ratio tracking within each segment.
sourceHikari._sample_rgb_grid Method
Trilinear interpolation for RGB grid. p_norm is in [0,1]³ normalized coordinates within bounds.
sourceHikari._sample_texture_bilinear Method
_sample_texture_bilinear(data, uv) -> TBilinear texture sampling for 2D textures. Provides smoother results than point sampling.
sourceHikari._sample_texture_data_filtered Method
_sample_texture_data_filtered(data, uv, dudx, dudy, dvdx, dvdy) -> TSample texture with filtering based on UV derivatives. Uses the derivatives to compute the filter footprint for mipmap selection.
TODO: Implement proper mipmap-based filtering. Currently uses bilinear sampling as a simple improvement over point sampling.
sourceHikari._sample_with_iterator_helper Method
Helper to call sample_T_maj_loop! with created iterator (no capture)
sourceHikari._spotlight_transform Method
Create a transformation that positions a spotlight and orients it to point at a target. The spotlight points in +Z direction in local space.
sourceHikari._sum_light_le Method
_sum_light_le(light, ray)Helper for sum_unrolled to accumulate le() contributions from lights.
sourceHikari._tonemap_aces Method
ACES filmic approximation (Narkowicz). Industry-standard filmic curve used in games and film.
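For reference, the standard Narkowicz approximation looks like this; whether Hikari's _tonemap_aces uses these exact constants is an assumption:

```julia
# Narkowicz (2015) ACES filmic approximation, applied per channel.
# Input is linear HDR; output is clamped to [0, 1].
function tonemap_aces(x::Float32)
    a, b, c, d, e = 2.51f0, 0.03f0, 2.43f0, 0.59f0, 0.14f0
    return clamp(x * (a * x + b) / (x * (c * x + d) + e), 0f0, 1f0)
end
```

The curve is monotone, maps 0 to 0, and rolls highlights off smoothly toward 1 as x grows.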
sourceHikari._tonemap_filmic Method
Filmic tonemapping (Hejl-Dawson). Alternative filmic curve with good highlight rolloff.
sourceHikari._tonemap_uncharted2 Method
Uncharted 2 filmic tonemapping. Good for high-contrast scenes, preserves detail in shadows.
sourceHikari._transmittance_dda_helper Method
Helper dispatched via with_index to get concrete medium type for DDA iterator.
sourceHikari.abs_cos_theta Method
abs_cos_theta(w) -> Float32Get |cos(θ)| of a direction in local coordinates (matches pbrt-v4's AbsCosTheta).
sourceHikari.allocate_array Method
allocate_array(backend, T, n; soa=false)Allocate array with AOS (soa=false) or SOA (soa=true) layout. Both support identical indexing: arr[i] returns T, arr[i] = val stores T.
sourceHikari.angle_between Method
angle_between(v1, v2) -> Float32Numerically stable angle between two vectors. Following pbrt-v4 (vecmath.h:972-977).
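pbrt-v4's stable formulation avoids taking acos of a dot product, which loses precision for nearly parallel vectors; a sketch for unit-length inputs:

```julia
using LinearAlgebra: dot, norm

# Numerically stable angle between unit vectors (pbrt-v4 style):
# use the chord length |v2 - v1| (or |v1 + v2| for obtuse angles)
# with asin, instead of acos(dot(v1, v2)).
function angle_between(v1, v2)
    if dot(v1, v2) < 0
        return Float32(π) - 2 * asin(min(norm(v1 .+ v2) / 2, 1f0))
    else
        return 2 * asin(min(norm(v2 .- v1) / 2, 1f0))
    end
end
```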
sourceHikari.apply_white_balance Method
apply_white_balance(r, g, b, wb_matrix) -> (r', g', b')Apply white balance transformation using precomputed Bradford matrix. Input/output are in linear RGB (assumed sRGB primaries).
sourceHikari.approximate_dp_dxy Method
approximate_dp_dxy(pi, n, camera, samples_per_pixel) -> (dpdx, dpdy)Approximate screen-space position derivatives at intersection point. Following pbrt-v4's Camera::Approximate_dp_dxy method.
This estimates how much the surface position changes per pixel, which is used for texture filtering (mipmap level selection). The approximation assumes the surface is locally planar near the intersection point.
For a perspective camera, this is approximately: dp/dscreen ≈ distance * tan(fov/2) / (resolution/2)
Arguments:
pi: Intersection point in world space
n: Surface normal at intersection
camera: Camera with dx_camera, dy_camera precomputed
samples_per_pixel: Number of samples per pixel (for scaling)
Returns (dpdx, dpdy) - approximate change in position per screen pixel.
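The stated approximation reduces to simple arithmetic; a worked sketch (the helper name is illustrative, not part of the API):

```julia
# dp/dscreen ≈ distance * tan(fov/2) / (resolution/2): the world-space
# footprint of one pixel at a given depth for a perspective camera.
# fov is in radians, resolution in pixels along the relevant axis.
dp_per_pixel(distance, fov, resolution) = distance * tan(fov / 2) / (resolution / 2)
```

For example, at distance 10 with a 90° fov and 1000-pixel width, one pixel covers about 0.02 world units, which sets the texture filter footprint.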
sourceHikari.arealight_Le Method
arealight_Le(light::DiffuseAreaLight, lights_ctx, table, wo, n, uv, lambda) -> SpectralRadianceEvaluate spectral emission at a surface point on this area light. Following pbrt-v4 DiffuseAreaLight::L().
lights_ctx: StaticMultiTypeSet for TextureRef dereference (lights container)
table: RGBToSpectrumTable for spectral uplift
wo: outgoing direction (toward camera)
n: surface normal at hit point
uv: texture coordinates at hit point
lambda: sampled wavelengths
Hikari.atrous_denoise_kernel! Method
atrous_denoise_kernel!(output, input, normals, depth, variance,
width, height, step_size, config)Single pass of the à-trous wavelet filter. Applies a 5x5 filter with edge-stopping weights.
sourceHikari.balance_heuristic Method
balance_heuristic(pdf_f::Float32, pdf_g::Float32) -> Float32Balance heuristic for MIS: w_f = pdf_f / (pdf_f + pdf_g)
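A direct sketch of the formula above, with a guard for the degenerate all-zero case (pbrt-v4 also defines a squared "power heuristic" variant):

```julia
# Balance heuristic for multiple importance sampling:
# w_f = pdf_f / (pdf_f + pdf_g), with 0 when both PDFs vanish.
function balance_heuristic(pdf_f::Float32, pdf_g::Float32)
    s = pdf_f + pdf_g
    return s > 0 ? pdf_f / s : 0f0
end
```

Note that the two weights for a pair of strategies always sum to 1, which is what makes the combined MIS estimator unbiased.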
sourceHikari.binary_permute_scramble Method
binary_permute_scramble(v::UInt32, perm::UInt32) -> UInt32Simple XOR-based scrambling (less quality than Owen, but faster). Reference: pbrt-v4/src/pbrt/util/lowdiscrepancy.h BinaryPermuteScrambler
sourceHikari.blackbody! Method
Compute emitted radiance by blackbody at the given temperature for the wavelengths.
Args
Le::Vector{Float32}: Preallocated output vector for computed radiance.
λ::Vector{Float32}: Wavelengths for which to compute radiance. Their values should be in nm.
T::Float32: Temperature in Kelvin at which to compute radiance.
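Planck's law, which blackbody! presumably evaluates per wavelength, can be sketched as a scalar function (SI constants; λ converted from nm to meters):

```julia
# Planck's law: spectral radiance of a blackbody at temperature T (K)
# for wavelength λ given in nanometers. Scalar sketch of the in-place kernel.
function planck(λ_nm::Float32, T::Float32)
    h  = 6.62607015e-34    # Planck constant, J·s
    c  = 2.99792458e8      # speed of light, m/s
    kb = 1.380649e-23      # Boltzmann constant, J/K
    λ = λ_nm * 1f-9        # nm → m
    return Float32(2h * c^2 / (λ^5 * (exp(h * c / (λ * kb * T)) - 1)))
end
```

As a sanity check, the peak over 300-830 nm at T = 5000 K lands near 580 nm, consistent with Wien's displacement law λ_max ≈ 2.898e-3 / T.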
Hikari.blackbody_normalized! Method
Compute the normalized SPD for a blackbody, scaled so that the maximum value of the SPD over all wavelengths is 1.
Args
Le::Vector{Float32}: Preallocated output vector for computed radiance.
λ::Vector{Float32}: Wavelengths for which to compute radiance. Their values should be in nm.
T::Float32: Temperature in Kelvin at which to compute radiance.
Hikari.bound_subtended_directions Method
bound_subtended_directions(b::Bounds3, p::Point3f) -> DirectionConeCompute the bounding cone of directions from point p to bounding box b. If p is inside the box, returns entire sphere. Following pbrt-v4 (vecmath.h:1815-1828).
Hikari.build_majorant_grid! Method
In-place majorant grid rebuild for GridMedium density updates.
sourceHikari.build_nanovdb_from_dense Method
build_nanovdb_from_dense(data::Array{Float32,3}, origin::Point3f, extent::Vec3f; background=0f0)Convert a dense 3D Float32 array to a NanoVDB-compatible binary buffer. Only stores leaf blocks (8³) that contain non-background voxels, giving significant memory savings for sparse data (e.g. cloud fields with ~1% fill).
Returns (buffer::Vector{UInt8}, metadata::NamedTuple) suitable for constructing a NanoVDBMedium.
The NanoVDB tree hierarchy is: Root → Upper (32³) → Lower (16³) → Leaf (8³). Buffer layout: [Root | Upper nodes | Lower nodes | Leaf nodes] All child offsets are stored relative to the parent node.
sourceHikari.build_nanovdb_majorant_grid Function
build_nanovdb_majorant_grid(medium_partial, res::Vec3i) -> MajorantGridBuild a majorant grid for NanoVDBMedium by sampling max density in each cell. Matches pbrt-v4's majorant grid construction.
sourceHikari.build_rgb_majorant_grid! Method
In-place majorant grid rebuild for RGBGridMedium density updates.
sourceHikari.build_rgb_majorant_grid Method
Build majorant grid for RGBGridMedium following pbrt-v4.
For each majorant voxel, computes: sigma_scale * (max(σ_a.MaxValue) + max(σ_s.MaxValue))
where MaxValue returns the maximum RGB component of each spectrum.
sourceHikari.bvh_pmf Method
bvh_pmf(nodes, light_to_bit_trail, num_infinite, num_bvh, p, n, light_flat_idx) -> Float32Compute the PMF for a specific light at shading point (p, n). Uses the bit trail to replay the BVH traversal path efficiently.
Following pbrt-v4's BVHLightSampler::PMF (lightsamplers.h:323-358).
sourceHikari.bvh_sample_light Method
bvh_sample_light(nodes, infinite_indices, num_infinite, num_bvh, p, n, u) -> (Int32, Float32)Sample a light from the BVH using importance-weighted traversal. Returns (flat_light_index, pmf). Returns (0, 0) on failure.
Following pbrt-v4's BVHLightSampler::Sample (lightsamplers.h:266-320).
sourceHikari.bvh_to_gpu Method
to_gpu(backend, sampler::BVHLightSampler) -> NamedTupleUpload BVH light sampler data to GPU. Returns a NamedTuple with GPU arrays.
sourceHikari.choose_material Method
choose_material(mix::MixMaterial, textures, p::Point3f, wo::Vec3f, uv::Point2f) -> SetKeyChoose which sub-material to use at the given hit point. Returns the SetKey of the chosen material.
Following pbrt-v4's ChooseMaterial:
Evaluate the amount texture at (uv)
If amount ≤ 0, return material1
If amount ≥ 1, return material2
Otherwise, use deterministic hash to stochastically select
This function is called at intersection time, before material evaluation.
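The selection rule above can be sketched as follows (hypothetical helper; `amt` stands for the evaluated amount texture and `u` for the deterministic hash-based uniform sample):

```julia
# Illustrative sketch of the MixMaterial selection rule: amount ≤ 0
# always yields material1, amount ≥ 1 always yields material2, and
# in between the choice is stochastic with probability `amt`.
function choose_between(amt::Float32, u::Float32, material1, material2)
    amt <= 0f0 && return material1
    amt >= 1f0 && return material2
    u < amt ? material2 : material1
end
```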
sourceHikari.choose_material_dispatch Method
choose_material_dispatch(materials::StaticMultiTypeSet, idx::SetKey, p, wo, uv) -> SetKeyType-stable dispatch for choosing material from MixMaterial. If the material is not MixMaterial, returns the input index unchanged. materials is used both for material lookup and texture evaluation.
Hikari.cleanup! Method
cleanup!(buffers::FastWavefrontBuffers)Release GPU memory held by FastWavefront buffers.
sourceHikari.cleanup! Method
cleanup!(integrator::FastWavefront)Release GPU memory held by the integrator's cached buffers.
sourceHikari.cleanup! Method
cleanup!(state::VolPathState)Release GPU memory held by the VolPath state.
sourceHikari.clear! Method
clear!(integrator::VolPath)Clear the integrator's internal state (RGB and weight accumulators) for restarting progressive rendering.
sourceHikari.compute_bsdf Method
compute_bsdf(cloud::CloudVolume, si, allow_multiple_lobes, transport) -> BSDFReturns a dummy BSDF for volume materials. Volumes don't use BSDF-based shading; they use ray marching in shade() instead. This method exists so that compute_bsdf_for_material can handle CloudVolume without special-casing.
sourceHikari.compute_bsdf Method
Compute BSDF for ConductorMaterial - conductor with Fresnel reflectance.
sourceHikari.compute_bsdf Method
compute_bsdf(mat::Emissive, si::SurfaceInteraction, ::Bool, transport)Emissive has no BSDF (pure emitter, doesn't scatter light). Returns an empty BSDF.
sourceHikari.compute_differentials Method
Compute partial derivatives needed for computing sampling rates for things like texture antialiasing.
sourceHikari.compute_direct_lighting_spectral Method
compute_direct_lighting_spectral(p, n, wo, beta, r_u, lambda, light_sample, bsdf_f, bsdf_pdf)Compute direct lighting contribution from a light sample with MIS.
Following pbrt-v4 (surfscatter.cpp lines 288-316):
Ld = beta * f * Li * cos_theta (NO MIS weight or PDF division here)
r_u = r_u * bsdfPDF (0 for delta lights)
r_l = r_u * lightPDF
MIS weighting happens at shadow ray resolution: Ld * T_ray / (r_u * tr_r_u + r_l * tr_r_l).Average()
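A minimal sketch of that resolution step, with plain vectors standing in for SpectralRadiance and hypothetical argument names:

```julia
# Illustrative sketch: the MIS weight is folded in only once the
# transmittance along the shadow ray (T_ray, tr_r_u, tr_r_l) is known.
function resolve_shadow_ray(Ld, T_ray, r_u, bsdf_pdf, light_pdf, tr_r_u, tr_r_l)
    r_u2 = r_u .* bsdf_pdf              # 0 for delta lights
    r_l  = r_u .* light_pdf
    denom = r_u2 .* tr_r_u .+ r_l .* tr_r_l
    avg = sum(denom) / length(denom)    # .Average() in pbrt-v4
    avg <= 0f0 ? zero.(Ld) : Ld .* T_ray ./ avg
end
```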
Hikari.compute_env_light_pdf Method
compute_env_light_pdf(lights::StaticMultiTypeSet, ray_d::Vec3f)Compute PDF for sampling direction from environment-type lights using StaticMultiTypeSet.
sourceHikari.compute_geometric_normal Method
compute_geometric_normal(primitive) -> Vec3fCompute geometric normal from a primitive (triangle).
sourceHikari.compute_path_sample_1d Method
compute_path_sample_1d(px::Int32, py::Int32, sample_idx::Int32, depth::Int32, local_dim::Int32) -> Float32Compute a 1D sample for path tracing at a given depth. Each depth gets a separate set of dimensions to avoid correlation.
NOTE: This is the old hash-based version. Use compute_path_sample_1d_sobol for better convergence.
sourceHikari.compute_path_sample_1d Method
compute_path_sample_1d(rng::SobolRNG, px::Int32, py::Int32, sample_idx::Int32, depth::Int32, local_dim::Int32) -> Float32Compute a 1D sample for path tracing at a given depth using SobolRNG.
Dimension allocation:
Base dimension = 6 (camera uses 0-5)
Each depth uses 8 dimensions for BSDF, light, RR, etc.
Hikari.compute_path_sample_1d_sobol Method
compute_path_sample_1d_sobol(px, py, sample_idx, depth, local_dim, log2_spp, n_base4_digits, seed, sobol_matrices) -> Float32Compute a 1D sample for path tracing using ZSobol sampler.
sourceHikari.compute_path_sample_2d Method
compute_path_sample_2d(px::Int32, py::Int32, sample_idx::Int32, depth::Int32, local_dim::Int32) -> Tuple{Float32, Float32}Compute a 2D sample for path tracing at a given depth.
NOTE: This is the old hash-based version. Use compute_path_sample_2d_sobol for better convergence.
sourceHikari.compute_path_sample_2d Method
compute_path_sample_2d(rng::SobolRNG, px::Int32, py::Int32, sample_idx::Int32, depth::Int32, local_dim::Int32) -> Tuple{Float32, Float32}Compute a 2D sample for path tracing at a given depth using SobolRNG.
sourceHikari.compute_path_sample_2d_sobol Method
compute_path_sample_2d_sobol(px, py, sample_idx, depth, local_dim, log2_spp, n_base4_digits, seed, sobol_matrices) -> Tuple{Float32, Float32}Compute a 2D sample for path tracing using ZSobol sampler.
sourceHikari.compute_pdf Method
Compute the PDF value for the given pair of directions. By contrast, sample_f computes the PDF for the incident direction it chooses given the outgoing direction, while this returns the PDF for an arbitrary given pair of directions.
Hikari.compute_pixel_sample Method
compute_pixel_sample(rng::SobolRNG, px::Int32, py::Int32, sample_idx::Int32)Compute all camera sample values for a pixel using SobolRNG. Returns a PixelSample struct with jitter, wavelength, lens, and time samples.
Dimension allocation (matching PBRT-v4 exactly):
See pbrt-v4/src/pbrt/wavefront/camera.cpp:51-60 and samplers.h:797-814
pbrt-v4's Get1D/Get2D increment dimension BEFORE computing hash:
- StartPixelSample(pPixel, sampleIndex, 0): dimension = 0
- Get1D() for wavelength: dimension++ → 1, hash uses 1
- GetPixel2D() which is Get2D(): dimension += 2 → 3, hash uses 3 (for jitter)
- Get1D() for time: dimension++ → 4, hash uses 4
- Get2D() for lens: dimension += 2 → 6, hash uses 6
So dimensions used are: 1 (wavelength), 3 (jitter), 4 (time), 6 (lens)
sourceHikari.compute_pixel_sample Method
compute_pixel_sample(px::Int32, py::Int32, sample_idx::Int32) -> PixelSampleCompute all sample values for a pixel sample deterministically. Uses R2 sequence for pixel jitter (better 2D stratification) and hash-based sampling for other dimensions.
NOTE: This is the old hash-based version. Use compute_pixel_sample_sobol for better convergence.
sourceHikari.compute_pixel_sample_sobol Method
compute_pixel_sample_sobol(px, py, sample_idx, log2_spp, n_base4_digits, seed, sobol_matrices) -> PixelSampleCompute all sample values for a pixel sample using ZSobol sampler. This matches PBRT-v4's ZSobolSampler for better convergence.
sourceHikari.compute_scattering! Function
If an intersection was found, it is necessary to determine how the surface's material scatters light. The compute_scattering! method evaluates texture functions to determine surface properties and then initializes a representation of the BSDF at the point.
Hikari.compute_shading_normal Method
compute_shading_normal(primitive, barycentric, geometric_normal) -> Vec3fCompute interpolated shading normal from vertex normals using barycentric coordinates. Falls back to geometric normal if vertex normals are not available (NaN).
sourceHikari.compute_tangent_frame Method
compute_tangent_frame(primitive) -> (dpdu::Vec3f, dpdv::Vec3f)Compute tangent vectors for a primitive.
sourceHikari.compute_texture_filter_context Method
compute_texture_filter_context(work, camera, samples_per_pixel) -> TextureFilterContextCompute texture filtering context from material evaluation work item. Uses approximate screen-space derivatives for proper mipmap selection.
sourceHikari.compute_transmittance Function
Compute transmittance from a point towards a direction through the volume
sourceHikari.compute_transmittance_ratio_tracking Method
compute_transmittance_ratio_tracking(table, media, medium_idx, origin, dir, t_max, lambda)Compute transmittance through heterogeneous medium using ratio tracking. Following pbrt-v4's TraceTransmittance / SampleT_maj implementation.
Uses DDA-based per-voxel majorant bounds (via RayMajorantIterator) for tight bounds in sparse heterogeneous media, matching the primary path's delta tracking.
Returns (T_ray, r_u, r_l) where:
T_ray: spectral transmittance estimate
r_u, r_l: MIS weight accumulators for combining with path weights
Hikari.compute_transmittance_simple Method
compute_transmittance_simple(table, media, medium_idx, origin, dir, t_max, lambda)Simple wrapper that returns only the transmittance (for compatibility).
sourceHikari.compute_uv Method
compute_uv(primitive, p::Point3f) -> Point2fCompute UV coordinates for a point on a primitive. Uses barycentric interpolation for triangles.
sourceHikari.compute_uv_barycentric Method
compute_uv_barycentric(primitive, barycentric) -> Point2fCompute UV coordinates using barycentric coordinates from ray intersection. More accurate than recomputing barycentric from position.
sourceHikari.compute_uv_derivatives Method
compute_uv_derivatives(dpdu, dpdv, dpdx, dpdy) -> (dudx, dudy, dvdx, dvdy)Compute UV derivatives from position derivatives using least-squares solve. Following pbrt-v4's SurfaceInteraction::ComputeDifferentials.
Given:
dpdu, dpdv: How position changes with UV (∂p/∂u, ∂p/∂v)
dpdx, dpdy: How position changes with screen pixel (∂p/∂x, ∂p/∂y)
Solve for:
dudx, dudy: How u changes with screen pixel (∂u/∂x, ∂u/∂y)
dvdx, dvdy: How v changes with screen pixel (∂v/∂x, ∂v/∂y)
Uses the normal equations: (A^T A) [du/dx; dv/dx]^T = A^T [dpdx] where A = [dpdu | dpdv] is a 3x2 matrix.
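The 2×2 normal-equations solve can be sketched directly (hypothetical helper computing the derivatives for one screen direction; call it again with dpdy for the other):

```julia
using LinearAlgebra

# Illustrative least-squares solve: with A = [dpdu dpdv] (3×2), solve
# (AᵀA) x = Aᵀ dpdx for x = (du/dx, dv/dx) via the 2×2 inverse.
function uv_derivs(dpdu, dpdv, dpdx)
    ata00 = dot(dpdu, dpdu); ata01 = dot(dpdu, dpdv); ata11 = dot(dpdv, dpdv)
    det = ata00 * ata11 - ata01 * ata01
    abs(det) < 1f-10 && return (0f0, 0f0)   # degenerate tangents
    invdet = 1f0 / det
    atb0 = dot(dpdu, dpdx); atb1 = dot(dpdv, dpdx)
    dudx = (ata11 * atb0 - ata01 * atb1) * invdet
    dvdx = (ata00 * atb1 - ata01 * atb0) * invdet
    (dudx, dvdx)
end
```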
sourceHikari.compute_variance_kernel! Method
compute_variance_kernel!(variance, input, width, height)Compute per-pixel variance from RGB framebuffer. Uses spatial 3x3 neighborhood for variance estimation.
sourceHikari.compute_white_balance_matrix Method
compute_white_balance_matrix(src_temp::Float32) -> SMatrix{3,3,Float32}Compute the Bradford chromatic adaptation matrix to transform from source illuminant (at color temperature src_temp in Kelvin) to D65.
This is used for white balancing: colors captured under a light source at src_temp Kelvin are transformed to appear as if they were captured under D65.
Example
# Adapt from 5000K (warm daylight) to D65
wb_matrix = compute_white_balance_matrix(5000f0)Hikari.compute_zsobol_params Method
compute_zsobol_params(samples_per_pixel::Int, width::Int, height::Int) -> (Int32, Int32)Compute the ZSobol sampler parameters from render settings. Returns (log2_spp, n_base4_digits).
sourceHikari.convert_lights_for_fast_wavefront Method
Convert lights for fast wavefront rendering. Returns a tuple of lights.
sourceHikari.coordinate_system Method
coordinate_system(n::Vec3f) -> (tangent, bitangent)Build orthonormal basis from a normal vector.
sourceHikari.cos2_theta Method
cos2_theta(w) -> Float32Get cos²(θ) of a direction in local coordinates.
sourceHikari.cos_phi Method
cos_phi(w) -> Float32Get cos(φ) of a direction in local coordinates (matches pbrt-v4's CosPhi).
sourceHikari.cos_theta Method
cos_theta(w) -> Float32Get cos(θ) of a direction in local coordinates (matches pbrt-v4's CosTheta). In local space, z is the surface normal.
sourceHikari.cos_θ Method
The shading coordinate system gives a frame for expressing directions in spherical coordinates (θ, ϕ). The angle θ is measured from the given direction to the z-axis, and ϕ is the angle formed with the x-axis after projecting the direction onto the xy-plane.
Since the normal is (0, 0, 1), cos_θ = n ⋅ w = (0, 0, 1) ⋅ w = w.z.
Hikari.create_dda_iterator Method
create_dda_iterator(grid, bounds, ray_o, ray_d, t_min, t_max, σ_t) -> DDAMajorantIteratorInitialize a DDA iterator for traversing the majorant grid along a ray. The ray should already be transformed to medium space. t_min and t_max are the ray segment bounds (in medium space).
Following PBRT-v4's DDAMajorantIterator constructor.
sourceHikari.create_light_sampler Method
create_light_sampler(lights::MultiTypeSet; method::Symbol=:power, scene_radius::Float32=10f0) -> LightSamplerCreate a light sampler for the given lights.
Arguments
lights::MultiTypeSet: Collection of light sources
method::Symbol: Sampling method (:uniform or :power)
scene_radius::Float32: Scene bounding sphere radius (for power-weighted sampling of infinite lights)
Methods
:uniform: Uniform random selection (baseline)
:power: Power-weighted selection (recommended for varying light intensities)
Hikari.dda_next Method
dda_next(iter::DDAMajorantIterator, media) -> (RayMajorantSegment, DDAMajorantIterator, Bool)Advance the DDA iterator and return the next majorant segment. Returns (seg, new_iter, true) if valid, (invalid_seg, exhausted_iter, false) if exhausted.
The Bool indicates validity: true = has segment, false = exhausted.
Since structs are immutable in Julia, this returns a new iterator with updated state. Following PBRT-v4's DDAMajorantIterator::Next().
sourceHikari.denoise! Method
denoise!(film::Film; config=DenoiseConfig())Apply edge-avoiding à-trous wavelet denoising to film's framebuffer. Uses auxiliary buffers (normal, depth) from film for edge-stopping. Result is stored in film.postprocess.
Arguments
film: Film with framebuffer and auxiliary buffers populated
config: DenoiseConfig with filter parameters
Notes
Requires film.normal and film.depth to be populated (e.g., via fill_aux_buffers!)
Modifies film.postprocess in place
For PhysicalWavefront, aux buffers are populated during first bounce
Hikari.denoise_inplace! Method
denoise_inplace!(film::Film; config=DenoiseConfig())Like denoise!, but modifies framebuffer directly instead of using postprocess.
sourceHikari.denoise_luminance Method
denoise_luminance(r, g, b) -> Float32Compute luminance from RGB using Rec. 709 coefficients.
sourceHikari.detect_camera_medium Method
detect_camera_medium(backend, accel, media_interfaces, camera_pos) -> SetKeyDetect which medium the camera is inside by tracing a single ray from the camera position. Returns a SetKey identifying the medium, or SetKey() for vacuum.
sourceHikari.direction_to_uv_equal_area Method
direction_to_uv_equal_area(dir::Vec3f, rotation::Mat3f) -> Point2fConvert direction to UV using equal-area (octahedral) mapping. This is what pbrt-v4 uses for ImageInfiniteLight.
The rotation matrix transforms from render space to light/map space.
sourceHikari.direction_to_uv_equirect Method
Convert a direction vector to equirectangular UV coordinates. Uses standard lat-long mapping where:
U (horizontal) maps to longitude: 0 at +X, increases counter-clockwise
V (vertical) maps to latitude: 0 at top (+Y), 1 at bottom (-Y)
The rotation matrix transforms from render space to light/map space. We apply the INVERSE (transpose for orthogonal matrices) to get the direction in map space.
sourceHikari.distribution_fresnel_microfacet Method
Distribution function for FresnelMicrofacet - sum of reflection and transmission.
sourceHikari.distribution_lambertian_reflection Method
The reflection distribution is constant and divides the reflectance spectrum equally over the hemisphere.
sourceHikari.distribution_specular_reflection Method
Return the value of the distribution function for the given pair of directions. For specular reflection, zero is returned, since for arbitrary directions the δ-function yields no scattering.
sourceHikari.distribution_specular_transmission Method
Return the value of the distribution function for the given pair of directions. For specular transmission, zero is returned, since for arbitrary directions the δ-function yields no scattering.
sourceHikari.encode_morton2 Method
encode_morton2(x::UInt32, y::UInt32) -> UInt64Encode two 32-bit coordinates into a single 64-bit Morton code. Reference: pbrt-v4/src/pbrt/util/vecmath.h EncodeMorton2
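A self-contained sketch using the classic bit-spreading trick (same structure as the pbrt-v4 reference; hypothetical stand-alone names):

```julia
# Spread the 32 bits of x so they occupy the even bit positions of a
# UInt64, then interleave x (even bits) with y (odd bits).
function left_shift2(x::UInt64)
    x &= 0xffffffff
    x = (x | (x << 16)) & 0x0000ffff0000ffff
    x = (x | (x << 8))  & 0x00ff00ff00ff00ff
    x = (x | (x << 4))  & 0x0f0f0f0f0f0f0f0f
    x = (x | (x << 2))  & 0x3333333333333333
    x = (x | (x << 1))  & 0x5555555555555555
    x
end
morton2_ref(x::UInt32, y::UInt32) = (left_shift2(UInt64(y)) << 1) | left_shift2(UInt64(x))
```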
sourceHikari.equal_area_sphere_to_square Method
equal_area_sphere_to_square(d::Vec3f) -> Point2fConvert a direction vector to equal-area square UV coordinates. This is pbrt-v4's octahedral/equal-area mapping used for environment maps.
Reference: Clarberg, "Fast Equal-Area Mapping of the (Hemi)Sphere using SIMD"
sourceHikari.equal_area_square_to_sphere Method
equal_area_square_to_sphere(p::Point2f) -> Vec3fConvert equal-area square UV coordinates to a direction vector. Inverse of equal_area_sphere_to_square.
Reference: pbrt-v4's EqualAreaSquareToSphere
sourceHikari.estimate_light_power Method
estimate_light_power(light, scene_radius::Float32) -> Float32Estimate the total power (flux) of a light for importance sampling. Following pbrt-v4's Light::Phi() which returns total emitted power.
For point/spot lights: Phi = 4π * I (or 2π * I * cone_factor for spot)
For directional lights: Phi = π * sceneRadius² * I
For environment lights: Phi = 4π² * sceneRadius² * average_radiance
The scene_radius is required for infinite lights to compute meaningful power.
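The formulas above as stand-alone sketches (hypothetical helper names; I is a scalar intensity, avg_L the environment map's average radiance):

```julia
# Illustrative power estimates matching the formulas above.
point_phi(I) = 4f0 * Float32(π) * I
directional_phi(I, scene_radius) = Float32(π) * scene_radius^2 * I
environment_phi(avg_L, scene_radius) = 4f0 * Float32(π)^2 * scene_radius^2 * avg_L
```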
sourceHikari.eval_dielectric_interface Method
eval_dielectric_interface(wo, wi, alpha_x, alpha_y, eta) -> (f, pdf)Evaluate the dielectric interface BSDF for given directions. Returns (f_value, pdf) for the given wo/wi pair.
sourceHikari.eval_diffuse_interface Method
eval_diffuse_interface(wo, wi, reflectance) -> (f, pdf)Evaluate diffuse BSDF for given directions.
sourceHikari.eval_sigmoid_polynomial Method
Evaluate sigmoid polynomial at wavelength (GPU-compatible)
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat::CoatedConductorMaterial, textures, wo, wi, n, uv, lambda) -> (f, pdf)Evaluate CoatedConductor BSDF for given directions.
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat::CoatedDiffuseMaterial, textures, wo, wi, n, uv, lambda) -> (f, pdf)Evaluate CoatedDiffuse BSDF using pbrt-v4's LayeredBxDF::f random walk algorithm.
This is a 100% port of pbrt-v4's LayeredBxDF::f. The algorithm uses nSamples random walks to estimate the BSDF value using Monte Carlo integration with MIS.
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat::ConductorMaterial, ...) -> (f, pdf)Evaluate metal BSDF for given directions. Matches pbrt-v4's ConductorBxDF::f and ConductorBxDF::PDF exactly.
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat::DiffuseTransmissionMaterial, textures, wo, wi, n, uv, lambda) -> (f, pdf)Evaluate diffuse transmission BSDF matching pbrt-v4's DiffuseTransmissionBxDF::f and PDF.
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat, textures, wo, wi, n, uv, lambda) -> (f::SpectralRadiance, pdf::Float32)Evaluate BSDF for a given pair of directions. Used for MIS in direct lighting. Returns the BSDF value and PDF.
sourceHikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral for MediumInterface - forwards to wrapped material.Hikari.evaluate_bsdf_spectral Method
evaluate_bsdf_spectral(table, mat::ThinDielectricMaterial, textures, wo, wi, n, uv, lambda) -> (f, pdf)Evaluate thin dielectric BSDF - returns zero for non-delta directions. ThinDielectric is purely specular, so f() and PDF() both return 0.
sourceHikari.evaluate_environment_spectral Method
evaluate_environment_spectral(light::AmbientLight, lights, table, ray_d::Vec3f, lambda::Wavelengths)Evaluate ambient light for an escaped ray - provides constant radiance regardless of direction.
sourceHikari.evaluate_environment_spectral Method
evaluate_environment_spectral(light::EnvironmentLight, lights, table, ray_d::Vec3f, lambda::Wavelengths)Evaluate environment light for an escaped ray direction. The lights parameter is used to deref TextureRef fields in EnvironmentMap.
Hikari.evaluate_escaped_ray_spectral Method
evaluate_escaped_ray_spectral(table, lights::StaticMultiTypeSet, ray_d, lambda)Evaluate all environment-type lights for an escaped ray using StaticMultiTypeSet.
sourceHikari.evaluate_material_complete Function
evaluate_material_complete(mat, table, ctx, wo, ns, n, tfc, lambda, u, rng, regularize=false)Complete material evaluation for wavefront pipeline. Returns PWMaterialEvalResult with BSDF sample and emission.
When regularize=true, near-specular BSDFs will be roughened to reduce fireflies.
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.evaluate_material_inner! Method
Inner function for material evaluation - can use return statements.
Now uses pre-computed Sobol samples from pixel_samples (pbrt-v4 RaySamples style).
sourceHikari.evaluate_spectral_material Method
evaluate_spectral_material(table, materials::StaticMultiTypeSet, idx, wo, wi, ns, tfc, lambda)Type-stable dispatch for spectral BSDF evaluation. Returns (f::SpectralRadiance, pdf::Float32).
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.extract_sky_color Method
Extract sky color from scene lights. Returns RGB for background rays.
sourceHikari.face_forward Method
face_forward(v, n) -> Vec3fFlip v to be in the same hemisphere as n (matches pbrt-v4's FaceForward).
sourceHikari.fast_owen_scramble Method
fast_owen_scramble(v::UInt32, seed::UInt32) -> UInt32Fast approximation of Owen scrambling for Sobol sequences. Reference: pbrt-v4/src/pbrt/util/lowdiscrepancy.h FastOwenScrambler (lines 220-237)
sourceHikari.fbm3d Method
fbm3d(x, y, z; octaves=4, persistence=0.5) -> Float64Fractional Brownian motion (fBm) using Perlin noise. Combines multiple octaves of noise at different frequencies for natural-looking detail.
Arguments
x, y, z: 3D coordinates
octaves: Number of noise layers to combine (default: 4)
persistence: Amplitude multiplier per octave (default: 0.5)
Returns values approximately in [-1, 1].
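The octave accumulation can be sketched as follows, assuming a caller-supplied noise(x, y, z) returning values in [-1, 1] (the real implementation uses Perlin noise):

```julia
# Illustrative fBm loop: each octave doubles the frequency and scales
# the amplitude by `persistence`; the sum is normalized back to [-1, 1].
function fbm_ref(noise, x, y, z; octaves=4, persistence=0.5)
    total = 0.0; amplitude = 1.0; frequency = 1.0; maxval = 0.0
    for _ in 1:octaves
        total += amplitude * noise(x * frequency, y * frequency, z * frequency)
        maxval += amplitude
        amplitude *= persistence
        frequency *= 2.0
    end
    total / maxval
end
```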
sourceHikari.fill_aux_buffers! Method
fill_aux_buffers!(film, scene, camera)Fill auxiliary buffers (albedo, normal, depth) by tracing primary rays. Uses KernelAbstractions for GPU compatibility.
This traces one ray per pixel (center of pixel) and stores first-hit data. Should be called before or after main rendering - the auxiliary buffers are used for denoising in postprocess!.
sourceHikari.filter_sample Method
Sample filter - uses analytical sampling where possible, tabulated importance sampling for Gaussian/Mitchell/Lanczos.
sourceHikari.filter_sample Method
Sample filter with tabulated data for importance sampling. Overload for when sampler_data is GPUFilterSamplerData (Gaussian/Mitchell/Lanczos).
sourceHikari.filter_sample Method
Sample filter with tabulated data for importance sampling. Overload for when sampler_data is nothing (Box/Triangle filters).
sourceHikari.filter_sample Method
Sample the triangle filter - analytical importance sampling with weight = 1.0
sourceHikari.filter_sample_tabulated Method
Sample filter using tabulated importance sampling (pbrt-v4 compatible). This is used for Gaussian, Mitchell, and Lanczos filters. Returns FilterSample with position and weight = f[sampled_point] / pdf.
sourceHikari.finalize_soa Method
finalize_soa(soa::NamedTuple)Finalize all arrays in a SoA (Structure of Arrays) NamedTuple.
sourceHikari.find_interval Method
Binary search in CDF to find the interval containing u. Returns index i such that cdf[i] <= u < cdf[i+1] (1-based indexing). The CDF has size n+1 with cdf[1]=0 and cdf[n+1]=1. GPU-compatible fully unrolled branchless implementation.
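A branchy reference version of the same search (the kernel itself uses the unrolled branchless form; hypothetical stand-alone name):

```julia
# Classic CDF interval search: returns i with cdf[i] <= u < cdf[i+1].
function find_interval_ref(cdf::AbstractVector{Float32}, u::Float32)
    lo, hi = 1, length(cdf)        # cdf has n+1 entries
    while hi - lo > 1
        mid = (lo + hi) >> 1
        cdf[mid] <= u ? (lo = mid) : (hi = mid)
    end
    lo
end
```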
sourceHikari.find_interval_binary Method
GPU-compatible binary search to find last index where cdf[i] ≤ u. Returns index in [1, n] where n = length(cdf).
sourceHikari.find_interval_binary_col Method
Binary search in a column of a 2D array (for conditional CDF).
sourceHikari.find_interval_binary_flat Method
GPU-compatible fully unrolled branchless binary search in a flat vector (for marginal CDF).
sourceHikari.flat_to_light_index Method
flat_to_light_index(lights::StaticMultiTypeSet, flat_idx::Int32) -> SetKeyConvert a flat 1-based index to a SetKey for StaticMultiTypeSet. The flat index counts across all typed arrays in order.
sourceHikari.float32_to_bytes Method
float32_to_bytes(v::Float32) -> NTuple{4,UInt8}GPU-compatible conversion of Float32 to bytes using Core.bitcast.
sourceHikari.fr_complex Method
fr_complex(cos_theta_i, eta, k) -> Float32Compute Fresnel reflectance for a conductor using complex IOR (matches pbrt-v4's FrComplex).
Arguments:
cos_theta_i: Cosine of incident angle (clamped to [0, 1])
eta: Real part of complex IOR (n)
k: Imaginary part of complex IOR (extinction coefficient)
This uses the exact same formula as pbrt-v4 with complex arithmetic.
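Julia's Complex type makes the formula direct; a sketch with the same structure (hypothetical stand-alone name):

```julia
# Illustrative conductor Fresnel via complex arithmetic: compute the
# parallel and perpendicular amplitude coefficients, then average |r|².
function fr_complex_ref(cos_theta_i::Float32, eta::Float32, k::Float32)
    cos_theta_i = clamp(cos_theta_i, 0f0, 1f0)
    η = complex(eta, k)
    sin2_theta_i = 1f0 - cos_theta_i^2
    sin2_theta_t = sin2_theta_i / η^2        # complex Snell's law
    cos_theta_t = sqrt(1f0 - sin2_theta_t)
    r_parl = (η * cos_theta_i - cos_theta_t) / (η * cos_theta_i + cos_theta_t)
    r_perp = (cos_theta_i - η * cos_theta_t) / (cos_theta_i + η * cos_theta_t)
    (abs2(r_parl) + abs2(r_perp)) / 2f0
end
```

With k = 0 this degenerates to the dielectric case, e.g. ~4% reflectance at normal incidence for eta = 1.5.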
sourceHikari.fr_complex_spectral Method
fr_complex_spectral(cos_theta_i, eta, k) -> SpectralRadianceCompute spectral Fresnel reflectance for a conductor (matches pbrt-v4's FrComplex for SampledSpectrum). Evaluates fr_complex for each wavelength channel.
sourceHikari.free! Method
free!(film::Film)Release GPU memory held by the film by triggering finalizers on all arrays.
sourceHikari.fresnel_conductor Method
General Fresnel reflection formula with complex index of refraction η̂ = η + ik, where some incident light is potentially absorbed by the material and turned into heat. k is referred to as the absorption coefficient.
sourceHikari.fresnel_dielectric Method
fresnel_dielectric(cos_θi::Float32, ηi::Float32, ηt::Float32) -> Float32Compute Fresnel reflection for dielectric materials (two-IOR version). This is a convenience wrapper that computes eta = ηt / ηi.
Arguments:
cos_θi: Cosine of incident angle w.r.t. normal
ηi: Index of refraction of the incident medium
ηt: Index of refraction of the transmitted medium
Hikari.fresnel_dielectric Method
fresnel_dielectric(cos_θi::Float32, eta::Float32) -> Float32Compute Fresnel reflection for dielectric materials (single-eta version). This matches pbrt-v4's FrDielectric() exactly.
Arguments:
cos_θi: Cosine of incident angle (can be negative if coming from inside)
eta: Ratio n_t / n_i (transmitted IOR / incident IOR)
For a ray hitting glass from air: eta = 1.5 (glass IOR) For a ray hitting air from inside glass: eta = 1/1.5
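A stand-alone sketch of this convention, including the total-internal-reflection case (hypothetical name):

```julia
# Illustrative single-eta dielectric Fresnel: a negative cos_θi means
# the ray arrives from inside the medium, so flip eta and the cosine.
function fr_dielectric_ref(cos_θi::Float32, eta::Float32)
    cos_θi = clamp(cos_θi, -1f0, 1f0)
    if cos_θi < 0f0
        eta = 1f0 / eta
        cos_θi = -cos_θi
    end
    sin2_θi = 1f0 - cos_θi^2
    sin2_θt = sin2_θi / eta^2
    sin2_θt >= 1f0 && return 1f0             # total internal reflection
    cos_θt = sqrt(1f0 - sin2_θt)
    r_parl = (eta * cos_θi - cos_θt) / (eta * cos_θi + cos_θt)
    r_perp = (cos_θi - eta * cos_θt) / (cos_θi + eta * cos_θt)
    (r_parl^2 + r_perp^2) / 2f0
end
```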
sourceHikari.generate_cloud_density Method
generate_cloud_density(resolution; kwargs...) -> Array{Float32, 3}Generate a 3D density grid for volumetric cloud rendering.
Keyword Arguments
scale=4.0: Base frequency scale for noise
sphere_falloff=true: Apply spherical boundary mask
threshold=0.3: Density threshold (negative values include more volume)
worley_weight=0.6: Weight of Worley noise (0-1, higher = puffier clouds)
edge_sharpness=1.5: Controls edge falloff (lower = softer, puffier edges)
density_scale=3.0: Scale factor for final density (match real cloud data ~2-3 max)
Cloud Appearance Tips
For puffy cumulus-like clouds: scale=2.5, worley_weight=0.6, threshold=0.15, density_scale=3.5
For wispy cirrus-like clouds: scale=5.0, worley_weight=0.3, threshold=0.3, density_scale=2.0
For dense fog-like volumes: scale=3.0, worley_weight=0.2, threshold=0.0, density_scale=4.0
Hikari.generate_ray Method
Compute the ray corresponding to a given sample. It is IMPORTANT that the ray's direction vector is normalized; other parts of the system assume it to be so.
Returns the generated ray and a floating-point weight that determines how much the radiance arriving at the film plane along the generated ray contributes to the final image. Simple camera models can return 1, but cameras with simulated physical lenses set this value to indicate how much light passes through the lenses, based on their optical properties.
sourceHikari.generate_ray_differential Method
Same as generate_ray, but also computes rays for pixels shifted one pixel in x & y directions on the film plane. Useful for anti-aliasing textures.
Hikari.get_albedo Method
get_albedo(mat::Emissive, uv::Point2f) -> RGBSpectrumGet the "albedo" of an emissive material for denoising. For emissive materials, we return the normalized emission color.
sourceHikari.get_albedo_spectral Method
get_albedo_spectral(table::RGBToSpectrumTable, mat, textures, uv, lambda) -> SpectralRadianceExtract material albedo as spectral value for denoising auxiliary buffers.
sourceHikari.get_albedo_spectral Method
get_albedo_spectral for MediumInterface - forwards to wrapped material.Hikari.get_albedo_spectral Method
get_albedo_spectral for ThinDielectricMaterial - returns white (transparent).Hikari.get_albedo_spectral_dispatch Method
get_albedo_spectral_dispatch(table, materials::StaticMultiTypeSet, idx, tfc, lambda)Type-stable dispatch for getting material albedo for denoising. Returns SpectralRadiance.
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.get_emission Method
get_emission(mat::Emissive, uv::Point2f) -> RGBSpectrumGet the emitted radiance at UV coordinates (without directional check).
sourceHikari.get_emission Method
get_emission(mat::Emissive, si::SurfaceInteraction) -> RGBSpectrumGet the emitted radiance at a surface point. Returns zero if the surface is one-sided and we're on the back.
sourceHikari.get_emission_spectral Method
get_emission_spectral for CoatedConductorMaterial - returns zero (non-emissive).Hikari.get_emission_spectral Method
get_emission_spectral for CoatedDiffuseMaterial - returns zero (non-emissive).Hikari.get_emission_spectral Method
get_emission_spectral for DiffuseTransmissionMaterial - returns zero (non-emissive).Hikari.get_emission_spectral Method
get_emission_spectral for MediumInterface - forwards to wrapped material.
(Emission is now handled by DiffuseAreaLight in the lights set, not on materials.)Hikari.get_emission_spectral Method
get_emission_spectral for ThinDielectricMaterial - returns zero (non-emissive).Hikari.get_emission_spectral_dispatch Method
get_emission_spectral_dispatch(table, materials::StaticMultiTypeSet, idx, wo, n, tfc, lambda)Type-stable dispatch for getting spectral emission from materials. Returns SpectralRadiance (zero for non-emissive materials).
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.get_emission_spectral_uv_dispatch Method
get_emission_spectral_uv_dispatch(table, materials::StaticMultiTypeSet, idx, tfc, lambda)Type-stable dispatch for getting spectral emission without directional check. Returns SpectralRadiance (zero for non-emissive materials).
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.get_medium_index Method
get_medium_index(mi::MediumInterfaceIdx, wi::Vec3f, n::Vec3f) -> SetKeyDetermine which medium a ray enters based on direction and surface normal.
If dot(wi, n) > 0: ray going in direction of normal -> outside medium
If dot(wi, n) < 0: ray going against normal -> inside medium
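The sign test above can be sketched as follows; `pick_side` is a hypothetical stand-in, since the real function dereferences `SetKey` medium indices rather than returning a symbol.

```julia
# Hedged sketch of the direction test described above; the actual
# get_medium_index resolves MediumInterfaceIdx keys, not symbols.
pick_side(wi, n) = sum(wi .* n) > 0 ? :outside : :inside

# Ray leaving along the normal enters the outside medium.
@assert pick_side((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)) == :outside
# Ray going against the normal enters the inside medium.
@assert pick_side((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)) == :inside
```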
Hikari.get_medium_index_for_direction Method
get_medium_index_for_direction(mat::MediumInterface, wi::Vec3f, n::Vec3f) -> SetKeyGet the medium index a ray enters when crossing a MediumInterface surface.
If dot(wi, n) > 0: ray going in direction of normal -> outside medium
If dot(wi, n) < 0: ray going against normal -> inside medium
Hikari.get_medium_index_for_direction_dispatch Method
get_medium_index_for_direction_dispatch(materials::StaticMultiTypeSet, idx::SetKey, wi::Vec3f, n::Vec3f)Type-stable dispatch for getting the new medium index after crossing a surface. Returns the SetKey from MediumInterface, or SetKey() (vacuum) for regular materials.
sourceHikari.get_medium_preset Method
get_medium_preset(name::String) -> NamedTuple{(:σ_s, :σ_a), ...}Get the scattering properties for a named medium preset. Returns a NamedTuple with σ_s (scattering) and σ_a (absorption) coefficients in mm⁻¹.
Available presets include:
Milk: "Wholemilk", "Skimmilk", "LowfatMilk", "ReducedMilk", "RegularMilk", "Cream"
Chocolate milk: "LowfatChocolateMilk", "RegularChocolateMilk"
Soy milk: "LowfatSoyMilk", "RegularSoyMilk"
Coffee: "Espresso", "MintMochaCoffee"
Wine/Beer: "Chardonnay", "WhiteZinfandel", "Merlot", "BudweiserBeer", "CoorsLightBeer"
Juices: "AppleJuice", "CranberryJuice", "GrapeJuice", "RubyGrapefruitJuice"
Sodas: "Sprite", "Coke", "Pepsi"
Foods: "Apple", "Potato", "Chicken1", "Chicken2", "Ketchup"
Skin: "Skin1", "Skin2"
Materials: "Marble", "Spectralon", "Shampoo", "HeadShouldersShampoo", "Clorox"
Powders: "CappuccinoPowder", "SaltPowder", "SugarPowder"
Water: "PacificOceanSurfaceWater"
Hikari.get_physical_extension Method
Extent of the film in the scene. This is needed for realistic cameras.
sourceHikari.get_pixel_index Method
Point p is in (x, y) format. Returns CartesianIndex in (row, col) = (y, x) format for Julia array indexing.
sourceHikari.get_sample_bounds Method
Range of integer pixels that the Sampler is responsible for generating samples for.
Hikari.get_surface_alpha_dispatch Method
get_surface_alpha_dispatch(materials::StaticMultiTypeSet, idx::SetKey, uv::Point2f) -> Float32Type-stable dispatch for evaluating surface alpha at a UV point. Returns alpha ∈ [0, 1] where 0 = fully transparent, 1 = fully opaque. Used by trace_shadow_transmittance for stochastic alpha pass-through.
sourceHikari.get_template_grid Method
get_template_grid(medium) -> MajorantGridExtract a template grid from a medium for type consistency in mixed media scenes. HomogeneousMedium returns EmptyMajorantGrid(), GridMedium returns its majorant_grid.
sourceHikari.get_template_grid_from_tuple Method
get_template_grid_from_tuple(media::Tuple) -> MajorantGridExtract a template grid from the first GridMedium or RGBGridMedium in the tuple, or EmptyMajorantGrid() if none. This is used to ensure all majorant iterators have consistent types for GPU compilation.
sourceHikari.glossy_reflect Method
Glossy reflection for microfacet materials (metals, glossy plastics). Uses sample_f for proper importance sampling of microfacet distribution. The microfacet distribution already handles roughness via its α parameters, so no additional perturbation is needed.
sourceHikari.has_medium_interface Method
has_medium_interface(mat) -> BoolCheck if a material is a MediumInterface (defines medium boundary).
sourceHikari.has_medium_interface_dispatch Method
has_medium_interface_dispatch(materials::StaticMultiTypeSet, idx::SetKey) -> BoolType-stable dispatch for checking if a material defines a medium boundary.
sourceHikari.henyey_greenstein Method
henyey_greenstein(cos_theta, g) -> Float32Henyey-Greenstein phase function.
g > 0: Forward scattering (clouds typically g ≈ 0.85)
g = 0: Isotropic scattering
g < 0: Backward scattering
Hikari.hg_p Method
hg_p(g, cos_θ) -> Float32Evaluate Henyey-Greenstein phase function. p(cos θ) = (1 - g²) / [4π(1 + g² - 2g cos θ)^(3/2)]
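The formula above can be checked directly; `hg` is an illustrative name, not the actual `hg_p` implementation.

```julia
# Sketch of the Henyey-Greenstein formula stated above.
hg(g, cosθ) = (1 - g^2) / (4π * (1 + g^2 - 2g * cosθ)^1.5)

# Isotropic case (g = 0) is the constant 1/(4π) at any angle.
@assert hg(0.0, 0.3) ≈ 1 / (4π)
# Forward scattering (g > 0) concentrates density near cosθ = 1.
@assert hg(0.85, 1.0) > hg(0.85, -1.0)
```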
sourceHikari.hg_phase_pdf Method
hg_phase_pdf(g, cos_θ) -> Float32Evaluate Henyey-Greenstein phase function PDF.
sourceHikari.homogeneous_next Method
homogeneous_next(iter::HomogeneousMajorantIterator) -> (RayMajorantSegment, HomogeneousMajorantIterator, Bool)Return the single majorant segment for homogeneous media. Returns (seg, new_iter, true) if valid, (invalid_seg, exhausted_iter, false) if exhausted.
The Bool indicates validity: true = has segment, false = exhausted.
sourceHikari.importance Method
importance(lb::LightBounds, p::Point3f, n::Vec3f) -> Float32Compute the importance of a light (region) for shading point p with normal n. Pass n = Vec3f(0) for medium scattering (no normal term).
Following pbrt-v4's CompactLightBounds::Importance (lightsamplers.h:144-201).
sourceHikari.int32_to_bytes Method
int32_to_bytes(v::Int32) -> NTuple{4,UInt8}GPU-compatible conversion of Int32 to bytes.
sourceHikari.intersect_box Method
intersect_box(ray_o, ray_d, box_min, box_max) -> (hit, t_near, t_far)Compute ray intersection with an axis-aligned bounding box.
sourceHikari.is_emissive Method
is_emissive(mat::Material) -> BoolCheck if a material emits light.
sourceHikari.is_emissive Method
is_emissive(materials::StaticMultiTypeSet, idx::SetKey)Type-stable dispatch for checking if a material/medium is emissive. Returns Bool. Works for both materials and media via element-level dispatch.
sourceHikari.is_mix_material Method
is_mix_material(mat) -> BoolCheck if a material is a MixMaterial.
sourceHikari.is_mix_material_dispatch Method
is_mix_material_dispatch(materials, idx::SetKey) -> BoolType-stable dispatch to check if a material is MixMaterial.
sourceHikari.is_pure_emissive Method
is_pure_emissive(mat::Material) -> BoolCheck if a material is purely emissive (no BSDF, only emits light).
sourceHikari.is_pure_emissive Method
is_pure_emissive(materials::StaticMultiTypeSet, idx::SetKey)Type-stable dispatch for checking if a material is purely emissive (no BSDF). Returns Bool.
sourceHikari.layer_transmittance Method
Tr(thickness, w) -> Float32Transmittance through a layer of given thickness along direction w. Used in LayeredBxDF random walk.
sourceHikari.lcg_init Method
lcg_init(ray_o, ray_d, t_max) -> UInt64Initialize LCG state from ray geometry for deterministic medium sampling. Following pbrt-v4's RNG initialization: Hash(ray.o, tMax), Hash(ray.d)
sourceHikari.lcg_next Method
lcg_next(state) -> (UInt64, Float32)Generate next random Float32 in [0,1) and return new state. GPU-compatible LCG.
sourceHikari.le Method
Compute emitted radiance for a ray that escapes the scene (hits no geometry). This is called when a camera/path ray doesn't hit anything.
sourceHikari.le Method
Emitted radiance if the ray hit an area light source. By default, light sources have no area.

sourceHikari.left_shift2 Method
left_shift2(x::UInt64) -> UInt64Spread bits of x for Morton encoding (interleave with zeros).
sourceHikari.lerp_smits_basis Method
lerp_smits_basis(basis::NTuple{10, Float32}, λ::Float32) -> Float32Linearly interpolate a Smits basis spectrum at wavelength λ.
sourceHikari.light_bounds Method
light_bounds(light) -> Union{LightBounds, Nothing}Compute LightBounds for a light. Returns nothing for infinite lights.
Hikari.linear_srgb_to_xyz Method
linear_srgb_to_xyz(rgb::Vec3f) -> Vec3fConvert linear sRGB to CIE XYZ color space.
sourceHikari.linear_to_srgb_gamma Method
linear_to_srgb_gamma(c::Float32) -> Float32Apply sRGB gamma curve to a linear RGB value.
sourceHikari.load_environment_map Method
Load an environment map from an HDR/EXR file. Converts the image to RGBSpectrum format.
rotation: Mat3f rotation matrix, or nothing for identity
sourceHikari.local_to_world Method
local_to_world(local_dir, n, tangent, bitangent) -> Vec3fTransform direction from local (shading) space to world space.
sourceHikari.lookup_uv Method
lookup_uv(env::EnvironmentMap, uv::Point2f, textures) -> SpectrumLook up environment map directly by UV coordinates. This is the equivalent of pbrt-v4's ImageLe(uv, lambda) for ImageInfiniteLight. Used when UV is already known (e.g., from importance sampling the distribution).
The textures parameter is used to deref TextureRef fields when EnvironmentMap is stored in a MultiTypeSet.
IMPORTANT: Uses nearest-neighbor lookup to match the discrete PDF from importance sampling. Bilinear interpolation would cause bias because the PDF is computed for discrete pixels, not interpolated values. This matches pbrt-v4's LookupNearestChannel in ImageLe.
sourceHikari.luminance Method
luminance(rgb) -> Float32Compute luminance from RGB using standard coefficients (Rec. 709).
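The standard Rec. 709 coefficients referenced above are, as a sketch (`luma` is an illustrative name):

```julia
# Rec. 709 luma weights; a hedged sketch of the luminance computation above.
luma(r, g, b) = 0.2126f0 * r + 0.7152f0 * g + 0.0722f0 * b

@assert luma(1f0, 1f0, 1f0) ≈ 1f0      # weights sum to one
@assert luma(0f0, 1f0, 0f0) ≈ 0.7152f0  # green dominates perceived brightness
```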
sourceHikari.max_value Method
max_value(s::RGBIlluminantSpectrum) -> Float32Maximum value of the spectrum, matching pbrt-v4's RGBIlluminantSpectrum::MaxValue(): scale * rsp.MaxValue() * illuminant->MaxValue()
sourceHikari.medium_direct_lighting_inner! Method
Inner function for medium direct lighting - can use return statements.
Uses power-weighted light sampling via alias table for better importance sampling in scenes with lights of varying intensities (pbrt-v4's PowerLightSampler approach).
Now uses pre-computed Sobol samples from pixel_samples (pbrt-v4 RaySamples style).
sourceHikari.medium_scatter_inner! Method
Inner function for medium scatter - can use return statements.
Now uses pre-computed Sobol samples from pixel_samples (pbrt-v4 RaySamples style).
sourceHikari.mis_weight_spectral Method
mis_weight_spectral(pdf_f::Float32, pdf_g::Float32) -> Float32Compute MIS weight using power heuristic (beta=2). Returns weight for strategy f: w_f = pdf_f^2 / (pdf_f^2 + pdf_g^2)
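The power heuristic above is simple enough to verify numerically; `mis` is a stand-in name for illustration.

```julia
# Power heuristic (beta = 2) as written above.
mis(pdf_f, pdf_g) = pdf_f^2 / (pdf_f^2 + pdf_g^2)

@assert mis(1f0, 1f0) ≈ 0.5f0            # equal pdfs split the weight evenly
@assert mis(2f0, 1f0) ≈ 0.8f0            # the higher-pdf strategy dominates
@assert mis(2f0, 1f0) + mis(1f0, 2f0) ≈ 1f0  # weights sum to one across strategies
```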
sourceHikari.mix_bits Method
mix_bits(v::UInt64) -> UInt64Bit mixing function from pbrt-v4's hash.h. Reference: http://zimbry.blogspot.ch/2011/09/better-bit-mixing-improving-on.html
sourceHikari.mix_hash_float Method
mix_hash_float(p::Point3f, wo::Vec3f, idx1::SetKey, idx2::SetKey) -> Float32Generate a deterministic pseudo-random float in [0, 1) for material selection. Uses a simple but effective hash function based on pbrt-v4's HashFloat.
The hash is deterministic: same position, direction, and materials always produce the same result, ensuring consistent rendering across samples.
sourceHikari.murmur_hash_64a Method
murmur_hash_64a(data::NTuple{N,UInt8}, seed::UInt64) -> UInt64MurmurHash2 64-bit hash function, exactly matching pbrt-v4's MurmurHash64A. Reference: https://github.com/explosion/murmurhash/blob/master/murmurhash/MurmurHash2.cpp
sourceHikari.nanovdb_get_value Method
nanovdb_get_value(medium::NanoVDBMedium, media, ijk::NTuple{3, Int32}) -> Float32Get the voxel value at integer index coordinates using full tree traversal. Matches pbrt-v4/NanoVDB's Tree::getValue exactly.
The media parameter is used to deref TextureRef fields when NanoVDBMedium is stored in a MultiTypeSet.
Uses pointer(buffer) with as_pointer for type conversion - works on both CPU and GPU.
sourceHikari.node_importance Method
node_importance(node::LightBVHNode, p::Point3f, n::Vec3f) -> Float32Compute importance of a BVH node at shading point p with normal n. Operates directly on unquantized node fields.
Hikari.parse_nanovdb_buffer Method
parse_nanovdb_buffer(filepath::String) -> (buffer, metadata)Parse a NanoVDB file and return the decompressed buffer along with metadata. Returns raw buffer suitable for GPU upload.
sourceHikari.pbrt_hash Method
pbrt_hash(args...) -> UInt64Hash function matching pbrt-v4's variadic Hash() template. Packs arguments into a byte buffer and applies MurmurHash64A. Uses GPU-compatible bit manipulation instead of reinterpret.
sourceHikari.pcg32_init Method
pcg32_init(seq_index::UInt64, seed::UInt64) -> PCG32StateInitialize PCG32 with sequence index and seed, matching pbrt-v4's SetSequence. Returns initialized state.
sourceHikari.pcg32_uniform_f32 Method
pcg32_uniform_f32(rng::PCG32State) -> (Float32, PCG32State)Generate uniform random Float32 in [0, 1) and return new state. Matching pbrt-v4's Uniform<float>().
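The UInt32-to-float mapping pbrt-v4 uses (multiply by 2⁻³² and clamp below 1 so the result stays in [0, 1)) can be sketched as follows; `u32_to_unit` is an illustrative name, not the actual kernel code.

```julia
# Hedged sketch of pbrt-v4's Uniform<float>() conversion step:
# scale the 32-bit integer into [0, 1) and clamp just below 1.
u32_to_unit(u::UInt32) = min(prevfloat(1f0), Float32(u) * Float32(2.0^-32))

@assert u32_to_unit(UInt32(0)) == 0f0
@assert u32_to_unit(typemax(UInt32)) < 1f0  # clamp prevents returning exactly 1
```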
sourceHikari.pcg32_uniform_u32 Method
pcg32_uniform_u32(rng::PCG32State) -> (UInt32, PCG32State)Generate uniform random UInt32 and return new state. Matching pbrt-v4's Uniform<uint32_t>().
sourceHikari.pdf Method
Compute PDF for sampling a specific 2D point from flat distribution. The textures parameter is used to deref TextureRef fields when Distribution2D is stored in a MultiTypeSet.
Hikari.pdf_dielectric_interface Function
pdf_dielectric_interface(wo, wi, alpha_x, alpha_y, eta, refl_trans_flags) -> Float32Compute PDF of dielectric interface sampling.
sourceHikari.pdf_diffuse_interface Method
pdf_diffuse_interface(wo, wi) -> Float32Compute PDF of diffuse sampling.
sourceHikari.pdf_is_nonzero Method
pdf_is_nonzero(lambda::Wavelengths, i::Int) -> BoolCheck if wavelength i has non-zero PDF (should contribute).
sourceHikari.pdf_layered_bsdf Method
pdf_layered_bsdf(...) -> Float32Compute PDF for LayeredBxDF using Monte Carlo estimation. This is a simplified version of pbrt-v4's LayeredBxDF::PDF.
sourceHikari.pdf_li Method
PDF for sampling a particular direction from the environment light. Returns the probability density for importance sampling this direction.
sourceHikari.pdf_li_spectral Method
pdf_li_spectral(lights, light::EnvironmentLight, p::Point3f, wi::Vec3f)PDF for sampling direction wi from environment light.
sourceHikari.pdf_li_spectral Method
pdf_li_spectral(light::DirectionalLight, p::Point3f, wi::Vec3f)PDF for sampling direction wi from directional light (always delta).
sourceHikari.pdf_li_spectral Method
pdf_li_spectral(light::PointLight, p::Point3f, wi::Vec3f)PDF for sampling direction wi from point light (always delta).
sourceHikari.pdf_li_spectral Method
pdf_li_spectral(light::SpotLight, p::Point3f, wi::Vec3f)PDF for sampling direction wi from spotlight (always delta).
sourceHikari.pdf_li_spectral Method
pdf_li_spectral(light::SunLight, p::Point3f, wi::Vec3f)PDF for sampling direction wi from sun light (always delta).
sourceHikari.perlin3d Method
perlin3d(x, y, z) -> Float64Classic 3D Perlin noise. Returns values in approximately [-1, 1].
sourceHikari.pixel_coords_from_index Method
pixel_coords_from_index(idx::Int32, width::Int32) -> (x::Int32, y::Int32)Convert linear pixel index to x,y coordinates (1-based).
sourceHikari.pixel_index_from_coords Method
pixel_index_from_coords(x::Int32, y::Int32, width::Int32) -> Int32Convert x,y coordinates to linear pixel index (1-based).
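The 1-based, row-major layout these two helpers imply can be sketched like this (assumed layout; the actual Hikari functions take `Int32` arguments):

```julia
# Hedged sketch of the 1-based linear pixel indexing described above.
to_index(x, y, w) = (y - 1) * w + x
from_index(i, w)  = ((i - 1) % w + 1, (i - 1) ÷ w + 1)

w = 7
for y in 1:3, x in 1:w
    @assert from_index(to_index(x, y, w), w) == (x, y)  # round trip
end
@assert to_index(1, 1, w) == 1
```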
sourceHikari.pixel_offset_2d Method
pixel_offset_2d(px::Int32, py::Int32, dim::Int32) -> Tuple{Float32, Float32}Compute a deterministic 2D offset for Cranley-Patterson rotation based on pixel coordinates.
sourceHikari.planckian_xy Method
planckian_xy(T::Float32) -> (x, y)Compute CIE xy chromaticity coordinates for a Planckian (blackbody) radiator at temperature T in Kelvin. Valid for 1667K to 25000K.
sourceHikari.pmf Method
pmf(table::AliasTable, idx::Int32) -> Float32Get the PMF for index idx (1-based).
Hikari.pmf Method
pmf(sampler::PowerLightSampler, light_idx::Int32) -> Float32Get PMF for a specific light index.
sourceHikari.pmf Method
pmf(sampler::UniformLightSampler, light_idx::Int32) -> Float32PMF for any light is 1/N.
sourceHikari.postprocess! Method
postprocess!(film::Film; exposure=1.0f0, tonemap=:aces, gamma=2.2f0, white_point=4.0f0, sensor=nothing)Apply postprocessing to film.framebuffer and write result to film.postprocess.
This function is non-destructive: the original framebuffer is preserved, allowing you to call postprocess! multiple times with different parameters.
Works on both CPU and GPU arrays via KernelAbstractions.
Arguments
film: The Film containing rendered data
exposure: Exposure multiplier applied before tonemapping (default: 1.0)
tonemap: Tonemapping method (default: :aces)
:reinhard - Simple Reinhard L/(1+L)
:reinhard_extended - Extended Reinhard with white point
:aces - ACES filmic (industry standard)
:uncharted2 - Uncharted 2 filmic
:filmic - Hejl-Dawson filmic
nothing - No tonemapping (linear clamp)
gamma: Gamma correction value (default: 2.2, use nothing to skip)
white_point: White point for extended Reinhard (default: 4.0)
sensor: FilmSensor for pbrt-style sensor simulation (ISO, white balance)
background: When set to an RGB{Float32}, pixels where film.depth is Inf (escaped rays) are replaced with this color instead of being tonemapped. Useful for compositing.
Example
# Render once
integrator(scene, film, camera)
to_framebuffer!(film)
# Try different postprocessing settings
postprocess!(film; exposure=1.0, tonemap=:aces)
display(film.postprocess)
# With pbrt-style sensor (bunny-cloud scene settings)
sensor = FilmSensor(iso=90, white_balance=5000)
postprocess!(film; sensor=sensor, tonemap=:aces)
display(film.postprocess)Hikari.power Method
Total power emitted by the light source over the entire sphere of directions.
sourceHikari.power Method
The total power emitted by the directional light depends on the spatial extent of the scene: it equals the power arriving at a disk inscribed in the scene's bounding sphere, scale * I * π * r^2.
Hikari.power Method
Total power emitted by the environment light. For an environment light, this is approximated as the average radiance times the surface area of the bounding sphere.
sourceHikari.power_heuristic Function
power_heuristic(pdf_f::Float32, pdf_g::Float32, beta::Float32=2f0) -> Float32Power heuristic for MIS: w_f = pdf_f^beta / (pdf_f^beta + pdf_g^beta) Default beta=2 (squared terms).
sourceHikari.power_heuristic Method
power_heuristic(nf, fPdf, ng, gPdf) -> Float32Power heuristic for MIS with sample counts: (nf·fPdf)² / ((nf·fPdf)² + (ng·gPdf)²).
sourceHikari.pw_accumulate_sample_to_rgb! Method
pw_accumulate_sample_to_rgb!(backend, pixel_rgb, pixel_L, wavelengths_per_pixel,
pdf_per_pixel, cie_table, num_pixels)High-level wrapper to accumulate spectral sample to RGB buffer.
sourceHikari.pw_accumulate_sample_to_rgb_kernel! Method
pw_accumulate_sample_to_rgb_kernel!(pixel_rgb, pixel_L, wavelengths_per_pixel,
pdf_per_pixel, cie_x, cie_y, cie_z, num_pixels)Convert this sample's spectral radiance to RGB using PER-PIXEL wavelengths and accumulate into pixel_rgb buffer.
This is the key operation that pbrt-v4 does: spectral-to-RGB conversion happens IMMEDIATELY after each sample, using THAT PIXEL's wavelengths. The RGB values are then accumulated across samples.
Each pixel has independently sampled wavelengths, which decorrelates color noise and results in much faster convergence than using shared wavelengths.
Arguments:
pixel_rgb: RGB accumulation buffer (3 × num_pixels, interleaved R,G,B)
pixel_L: Spectral radiance for this sample (4 × num_pixels, interleaved)
wavelengths_per_pixel: Wavelengths for each pixel (4 × num_pixels)
pdf_per_pixel: PDF for each wavelength sample (4 × num_pixels)
cie_x, cie_y, cie_z: CIE XYZ color matching function arrays
num_pixels: Total number of pixels
Hikari.pw_accumulate_samples! Method
pw_accumulate_samples!(backend, pixel_L_accum, pixel_L_sample, num_pixels)Add new sample results to accumulated film values.
sourceHikari.pw_accumulate_to_film_kernel! Method
pw_accumulate_to_film_kernel!(pixel_L_accum, pixel_L_sample, num_pixels)Accumulate sample results into the film accumulator. Adds new sample's spectral values to existing accumulated values.
sourceHikari.pw_apply_exposure! Method
pw_apply_exposure!(backend, film::Film, exposure::Float32)Apply exposure adjustment to film framebuffer.
sourceHikari.pw_apply_exposure_kernel! Method
pw_apply_exposure_kernel!(framebuffer, exposure)Apply exposure adjustment to framebuffer (in-place).
sourceHikari.pw_apply_srgb_gamma! Method
pw_apply_srgb_gamma!(backend, film::Film)Apply sRGB gamma to framebuffer and store in postprocess buffer.
sourceHikari.pw_apply_srgb_gamma_kernel! Method
pw_apply_srgb_gamma_kernel!(output, input)Apply sRGB gamma curve to convert from linear to display sRGB.
sourceHikari.pw_clear_film! Method
pw_clear_film!(backend, pixel_L, num_pixels)Clear film accumulator to zero for a new render.
sourceHikari.pw_clear_film_kernel! Method
pw_clear_film_kernel!(pixel_L, num_pixels)Clear accumulated spectral radiance to zero for new render.
sourceHikari.pw_evaluate_materials! Function
pw_evaluate_materials!(backend, next_ray_queue, pixel_L, material_queue, materials, rgb2spec_table, max_depth, regularize=true)Evaluate materials and spawn continuation rays.
When regularize=true, near-specular BSDFs are roughened after the first non-specular bounce to reduce fireflies (matches pbrt-v4's BSDF::Regularize).
Hikari.pw_evaluate_materials_kernel! Method
pw_evaluate_materials_kernel!(next_ray_queue, pixel_L, material_queue, ...)Evaluate materials for all work items:
Sample BSDF for indirect lighting direction
Apply Russian roulette for path termination
Create continuation ray if path should continue
Hikari.pw_finalize_film! Method
pw_finalize_film!(backend, film, pixel_rgb, samples_per_pixel)Copy accumulated RGB values to film framebuffer, dividing by sample count.
sourceHikari.pw_finalize_film_kernel! Method
pw_finalize_film_kernel!(framebuffer, pixel_rgb, inv_samples, width, height)Copy accumulated RGB values to film framebuffer, dividing by sample count.
sourceHikari.pw_generate_camera_rays! Method
pw_generate_camera_rays!(backend, ray_queue, wavelengths_per_pixel, pdf_per_pixel,
width, height, camera, sample_idx, rng_base)Generate camera rays with per-pixel wavelength sampling (pbrt-v4 style). Each pixel samples its own wavelengths, which are stored for later film accumulation.
sourceHikari.pw_generate_camera_rays_kernel! Method
pw_generate_camera_rays_kernel!(ray_queue, wavelengths_per_pixel, pdf_per_pixel,
width, height, camera, sample_idx, rng_base)Generate camera rays for all pixels with PER-PIXEL wavelength sampling. Each thread generates one camera ray with independently sampled wavelengths.
This matches pbrt-v4's approach where each pixel samples its own wavelengths, which decorrelates color noise across pixels for faster convergence.
sourceHikari.pw_handle_escaped_rays! Method
pw_handle_escaped_rays!(backend, pixel_L, escaped_queue, lights, rgb2spec_table)Evaluate environment lights for escaped rays.
sourceHikari.pw_handle_escaped_rays_kernel! Method
pw_handle_escaped_rays_kernel!(pixel_L, escaped_queue, lights, rgb2spec_table, max_queued)Handle rays that escaped the scene by evaluating environment lights.
sourceHikari.pw_handle_hit_area_lights! Method
pw_handle_hit_area_lights!(backend, pixel_L, hit_light_queue, rgb2spec_table, materials)Evaluate emission for rays that hit area lights.
sourceHikari.pw_handle_hit_area_lights_kernel! Method
pw_handle_hit_area_lights_kernel!(pixel_L, hit_light_queue, rgb2spec_table, materials, max_queued)Handle rays that hit emissive surfaces.
sourceHikari.pw_populate_aux_buffers! Method
pw_populate_aux_buffers!(backend, aux_albedo, aux_normal, aux_depth,
material_queue, materials, rgb2spec_table)Populate auxiliary buffers for denoising.
sourceHikari.pw_populate_aux_buffers_kernel! Method
pw_populate_aux_buffers_kernel!(aux_albedo, aux_normal, aux_depth, material_queue, ...)Populate auxiliary buffers for denoising on first bounce. Only processes depth=0 items (primary ray hits).
sourceHikari.pw_sample_direct_lighting! Method
pw_sample_direct_lighting!(backend, shadow_queue, material_queue, materials, lights, rgb2spec_table)Sample direct lighting for all material work items.
sourceHikari.pw_sample_direct_lighting_kernel! Method
pw_sample_direct_lighting_kernel!(shadow_queue, material_queue, ...)Sample direct lighting for all material evaluation work items. For each item, selects a light, samples a direction, evaluates BSDF, and creates a shadow ray work item.
sourceHikari.pw_trace_rays! Method
pw_trace_rays!(backend, escaped_queue, hit_light_queue, material_queue,
ray_queue, accel, materials)Trace all rays in ray_queue and populate output queues.
sourceHikari.pw_trace_rays_kernel! Method
pw_trace_rays_kernel!(...)Trace rays from ray_queue, handle hits and misses:
Misses -> push to escaped_ray_queue (for environment light)
Hits on emissive -> push to hit_area_light_queue
Hits on non-emissive -> push to material_eval_queue
This kernel does NOT generate shadow rays - that happens in direct lighting.
sourceHikari.pw_trace_shadow_rays! Method
pw_trace_shadow_rays!(backend, pixel_L, shadow_queue, accel)Trace shadow rays and accumulate unoccluded contributions.
sourceHikari.pw_trace_shadow_rays_kernel! Method
pw_trace_shadow_rays_kernel!(pixel_L, shadow_queue, accel, max_queued)Trace shadow rays and accumulate unoccluded contributions to pixel buffer.
sourceHikari.pw_update_aux_from_material_queue! Method
pw_update_aux_from_material_queue!(backend, film, material_queue, materials, rgb2spec_table)Update film auxiliary buffers (albedo, normal, depth) from first-bounce material hits.
This extracts albedo from materials during the wavefront pipeline, unlike the separate fill_aux_buffers! which traces primary rays again.
sourceHikari.pw_update_aux_kernel! Method
pw_update_aux_kernel!(aux_albedo, aux_normal, aux_depth,
material_queue_items, material_queue_size,
materials, rgb2spec_table,
width, height, max_queued)Kernel to update auxiliary buffers from depth=0 material queue items.
sourceHikari.pw_update_film_perPixel! Method
pw_update_film_perPixel!(backend, film, pixel_L, wavelengths_per_pixel,
pdf_per_pixel, cie_table, samples_accumulated)Update film framebuffer with per-pixel wavelength data. Uses proper kernel with CIE table arrays passed explicitly.
sourceHikari.pw_update_film_spectral_kernel! Method
pw_update_film_spectral_kernel!(framebuffer, pixel_L, wavelengths_per_pixel,
pdf_per_pixel, cie_x, cie_y, cie_z,
samples_accumulated, width, height)Convert accumulated spectral radiance to RGB and update film framebuffer.
Each pixel stores 4 spectral values in pixel_L (interleaved), and wavelengths_per_pixel stores the sampled wavelengths. Uses CIE XYZ color matching for accurate conversion.
sourceHikari.pw_update_film_uniform! Method
pw_update_film_uniform!(backend, film, pixel_L, cie_table, lambda, samples_accumulated)Update film framebuffer using uniform wavelength sampling (all pixels share wavelengths).
sourceHikari.pw_update_film_uniform_kernel! Method
pw_update_film_uniform_kernel!(framebuffer, pixel_L, cie_x, cie_y, cie_z,
lambda, samples_accumulated, width, height)Convert accumulated spectral radiance to RGB using uniform wavelength sampling.
When using stratified wavelength sampling, all pixels share the same wavelengths within a sample iteration. This kernel uses a single Wavelengths value for all pixels.
sourceHikari.r2_sample Method
r2_sample(n::Int32) -> Tuple{Float32, Float32}Generate the n-th point of the R2 quasi-random sequence. The R2 sequence has excellent 2D discrepancy properties.
sourceHikari.r2_sample_rotated Method
r2_sample_rotated(n::Int32, offset_x::Float32, offset_y::Float32) -> Tuple{Float32, Float32}Generate the n-th point of the R2 sequence with Cranley-Patterson rotation. The offset is typically derived from a hash of the pixel coordinates.
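The R2 sequence these functions presumably follow (Roberts' generalization of the golden-ratio sequence, built on the plastic constant) can be sketched as follows; exact constants and offsets in Hikari may differ.

```julia
# Hedged sketch of the R2 low-discrepancy sequence.
const ϕ2 = 1.32471795724474602596  # plastic constant: the real root of x^3 = x + 1
r2(n) = (mod(n / ϕ2, 1.0), mod(n / ϕ2^2, 1.0))

p1, p2 = r2(1), r2(2)
@assert all(0 .<= p1 .< 1) && all(0 .<= p2 .< 1)  # points stay in the unit square
@assert p1 != p2                                   # consecutive points differ
```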
sourceHikari.ray_bounds_intersect Method
ray_bounds_intersect(ray_o, ray_d, bounds) -> (t_min, t_max)Compute ray-AABB intersection. Returns (Inf, -Inf) if no intersection. Uses scalar operations for GPU compatibility.
sourceHikari.ray_majorant_next Method
ray_majorant_next(iter::RayMajorantIterator, media) -> (RayMajorantSegment, RayMajorantIterator, Bool)Advance the unified iterator and return the next majorant segment. Dispatches internally based on mode (homogeneous vs DDA). Returns (segment, new_iter, valid) where valid=false means exhausted.
sourceHikari.reflect Method
reflect(wo, n) -> wiCompute reflected direction: wi = -wo + 2 * dot(wo, n) * n
sourceHikari.refract Method
refract(wi::Vec3f, n::Normal3f, eta::Float32) -> (valid::Bool, wt::Vec3f)Compute refracted direction wt given an incident direction wi, surface normal n, and eta = n_t / n_i (ratio of transmitted to incident IOR).
This matches pbrt-v4's Refract() function exactly:
If wi comes from below the surface (cos_θi < 0), the interface is flipped
Returns (false, zero) for total internal reflection
Returns (true, wt) with the refracted direction otherwise
The convention is: eta = n_transmitted / n_incident
For ray entering glass (n_i=1, n_t=1.5): eta = 1.5
For ray exiting glass (n_i=1.5, n_t=1): eta = 1/1.5 ≈ 0.67
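The eta convention above determines when total internal reflection occurs. A minimal sketch of that test, under the stated eta = n_t / n_i convention (`can_refract` is an illustrative helper, not the actual Refract implementation):

```julia
# Snell's law check: TIR occurs when sin²θt would exceed 1.
function can_refract(cosθi, eta)
    sin2θi = max(0f0, 1f0 - cosθi^2)
    sin2θt = sin2θi / eta^2   # sinθt = sinθi / eta
    sin2θt < 1f0               # false means total internal reflection
end

@assert can_refract(0.9f0, 1.5f0)         # entering glass: always refracts
@assert !can_refract(0.1f0, 1f0 / 1.5f0)  # grazing exit from glass: TIR
```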
Hikari.refract_microfacet Method
refract_microfacet(wo, wm, eta) -> (valid, wi, etap)Compute refracted direction through a microfacet with normal wm.
sourceHikari.refract_pbrt Method
refract_pbrt(wo, eta) -> (valid, wi, etap)Compute refracted direction using pbrt-v4 convention. eta = n_transmitted / n_incident Returns (valid, wi, effective_eta).
sourceHikari.regularize_alpha Method
regularize_alpha(α::Float32) -> Float32Regularize a microfacet distribution alpha value to reduce fireflies from near-specular paths. Matches pbrt-v4's TrowbridgeReitzDistribution::Regularize().
If α < 0.3, doubles it and clamps to [0.1, 0.3]. This increases the roughness of near-specular surfaces after the first non-specular bounce, reducing variance from paths that hit nearly-specular surfaces.
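The double-and-clamp rule stated above can be sketched directly; `reg` is an illustrative name for the regularization step.

```julia
# Sketch of the regularization rule: roughen near-specular alphas.
reg(α) = α < 0.3f0 ? clamp(2α, 0.1f0, 0.3f0) : α

@assert reg(0.01f0) == 0.1f0  # very smooth surfaces are floored at 0.1
@assert reg(0.2f0)  == 0.3f0  # doubled to 0.4, then clamped to 0.3
@assert reg(0.5f0)  == 0.5f0  # already-rough alphas are untouched
```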
sourceHikari.render! Method
render!(vp::VolPath, scene, film, camera)Render one iteration/sample using volumetric spectral wavefront path tracing.
This function is allocation-free and renders a single sample, accumulating results in the film. The iteration index is read from and incremented in film.iteration_index.
For progressive rendering, call this repeatedly. For complete rendering, use the main call function (vp::VolPath)(scene, film, camera) which wraps this in a loop.
Hikari.resolve_mix_material Method
resolve_mix_material(materials::StaticMultiTypeSet, idx::SetKey, p, wo, uv) -> SetKeyResolve any MixMaterial chain to get the final material index. Handles nested MixMaterials by iterating until a non-mix material is found. materials is used both for material lookup and texture evaluation.
This should be called at intersection time before creating material work items.
sourceHikari.rgb_illuminant_spectrum Method
rgb_illuminant_spectrum(table::RGBToSpectrumTable, r, g, b) -> RGBIlluminantSpectrum
rgb_illuminant_spectrum(table::RGBToSpectrumTable, rgb::RGB) -> RGBIlluminantSpectrumCreate an illuminant spectrum from RGB values, matching pbrt-v4's RGBIlluminantSpectrum constructor.
sourceHikari.rgb_to_spectral Method
rgb_to_spectral(rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadianceConvert Hikari's RGBSpectrum to spectral radiance at given wavelengths.
sourceHikari.rgb_to_spectral_at_wavelength Method
rgb_to_spectral_at_wavelength(r, g, b, λ) -> Float32Get spectral value at a single wavelength from RGB. Uses smooth blending between spectral bands.
sourceHikari.rgb_to_spectral_sigmoid Method
rgb_to_spectral_sigmoid(r::Float32, g::Float32, b::Float32, lambda::Wavelengths) -> SpectralRadianceConvert RGB to spectral radiance using sigmoid polynomial method (pbrt-v4 style). This provides the smoothest spectra and lowest variance for spectral rendering.
Note: Uses global table, not GPU-compatible. Use the version with explicit table for GPU kernels.
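The sigmoid-polynomial representation behind this method (pbrt-v4's RGBSigmoidPolynomial) evaluates a quadratic in λ and squashes it through a sigmoid, so the spectrum stays in (0, 1) at every wavelength. A hedged sketch, with made-up coefficients:

```julia
# Sigmoid maps any real polynomial value into (0, 1).
s(x) = 0.5 + x / (2 * sqrt(1 + x^2))
spectrum_at(c0, c1, c2, λ) = s((c0 * λ + c1) * λ + c2)

# Illustrative coefficients only; real ones come from the RGBToSpectrumTable.
v = spectrum_at(-1e-4, 0.1, -20.0, 550.0)
@assert 0.0 < v < 1.0   # physically plausible reflectance at 550 nm
@assert s(0.0) == 0.5   # sigmoid is centered at 1/2
```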
sourceHikari.rgb_to_spectral_sigmoid Method
rgb_to_spectral_sigmoid(table::RGBToSpectrumTable, r, g, b, lambda) -> SpectralRadianceGPU-compatible version that takes an explicit table parameter.
sourceHikari.rgb_to_spectral_sigmoid_illuminant Method
rgb_to_spectral_sigmoid_illuminant(table::RGBToSpectrumTable, r, g, b, lambda) -> SpectralRadianceConvert RGB to spectral radiance for illuminants/light sources. Following pbrt-v4's RGBIlluminantSpectrum: multiplies sigmoid polynomial by D65 illuminant.
This is the correct conversion for environment maps and other light sources that are specified in sRGB. The D65 multiplication is necessary because sRGB's white point is D65, so an RGB=(1,1,1) light source should emit a D65-like spectrum.
sourceHikari.rgb_to_spectral_sigmoid_unbounded Method
rgb_to_spectral_sigmoid_unbounded(r::Float32, g::Float32, b::Float32, lambda::Wavelengths) -> SpectralRadianceConvert RGB to spectral radiance for unbounded values (emission/illumination). Scales the spectrum to preserve the maximum RGB component.
Note: Uses global table, not GPU-compatible. Use the version with explicit table for GPU kernels.
sourceHikari.rgb_to_spectral_sigmoid_unbounded Method
rgb_to_spectral_sigmoid_unbounded(table::RGBToSpectrumTable, r, g, b, lambda) -> SpectralRadianceGPU-compatible version that takes an explicit table parameter.
sourceHikari.rgb_to_spectral_simple Method
rgb_to_spectral_simple(r::Float32, g::Float32, b::Float32, lambda::Wavelengths) -> SpectralRadianceConvert RGB to spectral radiance using a simple piecewise linear model. This is a simplified uplift that treats RGB as spectral bands.
For each wavelength λ:
Blue region (380-490nm): primarily B channel
Green region (490-580nm): primarily G channel
Red region (580-780nm): primarily R channel
With smooth transitions between regions.
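The banded blend above can be sketched as follows. The band edges (490 nm, 580 nm) come from the regions listed; the 30 nm blend width and the helper names (`rgb_band_weight`, `rgb_to_spectral_simple_sketch`) are illustrative assumptions, not Hikari's actual constants or API.

```julia
# Hypothetical sketch of a piecewise-linear RGB uplift: each wavelength draws
# mostly from one channel, with linear cross-fades at the band boundaries.
# The 30 nm blend half-width is an assumed constant, chosen for illustration.
function rgb_band_weight(λ::Float32)
    w = 30f0                                          # assumed blend width
    tb = clamp((490f0 - λ) / w + 0.5f0, 0f0, 1f0)     # blue fades out near 490 nm
    tr = clamp((λ - 580f0) / w + 0.5f0, 0f0, 1f0)     # red fades in near 580 nm
    tg = max(1f0 - tb - tr, 0f0)                      # remainder goes to green
    return (tr, tg, tb)
end

function rgb_to_spectral_simple_sketch(r::Float32, g::Float32, b::Float32, λ::Float32)
    (wr, wg, wb) = rgb_band_weight(λ)
    return wr * r + wg * g + wb * b
end
```

At 450 nm the weight is fully on B, at 530 nm fully on G, at 650 nm fully on R, with linear transitions in between.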
sourceHikari.rgb_to_spectral_smits Method
rgb_to_spectral_smits(r::Float32, g::Float32, b::Float32, lambda::Wavelengths) -> SpectralRadianceConvert RGB to spectral radiance using Smits' method. More accurate than the simple piecewise linear approach.
sourceHikari.rgb_to_spectral_smits_at_wavelength Method
rgb_to_spectral_smits_at_wavelength(r, g, b, λ) -> Float32Compute spectral value at wavelength λ using Smits' method.
sourceHikari.rgb_to_spectrum Method
rgb_to_spectrum(table, rgb) -> RGBSigmoidPolynomialConvert an RGB color to a sigmoid polynomial spectrum representation. RGB values should be in [0, 1] range.
sourceHikari.rgb_to_spectrum_coeffs Method
rgb_to_spectrum_coeffs(scale, coeffs, res, r, g, b) -> (c0, c1, c2)GPU-compatible version that takes raw arrays and returns coefficient tuple.
sourceHikari.rotation_matrix Method
Create a rotation matrix from an axis-angle representation (like pbrt's Rotate command).
angle: rotation angle in degrees
axis: rotation axis (will be normalized)
sourceHikari.roughness_to_α Method
roughness_to_α(roughness::Float32) -> Float32Map [0, 1] scalar roughness to microfacet distribution alpha parameter.
Matches pbrt-v4's TrowbridgeReitzDistribution::RoughnessToAlpha which uses sqrt(roughness). This provides a more intuitive perceptual mapping where roughness values close to zero give near-perfect specular reflection.
Note: pbrt-v4 comments suggest Sqr(roughness) might be more perceptually uniform, but sqrt is retained for compatibility with existing scenes.
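As stated, the mapping is just a square root; a one-line sketch (hypothetical name, no clamping or validation):

```julia
# alpha = sqrt(roughness), per the docstring above. Illustrative only.
roughness_to_alpha_sketch(roughness::Float32) = sqrt(roughness)
```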
sourceHikari.russian_roulette_spectral Function
russian_roulette_spectral(beta::SpectralRadiance, depth::Int32, rr_sample::Float32, min_depth::Int32=3)Apply Russian roulette for path termination. Returns (should_continue::Bool, new_beta::SpectralRadiance).
sourceHikari.same_hemisphere Method
same_hemisphere(w1, w2) -> BoolCheck if two directions are in the same hemisphere (both have same sign of z).
sourceHikari.sample Method
sample(table::AliasTable, u::Float32) -> (index::Int32, pmf::Float32)Sample from the alias table using uniform random u ∈ [0,1). Returns 1-based index and its PMF.
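The O(1) sample step can be sketched from the bin layout described for AliasTable (p = PMF, q = threshold, alias = aliased index). `alias_sample` below is a hypothetical standalone function operating on raw arrays, not Hikari's method:

```julia
# Walker's alias-method sample step: one uniform u picks a bin and is reused
# for the threshold test, so sampling is O(1) regardless of table size.
function alias_sample(p::Vector{Float32}, q::Vector{Float32},
                      alias::Vector{Int}, u::Float32)
    n = length(p)
    x = u * n                        # scale u across the n bins
    i = min(Int(floor(x)) + 1, n)    # 1-based bin index
    up = x - floor(x)                # leftover uniform, reused for the threshold
    if up < q[i]
        return i, p[i]               # keep this bin
    else
        j = alias[i]
        return j, p[j]               # jump to the aliased bin
    end
end
```

For weights (0.75, 0.25), a valid table is p = [0.75, 0.25], q = [1.0, 0.5], alias = [1, 1]: landing in bin 2 with a leftover above 0.5 redirects to bin 1.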
Hikari.sample Method
sample(sampler::PowerLightSampler, u::Float32) -> (light_idx::Int32, pmf::Float32)Sample a light with probability proportional to power.
sourceHikari.sample Method
sample(sampler::UniformLightSampler, u::Float32) -> (light_idx::Int32, pmf::Float32)Sample a light uniformly. Returns 1-based index and PMF.
sourceHikari.sample_1d Method
sample_1d(rng::SobolRNG, px::Int32, py::Int32, sample_idx::Int32, dim::Int32) -> Float32Generate a 1D Sobol sample for the given pixel and dimension.
sourceHikari.sample_2d Method
sample_2d(rng::SobolRNG, px::Int32, py::Int32, sample_idx::Int32, dim::Int32) -> Tuple{Float32, Float32}Generate a 2D Sobol sample for the given pixel and dimension.
sourceHikari.sample_Le Method
Sample Le at a point using trilinear interpolation. Returns RGBSpectrum(0.0) if Le_grid is nothing.
sourceHikari.sample_T_maj_loop! Method
sample_T_maj_loop!(iter::RayMajorantIterator, ...) -> SampleTMajResultDelta tracking loop using the unified RayMajorantIterator. Handles both homogeneous media (single segment) and heterogeneous media (DDA traversal). Uses deterministic LCG RNG for medium sampling (pbrt-v4 pattern).
sourceHikari.sample_bounce Method
sample_bounce(cloud::CloudVolume, ray, si, scene, beta, depth) -> (should_bounce, new_ray, new_beta, new_depth)For volumes, we handle everything in shade() including the continuation ray, so no bounce is needed here.
sourceHikari.sample_bounce Method
sample_bounce(material::Material, ray, si, scene, beta, depth) -> (should_bounce, new_ray, new_beta, new_depth)Sample BSDF to generate a bounce ray for path continuation.
sourceHikari.sample_bsdf_spectral Function
sample_bsdf_spectral for MediumInterface - forwards to wrapped material.Hikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::ThinDielectricMaterial, textures, wo, n, uv, lambda, sample_u, rng, regularize=false) -> SpectralBSDFSampleSample thin dielectric BSDF matching pbrt-v4's ThinDielectricBxDF::Sample_f.
Thin dielectric surfaces model materials like window glass where light can either reflect or transmit straight through (no refraction bend).
Key physics (pbrt-v4 lines 225-230):
R₀ = FrDielectric(|cos_θ|, eta)
R = R₀ + T₀²R₀/(1 - R₀²) where T₀ = 1 - R₀
T = 1 - R
Transmitted direction: wi = -wo (straight through)
Reflected direction: wi = (-wo.x, -wo.y, wo.z) (mirror reflection in local coords)
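The reflectance formula above can be worked through numerically. `fr_dielectric` is a standard unpolarized Fresnel for a real IOR, included only so the sketch is self-contained; neither function name is Hikari's export.

```julia
# Standard unpolarized dielectric Fresnel (assumed helper, not Hikari's API).
function fr_dielectric(cosθi::Float32, eta::Float32)
    sin2θt = (1f0 - cosθi^2) / eta^2
    sin2θt >= 1f0 && return 1f0                       # total internal reflection
    cosθt = sqrt(1f0 - sin2θt)
    r_par  = (eta * cosθi - cosθt) / (eta * cosθi + cosθt)
    r_perp = (cosθi - eta * cosθt) / (cosθi + eta * cosθt)
    return (r_par^2 + r_perp^2) / 2f0
end

# Series-summed slab reflectance R = R₀ + T₀²R₀/(1 - R₀²), T = 1 - R.
function thin_dielectric_RT(cosθ::Float32, eta::Float32)
    R0 = fr_dielectric(abs(cosθ), eta)
    R0 >= 1f0 && return (1f0, 0f0)
    T0 = 1f0 - R0
    R = R0 + T0^2 * R0 / (1f0 - R0^2)
    return R, 1f0 - R
end
```

The series sum has the closed form R = 2R₀/(1 + R₀); at normal incidence with eta = 1.5, R₀ = 0.04 gives R ≈ 0.077 rather than 0.04, reflecting the extra internal bounces in the slab.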
Hikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::ConductorMaterial, textures, wo, n, uv, lambda, sample_u, rng, regularize=false) -> SpectralBSDFSampleSample metal BSDF with conductor Fresnel. Matches pbrt-v4's ConductorBxDF::Sample_f exactly.
The implementation works in local shading coordinates where n = (0,0,1), then transforms back.
When regularize=true, the microfacet alpha is increased to reduce fireflies from near-specular paths (matches pbrt-v4 BSDF::Regularize).
Hikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::DiffuseTransmissionMaterial, textures, wo, n, uv, lambda, sample_u, rng, regularize=false) -> SpectralBSDFSampleSample diffuse transmission BSDF matching pbrt-v4's DiffuseTransmissionBxDF::Sample_f.
This material diffusely scatters light in both reflection (same hemisphere) and transmission (opposite hemisphere). Sampling is proportional to max(R) and max(T).
sourceHikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::MatteMaterial, textures, wo, n, uv, lambda, sample_u, rng) -> SpectralBSDFSampleSample diffuse BSDF with spectral evaluation. Uses pbrt-v4 convention: work in local shading space where n = (0,0,1).
sourceHikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::CoatedConductorMaterial, textures, wo, n, uv, lambda, sample_u, rng, regularize=false) -> SpectralBSDFSampleSample CoatedConductor BSDF using pbrt-v4's LayeredBxDF approach.
This is a layered material with:
Top layer: Dielectric coating (can be rough or smooth)
Bottom layer: Conductor (metal) with complex Fresnel
Key pbrt-v4 details (materials.cpp lines 345-392):
Conductor eta/k are scaled by interface IOR: ce /= ieta, ck /= ieta
If reflectance mode: k = 2 * sqrt(r) / sqrt(1 - r), eta = 1
When regularize=true, both interface and conductor microfacet alphas are increased to reduce fireflies from near-specular paths (matches pbrt-v4 BSDF::Regularize).
Hikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::MirrorMaterial, textures, wo, n, uv, lambda, sample_u, rng) -> SpectralBSDFSampleSample perfect specular reflection with spectral evaluation.
sourceHikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::GlassMaterial, textures, wo, n, uv, lambda, sample_u, rng) -> SpectralBSDFSampleSample glass BSDF with reflection or refraction. Uses Fresnel to choose between reflection and transmission.
sourceHikari.sample_bsdf_spectral Function
sample_bsdf_spectral(table, mat::CoatedDiffuseMaterial, textures, wo, n, uv, lambda, sample_u, rng, regularize=false) -> SpectralBSDFSampleSample CoatedDiffuse BSDF using pbrt-v4's LayeredBxDF random walk algorithm.
This is a 100% port of pbrt-v4's LayeredBxDF::Sample_f. The algorithm:
Sample entrance interface (top dielectric for wo.z > 0)
If reflection at entrance: return immediately with pdfIsProportional=true
If transmission: start random walk through layers
At each depth: possibly scatter in medium, then sample interface
When ray exits through transmission: return the accumulated sample
Russian roulette for path termination
When regularize=true, the coating's microfacet alpha is increased to reduce fireflies.
Hikari.sample_cie_x Method
sample_cie_x(table::CIEXYZTable, lambda::Float32) -> Float32Sample CIE X color matching function at wavelength lambda (in nm). Uses nearest-neighbor lookup into tabulated data.
sourceHikari.sample_cie_xyz Method
sample_cie_xyz(table::CIEXYZTable, lambda::Float32) -> Vec3fSample CIE XYZ color matching functions at a single wavelength. Returns Vec3f(X, Y, Z) values at that wavelength.
sourceHikari.sample_cie_y Method
sample_cie_y(table::CIEXYZTable, lambda::Float32) -> Float32Sample CIE Y color matching function at wavelength lambda (in nm).
sourceHikari.sample_cie_z Method
sample_cie_z(table::CIEXYZTable, lambda::Float32) -> Float32Sample CIE Z color matching function at wavelength lambda (in nm).
sourceHikari.sample_conditional_1d Method
Sample 1D from conditional distribution (row of 2D distribution).
sourceHikari.sample_continuous Method
Sample continuous value from Distribution1D. Returns (sampled value in [0,1], pdf, offset index).
sourceHikari.sample_continuous Method
Sample a 2D point from the flat distribution. Returns (Point2f(u, v), pdf). The textures parameter is used to deref TextureRef fields when Distribution2D is stored in a MultiTypeSet.
Hikari.sample_d65 Method
sample_d65(lambda::Float32) -> Float32Sample the D65 illuminant spectrum at wavelength lambda (nm). Uses linear interpolation between tabulated values.
sourceHikari.sample_d65_spectral Method
sample_d65_spectral(lambda::Wavelengths) -> SpectralRadianceSample D65 illuminant at multiple wavelengths. Returns raw D65 values (around 80-120 across visible spectrum, normalized to 100 at 560nm). Matches pbrt-v4's illuminant->Sample(lambda) behavior.
sourceHikari.sample_density Method
Sample density at a point using trilinear interpolation.
Uses pbrt-v4's cell-centered interpretation:
p ∈ [0,1]³ is normalized position within bounds
Grid has nx×ny×nz voxels (1-indexed in Julia)
Voxel i is centered at (i - 0.5) / n in normalized space
Interpolation uses 8 neighboring voxels with proper clamping at boundaries
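The cell-centered index math can be sketched in 1D (the 3D case applies it per axis over 8 neighbors). Voxel i (1-based) is centered at (i - 0.5)/n, so a lookup shifts by half a voxel before flooring; `cell_centered_lerp` is an illustrative helper, not Hikari's function.

```julia
# 1D cell-centered linear interpolation with boundary clamping.
function cell_centered_lerp(grid::Vector{Float32}, x::Float32)
    n = length(grid)
    xc = clamp(x * n - 0.5f0, 0f0, Float32(n - 1))  # continuous voxel-center coord
    i0 = Int(floor(xc)) + 1                          # lower neighbor, 1-based
    i1 = min(i0 + 1, n)                              # upper neighbor, clamped
    t = xc - floor(xc)
    return (1f0 - t) * grid[i0] + t * grid[i1]
end
```

With grid = [0, 1], a query at x = 0.25 sits exactly on the first voxel center and returns 0, while x = 0.5 lands halfway between the two centers and returns 0.5.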
Hikari.sample_dielectric_interface Method
sample_dielectric_interface(wo, uc, u, alpha_x, alpha_y, eta, refl_trans_flags) -> LayeredBSDFSampleSample the dielectric coating interface (top layer in CoatedDiffuse). Handles both smooth (specular) and rough (microfacet) dielectric surfaces.
This matches pbrt-v4's DielectricBxDF::Sample_f exactly.
sourceHikari.sample_dielectric_transmission_spectral Method
sample_dielectric_transmission_spectral(eta, wo, uc) -> (wi, T, valid)Sample transmission through a dielectric interface. Returns transmitted direction, transmittance, and validity flag.
Uses pbrt-v4 convention: eta = n_t / n_i (transmitted IOR / incident IOR). The wo direction is in local shading space where z is the surface normal.
sourceHikari.sample_diffuse_interface Method
sample_diffuse_interface(wo, u, reflectance) -> LayeredBSDFSampleSample the diffuse base layer (bottom in CoatedDiffuse). This is a simple cosine-weighted hemisphere sampler.
sourceHikari.sample_diffuse_transmission_bottom Method
sample_diffuse_transmission_bottom(wo, u, uc, reflectance, transmittance, refl_trans_flags) -> LayeredBSDFSampleSample the diffuse transmission bottom layer for the LayeredBxDF walk. Handles both reflection (R/π, same hemisphere) and transmission (T/π, opposite hemisphere).
sourceHikari.sample_exponential Method
sample_exponential(u::Float32, a::Float32) -> Float32Sample from exponential distribution with rate parameter a. Returns -log(1-u)/a, matching pbrt-v4's SampleExponential.
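The inverse-CDF mapping stated above, transcribed directly (hypothetical name):

```julia
# u ∈ [0,1) ↦ -log(1-u)/a, samples Exp(a) by inverting its CDF.
sample_exponential_sketch(u::Float32, a::Float32) = -log(1f0 - u) / a
```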
sourceHikari.sample_f Method
Compute the incident ray direction for a given outgoing direction and a given mode of light scattering (perfect specular reflection or refraction).
sourceHikari.sample_filter Method
Sample the filter using the FilterSampler. Returns FilterSample with position and weight.
sourceHikari.sample_fresnel_microfacet Method
Sample FresnelMicrofacet using Fresnel-weighted probability to choose between reflection and transmission.
sourceHikari.sample_fresnel_specular Method
Compute the direction of incident light wi, given an outgoing direction wo and return the value of BxDF for the pair of directions.
sourceHikari.sample_ggx_vndf Method
sample_ggx_vndf(wo, alpha_x, alpha_y, u) -> Vec3fSample visible normal from GGX distribution.
sourceHikari.sample_hg Method
sample_hg(g, wo, u) -> (wi, pdf)Importance sample the Henyey-Greenstein phase function. Returns sampled direction and PDF.
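The cosθ part of HG importance sampling has a standard closed form (as in pbrt-v4's SampleHenyeyGreenstein); since the phase function is sampled exactly, the PDF equals the phase function itself. The sketch below uses pbrt's sign convention (denominator contains +2g·cosθ) and hypothetical names:

```julia
# Henyey-Greenstein phase function and exact inverse-CDF sampling of cosθ.
hg_phase(g::Float32, cosθ::Float32) =
    (1f0 - g^2) / (4f0 * Float32(π) * (1f0 + g^2 + 2f0 * g * cosθ)^1.5f0)

function sample_hg_cos(g::Float32, u::Float32)
    cosθ = abs(g) < 1f-3 ? 1f0 - 2f0 * u :            # near-isotropic: uniform
        -(1f0 + g^2 - ((1f0 - g^2) / (1f0 + g - 2f0 * g * u))^2) / (2f0 * g)
    return cosθ, hg_phase(g, cosθ)                    # PDF = phase (exact sampling)
end
```

The endpoints check out analytically: u = 0 maps to cosθ = -1 and u = 1 to cosθ = +1.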
sourceHikari.sample_hg_phase_spectral Method
sample_hg_phase_spectral(g, wo, u) -> (wi, phase_pdf)Sample direction from Henyey-Greenstein phase function.
sourceHikari.sample_le Method
Sample a ray leaving the environment light (for photon mapping / light tracing). Uses importance sampling based on environment map luminance.
The ray is sampled by:
Importance sampling a direction from the environment map
Placing a disk of radius scene_radius centered at scene_center, perpendicular to that direction
Sampling a point on the disk and shooting a ray inward
Hikari.sample_li Method
Compute radiance arriving at ref.p interaction point at ref.time time due to the ambient light.
Args
a::AmbientLight: Ambient light which illuminates the interaction point ref.
ref::Interaction: Interaction point for which to compute radiance.
u::Point2f: Sampling point; ignored for AmbientLight, since it emits light uniformly.
Returns
Tuple{S, Vec3f, Float32, VisibilityTester} where S <: Spectrum:
- `S`: Computed radiance.
- `Vec3f`: Incident direction to the light source `wi`.
- `Float32`: Probability density for the light sample that was taken.
For `AmbientLight` it is always `1`.
- `VisibilityTester`: Initialized visibility tester that holds the
shadow ray that must be traced to verify that
there are no occluding objects between the light and reference point.Hikari.sample_li Method
Compute radiance arriving at ref.p interaction point at ref.time time due to that light, assuming there are no occluding objects between them.
Args
p::PointLight: Light which illuminates the interaction point ref.
ref::Interaction: Interaction point for which to compute radiance.
u::Point2f: Sampling point; ignored for PointLight, since it has no area.
Returns
Tuple{S, Vec3f, Float32, VisibilityTester} where S <: Spectrum:
- `S`: Computed radiance.
- `Vec3f`: Incident direction to the light source `wi`.
- `Float32`: Probability density for the light sample that was taken.
For `PointLight` it is always `1`.
- `VisibilityTester`: Initialized visibility tester that holds the
shadow ray that must be traced to verify that
there are no occluding objects between the light and reference point.Hikari.sample_li Method
Compute radiance arriving at interaction point from the environment light. Uses importance sampling based on environment map luminance.
Args
e::EnvironmentLight: Environment light.
ref::Interaction: Interaction point for which to compute radiance.
u::Point2f: Random sample for direction selection.
Returns
Tuple of (radiance, incident direction, pdf, visibility tester)
sourceHikari.sample_light_sampler Method
sample_light_sampler(p, q, alias, u::Float32) -> (light_idx::Int32, pmf::Float32)GPU-compatible light sampling using pre-computed alias table arrays.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::AmbientLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample ambient light spectrally (uniform sphere). NOTE: Ambient light represents uniform illumination from all directions. We sample the full sphere uniformly - the BSDF evaluation will naturally give zero for directions below the surface, and cos_theta weighting handles the rest.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::DiffuseAreaLight, p, lambda, u)Sample a diffuse area light spectrally. Uniformly samples a point on the triangle and converts area PDF to solid angle PDF.
Following pbrt-v4's DiffuseAreaLight::SampleLi + Triangle::Sample (uniform area).
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::DirectionalLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample a directional light spectrally.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::EnvironmentLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample environment light spectrally with importance sampling.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::PointLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample a point light spectrally.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::SpotLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample a spotlight spectrally.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights, light::SunLight, p::Point3f, lambda::Wavelengths, u::Point2f)Sample a sun light spectrally.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights::StaticMultiTypeSet, flat_idx::Int32, p, lambda, u)Sample a light from a StaticMultiTypeSet using a flat 1-based index. Converts flat index to SetKey internally.
sourceHikari.sample_light_spectral Method
sample_light_spectral(table, lights::StaticMultiTypeSet, idx::SetKey, p, lambda, u)Sample a light from a StaticMultiTypeSet using type-stable dispatch via with_index.
sourceHikari.sample_nanovdb_density Method
sample_nanovdb_density(medium::NanoVDBMedium, media, p_world::Point3f) -> Float32Sample the density at a world-space point using trilinear interpolation. Matches pbrt-v4's SampleFromVoxels<TreeT, 1, false> sampler.
The media parameter is used to deref TextureRef fields when NanoVDBMedium is stored in a MultiTypeSet.
Hikari.sample_piecewise_1d Method
Sample 1D piecewise constant distribution (pbrt-v4 compatible). Returns (continuous_position, pdf, cell_index).
sourceHikari.sample_piecewise_2d Method
Sample from the 2D distribution using inverse CDF sampling. Returns (point, pdf, grid_index).
sourceHikari.sample_segment! Method
sample_segment!(seg, T_maj_accum, beta, r_u, r_l, rng_state, ...) -> SampleTMajResultSample interactions within a single majorant segment. Following pbrt-v4's inner loop of SampleT_maj. Uses deterministic LCG RNG for medium sampling (pbrt-v4 pattern).
sourceHikari.sample_spectral_material Function
sample_spectral_material(table, materials::StaticMultiTypeSet, idx, wo, ns, tfc, lambda, u, rng, regularize=false)Type-stable dispatch for spectral BSDF sampling. Returns SpectralBSDFSample from the appropriate material type.
When regularize=true, near-specular BSDFs will be roughened to reduce fireflies.
Arguments:
tfc::TextureFilterContext: Contains UV coordinates and screen-space derivatives for texture filtering.
Hikari.sample_specular_reflection Method
Compute the direction of incident light wi, given an outgoing direction wo and return the value of BxDF for the pair of directions. sample parameter isn't needed for the δ-distribution.
Hikari.sample_specular_transmission Method
Compute the direction of incident light wi, given an outgoing direction wo and return the value of BxDF for the pair of directions. sample parameter isn't needed for the δ-distribution.
Hikari.sample_tent Method
Sample tent distribution for one dimension. Uses inverse CDF sampling of the triangle distribution.
sourceHikari.sample_visible_wavelengths Method
sample_visible_wavelengths(u::Float32) -> Float32Sample a single wavelength using importance sampling for visible light. Inverse CDF of the hyperbolic secant squared distribution.
From pbrt-v4: lambda = 538 - 138.888889 * atanh(0.85691062 - 1.82750197 * u)
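The quoted inverse CDF, transcribed directly (hypothetical name):

```julia
# Inverse CDF of the sech² visible-wavelength distribution, per pbrt-v4.
sample_visible_sketch(u::Float32) =
    538f0 - 138.888889f0 * atanh(0.85691062f0 - 1.82750197f0 * u)
```

Values stay within the visible range: u = 0.5 yields roughly 546 nm, near the peak of the luminosity curve.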
sourceHikari.sample_wavelengths_stratified Method
sample_wavelengths_stratified(u::NTuple{4, Float32}) -> WavelengthsSample 4 wavelengths with stratified sampling (one per stratum).
sourceHikari.sample_wavelengths_uniform Method
sample_wavelengths_uniform(u::Float32) -> WavelengthsSample 4 wavelengths using hero wavelength sampling with stratified offsets. This gives better spectral coverage than independent uniform samples.
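Hero wavelength sampling with stratified offsets can be sketched as follows: one uniform u picks the hero wavelength, and the remaining three are rotated by 1/4, 2/4, 3/4 of the range with wraparound. The 360-830 nm bounds and the function name are assumptions for illustration, not Hikari's exact values.

```julia
# One u produces 4 wavelengths covering the range with stratified offsets.
function hero_wavelengths(u::Float32; λmin = 360f0, λmax = 830f0)
    ntuple(4) do i
        up = u + (i - 1) / 4f0
        up >= 1f0 && (up -= 1f0)          # wrap back into [0, 1)
        λmin + up * (λmax - λmin)
    end
end
```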
sourceHikari.sample_wavelengths_visible Method
sample_wavelengths_visible(u::Float32) -> WavelengthsSample 4 wavelengths using importance sampling with hero wavelength method. Uses pbrt-v4's visible wavelength distribution for reduced variance.
sourceHikari.sample_σ_a Method
Sample σ_a at a point using trilinear interpolation. Returns default RGBSpectrum(1.0) if σ_a_grid is nothing.
sourceHikari.sample_σ_s Method
Sample σ_s at a point using trilinear interpolation. Returns default RGBSpectrum(1.0) if σ_s_grid is nothing.
sourceHikari.sampler_hash Function
sampler_hash(px::Int32, py::Int32, sample_idx::Int32, dim::Int32, seed::UInt32=0) -> UInt64Hash function for stratified sampler, matching pbrt-v4's Hash() behavior. Produces deterministic, well-distributed hash for (pixel, sample, dimension) tuples.
sourceHikari.sampler_hash_2d Method
sampler_hash_2d(px::Int32, py::Int32, sample_idx::Int32, dim::Int32) -> UInt64Simplified hash for 2D samples where we need two correlated values.
sourceHikari.save_nanovdb Method
save_nanovdb(filepath, data::Array{Float32,3}, origin, extent)Build a NanoVDB tree from a dense 3D array and save to file.
sourceHikari.save_nanovdb Method
save_nanovdb(filepath, buffer, metadata)Save a NanoVDB buffer (from build_nanovdb_from_dense) to a .nanovdb file that can be loaded back with NanoVDBMedium(filepath; ...).
Prepends GridData (672 bytes) + TreeData (64 bytes) header, compresses with zlib.
sourceHikari.shade Method
shade(mat::Emissive, ray, si, scene, beta, depth, max_depth) -> RGBSpectrumShading for emissive material returns the emission directly. Emissive materials don't reflect light, they only emit.
sourceHikari.shade Method
shade(cloud::CloudVolume, ray, si, scene, beta) -> RGBSpectrumShade a ray that hit a volume material using ray marching with single scattering. The SurfaceInteraction gives us the entry point into the volume.
This version checks for scene intersections DURING ray marching, so objects inside or in front of the volume are properly rendered with correct transmittance.
sourceHikari.shade Method
shade(material::Material, ray, si, scene, beta, depth, max_depth) -> RGBSpectrumCompute direct lighting and specular bounces for a surface hit. This is the generic implementation that works for all material types.
sourceHikari.sin2_theta Method
sin2_theta(w) -> Float32Get sin²(θ) of a direction in local coordinates.
sourceHikari.sin_phi Method
sin_phi(w) -> Float32Get sin(φ) of a direction in local coordinates (matches pbrt-v4's SinPhi).
sourceHikari.sin_theta Method
sin_theta(w) -> Float32Get sin(θ) of a direction in local coordinates.
sourceHikari.sobol_sample Method
sobol_sample(a::Int64, dimension::Int32, scramble_seed::UInt32, sobol_matrices) -> Float32Generate a single Sobol sample for the given index and dimension. Reference: pbrt-v4/src/pbrt/util/lowdiscrepancy.h SobolSample (lines 167-180)
Arguments:
a: Sample index (0-based)
dimension: Sobol dimension (0-based, max NSOBOL_DIMENSIONS-1)
scramble_seed: Seed for FastOwen scrambling
sobol_matrices: The Sobol generator matrices array (must be GPU-accessible)
Hikari.sobol_sample_unscrambled Method
sobol_sample_unscrambled(a::Int64, dimension::Int32) -> Float32Generate an unscrambled Sobol sample (for debugging/comparison).
sourceHikari.spawn_spectral_ray Method
spawn_spectral_ray(item::PWMaterialEvalWorkItem, wi::Vec3f, new_beta::SpectralRadiance,
is_specular::Bool, pdf::Float32, eta_scale::Float32) -> PWRayWorkItemCreate a new ray work item for indirect lighting from a material evaluation.
sourceHikari.spectral_to_linear_rgb Method
spectral_to_linear_rgb(table::CIEXYZTable, L::SpectralRadiance, lambda::Wavelengths) -> Vec3fConvert spectral radiance to linear RGB (no gamma):
Convert to XYZ using color matching functions
Transform XYZ to linear sRGB using standard matrix (D65 white point)
This matches pbrt-v4's RGBFilm::ToSensorRGB → outputRGBFromSensorRGB pipeline.
The light sources use uplift_rgb_illuminant which multiplies by D65 illuminant spectrum (RGBIlluminantSpectrum), so no chromatic adaptation is needed here. D65-shaped spectra integrated under CIE XYZ naturally produce the correct white balance when transformed using the standard sRGB matrix.
Hikari.spectral_to_linear_rgb_passthrough Method
spectral_to_linear_rgb_passthrough(L::SpectralRadiance) -> Vec3fPseudo-spectral passthrough: read RGB directly from spectral channels. For use when uplift_rgb(...; method=:passthrough) was used to store RGB directly. This matches PbrtWavefront behavior and avoids spectral-to-XYZ conversion noise.
Hikari.spectral_to_srgb Method
spectral_to_srgb(table::CIEXYZTable, L::SpectralRadiance, lambda::Wavelengths) -> Vec3fConvert spectral radiance to sRGB:
Convert to XYZ using color matching functions
Transform XYZ to linear sRGB using standard matrix (D65 white point)
Apply sRGB gamma curve
This matches pbrt-v4's RGBFilm pipeline. Light sources use uplift_rgb_illuminant which multiplies by D65 illuminant spectrum, so no chromatic adaptation is needed.
Hikari.spectral_to_xyz Method
spectral_to_xyz(table::CIEXYZTable, L::SpectralRadiance, lambda::Wavelengths) -> Vec3fConvert spectral radiance to CIE XYZ using the pbrt-v4 algorithm.
The formula is: XYZ = (1/CIE_Y_integral) * Average(SafeDiv(CMF * L, pdf))
where CMF is the color matching function and pdf is the wavelength sampling PDF.
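The estimator above can be sketched for the Y component alone (X and Z are identical with their own CMFs). The SafeDiv guard, per-sample weighting, and normalization by the CIE Y integral (≈106.857 in pbrt) are shown with hypothetical names; the CMF values are passed in as plain tuples rather than looked up from a table.

```julia
# Monte Carlo XYZ estimator, Y component: average CMF·L/pdf over the 4
# wavelength samples, guarding against zero PDFs, then normalize.
const CIE_Y_INTEGRAL = 106.856895f0

safe_div(a::Float32, b::Float32) = b == 0f0 ? 0f0 : a / b

function spectral_to_y(L::NTuple{4,Float32}, pdf::NTuple{4,Float32},
                       cmf_y::NTuple{4,Float32})
    s = 0f0
    for i in 1:4
        s += safe_div(cmf_y[i] * L[i], pdf[i])
    end
    return s / (4f0 * CIE_Y_INTEGRAL)
end
```

A zero PDF (e.g. a terminated secondary wavelength) simply contributes nothing rather than producing a NaN.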
sourceHikari.spectrum_to_photometric Method
spectrum_to_photometric(s::Spectrum) -> Float32Compute photometric luminance of a spectrum, matching pbrt-v4's SpectrumToPhotometric.
For RGBIlluminantSpectrum, this extracts only the D65 illuminant component (ignoring the RGB multiplier), matching pbrt-v4's behavior where SpectrumToPhotometric extracts s.Illuminant() first.
This is used for light normalization: scale = 1 / spectrum_to_photometric(spectrum)
Hikari.specular_bounce Method
specular_bounce(type, bsdf, ray, si, scene, beta, depth, max_depth) -> RGBSpectrumCompute specular reflection or transmission contribution by tracing a bounce ray.
sourceHikari.srgb_gamma_to_linear Method
srgb_gamma_to_linear(c::Float32) -> Float32Remove sRGB gamma curve from an sRGB value to get linear RGB.
sourceHikari.start_pixel Method
Other samplers are required to call this explicitly in their respective implementations.
sourceHikari.stratified_sample_1d Method
stratified_sample_1d(px::Int32, py::Int32, sample_idx::Int32, dim::Int32) -> Float32Generate a deterministic 1D sample in [0, 1) for the given pixel, sample index, and dimension. This matches pbrt-v4's IndependentSampler behavior.
sourceHikari.stratified_sample_2d Method
stratified_sample_2d(px::Int32, py::Int32, sample_idx::Int32, dim::Int32) -> Tuple{Float32, Float32}Generate a deterministic 2D sample in [0, 1)² for the given pixel, sample index, and dimension. Uses two consecutive dimensions for the two components.
sourceHikari.sunsky_to_envlight Method
sunsky_to_envlight(; direction, intensity=1f0, turbidity=2.5f0, ...) -> (EnvironmentLight, SunLight)Pre-bake the Hosek-Wilkie spectral sky model into an equal-area EnvironmentMap and create a separate SunLight for the sun disk. Matches pbrt-v4's makesky approach: evaluate the spectral model at 13 wavelengths, convert to XYZ via CIE color matching functions (dividing by CIE_Y_integral), then to sRGB for storage.
Arguments
direction::Vec3f: Direction TO the sun (normalized internally)
intensity::Float32 = 1f0: Overall brightness multiplier (scale parameter)
turbidity::Float32 = 2.5f0: Atmospheric turbidity (1 = clear, 10 = hazy)
ground_albedo::RGBSpectrum = RGBSpectrum(0.3f0): Ground color below horizon
ground_enabled::Bool = true: Whether to show the ground below the horizon
resolution::Int = 512: Resolution of the equal-area square map
Returns
A tuple of (EnvironmentLight, SunLight) to be added to the scene.
Hikari.surface_direct_lighting_inner! Method
Inner function for surface direct lighting - can use return statements.
Uses BVH light sampler for spatially-aware importance sampling. Nearby lights get higher probability than distant ones (pbrt-v4's BVHLightSampler).
Now uses pre-computed Sobol samples from pixel_samples (pbrt-v4 RaySamples style).
sourceHikari.tan2_theta Method
tan2_theta(w) -> Float32Get tan²(θ) of a direction in local coordinates.
sourceHikari.terminate_secondary_wavelengths Method
terminate_secondary_wavelengths(lambda::Wavelengths) -> WavelengthsSet PDF to zero for secondary wavelengths when a wavelength-dependent event occurs (e.g., refraction with dispersion). This indicates that only the hero wavelength (first) should contribute to the pixel.
sourceHikari.to_grid Method
Calculate indices of a point p in grid constrained by bounds.
Computed indices are in [0, resolution), which is the correct input for hash.
Hikari.trace_shadow_transmittance Method
trace_shadow_transmittance(accel, media_interfaces, media, rgb2spec_table, origin, dir, t_max, lambda, medium_idx)Trace a shadow ray computing transmittance through media and transmissive boundaries. Returns (T_ray, r_u, r_l, visible) where:
T_ray: spectral transmittance
r_u, r_l: MIS weight accumulators for combining with path weights
visible: false if ray hits an opaque surface
Following pbrt-v4's TraceTransmittance: transmissive surfaces (MediumInterface) let the ray through, while opaque surfaces block it. The final contribution is computed as: Ld * T_ray / average(path_r_u * r_u + path_r_l * r_l)
sourceHikari.trowbridge_reitz_d Method
trowbridge_reitz_d(wm, alpha_x, alpha_y) -> Float32Evaluate the TrowbridgeReitz D (normal distribution function) at microfacet normal wm. Matches pbrt-v4's TrowbridgeReitzDistribution::D(wm).
sourceHikari.trowbridge_reitz_d_pdf Method
trowbridge_reitz_d_pdf(w, wm, alpha_x, alpha_y) -> Float32Evaluate the visible normal distribution D(w, wm) for PDF computation. Matches pbrt-v4's D(w, wm) = G1(w) / AbsCosTheta(w) * D(wm) * AbsDot(w, wm).
sourceHikari.trowbridge_reitz_effectively_smooth Method
trowbridge_reitz_effectively_smooth(alpha_x, alpha_y) -> BoolCheck if the distribution is effectively smooth (matches pbrt-v4's EffectivelySmooth).
sourceHikari.trowbridge_reitz_g Method
trowbridge_reitz_g(wo, wi, alpha_x, alpha_y) -> Float32Compute G(wo, wi) Smith masking-shadowing function (matches pbrt-v4's G).
sourceHikari.trowbridge_reitz_g1 Method
trowbridge_reitz_g1(w, alpha_x, alpha_y) -> Float32Compute G1(w) Smith masking function (matches pbrt-v4's G1).
sourceHikari.trowbridge_reitz_lambda Method
trowbridge_reitz_lambda(w, alpha_x, alpha_y) -> Float32Compute Lambda(w) for Smith masking-shadowing (matches pbrt-v4's Lambda).
sourceHikari.trowbridge_reitz_pdf Method
trowbridge_reitz_pdf(w, wm, alpha_x, alpha_y) -> Float32Compute PDF for visible normal sampling (matches pbrt-v4's PDF).
sourceHikari.trowbridge_reitz_sample_wm Method
trowbridge_reitz_sample_wm(w, u, alpha_x, alpha_y) -> Vec3fSample visible normal from TrowbridgeReitz distribution (matches pbrt-v4's Sample_wm).
sourceHikari.uint32_to_bytes Method
uint32_to_bytes(v::UInt32) -> NTuple{4,UInt8}GPU-compatible conversion of UInt32 to bytes.
sourceHikari.uint64_to_bytes Method
uint64_to_bytes(v::UInt64) -> NTuple{8,UInt8}GPU-compatible conversion of UInt64 to bytes.
sourceHikari.uplift_rgb Method
uplift_rgb(rgb::RGBSpectrum, lambda::Wavelengths; method=:sigmoid) -> SpectralRadianceConvert Hikari RGBSpectrum to spectral radiance at given wavelengths. Uses global table - not GPU-compatible. Use the version with explicit table for GPU kernels.
Methods:
:sigmoid - Smooth sigmoid polynomial (pbrt-v4 style, lowest variance, default)
:simple - Fast piecewise linear
:smits - Smits' method
:passthrough - Store RGB directly as first 3 spectral channels (pseudo-spectral, fastest)
Hikari.uplift_rgb Method
uplift_rgb(table::RGBToSpectrumTable, rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadianceGPU-compatible version that takes an explicit table parameter. Uses sigmoid polynomial method (pbrt-v4 style, lowest variance).
sourceHikari.uplift_rgb_illuminant Method
uplift_rgb_illuminant(table::RGBToSpectrumTable, rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadiance
uplift_rgb_illuminant(table::RGBToSpectrumTable, s::RGBIlluminantSpectrum, lambda::Wavelengths) -> SpectralRadianceConvert RGB to spectral radiance for illuminants (light sources, environment maps). Following pbrt-v4's RGBIlluminantSpectrum which multiplies by the D65 illuminant spectrum.
Use this for:
Environment maps (ImageInfiniteLight)
Any RGB-specified light source
Do NOT use for:
Material reflectance/albedo (use uplift_rgb instead)
Emission from non-illuminant sources
Hikari.uplift_rgb_unbounded Method
uplift_rgb_unbounded(rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadianceConvert Hikari RGBSpectrum to spectral radiance for emission/illumination. Uses sigmoid polynomial method with scaling for unbounded values. Uses global table - not GPU-compatible. Use the version with explicit table for GPU kernels.
sourceHikari.uplift_rgb_unbounded Method
uplift_rgb_unbounded(table::RGBToSpectrumTable, rgb::RGBSpectrum, lambda::Wavelengths) -> SpectralRadianceGPU-compatible version that takes an explicit table parameter.
sourceHikari.uplift_scalar Method
uplift_scalar(value::Float32, lambda::Wavelengths) -> SpectralRadianceConvert a scalar value to uniform spectral radiance.
sourceHikari.uv_to_direction_equal_area Method
uv_to_direction_equal_area(uv::Point2f, rotation::Mat3f) -> Vec3fConvert UV to direction using equal-area (octahedral) mapping. Inverse of direction_to_uv_equal_area.
sourceHikari.uv_to_direction_equirect Method
Convert equirectangular UV coordinates to a direction vector. Inverse of direction_to_uv_equirect.
The rotation matrix transforms from render space to light/map space. We apply the rotation (not inverse) to transform from light space back to render space.
sourceHikari.visible_wavelengths_pdf Method
visible_wavelengths_pdf(lambda::Float32) -> Float32PDF for importance-sampled visible wavelengths, centered at 538nm. This distribution reduces variance by sampling more where human vision is sensitive.
From pbrt-v4: PDF = 0.0039398042 / cosh²(0.0072 * (lambda - 538))
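The formula above is simple enough to sketch directly; the guard clamping the PDF to 0 outside pbrt-v4's sampled range [360, 830] nm is assumed here.

```julia
# Standalone sketch of the visible-wavelength PDF quoted above.
function visible_pdf_sketch(lambda::Float32)
    (lambda < 360f0 || lambda > 830f0) && return 0f0
    c = cosh(0.0072f0 * (lambda - 538f0))
    return 0.0039398042f0 / (c * c)
end
```

The peak sits at 538 nm, where cosh evaluates to 1.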
sourceHikari.vp_accumulate_to_rgb_kernel! Method
vp_accumulate_to_rgb_kernel!(...)Accumulate spectral radiance to RGB with filter weight support. Following pbrt-v4's film accumulation pattern:
Convert spectral L to XYZ using CIE matching functions
Convert XYZ to linear sRGB
Apply maxComponentValue clamping for firefly suppression
Accumulate weighted RGB: rgbSum += weight * rgb
Accumulate weight: weightSum += weight
Note: Sensor simulation (imaging_ratio, white balance) is applied in postprocessing, not here. This matches pbrt-v4's architecture where the film stores raw linear HDR values and the sensor conversion happens at output time.
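The accumulation pattern above can be sketched on the CPU as follows; the real kernel operates on GPU film buffers, and the names and the tuple-based RGB representation here are hypothetical.

```julia
# Illustrative sketch of one film-accumulation step: clamp the maximum RGB
# component (firefly suppression), then add the weighted sample and weight.
function accumulate_rgb_sketch(rgb_sum, weight_sum, rgb, weight, max_component)
    m = maximum(rgb)
    clamped = m > max_component ? rgb .* (max_component / m) : rgb
    return rgb_sum .+ weight .* clamped, weight_sum + weight
end
```

Dividing the accumulated sum by the accumulated weight at finalize time then yields the filtered pixel value.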
sourceHikari.vp_compute_geometric_normal Method
vp_compute_geometric_normal(primitive) -> Vec3fCompute geometric normal from a triangle primitive.
sourceHikari.vp_compute_partial_derivatives Method
vp_compute_partial_derivatives(primitive) -> (dpdu, dpdv)Compute position partial derivatives (∂p/∂u, ∂p/∂v) from triangle vertices and UVs. These are used for texture filtering and tangent space construction. Following pbrt-v4's Triangle::InteractionFromIntersection.
sourceHikari.vp_compute_shading_normal Method
vp_compute_shading_normal(primitive, barycentric, geometric_normal) -> Vec3fCompute interpolated shading normal from vertex normals.
sourceHikari.vp_compute_shading_tangents Method
vp_compute_shading_tangents(primitive, barycentric, ns, dpdu, dpdv) -> (dpdus, dpdvs)Compute shading tangent vectors from vertex tangents or geometric derivatives. Following pbrt-v4's approach: use vertex tangents if available, otherwise orthonormalize geometric dpdu/dpdv to the shading normal.
sourceHikari.vp_compute_surface_geometry Method
vp_compute_surface_geometry(primitive, barycentric, ray) -> NamedTupleCompute all surface geometry needed for material evaluation. Returns (pi, n, dpdu, dpdv, ns, dpdus, dpdvs, uv).
sourceHikari.vp_compute_uv_barycentric Method
vp_compute_uv_barycentric(primitive, barycentric) -> Point2fCompute UV coordinates using barycentric coordinates from ray intersection.
sourceHikari.vp_finalize_film_kernel! Method
vp_finalize_film_kernel!(...)Finalize film by dividing weighted RGB sum by weight sum. Following pbrt-v4's RGBFilm::GetPixelRGB:
- rgb = rgbSum / weightSum (if weightSum != 0)
Hikari.vp_generate_camera_rays_kernel! Method
vp_generate_camera_rays_kernel!(...)Generate camera rays with per-pixel wavelength sampling and filter weight computation. Following pbrt-v4's GetCameraSample: samples the filter, computes offset and weight.
sourceHikari.vp_generate_ray_samples! Method
vp_generate_ray_samples!(backend, state, sample_idx, depth, sobol_rng)Generate pre-computed Sobol samples for all active rays at the current depth.
sourceHikari.vp_generate_ray_samples_kernel! Method
vp_generate_ray_samples_kernel!(...)Generate pre-computed Sobol samples for the current bounce. Following pbrt-v4's WavefrontPathIntegrator::GenerateRaySamples:
For each active ray in the queue, generate 7 samples for this bounce
Samples are stored in pixel_samples indexed by pixel_index
Dimension allocation: 6 (camera) + 7 * depth
This replaces rand() calls with correlated low-discrepancy samples.
sourceHikari.vp_trace_shadow_rays_kernel! Method
vp_trace_shadow_rays_kernel!(...)Trace shadow rays and accumulate unoccluded contributions. For rays through media, computes transmittance along the ray. Handles transmissive boundaries (MediumInterface) by tracing through them.
sourceHikari.weight_color Method
weight_color(lum_p, lum_q, sigma, variance) -> Float32Color/luminance edge-stopping weight. If variance > 0, scales by sqrt(variance) for variance-guided filtering.
sourceHikari.weight_depth Method
weight_depth(d_p, d_q, sigma, step_size) -> Float32Depth edge-stopping weight. Uses step size to adapt to increasing filter radius.
sourceHikari.weight_normal Method
weight_normal(n_p, n_q, sigma) -> Float32Normal edge-stopping weight using dot product. High sigma means more tolerance for normal differences.
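One common edge-stopping form consistent with the note above (larger sigma tolerates larger normal differences) can be sketched as follows; the exact falloff Hikari uses is an assumption here, not taken from the source.

```julia
# Hedged sketch of a normal edge-stopping weight: identical normals give
# weight 1, and the weight decays as the normals diverge, faster for small
# sigma.
weight_normal_sketch(n_p, n_q, sigma) =
    exp(-max(0f0, 1f0 - sum(n_p .* n_q)) / sigma)
```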
sourceHikari.world_to_index_f Method
world_to_index_f(medium::NanoVDBMedium, p::Point3f) -> Point3fTransform a point from world space to index space (floating point). Matches pbrt-v4's Grid::worldToIndexF exactly.
sourceHikari.world_to_local Method
world_to_local(v, n, tangent, bitangent) -> Vec3fTransform direction from world space to local (shading) space. In local space, the normal is (0, 0, 1).
sourceHikari.world_to_local Method
Given the orthonormal vectors s, t, n in world space, the matrix M that transforms vectors from world space to local reflection space has those vectors as its rows:

M = [ sx sy sz
      tx ty tz
      nx ny nz ]

Because its rows are orthonormal, M is orthogonal and its inverse is its transpose.
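Applying that row matrix is just projecting the vector onto each frame axis, which can be sketched without any matrix machinery (the tuple representation and name are illustrative):

```julia
# Sketch of the world-to-local frame transform: each local coordinate is the
# dot product of v with one frame vector.
world_to_local_sketch(v, s, t, n) = (sum(v .* s), sum(v .* t), sum(v .* n))
```

With the frame aligned to the world axes, a vector maps to itself, and the normal always maps to (0, 0, 1).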
sourceHikari.worley3d Method
worley3d(x, y, z; seed=0) -> Float643D Worley (cellular) noise. Returns distance to nearest feature point in [0, ~1.5]. The characteristic "cell" structure creates puffy, billowy patterns ideal for clouds.
sourceHikari.worley_fbm3d Method
worley_fbm3d(x, y, z; octaves=3, persistence=0.5, lacunarity=2.0) -> Float64Multi-octave Worley noise for more detailed cellular patterns.
sourceHikari.wrap_equal_area_square Method
wrap_equal_area_square(uv::Point2f) -> Point2fWrap UV coordinates for equal-area sphere mapping (octahedral wrapping). Handles coordinates outside [0,1]² by mirroring appropriately.
sourceHikari.xy_to_XYZ Method
xy_to_XYZ(x, y) -> (X, Y, Z)Convert CIE xy chromaticity to XYZ with Y=1.
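The conversion is the standard chromaticity-to-tristimulus mapping X = x/y, Y = 1, Z = (1 - x - y)/y; a minimal sketch with a y == 0 guard (pbrt-v4 returns zero XYZ there):

```julia
# Sketch of xy chromaticity -> XYZ with Y normalized to 1.
function xy_to_XYZ_sketch(x::Float32, y::Float32)
    y == 0f0 && return (0f0, 0f0, 0f0)
    return (x / y, 1f0, (1f0 - x - y) / y)
end
```

The equal-energy white point (x, y) = (1/3, 1/3) maps to (1, 1, 1).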
sourceHikari.xyz_e_to_linear_srgb Method
xyz_e_to_linear_srgb(xyz::Vec3f) -> Vec3fConvert CIE XYZ (in Equal-Energy illuminant space) to linear sRGB (D65). This applies Bradford chromatic adaptation from E to D65 before the XYZ-to-sRGB transformation. Use this for spectral rendering where wavelength sampling assumes an equal-energy white spectrum.
Matrix values are inlined for performance and to avoid const reload issues.
sourceHikari.xyz_to_linear_srgb Method
xyz_to_linear_srgb(xyz::Vec3f) -> Vec3fConvert CIE XYZ to linear sRGB color space. Uses the standard XYZ to sRGB matrix (D65 white point). Note: This assumes XYZ is already in D65 illuminant space.
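For reference, a sketch using the standard IEC 61966-2-1 XYZ-to-linear-sRGB (D65) matrix; the package inlines its own coefficients, which may differ in precision, and the tuple signature here is illustrative.

```julia
# Sketch of XYZ (D65) -> linear sRGB with the standard 4-digit matrix.
function xyz_to_srgb_sketch(x::Float32, y::Float32, z::Float32)
    r =  3.2406f0 * x - 1.5372f0 * y - 0.4986f0 * z
    g = -0.9689f0 * x + 1.8758f0 * y + 0.0415f0 * z
    b =  0.0557f0 * x - 0.2040f0 * y + 1.0570f0 * z
    return (r, g, b)
end
```

The D65 white point XYZ = (0.9505, 1.0, 1.089) maps to approximately (1, 1, 1).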
sourceHikari.zsobol_get_sample_index Method
zsobol_get_sample_index(morton_index, dimension, log2_spp, n_base4_digits) -> UInt64Compute the permuted sample index for ZSobol sampling. Reference: pbrt-v4/src/pbrt/samplers.h ZSobolSampler::GetSampleIndex (lines 301-356)
This applies random base-4 digit permutations to the Morton-encoded index, ensuring good sample distribution across pixels while maintaining low discrepancy.
Uses compile-time unrolled loop with branchless operations for SPIR-V compatibility.
sourceHikari.zsobol_hash Method
zsobol_hash(dimension::Int32, seed::UInt32) -> UInt64Hash function for ZSobol scrambling, matching pbrt-v4's Hash(dimension, seed). Uses MurmurHash64A on the byte representation of (dimension, seed).
sourceHikari.zsobol_sample_1d Method
zsobol_sample_1d(px, py, sample_idx, dim, log2_spp, n_base4_digits, seed, sobol_matrices) -> Float32Generate a 1D Sobol sample for the given pixel and sample index.
sourceHikari.zsobol_sample_2d Method
zsobol_sample_2d(px, py, sample_idx, dim, log2_spp, n_base4_digits, seed, sobol_matrices) -> (Float32, Float32)Generate a 2D Sobol sample for the given pixel and sample index. Uses two consecutive Sobol dimensions with independent scrambling seeds.
sourceHikari.ρ_lambertian_reflection Method
The hemispherical-hemispherical reflectance of a Lambertian reflection is constant.
sourceRaycore.maybe_convert_field Method
Raycore.maybe_convert_field(dhv::MultiTypeSet, tex::Texture)Convert Hikari Texture to Raycore.TextureRef or raw value for MultiTypeSet storage.
Const textures (scalars): return constval directly (no indirection needed)
Non-const textures (arrays): convert to TextureRef
Raycore.sync! Method
sync!(scene::Scene)Build/rebuild the acceleration structure and update scene bounds. Call this after adding geometry with push!.
Raycore.to_gpu Method
to_gpu(backend, table::CIEXYZTable) -> CIEXYZTableConvert CIEXYZTable to use GPU-compatible arrays. Uses KernelAbstractions backend for allocation.
sourceRaycore.to_gpu Method
to_gpu(ArrayType, mat::Emissive)Convert Emissive to GPU-compatible form.
sourceRaycore.to_gpu Method
to_gpu(backend, table::RGBToSpectrumTable) -> RGBToSpectrumTableConvert RGBToSpectrumTable to use GPU-compatible arrays. Uses KernelAbstractions backend for allocation.
source