how it works part 2
to solve this, we find the distance from the pixel's current texture coordinate to the center of its texel. we take the rate of change of the texture coordinate across screen pixels and put it in a 2x2 matrix. we invert that matrix and use it to transform world positions, which effectively "quantizes" the position data we write to the geometry buffer with respect to the texel size. that way we "trick" the lighting calculations into treating every pixel that falls within a texel as having the same position value.
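a rough numpy sketch of those steps, under assumptions of mine: a flat surface so everything varies linearly across pixels, a 16x16 texture, and derivatives passed in explicitly (in a real fragment shader they'd come from ddx/ddy); the function and variable names are made up for illustration:

```python
import numpy as np

TEXEL = 1.0 / 16.0  # texel size in uv units (16x16 texture, assumed)

def snap_world_pos(uv, world, duv_dx, duv_dy, dworld_dx, dworld_dy):
    """quantize `world` to its value at the center of uv's texel."""
    # uv of the center of the texel this pixel falls in
    center = (np.floor(uv / TEXEL) + 0.5) * TEXEL
    delta_uv = center - uv
    # 2x2 matrix: columns are the change in uv per screen pixel in x and y
    J = np.column_stack([duv_dx, duv_dy])
    # invert it to map the uv offset back into a screen-pixel offset
    delta_px = np.linalg.solve(J, delta_uv)
    # shift the world position by that same screen-pixel offset
    return world + dworld_dx * delta_px[0] + dworld_dy * delta_px[1]
```

every pixel inside one texel lands on the same snapped world position, so the lighting pass sees one position per texel.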
@cinebox yeah with deferred rendering you typically have at least a buffer for positions, a buffer for normals, and a buffer for albedo color
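as a sketch, that layout looks something like this (resolution, names, and the dict-of-arrays shape are my assumptions, not any engine's actual allocation): one screen-sized buffer per attribute, written in the geometry pass and read back in the lighting pass.

```python
import numpy as np

W, H = 320, 180  # render resolution, assumed

gbuffer = {
    "position": np.zeros((H, W, 3), dtype=np.float32),  # world-space position
    "normal":   np.zeros((H, W, 3), dtype=np.float32),  # surface normal
    "albedo":   np.zeros((H, W, 3), dtype=np.float32),  # base color
}
```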
@dankwraith Unity does it the way I described, world position is calculated from depth and the camera transform
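a minimal sketch of that depth reconstruction, assuming a standard perspective camera with depth in [0, 1] and NDC in [-1, 1] (`inv_view_proj` and these conventions are my assumptions, not Unity's exact code):

```python
import numpy as np

def world_from_depth(u, v, depth, inv_view_proj):
    """recover world position from screen uv + depth buffer value.

    u, v, and depth are in [0, 1]; inv_view_proj is the inverse of the
    camera's combined projection @ view matrix (conventions assumed).
    """
    # screen [0, 1] -> normalized device coordinates [-1, 1]
    ndc = np.array([u * 2.0 - 1.0, v * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0])
    world = inv_view_proj @ ndc   # back through the camera transform
    return world[:3] / world[3]   # undo the perspective divide
```

round-tripping a known point through a projection matrix and back recovers it, which is why you can skip storing positions in the G-buffer at all.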
@cinebox huh, makes sense, that implies you could use this technique with forward rendering too
@dankwraith ...texture coordinate jacobian
very spicy
re: how it works part 2
@dankwraith That looks really good. Do you happen to have another video showing what it looks like with per-pixel lighting? When I look at your clip, I can imagine what it'll look like, but it would be nice to actually see it on the screen.
Also, are you making a game with this?
@loke just posted it here: https://monads.online/@dankwraith/105738694681100823
our kickstarter went live today: https://www.kickstarter.com/projects/prophetgoddess/anathema
@dankwraith Thanks. I noted immediately that the trailer didn't have this feature. 🙂
@dankwraith woah, that’s really clever! I’m guessing this means you need an actual buffer for position instead of just faking it from the depth map?