tynet-lichat/shirakumo - IRC Chatlog
13:20:28
Colleen
Something like an entity class which copies its data to the uniforms of a selected shader pass?
13:23:15
shinmera
the issue is: how do we efficiently determine the lights that have the highest contribution to the current frame out of all lights in the scene.
13:25:17
shinmera
there's a limited number of lights that can be loaded into memory at once, anyway, but you probably want to reduce the number even further than that because performance for forward-renderers scales pretty badly per light.
13:26:08
shinmera
but regardless of forward/deferred/whatever render technique, we need to automatically enable and disable lights relative to a given limit depending on how relevant they are to the current frame target
13:28:56
shinmera
an obvious measure of relevancy is whether the light location itself is currently visible, but that's tough because we need not only frustum culling for that, but also depth testing to eliminate hidden ones. But then, lights that are outside the camera frustum can still contribute to the elements that are within the frustum, so we need to widen that
13:29:22
shinmera
Worse, we'll want to prefer, say, a very strong light that might be far away over a very weak light close by
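[A minimal sketch of the kind of relevancy heuristic being described here: score each light by intensity attenuated with inverse-square distance, so a strong distant light can outrank a weak nearby one. The function name and the `1 +` softening term are illustrative assumptions, not anything from Trial.]

```python
def light_score(intensity, light_pos, target_pos):
    """Toy relevancy score: intensity attenuated by inverse-square
    distance to the frame target. Purely illustrative."""
    dx = light_pos[0] - target_pos[0]
    dy = light_pos[1] - target_pos[1]
    dz = light_pos[2] - target_pos[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    # The +1 avoids a blow-up when the light sits on the target.
    return intensity / (1.0 + dist_sq)

# A strong light 10 units away outranks a weak light 1 unit away:
strong_far = light_score(500.0, (10.0, 0.0, 0.0), (0.0, 0.0, 0.0))
weak_near = light_score(2.0, (1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```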
13:31:03
Colleen
Also, sorry for the off topic question, but the entity class you alluded to earlier is the gl struct, right? How does the asset for the gl struct get populated with instances?
13:31:47
shinmera
Sorry, I don't know what you mean. A UBO has a fixed size, so it's already populated when it's allocated.
13:32:52
Colleen
I'm not super familiar with opengl stuff. In that case, what process goes on if you wanted to, say, add or remove lights at runtime?
13:32:53
shinmera
Or, that's not quite right, we *could* resize UBOs, we just don't have support for that at the moment.
13:34:19
Colleen
Okay I see. So would the workflow for editing a light source or similar be to disable it, change some bits, then re-enable it?
13:34:58
shinmera
it needs to be enabled every time it should be used, because lights are evicted automatically based on an LRU cache.
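[The enable-every-frame / LRU-eviction behaviour described above can be sketched like this. This is a generic LRU cache, assumed to approximate the idea, not Trial's actual light allocator; all names are made up.]

```python
from collections import OrderedDict

class LightCache:
    """Fixed-capacity set of active lights with LRU eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()  # light id -> uniform data

    def enable(self, light_id, data):
        # Re-enabling an already-active light marks it recently used.
        if light_id in self.slots:
            self.slots.move_to_end(light_id)
        self.slots[light_id] = data
        if len(self.slots) > self.capacity:
            # Evict the light that was enabled least recently.
            self.slots.popitem(last=False)

cache = LightCache(capacity=2)
cache.enable("sun", None)
cache.enable("lamp", None)
cache.enable("torch", None)  # "sun" was least recently enabled: evicted
```

So a light that is not re-enabled each frame eventually falls out of the active set on its own.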
13:38:03
Colleen
Ah, right, lol. I hadn't looked much at the renderer before. Is the treatment of gl structs new?
13:49:27
Colleen
Regarding the lights issue, would it be reasonable to have a shader pass similar to the selection buffer pass with bitmaps on each texture fragment, then process it somehow?
13:51:48
Colleen
If the light's shader could write to this texture with its bit, then you could iterate over the texture and count occurrences
13:54:47
shinmera
you could potentially compute some heuristic score per light then sort the score array on the gpu, but gpu sorting would require compute shaders.
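[A CPU-side stand-in for the score-then-sort step mentioned here (the GPU version would need a compute shader): rank per-light heuristic scores and keep only the top N. The scores and the light limit are made-up example values.]

```python
def select_active_lights(scores, limit):
    """Sort lights by descending heuristic score, keep the best `limit`."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [light_id for light_id, _ in ranked[:limit]]

active = select_active_lights({"sun": 4.95, "lamp": 1.0, "torch": 0.2},
                              limit=2)
# active == ["sun", "lamp"]
```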
13:55:28
Colleen
Assuming that the portions of the screen affected by a light source could map to the texture, and that the texture did not need to be as large as the screen, you might be able to exfiltrate the texture to the cpu with less memory overhead
13:55:56
shinmera
how would you determine what portions of the screen are affected by the light source
13:57:39
Colleen
could the light's fragment shader write to a texture fragment whose index is a function of the current fragment's location?
13:59:49
Colleen
If I have something more concrete I'll let you know. I'm spitballing poorly-formed ideas
14:00:25
shinmera
generally the problem is that in order to determine whether a light affects a point on a surface you need to iterate over every light for every point on the surface.
14:01:12
Colleen
If you wouldn't mind another learning question, how does the selection buffer shader pass work?
14:01:12
shinmera
raytracers have a lot of different schemes to try and remedy this, path tracing, light tracing, combined tracing, photon maps, etc.
14:02:31
Colleen
Is the output of a fragment shader necessarily a color and nothing else, or could it be any data which is specified by a texture?
14:03:33
Colleen
Alright, so could a light's fragment shader write the id of the light to the texture?
14:04:02
shinmera
I don't know what you mean by "a light's fragment shader", but sure, you could write an integer.
14:06:13
Colleen
If the texture is small enough, reading that back out to the cpu then just testing for the presence of each integer could be something
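[The readback idea above, sketched: treat the downloaded texture as a flat list of light ids (0 assumed to mean "no light") and count which ids appear. The buffer contents here are invented example data.]

```python
from collections import Counter

def visible_lights(id_buffer):
    """Count occurrences of each nonzero light id in a readback buffer."""
    return Counter(texel for texel in id_buffer if texel != 0)

# A tiny 4x2 "texture" flattened to a list of light ids:
counts = visible_lights([0, 3, 3, 0, 7, 3, 0, 7])
# light 3 covers three texels, light 7 covers two
```

The counts could then feed directly into the importance ranking, at the cost of a GPU-to-CPU transfer per frame.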
14:07:14
Colleen
As for the light's fragment shader, I was under the misapprehension that the light would be a shader object in a per-object pass, which is clearly not true
14:09:45
shinmera
GPU renderers are optimised for rendering flat geometry. They're basically just a vector polyfill, where you can nudge the vertices about in one stage, and then decide what colour to put into each pixel of the surface in the other.
14:10:48
shinmera
the latter, fragment stage, has no access to other geometry information, so it cannot do a lot of things that a standard raytracer would do
14:11:51
shinmera
like, a raytracer will shoot a ray from the camera to a surface, then evaluate some function to determine the colour contribution at that point. So far it's the same. But that function in a raytracer is usually recursive, where it can again shoot a ray over the scene until it finds a light, then go back and affect all the surfaces in its path
14:12:31
shinmera
Or do the inverse, shoot rays from each light to determine where the light is going to hit, store that info, and then look it up when you actually shoot the camera rays. That's called a "photon map"
14:13:10
Colleen
Right, so the ray tracer finds lights per fragment by its nature, but a standard raster renderer won't have that information?
14:14:11
shinmera
A GPU based renderer has access to the local information like the world, view, position, depth, and the parameters of the global lights.
14:15:39
Colleen
okay, so each light is just a bunch of numbers to multiply by and there's no trivial way of deciding which ones are more important
14:16:59
shinmera
This is where forward/deferred rendering distinctions come in. In a forward renderer, for every fragment rendered, you evaluate the light contribution of each light. This scales badly, because you don't know whether the fragment is going to even survive the depth test or not, so you spend a lot of time shading stuff that won't be visible, and it gets worse with more lights. A deferred renderer will instead first render out all the static information needed for shading to a full screen buffer, then run shading on the info in that buffer, reducing the number of shading steps to lights*width*height.
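[The cost comparison above as back-of-the-envelope arithmetic. The overdraw factor and light count are example assumptions; the point is only that forward shading scales with rasterised fragments (including occluded ones) while deferred shading scales with screen pixels.]

```python
def forward_steps(lights, rasterised_fragments):
    # Forward: every rasterised fragment is shaded for every light,
    # even fragments that later lose the depth test (overdraw).
    return lights * rasterised_fragments

def deferred_steps(lights, width, height):
    # Deferred: geometry info is written to a full-screen buffer first,
    # so shading runs once per screen pixel per light.
    return lights * width * height

# With 4x overdraw at 1920x1080 and 8 lights, forward shades
# four times as many fragments as deferred:
fwd = forward_steps(8, 4 * 1920 * 1080)
dfr = deferred_steps(8, 1920, 1080)
```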
14:17:39
shinmera
But either way, you don't want to have too many lights active at once, because they'll still cost you gpu time to evaluate, even if they don't make a noticeable difference to the final image.
14:18:05
shinmera
So the engine needs to sort the lights by importance and only keep those that make a noticeable difference. Somehow.
14:19:33
Colleen
I think I have a good model of the problem now. Sorry for using you as a thought-dartboard as I figured it out
14:20:26
shinmera
I am not immune to the allure of feeling smart for being able to explain something somewhat competently :v
14:23:39
shinmera
I did write a deferred renderer in trial ages ago https://www.youtube.com/watch?v=xIZxbIQPmqM