Sunday, March 4, 2007

This is how the Terrestrial Model System works

The game defines four sizes for "Terrestrials" (i.e. planets that aren't Gas Giants). To the reader's great astonishment, these are named Tiny, Small, Medium and Large. Each is defined by a subdivided icosahedron: Tiny is subdivided once, resulting in 41 vertices; Large is subdivided 4 times, resulting in 641. It is these vertices that comprise the Planetary Location Grid.
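As a hedged sketch of the grid construction (none of this is the game's actual code), here is one way to count the vertices of a geodesic sphere built by subdividing an icosahedron's faces and projecting onto the sphere. The textbook count for a frequency-f geodesic sphere is 10f² + 2, which comes out one higher than the figures quoted above (42 vs 41, 642 vs 641), so the game's exact grid evidently differs by a vertex; mapping "subdivided n times" to frequency f = 2n is my assumption.

```python
import math
from itertools import combinations

def icosahedron_vertices():
    """The 12 vertices of a regular icosahedron: cyclic permutations of (0, ±1, ±phi)."""
    phi = (1 + math.sqrt(5)) / 2
    verts = []
    for i in (-1.0, 1.0):
        for j in (-phi, phi):
            verts += [(0.0, i, j), (i, j, 0.0), (j, 0.0, i)]
    return verts

def icosahedron_faces(verts):
    """Faces are exactly the triples of mutually adjacent vertices (edge length is 2)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [(a, b, c) for a, b, c in combinations(range(12), 3)
            if d2(verts[a], verts[b]) < 4.1
            and d2(verts[b], verts[c]) < 4.1
            and d2(verts[a], verts[c]) < 4.1]

def geodesic_vertex_count(freq):
    """Split each face into freq**2 small triangles, project the grid points
    onto the unit sphere, and count the unique vertices."""
    verts = icosahedron_vertices()
    seen = set()
    for a, b, c in icosahedron_faces(verts):
        A, B, C = verts[a], verts[b], verts[c]
        for i in range(freq + 1):
            for j in range(freq + 1 - i):
                k = freq - i - j
                p = [(A[t] * k + B[t] * i + C[t] * j) / freq for t in range(3)]
                norm = math.sqrt(sum(x * x for x in p))
                seen.add(tuple(round(x / norm, 9) for x in p))
    return len(seen)
```

With this construction, `geodesic_vertex_count(2)` gives 42 and `geodesic_vertex_count(8)` gives 642, following the 10f² + 2 pattern.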

This grid is used for locating buildings and resource deposits, and it could also form the basis of a pathfinding graph (for A*, say) if I ever need one.

These geodesic spheres are then scaled so that the edge length on all of them is the same, and that determines the final radius of each planet size within the galactic coordinate system.
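To make that concrete, here is a back-of-envelope version of the scaling (my own approximation, not the game's math): an icosahedron edge spans a central angle of arccos(1/√5) ≈ 1.107 radians on the sphere, a frequency-f grid chops that arc into roughly f chords, and the radius follows from fixing the chord length. Geodesic edges are not perfectly uniform, so this is only approximate.

```python
import math

# Central angle subtended by an icosahedron edge on the unit sphere:
# adjacent icosahedron vertices have dot product 1/sqrt(5).
THETA0 = math.acos(1 / math.sqrt(5))  # ~1.1071 radians

def planet_radius(freq, edge_len):
    """Radius that makes a frequency-`freq` geodesic sphere's edges roughly
    `edge_len` long: the edge is the chord of an arc of THETA0 / freq."""
    return edge_len / (2 * math.sin(THETA0 / (2 * freq)))
```

Assuming Tiny and Large correspond to frequencies 2 and 8, a Large planet comes out close to four times the radius of a Tiny one for the same edge length.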

For rendering, I take each basis grid and subdivide it a bunch more times. So a large planet's model ends up with about 10,000 vertices.


Here's a planet rendered in wireframe, to illustrate the vertex density.


I maintain a single instance of each of these models in a vertex buffer of Position/Texture, where the texture coordinates contain pre-calculated Longitude and Latitude. I don't have any real use for Latitude yet, but I could do latitude-based terrain alterations. Longitude will be used for dayside/nightside calcs: I'm doing those with a dot product in the pixel shader right now, but I can drop that computation and instead compare longitude against the star's position and the planet's current axial theta.
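The longitude trick can be checked in isolation. With the spin axis as +y and the star lying in the equatorial plane at longitude `sun_lon` (my conventions, assumed), the per-pixel dot product and the longitude/latitude form give the same answer; the planet's current axial theta would just be folded in as an offset to the longitude.

```python
import math

def daylight_dot(lat, lon, sun_lon):
    """Current approach: dot the surface normal against the star direction."""
    n = (math.cos(lat) * math.cos(lon), math.sin(lat), math.cos(lat) * math.sin(lon))
    s = (math.cos(sun_lon), 0.0, math.sin(sun_lon))
    return sum(a * b for a, b in zip(n, s))

def daylight_lonlat(lat, lon, sun_lon):
    """Proposed approach: the same value straight from the precomputed lat/lon.
    Axial rotation theta would shift lon (or sun_lon) before the subtraction."""
    return math.cos(lat) * math.cos(lon - sun_lon)
```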

Then there's a second, huge vertex buffer that contains the Perlin samples for every vertex of every planet in the galaxy. It ends up at about 10MB for a Large Galaxy using a 2D texture coord, and takes about 8 seconds to generate. This is a rather greedy use of video memory. If it turns out to be a problem, I have all the hooks in place to create and discard planet sample buffers as the camera moves around the galaxy. Each planet takes 0.1-ish seconds to sample, and there are no more than a half-dozen terrestrials in any system, so that's just a couple frames of delay.
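The 10MB figure is easy to sanity-check with illustrative numbers (the exact per-vertex layout here is an assumption):

```python
def sample_buffer_bytes(verts_per_planet, floats_per_sample=2, bytes_per_float=4):
    """Size of the Perlin-sample stream for one planet, assuming a 2D texture
    coordinate of 32-bit floats per vertex."""
    return verts_per_planet * floats_per_sample * bytes_per_float

per_large_planet = sample_buffer_bytes(10_000)            # 80,000 bytes, ~78 KB
planets_in_10mb = (10 * 1024 * 1024) // per_large_planet  # ~131 Large planets' worth
```

So a 10MB buffer holds on the order of a hundred-odd Large-planet sample sets, fewer vertices per planet for the smaller sizes.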

The other approach would be to maintain one big buffer, but only big enough for the fattest system. I already keep all the vertex data in system memory so I can rebuild my buffers during device resets, so I could page in just the chunks for the system currently containing the camera. I guess that would work a lot better: just a lock/unlock instead of going back to the Perlin.
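That paging path might look something like this sketch, with made-up names standing in for the lock/blit/unlock of a real vertex buffer:

```python
def page_system_samples(system_chunks, system_id, shared_buffer):
    """Copy one system's pre-sampled vertex data, already held in system
    memory for device resets, into a shared buffer sized for the fattest
    system. No Perlin re-sampling, just a memcpy-equivalent."""
    chunk = system_chunks[system_id]
    shared_buffer[:len(chunk)] = chunk
    return len(chunk)  # number of samples now valid in the buffer
```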

So when it comes time to render a planet, I set the basis model as Stream[0] and the sample buffer as Stream[1] with the appropriate offset, and out comes the planet.

As of this writing, I do the vertex displacements inside the shader, working from the raw Perlin noise sample. The coefficient is Noise^3, and samples range from -1.5ish to +1.5ish, which produces those nice steep mountains. But I think I'm going to move that calc back into a pre-processing stage during generation; there's no sense doing Noise^3 thousands of times per frame. It will also let me centralize the terrain displacement logic, which I need so I can displace building models and the like.
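Moved into pre-processing, the displacement amounts to this (my sketch; the radius and scale values are placeholders):

```python
def displace(unit_verts, noise_samples, radius, scale):
    """Bake the Noise**3 displacement into the vertex positions once at
    generation time, instead of thousands of times per frame in the shader.
    Samples run roughly -1.5 to +1.5, so cubing steepens the extremes."""
    out = []
    for (x, y, z), n in zip(unit_verts, noise_samples):
        r = radius + scale * n ** 3
        out.append((x * r, y * r, z * r))
    return out
```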

The other advantage is that it will let me implement special handling for certain locations on the planetary surface: specifically, mineral deposits. During processing, I can determine the locations of these resources and use the adjacencies on the high-density rendering models to flatten out the "hex" containing the resource, or flag it to not get rendered at all. Then I can patch in a small model of ingots or crystals or what-have-you. It should look a lot better than decal textures.
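The flattening step could be as simple as pulling a deposit vertex and its grid neighbours to a common elevation (a sketch; the adjacency structure and names are assumed):

```python
def flatten_deposit(elevations, adjacency, site):
    """Flatten the 'hex' around a mineral deposit: set the site vertex and
    its neighbours to their mean elevation, so a patched-in prop model
    (ingots, crystals) sits on level ground."""
    ring = [site] + list(adjacency[site])
    level = sum(elevations[v] for v in ring) / len(ring)
    for v in ring:
        elevations[v] = level
    return level
```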

There are a lot of unresolved issues with the renderings of planets. They need some sort of atmosphere to help them pop off the background.

This early planetary renderer does colors by vertex using an extremely simple lerp. I'm going to replace that with an altitude-based texture lookup: I'll pass the elevation through to the pixel shader, which can then do a 1D tex read into the sample swatch. I'm hoping this will give a smoother blend between terrain types.
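The 1D lookup amounts to mapping elevation into a swatch index. This sketch uses nearest-texel for clarity; the hardware sampler would filter between neighbouring entries, which is where the smoother blending comes from.

```python
def swatch_lookup(swatch, elevation, lo=-1.5, hi=1.5):
    """Clamp elevation into [lo, hi] and read the 1D colour swatch,
    like a 1D texture fetch in the pixel shader. The -1.5..1.5 range
    matches the raw noise samples described above."""
    t = max(0.0, min(1.0, (elevation - lo) / (hi - lo)))
    return swatch[min(int(t * len(swatch)), len(swatch) - 1)]
```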

Coastlines are a different problem because a pixel is either in or out of the water. The fuzzy dark coastlines you see in the current renderings are what you get when you lerp between Green and Blue. The texture lookup for color should smooth these out, getting rid of the dark band and hiding the hexagonal nature of the underlying structure quite a bit. But I'm a little worried about heavy artifacting right at 0 elevation. I'm betting it would look really static-y right along the coast as the planet and camera move.

What I'm considering is displacing the sea-floor vertices down quite abruptly from sea level (right now they don't get displaced at all), and coloring them by depth, from a pale blue down to a deep blue. Then, right at sea level, I'd render a second, smooth sphere in blue with a 0.5 alpha. The hope is that the intersection of the ocean sphere and the land sphere will produce a nice-looking coastline. But who knows; I'll just have to try it.
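The depth-based sea-floor colouring is a straight lerp (the colour values here are placeholders):

```python
def seafloor_colour(depth, max_depth, pale=(0.5, 0.7, 0.9), deep=(0.0, 0.1, 0.4)):
    """Lerp from pale blue at sea level (depth 0) to deep blue at max depth."""
    t = max(0.0, min(1.0, depth / max_depth))
    return tuple(p + (d - p) * t for p, d in zip(pale, deep))
```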

This ocean sphere could get a specular treatment that might look nice and if I got real ambitious, I could put an ocean waves effect on it.
