GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering
We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation that depends on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly by information extracted during rendering.
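As a rough illustration of this ray-guided idea, the following CPU-side sketch mimics the feedback loop: rays that touch a brick of voxel data not yet resident in the cache record a request instead of stalling, and the streamer then uploads the most-demanded bricks first, within a per-frame budget. All names (`BrickId`, `BrickCache`, `RequestBuffer`, `stream_update`) are hypothetical; the real system performs this on the GPU with texture memory and an LRU cache.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Hypothetical brick id: index of a voxel brick in the brick pool.
using BrickId = uint32_t;

// Stand-in for the GPU-resident brick cache (a set instead of texture memory).
struct BrickCache {
    size_t capacity;                // limited GPU memory budget
    std::set<BrickId> resident;     // bricks currently uploaded
    bool contains(BrickId b) const { return resident.count(b) != 0; }
};

// During ray casting, a ray hitting a non-resident brick records a request
// (rendering would meanwhile fall back to a coarser level of detail).
struct RequestBuffer {
    std::map<BrickId, int> hits;    // brick -> number of rays that needed it
    void request(BrickId b) { ++hits[b]; }
};

// After the frame, stream in the most-requested missing bricks first,
// respecting an upload budget (eviction of stale bricks elided here).
std::vector<BrickId> stream_update(BrickCache& cache,
                                   const RequestBuffer& reqs,
                                   size_t max_uploads_per_frame) {
    std::vector<std::pair<int, BrickId>> ranked;
    for (const auto& [brick, count] : reqs.hits)
        if (!cache.contains(brick))
            ranked.push_back({count, brick});
    std::sort(ranked.rbegin(), ranked.rend());   // most requested first
    std::vector<BrickId> uploaded;
    for (const auto& [count, brick] : ranked) {
        if (uploaded.size() >= max_uploads_per_frame ||
            cache.resident.size() >= cache.capacity)
            break;
        cache.resident.insert(brick);            // stands in for a GPU upload
        uploaded.push_back(brick);
    }
    return uploaded;
}
```

Because only visible, unoccluded bricks generate requests, data production naturally follows the viewpoint, which is what keeps several-billion-voxel scenes within the GPU memory budget.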
Our data structure exploits the fact that, in CG scenes, details are often concentrated at the interface between free space and clusters of density; it suggests that volumetric models may become a valuable alternative rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that increases the display rate and improves quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods.
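The claim that details concentrate at interfaces can be made concrete with a small sketch, assuming a hypothetical `Classifier` that labels a cubic region as empty, full, or mixed: a sparse octree only refines mixed (interface) regions, so homogeneous space collapses into single leaves. The names and the node-counting helper below are illustrative, not the paper's actual data structure (which uses an N³-tree of voxel bricks).

```cpp
#include <cassert>
#include <functional>

// A density field classified per region: empty, full, or mixed (interface).
enum class Region { Empty, Full, Mixed };

// Hypothetical classifier: given a cube (corner x,y,z and edge length size),
// report whether it is homogeneous or crosses a density interface.
using Classifier = std::function<Region(int x, int y, int z, int size)>;

// Count the nodes a sparse octree allocates: homogeneous regions stay single
// leaves; only interface regions are subdivided, down to min_size.
int count_nodes(const Classifier& classify,
                int x, int y, int z, int size, int min_size) {
    Region r = classify(x, y, z, size);
    if (r != Region::Mixed || size <= min_size)
        return 1;                        // leaf: constant region or finest level
    int h = size / 2, n = 1;             // interior node plus its 8 children
    for (int dz = 0; dz < 2; ++dz)
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx)
                n += count_nodes(classify, x + dx * h, y + dy * h, z + dz * h,
                                 h, min_size);
    return n;
}
```

For a 4³ domain split by a flat density interface, this allocates 9 nodes where a dense depth-2 octree would need 73, and the gap widens rapidly with resolution; this is why mostly-homogeneous scenes of billions of voxels remain tractable.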
We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps and respect the limited GPU memory budget.
Images and movies
See also
Online Video
BibTex references
@InProceedings{CNLE09,
  author       = "Crassin, Cyril and Neyret, Fabrice and Lefebvre, Sylvain and Eisemann, Elmar",
  title        = "GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering",
  booktitle    = "ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D)",
  month        = "feb",
  year         = "2009",
  publisher    = "ACM Press",
  organization = "ACM",
  address      = "Boston, MA, United States",
  note         = "to appear",
  url          = "http://maverick.inria.fr/Publications/2009/CNLE09"
}