VR Art Museum

One project I really enjoyed, and one that touched on numerous topics, was taking a real-life sculpture and adding it to a virtual museum. Although I’ve come to discover that it’s probably not worth the trouble to use point cloud meshes in game engines, it is still fun to build something from a real-life object. Here is an image of a sculpture in a museum in Miami. My coworker, who lives in Miami, captured a 360° video walking around this large bust. We were researching NeRFs at the time, so we converted the video into a NeRF using Nvidia’s instant-ngp.
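For anyone who wants to try the same thing, the usual instant-ngp workflow is: pull frames out of the video, let COLMAP estimate camera poses, then train the NeRF. Below is a minimal sketch of that preprocessing step using the colmap2nerf.py helper that ships with instant-ngp; the file name and flag values are placeholders, and the exact flags can differ between repo versions, so check the script's --help.

```python
# Hedged sketch of the video -> NeRF preprocessing step.
import subprocess

# colmap2nerf.py extracts frames from the video, runs COLMAP to estimate camera poses,
# and writes a transforms.json that instant-ngp can train from.
subprocess.run(
    [
        "python", "scripts/colmap2nerf.py",
        "--video_in", "bust_360.mp4",   # placeholder name for the 360 walk-around video
        "--video_fps", "2",             # a couple of frames per second is plenty for a slow orbit
        "--run_colmap",                 # run COLMAP to recover camera poses
        "--aabb_scale", "16",           # larger scene bound since the camera orbits a big sculpture
    ],
    check=True,
    cwd="instant-ngp",                  # run inside a checkout of the instant-ngp repo
)

# Training can then be done interactively in instant-ngp's GUI, pointed at the
# folder containing the extracted images and transforms.json.
```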

From that NeRF, we converted the neural rendering into a mesh (.ply file) using Nvidia’s marching cubes mesh converter. Using CloudCompare or MeshLab, I trimmed away some of the artifacts that can be seen in the second image around the base of the bust. This was my first time using a mesh from a NeRF. I would still recommend using a LiDAR sensor, but for a quick project like this, a NeRF is a good alternative.
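I did the trimming by hand, but this kind of cleanup can also be scripted. Here is a minimal sketch, assuming Open3D and placeholder file names, that keeps only the largest connected piece of the mesh and throws away the small floating clusters that tend to show up around the base.

```python
import numpy as np
import open3d as o3d

# Placeholder path for the marching-cubes export from the NeRF.
mesh = o3d.io.read_triangle_mesh("bust_marching_cubes.ply")

# Group triangles into connected components; floaters are usually tiny clusters.
cluster_ids, cluster_sizes, _ = mesh.cluster_connected_triangles()
cluster_ids = np.asarray(cluster_ids)
cluster_sizes = np.asarray(cluster_sizes)

# Keep only the largest component (the bust itself) and drop everything else.
keep = cluster_ids == cluster_sizes.argmax()
mesh.remove_triangles_by_mask(~keep)
mesh.remove_unreferenced_vertices()

o3d.io.write_triangle_mesh("bust_cleaned.ply", mesh)
```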

The biggest issues with using point cloud meshes are triangle count and lighting. A mesh converted from a point cloud is made up of far too many triangles and vertices; 3D assets built for video games don’t usually have anywhere near that many. The high triangle count directly impacts lighting, because calculating light reflections on an object with tens of thousands of triangles is far more computationally expensive. I also almost always run into overlapping lightmap UVs with point cloud meshes.
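One partial fix is to decimate the mesh before it ever reaches the engine. The sketch below, again assuming Open3D and a placeholder triangle budget, uses quadric decimation to collapse the mesh down to a more game-friendly count; a lower count also tends to make generating clean, non-overlapping lightmap UVs easier.

```python
import open3d as o3d

# Placeholder path for the cleaned-up mesh from the previous step.
mesh = o3d.io.read_triangle_mesh("bust_cleaned.ply")
print(f"before: {len(mesh.triangles)} triangles")

# Quadric-error decimation: collapse edges until we hit a target triangle budget.
# The target here is a placeholder; pick whatever your engine's lighting budget tolerates.
low_poly = mesh.simplify_quadric_decimation(target_number_of_triangles=20_000)
low_poly.remove_degenerate_triangles()
low_poly.remove_unreferenced_vertices()
low_poly.compute_vertex_normals()   # recompute normals so the engine shades it correctly
print(f"after: {len(low_poly.triangles)} triangles")

o3d.io.write_triangle_mesh("bust_lowpoly.ply", low_poly)
```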

When I combine game-ready 3D assets with point cloud meshes, it is very hard to make them blend cohesively. Apart from the lighting, it is usually obvious which object in a scene is a point cloud mesh because of its texture and material.

Here is a walkthrough of the virtual art museum with the real-life bust.