NeRFs

I haven’t been this excited about a topic in CV since GANs. Capturing reality and putting it in a 3D digital setting has been one of my greatest interests in technology. Whether it’s CGI in movies or video game engines like Unreal or Unity, making a scene look real has always been hard. With NeRFs, days of editing environments in 3D modeling tools can be replaced with a 30-second video. Below I’ll talk about some use cases and some NeRFs I’ve put together using NVIDIA’s instant-ngp.

Google Maps & Self Driving Cars

The first place a lot of companies are trying to use NeRFs is in the mapping of cities. So far, I’ve seen Google and ydrive.ai both take the same approach: they drive around in cars to collect the input data and build large-scale NeRFs that can span city blocks (Block-NeRF). This is different from many NeRFs that focus on an object of interest that a scanner orbits around. Google will obviously be using this technology to enhance the Google Maps experience, but ydrive.ai might have a different approach. Through their partnership with Unreal Engine, it seems they are focusing on bringing NeRFs into video game engines with a plugin called CitySynth. Here is an interesting podcast showing off some of ydrive.ai’s NeRFs. I wish I had access to this plugin when I made my synthetic dataset!

CGI and Visual Effects

Some amazing technology has been built to satisfy the demands of VFX studios. AI has been used for de-aging actors (deepfakes), blockbuster-budget TV shows like The Mandalorian and House of the Dragon have been using LED volumes and Unreal Engine instead of green screens, and I’m not sure if it’s true, but I’ve heard that optical flow work originated with The Matrix’s ‘Bullet Time’ scene. VFX studios have been at the forefront of using technology for entertainment, and I imagine they will do the same with NeRFs. Check out this great video on the ways NeRFs can help with CGI.

Drone Footage

The first place I wanted to start in my NeRF exploration was drone footage. Below are a couple of NeRFs I made from drone footage I found online. To get a perfect shot from a drone, you need either a really skilled pilot or a predetermined flight path. For advertisements or marketing, someone could instead capture a set of still images and use a NeRF to generate the video. If a director doesn’t like the camera movement from the NeRF render, it’s easy to produce a new one; raw drone video doesn’t offer that flexibility. I also included an example below where I took screenshots from satellite imagery and trained a NeRF on them.

Made from drone footage I found of a monument.

Made with 45 images of Red Rocks in Denver, Colorado.

Made from a dataset of screenshots of a 2D satellite map online.
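For anyone wanting to try this themselves, the rough workflow I used with instant-ngp looks something like the commands below. This is a sketch, not a definitive recipe: the script name `colmap2nerf.py` and flags like `--video_fps` and `--aabb_scale` come from the instant-ngp repo and may differ between versions, and the final training command depends on how you built the project.

```shell
# Clone and build instant-ngp per the repo's README (requires CUDA + COLMAP)
git clone --recursive https://github.com/NVlabs/instant-ngp
cd instant-ngp

# Convert a drone video into a NeRF dataset:
# extracts frames at ~2 fps and runs COLMAP to estimate camera poses.
# --aabb_scale enlarges the scene bounding box for large outdoor scenes.
python scripts/colmap2nerf.py \
    --video_in drone_clip.mp4 \
    --video_fps 2 \
    --run_colmap \
    --aabb_scale 16

# Launch the viewer/trainer on the resulting transforms.json
./instant-ngp data/drone_scene
```

A still-image dataset (like my 45 Red Rocks photos or the map screenshots) skips the video-extraction step: point `colmap2nerf.py` at an images folder with `--run_colmap` and it produces the same `transforms.json`.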

Forensics

This may be an odd business case, but it could make sense. I wonder: if a jury were able to navigate a 3D reconstruction of a car crash or crime scene, would it help them reach a decision more easily than still images would? I couldn’t find high-quality source images, so many artifacts are visible.