Over the last two years there has been an explosion of research surrounding NeRFs, or Neural Radiance Fields, a new way to render 3D scenes from a set of 2D images. The technology is developing rapidly, and it may be the best path forward to photorealistic rendering of viewpoints never before seen by a camera. Large research labs such as Facebook AI and Google Research are pursuing it to render realistic simulations for self-driving cars and to improve the usability of 3D scans. Our project, NeRF or Nothing, will be a web application built on this technology that lets users upload videos or collections of photos and render novel, realistic views of the scene they captured. This will include the ability to create “flythroughs”: videos made by moving a virtual camera through a captured scene to show perspectives the original footage never recorded.
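For context, the core operation a NeRF performs when rendering a pixel is volume rendering: a trained network predicts a color and a density at sample points along each camera ray, and those samples are alpha-composited into a final pixel color. The sketch below illustrates that compositing step with NumPy; the function name, array shapes, and toy inputs are illustrative assumptions and are not taken from this repository's code.

```python
# Minimal sketch of the volume-rendering (alpha-compositing) step used by NeRF
# (Mildenhall et al., 2020). All names and shapes here are illustrative only.
import numpy as np

def composite_ray(colors: np.ndarray, densities: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Alpha-composite per-sample colors along a single camera ray.

    colors:    (N, 3) RGB predicted by the radiance field at each sample
    densities: (N,)   volume density (sigma) at each sample
    deltas:    (N,)   distance between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-densities * deltas)          # opacity of each ray segment
    survival = np.cumprod(1.0 - alphas + 1e-10)         # light remaining after each segment
    transmittance = np.concatenate([[1.0], survival[:-1]])  # light reaching each sample
    weights = alphas * transmittance                    # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)      # final pixel color

# Toy usage: 64 random samples along one ray.
rng = np.random.default_rng(0)
pixel = composite_ray(rng.random((64, 3)), rng.random(64) * 5.0, np.full(64, 0.02))
print(pixel)
```

Rendering a full flythrough amounts to repeating this per-pixel compositing for every ray of every frame along a chosen virtual camera path.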
Please see the wiki in the vidtonerf repository for additional learning resources and information.
Check out:
- Frontend
- Backend
- VidToNerf