NVIDIA Research today announced that it is bringing a number of improvements in rendering, simulation and generative artificial intelligence to SIGGRAPH 2024. The computer graphics conference will be held July 28-Aug. 1 in Denver.
At SIGGRAPH, NVIDIA Corp. plans to present more than 20 papers introducing innovations in synthetic data generators and inverse rendering tools that can help train next-generation models. The company said its AI research improves simulation by increasing image quality and unlocking new ways to create 3D representations of real or imagined worlds.
Papers focus on diffusion models for visual generative AI, physics simulations, and increasingly realistic AI rendering. These include two Best Technical Paper Award winners and collaborations with universities in the US, Canada, China, Israel and Japan, as well as researchers from companies including Adobe and Roblox.
These initiatives will help create tools that developers and businesses can use to generate complex virtual objects, characters and environments, the company said. The synthetic data they generate can then be used to tell powerful visual stories, help scientists understand natural phenomena, or aid in simulation-based training of robots and autonomous vehicles.
Diffusion models improve texture painting, text-to-image generation
A popular tool for transforming text prompts into images, diffusion models can help artists, designers, and other creators quickly generate visuals for storyboards or production, reducing the time it takes to bring ideas to life.
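Under the hood, a diffusion model learns to reverse a gradual noising process applied to training images. As an illustration only (not code from any of the papers above), here is a minimal NumPy sketch of the forward noising step, using the widely used cosine schedule; the small random array stands in for an image:

```python
import numpy as np

def cosine_alpha_bar(t, T):
    """Cumulative signal-retention schedule (cosine form): 1.0 at t=0, ~0 at t=T."""
    f = lambda s: np.cos((s / T + 0.008) / 1.008 * np.pi / 2) ** 2
    return f(t) / f(0)

def q_sample(x0, t, T, rng):
    """Sample x_t ~ q(x_t | x_0): blend the clean signal with Gaussian noise."""
    a_bar = cosine_alpha_bar(t, T)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(a_bar) * x0 + np.sqrt(1.0 - a_bar) * eps, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))                 # stand-in for a clean image
xt, eps = q_sample(x0, t=500, T=1000, rng=rng)   # half-noised version
```

Training teaches a network to predict `eps` from `xt`; generation then runs the process in reverse, starting from pure noise.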
Two papers from NVIDIA improve the capabilities of these generative AI models.
ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easy to generate multiple images with a consistent main character. The company said this is an essential capability for storytelling use cases such as illustrating a comic book or developing a screenplay. The researchers’ approach introduces a technique called subject-driven shared attention, which cuts the time it takes to produce consistent images from 13 minutes to about 30 seconds.
Last year, NVIDIA researchers won the Best in Show award at the SIGGRAPH Real-Time Live event for artificial intelligence models that convert text or image prompts into custom textured materials. This year they present a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, allowing artists to paint in real time with complex textures based on any reference image.
NVIDIA Research advances physics simulation
Graphics researchers bridge the gap between physical objects and their virtual representations using physics simulation—a series of techniques that allow digital objects and characters to move in the same way they would in the real world. Several NVIDIA research papers have made breakthroughs in this area, including SuperPADL, a project that tackles the challenge of simulating complex human movements based on text.
Using a combination of reinforcement learning and supervised learning, the researchers showed how the SuperPADL framework can be trained to reproduce the motion of more than 5,000 skills—and can run in real time on a consumer NVIDIA GPU.
Another NVIDIA paper features a neural physics method that uses artificial intelligence to figure out how objects, whether represented as a 3D mesh, a NeRF, or a shape generated by a text-to-3D model, would behave as they move through an environment. NeRF, or Neural Radiance Field, is an artificial intelligence model that takes 2D images representing a scene as input and interpolates between them to render the complete 3D scene.
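The interpolation a NeRF performs rests on classical volume rendering: density samples along a camera ray are alpha-composited, with each sample weighted by the transmittance of everything in front of it. A minimal NumPy sketch with made-up sample values (illustrative only, not any paper's implementation):

```python
import numpy as np

def composite_along_ray(sigmas, colors, deltas):
    """NeRF-style volume rendering quadrature along one ray.

    sigmas: (N,) volume densities at each sample
    colors: (N, 3) RGB predicted at each sample
    deltas: (N,) lengths of the ray segments between samples
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)     # opacity of each segment
    # transmittance: fraction of light surviving to reach each sample
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                    # contribution of each sample
    return weights @ colors, weights

sigmas = np.array([0.1, 0.5, 3.0, 0.2])        # toy densities along the ray
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)
rgb, w = composite_along_ray(sigmas, colors, deltas)
```

The weights sum to at most 1; whatever is left over corresponds to the ray passing through to the background.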
A paper written in collaboration with Carnegie Mellon University discusses the development of a new kind of renderer. Instead of modeling physical light, the renderer can perform thermal analysis, electrostatics, and fluid mechanics (see video below). Named one of the top five papers at SIGGRAPH, the method is easy to parallelize and does not require cumbersome model cleanup, offering new opportunities to accelerate engineering design cycles.
Other simulation papers present a more efficient technique for modeling hair strands and a pipeline that speeds up fluid simulation by 10x.
Papers raise the bar for realistic rendering, diffraction simulation
The next set of papers from NVIDIA will introduce new techniques for modeling visible light up to 25x faster and simulating diffraction effects – such as those used in radar simulations for training self-driving cars – up to 1,000x faster.
The paper by NVIDIA and University of Waterloo researchers looks at free-space diffraction, an optical phenomenon where light spreads or bends around the edges of objects. The team’s method can integrate with path-tracing workflows to increase the efficiency of diffraction simulation in complex scenes, offering up to a 1000x speedup. In addition to rendering visible light, the model can also be used to simulate longer wavelengths of radar, sound, or radio waves.
Path tracing samples multiple paths (multi-bounce light rays traversing the scene) to create a photorealistic image. Two SIGGRAPH papers improve sampling quality for ReSTIR, a path-tracing algorithm first introduced by NVIDIA and Dartmouth College researchers at SIGGRAPH 2020 that was instrumental in bringing path tracing to games and other real-time rendering products.
One of these papers, a collaboration with the University of Utah, shares a new way to reuse calculated paths that increases the effective number of samples by up to 25x, significantly improving image quality. The second improves the sample quality by randomly mutating a subset of the light path. This helps the denoising algorithms perform better and produce fewer visual artifacts in the final rendering.
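ReSTIR-style sample reuse builds on weighted reservoir sampling, which lets a renderer stream through many candidate light samples while storing only one, kept with probability proportional to its weight. A minimal, illustrative sketch (the class and the sample values are hypothetical, not code from the papers):

```python
import random

class Reservoir:
    """Single-sample weighted reservoir, the streaming building block
    behind ReSTIR-style resampled importance sampling."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0   # running sum of candidate weights
        self.count = 0     # number of candidates seen

    def update(self, sample, weight, rng):
        self.w_sum += weight
        self.count += 1
        # replace the stored sample with probability weight / w_sum;
        # this keeps each candidate with chance proportional to its weight
        if self.w_sum > 0 and rng.random() < weight / self.w_sum:
            self.sample = sample

rng = random.Random(0)
res = Reservoir()
for s, w in [("dim light", 0.1), ("bright light", 5.0), ("medium light", 1.0)]:
    res.update(s, w, rng)
```

Because a reservoir is just a sample, a weight sum, and a count, reservoirs from neighboring pixels or previous frames can be merged cheaply, which is what makes the spatial and temporal reuse in ReSTIR practical.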
Teaching AI to think in 3D
NVIDIA researchers are also presenting multi-purpose artificial intelligence tools for 3D representation and design at SIGGRAPH.
One paper introduces fVDB, a GPU-optimized framework for 3D deep learning on real-world data. The framework provides AI infrastructure for large-scale, high-resolution 3D models of cities, NeRFs, and the segmentation and reconstruction of large-scale point clouds.
The Best Technical Paper winner, written in collaboration with Dartmouth College researchers, presents a theory that shows how 3D objects interact with light. The theory unifies a diverse spectrum of appearances into a single model.
In addition, NVIDIA Research’s collaboration with the University of Tokyo, the University of Toronto, and Adobe Research introduces an algorithm that generates smooth space-filling curves on 3D meshes in real time. While previous methods took hours, this framework runs in seconds and offers users a high degree of control over the output, enabling interactive design.
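The paper targets arbitrary 3D meshes, but the idea of a space-filling curve is easiest to see in the classic 2D Hilbert construction, which visits every cell of a grid exactly once while keeping consecutive steps adjacent. A small sketch of the standard index-to-cell mapping (illustrative only, unrelated to the paper's algorithm):

```python
def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve covering a 2**order x 2**order grid
    to its (x, y) cell, using the classic bit-manipulation recurrence."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate/reflect the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# walk an 8x8 grid: 64 cells, each step moves to a neighboring cell
path = [hilbert_d2xy(3, d) for d in range(64)]
```

The locality of such curves (nearby indices map to nearby cells) is what makes them useful for tasks like mesh traversal and toolpath design.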
See NVIDIA Research at SIGGRAPH
NVIDIA events at SIGGRAPH will include a fireside chat between NVIDIA founder and CEO Jensen Huang and Lauren Goode, senior writer at Wired, on the influence of robotics and artificial intelligence in industrial digitization.
NVIDIA researchers will also present OpenUSD Day by NVIDIA, a day-long event that showcases how developers and industry leaders are adopting and developing OpenUSD to create AI-enabled 3D pipelines.
NVIDIA Research has hundreds of scientists and engineers worldwide with teams focused on topics including artificial intelligence, computer graphics, computer vision, self-driving cars and robotics.
About the author
Aaron Lefohn leads the Real-Time Rendering Research team at NVIDIA. For more than ten years, he has led research teams in real-time graphics rendering and programming, and has brought many research ideas to product in gaming, film rendering, GPU hardware, and GPU APIs.
Lefohn’s teams’ inventions were instrumental in bringing ray tracing to real-time graphics and in pioneering the combination of artificial intelligence and computer graphics. Some of NVIDIA’s products derived from the teams’ inventions include DLSS, RTX Direct Illumination (RTXDI), NVIDIA Real-Time Denoisers (NRD), the OptiX Deep Learning Denoiser, and more.
The teams’ current areas of interest include real-time physically based light transport, AI for computer graphics, image metrics, and graphics systems.
Lefohn previously worked in rendering research and development at Pixar Animation Studios, creating interactive rendering tools for film artists. He was also part of the graphics startup Neoptica, which created rendering software and programming models for the Sony PlayStation 3. In addition, Lefohn led real-time rendering research at Intel. He received a Ph.D. in computer science from UC Davis, an MS in computer science from the University of Utah, and an MS in theoretical chemistry.