Course Description

Leading industry experts and researchers discuss the top unsolved problems in real-time rendering, why current solutions don’t work in practice, the desired ideal solution, and the problems that need to be solved to work toward that ideal. This year’s course explores the issues in geometry representation for real-time rendering and integration with asset pipelines, transparency and volumetrics concerns, new directions for real-time graphics programming models, and a panel on applying deep learning to games’ content creation challenges. Speakers include developers and researchers from Activision, Bungie, DICE, NVIDIA, Carnegie Mellon University, Aalto University, the University of Southern California, and Williams College.

Intended Audience

This course is targeted at game developers, computer graphics researchers, and other real-time graphics practitioners interested in understanding the limits of current rendering technology, and the types of innovation needed to advance the field.

Prerequisites

This is an intermediate/advanced course that assumes familiarity with modern real-time rendering algorithms and graphics APIs.

 

 


Syllabus

Open Problems in Real-Time Rendering

Tuesday, 26 July, 2:00 pm - 5:15 pm, Anaheim Convention Center, Ballroom A


2:00pm

Course Introduction

Aaron Lefohn (NVIDIA), Natalya Tatarchuk (Bungie)

 

2:10 pm

Geometry and Material Representation in Games: Lessons Learned and Challenges Remaining

Wade Brainerd (Activision)

Natalya Tatarchuk (Bungie)

 

3:05 pm

Peering Through a Glass, Darkly at the Future of Real-Time Transparency

Morgan McGuire (Williams College and NVIDIA)

 

4:00 pm

A Modern Programming Language for Real-Time Graphics: What is Needed?

Tim Foley (NVIDIA)

 

4:30 pm

Deep Learning: A New Tool for Content Creation and Game Design

Moderator: Kayvon Fatahalian (Carnegie Mellon University)

Panelists: Timo Aila (NVIDIA), Johan Andersson (DICE), Jaakko Lehtinen (Aalto University / NVIDIA), Hao Li (University of Southern California/Pinscreen)

 

5:15 pm

Closing Remarks

Aaron Lefohn (NVIDIA), Natalya Tatarchuk (Bungie)

 

Course Organizers

Aaron Lefohn (@aaronlefohn) is Director of Real-Time Rendering Research at NVIDIA, has led real-time rendering and graphics programming model research teams for over seven years, and has productized many research ideas into games, GPU hardware, and GPU APIs. In the far distant past, Aaron worked in rendering R&D at Pixar Animation Studios, creating interactive rendering tools for film artists. Aaron received his Ph.D. in computer science from UC Davis and his M.S. in computer science from University of Utah.

 

Natalya Tatarchuk (@mirror2mask) is the Graphics Lead and an Engineering Architect currently working on the state-of-the-art, cross-platform, next-generation rendering engine and game graphics for Bungie’s Destiny and its future releases. Previously she was a graphics software architect and a project lead in the Game Computing Application Group at the AMD Graphics Products Group (Office of the CTO), where she pushed parallel computing boundaries investigating innovative real-time graphics techniques. Additionally, she led ATI’s demo team, creating innovative interactive renderings, and led the tools group at ATI Research. She has published papers and articles at various computer graphics conferences and in technical book series, and has presented her work at graphics and game developer conferences worldwide. Natalya is a member of multiple industry and hardware advisory boards. She holds an MS degree in Computer Science from Harvard University with a focus in Computer Graphics, and Bachelor of Arts degrees in Mathematics and Computer Science from Boston University.

 


 

Geometry and Material Representation in Games: Lessons Learned and Challenges Remaining

 

Abstract:

This presentation explores the problems faced by AAA game content creation pipelines, from production through runtime representation, across the continuum of geometry and material representations.

Today’s games’ 3D content pipelines involve the creation of twin high-fidelity and real-time 3D models, followed by the painting, baking, or capture of auxiliary maps as specified by the engine’s material representation. This process is manual, time-consuming, error-prone, and lossy, and it results in a representation that directly targets hardware and graphics engine capabilities and is inherently not scalable. Bugs in this process can be insidious and difficult to resolve, resulting in flawed workarounds to produce acceptable results. Ultimately, the authors will challenge the research community to invent new content pipelines that are efficient, automatic, scalable, and preserve artist intent.
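The lossiness the authors describe can be illustrated with a toy example: baking a continuous, authored material parameter into a fixed-precision texture map discards information that no downstream stage can recover. The sketch below is purely illustrative (hypothetical values and an 8-bit target, not details from Destiny’s pipeline):

```python
# Toy illustration of a lossy "bake": a high-precision authored
# parameter is quantized into an 8-bit map, as many game pipelines do.

def bake_to_8bit(values):
    """Quantize floats in [0, 1] to 8-bit texel values."""
    return [round(v * 255) for v in values]

def unbake(texels):
    """Reconstruct floats from the baked 8-bit map."""
    return [t / 255 for t in texels]

source = [0.1234, 0.5, 0.999]      # authored high-fidelity values
baked = bake_to_8bit(source)       # what actually ships in the texture
recovered = unbake(baked)

# Quantization error: artist intent is only approximately preserved,
# and the loss is silent unless the pipeline checks for it.
errors = [abs(a, ) if False else abs(a - b) for a, b in zip(source, recovered)]
print(errors)
```

Real bakes compound this with resampling, compression, and representation mismatches, which is one reason the authors argue for pipelines that keep content-creation and runtime representations separable.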

The presentation will begin with in-depth content production examples from Destiny’s asset pipeline, followed by the results of an extensive artist survey and interview process across AAA game and film studios. The authors will post-mortem the pitfalls inherent in the standard game content representation, as well as initiatives to utilize new content creation methods such as tessellation, displacement maps, and subdivision surfaces, conveying lessons learned and offering advice. Changes to the specification of GPU tessellation hardware will be proposed to improve quality and to enable its use for new kinds of geometry amplification.

Finally, the authors will promote high-level concepts which they believe will guide development of the desired new content pipelines. These include separable content-creation and runtime representations, and a continuum between geometry and material.

 

Presenters:

Wade Brainerd (Activision),
Natalya Tatarchuk (Bungie)

Bios:

Wade Brainerd (@wadetb) is a technical director at Activision, where he currently works with Infinity Ward on Call of Duty: Infinite Warfare.  Wade has been programming video games professionally since 1996, and likes to focus on graphics and performance optimization.

 

Natalya Tatarchuk (@mirror2mask) is the Graphics Lead and an Engineering Architect currently working on the state-of-the-art, cross-platform, next-generation rendering engine and game graphics for Bungie’s Destiny and its future releases. Previously she was a graphics software architect and a project lead in the Game Computing Application Group at the AMD Graphics Products Group (Office of the CTO), where she pushed parallel computing boundaries investigating innovative real-time graphics techniques. Additionally, she led ATI’s demo team, creating innovative interactive renderings, and led the tools group at ATI Research. She has published papers and articles at various computer graphics conferences and in technical book series, and has presented her work at graphics and game developer conferences worldwide. Natalya is a member of multiple industry and hardware advisory boards. She holds an MS degree in Computer Science from Harvard University with a focus in Computer Graphics, and Bachelor of Arts degrees in Mathematics and Computer Science from Boston University.

 

Materials:

Slides with notes (PDF, 194 MB)

 


Peering Through a Glass, Darkly at the Future of Real-Time Transparency

 

Abstract:

Today's real-time rendering techniques are exceptionally good and fast for large, opaque, reflective surfaces. However, a large number of phenomena loosely grouped as "transparency" present serious challenges for both image quality and performance. These include partial coverage, refraction, translucent shadows, volumetric lighting, and subsurface scattering. Games use careful art direction and targeted effects, but have no unified solution that "just works" analogous to opaque surface methods.
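One reason partial coverage resists a "just works" solution is that the standard over compositing operator is not commutative: transparent fragments blended in the wrong order produce a different, incorrect color, so correct results require per-pixel sorting that rasterization pipelines do not naturally provide. A minimal sketch of this order dependence (illustrative only, using a single gray-scale channel as a stand-in for RGB; not material from the talk):

```python
# The "over" operator for premultiplied-alpha compositing.
# Blending front-over-back and back-over-front give different results,
# which is why unsorted transparent geometry renders incorrectly.

def over(front, back):
    """Composite `front` over `back`; each is (premultiplied_color, alpha)."""
    c_f, a_f = front
    c_b, a_b = back
    return (c_f + (1 - a_f) * c_b, a_f + (1 - a_f) * a_b)

near = (0.5, 0.5)    # 50%-opaque fragment closest to the camera
far = (0.25, 0.5)    # 50%-opaque fragment behind it

print(over(near, far))   # correct: near fragment in front
print(over(far, near))   # wrong order: a visibly different color
```

Order-independent transparency research is largely about approximating the sorted result without paying for the sort, which is part of the "current methods" landscape the talk surveys.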

The speaker will first sketch the physics and computer science of why transparency is so challenging that even offline renderers can only approximate it. He will then present current methods, and conclude by summarizing thoughts from leaders in production and research on the potential form of an ideal transparency solution, and the practical constraints it must satisfy.

 

Presenter:

Morgan McGuire (Williams College and NVIDIA)

 

Bio:

Morgan McGuire (@CasualEffects) has recently contributed to NVIDIA GPUs, Unity, Skylanders, The Graphics Codex, Project Rocket Golfing, Computer Graphics: Principles and Practice, and undergraduate film and game courses.

 

Materials:
(Updated: August 6th, 2016)




Slides with notes (PDF, 46 MB), Video

 


A Modern Language for Real-Time Graphics: What is Needed?

Abstract:

For about a decade we've been programming GPUs in "shading" languages designed around the task of authoring simple surface shaders for early programmable pipelines. However, the tasks facing modern game developers have grown larger. Modern engines weave together snippets of code from many sources, specialize them into hundreds or thousands of high-performance variants, and then coordinate execution across the CPU, GPU compute, and the modern rasterization pipeline. Existing shading languages simply weren't built for this.
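The scale of the specialization problem is easy to see: with n independent boolean feature flags, brute-force specialization yields 2^n shader variants. A toy sketch of that enumeration, using hypothetical flag names rather than any real engine's feature set:

```python
from itertools import product

# Hypothetical feature flags an engine might toggle per material or pass.
FLAGS = ["SKINNING", "NORMAL_MAP", "FOG", "SHADOWS", "ALPHA_TEST"]

def enumerate_variants(flags):
    """Yield every on/off combination as preprocessor-style define lists."""
    for bits in product([0, 1], repeat=len(flags)):
        yield [f"#define {name} {bit}" for name, bit in zip(flags, bits)]

variants = list(enumerate_variants(FLAGS))
print(len(variants))   # 2**5 = 32; real engines face hundreds or thousands
```

Because the variant count grows exponentially, engines end up building ad-hoc metaprogramming layers on top of the shading language, which is precisely the gap the talk argues a purpose-built language should close.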

 

In this talk, we argue that programming models and languages built with an understanding of these tasks can make graphics programmers happier and more productive. We will explore the capabilities of programming models from research and from other domains to put together a sketch of what an improved language for real-time graphics might look like.

 

Presenter:

Tim Foley (NVIDIA)

Bio:

Tim Foley (@TangentVector) works at NVIDIA, where he does research into improved programming models, languages, and APIs for graphics and game programming. He received his Ph.D. from Stanford University in 2012.

 

Materials:
(Updated: August 2nd 2016)


Slides (PPTX, 3 MB), Slides (PDF, 3.6 MB)

 

 


Panel: Deep Learning: A New Tool for Content Creation and Game Design

Abstract:

Deep learning techniques are having a significant impact on previously unsolvable problems in many fields of computer science. (Successes in image analysis, language processing, and playing Go have dotted the recent popular press.) Increasingly, these techniques are also being used to solve hard problems in game development. This panel will discuss recent uses of deep learning to advance the state of the art in content creation and player capture, discuss how this powerful new toolbox may impact future game design, and consider what advances are necessary to further increase its impact on the creation of interactive experiences.

 

Moderator:

Kayvon Fatahalian (Carnegie Mellon University)

Panelists:

Timo Aila (NVIDIA), Johan Andersson (DICE), Jaakko Lehtinen (Aalto University / NVIDIA), Hao Li (University of Southern California/Pinscreen)

 

Bios:

Kayvon Fatahalian (@kayvonf) is an Assistant Professor in the Computer Science Department at Carnegie Mellon University. His research focuses on the design of high-performance systems for real-time rendering and for analysis and mining of images and videos at scale.

 

Timo Aila joined NVIDIA in 2007 from Helsinki University of Technology, where he led the computer graphics research group. His expertise ranges from real-time rendering in computer games (e.g., Max Payne, third-party engine development for numerous games, and Umbra, the first commercial occlusion culling library) to hardware architectures, and to high-quality image synthesis, with contributions to the PantaRay rendering system used in Avatar, Tintin, and Hobbit. He also gained expertise in mobile graphics as the chief scientist of Hybrid Graphics, which was acquired by NVIDIA in 2006. Timo is currently working on machine learning for content creation and graphics. Previously he had a central role in NVIDIA's research efforts on ray tracing, stochastic rasterization, and light field reconstruction.

 

Johan Andersson (@repi) is a programmer, pixel pusher, and VP at Electronic Arts, where he leads the Frostbite Labs group. For the past 16 years he has worked on rendering, performance, and core engine systems at DICE and Electronic Arts, and has been part of building the Frostbite game engine, which is today used extensively across all of Electronic Arts. Johan is a member of multiple industry and hardware advisory boards and has frequently presented at GDC, SIGGRAPH, and graphics hardware conferences on topics such as rendering, performance, game engine design, and future software/hardware architectures.

 

Jaakko Lehtinen (@jaakkolehtinen) started out as a graphics programmer for Remedy Entertainment, an independent computer game studio based in Helsinki, Finland, and contributed significantly to the look and feel of Max Payne 1 (2001), Max Payne 2 (2003), and Alan Wake (2010) through his work on rendering, modeling, and lighting technology. Jaakko obtained his Ph.D. from Helsinki University of Technology (now Aalto University School of Science and Technology) in 2007, after which he spent two and a half years in research and teaching as a postdoctoral associate with Frédo Durand in the MIT graphics group. He joined NVIDIA in June 2010. Jaakko's interests span most areas of computer graphics, focusing on physically based rendering.

 

Hao Li is CEO of Pinscreen, an AR/social media startup in stealth mode, and assistant professor of Computer Science and Viterbi Early Career Chair at the University of Southern California. Before his faculty appointment he was a research lead at Industrial Light & Magic/Lucasfilm, where he developed next-generation real-time performance capture technologies for virtual production and visual effects. Prior to joining the force, Hao spent a year as a postdoctoral researcher at Columbia and Princeton Universities.

 


Contact: