Leading industry experts and researchers discuss the top unsolved problems in real-time rendering, why current solutions don't work in practice, the desired ideal solution, and the problems that need to be solved to work toward that ideal. This year's course explores open issues in physically-based material representations, delves into the quest for dynamic global illumination in games, tunnels through the state of GPU computing for graphics, and tackles the new frontiers in using deep learning for real-time rendering. This year's speakers include developers and researchers from Electronic Arts SEED, Unity Technologies, and NVIDIA Research.

Intended Audience
This course is targeted at game developers, computer graphics researchers, and other real-time graphics practitioners interested in understanding the limits of current rendering technology and the types of innovation needed to advance the field, with an emphasis on driving research toward techniques relevant to the games and real-time entertainment industry.

Prerequisites
This is an intermediate/advanced course that assumes familiarity with modern real-time rendering algorithms and graphics APIs.
Tuesday, August 1st, 2:00 pm - 5:15 pm, Los Angeles Convention Center, Room 408 AB

2:00 pm
Course Introduction
Aaron Lefohn (NVIDIA), Natalya Tatarchuk (Unity Technologies)

2:10 pm
Physically-Based Materials: Where Are We?
Sébastien Lagarde (Unity Technologies)

2:55 pm
A Certain Slant of Light: Past, Present and Future Challenges of Global Illumination in Games
Colin Barré-Brisebois (Electronic Arts SEED)

3:40 pm
Future Directions for Compute-for-Graphics
Andrew Lauritzen (Electronic Arts SEED)

4:25 pm
Deep Learning: The Future of Real-Time Rendering?
Marco Salvi (NVIDIA)

5:10 pm
Closing Remarks
Aaron Lefohn (NVIDIA), Natalya Tatarchuk (Unity Technologies)
Aaron Lefohn (@aaronlefohn) is Senior Director of
Real-Time Rendering Research at NVIDIA. Aaron has led real-time rendering and
graphics programming model research teams for over nine years and has
productized many research ideas into games, GPU hardware, and GPU APIs. In the
far distant past, Aaron worked in rendering R&D at Pixar Animation Studios,
creating interactive rendering tools for film artists. Aaron received his Ph.D.
in computer science from UC Davis and his M.S. in computer science from the University of Utah.
Natalya Tatarchuk (@mirror2mask) is a graphics engineer and a rendering enthusiast. As the Director of Global Graphics at Unity Technologies, she focuses on driving state-of-the-art rendering technology and graphics performance for the Unity engine. Previously she was the Graphics Lead and an Engineering Architect at Bungie, working on the innovative cross-platform rendering engine and game graphics for Bungie's Destiny franchise, including leading graphics on the upcoming Destiny 2. Natalya also contributed graphics engineering to the Halo series, including Halo 3: ODST and Halo: Reach. Before moving into game development full-time, Natalya was a graphics software architect and a lead in the Game Computing Application Group at AMD Graphics Products Group (Office of the CTO), where she pushed the boundaries of parallel computing while investigating advanced real-time graphics techniques. Natalya has long encouraged sharing in the games graphics community, largely by organizing popular SIGGRAPH course series such as Advances in Real-Time Rendering and Open Problems in Real-Time Rendering. She has also published papers and articles at various computer graphics conferences and in technical book series, and has presented her work at graphics and game developer conferences worldwide. Natalya is a member of multiple industry and hardware advisory boards. She holds an M.S. in Computer Science from Harvard University with a focus on Computer Graphics, and B.A. degrees in Mathematics and Computer Science from Boston University.
Abstract:
Physically-based rendering is now a widespread term, present in the description of every modern game engine, but its meaning is not always well understood. Physically-based rendering can be split into three categories: physically-based materials, lighting, and camera. This talk covers the material category and surveys the current state of physically-inspired material use in real-time rendering, highlighting issues, limitations, and progress to date. The author then proposes future directions to move closer to physically-based materials from both a software developer's and an artist's perspective.
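As a concrete point of reference for what "physically-based material" means in practice, the sketch below evaluates the GGX (Trowbridge-Reitz) normal distribution function, a standard ingredient of modern microfacet material models. This is a minimal illustration, not code from the talk; the function name and the Disney-style roughness remapping are assumptions made here for clarity.

    import numpy as np

    def ggx_ndf(n_dot_h, roughness):
        # GGX / Trowbridge-Reitz normal distribution function D(h).
        # n_dot_h: cosine between the surface normal and the half vector, in [0, 1].
        # roughness: perceptual roughness; alpha = roughness^2 is the
        # widely used (e.g. Disney/Burley) remapping.
        alpha = roughness * roughness
        alpha2 = alpha * alpha
        denom = n_dot_h * n_dot_h * (alpha2 - 1.0) + 1.0
        return alpha2 / (np.pi * denom * denom)

    # Example: a rougher surface spreads the highlight over more directions,
    # so the peak density at n_dot_h = 1 drops sharply.
    print(ggx_ndf(1.0, 0.1), ggx_ndf(1.0, 0.5))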
|
Presenter:
|
Sébastien Lagarde (Unity Technologies) |
Bio:
Sébastien Lagarde is Director of Rendering Research at Unity Technologies. He has worked in the game industry since 2003 as an engine programmer with expertise in rendering. He has worked on many game consoles for a variety of titles, from small casual games to AAA (Remember Me, Mirror's Edge 2, and Star Wars Battlefront, among others). He also developed the kernel of the Trioviz SDK, a stereoscopic system used in many AAA games (Batman: Arkham City and Arkham Asylum, Assassin's Creed II, Gears of War 3, etc.). Sébastien has previously worked for Neko Entertainment, Darkworks, Trioviz, Dontnod, and Electronic Arts / Frostbite.
Materials:
Slides: PowerPoint, PDF
Abstract:
Global illumination (GI) has been an ongoing quest in games. The perpetual tug-of-war between visual quality and performance often forces developers to take the latest and greatest from academia and tailor it to push the boundaries of what has been realized in a game product. Many elements need to align for success: image quality, performance, scalability, interactivity, ease of use, as well as game-specific and production challenges.

First we will paint a picture of the current state of global illumination in games, and how that state of the union compares to the latest and greatest research. We will then explore the GI challenges that game teams face from the art, engineering, pipeline, and production perspectives. The games industry lacks an ideal solution, so the goal here is to raise awareness by being transparent about the real problems in the field. Finally, we will talk about the future. This will be a call to arms, with the objective of uniting game developers and researchers on the same quest to evolve global illumination in games from being mostly static, or sometimes perceptually real-time, to fully real-time.
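For readers who want the formal target: every GI technique in games ultimately approximates the rendering equation,

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

The recursion hidden in this formula (incident radiance L_i at one point is outgoing radiance L_o from another point in the scene) is precisely what makes fully dynamic GI so much more expensive than direct lighting.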
Presenter:
Colin Barré-Brisebois (Electronic Arts SEED)
Bio:
Colin Barré-Brisebois is a Senior Rendering Engineer at Electronic Arts SEED, a cross-disciplinary team working on cutting-edge, industry-leading future graphics technologies and creative experiences. Prior to joining SEED, Colin was a Technical Director at WB Games' Montréal studio on the Batman: Arkham franchise, where he led the rendering team and developed core lighting and shading technology. Before Warner Brothers, he worked on many games at Electronic Arts, including Battlefield 3, Need for Speed, Army of TWO, Skate, and other AAA franchises. Colin has presented at various conferences, including GDC, GTC, I3D, and SIGGRAPH, as well as at local universities. He also has publications in the ShaderX book series, in ACM/IEEE venues, and on his personal blog.
Materials:
Abstract:
The past few years have seen a sharp increase in the complexity of the rendering algorithms used in modern game engines. Large portions of the rendering work are increasingly written in GPU computing languages and decoupled from the conventional "one-to-one" pipeline stages for which shading languages were designed. Following Tim Foley's talk on shading language directions from SIGGRAPH 2016's Open Problems course, we explore example rendering algorithms that we want to express in a composable, reusable, and performance-portable manner. We argue that a few key constraints in GPU computing languages inhibit these goals, some of which are rooted in hardware limitations. We conclude with a call to action detailing specific improvements we would like to see in GPU compute languages, as well as in the underlying graphics hardware.
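To make the contrast with "one-to-one" pipeline stages concrete, here is a minimal sketch (in Python/NumPy, standing in for a GPU compute kernel) of a Hillis-Steele inclusive prefix sum, a canonical compute pattern in which each output depends on many inputs and which therefore does not fit the one-fragment-in, one-fragment-out shading model. The function name and structure are illustrative assumptions, not material from the talk.

    import numpy as np

    def inclusive_scan(values):
        # Hillis-Steele inclusive prefix sum: O(log n) data-parallel passes.
        # Each pass is a uniform step over all elements; the cross-element
        # dependencies are what compute languages exist to express.
        out = values.astype(np.int64)
        offset = 1
        while offset < len(out):
            shifted = np.concatenate([np.zeros(offset, out.dtype), out[:-offset]])
            out = out + shifted  # element i accumulates element i - offset
            offset *= 2
        return out

    print(inclusive_scan(np.array([3, 1, 4, 1, 5])))  # [ 3  4  8  9 14]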
Presenter:
Andrew Lauritzen (Electronic Arts SEED)
Bio:
Andrew Lauritzen (@AndrewLauritzen) is a rendering engineer on the Electronic Arts SEED team, where he works on rendering technology for future games. Before SEED, he was a graphics software engineer at Intel for eight years, collaborating with game developers and researchers to improve the algorithms, APIs, and hardware used for real-time rendering. He received his M.Math in computer science from the University of Waterloo in 2008, where his research focused on variance shadow maps and other shadow filtering algorithms.
Materials:
Abstract:
Deep learning has recently transformed fields such as computer vision and speech recognition: will the same thing happen to real-time rendering? In this talk we examine the strange but exciting possibility of deep learning shaping the future of real-time rendering. First, we give a high-level introduction to deep learning and survey its impact on computer graphics to date. We then demonstrate how to use deep generative models to tackle long-standing rendering problems such as antialiasing. Finally, we discuss open problems and propose future directions for research.
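The core idea of learning-based antialiasing is supervised reconstruction: train a model to map a cheap, corrupted image toward an expensive reference. The toy sketch below (an assumption made here for illustration, not material from the talk) captures that framing in one dimension by fitting a three-tap filter with gradient descent; the deep generative models discussed in the talk replace this linear filter with a neural network.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for antialiasing as supervised reconstruction:
    # learn filter taps that map a corrupted signal toward a clean reference.
    t = np.linspace(0.0, 1.0, 256)
    reference = np.sin(2.0 * np.pi * 4.0 * t)                  # "supersampled" target
    corrupted = reference + 0.3 * rng.standard_normal(t.size)  # cheap, noisy input

    taps = 0.1 * rng.standard_normal(3)  # learnable 3-tap filter
    lr = 0.05
    basis = np.eye(3)  # unit impulses, used to differentiate the convolution
    for step in range(500):
        predicted = np.convolve(corrupted, taps, mode="same")
        d_loss = 2.0 * (predicted - reference) / t.size  # d(MSE)/d(predicted)
        # d(predicted)/d(tap k) is the input convolved with a unit impulse at k.
        grad = np.array([np.dot(d_loss, np.convolve(corrupted, basis[k], mode="same"))
                         for k in range(3)])
        taps -= lr * grad

    error = np.mean((np.convolve(corrupted, taps, mode="same") - reference) ** 2)
    print(taps, error)  # the taps converge toward a smoothing filter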
Presenter:
Marco Salvi (NVIDIA)
Bio:
Marco Salvi is a Senior Research Scientist at NVIDIA, focusing on high-performance rendering algorithms and future software/hardware architectures for real-time rendering. His research interests include antialiasing, ray tracing, order-independent transparency, image reconstruction, shading caches, and deep learning. Marco has been a researcher for nearly a decade, and in a previous life was responsible for architecting advanced graphics engines, performing low-level optimizations, and developing new rendering techniques for games on PlayStation 2, PlayStation 3, Xbox, Xbox 360, and PC. Marco holds an M.S. in Physics from the University of Bologna, Italy (2001).
Materials:

Contact: