Week Overview

| Time | Monday | Tuesday | Wednesday | Thursday | Friday |
|---|---|---|---|---|---|
| 09:00-09:30 | | Fast-Forward | Fast-Forward | Fast-Forward | Fast-Forward |
| 09:30-10:30 | | Keynote 1 | Keynote 2 | Keynote 5 | Keynote 6 |
| 12:30-13:00 | | | She-Lunch | | Best Paper Jury (12:30-12:45) |
| 12:45-13:45 | | | | | Closing Session |
| 15:30-16:00 | | | Poster Q&A | | |
| 17:00-18:00 | Opening Session | | | | |
| 17:45-18:45 | | | EG General Assembly | | |
| 19:00-22:30 | Social Event | Fellows' Dinner (19:00-22:00) | | IPC Dinner (19:00-22:00) | |
| 19:30-21:00 | | | Public Lecture | | |

Paper sessions, tutorials, STARs, and education sessions fill the remaining daytime slots; see the Daily Program and Full Program below.

Daily Program

Monday

09:00-10:30

Tutorials 1-2 · STARs 1-2

10:45-12:15

Tutorials 1-2 (cont.) · STARs 3-4

13:15-14:45

Tutorials 3-4 · STAR 5 · Education 1

15:00-16:30

Tutorials 3-4 (cont.) · STAR 6 · Education 2

17:00-18:00

Opening Session

19:00-22:30

Social Event

Tuesday

09:00-09:30

Fast-Forward

09:30-10:30

Keynote 1

  • George Drettakis · The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes

11:00-12:30

Full Papers 1-3 · Education 3

14:00-15:30

Full Papers 4-5 · Tutorials 5-6

16:00-17:30

Full Papers 6-7 · Tutorials 5-6 (cont.)

19:00-22:00

Fellows' Dinner

Wednesday

09:00-09:30

Fast-Forward

09:30-10:30

Keynote 2

  • Lourdes De Agapito Vicente · Learning to See the 3D World

11:00-12:30

Full Papers 8-10 · Short Paper 1

12:30-13:00

She-Lunch

14:00-15:30

Full Papers 11-12 · Tutorial 7

15:30-16:00

Poster Q&A

16:00-17:30

Full Papers 13-14 · Tutorial 7 (cont.)

17:45-18:45

EG General Assembly

19:30-21:00

Public Lecture: Keynote 3 & Keynote 4

  • 19:30 · Björn Ommer · Towards Efficient World Models for Visual Intelligence
  • 20:15 · Anatole Lécuyer · Shaping the future of our 3D immersion in digital worlds

Thursday

09:00-09:30

Fast-Forward

09:30-10:30

Keynote 5

  • Jaakko Lehtinen · Graphics' Final Frontier

11:00-12:30

Full Papers 15-16 · Short Paper 2 · Invited STARs 7-8

14:00-15:30

Full Papers 17-18 · Tutorial 8 · STAR 9

16:00-17:30

Full Papers 19-20 · Tutorial 8 (cont.)

19:00-22:00

IPC Dinner

Friday

09:00-09:30

Fast-Forward

09:30-10:30

Keynote 6

  • Bernd Bickel · Design in the Age of AI and Spatial Computing

11:00-12:30

Full Papers 21-22 · Short Papers 3-4

12:30-12:45

Best Paper Jury

12:45-13:45

Closing Session

Full Program

Tuesday · 11:00-12:30

Full Paper 1

Animating Humans with Gestures and Style

  1. Conversational Gesture Model (CGM): Extending Speaker-Centric Audio-Driven Motion Generation to Full Conversation Gestures

    Tomer Koren, Adi Rosenthal, Doron Friedman, Ariel Shamir

  2. Skeletal-Driven Animation of Anatomical Humans via Neural Deformation Gradients

    Gerritt Nolte, Fabian Kemper, Ulrich Schwanecke, Mario Botsch

  3. Dance Like a Chicken: Low-Rank Stylization for Human Motion Diffusion

    Haim Sawdayee, Chuan Guo, Guy Tevet, Bing Zhou, Jian Wang, Amit Haim Bermano

  4. SkinCells: Sparse Skinning using Voronoi Cells

    Egor Larionov, Igor Santesteban, Hsiao-yu Chen, Wei-Chin Gene Lin, Philipp Herholz, Ryan Goldade, Ladislav Kavan, Doug Roble, Tuur Stuyck

  5. VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

    Fatemeh Zargarbashi, Dhruv Agrawal, Jakob Buhmann, Martin Guay, Stelian Coros, Robert W. Sumner

Tuesday · 11:00-12:30

Full Paper 2

Diffusion and Beyond: Controlled Image Generation and Stylization

  1. Graph-based Black and White Stylization

    Javid Ali Sattari, Jimmy Lord, David Mould

  2. Palette Aligned Image Diffusion

    Elad Aharoni, Noy Porat, Dani Lischinski, Ariel Shamir

  3. Latent Diffusion-GAN: Adversarial Learning in the Autoencoded Latent Space

    U-Chae Jun, Jaeeun Ko, Jiwoo Kang

  4. Edge-preserving noise for diffusion models

    Jente Vandersanden, Sascha Holl, Xingchang Huang, Gurprit Singh

  5. TextFlux: An OCR-Free DiT Model for High-Fidelity Multilingual Scene Text Synthesis

    Yu Xie, Jielei Zhang, Pengyu Chen, Weihang Wang, Longwen Gao, Peiyi Li, Qian Qiao, Zhouhui Lian

Tuesday · 11:00-12:30

Full Paper 3

Structured for Speed: Spatial Representations for Real-Time Rendering

  1. Real-Time Rendering of Dynamic Line Sets using Voxel Ray Tracing

    Bram Kraaijeveld, Andrei C. Jalba, Anna Vilanova, Maxime Chamberland

  2. EBOAT: Error-Bounded Adaptive Tessellation of Singularities for Real-Time Catmull-Clark Subdivision Surfaces Rendering

    Yajun Zeng, Yang Lu, Cong Chen, Ruicheng Xiong, Ligang Liu

  3. Encoding Occupancy in Memory Location for Efficient and Compact High-Resolution Voxel Structures

    Jaina Modisett, Markus Billeter

  4. NePO: Neural Point Octrees for Large-scale Novel View Synthesis

    Noah Lewis, Darius Rückert, Marc Stamminger, Linus Franke

  5. NAADF: Globally Illuminated VoxelWorlds Accelerated with Nested Axis-Aligned Distance Fields

    Annalena Ulschmid, Marvin Ott, Jonas Macho, Michael Wimmer, Stefan Ohrhallinger

Tuesday · 14:00-15:30

Full Paper 4

Covering the Surface: Texture Synthesis, Patterns, and Compression

  1. Real-time by-example texture synthesis and filtering using local statistics exchange

    Nicolas Lutz, Guillaume Gilet

  2. Variable-Rate Texture Compression: Real-Time Rendering with JPEG

    Elias Kristmann, Michael Wimmer, Markus Schütz

  3. ProcTex: Consistent and Interactive Text-to-texture Synthesis for Part-based Procedural Models

    Ruiqi Xu, Zihan Zhu, Benjamin Ahlbrand, Srinath Sridhar, Daniel Ritchie

  4. Lightmap Compression with Color-Coherent UV Clustering and Cascade Texture Optimization

    Dehan Chen, Hongyu Huang, Yuzhe Luo, Hao Xu, Yuqing Zhang, Sipeng Yang, Xifeng Gao, Heng Cai, Chao Li, Xiaogang Jin

  5. Controllable Intrinsic Surface Pattern Generation Using Slime Mold Simulations

    Jeffrey Layton, Faramarz Samavati, Adam Runions

Tuesday · 14:00-15:30

Full Paper 5

Learning Surface and Scene Representations

  1. Mesh Processing Non-Meshes via Neural Displacement Fields

    Yuta Noma, Zhecheng Wang, Chenxi Liu, Karan Singh, Alec Jacobson

  2. Basis Networks: Learning basis functions for free-form triangulations

    Tobias Djuren, Marc Alexa

  3. Self-supervised Learning of Fine-to-Coarse Cuboid Shape Abstraction

    Gregor Kobsik, Morten Henkel, Yanjiang He, Victor Czech, Tim Elsner, Isaak Lim, Leif Kobbelt

  4. TLC-Plan: A Two-Level Codebook Based Network for End-to-End Vector Floorplan Generation

    Biao Xiong, Zhen Peng, Ping Wang, Qiegen Liu, Xian Zhong

  5. Floorplan Generation by Alternating Geometry and Semantics Optimization

    Wenming Wu, Sizhe Hu, Ligang Liu, Liping Zheng, Xiao-Ming Fu

Tuesday · 16:00-17:30

Full Paper 6

Go with the Flow: Fluid Simulation and Rendering

  1. Adaptive Optical Layers: Efficient Tall Cell Grids for Liquid Simulation

    Fumiya Narita, Takashi Kanai

  2. A Semi-Analytical Energy Model for Particle-Based Fluid Simulation Involving Complex Moving Boundaries

    Junyuan Liu, Shusen Liu, Yuzhong Guo, Ruikai Liang, Yin Li, Xiaowei He

  3. Dripping Thin Films for Real-time Digital Painting

    Zoé Herson, Axel Paris, Élie Michel

  4. Fluid Composer: Fluid Detail Composition and Rendering Using Video Diffusion Models

    Duowen Chen, Zhiqiang Lao, Yu Guo, Heather Yu

  5. A Particle-Based Approach to Extract Dynamic 3D FTLE Ridge Geometry

    Daniel Stelter, Thomas Wilde, Christian Rössl, Holger Theisel

Tuesday · 16:00-17:30

Full Paper 7

Structural Geometry: From Fabrication to Fracture

  1. Field-Aligned Surface-Filling Curve via Implicit Stitching

    Giovanni Cocco, Xavier Chermain

  2. Strain-Field Based Segmentation for Fabric Formwork

    Abhinit Sati, Tiffany Bao, Jeff Tedi, Edward Chien, Emily Whiting

  3. Designing inflatable shells using unstructured meshes

    Siyuan He, Arthur Lebée, Mélina Skouras

  4. DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning

    Yuhang Huang, Takashi Kanai

Wednesday · 11:00-12:30

Full Paper 8

From Pixels to Scenes: 3D Reconstruction and Generation

  1. ZeroScene: A Zero-Shot Framework for 3D Scene Generation from a Single Image and Controllable Texture Editing

    Xiang Tang, Ruotong Li, Xiaopeng Fan

  2. GS-2M: Material-aware Gaussian Splatting for High-fidelity Mesh Reconstruction

    Dinh Minh Nguyen, Malte Avenhaus, Thomas Lindemeier

  3. Layer3D: A 3D Layered Representation for Multiview Vector Graphics

    Zhongyue Guan, Yixin Hu, Zeyu Wang

  4. GeoFusionLRM: Geometry-Aware Self-Correction for Consistent 3D Reconstruction

    Ahmet Burak Yildirim, Tuna Saygin, Duygu Ceylan, Aysegul Dundar

  5. UniCross3D: Unified Cross-View and Cross-Domain Diffusion for Consistent Single-Image 3D Generation

    U-Chae Jun, Jaeeun Ko, Jiwoo Kang

Wednesday · 11:00-12:30

Full Paper 9

Motion in the Wild: From Individuals to Crowds

  1. Physics-Based Motion Tracking of Contact-Rich Interacting Characters

    Xiaotang Zhang, Ziyi Chang, Qianhui Men, Hubert P. H. Shum

  2. Step2Motion: Locomotion Reconstruction from Pressure Sensing Insoles

    Jose Luis Ponton, Eduardo Alvarado, Lin Geng Foo, Nuria Pelechano, Carlos Andujar, Marc Habermann

  3. ContactVision: Learning Foot Contact from Video for Physically Plausible Gait Animation

    DaeYong Kim, Gyuseok Yi, Ri Yu

  4. Herds from Video: Learning a Microscopic Herd Model from Macroscopic Motion Data

    Xianjin Gong, James Gain, Damien Rohmer, Sixtine Lyonnet, Julien Pettré, Marie-Paule Cani

  5. MPACT: Mesoscopic Profiling and Abstraction of Crowd Trajectories

    Marilena Lemonari, Andreas Panayiotou, Theodoros Kyriakou, Nuria Pelechano, Yiorgos Chrysanthou, Andreas Aristidou, Panayiotis Charalambous

Wednesday · 11:00-12:30

Full Paper 10

Light Transport: Sampling, Waves, and Denoising

  1. Wave Tracing: Generalizing The Path Integral To Wave Optics

    Shlomi Steinberg, Matt Pharr

  2. Gradient-Domain ReSTIR Path Tracing

    Yu-Chen Wang, Chris Wyman, Markus Kettunen, Daqi Lin, Lifan Wu, Shuang Zhao

  3. Statistical Denoising of Transient Rendering

    Oscar Pueyo-Ciutad, Álvaro López, Diego Gutierrez

  4. Stochastic Pairwise MIS for Unbiased Large-Kernel Reuse in Real-Time

    Trevor Hedstrom, Markus Kettunen, Daqi Lin, Chris Wyman, Tzu-Mao Li

  5. Deep Residual Combiner: A Learned Fusion of Spatial, Temporal, and Multiscale Correlated Pixel Estimates

    Weijie Zhou, Euan Hughes, Toshiya Hachisuka

Wednesday · 14:00-15:30

Full Paper 11

Hierarchical Geometry: Optimization and Simplification

  1. Convex Primitive Decomposition for Collision Detection

    Julian Knodt, Xifeng Gao

  2. Construction of clustered HLOD with As-Simplified-As-Possible boundaries

    Mathieu Ladeuil, Marc Trabucato, Alexis Vaisse, Noura Faraj

  3. Hierarchical Optimization of the As-Rigid-As-Possible Energy

    Hendrik Meyer, Bernd Bickel, Marc Alexa

  4. Embedding Optimization of Layouts via Distortion Minimization

    Alexandra Heuschling, Isaak Lim, Leif Kobbelt

  5. Contouring Signed Distance Fields by Approximating Gradients

    Maximilian Kohlbrenner, Marc Alexa

Wednesday · 14:00-15:30

Full Paper 12

Temporal Vision: Video Generation, Pose, and Narrative

  1. Story2Board: A Training-Free Approach for Expressive Visual Storytelling

    David Dinkevich, Matan Levy, Omri Avrahami, Dvir Samuel, Dani Lischinski

  2. SAGE: Structure-Aware Generative Video Transitions between Diverse Clips

    Mia Kan, Yilin Liu, Niloy J. Mitra

  3. MultiCOIN: Multi-Modal COntrollable Inbetweening

    Maham Tanveer, Yang Zhou, Simon Niklaus, Ali Mahdavi Amiri, Hao (Richard) Zhang, Krishna Kumar Singh, Nanxuan Zhao

  4. SEE4D: Pose-Free 4D Generation via Auto-Regressive Video Inpainting

    Dongyue Lu, Ao Liang, Tianxin Huang, Xiao Fu, Yuyang Zhao, Baorui Ma, Liang Pan, Wei Yin, Lingdong Kong, Wei Tsang Ooi, Ziwei Liu

  5. Enhancing Robust Category-Agnostic Pose Estimation through Multi-Modal Feature Alignment

    Boxuan Li, Juan Liu

Wednesday · 16:00-17:30

Full Paper 13

2D and Beyond: Stylized Animation and Reconstruction

  1. 3D Character Reconstruction from Hand-drawn Model Sheets

    Hyejeong Yoon, Wonjong Jang, Yoonha Hwang, Seungyong Lee

  2. Generative Cutout Animation

    Ivan Puhachov, Noam Aigerman, Thibault Groueix, Mikhail Bessmeltsev

  3. Mixed Super-Circles

    Emile Hohnadel, Thibaut Métivet, Florence Bertails-Descoubes

Vector sketch animation generation with differentiable motion trajectories

    Xinding Zhu, Xinye Yang, Shuyang Zheng, Zhexin Zhang, Fei Gao, Jing Huang, Jiazhou Chen

Wednesday · 16:00-17:30

Full Paper 14

Solving Deformation: Numerical Methods for Elastic Simulation

  1. STAGED: Stress-Tensor Assisted Global-local-global solver for interactive Elastic shape Design

    Liangwang Ruan, Bin Wang, Tiantian Liu, Baoquan Chen

  2. Interpolated Adaptive Linear Reduced Order Modeling for Deformation Dynamics

    Yutian Tao, Maurizio Chiaramonte, Pablo Fernandez

  3. Progressively Projected Newton’s Method

    José Antonio Fernández-Fernández, Fabian Löschner, Jan Bender

  4. Affinification: A Fine Approximation of Deformations

    Alexandre Mercier-Aubin, Teseo Schneider, Paul Kry, Sheldon Andrews

Thursday · 11:00-12:30

Full Paper 15

Digital Humans: From Capture to Control

  1. DexterCap: Affordable and Automated Capture of Complex Hand-Object Interactions

    Yutong Liang, Shiyi Xu, Yulong Zhang, Bowen Zhan, He Zhang, Libin Liu

  2. Improving Facial Rig Semantics for Tracking and Retargeting

    Dalton Omens, Allise Thurman, Jihun Yu, Ron Fedkiw

  3. CANRIG: Cross-Attention Neural Face Rigging with Variable Local Control

    Arad Mohammadi, Sebastian Weiss, Jakob Buhmann, Loïc Ciccone, Robert W. Sumner, Derek Bradley, Martin Guay

  4. GTAvatar: Bridging Gaussian Splatting and Texture Mapping for Relightable and Editable Gaussian Avatars

    Kelian Baert, Mae Younes, Francois Bourel, Marc Christie, Adnane Boukhayma

  5. Neuralocks: Real-Time Dynamic Neural Hair Simulation

    Wei-Chin Gene Lin, Egor Larionov, Hsiao-yu Chen, Doug Roble, Tuur Stuyck

Thursday · 11:00-12:30

Full Paper 16

Measuring and Modeling Material Appearance

  1. High-Gloss SVBRDF Capture Using Bounce Light

    Tomáš Iser, Andrei-Timotei Ardelean, Tim Weyrich

  2. A Texture-Free Multi-Scale Model for Surface-Based Rendering of Knitted Fabrics

    Apoorv Khattar, Jean-Marie Aubry, Ling-Qi Yan, Zahra Montazeri

  3. A Discrete Polydisperse Anisotropic BSDF Model based on the Micrograin Framework

    Kewei Xu, Simon Lucas, Mickael Ribardiere, Benjamin Bringier, Pascal Barla

  4. HiMat: DiT-based Ultra-High Resolution SVBRDF Generation

    Zixiong Wang, Jian Yang, Yiwei Hu, Miloš Hašan, Beibei Wang

  5. Digitisation of Impasto and Gloss in Oil Paintings via Spatially Varying Bidirectional Reflectance Distribution Function Acquisition

    Chih Yang, Tzung-Han Lin

Thursday · 14:00-15:30

Full Paper 17

From Leaf to Planet: Natural Environment Generation and Simulation

  1. LeafFit: Plant Assets Creation from 3D Gaussian Splatting

    Chang Luo, Nobuyuki Umetani

  2. TreeON: Reconstructing 3D Tree Point Clouds from Orthophotos and Heightmaps

    Angeliki Grammatikaki, Johannes Eschner, Pedro Hermosilla, Oscar Argudo, Manuela Waldner

  3. HeatMat: Simulation of City Material Impact on Urban Heat Island Effect

    Marie Reinbigler, Romain Rouffet, Peter Naylor, Mikolaj Czerkawski, Nikolaos Dionelis, Elisabeth Brunet, Catalin Fetita, Rosalie Martin

  4. Authoring Terrestrial Planets with Diffusion Models

    Oliver Borg, James Gain, Éric Guérin, Adrien Peytavie, Marie-Paule Cani, Eric Galin, Guillaume Cordonnier

  5. Terrain Synthesis and Authoring based on Iso-Contours

    Benoit Huftier, Hugo Schott, Eric Galin, Oscar Argudo, Adrien Peytavie, Eric Guérin

Thursday · 14:00-15:30

Full Paper 18

Neural Appearance: Reflectance, Irradiance, and Light Transport

  1. Neural Progressive Photon Mapping

    Justin Benoist, Joey Litalien, Adrien Gruson

  2. Neural Local Inter-reflection Modeling for Garment Fold Rendering

    Jooeun Son, Nuri Ryu, Gyoonseo Kim, Joo Ho Lee, Seungyong Lee

  3. Real-time Rendering with a Neural Irradiance Volume

    Arno Coomans, Giacomo Nazzaro, Edoardo A. Dominici, Christian Döring, Floor Verhoeven, Konstantinos Vardis, Markus Steinberger

  4. A Real-Time Multi-Scale Neural Representation for Complex Surface Reflectance

    Heikki Timonen, Pauli Kemppinen, Jaakko Lehtinen

  5. BRDF Importance Baking: A Lightweight Neural Solution to Importance Sampling General Parametric BRDFs

    Yaoyi Bai, Songyin Wu, Zheng Zeng, Lingqi Yan, Beibei Wang

Thursday · 16:00-17:30

Full Paper 19

Parametric and Structured Geometry

  1. CADrawer: Autoregressive CAD Generation from 3D Sketches

    Yuanbo Li, Gilda Manfredi, Henro Kriel, Chengye Hao, Xianghao Xu, Adrien Bousseau, Daniel Ritchie

  2. Differentiable variable fonts

    Kinjal Parikh, Danny M. Kaufman, David I.W. Levin, Alec Jacobson

  3. 2D Piecewise Linear Scalar Fields with Invertible Integral Lines

    Leon Timm Erxleben, Michael Motejat, Christian Rössl, Holger Theisel

  4. Register-Efficient Linear-Time Evaluation in the Bernstein Basis

    Gábor Valasek, Anna Lili Horváth

  5. Improving the watertightness of parametric surface/surface intersection

    Yuqing Wang, Xiaohong Jia, Jieyin Yang, Bolun Wang, Pengbo Bo, Yang Liu

Thursday · 16:00-17:30

Full Paper 20

Immersive and Interactive: Rendering Across Displays and Devices

  1. Robo-Saber: Generating and Simulating Virtual Reality Players

    Nam Hee Kim, May Liu, Jaakko Lehtinen, Perttu Hämäläinen, James O’Brien, Jason Peng

  2. Real-Time Neural Materials on Mobile VR

    Zilin Xu, Yang Zhou, Yehonathan Litman, Matt Jen-Yuan Chiang, Lingqi Yan, Anton Michels

  3. ML-PEA: Machine Learning-Based Perceptual Algorithms for Display Power Optimization

    Kenneth Chen, Nathan Matsuda, Thomas Wan, Ajit Ninan, Alexandre Chapiro, Qi Sun

  4. ProjectiveShading: Inserting 3D Objects into Indoor Images with Complex Shadows

    Jundan Luo, Xiaolong Wu, Nanxuan Zhao, Lu Wang, Wenbin Li, Christian Richardt

  5. PBR-Inspired Controllable Diffusion for Image Generation

    Bowen Xue, Giuseppe Claudio Guarnera, Shuang Zhao, Zahra Montazeri

Friday · 11:00-12:30

Full Paper 21

Maps and Meshes: Parameterization and Geometry Processing

  1. Adaptive Use of LBO Bases by Shape Feature Scales for High-Quality and Efficient Shape Correspondence

    Chong Zhao, Wencheng Wang, Fei Hou

  2. TABI: Tight and Balanced Interactive Atlas Packing

    Floria Gu, Nicholas Vining, Alla Sheffer

  3. Volume Quantization with Flexible Singularities for Hexahedral Meshing

    Hendrik Brückler, Marcel Campen

  4. Fast Injective Mesh Parameterization via Beltrami Coefficient Prolongation

    Guy Fargion, Ofir Weber

  5. DiskScissors: Cutting Arbitrary-Topology Solids for Bijective Mapping

    Steffen Hinderink, Marcel Campen

Friday · 11:00-12:30

Full Paper 22

Advancing 3D Gaussian Splatting

  1. Multi-Spectral Gaussian Splatting with Neural Color Representation

    Lukas Meyer, Josef Grün, Maximilian Weiherer, Bernhard Egger, Marc Stamminger, Linus Franke

  2. RotGS: Rotation-Guided 3D Gaussian Splatting for Turntable Sequences without Structure-from-Motion

    Kyumin Kim, Dohae Lee, Hanul Baek, In-Kwon Lee

  3. Adaptive Spatio-Temporal 3D Gaussian Splatting for Scenes with Oscillatory Motion

    Petros Tzathas, Jeffrey Hu, Andréas Meuleman, Guillaume Cordonnier, George Drettakis

  4. OUGS: Active View Selection via Object-aware Uncertainty Estimation in 3DGS

    Haiyi Li, Qi Chen, Denis Kalkofen, Hsiang-Ting Chen

  5. Splat-based Metal Artifact Reduction in Cone-Beam CT via Polychromatic Modeling

    Kiseok Choi, Inchul Kim, Jaemin Cho, Hyeongjun Cho, Min H. Kim

Wednesday · 11:00-12:30

Short Paper 1

Thursday · 11:00-12:30

Short Paper 2

Friday · 11:00-12:30

Short Paper 3

Friday · 11:00-12:30

Short Paper 4

Monday · 09:00-10:30 · 10:45-12:15

Tutorial 1

Simulation Methods for Multiphysics Phenomena in Visual Computing

Fabian Löschner, Stefan Rhys Jeske, José Antonio Fernández-Fernández, and Jan Bender

Monday · 09:00-10:30 · 10:45-12:15

Tutorial 2

A Hands-On Introduction to Discrete Differential Operators on Polygon Meshes

Sven Dominik Wagner, Astrid Bunge, and Mario Botsch

Monday · 13:15-14:45 · 15:00-16:30

Tutorial 3

Deep Learning on Meshes and Point Clouds

Ruben Wiersma

Monday · 13:15-14:45 · 15:00-16:30

Tutorial 4

Optimal Transport for Fluid Simulation New and Old

Cyprien Plateau--Holleville and Bruno Lévy

Tuesday · 14:00-15:30 · 16:00-17:30

Tutorial 5

Fast Explicit 3D Reconstructions and How To Use Them

Bernhard Kerbl, Markus Steinberger, Linus Franke, Florian Hahlbohm, and Andrea Tagliasacchi

Tuesday · 14:00-15:30 · 16:00-17:30

Tutorial 6

Effective User Studies in Computer Graphics: From Pixels to Perception

Daniel Martin, Martin Weier, Piotr Didyk, Mauricio Flores-Vargas, Ernst Kruijff, Rachel McDonnell, and Sandra Malpica

Wednesday · 14:00-15:30 · 16:00-17:30

Tutorial 7

Convex Optimization in Computer Graphics

Leticia Mattos Da Silva

Thursday · 14:00-15:30 · 16:00-17:30

Tutorial 8

Introduction to Optimization Time Integration for Solids and Fluids

Jiayi Eris Zhang and Minchen Li

Monday · 09:00-10:30

STAR 1

Magnetic Modeling and Simulation for Computer Graphics

Xingyu Ni, Yuechen Zhu, Ruicheng Wang, and Bin Wang

Monday · 09:00-10:30

STAR 2

Advances in Neural 3D Mesh Texturing: A Survey

Sai Raj Kishore Perla, Hao (Richard) Zhang, and Ali Mahdavi-Amiri

Monday · 10:45-12:15

STAR 3

Survey on differential estimators for 3D point clouds

Léo Arnal--Anger, Thibault Lejemble, David Coeurjolly, Loïc Barthe, and Nicolas Mellado

Monday · 10:45-12:15

STAR 4

Establishing Shape Correspondences: A Survey

Alexandra Heuschling, Hannah Meinhold, and Leif Kobbelt

Monday · 13:15-14:45

STAR 5

How to Build Digital Humans?

Wojciech Zielonka, Tobias Kirschstein, Timo Bolkart, Simon Giebenhain, Vanessa Sklyarova, Xiang Deng, Donglai Xiang, Shunsuke Saito, Yebin Liu, Matthias Niessner, and Justus Thies

Monday · 15:00-16:30

STAR 6

Non-Rigid 3D Shape Correspondences: From Foundations to Open Challenges and Opportunities

Aleksei Zhuravlev, Lennart Bastian, Dongliang Cao, Nafie El Amrani, Paul Roetzer, Viktoria Ehm, Riccardo Marin, Hiroki Nishizawa, Shigeo Morishima, Christian Theobalt, Nassir Navab, Daniel Cremers, Florian Bernard, Zorah Lähner, and Vladislav Golyanik

Thursday · 11:00-12:30

Invited STARs - 45 min each

STAR 7

Deep Sketch-Based 3D Modeling: A Survey

https://doi.org/10.1111/cgf.70302

Alberto Tono, Jiajun Wu, Gordon Wetzstein, Iro Armeni, Hariharan Subramonyam, James Landay, and Martin Fischer

STAR 8

A Survey of Inter-Prediction Methods for Time-Varying Mesh Compression

https://doi.org/10.1111/cgf.15278

Jan Dvořák, Filip Hácha, Gerasimos Arvanitis, David Podgorelec, Konstantinos Moustakas, and Libor Váša

Thursday · 14:00-15:30

STAR 9

State-of-the-art in deep learning approaches for single-panorama indoor modeling and exploration

Giovanni Pintore, Marco Agus, Jens Schneider, and Enrico Gobbetti

Monday · 13:15-14:45

Education 1

Monday · 15:00-16:30

Education 2

Tuesday · 11:00-12:30

Education 3

Tuesday · 09:30-10:30

Keynote 1

The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes

George Drettakis

Wednesday · 09:30-10:30

Keynote 2

Learning to See the 3D World

Lourdes De Agapito Vicente

Wednesday · 19:30-21:00

Keynote 3

Towards Efficient World Models for Visual Intelligence

Björn Ommer

Keynote 4

Shaping the future of our 3D immersion in digital worlds

Anatole Lécuyer

Thursday · 09:30-10:30

Keynote 5

Graphics' Final Frontier

Jaakko Lehtinen

Friday · 09:30-10:30

Keynote 6

Design in the Age of AI and Spatial Computing

Bernd Bickel

Wednesday · 15:30-16:00

Poster Q&A

Keynotes

George Drettakis
Inria Université Côte d’Azur

The Quest for Easy Creation, Editing and Real-Time Rendering of Realistic 3D Scenes

In this talk we will present over 25 years of research motivated by the goal of providing solutions to easily create realistic 3D scenes by capturing real content, allowing subsequent editing -- most importantly re-lighting -- and allowing real-time rendering of the resulting scenes. We will look back at several early projects, and how they allowed us to advance our understanding of the fundamental difficulties of developing algorithms to achieve our goals by building on physics-based rendering and traditional graphics solutions. We will then stress the importance of being open to new tools and methodologies, most importantly deep learning. We will illustrate how adopting such techniques and methodologies early provided a significant advantage, both in relighting and real-time rendering for novel view synthesis, in part by building on our expertise in realistic rendering for training data generation. We will discuss the importance of efficiency and optimization even in early stages of these research projects, and finally discuss how the power of recent generative models provides exciting new possibilities, opening the way to powerful solutions to our overarching goals of easily creating, editing and rendering realistic 3D content.

George Drettakis graduated in Computer Science (CS) in Crete, Greece, obtained an M.Sc. and a Ph.D. (1994) in CS at the University of Toronto, Canada, under the supervision of Eugene Fiume, followed by an ERCIM postdoc in Grenoble, Barcelona and Bonn (1994-95). He obtained an Inria researcher position in the iMAGIS group in Grenoble (1995), and the degree of "Habilitation" at the University of Grenoble (1999). In 2000 he founded the REVES research group at INRIA Sophia-Antipolis (2002-2015), followed by the current GRAPHDECO group. He has received several awards: the Eurographics (EG) Outstanding Technical Contributions award in 2007, EG Distinguished Career Award (2024), Inria-French Academy of Sciences Grand Prix (2024), the ACM SIGGRAPH Computer Graphics Achievement Award (2025), and was named EG (2007) and ACM Fellow (2026). He was papers co-chair of the EG Rendering Workshop in 1998, EG conference in 2002 and 2008, technical papers chair of SIGGRAPH Asia 2010, associate editor for major graphics journals, and chairs the EG working group on Rendering. His research spans many topics in computer graphics, with an emphasis on rendering. He initially concentrated on lighting and shadow computation and subsequently worked on 3D audio, perceptually-driven algorithms, virtual reality and 3D interaction. In recent years he has focused more on learning-based appearance capture, relighting and novel view synthesis (previously known as image-based rendering), culminating in the development of 3D Gaussian Splatting.

Jaakko Lehtinen
Aalto University / NVIDIA Research

Graphics' Final Frontier

Computer graphics has undergone an incredible journey from its (visually) humble beginnings into our current ability to simulate the appearance and motion of complex scenes to a degree often difficult to distinguish from reality. Yet closing the final gap to the look and feel of live action footage remains elusive. At the same time, modern purely data-driven methods routinely surpass the realism of traditional first-principles graphics approaches, but come with only coarse controls. In this talk, I'll draw on my experience of working with both classic and data-driven image generation techniques and attempt to outline a vision for the "endgame" of computer graphics that synthesizes the classic first-principles approaches with the power of data.

Jaakko is an associate professor at Aalto University and a distinguished research scientist at NVIDIA Research in Helsinki, Finland. He works on computer graphics and machine learning, with particular interests in generative modelling, realistic image synthesis, and appearance acquisition and reproduction. Overall, he's fascinated by the combination of machine learning techniques with physical simulators in the search for robust, interpretable AI. Prior to his current positions, Jaakko spent 2007-10 as a postdoc with Frédo Durand at MIT. Before his research career, he worked for the game developer Remedy Entertainment in 1996-2005 as a graphics programmer, and contributed significantly to the graphics technology behind the worldwide blockbuster hit games Max Payne (2001), Max Payne 2 (2003), and Alan Wake (2009).

Lourdes De Agapito Vicente
University College London / Synthesia Technologies

Learning to See the 3D World

Building algorithms that can emulate human 3D perception, using as input single images or video sequences taken with a consumer camera, proved to be a challenging task for years but has recently seen astounding progress. For decades, machine learning solutions faced the challenge of scarcity of 3D annotations, encouraging important advances in weak and self-supervision. However, recent efforts in large-scale paired image-3D dataset collection have led to a paradigm shift and fully supervised feed-forward large 3D reconstruction models have become a reality. In this talk I will describe progress in both static and dynamic 3D reconstruction, from early optimization-based solutions that captured sequence-specific 3D models, towards more powerful 3D-aware neural representations that can be trained from 2D image supervision only, to today’s large transformer-based, multi-view feed-forward models for metric-scale dense 3D reconstruction. I will also describe the successful commercial uptake of this technology and will show its application to AI-driven video synthesis.

Lourdes holds the position of Professor of 3D Vision at the Department of Computer Science, University College London (UCL) where she heads the Vision and Imaging Science Group. She received her BSc, MSc and PhD degrees from Universidad Complutense de Madrid (Spain). In 1997 she joined the Robotics Research Group at the University of Oxford as an EU Marie Curie Fellow. In 2001 she was appointed Lecturer at Queen Mary University of London, where she held an ERC Grant. Lourdes joined UCL in 2013 and was promoted to full professor in 2015. Her research in computer vision has consistently focused on the inference of 3D information from images or videos acquired with a single camera. Lourdes has served as Program Chair for CVPR 2016 and ICCV 2023, serves regularly as Area Chair for the top Computer Vision conferences (CVPR, ICCV, ECCV) and was Keynote speaker at ICRA 2017, ICLR 2021 and ECCV'24. Lourdes is co-founder of London-based startup Synthesia, the world’s largest AI video generation platform for business, currently valued at $4B. Synthesia's text-to-video technology allows users to create professional videos directly on the browser, removing the physical constraints of conventional production.

Bernd Bickel
ETH Zurich

Design in the Age of AI and Spatial Computing

As the boundaries between the digital and physical worlds blur, we face a profound opportunity to reimagine how we design the world around us. While advanced manufacturing, artificial intelligence, and spatial computing offer unprecedented potential for architecture, engineering, and art, their impact is often limited by a lack of design tools that can seamlessly bridge human creativity with physical realizability. In this talk, I will explore the transformation of design workflows from traditional CAD tools toward intelligent design systems. I will discuss how optimization-based design and tailored data-driven models enable novel approaches for interactive shape exploration and beyond, demonstrating their applicability to challenges ranging from intricate microstructures to high-performance building facades. A central theme is the control problem: the inherent tension between the probabilistic nature of modern generative AI and the high precision and editability required for professional engineering. I will conclude by reflecting on the evolving role of algorithms as creative partners. I will share a vision for a future where technology provides the "digital superpowers" that complement rather than replace human intuition, enabling us to build a more sustainable, functional, and resilient world.

Bernd Bickel is a Full Professor of Computational Design at ETH Zurich and a Research Scientist at Google. He previously served as a Professor and Vice President at ISTA and worked as a Research Scientist at Disney Research. He received his PhD in Computer Science from ETH Zurich in 2010. His research intersects visual computing, digital fabrication, and machine learning, focusing on computational tools that bridge digital design and physical manufacturing. His work includes high-fidelity performance capture, data-driven material modeling, functional metamaterials, and creative AI & generative design, integrating physics-based simulation with machine learning to create high-performance structures and systems. Bernd’s contributions have been recognized with a Technical Achievement Award from the Academy of Motion Picture Arts and Sciences (2019), the ACM SIGGRAPH Significant New Researcher Award (2017), an ERC Starting Grant (2016), and the ETH Medal (2011) for his doctoral dissertation.

Anatole Lécuyer
Inria Rennes/IRISA

Shaping the future of our 3D immersion in digital worlds

Virtual reality (VR) naturally evokes a set of advanced technologies designed to immerse users in synthetic 3D worlds simulated in real time by a computer. Through dedicated interfaces such as head‑mounted displays, VR applications enable powerful experiences, transporting users to imaginary places or allowing them to interact with virtual characters and remote people. The first VR systems date back to the 1960s, but today we are living through a pivotal moment for the field, as it steadily moves toward widespread, mass‑market adoption. In this talk, we will explore the next steps for VR technologies. We will first argue that VR is progressively introducing greater physical engagement into 3D human-computer interaction, for example through haptic technologies (tactile or force feedback) or through virtual embodiment via self‑avatars (anthropomorphic representations of the user within a virtual environment). We will also examine the ongoing convergence of VR with physiological and neural interfaces, pointing toward future interactive systems that directly leverage users' cognitive states and open the door to even more compelling and holistic experiences. The talk will be illustrated with some of our latest scientific results, offering a glimpse of what could become the future of our 3D immersion in digital worlds.

Anatole Lécuyer is Director of Research at Inria, the French National Institute for Research in Digital Science and Technology, based in Rennes. For more than 20 years, he has been conducting research in the field of virtual reality, exploring new ways of interacting with virtual worlds, such as haptic or neural interfaces. He is the co‑author of over 250 scientific publications and 15 patents. He serves as an expert for numerous organizations, including the French National Research Agency and the European Commission. He has served as Associate Editor of IEEE Transactions on Visualization and Computer Graphics and of the Presence journal. He was General Chair of the IEEE Virtual Reality Conference (2025), Program Chair of the IEEE Virtual Reality Conference (2015-2016), and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017). Anatole Lécuyer received the Inria–Académie des Sciences Young Researcher Award in 2013 and the IEEE VGTC Technical Achievement Award in Virtual/Augmented Reality in 2019, and was inducted into the IEEE Virtual Reality Academy in 2022.

Björn Ommer
Ludwig Maximilian University of Munich

Towards Efficient World Models for Visual Intelligence

Visual intelligence requires more than perception or the generation of plausible images or videos. It requires world models that represent the state of the world and how it changes. While recent progress in learning scene appearance from images and video has been remarkable, explicit models of kinematics are still largely missing: current video models are computationally costly, focus on synthesizing only a single likely future, and paint future pixels rather than explicitly representing all possible motions that could lead there. In this talk, I will present recent progress toward efficient world models that make dynamics directly accessible, represent a multitude of possible futures, and allocate computation adaptively to the dynamic content of a scene rather than uniformly to individual pixels. Consequently, efficiency is not merely a matter of speed. It becomes a modeling principle that shapes what world models represent, how they reason, and what applications they make possible. The talk will then broaden the perspective and ask what happens when generative AI turns intelligence into a scalable, widely accessible commodity, propelling us from an information society toward a knowledge society with democratized access to actionable knowledge. Efficient world models, in this sense, are not simply compressed versions of larger systems, but a step toward visual intelligence grounded in dynamics, uncertainty, and efficient reasoning.

Björn Ommer is a Full Professor of Computer Science at LMU Munich, where he leads the Computer Vision & Learning Group. Previously he was a full professor at Heidelberg University. After studying computer science and physics at the University of Bonn, he earned a Ph.D. from ETH Zurich and held a postdoc position at UC Berkeley. He is LMU's Chief AI Officer, a director of the Bavarian AI Council, and an ELLIS Fellow, and has served as an editor for IEEE T-PAMI and on the boards of numerous CVPR, ICCV, ECCV, and NeurIPS conferences. Björn's research interests are in generative AI, visual understanding, and explainable neural networks. His group developed several influential approaches in generative modeling, such as Stable Diffusion, which have seen broad adoption across academia, industry, and beyond, and which reflect his broader goal of advancing the democratization of generative AI.