Beyond Perspective Cameras: Multiperspective Imaging, Rendering, and Projection

March 4, 2009
9:15a-10:30a
Halligan 111A
Speaker: Jingyi Yu, University of Delaware

Abstract

A perspective image represents the spatial relationships of objects in a scene as they appear from a single viewpoint. In contrast, a multiperspective image combines what is seen from several viewpoints into a single image. Despite this incongruity of view, effective multiperspective images preserve spatial coherence and can depict, within a single context, details of a scene that are not simultaneously visible from any single view, yet remain easily interpretable by a viewer. In computer vision, multiperspective images have been used to analyze structure revealed by motion and to generate wide field-of-view panoramic images with mirrors.
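
For intuition, one of the simplest multiperspective images is a strip (pushbroom) panorama: as a camera translates past a scene, a fixed pixel column is taken from each frame and the columns are concatenated, so every column of the result is seen from a different viewpoint. Below is a minimal sketch of this construction, assuming the frames are already available as NumPy arrays; it illustrates the general idea rather than a specific method from the talk.

import numpy as np

def strip_panorama(frames, column=None):
    """Concatenate one pixel column from each frame captured by a
    translating camera; every column of the mosaic then has its own
    viewpoint, which is what makes the result multiperspective."""
    # frames: list of H x W x 3 arrays, ordered along the camera path.
    if column is None:
        column = frames[0].shape[1] // 2   # default to the central column
    strips = [f[:, column:column + 1, :] for f in frames]
    return np.concatenate(strips, axis=1)  # H x len(frames) x 3 mosaic

Pushbroom mosaics and stereo panoramas used in structure-from-motion analysis are built in essentially this way, with the choice of column controlling the effective viewpoint of the mosaic.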

In this talk, I will present a complete framework for using multiperspective imaging models in computer graphics and vision. Our multiperspective framework consists of four key components: acquisition, reconstruction, rendering, and display. A multiperspective camera captures a scene from multiple viewpoints in a single image. From the input image, reconstruction algorithms can recover 3D scene geometry using multiperspective stereo matching or shape-from-distortion approaches. One class of surfaces particularly well suited to shape-from-distortion reconstruction is specular (reflective and refractive) surfaces, which can themselves be viewed as general multiperspective cameras. The recovered geometry, along with lighting and surface reflectance, can then be loaded into the multiperspective graphics pipeline for real-time rendering. Finally, we can visualize the rendering results on a unique multiperspective display that combines a single consumer projector with specially shaped mirrors and lenses. Such displays offer an unprecedented degree of flexibility in aspect ratio, size, and field of view.
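
To make the rendering component a bit more concrete, the sketch below shows the core departure from the perspective pipeline: each pixel carries its own ray, here obtained by bilinearly blending four user-specified corner rays, instead of every ray passing through a single center of projection. The corner-ray parameterization and the function name are illustrative assumptions, not the specific camera model or pipeline presented in the talk.

import numpy as np

def pixel_ray(u, v, corner_origins, corner_dirs):
    """Bilinearly blend the four corner rays of the image plane to get a
    per-pixel ray; unlike a pinhole camera, the blended origins need not
    coincide, so the resulting camera is multiperspective."""
    # u, v in [0, 1]; corners ordered (0,0), (1,0), (0,1), (1,1).
    w = np.array([(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v])
    origin = w @ np.asarray(corner_origins, dtype=float)   # (4,3) -> (3,)
    direction = w @ np.asarray(corner_dirs, dtype=float)
    return origin, direction / np.linalg.norm(direction)

Feeding such rays to an ordinary ray tracer renders a multiperspective view directly; a conventional perspective camera is recovered as the special case in which the four corner origins coincide.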

Bio

Jingyi Yu is an assistant professor in the Computer and Information Science Department at the University of Delaware. He received his B.S. from Caltech in 2000, and his M.S. and Ph.D. degrees in EECS from MIT in 2005. His research interests span a range of topics in computer graphics, computer vision, and image processing, including computational photography, medical imaging, non-conventional optics and camera design, tracking and surveillance, and graphics hardware.