Meeting Information

All guest lectures will be held in the Lubrano conference room on the 4th floor of the CIT.



Date: Tuesday, March 6


Active Imaging and Display at MERL: A Selective Overview

Ramesh Raskar, Mitsubishi Electric Research Labs (MERL)

Processing for image analysis, display, and user interaction can be significantly simplified by actively modifying and sensing scene illumination, camera parameters, and display configuration. I will present an overview of several projects at MERL in computational photography, active illumination for scanning, and projector-based displays.

Ramesh Raskar joined MERL as a Research Scientist in 2000 after his doctoral research at U. of North Carolina at Chapel Hill. His work spans a range of topics in computer vision and graphics including computational photography, projective geometry, non-photorealistic rendering and intelligent user interfaces. Current projects include flutter shutter camera, composite RFID (RFIG), multi-flash non-photorealistic camera for depth edge detection, locale-aware mobile projectors, high dynamic range video, image fusion for context enhancement and quadric transfer methods for multi-projector curved screen displays.

Dr. Raskar received the TR100 Award (Technology Review's 100 Top Young Innovators Under 35, 2004), the Global Indus Technovator Award 2003 (instituted at MIT to recognize the top 20 Indian technology innovators worldwide), the Mitsubishi Electric Valuable Invention Award 2004, and the Mitsubishi Electric Information Technology R&D Award 2003. He is a member of the ACM and IEEE.

Homepage: http://www.merl.com/people/raskar/raskar.html



Date: Tuesday, March 13


High-Resolution, Real-Time Geometry Video Acquisition Using a Phase-Shifting Method

Song Zhang, Harvard University

High-resolution, real-time 3D geometric shape measurement of dynamically deformable objects has great potential for applications in many areas, including entertainment, medicine, design, and manufacturing. However, because the problem is so challenging, no system with this capability had previously been developed. In this talk I will discuss a recently developed system that achieves it.

The system we developed is based on a digital fringe projection and phase-shifting technique. It uses a single-chip Digital Light Processing (DLP) projector to project computer-generated fringe patterns onto the object, and a high-speed charge-coupled-device (CCD) camera synchronized with the projector to acquire the fringe images at 180 frames per second. Based on a three-step phase-shifting technique, each frame of the 3D shape is reconstructed from three consecutive fringe images, so the 3D data acquisition speed of the system is 60 frames per second (faster than the 24 fps video rate). Together with fast 3D reconstruction algorithms and parallel processing software we developed, high-resolution, real-time 3D shape measurement is achieved at up to 30 frames per second with a resolution of 300K points per frame. Dynamic changes in geometric shape, such as facial expressions, can be accurately measured with such a system.
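
As a rough sketch of the three-step phase-shifting idea described above (not the speaker's actual implementation), the wrapped phase at each pixel can be recovered from three fringe images whose sinusoidal patterns are shifted by 120 degrees; the full pipeline also requires phase unwrapping and phase-to-depth calibration, which are omitted here. A minimal NumPy version, under those assumptions:

import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -120, 0, and +120 degrees."""
    i1, i2, i3 = (np.asarray(x, dtype=np.float64) for x in (i1, i2, i3))
    # Standard three-step formula: phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)
    phi = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    # Data modulation measures fringe contrast; useful for masking unreliable pixels.
    modulation = np.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / (i1 + i2 + i3 + 1e-9)
    return phi, modulation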

Homepage: http://www.math.harvard.edu/~songzhang
Reading: Realtime Shape Measurement, Calibration of Structured Light Systems



Date: Tuesday, March 20


Reflectance Modeling for Vision and Graphics

Todd Zickler, EECS, Harvard University

An image depends on a number of scene properties, including shape, reflectance, illumination and viewpoint. These properties interact to create specular highlights and other intricate visual effects. Understanding and modeling these effects is important for computer vision systems to succeed in real-world environments and for graphics systems to accurately synthesize visual appearance. In the past, the dominant paradigm in computer vision has been to assume that surfaces are purely diffuse (Lambertian) and either ignore complex reflectance effects or treat them as noise. This talk is meant to convey an alternative approach. Although our visual world contains a vast collection of different materials, there are common physical reflectance properties (reciprocity, isotropy, separability, spatial coherence, etc.) that are shared by broad classes of materials. By developing computational tools that exploit these properties, we can use image data more efficiently and improve many vision and graphics systems. To demonstrate this approach, I present tools that can be used for a variety of applications, including 3D reconstruction, modeling, recognition, tracking, and segmentation.

Bio: Todd Zickler received a B. Eng. degree in honors electrical engineering from McGill University in 1996, and an MS degree in electrical engineering from Yale University in 2001. He received a PhD degree in electrical engineering from Yale in 2004, at which point he joined Harvard University as an assistant professor of electrical engineering in the Division of Engineering and Applied Sciences. His research interests span computer vision, image processing and computer graphics, and he is currently focused on image-based modeling and efficient representations for visual appearance. In 2006, he was the recipient of an NSF Career Award titled "Foundations for Ubiquitous Image-based Appearance Capture."
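
To make two of the reflectance properties mentioned in the abstract concrete, here is a small illustrative sketch (not taken from the talk) using a simple Blinn-Phong-style BRDF as a stand-in material. The model is isotropic, since it depends only on angles relative to the surface normal, and the numerical check at the end shows Helmholtz reciprocity: swapping the incoming and outgoing directions leaves the BRDF value unchanged.

import numpy as np

def blinn_phong_brdf(w_i, w_o, n, kd=0.6, ks=0.3, shininess=40.0):
    """Diffuse term plus a specular lobe about the half vector (isotropic, reciprocal)."""
    w_i, w_o, n = (np.asarray(v, dtype=np.float64) / np.linalg.norm(v) for v in (w_i, w_o, n))
    h = (w_i + w_o) / np.linalg.norm(w_i + w_o)  # half vector, symmetric in w_i and w_o
    diffuse = kd / np.pi
    specular = ks * (shininess + 2.0) / (2.0 * np.pi) * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular

n = np.array([0.0, 0.0, 1.0])
w_i = np.array([0.3, 0.1, 0.9])
w_o = np.array([-0.2, 0.4, 0.8])
# Helmholtz reciprocity: both calls print the same value.
print(blinn_phong_brdf(w_i, w_o, n), blinn_phong_brdf(w_o, w_i, n))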

Homepage: http://www.eecs.harvard.edu/~zickler


Date: Tuesday, April 3


3D Scanning at the Yale Graphics Group

Holly Rushmeier, CS, Yale University

I will present a survey of the different lines of research that the Yale Graphics Group is pursuing that combine 3D scanning and computer graphics. Our research involves generating digital objects from measured positions and digital images in a form that makes it possible to analyze objects, and to apply aspects of the objects to new designs. I will describe techniques for capturing and applying the appearance of materials that change over time as a result of weathering processes. I will also briefly discuss methods we are working on to capture consistent color information, fill in missing data in scanned data sets, and capture data in difficult field conditions.

Bio: Holly Rushmeier is a professor of computer science at Yale University. She received the BS, MS, and PhD from Cornell University. Since receiving the PhD she has held positions at Georgia Tech, NIST and IBM TJ Watson Research. Her current research focuses on scanning and modeling of shape and appearance properties, and on applications in cultural heritage. Her recent past projects include a project to create a digital model of Michelangelo's Florence Pieta and models of Egyptian cultural artifacts in a joint project between IBM and the Government of Egypt. Dr. Rushmeier serves on the editorial boards of ACM Transactions on Perception, Computer Graphics Forum and IEEE Computer Graphics and Applications. She has been papers chair or co-chair for several conferences including the ACM SIGGRAPH conference and IEEE Visualization.

Homepage: http://graphics.cs.yale.edu/holly/


Date: Tuesday, April 17


Architecture - Archaeology - Virtual Reality - Virtual Heritage: Rescue the past. Save the world

Donald Sanders, Institute for the Visualization of History

Donald Sanders, President of the Institute for the Visualization of History, will give a presentation about his work in the field of Virtual Heritage, a new profession that he helped pioneer in the early 1990s. The presentation will review the benefits of applying interactive 3D computer graphics technologies to the study of the past. He will then discuss some of the projects he has been involved in and the new insights they have provided to archaeologists and other historians.

Homepage: http://www.vizin.org/