Our goal is to build a tool that consistently replaces materials on objects in static images, where "material" denotes the surface properties that define how a surface interacts with light (typical use case: take a picture of a car and replace the car paint).
The challenge in this project is to extract, from a single image, a consistent pair of geometric properties and incident illumination, given only the material properties of the objects in the image.
This type of problem generally falls into the "shape from shading" category, which is known to be ambiguous. In our situation, however, we do not seek the true geometry of the objects nor the true distribution of incident light, but only a consistent pair that allows the material properties to be replaced while keeping the same viewing and illumination conditions.
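To make the ambiguity concrete, here is a minimal sketch (assuming a simple Lambertian shading model, which the proposal does not specify): two very different geometry/illumination pairs can produce exactly the same pixel intensity, which is why only a *consistent* pair, not the true one, can be recovered.

```python
import numpy as np

def shade(normal, light_dir):
    """Lambertian intensity for a unit-albedo surface: max(0, n . l)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return max(float(np.dot(n, l)), 0.0)

# Pair A: surface faces the camera, light comes in at 45 degrees.
i_a = shade(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]))
# Pair B: surface tilted 45 degrees, light comes from the camera.
i_b = shade(np.array([1.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
# Both pairs yield the same observed intensity, cos(45 degrees),
# so the image alone cannot distinguish them.
```

Any such pair reproduces the observation equally well, which is exactly what the material-replacement use case needs.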
The project has already started, and we have a working pipeline for convex objects. Real objects, however, are rarely convex, so we must also compute, for each pixel, which part of the incident hemisphere of directions it actually sees. We propose to compute this visibility information in order to increase the accuracy of the extracted quantities.
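As a rough illustration of the visibility computation we propose (not the project's actual pipeline), the sketch below estimates, for one surface point of a heightfield, the unoccluded fraction of its incident hemisphere via a horizon search along sampled azimuths. The heightfield representation, the function name, and all parameters are assumptions for the example; a real implementation would operate on the reconstructed geometry.

```python
import numpy as np

def hemisphere_visibility(height, x, y, n_dirs=16, n_steps=8, step=1.0):
    """Estimate the fraction of the upper hemisphere visible from
    point (x, y) of a heightfield (hypothetical helper, for illustration).

    For each of n_dirs azimuths, march outward and record the highest
    blocked elevation angle (the horizon); the visible solid-angle
    fraction of that azimuth slice is 1 - sin(horizon)."""
    h0 = height[y, x]
    visible = 0.0
    for k in range(n_dirs):
        phi = 2.0 * np.pi * k / n_dirs
        dx, dy = np.cos(phi), np.sin(phi)
        max_tan = 0.0  # tangent of the highest occluding elevation
        for s in range(1, n_steps + 1):
            xi = int(round(x + dx * s * step))
            yi = int(round(y + dy * s * step))
            if not (0 <= xi < height.shape[1] and 0 <= yi < height.shape[0]):
                break  # left the heightfield: nothing blocks beyond it
            rise = height[yi, xi] - h0
            if rise > 0.0:
                max_tan = max(max_tan, rise / (s * step))
        horizon = np.arctan(max_tan)
        visible += 1.0 - np.sin(horizon)
    return visible / n_dirs

flat = hemisphere_visibility(np.zeros((9, 9)), 4, 4)     # fully visible
walled = np.zeros((9, 9)); walled[:, 6] = 10.0           # tall wall nearby
occluded = hemisphere_visibility(walled, 4, 4)           # partly blocked
```

A point on a flat plane sees the whole hemisphere (visibility 1.0), while a point next to a tall wall sees strictly less, which is the per-pixel quantity that would correct the illumination estimate for non-convex objects.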