There are a variety of communities interested in producing high-fidelity 3D models of real-world objects: video-game designers, archaeologists, movie studios, biomedical engineers, 3D printing hobbyists, and many more. Some of these groups have access to dedicated 3D scanners, built around highly calibrated lasers and other expensive equipment. At the other end of the spectrum, online services exist that attempt to reconstruct a model of your object from a handful of photos that you send to them. The results are sometimes serviceable, but the process is unpredictable.
Perhaps there's a middle ground. The Microsoft Kinect is accessible to consumers, and has a depth camera as well as a video camera. What if we could just wave a Kinect camera around an object, send the data to a program, and get out a highly detailed model of the object, complete with its surface appearance painted right on the model?
3D scanning is an active research area and a rather difficult computational problem. Your project is to dive in head-first, and attempt, through a mix of your own ingenuity and your reading of the efforts of others (that is, academic papers), to make high-quality 3D scanning available to the hobbyist on a budget.
Your objective is to create a program that can take some kind of input from a Kinect and produce an accurate and precise model of the 3D surface structure and appearance of a real-world object.
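As a starting point, the raw input here is a stream of depth images, and the first step of almost any pipeline is to back-project each depth frame into a 3D point cloud using the pinhole camera model. The sketch below shows that step; the intrinsic parameters (`FX`, `FY`, `CX`, `CY`) are placeholder values of the sort reported for the Kinect v1 — in practice you would obtain them from calibration.

```python
import numpy as np

# Hypothetical Kinect depth-camera intrinsics (focal lengths and
# principal point, in pixels). Real values come from calibration.
FX, FY = 594.2, 591.0
CX, CY = 339.3, 242.7

def depth_to_point_cloud(depth_mm: np.ndarray) -> np.ndarray:
    """Back-project an (H, W) depth image in millimetres into an
    (N, 3) point cloud in metres, via the pinhole camera model."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_mm / 1000.0          # mm -> m
    valid = z > 0                  # the Kinect reports 0 for "no reading"
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)

# Tiny synthetic example: a 2x2 depth frame with one invalid pixel.
depth = np.array([[1000, 1000], [0, 2000]], dtype=np.float64)
cloud = depth_to_point_cloud(depth)
print(cloud.shape)   # 3 valid pixels -> (3, 3)
```

Aligning the point clouds from many such frames into a single surface (and painting the video camera's colors onto it) is where the real research content of the project lies.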
Both 3D scanning and the geometric operations it relies on are active topics of research in computational geometry, and super-resolution, appearance modeling, and “multi-view scene reconstruction” are widely investigated in the computer vision and graphics research communities. Doing a good job on this project will involve reading up on and implementing published techniques.