Mustafa Doğa Doğan, shown here in front of MIT’s Great Dome. Photo credit: Hannah Harens.
Jane Halpern | Department of Electrical Engineering and Computer Science
Mustafa Doğa Doğan is a third-year PhD student working with Prof. Stefanie Mueller’s Human-Computer Interaction (HCI) Engineering Group at the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL). Recently, Doğan was awarded the prestigious Adobe Research Fellowship for his fascinating work on novel physical tagging mechanisms (called unobtrusive tags), which can contain and communicate digital information about the world. We sat down with Doğan to learn more.
First off, tell me a bit about yourself and your career at MIT!
Certainly! I’m a third-year PhD student in Course 6. Our research group at CSAIL focuses on human-computer interaction (HCI) and digital fabrication, and my research specifically is about the development and identification of unobtrusive tags. I really enjoy being at MIT; it’s a very exciting and collaborative environment!
Outside of my research, I am involved in several student organizations focused on diversity, equity, and inclusion in the MIT and EECS community at large, including THRIVE and its recent initiative, the EECS Graduate Application Assistance Program (GAAP). I am also part of the MIT European Club, where I’m serving as co-president this year, and GradRat (the ring tradition).
Outside of MIT, I enjoy volunteering and giving back to my research community through organizational work for HCI conferences and reviewing. When COVID happened last year, the largest conference we publish at, the ACM CHI Conference on Human Factors in Computing Systems (pronounced ‘kai’), got canceled, which was really disappointing! I was going to present my first first-authored paper—and the conference would have been in Hawaii! I suggested to Stefanie that we try holding a virtual gathering instead, so we decided to co-organize a Zoom event bringing together the authors of 21 CHI papers. We were excited to host more than 120 attendees from all over the world, at a time when everyone was still trying to figure out how Zoom worked, so we are very happy it went so smoothly!
In your research, what big question do you aim to solve?
I describe my research focus, unobtrusive tags, as subtle or invisible fingerprints embedded in objects. My vision is to be able to read these tags in everyday interactions and to transfer information seamlessly between the physical and digital worlds.
Some of my inspiration actually came from Adobe; when I was a young kid, I was very interested in Web and graphic design, and Adobe’s software tools had a big impact on me. I remember my elementary school technology teacher introducing me to Dreamweaver; that’s how I started designing and building Web applications. Many years later, when I was in college, Adobe released Capture, a mobile application that lets you point your phone at an object or an image and extract its outline or colors for use in graphic design, which is useful for rapid design and prototyping. I like to use an analogy to Capture to describe my research: essentially, I want to extend this type of interaction to include objects’ functionality and usage, in a way that’s similar to metadata in digital files. When you listen to an audio file, it has metadata embedded, such as the album title or the artist’s name. In image files, you have embedded data about what camera was used to capture the image, the GPS location, and so on. My vision is for that type of metadata to extend to physical objects, and for the viewer to be able to access information related to objects just by pointing a phone at them.
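To make the digital-metadata analogy concrete, here is a minimal Python sketch (an illustration, not part of Doğan’s work) that reads the metadata already embedded in an ordinary photo using the Pillow library; the file name is a placeholder.

from PIL import Image, ExifTags

# Open an image and read its embedded EXIF metadata
# ("photo.jpg" is a placeholder file name).
img = Image.open("photo.jpg")
exif = img.getexif()

# Map numeric EXIF tag IDs to readable names and print a few common fields,
# such as the camera make, model, and capture timestamp.
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)
    if name in ("Make", "Model", "DateTime"):
        print(f"{name}: {value}")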
What kinds of applications do you envision for this technology?
For this, we can make a distinction between engineered tags and natural tags. With natural tags, I’m thinking that we can take existing materials and objects and look at the natural fingerprints that come with them; for example, we can use the micron-scale textures on their surfaces as fingerprints. Let’s say you are in a workshop with a lot of unlabeled materials around: it can be dangerous to use some of them in certain ways, for instance with a laser cutter. By using these micron-scale textures, we can distinguish between materials that look similar and identify potentially hazardous ones.
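As a rough illustration of how such surface fingerprints might be used (a simplified sketch under my own assumptions, not the group’s actual pipeline), one could summarize each close-up image with a texture descriptor and train an off-the-shelf classifier on the result; the images and labels below are random placeholders.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def texture_histogram(gray_image, points=8, radius=1):
    # Summarize micron-scale surface texture with a local-binary-pattern histogram.
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# Placeholder data: grayscale close-ups of two materials
# (e.g. a laser-safe acrylic vs. a hazardous plastic such as PVC).
rng = np.random.default_rng(0)
images = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(20)]
labels = [0] * 10 + [1] * 10

features = np.array([texture_histogram(img) for img in images])
classifier = SVC(kernel="linear").fit(features, labels)

# Predict the material of a new close-up before cutting it.
new_image = (rng.random((64, 64)) * 255).astype(np.uint8)
print(classifier.predict([texture_histogram(new_image)]))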
Secondly, I mentioned engineered tags; an example is our recent project called G-ID, which I was able to present virtually at CHI 2020. Basically, the idea is that you can 3D print thousands of copies of the same model but give each copy a unique identifier. We do this by manipulating the 3D printer’s path for each copy, so that the surface has a unique texture that is machine-detectable with a conventional smartphone camera. The name G-ID comes from G-code, the machine instruction format that tells a 3D printer which path to follow; each object’s originating fabrication code is its G-code. In the paper, we evaluated how many unique patterns our approach could produce and came up with over 17,000 individual, distinguishable patterns. We show that such small variations can be placed either on the surface of the object or inside it.
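To give a sense of how an identifier could be turned into a unique print, here is a hypothetical sketch: an integer ID is mapped to a combination of slicing settings, which in turn determines the G-code path and hence the surface texture. The specific parameters, ranges, and step sizes below are illustrative placeholders, not the values used in the G-ID paper.

from itertools import product

# Hypothetical slicing parameters varied per copy; the real G-ID parameter
# set and its camera-detectable resolution are defined in the paper.
infill_angles = range(0, 180, 10)              # 18 options
bottom_line_angles = range(0, 180, 10)         # 18 options
line_widths = [0.35, 0.40, 0.45, 0.50, 0.55]   # 5 options

# Every combination corresponds to one distinguishable texture.
combinations = list(product(infill_angles, bottom_line_angles, line_widths))
print(len(combinations), "unique IDs in this toy grid")  # 18 * 18 * 5 = 1620

def slicing_settings(object_id):
    # Map an integer ID to one concrete set of slicing settings, which the
    # slicer then turns into a unique G-code path for that copy.
    infill, bottom, width = combinations[object_id]
    return {"infill_angle": infill, "bottom_line_angle": bottom, "line_width": width}

print(slicing_settings(42))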
As a sample use case that I see as quite relatable to academics, we’ve applied G-ID’s technology to a customized coffee maker that recognizes users by looking at the texture of their 3D-printed mugs, then matching each mug owner to their preferred beverage.
Here, three key covers created as part of Doğan's G-ID project are shown in high magnification, revealing their unique "fingerprints". Image courtesy M. Doğa Doğan.
As another example that is relevant to our research lab, we’re also planning to use this technology on the keys we give to UROPs in our group. Every year, we lend quite a few keys to students, and sometimes not all of them get returned, so Stefanie and I had the idea of tagging each key cover with a unique G-ID; when students come back, we take a picture of each key cover to confirm who it belongs to.
How far away are we from actually being able to touch biological objects and “see” their history?
With machine learning tools, we can pick up on things we humans are not able to see, because computational tools are often better at recognizing much subtler details.
Going back to the natural tags, I have a current project involving deep learning to sense and distinguish different materials at a molecular level. Our hope is that you won’t need an electron microscope to get a level of detail that can give away a material; a cheap $20 camera can do it.
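As a very rough sketch of what such a pipeline might look like (a toy example under my own assumptions, not the actual project), a small convolutional network could be trained to classify materials from close-up photos taken with an inexpensive camera; the architecture, input size, and number of material classes below are arbitrary placeholders.

import torch
import torch.nn as nn

class MaterialNet(nn.Module):
    # Toy convolutional classifier for close-up material photos
    # (placeholder architecture; the project's real model is not described here).
    def __init__(self, num_materials=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_materials)

    def forward(self, x):
        x = self.features(x)        # extract texture features
        x = x.flatten(start_dim=1)  # flatten for the linear layer
        return self.classifier(x)   # one score per candidate material

# A batch of fake 64x64 RGB close-ups stands in for cheap-camera images.
model = MaterialNet()
photos = torch.rand(8, 3, 64, 64)
scores = model(photos)
print(scores.argmax(dim=1))  # predicted material index for each photo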
What is your goal after earning your PhD—do you want to keep working on new technical explorations or develop further applications?
Hopefully both! I’m very much interested in developing human-centered applications, because I enjoy thinking about how we can help humans with computer technology. I love building user experiences and interfaces, whether it’s for advancing digital fabrication tools or facilitating everyday interactions. At the same time, I am really drawn to academia and mentoring—I want to be a professor someday, as I don’t want to miss out on that aspect of learning something new every day and giving back to people.