Ahuja to design automated systems for categorizing visual data
By Elise King, Coordinated Science Lab
March 30, 2012
- ECE Professor Narendra Ahuja has received a grant to develop an image matching framework that programs computers to learn and recognize whether and where objects appear in images.
- The framework uses a region-based representation, treating image regions as the basic features for recognition.
- Computers using this framework will learn objects from a series of example images and then recognize those objects in new images.
As the volume of digital information continues to grow, automated systems for categorizing visual data are becoming increasingly important: they keep visual data organized and allow people to access the data they need quickly, efficiently, and relevantly.
ECE Professor Narendra Ahuja recently received a 3-year, $419,655 grant from the Office of Naval Research (ONR) to design this type of automated system. Through this grant, Ahuja, a Donald Biggar Willett Professor, hopes to develop an image matching framework that can be used to program computers to learn and recognize whether and where objects appear in images.
Ahuja previously received an NSF EAGER grant to research the problem of categorizing visual data. Through this recent ONR grant, Ahuja and other researchers will build on the strengths of approaches used in previous research to take a new approach to this problem: using regions to capture the basic information needed to recognize objects.
By using a region-based representation, computers will be programmed to treat regions as features, and can therefore capture properties such as the relative spatial layout of an object's parts in order to recognize the object.
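The article does not publish the project's algorithm, but the idea of matching by relative spatial layout can be illustrated with a toy sketch. Here an object is modeled as a set of regions, and what gets compared is the arrangement of region centroids, normalized so the layout is the same regardless of where the object sits in the image. All names and values are hypothetical.

```python
def normalize_layout(regions):
    """Express region centroids relative to the object's bounding box,
    so the layout is invariant to the object's position and scale."""
    xs = [x for x, y in regions]
    ys = [y for x, y in regions]
    w = (max(xs) - min(xs)) or 1
    h = (max(ys) - min(ys)) or 1
    return [((x - min(xs)) / w, (y - min(ys)) / h) for x, y in regions]

def layout_distance(a, b):
    """Average distance between corresponding region centroids.
    Assumes regions are listed in matching order; a real system
    would first solve a region-correspondence problem."""
    na, nb = normalize_layout(a), normalize_layout(b)
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(na, nb)) / len(na)

# A learned "face" model of three regions (two eyes, one mouth), and the
# same face seen elsewhere in a new image, shifted and scaled up.
model = [(10, 10), (30, 10), (20, 30)]
shifted = [(110, 210), (150, 210), (130, 250)]
print(layout_distance(model, shifted))  # prints 0.0 -- same relative layout
```

Because the layout is normalized, the shifted, rescaled copy matches the model exactly; a genuinely different arrangement of regions would produce a large distance.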
The computers that will use this framework will be able to learn objects by being shown a series of images, and then will be able to recognize those objects in new images without human supervision. This kind of technology can be used for everyday applications such as video surveillance, as well as for medical purposes. Ahuja said that, for example, a computer programmed to categorize visual data can be shown a series of images that contain brain tumors so that it learns what a brain tumor looks like. It can then examine a new patient's brain scans and locate and identify tumors on its own.
Similarly, Ahuja said, suppose it is against the rules for someone to enter a locker room alone. The computer can be taught to recognize the difference between one person and multiple people, and can then monitor surveillance video to check whether only one person enters the locker room. If only one person goes into the room, a warning alarm will sound.
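The locker-room rule above reduces to a simple check layered on top of a person detector. The sketch below is hypothetical (the article describes the behavior, not the code): it assumes a detector has already reported how many people appear in each video frame, and flags the frames where someone is alone.

```python
def entries_violating_rule(person_counts_per_frame):
    """Given per-frame person counts from a (hypothetical) detector,
    return the indices of frames where exactly one person is present --
    the condition the article says should trigger a warning alarm."""
    return [i for i, count in enumerate(person_counts_per_frame) if count == 1]

# Frames: empty room, two people enter together, room empties, one person alone.
counts = [0, 0, 2, 2, 0, 1, 1]
print(entries_violating_rule(counts))  # prints [5, 6]
```

The hard part of the real system is producing those counts reliably; the rule itself is trivial once recognition works.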
Researchers also want to make sure this technology is accurate, robust, and fast. “You have to be tolerant of variability, but not too tolerant,” Ahuja said. The computer should be able to tell that a bright red apple and a dark red apple are both apples, but that an apple and an orange are different. “If you don’t know what matters you will be inaccurate,” Ahuja said.
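One way to picture "tolerant, but not too tolerant" is as a distance threshold on a feature descriptor. The toy below (not Ahuja's method; all colors and the threshold are made up for illustration) compares RGB colors: the threshold is wide enough to group a bright and a dark red apple together, but tight enough to separate an apple from an orange.

```python
def color_distance(rgb1, rgb2):
    """Euclidean distance between two RGB colors (0-255 per channel)."""
    return sum((a - b) ** 2 for a, b in zip(rgb1, rgb2)) ** 0.5

THRESHOLD = 120  # hypothetical tolerance, tuned for this toy example

bright_red_apple = (200, 30, 30)
dark_red_apple   = (130, 20, 20)
orange_fruit     = (255, 140, 0)

# Bright and dark red fall within tolerance: both read as apple-colored.
print(color_distance(bright_red_apple, dark_red_apple) < THRESHOLD)  # True
# Apple vs. orange exceeds tolerance: different categories.
print(color_distance(bright_red_apple, orange_fruit) < THRESHOLD)    # False
```

Set the threshold too high and apples and oranges merge into one category; set it too low and the two apples split apart. That trade-off is exactly the accuracy problem Ahuja describes.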
Editor's note: media inquiries should be directed to Brad Petersen, Director of Communications, at email@example.com or (217) 244-6376.