College of Computing, Georgia Tech, Atlanta
8th June 2017, 4:15 p.m.
S2|02 Room C110, Robert-Piloty-Gebäude, Hochschulstr. 10, 64289 Darmstadt
"Large-scale Situational Awareness with Camera Networks and Multimodal Sensing"
Sensors of various modalities and capabilities, especially cameras, have become ubiquitous in our environment. Their intended uses are wide ranging and encompass surveillance, transportation, entertainment, education, healthcare, emergency response, disaster recovery, and the like. Technological advances and the low cost of such sensors enable the deployment of large-scale camera networks in major metropolises such as London and New York. Multimedia algorithms for analyzing and drawing inferences from video and audio have also matured tremendously in recent times. Despite all these advances, large-scale reliable systems for media-rich, sensor-based applications such as surveillance are yet to become commonplace. Why is that? There are several forces at work here. First of all, the system abstractions are just not at the right level for quickly prototyping such applications in the large. Second, while Moore's law has held true for predicting the growth of processing power, the volume of data that applications are called upon to handle is growing similarly, if not faster. Enormous amounts of sensing data are continually generated for real-time analysis in such applications. Further, due to the very nature of the application domain, such analyses have dynamic and demanding resource requirements. The data-intensive nature of these applications, coupled with the lack of the right set of abstractions for programming them, has hitherto made realizing reliable large-scale surveillance systems difficult. The fundamental challenges include dealing with heterogeneity, scalability, virtualization, and mobility. In this talk, I will present some of the challenges and solution approaches we have taken to address the needs of large-scale sensor-based applications, often classified as situation awareness applications, using smart surveillance as a canonical example.
Professor Umakishore Ramachandran received his Ph.D. in Computer Science from the University of Wisconsin, Madison in 1986, and has been on the faculty of Georgia Tech since then. For two years (July 2003 to August 2005) he served as the Chair of the Core Computing Division within the College of Computing. His fields of interest include parallel and distributed systems, computer architecture, and operating systems. He has authored over 100 technical papers and is best known for his work on Distributed Shared Memory (DSM) in the context of the Clouds operating system, and more recently for his work on stream-based distributed programming in the context of the Stampede system.
Currently, he is leading a project on large-scale situation awareness using distributed camera networks and multimodal sensing, with applications to surveillance, connected vehicles, and transportation. He led the definition and implementation of the curriculum for an online MS program in Computer Science (OMSCS) using MOOC technology for the College of Computing, which is currently providing students internationally with an opportunity to pursue a low-cost graduate education in computer science. He has so far graduated 28 Ph.D. students, who are well placed in academia and industry, and is currently advising 5 Ph.D. students. He is the recipient of an NSF PYI Award in 1990, the Georgia Tech doctoral thesis advisor award in 1993, the College of Computing Outstanding Senior Research Faculty award in 1996, the College of Computing Dean's Award in 2003 and 2014, the College of Computing William "Gus" Baird Teaching Award in 2004, the "Peter A. Freeman Faculty Award" from the College of Computing in 2009 and 2013, and the Outstanding Faculty Mentor Award from the College of Computing in 2014. He became an IEEE Fellow in 2014.