About this Event
More than 80% of consumer Internet traffic is video, and most of it is recorded video. Meanwhile, many organizations (such as national parks, vacation resorts, and departments of transportation) provide real-time visual data (images or videos). These sources allow Internet users to observe events remotely. This talk explains how to discover real-time visual data on the Internet. The discovery process uses a crawler to reach many web pages. The information on these web pages is analyzed to identify candidates for real-time data. Each candidate is downloaded multiple times over an extended period, and changes between downloads are detected to determine whether it is likely to provide real-time data. The data can be used during an emergency; for example, viewers may check whether a street is flooded and impassable. It is also possible to use the data to observe long-term trends, such as how people reacted to movement restrictions during the COVID-19 pandemic.
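As a rough illustration of the change-detection step described above, the sketch below is a minimal, hypothetical example (not the speaker's actual system): it downloads a candidate image URL several times and compares content hashes to guess whether the source updates in real time. The URL, sample count, and interval are placeholder assumptions.

    import hashlib
    import time
    import urllib.request

    def looks_like_realtime(url, samples=3, interval_seconds=60):
        """Download `url` several times and hash the bytes; if the content
        changes between downloads, the source may provide real-time data."""
        hashes = set()
        for _ in range(samples):
            with urllib.request.urlopen(url, timeout=10) as response:
                hashes.add(hashlib.sha256(response.read()).hexdigest())
            time.sleep(interval_seconds)
        # More than one distinct hash means the content changed over time.
        return len(hashes) > 1

    # Example with a hypothetical camera image URL:
    # print(looks_like_realtime("https://example.org/camera.jpg"))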
Speaker:
Dr. Yung-Hsiang Lu is a professor at the Elmore Family School of Electrical and Computer Engineering at Purdue University. He is an IEEE Fellow (2021), ACM Distinguished Scientist (2013), and ACM Distinguished Speaker (2013). From 2015 to 2019, he was a co-founder and adviser of a technology startup that received SBIR-1 and SBIR-2 (Small Business Innovation Research) awards. From 2020 to 2022, he was the director of the John Martinson Engineering Entrepreneurial Center at Purdue University. His research topics include efficient computer vision for embedded systems and cloud and mobile computing. He leads a research project analyzing real-time video streams from thousands of network cameras. He has been the lead organizer of the IEEE Low-Power Computer Vision Challenge since 2015. He has published two books: Intermediate C Programming (ISBN 9781498711630) and Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence (editor, ISBN 9780367744700). More about him at: