December 20, 2024

Professor honored for research detecting ‘deepfake’ videos

Professor Yu Chen is exploring a unique angle for detecting altered footage

Professor Yu Chen, a faculty member in the Department of Electrical and Computer Engineering at the Thomas J. Watson College of Engineering and Applied Science, in the Data Center of the Engineering and Science Building at Binghamton University's Innovative Technologies Complex. Image Credit: Jonathan Cohen.

False information on the internet is nothing new, but advances in digital technology are making it increasingly difficult to spot what’s fake and what’s real.

One researcher who is exploring a unique angle for figuring out “deepfakes” — manipulated videos of people saying things they did not say — is Binghamton University Professor Yu Chen.

A faculty member at the Thomas J. Watson College of Engineering and Applied Science’s Department of Electrical and Computer Engineering, Chen was recently honored for his contributions to the security, privacy and authentication of optical imagery by SPIE, the international professional society for optical engineering. He and 46 others were elected for 2024 as fellows of the organization, which represents 258,000 people from 184 countries.

Chen started as an SPIE student member 20 years ago, while studying for his PhD at the University of Southern California. Since then, his research has been funded by the National Science Foundation, the U.S. Department of Defense, the Air Force Office of Scientific Research (AFOSR), the Air Force Research Lab (AFRL), New York state and various industrial partners. He also has authored or co-authored more than 200 scientific papers.

For his latest deepfake research, Chen drilled down into video files to find “fingerprints” such as background noise and electrical frequency that can’t be changed without destroying the file itself.
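The article doesn't detail Chen's exact method, but one widely studied "fingerprint" of this kind is the electrical-network-frequency (ENF) hum that mains power leaves in recordings, typically near 50 or 60 Hz. The sketch below is a minimal, illustrative assumption of how such a trace could be measured in an audio track — not Chen's actual pipeline — using a synthetic clip and a simple spectral peak search:

```python
# Minimal sketch of electrical-network-frequency (ENF) analysis, one kind of
# unalterable "fingerprint" the article alludes to. Real forensic pipelines
# are far more involved; the function name and parameters are illustrative.
import numpy as np

def dominant_hum_hz(audio, sample_rate, band=(45.0, 65.0)):
    """Estimate the strongest spectral peak in the mains-hum band."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Synthetic two-second recording: a faint 60 Hz mains hum buried in noise.
rate = 8000
t = np.arange(rate * 2) / rate
rng = np.random.default_rng(0)
clip = 0.05 * np.sin(2 * np.pi * 60.0 * t) + 0.01 * rng.standard_normal(t.size)

print(round(dominant_hum_hz(clip, rate), 1))  # peak sits near 60.0 Hz
```

In forensic practice the idea is that this hum drifts slightly over time in a way logged by power grids, so a spliced or generated segment whose hum track doesn't match the rest of the file stands out.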

“We are living in a world where more fake things are mingled with real things,” he said. “It’s raised the bar for each of us to make sense of it all and make decisions about which one you want to believe. Our research is about finding anchor points so that we can have a better sense that something is suspicious.”

Chen believes his research bypasses the need to develop better AIs to fight “bad” AIs, which he sees as an “endless arms race.”

“People look back two to three years ago when deepfakes started, and they can easily tell it’s fake because someone’s eyes are not symmetric, or they’re smiling in a way that’s not natural,” he said. “The next generation of deepfake tools are really good, and you can’t tell that it’s a fake.”

The problems will only multiply as we move into a “metaverse” of augmented reality using products like Google Glass or Apple Vision Pro. What happens when we can’t trust our own eyes?

“We will start to have the physical world — the real world — closely interwoven with a cyber world,” Chen said. “Look at the new Apple goggles that will enable people to leverage cyberspace in their daily lives. Deepfakes will be a huge issue — how can you tell something is real or something is faked?”

One aspect of deepfakes and their spread may be the most difficult to control, and that’s the human factor.

“Social media makes the situation even worse because it’s an echo chamber,” Chen said. “People believe what they want to believe, so they see something they like and they say, ‘Oh, I know that’s true.’ Some influencers try to harvest that for their own purposes.”

Chen will receive his honor in April at the SPIE Defense + Commercial Sensing (DCS) Conference in Washington, D.C., and he’s been invited to speak about his deepfake research at an SPIE conference later this year in Portugal.

“I envision our research paving the way for lives in the future that intricately blend the realms of reality and virtuality,” he said. “I also hope I can help to promote the visibility of Binghamton University in the SPIE community.”