
Express yourself

Faculty brings research to life with assistance from the Technology Transfer team

An example of one of thousands of facial expressions represented in computer scientist Lijun Yin's 3D facial modeling database. Image Credit: Jonathan Cohen.

From bestselling videogames like The Last of Us to blockbuster movies like Marvel's Avengers series, 3D models enhance audience immersion. Computer-generated elements are manipulated to illustrate emotion in increasingly realistic ways.

While the use of 3D models in media is commonplace today, improving drastically in quality year after year, it wasn't that long ago that such technology seemed like something out of science fiction.

Binghamton University computer science professor Lijun Yin has been, and still is, a pioneer in the ever-evolving field of 3D modeling, conducting research with real-world applications beyond the sphere of entertainment.

Yin began working on 3D facial models in the early 2000s. At the time, other researchers studied human facial expressions using static 2D images or video sequences. However, subtle facial expressions are not easily discernible through a limited 2D perspective, a problem Yin saw as an opportunity.

"I thought, 'Why can't we use 3D models to research human emotions?' Actual conversations happen in 3D space," Yin says.

Worldwide interest

With the assistance of a grant, Yin and fellow researchers developed a 3D facial expression database called BU-3DFE. Using single-shot cameras, Yin and his colleagues created 2,500 facial-expression models from 100 subjects, an ethnically diverse group of participants made up of Binghamton University students and faculty.
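As a rough illustration of how researchers work with a collection like this, here is a minimal Python sketch that indexes a BU-3DFE-style set of 3D models by subject and expression. The directory layout, file-naming scheme and intensity levels are illustrative assumptions, not the database's actual distribution format.

```python
# Minimal sketch: index a BU-3DFE-style dataset by subject and expression.
# The file-naming scheme (e.g. F0001_HA03.obj: subject id, two-letter
# expression code, intensity level) is a hypothetical layout for
# illustration, not the database's actual format.
from collections import defaultdict
from pathlib import Path

# Six prototypic expressions plus neutral, as used in expression research.
EXPRESSIONS = {"AN": "anger", "DI": "disgust", "FE": "fear",
               "HA": "happiness", "SA": "sadness", "SU": "surprise",
               "NE": "neutral"}

def index_dataset(root: str) -> dict:
    """Group 3D model files by (subject, expression) pairs."""
    index = defaultdict(list)
    for path in Path(root).glob("*.obj"):
        subject, code = path.stem.split("_")   # e.g. "F0001", "HA03"
        expression = EXPRESSIONS.get(code[:2], "unknown")
        level = int(code[2:] or 0)             # expression intensity
        index[(subject, expression)].append((level, path))
    return index

if __name__ == "__main__":
    idx = index_dataset("bu3dfe/")  # hypothetical local copy
    print(f"{len(idx)} subject/expression groups indexed")
```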

Yin disclosed his research to Binghamton's Office of Entrepreneurship and Innovation Partnerships, whose Technology Transfer team manages the intellectual property created in University labs. As interest in the database skyrocketed, the team established a program to make it available to scientists and engineers around the globe.

This groundbreaking research in the field of facial models and expressions has proven popular, having been cited over 1,000 times. The overwhelming interest showed Yin and his team the vast array of applications where the database could be utilized.

"We saw that we would be able to help applications in various fields such as security, machine vision, biometrics and counseling," Yin says. "We realized that our data held huge value; our database became the benchmark."

In 2008, Yin prepared to release a second database that leveraged advances in technology since his initial research in the early 2000s. The new version captured facial-expression models using 25 frames-per-second video footage. This set was eagerly anticipated by the research community and had huge potential for use by industry.

Scott Hancock, Binghamton University's director of intellectual property (IP) management and licensing, and his team saw an opportunity to disseminate the benefits of this research by offering development licenses to the database.

"We started licensing in the academic realm. Then more and more companies took licenses, including some major players across multiple industries," Hancock says. "The databases became an invaluable tool to train algorithms and measure their performance."
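Hancock's point about benchmarking can be made concrete with a short sketch. The snippet below scores a hypothetical expression classifier's predictions against a labeled test split, the kind of performance measurement the databases enable; the labels and predictions here are toy placeholders.

```python
# Minimal sketch: measure a facial-expression classifier against a
# labeled benchmark split. Labels and predictions are toy placeholders.
from collections import Counter

LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def evaluate(y_true: list[str], y_pred: list[str]) -> dict[str, float]:
    """Return overall accuracy plus per-expression recall."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    totals = Counter(y_true)
    hits = Counter(t for t, p in zip(y_true, y_pred) if t == p)
    per_class = {label: hits[label] / totals[label]
                 for label in LABELS if totals[label]}
    return {"accuracy": correct / len(y_true), **per_class}

# Toy example: ground truth vs. a model's predictions on five samples.
truth = ["anger", "fear", "happiness", "happiness", "sadness"]
preds = ["anger", "surprise", "happiness", "happiness", "sadness"]
print(evaluate(truth, preds))  # accuracy 0.8; fear recall 0.0
```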

Finding real emotions

With a rising number of licenses granted for both databases, Yin looked toward a third iteration that would expand functionality.

For both previous databases, participants were asked to perform emotions such as fear, sadness or happiness, meaning that all of the emotions were performative rather than genuine. To make the data more useful to fellow researchers, Yin partnered with Binghamton University psychology professor Peter Gerhardstein and artist-in-residence Andy Horowitz (an actor and founder of the acrobatic dance team Galumpha) to help elicit spontaneous facial expressions for his third dataset, BP4D.

Using a proprietary technique, Yin and colleagues collected a dataset of authentic facial expressions. This dataset, nearly 2.6 terabytes in size, then went to the University of Pittsburgh for further processing.

A fourth database, BP4D+, is a collaboration with Rensselaer Polytechnic Institute. It offers a multimodal extension of the BP4D database and contains synchronized 3D, 2D, thermal and physiological data sequences (such as heart rate, blood pressure, skin conductance and respiration rate). Yin currently is working on a yet-to-be-released database that will address social contexts.
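One way to picture what synchronized multimodal data means in practice is a record type holding one time-aligned sample from every stream. The field names, units and lookup below are illustrative assumptions about how such data might be organized, not BP4D+'s actual schema.

```python
# Minimal sketch: one time-aligned sample across the modalities that a
# BP4D+-style database combines. Field names and units are assumptions,
# not the actual schema of the released data.
from dataclasses import dataclass

@dataclass
class MultimodalFrame:
    timestamp: float                      # seconds from sequence start
    mesh_path: str                        # 3D facial geometry for this frame
    image_path: str                       # 2D video frame
    thermal_path: str                     # thermal image
    heart_rate: float                     # beats per minute
    blood_pressure: tuple[float, float]   # systolic/diastolic, mmHg
    skin_conductance: float               # microsiemens
    respiration_rate: float               # breaths per minute

def nearest_frame(frames: list[MultimodalFrame], t: float) -> MultimodalFrame:
    """Fetch the sample closest to time t, the lookup needed to pair a
    facial expression with the physiology recorded at that moment."""
    return min(frames, key=lambda f: abs(f.timestamp - t))
```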

While each of Yin's databases has improved on the previous one, all of them remain popular. According to Scott Moser, a technology licensing associate on Binghamton University's Technology Transfer team, "3D modeling is really picking up traction, showing how advanced Professor Yin's initial research really was for its time."

Yin has promoted the databases largely himself, through self-marketing and publications that have helped build their popularity within the academic sphere.

Researchers use Yin's databases in a variety of interesting ways. The data aids driving-safety studies, in which recognized facial expressions can flag instances of unsafe or agitated driving.

In studies of human health, the databases can be paired with traditional biomarkers for more accurate diagnoses and treatments. For instance, at a doctor's office, patients point at a chart to rate their pain on a scale of 1 to 10, which makes it possible for them to either exaggerate or downplay their conditions. With facial-recognition technology, however, a patient's true level of discomfort could potentially be estimated.
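One published way to turn expressions into a pain number is the Prkachin-Solomon Pain Intensity (PSPI) score, which combines the intensities of a few facial action units (AUs). The sketch below computes PSPI from AU intensities; the example values are illustrative, and the article does not say this is the metric used with Yin's databases.

```python
# Minimal sketch: the Prkachin-Solomon Pain Intensity (PSPI) score,
# a published pain metric computed from facial action unit intensities.
# The AU values below are illustrative placeholders.

def pspi(au: dict[int, float]) -> float:
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.

    AU4 brow lowering, AU6/AU7 orbital tightening, AU9/AU10 levator
    contraction (each scored 0-5); AU43 eye closure (0 or 1).
    """
    return (au.get(4, 0.0) + max(au.get(6, 0.0), au.get(7, 0.0))
            + max(au.get(9, 0.0), au.get(10, 0.0)) + au.get(43, 0.0))

# Illustrative frame: strong brow lowering with some cheek raising.
frame_aus = {4: 3.0, 6: 2.0, 7: 1.0, 10: 1.0, 43: 0.0}
print(pspi(frame_aus))  # 6.0 on a 0-16 scale
```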

Yin imagines the possibilities of his facial recognition technology, and he has ambitions to turn science fiction into reality: "Maybe someday in the future, we could conduct an interview without sitting face to face. In virtual space, we could make eye contact. Maybe we could even shake hands virtually."