One of the current challenges for plastic surgeons is that they do not have guidelines for facial reconstruction procedures that take into account different ethnicities and age groups. This means that function can be restored to faces, but the reconstructed features may not look natural compared with those of a healthy person of the same age and ethnicity.
Now a team of researchers from Imperial College London and the Royal Free Hospital are asking visitors to the Science Museum to volunteer to have their faces scanned. The aim is to build up a database of 3D computer models of faces with different expressions, of different ethnicities and spanning different age groups.
Ultimately, these models could be used as templates for plastic surgeons, to enable them to build faces that are both functional and more natural looking for the patient. The team also believe their technology could have a range of other applications – from improving our understanding of human evolution, to enhancing lie detection methods and even helping children with autism.
The project began in 2012 in conjunction with Great Ormond Street Hospital (under the supervision of Mr David Dunaway), when 12,000 volunteers had their faces scanned at the Science Museum to develop 3D models with a neutral expression, across a range of different ages and ethnicities.
In 2017, the team are scanning the faces of volunteers showing a range of different expressions such as anger and sadness, to develop 3D face models that can show this expressive range.
Dr Allan Ponniah, co-lead from the Royal Free Hospital, said: “What we are aiming for is to develop bespoke 3D face models that act as a roadmap for facial reconstruction procedures. We are still a few years away from using this procedure in surgery, but it shows real promise.
“The applications could be life changing. For instance, if we want to generate the face of a five-year-old Chinese girl, our computer program will create a model that looks realistic and gives us dimensions we can use to rebuild a face. That would be really useful for a child with a specific facial deformity. You could input data and generate a face with the closest resemblance to the patient, within the normal range.”
The data being generated by the scanning booth at the Science Museum is sent to Dr Stefanos Zafeiriou and his research group at Imperial for analysis. For the last decade, the team have been developing a system that can analyse thousands of faces in a matter of hours. If this procedure were carried out by humans, it would take years.
The computer program maps facial landmarks such as the eye sockets, nose and forehead, as well as subtler features such as the skin and the corners of the lips. The program then assigns coordinates to these landmarks so that direct comparisons can be made between face scans. This enables the team to build up a statistical model of what an average face looks like at different stages of growth and across different ethnicities.
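To make the idea concrete, here is a minimal sketch of the landmark-averaging step described above. This is an illustration only, not the team's actual pipeline: it assumes each scan has been reduced to the same fixed set of (x, y, z) landmark coordinates, and that scans from one age/ethnicity group are averaged point by point to give a "mean face" for that group. The array sizes and function names are hypothetical.

```python
import numpy as np

# Hypothetical: real systems map hundreds of landmarks per face.
NUM_LANDMARKS = 5

def mean_face(scans):
    """Point-by-point average of a list of (NUM_LANDMARKS, 3) landmark arrays.

    Because every scan uses the same landmark ordering, coordinate k of the
    result is the group average of coordinate k across all scans.
    """
    stacked = np.stack(scans)      # shape: (n_scans, NUM_LANDMARKS, 3)
    return stacked.mean(axis=0)    # shape: (NUM_LANDMARKS, 3)

# Toy data: three "scans" that are small perturbations of one base shape,
# standing in for three volunteers from the same age/ethnicity group.
rng = np.random.default_rng(0)
base = rng.normal(size=(NUM_LANDMARKS, 3))
scans = [base + rng.normal(scale=0.01, size=base.shape) for _ in range(3)]

avg = mean_face(scans)
print(avg.shape)  # one averaged (x, y, z) coordinate per landmark
```

In practice the scans would also need to be aligned (e.g. by a Procrustes-style registration) before averaging, so that differences in head pose do not contaminate the statistics; that step is omitted here for brevity.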
Visitors to the scanning booth at the Science Museum are being asked to pull different expressions including disgust, anger, fear, sadness, surprise and pain – and also to pout, flare their nostrils and puff out their cheeks. The latter movements are medically relevant for patients who have conditions like facial palsy following strokes.
Dr Zafeiriou, from Imperial’s Department of Computing, said: “It is a real privilege to be working with Dr Ponniah on this project, which has the potential to revolutionise facial reconstruction procedures. The beauty of our approach is that we can map hundreds of landmarks on the face and also features such as bone structure and muscles under the skin to create much more realistic faces.
“Ultimately, we hope in some instances that patients can bring in old video recordings of themselves and we can morph this information into a 3D face model so that it more closely resembles what they would’ve looked like at their age before needing reconstructive surgery. What we hope to offer patients is a new type of approach that enables them to get a face that is as natural looking as possible.”
New applications for the technology
The team believe that their approach could have a range of further applications outside of a medical setting. For example, it could be used as a facial recognition tool.
Dr Zafeiriou explains: “I look completely different from my driving license photo and this can cause problems when I am being identified, say at a shop or even at an airport. Using our approach, a camera could scan my passport or license and my face and project this data onto a 3D model, which could morph my features into a younger and older me to corroborate that it is me.”
Helping children with autism could be another application. Some children with autism find it difficult to express themselves and to read the facial expressions of others to understand the emotions and meaning implied by them. The researchers suggest that parents could download an app to their phone, which would scan their child’s face and morph it into a computer-generated face. They could use this computer-generated face in a game designed to teach the child what different expressions and emotions look like.
Another application that the researchers suggest is in lie detection and crime prevention, where a camera could be trained on a suspect and used to detect micro-expressions that may indicate a false statement.
The team also believe their technology might ultimately be useful in unlocking some of the mysteries of our evolution. Currently, to reconstruct the faces of past humans, researchers piece fragments of skulls together and use clay to rebuild faces, muscle-by-muscle. The scientists developing the new technology suggest that researchers could scan the skulls of ancient humans collected by museums and use this data to reconstruct what ancient humans looked like, in a matter of minutes. This could be helpful in constructing a more realistic and detailed timeline of humans and how they have evolved.
Source: Imperial College London