
21 December 2018

How Hackers Can Use a 3-D Printed Head to Defeat Facial Recognition Authentication and Break Into Your Mobile Device


Fake fingerprints, fake DNA, fake digital personas, and fake faces. Demand two-step authentication, and the darker digital angels of our nature will find a way to defeat our digital moats. Victoria Bell posted a December 17, 2018 article on DailyMail.com about a new technique that cyber thieves and others are using to breach our mobile devices. Not surprisingly, Ms. Bell notes that “a 3-D printed head can trick your smartphone’s facial recognition technology into unlocking your phone.”

Google’s Android phones “were the least secure, with some devices opening by simply showing a photograph of the owner,” Ms. Bell wrote. Apple iPhones were determined to be the most secure, even though the company switched from fingerprint authentication to facial recognition last year. 

Ms. Bell notes that “the tests were conducted by Forbes reporter Tom Brewster, who commissioned a 3-D model of his own head to test the face unlocking systems on a range of phones. Apple’s iPhone X models, including the XR and the XS, were compared against the Android models Galaxy Note 8, Galaxy S9, LG G7 ThinQ, and OnePlus 6.” Ms. Bell notes that “only the iPhone X models defended against the attack, giving credence to Apple’s claims that their software is the most secure. The worst offender among the Androids was the OnePlus 6, which appeared to open almost instantly after being shown the model head.”

When it comes to security, nothing digital is bullet-proof, so far as I am aware. And the problem of impersonating someone, whether through facial recognition or other digital means, is going to get even more challenging in 2019, as artificial intelligence (AI) enhanced malware begins to invade our digital universe in a more profound and far-reaching way. Indeed, researchers at the AI/tech firm NVIDIA recently published a paper explaining how they used Generative Adversarial Networks (GANs) to generate realistic-looking faces “from just a few photos,” Joe Middleton wrote in the December 18, 2018 edition of the DailyMail. “The fake faces can easily be customized by using a method known as ‘style transfer,’ which blends the characteristics of one image with another. The generator thinks of the image as a collection of three styles, known as coarse styles (pose, hair, face shape), middle styles (facial features and eyes), and fine styles (color scheme).”
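The style-mixing idea Mr. Middleton describes can be sketched in miniature: treat each image as a set of per-band style vectors, and build a blended face by taking coarse styles from one source and fine styles from another. The band names, vector values, and `mix_styles` function below are illustrative assumptions for this sketch, not NVIDIA's actual StyleGAN code.

```python
# Toy sketch of "style mixing" (illustrative only, not NVIDIA's code).
# Each face is represented by one style vector per band; blending swaps
# whole bands (coarse / middle / fine) between two source faces.

BANDS = ("coarse", "middle", "fine")  # pose/shape, facial features, color

def mix_styles(source_a, source_b, take_from_b=("fine",)):
    """Copy the selected style bands from source_b onto source_a."""
    mixed = dict(source_a)
    for band in take_from_b:
        mixed[band] = source_b[band]
    return mixed

person_a = {"coarse": [0.1, 0.9], "middle": [0.4, 0.2], "fine": [0.7, 0.3]}
person_b = {"coarse": [0.8, 0.1], "middle": [0.5, 0.5], "fine": [0.2, 0.6]}

# Keep A's pose and facial features, but borrow B's color scheme:
blend = mix_styles(person_a, person_b, take_from_b=("fine",))
```

The point of the band structure is that swapping only the coarse band changes pose and head shape, while swapping only the fine band changes coloring, which is what lets a forger tune a generated face attribute by attribute.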

“Animals such as cats, and objects such as a bedroom, can also be generated using the same method,” Mr. Middleton wrote. “The researchers created a grid to show the extent to which they could alter people’s facial characteristics — using only one source image.” And GAN technology has only been around for four years, Mr. Middleton added. While the technology isn’t foolproof (the AI can’t yet replicate the texture of one’s hair), it is still very good, and will likely get even better as the technology matures.

The facial recognition domain is undergoing as much disruptive change as cyber itself. Some make-up artists have developed products and techniques that can fool much of our best facial recognition software. Ditto for the 3-D resin Anti-Surveillance Mask, which allows you to masquerade as someone else for a short period of time, but long enough, for example, to get through an airport screening process. On the other end of the spectrum, AI is allowing us, or someone, to fill in the gaps when only a partial facial picture is available. In the September 7, 2017 edition of the DailyMail, Harry Pettit reported on the Disguised Face Identification (DFI) system, which uses an artificial intelligence (AI) network to map facial points and, ultimately, reveal the person’s true identity. Mr. Pettit wrote that the technique “could also herald the end of public anonymity.” DFI employs a deep-learning AI neural network that allows the software to continuously improve its ability to correctly identify, or reveal, the true identity of an individual.

“This is very interesting for law enforcement and other organizations that want to capture criminals,” said Amarjot Singh, a researcher at the University of Cambridge, who worked on DFI and recently discussed the technology with the online science and technology publication, Inverse. “The potential applications are beyond imagination,” Mr. Singh added. 

A team of international researchers, led by Mr. Singh, tested the DFI system’s capabilities and performance by “feeding it images of people using a variety of disguises to cover their faces,” Mr. Pettit wrote. “The images had a mixture of complex and simple backgrounds to challenge the AI in a variety of scenarios,” he added. The AI neural network “identifies people by measuring the distances and angles between 14 facial points — ten for the eyes, three for the lips, and one for the nose,” Mr. Pettit explained. “It uses these readings to estimate the hidden facial structure; and then compares this with learned images to unveil the person’s true identity. In early tests,” he wrote, “the algorithm correctly identified people whose faces were covered by hats or scarves 56 percent of the time. This accuracy dropped to 43 percent when the individuals were wearing glasses.” That said, the technology is in the early stages of development; and one has to assume that the use of an AI/deep-learning neural network will most likely substantially improve on these early performance numbers.
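The landmark-matching step Mr. Pettit describes can be sketched as follows: given estimated coordinates for a handful of facial points, compute the distances between every pair of points, then identify the probe face as the gallery identity whose distance profile is closest. The landmark names, coordinates, and similarity measure below are assumptions made for illustration; they are not the DFI system's actual 14-point model or code.

```python
import math

# Toy sketch of landmark-based identification (illustrative, not DFI code).
# A face is a dict of named landmark coordinates; faces are compared via
# the pairwise distances between landmarks.

def pairwise_distances(landmarks):
    """Distances between every pair of landmarks, in a fixed name order."""
    names = sorted(landmarks)
    return [math.dist(landmarks[a], landmarks[b])
            for i, a in enumerate(names) for b in names[i + 1:]]

def best_match(probe, gallery):
    """Return the gallery identity whose distance profile is closest."""
    probe_d = pairwise_distances(probe)
    def diff(identity):
        d = pairwise_distances(gallery[identity])
        return sum((x - y) ** 2 for x, y in zip(probe_d, d))
    return min(gallery, key=diff)

gallery = {
    "alice": {"nose": (0, 0), "lip_l": (-1, -2), "lip_r": (1, -2),
              "eye_l": (-2, 2), "eye_r": (2, 2)},
    "bob":   {"nose": (0, 0), "lip_l": (-2, -3), "lip_r": (2, -3),
              "eye_l": (-3, 2), "eye_r": (3, 2)},
}
# A noisy re-measurement of Alice's landmarks (as if estimated from a
# partially covered face) should still match "alice":
probe = {"nose": (0.1, 0), "lip_l": (-1.1, -2), "lip_r": (1, -2.1),
         "eye_l": (-2, 2.1), "eye_r": (2.1, 2)}
```

Matching on pairwise distances rather than raw coordinates is what makes this kind of scheme tolerant of small errors in the estimated positions of hidden points, which is the situation a disguise creates.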

All of which is to say: anti-surveillance masks and make-up artistry notwithstanding, hiding one’s true identity is getting harder and harder to do. Compounding that problem is the growing threat that AI-enabled technology is going to make it much easier to impersonate you, and less difficult to overcome two-step identity authentication. RCP, fortunascorner.com
