It could be the answer to the increasingly invasive facial recognition systems used by Facebook, Google and others to try to identify you in every picture put online.
Researchers at the University of Toronto have revealed AI software that can tweak your snaps so you can’t be identified.
They say their Instagram-like filter can alter pictures so they look the same to human eyes, but disrupt the machine learning systems used by web giants to identify users.
Researchers from the University of Toronto have developed an algorithm specifically designed to disrupt facial recognition systems. It changes specific pixels in the image in a way that is almost invisible to the human eye.
The technology uses a deep learning technique called adversarial training, which pits two artificial intelligence algorithms against each other.
‘Personal privacy is a real issue as facial recognition becomes better and better,’ Professor Parham Aarabi said.
‘This is one way in which beneficial anti-facial-recognition systems can combat that ability.’
One neural network works to identify faces, and the second disrupts the facial recognition task of the first.
The two are constantly fighting and learning from each other, which sets up an AI arms race.
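That back-and-forth can be sketched in miniature. The following is a hedged toy in NumPy, not the researchers’ actual system: a linear logistic scorer stands in for the face-detection neural network, and a single learned perturbation vector stands in for the disruptor network. All names, sizes and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32                           # "image" size (pixels) in this toy
w_true = rng.normal(size=d)      # the pixel pattern that marks a "face"

def make_batch(n):
    faces = rng.normal(size=(n, d)) + w_true   # images containing the pattern
    other = rng.normal(size=(n, d))            # images without it
    X = np.vstack([faces, other])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return X, y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)       # "detector": a linear face scorer
delta = np.zeros(d)   # "disruptor": an additive perturbation it learns
eps = 3.0             # perturbation budget (L2 norm), keeping the change small

for step in range(500):
    X, y = make_batch(64)
    is_face = y == 1
    X_adv = X.copy()
    X_adv[is_face] += delta            # the disruptor perturbs only the faces

    # Detector's turn: one gradient-descent step on logistic loss,
    # trained on the *perturbed* images, so it keeps adapting.
    p = sigmoid(X_adv @ w)
    w -= 0.1 * X_adv.T @ (p - y) / len(y)

    # Disruptor's turn: one gradient-*ascent* step on the detector's loss
    # for face images (for a linear scorer, the input gradient is just w),
    # then projection back inside the epsilon ball so the change stays subtle.
    p_face = sigmoid(X_adv[is_face] @ w)
    delta += 0.1 * np.mean(p_face - 1.0) * w
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta *= eps / norm
```

Each side improves against the other’s latest move, which is the ‘arms race’ the researchers describe: training ends only when neither network can easily gain further ground.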
The AIs created an Instagram-like filter that can be applied to photos to protect the user’s privacy.
The algorithm changes specific pixels in the image in a way that is almost invisible to the human eye.
‘The disruptive AI can “attack” what the neural net for face detection is looking for,’ graduate student Avishek Bose said.
‘If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable.
‘It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.’
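This kind of gradient-guided nudge can be illustrated against a frozen toy detector. The sketch below is in the spirit of fast-gradient-sign attacks, not the team’s published algorithm; the linear ‘detector’ and every parameter are assumptions made for illustration. (In this tiny toy the perturbation is large relative to the image; in a real photo the same budget spreads across millions of pixels and stays nearly invisible.)

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 64, 2000
w_true = rng.normal(size=d)      # the pixel pattern that marks a "face"

# Clean training set: "faces" contain the pattern, "non-faces" do not.
faces = rng.normal(size=(n, d)) + w_true
other = rng.normal(size=(n, d))
X = np.vstack([faces, other])
y = np.concatenate([np.ones(n), np.zeros(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train the detector (logistic regression by gradient descent), then freeze it.
w = np.zeros(d)
for _ in range(300):
    p = sigmoid(X @ w)
    w -= 0.3 * X.T @ (p - y) / len(y)

def detected(imgs):
    return sigmoid(imgs @ w) > 0.5

# The "filter": nudge every pixel a fixed step eps against the gradient of
# the face score (for this linear detector, that gradient is just w).
eps = 2.0
adv_faces = faces - eps * np.sign(w)

clean_rate = detected(faces).mean()   # nearly all clean faces are detected
adv_rate = detected(adv_faces).mean() # almost none of the filtered ones are
```

Each pixel moves by the same small step, but because every step pushes against what the detector keys on, the cumulative effect on its face score is large, which mirrors Bose’s point that disturbances subtle in the photo are significant to the detector.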
Social media platforms run algorithms that ingest data about who you are, your location and the people you know every time you upload a photo or video.
After testing their system on more than 600 faces with a wide range of ethnicities, lighting conditions and environments, the researchers found that it could reduce the proportion of faces that were originally detectable from nearly 100 percent down to 0.5 percent.
‘The key here was to train the two neural networks against each other – with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection,’ said Bose, the lead author on the project.
The new technology also disrupts image-based search, feature identification, and emotion and ethnicity estimation, along with any other face-based attributes that social media platforms could ingest automatically.
The technology is not yet available to the public, but the team hopes to make it available for use via an app or website.
‘Ten years ago these algorithms would have to be human defined, but now neural nets learn by themselves – you don’t need to supply them anything except training data,’ Aarabi said.
‘In the end they can do some really amazing things. It’s a fascinating time in the field, there’s enormous potential.’