One of the most popular and sought-after access control systems is as complex in design as it needs to be simple to use.
Unlike key cards, fingerprint scanners or even voice recognition, biometric face readers should require no contact and no deliberate action from the user. In theory, if not always in practice, a user should simply be able to look at the screen and be recognised in an instant.
The reality is more complicated, and it took decades for facial recognition technology to develop from a cumbersome, typically human-directed tool into something so ubiquitous that many people use it to secure their mobile phones.
How do they work? Why did they take so long to develop? Why are they used? And what should building managers consider when they do use them?
How Does Facial Recognition Work?
Many of the ways in which we as human beings subconsciously and effortlessly process the world around us require far more effort for a machine to replicate, and it took decades for a workable process to be developed.
Whilst we will look at another person and subconsciously recognise them almost immediately, a facial recognition system needs to go through multiple stages of cognition first:
- Face Detection – An optical scanner or camera needs to determine whether an image contains a human face, where it is and how many there are, and isolate each face for closer analysis.
- Feature Extraction – Certain distinctive facial features, typically including the contours of the cheeks, the eyes, the nose and the mouth, are extracted and analysed.
- Data Representation – The facial features are converted into usable data points that can be compared to other faces. These are typically known as feature vectors.
- Database Search – The feature vectors are compared to a database of faces alongside their metadata.
- Pattern Recognition And Matching – The system then compares the data against the faces in the database and looks for close matches, judged against a similarity threshold set within the system's algorithms.
- Identity Verification – If there is a match, then the system decides if it is close enough to count as a valid match, or sorts through various close matches to find the one that is closest.
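The search, matching and verification stages above can be sketched in a few lines of Python. Everything here is an illustrative assumption: real systems derive feature vectors from a trained model and use much higher-dimensional embeddings, whereas the tiny database, three-element vectors and threshold below exist only to show the flow of comparison and decision.

```python
import math

# Hypothetical enrolled database: identity -> feature vector.
# In a real system these vectors would come from a trained model.
DATABASE = {
    "alice": [0.9, 0.1, 0.3],
    "bob":   [0.2, 0.8, 0.5],
}

MATCH_THRESHOLD = 0.95  # illustrative similarity cut-off


def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def identify(probe):
    """Database search, matching and verification in one pass.

    Scores the probe vector against every enrolled face, keeps the
    closest candidate, and returns its identity only if the score
    clears the threshold; otherwise returns None (no valid match).
    """
    best_name, best_score = None, -1.0
    for name, vector in DATABASE.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None
```

A probe vector close to an enrolled one, such as `identify([0.88, 0.12, 0.31])`, clears the threshold and resolves to `"alice"`, while a dissimilar vector falls below it and yields `None` — the "decides if it is close enough to count" step described above.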
Why Did Facial Recognition Take So Long To Develop?
Facial recognition was first attempted in the 1960s, but the system was fairly limited in scope, requiring humans to highlight facial features using a graphics tablet.
In 1970, a Japanese research system removed the need for human intervention, but despite interest in the project, it quickly became clear that more powerful hardware was needed to reliably identify facial features.
More advanced systems only became available in the 1990s, and it was the development of the Viola-Jones framework in 2001 that finally made facial recognition practical. Whilst the framework was slow to train, it detected faces quickly enough to make it worthwhile.
Why Are Facial Recognition Systems Used?
- They are very streamlined on the user side, allowing people to authenticate themselves without having a passcard or password, or even needing to touch a machine.
- They are more secure than passcards or passwords, as it is extremely difficult, if not practically impossible, to bypass a properly trained facial recognition system.
- They are easily integrated into other access control systems.