Face Recognition With Android

Building Face Recognition With Android and TensorFlow

First, let's clarify what "face recognition" and "face detection" are all about. Although many people use the two terms interchangeably, they are, in fact, two distinct tasks.

What is face recognition?

Face recognition means: given a picture of a person's face, identify that person from a known dataset of registered faces. Suppose we have a picture dataset containing registered faces (say, Eastwood, Beethoven, and Madonna). For every new face picture, we want to know which registered person it belongs to.

So, what is face detection?

In short, face detection answers: given an input image, are there any human faces in it? It also tells us where each face is located (e.g., via a bounding box) and, for each face, the positions of the eyes, nose, and mouth (known as facial landmarks).
To solve the recognition challenge, we need a measure of similarity between faces.
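As a minimal sketch of such a measure (the class and method names here are ours, not from the sample code), two common ways to compare face embedding vectors are Euclidean (L2) distance and cosine similarity:

```java
// Illustrative sketch: two common similarity measures for face embeddings.
public class FaceSimilarity {
    // Euclidean (L2) distance: smaller value = more similar faces.
    public static float l2Distance(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            float d = a[i] - b[i];
            sum += d * d;
        }
        return (float) Math.sqrt(sum);
    }

    // Cosine similarity in [-1, 1]: larger value = more similar faces.
    public static float cosineSimilarity(float[] a, float[] b) {
        float dot = 0f, normA = 0f, normB = 0f;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (float) (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

Which measure (and which threshold) works best depends on how the model was trained, so treat both as candidates rather than a fixed choice.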

We need an offline solution

When we upload photos to the cloud, Google knows who is in them and tags the faces automatically.
All of that processing happens on Google's servers, on GPUs and TPUs. With our "modest" ARM device, we hope to tackle this problem offline.
After all, something checks my face in my mobile banking app, and it runs right on my smartphone.

Using Google's ML Kit to detect the face

Why not just use Google's ML Kit to identify faces?
Well, ML Kit provides face detection, but face recognition is not available (yet). So the first stage of our pipeline uses ML Kit for detection, and we need something else for the recognition step that follows.
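The two-stage split can be sketched as a pair of interfaces wired together. This is an illustrative structure only; all names here are ours (a real build would back the detector with ML Kit and the recognizer with a TensorFlow Lite model):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative pipeline structure: stage one finds face boxes,
// stage two turns a detected face into an identity.
public class FacePipeline {
    public interface FaceDetector {          // e.g., backed by ML Kit
        List<int[]> detect(int[][] image);   // returns bounding boxes {x, y, w, h}
    }
    public interface FaceRecognizer {        // e.g., backed by a TFLite model
        String recognize(int[][] faceCrop);
    }

    private final FaceDetector detector;
    private final FaceRecognizer recognizer;

    public FacePipeline(FaceDetector d, FaceRecognizer r) {
        this.detector = d;
        this.recognizer = r;
    }

    // Run detection first, then recognition on each detected face.
    public List<String> identifyAll(int[][] image) {
        List<String> names = new ArrayList<>();
        for (int[] box : detector.detect(image)) {
            // A real implementation would crop `image` to `box` here.
            names.add(recognizer.recognize(image));
        }
        return names;
    }
}
```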

How does a face recognition system work?

First, we detect the face in the input image.
Second, we warp the image using the detected landmarks so that the face is aligned; this way, all cropped faces end up with their eyes in the same position.
Third, the face is cropped and properly scaled to feed the deep learning recognition model. Certain pre-processing procedures (e.g., normalization and "whitening" of the face) are also done at this stage.
Fourth comes the "sweet part": the deep neural network. This is the stage we will concentrate on.
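The "whitening" mentioned in the third stage can be sketched as shifting the cropped face's pixel values to zero mean and unit standard deviation (a minimal, self-contained version; the class name is ours):

```java
// Minimal sketch of the "whitening" pre-processing step:
// normalize pixel values to zero mean and unit standard deviation.
public class FaceWhitening {
    public static float[] whiten(float[] pixels) {
        float mean = 0f;
        for (float p : pixels) mean += p;
        mean /= pixels.length;

        float var = 0f;
        for (float p : pixels) var += (p - mean) * (p - mean);
        // Guard against division by zero on a completely flat image.
        float std = (float) Math.max(Math.sqrt(var / pixels.length), 1e-6);

        float[] out = new float[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (pixels[i] - mean) / std;
        }
        return out;
    }
}
```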

Develop the mobile app


We will adapt the classic TensorFlow example to use the MobileFaceNet model. The source code for Android, iOS, and Raspberry Pi can be found in this repository. We will focus on making it work on Android here, but the approach would be similar on the other platforms.

Adding the step of face recognition

The original sample feeds a single DL model and computes the results in one step. Our application needs several stages: most of the work is split in two, first face detection and then face recognition.


First, we must add the TensorFlow Lite model file to the project's assets folder.
Then, in the setup part of DetectorActivity, we change the required settings to match our model, setting the model's input size.
Let's also rename the classification interface to SimilarityClassifier, since what the model outputs now is a similarity. It lets us register recognition objects in the dataset, and we rename the confidence field to distance, because deciding whether to trust a recognition now takes something more than a raw confidence score.
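As a hedged sketch of those settings: MobileFaceNet builds commonly expect a 112x112 RGB input with pixel values scaled from [0, 255] into [-1, 1]. The exact size and scaling depend on the model file you use, and the names below are ours, not the sample's:

```java
// Assumed model input settings for a MobileFaceNet-style model;
// verify against the actual .tflite file before relying on them.
public class ModelInput {
    public static final int INPUT_SIZE = 112; // assumed input width/height

    // Map one 8-bit channel value (0..255) into the [-1, 1] range.
    public static float scalePixel(int channel) {
        return (channel - 127.5f) / 127.5f;
    }

    // Convert a flat array of channel values into the model's float buffer.
    public static float[] toInputBuffer(int[] channels) {
        float[] buf = new float[channels.length];
        for (int i = 0; i < channels.length; i++) {
            buf[i] = scalePixel(channels[i]);
        }
        return buf;
    }
}
```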
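What a SimilarityClassifier-style class does can be sketched as follows (all names here are ours, for illustration): register known embeddings, then report the nearest one by L2 distance, where a lower distance means a better match:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of registering faces and matching by distance.
public class SimilarityStore {
    private final Map<String, float[]> registered = new HashMap<>();

    // Register a known face under a name.
    public void register(String name, float[] embedding) {
        registered.put(name, embedding);
    }

    // Returns "name:distance" for the nearest registered face, or null
    // if nothing has been registered yet.
    public String nearest(float[] embedding) {
        String bestName = null;
        float bestDist = Float.MAX_VALUE;
        for (Map.Entry<String, float[]> e : registered.entrySet()) {
            float sum = 0f;
            for (int i = 0; i < embedding.length; i++) {
                float d = embedding[i] - e.getValue()[i];
                sum += d * d;
            }
            float dist = (float) Math.sqrt(sum);
            if (dist < bestDist) {
                bestDist = dist;
                bestName = e.getKey();
            }
        }
        return bestName == null ? null : bestName + ":" + bestDist;
    }
}
```

In practice you would also compare the best distance against a threshold, and report "unknown" when even the nearest registered face is too far away.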
