Android Face Recognition Made Easy: An Introduction to the Mobile Vision API

Explore the fascinating world of face recognition on Android devices with the help of Google’s Mobile Vision API in our latest blog post. Learn how to use this powerful API to detect human faces in images and videos. From implementation to application, let our step-by-step tutorial inspire you as you delve into the exciting realm of Android face recognition.

Android Face Recognition

The Android Face API detects faces in photos and videos by locating distinctive points such as the eyes, nose, ears, cheeks, and mouth. Rather than detecting individual features first, the API detects the face as a whole and then, if configured to do so, identifies landmarks and classifications. The API can also detect faces at various angles.

Landmarks

A landmark is a point of interest within a face. The left eye, right eye, and nose base are all examples of landmarks. The API currently supports the following landmarks:

  • Left and right eye
  • Left and right ear
  • Left and right ear tip
  • Nose base
  • Left and right cheek
  • Left and right corner of the mouth
  • Base of the mouth
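As a rough sketch of how landmarks are read (assuming a Face object obtained from a FaceDetector configured with setLandmarkType(FaceDetector.ALL_LANDMARKS); the method and tag names here are illustrative):

```java
import android.graphics.PointF;
import android.util.Log;
import com.google.android.gms.vision.face.Face;
import com.google.android.gms.vision.face.Landmark;

// Sketch: iterating over the landmarks of a detected face.
void logLandmarks(Face face) {
    for (Landmark landmark : face.getLandmarks()) {
        // getType() returns a constant such as Landmark.LEFT_EYE or Landmark.NOSE_BASE;
        // getPosition() returns the landmark's (x, y) coordinates in the image.
        PointF p = landmark.getPosition();
        Log.d("FaceDemo", "Landmark type " + landmark.getType()
                + " at (" + p.x + ", " + p.y + ")");
    }
}
```

Note that getLandmarks() only returns the landmarks the detector was configured to find; with the default settings, the list is empty.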

Classification

Classification determines whether a specific facial feature is present. The Android Face API currently supports two classifications:

  • Eyes open: Utilizes the methods getIsLeftEyeOpenProbability() and getIsRightEyeOpenProbability().
  • Smile: Utilizes the method getIsSmilingProbability().
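A minimal sketch of reading these classifications (assuming the Face object comes from a FaceDetector built with setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)):

```java
import android.util.Log;
import com.google.android.gms.vision.face.Face;

// Sketch: reading the classification probabilities of a detected face.
void logClassifications(Face face) {
    float smiling = face.getIsSmilingProbability();
    float leftEyeOpen = face.getIsLeftEyeOpenProbability();
    float rightEyeOpen = face.getIsRightEyeOpenProbability();
    // A value of -1 (Face.UNCOMPUTED_PROBABILITY) means the probability
    // could not be calculated.
    Log.d("FaceDemo", "smiling=" + smiling
            + " leftEyeOpen=" + leftEyeOpen
            + " rightEyeOpen=" + rightEyeOpen);
}
```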

Face Orientation

The orientation of the face is determined using Euler angles, which refer to the rotation angle of the face around the X, Y, and Z axes.

  • Euler Y tells us whether the face is looking left or right.
  • Euler Z tells us whether the face is tilted sideways (rotated in the image plane).
  • Euler X tells us whether the face is looking up or down (currently not supported).

Note: If a probability cannot be calculated, it is set to -1.
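As a rough sketch (again assuming a detected Face object), the orientation can be read like this:

```java
import android.util.Log;
import com.google.android.gms.vision.face.Face;

// Sketch: reading the face orientation from a detected Face.
void logOrientation(Face face) {
    float eulerY = face.getEulerY(); // left/right rotation around the vertical axis
    float eulerZ = face.getEulerZ(); // sideways tilt (in-plane rotation)
    // There is no getEulerX(): up/down rotation is not exposed by this API.
    Log.d("FaceDemo", "eulerY=" + eulerY + " eulerZ=" + eulerZ);
}
```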

Setting Up the Project

To begin the implementation, add the required dependency to your app-level build.gradle file (on Gradle 3.0 and later, use implementation instead of the deprecated compile configuration):

compile 'com.google.android.gms:play-services-vision:11.0.4'

Additionally, add the following metadata within the application tag in the AndroidManifest.xml file:

<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face"/>

This informs the Vision library that you intend to recognize faces within your application. You also need to declare the camera requirement in the AndroidManifest.xml:

<uses-feature
    android:name="android.hardware.camera"
    android:required="true"/>

Sample Project Structure for Android Face Recognition

The main layout for the activity is defined in the activity_main.xml file. It contains two ImageViews, along with TextViews and Buttons. One ImageView displays the sample images and the detection results; the other displays the picture captured from the camera.
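A minimal sketch of what such a layout could look like (the view IDs and exact attributes are illustrative, not taken from the sample project):

```xml
<!-- Sketch of activity_main.xml; IDs and layout parameters are illustrative. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Shows the sample images and the detection results. -->
    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1"/>

    <Button
        android:id="@+id/btnProcessNext"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="PROCESS NEXT"/>

    <!-- A second ImageView and button for the camera picture follow the same pattern. -->
</LinearLayout>
```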

Activity Code

The Java code for the activity (MainActivity.java) contains the logic for face recognition. There are also some key points to note:

  • The imageArray array contains the sample images to be scanned when the “PROCESS NEXT” button is clicked.
  • The detector is initialized with the required parameters.
  • The methods processImage() and processCameraPicture() contain the code for actual face recognition and drawing a rectangle over it.
  • The onRequestPermissionsResult() method is called to check permissions at runtime.
  • The onActivityResult() method is called to handle the result of camera usage.

Conclusion

The Android Face Recognition API provides a powerful way to detect faces in images and videos. With Google’s Mobile Vision API, developers can quickly and easily integrate this functionality into their Android applications. It’s important to note that the API currently only supports face detection and not facial recognition. However, it opens up many interesting use cases, from image processing to user recognition.
