Stream results parsing

This page describes how LIQA live results can be parsed and visualized.

How it works

To illustrate the overall results-processing flow, this page uses an example function updateStatus that transforms the options emitted by the RxJS Observable into human-readable feedback:

liqa.qualityStatus$.subscribe((options) => {
  updateStatus(options);
});

This function shows how to turn LIQA results into textual warnings (e.g. textWarning), but it can easily be extended to any other modification of DOM elements, limited only by your application's user interface (UI).
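
For example, here is a minimal sketch of such a DOM update, assuming your page contains a status element with the hypothetical id liqa-status:

// Hypothetical helper: write the warning text returned by updateStatus
// into a status element on the page (the id 'liqa-status' is an assumption).
function showWarning(textWarning) {
  const statusElement = document.getElementById('liqa-status');
  if (statusElement) {
    statusElement.textContent = textWarning;
  }
}

liqa.qualityStatus$.subscribe((options) => {
  showWarning(updateStatus(options));
});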

We suggest several levels of detail that you can use depending on the complexity of your UI and the variety of feedback you want to show your end user:

Level 1: "All or nothing"

This level of detail allows you to provide the end user with a simple, general "binary" status. Here is how it may look and be implemented in your system UI:

Here is how your system can parse LIQA output to get the status:

updateStatus(options) {

  let textWarning = '';

  if (options.imageIsOkay) {
    textWarning = 'Status: OK';
  } else {
    textWarning = 'Status: NOT OK';
  }

  return textWarning
}

Hint: it is almost always a good idea to let your user take an image only when the quality is good enough. To achieve this, you can add a "Take a selfie" button to your Image Collecting Page with the video stream and keep this button blocked until options.imageIsOkay of liqa.qualityStatus$ becomes true.
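
Here is a minimal sketch of such blocking, assuming a button element with the hypothetical id take-selfie-button:

// Hypothetical example: keep the "Take a selfie" button disabled
// while options.imageIsOkay is false (the button id is an assumption).
const takeSelfieButton = document.getElementById('take-selfie-button');

liqa.qualityStatus$.subscribe((options) => {
  takeSelfieButton.disabled = !options.imageIsOkay;
});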

Level 2: "State the reason"

This level of detail allows you to provide the end user with statuses for the upper-level parameters influencing face image quality: detection, position, rotation, and illumination (light). Here is how it may look and be implemented in your system UI:

Here is how your system can parse LIQA output to get the statuses:

updateStatus(options) {

  let textWarning = '';
  let textWarningPos = '';
  let textWarningRot = '';
  let textWarningIll = '';

  if (options.faceDetection) {
    textWarning = 'Detection: OK';
  } else {
    textWarning = 'Detection: NOT OK';
  }

  if (options.facePosition == 'ok') {
    textWarningPos = 'Position: OK';
  } else {
    textWarningPos = 'Position: NOT OK';
  }

  if (options.faceRotation == 'ok') {
    textWarningRot = 'Rotation: OK';
  } else {
    textWarningRot = 'Rotation: NOT OK';
  }

  if (options.faceIllumination == 'ok') {
    textWarningIll = 'Illumination: OK';
  } else {
    textWarningIll = 'Illumination: NOT OK';
  }

  return [
    textWarning,
    textWarningPos,
    textWarningRot,
    textWarningIll
  ]
}
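
Since this version of updateStatus returns an array of four status strings, your UI can render each of them in its own element. Here is a minimal sketch, assuming four status elements with hypothetical ids:

// Hypothetical rendering of the four statuses into separate elements;
// the element ids below are assumptions and depend on your UI.
const statusIds = [
  'status-detection',
  'status-position',
  'status-rotation',
  'status-illumination'
];

liqa.qualityStatus$.subscribe((options) => {
  const statuses = updateStatus(options);
  statuses.forEach((text, index) => {
    const element = document.getElementById(statusIds[index]);
    if (element) {
      element.textContent = text;
    }
  });
});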

Level 3: "Comprehensive feedback"

This level of detail allows you to provide the end user with the exact problems and suggested actions to improve face image quality. Please refer to liqa.qualityStatus$ to find out about all parameters and possible values. Here is how it may look and be implemented in your system UI:

Here is how your system can parse LIQA output to get the statuses and feedback:

General parsing & Detection

updateStatus(options) {

  let textWarning = '';
  let textWarningPos = '';
  let textWarningRot = '';
  let textWarningIll = '';

  if (options.imageIsOkay) {
    textWarning = 'Everything is fine';
  } else if (!options.faceDetection) {
    textWarning = 'Face is not detected';
  } else {
    textWarningPos = parsePosition(options);
    textWarningRot = parseRotation(options);
    textWarningIll = parseIllumination(options);
  }
  
  return [
    textWarning,
    textWarningPos,
    textWarningRot,
    textWarningIll
  ]
}

Position feedback

function parsePosition(options) {
  let textWarning = '';
  if (options.facePosition == 'too far') {
    textWarning = 'Bring camera closer';
  } else if (options.facePosition == 'too close') {
    textWarning = 'Move camera back a little';
  } else if (options.facePosition == 'out of frame') {
    textWarning = 'Parts of the face are covered';
  } else if (options.facePosition == 'not in center') {
    textWarning = 'Face is not in the center of the frame';
  } 
  return textWarning
}

⚠️ For versions before 5.1.0:

function parsePosition(options) {
  let textWarning = '';
  if (options.facePosition == 'too far') {
    textWarning = 'Bring camera closer';
  } else if (options.facePosition == 'too close') {
    textWarning = 'Move camera back a little';
  } else if (options.facePosition == 'out of frame') {
    textWarning = 'Parts of the face are covered';
  } 
  return textWarning
}

Rotation feedback

function parseRotation(options) {
  let textWarning = '';
  if (options.faceRotation == 'turn left') {
    textWarning = 'Turn your head to the left';
  } else if (options.faceRotation == 'turn right') {
    textWarning = 'Turn your head to the right';  
  } else if (options.faceRotation == 'incline left') {
    textWarning = 'Incline your head to the left';
  } else if (options.faceRotation == 'incline right') {
    textWarning = 'Incline your head to the right';
  } else if (options.faceRotation == 'incline up') {
    textWarning = 'Lift your head';
  } else if (options.faceRotation == 'incline down') {
    textWarning = 'Lower your head';
  }  
  return textWarning
}

Illumination feedback

function parseIllumination(options) {
  let textWarning = '';
  if (options.faceIllumination == 'too contrast') {
    textWarning = 'Too much contrast. Adjust lighting conditions';
  } else if (options.faceIllumination == 'too dark') {
    textWarning = 'Too dark. Blue dots should disappear';
  } else if (options.faceIllumination == 'too light') {
    textWarning = 'Too bright. Red dots should disappear';
  }
  return textWarning
}
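
Putting the Level 3 pieces together, one possible approach is to display only the first non-empty warning, since at most one actionable message is usually relevant at a time. Here is a minimal sketch, assuming a single feedback element with the hypothetical id liqa-feedback:

// Hypothetical wiring of the Level 3 parsing: show the first non-empty
// warning in a single feedback element (the id 'liqa-feedback' is an assumption).
liqa.qualityStatus$.subscribe((options) => {
  const warnings = updateStatus(options);
  const firstWarning = warnings.find((text) => text !== '') || '';
  document.getElementById('liqa-feedback').textContent = firstWarning;
});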
