
Usage

Here is what the flow looks like in a web app using a client-server architecture, which we recommend for security reasons.

  1. Your server/backend sends a request to Lens, validating your Client ID against the Client Validation URL above.
  2. If validation succeeds, it receives a session key with a lifespan of 5 minutes.
  3. Your server/backend passes that session key to the client/frontend.
  4. In your client/frontend (assuming the npm package that loads the SDK is already installed), call setLensSessionKey() to set the current session, then init() to initialize the Lens components and start capturing.
  5. Once you’re happy with the result, use the value returned by capture() to either display it on the frontend or pass it back to the backend for your custom needs.
  6. The SDK runs blur and document-presence checks on the cropped image; you can access the results with getIsDocument() and getBlurStatus().
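The server-side portion of steps 1–3 can be sketched as follows. This is a minimal sketch, not a definitive implementation: the exact request shape and response field names for the Client Validation URL are assumptions, so check them against your validation endpoint before use.

```javascript
// Sketch of the backend step: exchange your Client ID for a session key.
// Assumptions (verify against your Client Validation URL docs): the endpoint
// accepts a JSON POST with the Client ID and returns JSON containing the
// session key under a "session" field.
async function getSessionKey(validationUrl, clientId) {
  const response = await fetch(validationUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ client_id: clientId }),
  });
  if (!response.ok) {
    throw new Error(`Session request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.session; // field name is an assumption
}
```

Your backend would then hand the returned key to the frontend (for example, via an authenticated endpoint of your own) rather than exposing the Client ID in the browser.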

Typical usage flow

1. Initialize Lens

await VeryfiLens.initWasm(sessionToken, CLIENT_ID)

2. Capture an image

await VeryfiLens.captureWasm()

3. Submit image

async function processImage(image, clientId, username, apiKey, deviceData) {
  try {
    const requestOptions = {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        image: image,
        username: username,
        api_key: apiKey,
        client_id: clientId,
        device_data: deviceData,
      }),
    };
    const response = await fetch(PROCESS_DOCUMENT_URL, requestOptions);
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    console.error("Error processing the image:", error);
    throw error; // Re-throw the error for further handling if needed
  }
}
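If you find yourself submitting from more than one place, the request body can be factored into a small helper. This is just a sketch showing the expected JSON shape; the field names mirror the body built inside processImage above, and the helper name is hypothetical:

```javascript
// Hypothetical helper: builds the fetch options used when submitting an
// image for processing. Field names are taken from the processImage body.
function buildProcessRequest(image, clientId, username, apiKey, deviceData) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      image: image,
      username: username,
      api_key: apiKey,
      client_id: clientId,
      device_data: deviceData,
    }),
  };
}
```

Keeping the body construction in one place makes it easier to keep the field names in sync if the processing endpoint's contract changes.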

To see a full integration, check out the sample project we made on GitHub: