Six Steps to Optimizing Your Data Capture Solution

September 6, 2018

4 minute read

Implementing an effective data capture solution is crucial to any enterprise that needs to convert its dormant, unstructured data into readable and searchable content in order to glean new business insights.

Enterprises that successfully employ data capture technology usually find a host of benefits, such as:

Development of New Products and Services

Document capture solutions enable companies to analyze their unstructured data for associations that reveal new client needs. When new opportunities are uncovered, enterprises can create products and services to fill the gap. For example, a bank might use a data capture solution to digitize all of its historical loan records; once that non-searchable, unstructured content can be fed into an analytics engine, the bank may uncover opportunities to deliver new services to customers.

Improved Customer Experience

Companies use automatic data capture to make all customer records readable and usable by their analytics engines. This enables them to improve the client experience in instances where they find duplication of products or inefficient services. For instance, converting unstructured data may reveal that certain clients have purchased insurance on a mortgage in the past, and then similar insurance on a loan at a later date. Using this information, the company can approach the customer to offer a more streamlined and efficient single insurance policy that covers all of the customer’s insurable assets. 

Regulatory Compliance

Robust automated data capture also enables companies to achieve regulatory compliance more easily. For example, under the GDPR, customers have the right to be forgotten: if they switch banks, they can request that the original institution delete all of their records. It would be impossible for the bank to comply with that request if it could not find every content asset containing the customer's PII, and that is only possible if the organization uses a data capture process to convert its unstructured data and make the content in every repository searchable.

At Adlib, we’ve found that in order for enterprises to realize these benefits, there are six critical steps they must take as part of their data capture solution. Read on for a look at how each of these six steps works.

The Six Steps to Optimal Data Capture

Step One: File Identification

The data capture solution begins with an upfront assessment of existing content in order to understand the scope of the project. In this step the company identifies what kind of content it has, where it's located, the volume, the types of languages and the different document types (not just in terms of formats, but in terms of whether they're forms or invoices, or customer correspondence or legal contracts).

And at this early stage it's often possible to do some basic de-duplication to reduce the amount of work required in later steps. For example, if two Word documents have the exact same name, the same date, and the same file size, they are almost certainly duplicates of each other, and all but one copy can be deleted.
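As a rough illustration, a basic de-duplication pass like the one described above could be sketched in Python. This is an assumed approach, not Adlib's implementation: it groups files by name and size, then adds a content hash as an extra safeguard before treating any two files as true duplicates.

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root):
    """Group files by (name, size); confirm suspected duplicates with a content hash."""
    candidates = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            candidates[(path.name, path.stat().st_size)].append(path)

    duplicates = []
    for group in candidates.values():
        if len(group) < 2:
            continue
        # Same name and size is only a hint; hashing the bytes confirms it.
        by_hash = defaultdict(list)
        for path in group:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
        duplicates.extend(g for g in by_hash.values() if len(g) > 1)
    return duplicates
```

Each group returned contains paths whose contents are byte-for-byte identical, so all but one copy in each group can safely be removed.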

Step Two: OCR and Rendering

The next step in the data capture process is to standardize the content. This involves using Optical Character Recognition (OCR) to create searchable PDFs—which means all enterprise content will now be in a single, consistent format. At this stage it is also possible to perform some analysis of the flow of the content within the documents themselves. Many documents don't follow the basic one paragraph after another format—they may have multiple columns, images, or tables.

The technology behind effective data capture enables understanding of the flow of the content (e.g. from the bottom of one column back up to the top of the next). This is critical when analyzing the internal content of files because, for example, a social security number might start at the bottom of one column and continue at the top of the next.
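To make the column-flow idea concrete, here is a simplified sketch of reassembling OCR output into reading order. It assumes the OCR engine has already produced word boxes as `(x, y, text)` tuples; the clustering rule and the `column_gap` threshold are illustrative assumptions, not how any particular product works.

```python
def reading_order(words, column_gap=100):
    """Reassemble OCR word boxes (x, y, text) into reading order:
    each column top-to-bottom, columns taken left-to-right."""
    # Cluster words into columns by their horizontal position.
    columns = []
    for word in sorted(words, key=lambda w: w[0]):
        if columns and word[0] - columns[-1][-1][0] <= column_gap:
            columns[-1].append(word)
        else:
            columns.append([word])
    # Within each column, read top-to-bottom; then join the columns.
    ordered = []
    for column in columns:
        ordered.extend(sorted(column, key=lambda w: w[1]))
    return " ".join(w[2] for w in ordered)
```

With this ordering, a value that starts at the bottom of one column and continues at the top of the next is reassembled correctly instead of being split by a naive top-to-bottom scan of the page.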

Step Three: Classification

Classification entails grouping the normalized content into ‘buckets’ of similar documents. For instance, invoices are grouped together, or legal contracts of different types will be collected together. Employing this type of classification makes documents increasingly relevant to each other. For instance, searching for invoice numbers in an entire data set is less efficient, and may be less successful, than only having to search within those documents classified and grouped as ‘invoices.’ And the same is true when searching grouped legal contracts for legal clauses/language.
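A minimal sketch of this kind of bucketing might use keyword rules, as below. The bucket names and keywords here are hypothetical; a production system would more likely use a trained classifier, but the grouping principle is the same.

```python
# Hypothetical keyword rules for illustration only.
BUCKET_KEYWORDS = {
    "invoice": ["invoice", "amount due", "remit to"],
    "contract": ["agreement", "party", "hereby"],
    "correspondence": ["dear", "sincerely", "regards"],
}

def classify(text):
    """Assign a document to the bucket whose keywords it matches most often."""
    lowered = text.lower()
    scores = {
        bucket: sum(word in lowered for word in words)
        for bucket, words in BUCKET_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"
```

Once every document carries a bucket label, later searches (say, for invoice numbers) only need to scan the "invoice" bucket rather than the entire data set.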

Step Four: Data Extraction

Extraction involves performing operations on related forms of content. For example, with legal contracts the goal might be to identify and extract the party names; for invoices, it might be the invoice number or the amount.

Once values have been extracted, they can be added to the document's metadata. The original copy then no longer needs to stay where it is: the document can be moved from a file share or a legacy system into whatever new system or repository is required, appended with that new metadata so it is easier to find. Metadata containing important extracted values helps speed up decision-making, serve up answers to customers faster, develop new products and services more quickly, and meet regulatory compliance demands.
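For illustration, extraction over a bucket of invoices might look like the sketch below. The field names and regular expressions are assumptions chosen for the example; real document layouts would need patterns tuned per document type.

```python
import re

# Hypothetical patterns for illustration; real layouts need tuning.
PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*(\w+)"),
    "amount": re.compile(r"Total\s*:?\s*\$([\d,]+\.\d{2})"),
}

def extract_metadata(text):
    """Pull key values out of a document's text for use as metadata."""
    metadata = {}
    for field, pattern in PATTERNS.items():
        match = pattern.search(text)
        if match:
            metadata[field] = match.group(1)
    return metadata
```

The resulting dictionary can then be registered as metadata in whatever repository the document is moved to.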

Step Five: QA and Reconciliation

QA and reconciliation is an area that often gets overlooked in the data capture process because of the assumption that technology is a magic bullet and everything can be processed automatically. But especially in projects where tens of millions of files are involved, there are likely to be pieces of data that technology on its own simply can't deal with, so a QA and reconciliation component is mandatory. For the greatest accuracy, the process requires touch points where people can perform manual spot checks on the capture results.
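One simple way to build those touch points is to route every low-confidence result to a reviewer, plus a random sample of everything else. The thresholds and the shape of the result records below are assumptions made for the sketch:

```python
import random

def sample_for_review(results, rate=0.02, low_confidence=0.8, seed=None):
    """Select capture results for manual spot checks: everything the
    engine was unsure about, plus a random sample of the rest."""
    rng = random.Random(seed)
    flagged = [r for r in results if r["confidence"] < low_confidence]
    remainder = [r for r in results if r["confidence"] >= low_confidence]
    sample_size = max(1, int(len(remainder) * rate)) if remainder else 0
    return flagged + rng.sample(remainder, sample_size)
```

The random sample of high-confidence results matters as much as the flagged items: it is how systematic errors that the engine is confidently wrong about get caught.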

Step Six: Upload and Output

The final step in an effective data capture solution is to upload and output—meaning performing an action on the data to help reach project goals.

Examples of upload and output actions include returning the data to its original repository with registered metadata; in a merger or acquisition, extracting values from the acquired content and migrating it to the new systems; or moving content from a legacy system to a cloud environment.
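A minimal sketch of the first kind of action, moving a processed document into a target repository with its registered metadata alongside, might look like this. The JSON-sidecar convention is an assumption for illustration; a real repository would typically accept metadata through its own API.

```python
import json
import shutil
from pathlib import Path

def upload(document, metadata, target_dir):
    """Move a processed document into a target directory and write a
    JSON sidecar file holding its registered metadata."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    destination = target / Path(document).name
    shutil.move(document, destination)
    # The sidecar keeps the extracted values next to the content asset.
    sidecar = destination.parent / (destination.name + ".json")
    sidecar.write_text(json.dumps(metadata, indent=2))
    return destination
```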

The exact nature of these actions will ultimately be determined by the business needs driving the implementation of the automated data capture.

Wrap Up

An effective data capture solution, one that follows these six steps, is essential to converting all of your company’s unstructured data into fuel for its analytics engines. Making that dormant data readable, searchable and ready for analysis enables enterprises to improve business insights, speed up decision-making and product development, enhance the customer experience and reduce compliance risk.
