This section covers key concepts related to the Find duplicates workflow step.
A cluster is a collection of records that have been identified as representing the same entity using the Find duplicates rules. Each cluster is identified by a unique cluster ID.
Each match between two records is assigned one of the following confidence levels:

| Level | Description |
|---|---|
| Exact (0) | Every individual field that makes up the record matches exactly. |
| Close (1) | Records may have some fields that match exactly and some that are very similar. |
| Probable (2) | Records may have some fields that match exactly, some that are very similar, and some that differ a little more. |
| Possible (3) | The majority of the records' fields have a number of similarities but do not match exactly. |
| None (4) | Records do not match. |
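The parenthesized numbers give the levels a strict ordering, with lower values indicating higher confidence. A minimal sketch of how that ordering allows threshold-style comparisons (the enum and helper names are ours for illustration, not part of the product):

```python
from enum import IntEnum

class MatchLevel(IntEnum):
    """Illustrative only: mirrors the numeric codes in the table above."""
    EXACT = 0
    CLOSE = 1
    PROBABLE = 2
    POSSIBLE = 3
    NONE = 4

def meets_threshold(level: MatchLevel, threshold: MatchLevel) -> bool:
    # Lower numbers mean higher confidence, so "at least as confident
    # as the threshold" is a <= comparison.
    return level <= threshold
```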
If your columns are already tagged, this step will recognize the tags and automatically assign the relevant Find duplicates column mappings. Otherwise, you can assign the column mappings manually, based on your knowledge of the data.
This step will only recognize the following system-defined tags:
It is important to map your columns as accurately as possible before using the Find duplicates step to make the matching process more efficient. For example, mapping a column as Address when it contains primarily company or name information will lead to less accurate results.
Additionally, if your data is already divided into separate address elements, using the more granular mappings such as Premise, Street, and Locality, rather than the higher-level Address mapping, means that less effort is required to identify individual address components.
For more information on how Find duplicates utilizes these column mappings, you can refer to the advanced configuration page.
You can apply different rulesets to columns with the same tag by using group IDs.
For example, you may have delivery and billing addresses that you want to treat differently. You would tag both columns as Address but create separate group IDs, allowing you to apply different rulesets: accept only an exact match for the billing address, but a close one for the delivery address.
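The billing/delivery example above can be sketched as follows; the group IDs, dictionary layout, and helper function are hypothetical illustrations of the idea, not Data Studio configuration syntax:

```python
# Hypothetical group IDs for two columns that share the Address tag,
# each mapped to the set of match results its ruleset accepts.
GROUP_RULES = {
    "billing_address": {"exact"},            # only accept exact matches
    "delivery_address": {"exact", "close"},  # also accept close matches
}

def accept_match(group_id: str, comparison_result: str) -> bool:
    """Return True if this group's ruleset accepts the comparison result."""
    return comparison_result in GROUP_RULES[group_id]
```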
To apply a group ID to one or more columns, use the left-hand side menu in Workflow Designer:
The Find duplicates step creates blocks of similar records to assist with the generation of suitable candidate record pairs for scoring. Blocks are created from records that have the same blocking key values.
Blocking keys are created for each input record from combinations of the record's elements that have been keyed. Keying is the process of encoding individual elements to the same representation so that they can be matched despite minor differences in spelling.
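As an illustration of blocking and keying (not the product's actual keying algorithms), the sketch below keys surname and postcode with a crude vowel-dropping encoding, then groups records that share the same blocking key values:

```python
from collections import defaultdict

def key_element(value: str) -> str:
    """Toy keying: keep the first letter, drop later vowels, so minor
    spelling variants (Smith/Smyth) encode to the same representation."""
    letters = [ch for ch in value.upper() if ch.isalpha()]
    if not letters:
        return ""
    return letters[0] + "".join(ch for ch in letters[1:] if ch not in "AEIOUY")

def build_blocks(records, elements):
    """Group records whose keyed elements produce the same blocking key."""
    blocks = defaultdict(list)
    for record in records:
        blocking_key = tuple(key_element(record[e]) for e in elements)
        blocks[blocking_key].append(record)
    return blocks

records = [
    {"id": 1, "surname": "Smith", "postcode": "SW1A 1AA"},
    {"id": 2, "surname": "Smyth", "postcode": "SW1A 1AA"},  # spelling variant
    {"id": 3, "surname": "Jones", "postcode": "M1 2AB"},
]
blocks = build_blocks(records, ["surname", "postcode"])
```

Records 1 and 2 land in the same block despite the differing surname spellings, so they become a candidate pair for scoring; record 3 does not.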
Click Undefined blocking keys to specify a blocking key set.
To view the default and define your own blocking key sets, go to Glossary > Find Duplicates blocking keys.
Find out how to create your own blocking keys.
A ruleset is a set of logical expressions (rules) that control how records are compared and how match statuses/levels are decided.
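Conceptually, each rule is a logical expression evaluated over per-element comparison results, and the first rule that holds decides the match level. A toy sketch under that reading (the comparison function and rule encoding are ours, not the product's rule language):

```python
def compare(a: str, b: str) -> str:
    """Toy element comparison: exact, close (case/spacing only), or none."""
    if a == b:
        return "exact"
    if a.replace(" ", "").lower() == b.replace(" ", "").lower():
        return "close"
    return "none"

# Each rule pairs a logical expression over element results with a level.
RULESET = [
    (lambda r: all(v == "exact" for v in r.values()), "Exact (0)"),
    (lambda r: all(v in ("exact", "close") for v in r.values()), "Close (1)"),
]

def match_level(rec_a, rec_b, elements):
    """Apply the first rule whose expression holds for the compared pair."""
    results = {e: compare(rec_a[e], rec_b[e]) for e in elements}
    for expression, level in RULESET:
        if expression(results):
            return level
    return "None (4)"
```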
Click Undefined ruleset to specify a ruleset.
To view the default and define your own rulesets, go to Glossary > Find Duplicates rulesets. Find out how to create your own rules.
The following default blocking keys and rulesets are available:
GBR_Individual_Default will find individuals in Great Britain. Note that emails, phone numbers, and other identifiers will not be taken into account, but can be added manually.
GBR_Household_Default will find households in Great Britain.
GBR_Location_Default will find locations in Great Britain.
You can retain your duplicate store on disk so that it can be used for searching and maintenance operations.
Duplicate stores are retained in your machine's Data Studio repository, within the experianmatch sub-directory. However, if you have configured a separate instance of the Find duplicates server, duplicate stores will be retained on that machine instead.
To retain a duplicate store when using the Find duplicates step:
Any duplicate store can be encrypted if you specify this before running the Find duplicates step.
Encrypting the store protects the data on disk while the step is running, and is especially important for duplicate stores that are retained for later use. Non-retained stores are deleted after the step has completed, but encryption still protects them while the results are being processed.