Tutorial for CLI tool clkhash

For this tutorial we are going to process a data set for private linkage with clkhash, using the command line tool clkutil (equivalent to running python -m clkhash).

Note that you can also use the Python API; a sketch of the equivalent call is shown after the first hashing step below.

The Python package recordlinkage has a tutorial that links data sets in the clear; we will try to duplicate that in a privacy-preserving setting.

First install clkhash, recordlinkage and a few data science tools (pandas and numpy).

$ pip install -U clkhash recordlinkage numpy pandas
[1]:
import json
import numpy as np
import pandas as pd
[2]:
import recordlinkage
from recordlinkage.datasets import load_febrl4

Data Exploration

First we have a look at the dataset.

[3]:
dfA, dfB = load_febrl4()

dfA.head()
[3]:
              given_name  surname   street_number  address_1            address_2          suburb            postcode  state  date_of_birth  soc_sec_id
rec_id
rec-1070-org  michaela    neumann   8              stanley street       miami              winston hills     4223      nsw    19151111       5304218
rec-1016-org  courtney    painter   12             pinkerton circuit    bega flats         richlands         4560      vic    19161214       4066625
rec-4405-org  charles     green     38             salkauskas crescent  kela               dapto             4566      nsw    19480930       4365168
rec-1288-org  vanessa     parr      905            macquoid place       broadbridge manor  south grafton     2135      sa     19951119       9239102
rec-3585-org  mikayla     malloney  37             randwick road        avalind            hoppers crossing  4552      vic    19860208       7207688

Note that for computing this linkage we will not use the social security id column or the rec_id index.

[4]:
dfA.columns
[4]:
Index(['given_name', 'surname', 'street_number', 'address_1', 'address_2',
       'suburb', 'postcode', 'state', 'date_of_birth', 'soc_sec_id'],
      dtype='object')
[5]:
dfA.to_csv('PII_a.csv')

Hashing Schema Definition

A hashing schema instructs clkhash how to treat each column when generating CLKs. A detailed description of the hashing schema can be found in the API docs. We will ignore the columns ‘rec_id’ and ‘soc_sec_id’ for CLK generation.

[6]:
with open("_static/febrl_schema_v2_overweight.json") as f:
    print(f.read())
{
  "version": 2,
  "clkConfig": {
    "l": 1024,
    "kdf": {
      "type": "HKDF",
      "hash": "SHA256",
        "info": "c2NoZW1hX2V4YW1wbGU=",
        "salt": "SCbL2zHNnmsckfzchsNkZY9XoHk96P/G5nUBrM7ybymlEFsMV6PAeDZCNp3rfNUPCtLDMOGQHG4pCQpfhiHCyA==",
        "keySize": 64
    }
  },
  "features": [
    {
      "identifier": "rec_id",
      "ignored": true
    },
    {
      "identifier": "given_name",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 300}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "surname",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 300}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "street_number",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits": 300}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "address_1",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  300} }
    },
    {
      "identifier": "address_2",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  300} }
    },
    {
      "identifier": "suburb",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  300} }
    },
    {
      "identifier": "postcode",
      "format": { "type": "integer", "minimum": 100, "maximum": 9999 },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  300} }
    },
    {
      "identifier": "state",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 3 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 300} }
    },
    {
      "identifier": "date_of_birth",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  300}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "soc_sec_id",
      "ignored": true
    }
  ]
}

Validate the schema

The command line tool can check that the linkage schema is valid:

[7]:
!clkutil validate-schema _static/febrl_schema_v2_overweight.json
schema is valid

Hash the data

We can now hash our Personally Identifiable Information (PII) data from the CSV file using our defined linkage schema. We must provide two secret keys to this command; the same keys have to be used by both parties hashing data. For this toy example we will use the keys ‘key1’ and ‘key2’. For real data, make sure that the keys contain enough entropy, as knowledge of these keys is sufficient to reconstruct the PII from a CLK! Also, do not share these keys with anyone except the other participating party.
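
One simple way to obtain keys with enough entropy is Python's standard secrets module; a minimal sketch (the values are generated fresh here, so both parties would have to agree on them and exchange them over a secure channel):

import secrets

# Generate two random 256-bit secrets, hex-encoded.
# Both parties must use exactly the same values when hashing.
key1 = secrets.token_hex(32)
key2 = secrets.token_hex(32)
print(key1, key2)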

[8]:
!clkutil hash PII_a.csv key1 key2 _static/febrl_schema_v2_overweight.json clks_a.json
generating CLKs: 100%|█| 5.00k/5.00k [00:00<00:00, 1.06kclk/s, mean=949, std=9.82]
CLK data written to clks_a.json
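
As mentioned at the start, this hashing step can also be performed through the Python API instead of the shell; a rough sketch, assuming the module layout of the clkhash version used here (check the API docs of your installed version for the exact function names and signatures):

from clkhash import clk, schema

# Load the linkage schema and hash the PII CSV in-process (sketch only)
with open('_static/febrl_schema_v2_overweight.json') as schema_file:
    linkage_schema = schema.from_json_file(schema_file)

with open('PII_a.csv') as pii_file:
    clks_a = clk.generate_clk_from_csv(pii_file, ('key1', 'key2'), linkage_schema)

print(len(clks_a))  # one CLK per record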

Inspect the output

clkhash has hashed the PII, creating a Cryptographic Longterm Key (CLK) for each entity. The stats output shows that the mean popcount (number of bits set) is quite high (949 out of 1024), which can affect accuracy.

To reduce the popcount you can modify the individual ‘numBits’ values for the different fields. This lets you tune the contribution of each column to the CLK, and can be used to de-emphasise columns which are less suitable for linkage (e.g. information that changes frequently).

[9]:
!clkutil describe clks_a.json
    ----------------------------------------------------------------------------------------------------------------------------
    |                                                        popcounts                                                         |
    ----------------------------------------------------------------------------------------------------------------------------

 461|                                         o o
 437|                                         o o o
 413|                                        oo o o
 389|                                        oo o o
 364|                                        oo o o
 340|                                        oo o o o
 316|                                        oo o o o
 292|                                        oo o o o
 267|                                        oo o o o o
 243|                                      o oo o o o o
 219|                                      o oooo o o o
 195|                                      o oooooooo o
 170|                                    o oooooooooo o
 146|                                    o oooooooooooo
 122|                                    o oooooooooooo o
  98|                                  o oooooooooooooooo
  73|                                  ooooooooooooooooooo
  49|                                o ooooooooooooooooooo
  25|                           o o oooooooooooooooooooooo o
   1| ooo ooo    ooo oooooooooooooooooooooooooooooooooooooooooooooo
     -------------------------------------------------------------
     8 8 8 8 8 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
     8 8 9 9 9 0 0 0 1 1 1 2 2 2 2 3 3 3 4 4 4 5 5 5 6 6 6 6 7 7 7
     6 9 2 5 8 1 4 7 0 3 7 0 3 6 9 2 5 8 1 4 8 1 4 7 0 3 6 9 2 5 9
       . . . . . . . . .   . . . . . . . . .   . . . . . . . . .
       1 2 3 4 5 6 7 8 9   1 2 3 4 5 6 7 8 9   1 2 3 4 5 6 7 8 9


------------------------
|       Summary        |
------------------------
|  observations: 5000  |
|min value: 886.000000 |
|  mean : 948.948000   |
|max value: 979.000000 |
------------------------

First, we will reduce the value of numBits for each feature.

[10]:
with open("_static/febrl_schema_v2_reduced.json") as f:
    print(f.read())
{
  "version": 2,
  "clkConfig": {
    "l": 1024,
    "kdf": {
      "type": "HKDF",
      "hash": "SHA256",
        "info": "c2NoZW1hX2V4YW1wbGU=",
        "salt": "SCbL2zHNnmsckfzchsNkZY9XoHk96P/G5nUBrM7ybymlEFsMV6PAeDZCNp3rfNUPCtLDMOGQHG4pCQpfhiHCyA==",
        "keySize": 64
    }
  },
  "features": [
    {
      "identifier": "rec_id",
      "ignored": true
    },
    {
      "identifier": "given_name",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 200}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "surname",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 200}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "street_number",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits": 200}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "address_1",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  200} }
    },
    {
      "identifier": "address_2",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  200} }
    },
    {
      "identifier": "suburb",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  200} }
    },
    {
      "identifier": "postcode",
      "format": { "type": "integer", "minimum": 100, "maximum": 9999 },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  200} }
    },
    {
      "identifier": "state",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 3 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 200} }
    },
    {
      "identifier": "date_of_birth",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  200}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "soc_sec_id",
      "ignored": true
    }
  ]
}
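
If you prefer not to edit the schema JSON by hand, the same kind of numBits adjustment can be scripted; a minimal sketch (the output filename is made up for illustration):

import json

with open('_static/febrl_schema_v2_overweight.json') as f:
    schema = json.load(f)

# Reduce every hashed feature's bit budget from 300 to 200,
# matching the reduced schema printed above
for feature in schema['features']:
    if 'hashing' in feature:
        feature['hashing']['strategy']['numBits'] = 200

with open('febrl_schema_v2_reduced_scripted.json', 'w') as f:
    json.dump(schema, f, indent=2)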

[11]:
!clkutil hash PII_a.csv key1 key2 _static/febrl_schema_v2_reduced.json clks_a.json
generating CLKs: 100%|█| 5.00k/5.00k [00:00<00:00, 1.33kclk/s, mean=843, std=13.8]
CLK data written to clks_a.json

And now we will modify the numBits values again, this time de-emphasising the contribution of the address related columns.

[12]:
with open("_static/febrl_schema_v2_final.json") as f:
    print(f.read())
{
  "version": 2,
  "clkConfig": {
    "l": 1024,
    "kdf": {
      "type": "HKDF",
      "hash": "SHA256",
        "info": "c2NoZW1hX2V4YW1wbGU=",
        "salt": "SCbL2zHNnmsckfzchsNkZY9XoHk96P/G5nUBrM7ybymlEFsMV6PAeDZCNp3rfNUPCtLDMOGQHG4pCQpfhiHCyA==",
        "keySize": 64
    }
  },
  "features": [
    {
      "identifier": "rec_id",
      "ignored": true
    },
    {
      "identifier": "given_name",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 200}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "surname",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 64 },
      "hashing": { "ngram": 2, "strategy": {"numBits": 200}, "hash": {"type": "doubleHash"} }
    },
    {
      "identifier": "street_number",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits": 100}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "address_1",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  100} }
    },
    {
      "identifier": "address_2",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  100} }
    },
    {
      "identifier": "suburb",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "strategy": {"numBits":  100} }
    },
    {
      "identifier": "postcode",
      "format": { "type": "integer", "minimum": 50, "maximum": 9999 },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  100} }
    },
    {
      "identifier": "state",
      "format": { "type": "string", "encoding":  "utf-8"},
      "hashing": {"ngram": 2, "positional": true, "strategy": {"numBits": 100}, "missingValue": {"sentinel":  ""}
      }
    },
    {
      "identifier": "date_of_birth",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "strategy": {"numBits":  200}, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "soc_sec_id",
      "ignored": true
    }
  ]
}

[13]:
!clkutil hash PII_a.csv key1 key2 _static/febrl_schema_v2_final.json clks_a.json
generating CLKs: 100%|█| 5.00k/5.00k [00:00<00:00, 9.03kclk/s, mean=705, std=16]
CLK data written to clks_a.json

Great, now approximately half the bits are set in each CLK.

Each CLK is serialized in a JSON-friendly base64 format:

[14]:
# If you have jq tool installed:
#!jq .clks[0] clks_a.json

import json
json.load(open('clks_a.json'))['clks'][0]
[14]:
'unsZ/W7D35s8q759bf77155ean+p8fq96fzf9u9bnXf3rX2gGfntPvR2/tOd314aOvuv/97z+lrY8st+fP8PYVd9/KjZN6rMx+T/O6r/v/Hdvt1f1at2+f+Xe53iX94f9988b3mhTsIQbf+7Xr3Sff71fuze9k3sX++db4d73v0='
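
To verify the reported popcounts yourself, you can decode the base64 strings and count the set bits directly; a small sketch using only the standard library:

import base64
import json

with open('clks_a.json') as f:
    clks = json.load(f)['clks']

# Decode each base64-encoded CLK to bytes and count its set bits (the popcount)
popcounts = [bin(int.from_bytes(base64.b64decode(clk), 'big')).count('1') for clk in clks]
print('mean popcount:', sum(popcounts) / len(popcounts))  # roughly 705 of 1024 bits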

Hash data set B

Now we hash the second dataset using the same keys and same schema.

[15]:
dfB.to_csv('PII_b.csv')

!clkutil hash PII_b.csv key1 key2 _static/febrl_schema_v2_final.json clks_b.json
generating CLKs: 100%|█| 5.00k/5.00k [00:00<00:00, 9.40kclk/s, mean=703, std=19.4]
CLK data written to clks_b.json

Find matches between the two sets of CLKs

We have generated two sets of CLKs which represent entity information in a privacy-preserving way. The more similar two CLKs are, the more likely it is that they represent the same entity.

For this task we will use the entity service, which is provided by Data61. The necessary steps are as follows:

- The analyst creates a new project with the output type ‘mapping’. They will receive a set of credentials from the server.
- The analyst then distributes the update_tokens to the participating data providers.
- The data providers individually upload their respective CLKs.
- The analyst can create runs with various thresholds (and other settings).
- Once the entity service has computed the mapping, it can be accessed by providing the result_token.

First we check the status of an entity service:

[16]:
SERVER = 'https://testing.es.data61.xyz'

!clkutil status --server={SERVER}
{"project_count": 7953, "rate": 2256142, "status": "ok"}

The analyst creates a new project on the entity service by providing the hashing schema and result type. The server returns a set of credentials which provide access to the further steps of the project.

[17]:
!clkutil create-project --server={SERVER} --schema _static/febrl_schema_v2_final.json --output credentials.json --type "mapping" --name "tutorial"
Project created

The returned credentials contain:

- project_id, which identifies the project
- result_token, which gives access to the mapping result, once computed
- update_tokens, one for each data provider, which allow uploading CLKs

[18]:
credentials = json.load(open('credentials.json', 'rt'))
print(json.dumps(credentials, indent=4))
{
    "project_id": "515f737eeaa2d675de19050819361aeedff9e6ac0c32e7a4",
    "result_token": "9305c8261537b248fb053859e8883376c0529f7fae7f9c37",
    "update_tokens": [
        "ccf80bca32a48224c718e43b1539edea11212f8d63bfe6a1",
        "d216baa7cd0e98b9fff6768cb326a33aff6fb54f72d8d619"
    ]
}

Uploading the CLKs to the entity service

Each party individually uploads its respective CLKs to the entity service. They need to provide the project_id, which identifies the correct project, and an update_token.

[19]:
!clkutil upload \
       --project="{credentials['project_id']}" \
        --apikey="{credentials['update_tokens'][0]}" \
        --output "upload_a.json" \
        --server="{SERVER}" \
       "clks_a.json"
[20]:
!clkutil upload \
       --project="{credentials['project_id']}" \
        --apikey="{credentials['update_tokens'][1]}" \
        --output "upload_b.json" \
        --server="{SERVER}" \
       "clks_b.json"

Now that the CLK data has been uploaded the analyst can create one or more runs. Here we will start by calculating a mapping with a threshold of 0.9:

[21]:
!clkutil create --verbose  \
    --server="{SERVER}" \
    --output "run_info.json" \
    --threshold=0.9 \
    --project="{credentials['project_id']}" \
    --apikey="{credentials['result_token']}" \
    --name="CLI tutorial run A"
Entity Matching Server: https://testing.es.data61.xyz
[22]:
run_info = json.load(open('run_info.json', 'rt'))
run_info
[22]:
{'name': 'CLI tutorial run A',
 'notes': 'Run created by clkhash 0.13.1b6',
 'run_id': '136175a389e426974db689d7a604dd39ae56223c428f056e',
 'threshold': 0.9}

Results

After some delay (depending on the size of the data) we can fetch the results with clkutil:

[23]:
!clkutil results --watch \
        --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --run="{run_info['run_id']}" \
        --server="{SERVER}" \
        --output results.txt
State: running
Stage (3/3): compute output
State: running
Stage (3/3): compute output
State: completed
Stage (3/3): compute output
Downloading result
Received result
[24]:
with open('results.txt') as f:
    str_mapping = json.load(f)['mapping']

mapping = {int(k): int(v) for k,v in str_mapping.items()}
print('The service linked {} entities.'.format(len(mapping)))
The service linked 4001 entities.

Let’s investigate some of those matches and the overall matching quality. In this case we have the ground truth so we can compute the precision, recall, and accuracy.

[25]:
with open('PII_a.csv', 'rt') as f:
    a_raw = f.readlines()
with open('PII_b.csv', 'rt') as f:
    b_raw = f.readlines()

num_entities = len(b_raw) - 1  # number of records in PII_b (subtract the CSV header row)

def describe_accuracy(mapping, show_examples=False):
    if show_examples:
        print('idx_a, idx_b,     rec_id_a,       rec_id_b')
        print('---------------------------------------------')
        for a_i in range(10):
            if a_i in mapping:
                a_data = a_raw[a_i + 1].split(',')
                b_data = b_raw[mapping[a_i] + 1].split(',')
                print('{:3}, {:6}, {:>15}, {:>15}'.format(a_i+1, mapping[a_i]+1, a_data[0], b_data[0]))
        print('---------------------------------------------')

    TP = 0; FP = 0; TN = 0; FN = 0
    for a_i in range(num_entities):
        if a_i in mapping:
            if a_raw[a_i + 1].split(',')[0].split('-')[1] == b_raw[mapping[a_i] + 1].split(',')[0].split('-')[1]:
                TP += 1
            else:
                FP += 1
                # as we only report one mapping for each element in PII_a,
                # then a wrong mapping is not only a false positive, but
                # also a false negative, as we won't report the true mapping.
                FN += 1
        else:
            FN += 1 # every element in PII_a has a partner in PII_b


    print('Precision: {:.2f}, Recall: {:.2f}, Accuracy: {:.2f}'.format(
        TP/(TP+FP),
        TP/(TP+FN),
        (TP+TN)/(TP+TN+FP+FN)))
[26]:
describe_accuracy(mapping, True)
idx_a, idx_b,     rec_id_a,       rec_id_b
---------------------------------------------
  2,   2751,    rec-1016-org,  rec-1016-dup-0
  3,   4657,    rec-4405-org,  rec-4405-dup-0
  4,   4120,    rec-1288-org,  rec-1288-dup-0
  5,   3307,    rec-3585-org,  rec-3585-dup-0
  6,   2306,     rec-298-org,   rec-298-dup-0
  7,   3945,    rec-1985-org,  rec-1985-dup-0
  8,    993,    rec-2404-org,  rec-2404-dup-0
  9,   4613,    rec-1473-org,  rec-1473-dup-0
 10,   3630,     rec-453-org,   rec-453-dup-0
---------------------------------------------
Precision: 1.00, Recall: 0.80, Accuracy: 0.80

Precision tells us how many of the found matches are actual matches. The score of 1.00 means that we did perfectly in this respect; however recall, the measure of how many of the actual matches were correctly identified, is quite low at only 80%.
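
As a quick sanity check, these scores follow directly from the counts above (5000 records per dataset, 4001 reported matches, and, as the precision of 1.00 suggests, essentially all of them correct):

# Worked arithmetic for the threshold-0.9 run, assuming all reported matches are true matches
TP, FP, FN = 4001, 0, 5000 - 4001
precision = TP / (TP + FP)   # 1.00
recall = TP / (TP + FN)      # 4001 / 5000 = 0.80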

Let’s go back and create another mapping with a threshold value of 0.8.

[27]:
!clkutil create --verbose  \
    --server="{SERVER}" \
    --output "run_info.json" \
    --threshold=0.8 \
    --project="{credentials['project_id']}" \
    --apikey="{credentials['result_token']}" \
    --name="CLI tutorial run B"

run_info = json.load(open('run_info.json', 'rt'))
Entity Matching Server: https://testing.es.data61.xyz
[28]:
!clkutil results --watch \
        --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --run="{run_info['run_id']}" \
        --server="{SERVER}" \
        --output results.txt
State: running
Stage (2/3): compute similarity scores
State: running
Stage (2/3): compute similarity scores
State: completed
Stage (3/3): compute output
Downloading result
Received result
[29]:
with open('results.txt') as f:
    str_mapping = json.load(f)['mapping']

mapping = {int(k): int(v) for k,v in str_mapping.items()}

print('The service linked {} entities.'.format(len(mapping)))
describe_accuracy(mapping)
The service linked 4975 entities.
Precision: 1.00, Recall: 0.99, Accuracy: 0.99

Great, for this threshold value we get a precision of 100% and a recall of 99%.

The explanation is that when the information about an entity differs slightly between the two datasets (e.g. spelling errors, abbreviations, missing values, …), the corresponding CLKs will differ in some number of bits as well. For the datasets in this tutorial the perturbations are such that only 80% of the derived CLK pairs overlap by more than 90% (the first threshold), whereas 99% of all matching pairs overlap by more than 80%.
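
The amount of overlap between two CLKs can also be computed locally; a rough sketch, assuming a Dice-coefficient-style similarity over the bit arrays (the measure used by the anonlink library):

import base64
import json

def popcount(b: bytes) -> int:
    return bin(int.from_bytes(b, 'big')).count('1')

clks_a = json.load(open('clks_a.json'))['clks']
clks_b = json.load(open('clks_b.json'))['clks']

# Compare an arbitrary pair of CLKs; a true matching pair would score much higher
# than a non-matching pair
a = base64.b64decode(clks_a[0])
b = base64.b64decode(clks_b[0])
common = popcount(bytes(x & y for x, y in zip(a, b)))
print('Dice similarity:', round(2 * common / (popcount(a) + popcount(b)), 3))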

If we keep reducing the threshold value, we will start to observe mistakes in the found matches: the precision decreases (if an entry in dataset A has no match in dataset B but we keep lowering the threshold, eventually a comparison with some entry in B will exceed the threshold, leading to a false match). At the same time the recall will keep increasing for a while, as a lower threshold allows more of the actual matches to be found. However, as our example dataset only contains matches (every entry in A has a match in B), this phenomenon cannot be observed here. With a threshold of 0.72 we identify all matches but one correctly (at the cost of a longer execution time).

[30]:
!clkutil create --verbose  \
    --server="{SERVER}" \
    --output "run_info.json" \
    --threshold=0.72 \
    --project="{credentials['project_id']}" \
    --apikey="{credentials['result_token']}" \
    --name="CLI tutorial run B"

run_info = json.load(open('run_info.json', 'rt'))
Entity Matching Server: https://testing.es.data61.xyz
[31]:
!clkutil results --watch \
        --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --run="{run_info['run_id']}" \
        --server="{SERVER}" \
        --output results.txt
State: running
Stage (2/3): compute similarity scores
State: running
Stage (2/3): compute similarity scores
State: running
Stage (2/3): compute similarity scores
Progress: 100.00%
State: running
Stage (3/3): compute output
State: completed
Stage (3/3): compute output
Downloading result
Received result
[32]:
with open('results.txt') as f:
    str_mapping = json.load(f)['mapping']

mapping = {int(k): int(v) for k,v in str_mapping.items()}

print('The service linked {} entities.'.format(len(mapping)))
describe_accuracy(mapping)
The service linked 4995 entities.
Precision: 1.00, Recall: 1.00, Accuracy: 1.00

It is important to choose an appropriate threshold for the amount of perturbations present in the data.

Feel free to go back to the CLK generation and experiment with how different settings affect the matching quality.

Cleanup

Finally, to remove the results from the service, either delete the individual runs, or remove the uploaded data and all runs by deleting the entire project.

[33]:
# Deleting a run
!clkutil delete --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --run="{run_info['run_id']}" \
        --server="{SERVER}"
Run deleted
[34]:
# Deleting a project
!clkutil delete-project --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --server="{SERVER}"
Project deleted