Privacy is Dead, Long Live Privacy

Like any other grad student with sketchy dial-up service in the late nineties, my hours of work were stored on a local hard drive.  The day it crashed, my ensuing panic was entirely focused on getting my draft thesis back.  I didn’t care about my journals, budget, contacts, or my (failing) diet plan.  But that thesis was the culmination of eleven months of research and writing.  I was lucky; I had paid for the ‘support plan’ and within a couple of hours a tech showed up at my door.  He popped out my drive, managed to restore most of the data, and popped in a new one.  I went back to work, and he went on his merry way.  Which would be the end of the story, except for a phone call I received seven months later from a friendly fellow asking if I wanted him to mail my hard drive back.  He had thought it held dummy data, but when his wife saw the diet plan, she told him I was a real person.

Not knowing much about computers at the time, I still had a profound sense of ownership over that drive, and in due time it arrived in my mailbox.  My career was born in that phone call, and after all these years, I still can’t be certain what ‘privacy’ is.  But I am absolutely certain that everyone cares deeply about it.  Try to open a locked stall door next time you’re in a public bathroom, and you’ll be certain too.

Privacy professionals have spent a lot of time debating the word itself, where the boundaries of legal rights lie, and how to define accountability.  But we still don’t have a good way of measuring how much privacy we have, how much we want, or how much we are willing to trade away for other benefits – like connecting with our friends and family on Facebook.  Our information on privacy comes from lengthy legal documents: terms of use, end user license agreements, acceptable use policies, and so on.  No wonder no one can make an informed decision.  Privacy isn’t just contextual, it’s transitive.

As I mentioned in an earlier post, we mediate what we say, to whom we say it, and where.  These decisions are made the same way as any other expression of a human value: an individual’s choice, informed by culture, tradition, and experience.  This ability to control the expression of our values is critical, but we don’t often consider it until it’s lost.  I certainly didn’t.

And what of other human values?  

Humanity is missing from the technical infrastructure the world has come to rely on.  Machine learning and artificial intelligence applications lay that missing humanity further bare.  If even a small part of what technology companies are working on today becomes the machinery of tomorrow, it will function autonomously.  We have a responsibility to teach this autonomous black box to be obsessed with humanity; to learn to express our values in its language.

How do you teach a machine to trust?  To evaluate and make decisions based on protecting privacy?  And how do you do that when those values vary so dramatically by person, state, religion, or any other organized group we have formed?  Our current technology was designed without that input, and we now know that was a mistake.  How do we design our next infrastructure holistically and at scale?

Measurement.    

There are no real numbers for a human value.

Yet there can be representative ones, and we can use those to demonstrate change.  For example: ‘If you share this information, your available privacy will change from X to Y.’  That’s meaningful.  Clarifying those changes helps us all have a real conversation about what we’re giving up and what we’re getting in return, across domains, disciplines, and even geographic boundaries.
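To make that concrete, here is a minimal sketch of what a representative number could look like in code.  Everything in it is an assumption for illustration: the attribute names, the sensitivity weights, and the 0-to-1 scale are placeholders, not a real privacy model.

```python
# A minimal sketch of a "representative number" for privacy.
# The attributes, weights, and scoring scale are invented for
# illustration; a real model would need empirical grounding.

# Hypothetical sensitivity weights per attribute (0.0 to 1.0).
SENSITIVITY = {
    "name": 0.3,
    "email": 0.4,
    "location_history": 0.9,
    "diet_plan": 0.6,
}

def privacy_score(shared: set[str]) -> float:
    """Score remaining privacy: 1.0 (nothing shared) down toward 0.0."""
    exposure = sum(SENSITIVITY[a] for a in shared)
    total = sum(SENSITIVITY.values())
    return round(1.0 - exposure / total, 2)

def show_tradeoff(shared: set[str], new_attribute: str) -> None:
    """Report the change: 'your available privacy will change from X to Y'."""
    before = privacy_score(shared)
    after = privacy_score(shared | {new_attribute})
    print(f"Sharing '{new_attribute}': privacy changes from {before} to {after}")

if __name__ == "__main__":
    show_tradeoff({"name"}, "location_history")
    # Sharing 'location_history': privacy changes from 0.86 to 0.45
```

The point is not this particular formula.  It’s that a before-and-after number turns an abstract value into something a person can actually weigh against the benefit they’re being offered.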

And if we can begin to measure the presence and absence of human values, we can create the basis for teaching those values to machines.

Which changes completely how we could approach informational privacy at scale. And how AI could learn humanity.