People Don’t Get It

And companies don’t help them, because it’s not in their best interests.

On the other end of the transaction, data subjects also have to navigate a complex set of requirements that change from service to service; for example, what is private by default on Facebook may not be on WhatsApp.  

Each application and service comes with different settings for privacy preferences.  

Making things even more complex, until 2012 Google maintained 60 different privacy policies for the various products and services it offered (Rao, 2012).  

For owners of Social Network Sites (SNS) in particular, there is a significant profit motive to make ‘sharing’ as easy as possible, as more content and users drive increased ad revenue. 

Finally, as more and more services move online only, the penalty associated with opting out is increasing.  For example, renewing a driver's licence online (depending on location) may take less time than renewing in person.  In addition, the Digital Divide comes into play; access to the resources associated with online services can often be a problem in lower-income and / or rural areas, where Internet service can be an added expense or simply unavailable (Norris, 2001).

It is becoming easier for companies to collect and analyze data than in the past, when everything was paper-based.  For example, social network sites are one of the most common forms of computer-mediated communication (CMC), defined as sites where a data subject creates a profile, identifies other users, and explores the site based on those connections (Ellison & others, 2007).  

Such sites generate billions of unique data subject visits a month.

SNS focus on enhancing user connectivity.  They do not necessarily inform users about the privacy risks associated with increasing disclosure of their personal information (PI).  Most SNS do not enable a data subject to control what other users may post about them on the site.  In one study, 58% of participants reported being 'very concerned' that other users might reveal PI about them online without their consent, yet 26% reported a willingness to disclose their friends' photos and comments (Ho, Maiga, & Aïmeur, 2009).

SNS service providers have complete and unrestricted access to the data that users post about themselves and others.  They generate profit from providing these 'free' services by selling advertising based on the specificity of the user profiles that can be created.  The more data a user shares, the more tailored the advertising can be, and the more valuable that dataset is to the company.

Privacy policies are another supporting instrument that organizations use to explicate their information management practices with respect to PI.  

Such policies are used by organizations to communicate with data subjects; as one maneuvers through websites, each site is governed by a different set of policy expectations, resulting in numerous policies to review.  Regardless of whether an organization is obligated to use consent or notice for collection, regulatory authorities treat presenting the data subject with a privacy policy as an implicit requirement and best practice.[1]

Research has sought to evaluate the efficacy of privacy policies, noting that they are often unread, difficult to understand when read, and generally unsupportive of data subjects' decision-making processes (McDonald & Cranor, 2008; Milne & Culnan, 2004).  

As early as 2007, research indicated that only 3% of people reviewed online privacy policies carefully; participants found policies too time-consuming to read and difficult to understand, yet reported feeling more comfortable at sites that have a privacy policy (Cranor & Tongia, 2007).

One study notes that the length of a policy is a factor in how infrequently data subjects review it, concluding that data subjects are unlikely to understand the privacy risk of disclosing information online (McDonald & Cranor, 2008).  There are other structural issues with online privacy policies: first, they are designed to be read by a human and include language that is open to interpretation.  

Second, websites can include any volume of information in the policy, and online it is particularly easy to provide exhaustive detail. 

Combined with differences in presentation, these factors make it difficult for data subjects to determine how a policy may apply and when it might change (Cranor, 2003).  Noting these difficulties, alternatives to privacy policies, such as P3P (the Platform for Privacy Preferences), have been suggested but have not garnered sustained broad adoption, for reasons including design challenges (Cranor, 2003).

Organizations Don’t Get It

Confusion over patchwork legislation and terminology can stall the operationalization of privacy, because roles and responsibilities cannot be assigned.  

If a Chief Privacy Officer is not required by legislation, who is responsible for organizational privacy programs, practices and outcomes?  Ultimately, each organization decides how best to manage programs and when, or if, to track and report on outcomes.  How does a data subject learn about how their information is managed at a given organization, and from whom?  Such processes vary substantially from organization to organization.  

Without access to, or consistency of, this information, it seems unlikely that a data subject could make informed decisions about privacy, or give meaningful consent.

The duality of a privacy professional’s role combined with the variety of organizational cultures results in a number of different combinations of depth, quality, breadth, nature and application of operational privacy.  

Privacy programs have no set criteria, metrics or descriptive qualities.  

The same conditions that enable customization produce a lack of transparency for the data subject.  How do I know if Hotmail and Gmail manage my information in the same way?  Or, if they do it differently, how do I know if that difference matters to me?  Information provided in privacy policies is often vague and lengthy.  

There are other privacy problems that manifest for data subjects when organizations try to respond to privacy requirements under legislation.

Applying privacy legislation to service organizations means that front-line staff should be educated and empowered to discuss privacy with data subjects.  For example, when a store clerk asks for my zip code, s/he should be able to explain where it goes, who has access to it and why.  Moreover, what are the implications of sharing, or not sharing, that information?  Otherwise, a data subject cannot meaningfully consent to sharing it.  Imagine the store lines if this were the case now.  The advent of cloud computing makes consent even more complex, particularly if the cloud services are outsourced or sold through a reseller.  

Privacy legislation sets out the rules for managing information, but this is predicated on the assumption that the initial collection of PI was lawful and appropriate.  Even then, traditional computing schemes like role-based access control (RBAC) are difficult to implement in environments with a hierarchical service delivery model.  For example, one person may work directly with the customer while another is responsible for data input.  The data subject may assume that their point of contact is the only person who will see the data they consent to share.
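
To make the RBAC point concrete, here is a minimal sketch in Python.  The roles, permissions and names are hypothetical, purely for illustration; real deployments are far more granular.

```python
# A toy role-based access control (RBAC) table for a hierarchical service
# delivery model.  Role names and permissions are hypothetical.

ROLE_PERMISSIONS = {
    "account_manager": {"read_pi", "update_pi"},   # the data subject's point of contact
    "data_entry_clerk": {"read_pi", "create_pi"},  # back-office staff the subject never meets
    "auditor": {"read_pi"},                        # oversight adds yet another reader
}

def can_access(role: str, permission: str) -> bool:
    """Return True if the given role holds the given permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# The data subject may assume only their point of contact sees their data,
# but every role here holds read access to their PI:
readers = [role for role in ROLE_PERMISSIONS if can_access(role, "read_pi")]
print(readers)  # ['account_manager', 'data_entry_clerk', 'auditor']
```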

Breach notification requirements vary procedurally.  

For example, the characteristics of what constitutes a breach are not set out in legislation.  Unauthorized access by a staff member may or may not require notification, depending on the organization's practices and internal policies.  

Further, the mechanisms for identifying breaches, such as back-end logging, may themselves increase the risk of breach by creating more records of PI.
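
To illustrate, here is a sketch in Python of why naive audit logging compounds the problem.  The pseudonymized variant shown alongside it is one possible mitigation, not a recommendation; the function names and salting scheme are mine, for illustration only.

```python
import hashlib

# A naive audit log copies the PI it is meant to protect, creating yet
# another record of it.  A pseudonymized log records only an opaque reference.

def naive_log_entry(staff_id: str, record: dict) -> dict:
    # Copies the PI verbatim into the log: the log itself becomes breach material.
    return {"staff": staff_id, "accessed": record}

def pseudonymized_log_entry(staff_id: str, record_id: str) -> dict:
    # Stores only a salted hash of the record identifier, not the PI itself.
    # (Salt management is elided; this is a sketch, not a design.)
    digest = hashlib.sha256(f"example-salt:{record_id}".encode()).hexdigest()
    return {"staff": staff_id, "accessed_ref": digest}

pi = {"name": "Jane Doe", "zip": "90210"}
print(naive_log_entry("clerk-17", pi))              # the log now contains the PI
print(pseudonymized_log_entry("clerk-17", "r-42"))  # the log contains only a reference
```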

Let’s NOT Make More Laws, Please.

Lots of countries have multiple privacy acts, typically sector- or issue-specific.  Some define 'privacy'; others refer to informational privacy.  Some use audits for enforcement; others are complaint-based.  Some legislation imposes fines for violations; other laws allow for civil or even criminal penalties.  Even within a given country, different rules may apply. 

Graham Greenleaf does an excellent job of tracking these each year; here’s 2023.

Read with caution, however, because this inventory includes privacy laws, not necessarily laws which have a privacy impact (e.g. identity assurance requirements, age verification, etc.).  Taking those into consideration, without scientific confirmation, my guess is that the global obligations would triple or even quadruple.

The most critical difference amongst legislation is the mechanism that authorizes the collection of personal information.  There are two types of collection practices: (1) consent, or (2) notice plus (usually) legal authority.

Private sector companies are typically required to use a consent-based collection mechanism.  If a company wants to collect, use and disclose a data subject's personal information as defined in a given law, it must ask for consent first.  The type of consent can vary: it may be written or oral, and it may be explicit or implicit.  

Typically, legislation that governs Government activities operates using a notice function.  This allows Government to bypass consent requirements by providing a notice of collection, which typically states: (a) what information is being collected, (b) the reason for collection, and (c) a contact person for questions.  
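
As a sketch, the two collection bases can be modelled as simple data structures.  Everything here (the class names, fields, and placeholder statute) is my own hypothetical shorthand, not drawn from any particular law.

```python
from dataclasses import dataclass
from enum import Enum

class ConsentForm(Enum):
    WRITTEN_EXPLICIT = "written, explicit"
    WRITTEN_IMPLICIT = "written, implicit"
    ORAL_EXPLICIT = "oral, explicit"
    ORAL_IMPLICIT = "oral, implicit"

@dataclass
class ConsentCollection:
    """Private sector: consent must be obtained before collection."""
    consent: ConsentForm

@dataclass
class NoticeCollection:
    """Government: notice of collection, typically plus legal authority."""
    what_is_collected: str
    reason: str
    contact: str
    legal_authority: str | None = None  # the 'authority' part, where required

# Hypothetical examples of each basis:
consent = ConsentCollection(ConsentForm.ORAL_IMPLICIT)
notice = NoticeCollection(
    what_is_collected="name and address",
    reason="driver's licence renewal",
    contact="privacy@example.gov",            # hypothetical contact
    legal_authority="Highway Traffic Act, s. X",  # placeholder citation
)
```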

See ‘Notices‘ for examples, featuring some of my own terrible personal photography.

The Privacy ‘Police’

Countries with privacy legislation use a variety of enforcement mechanisms that are constantly evolving.  For some, a Privacy Regulator is appointed.  For others, there are civil and / or criminal penalties (Baker & McKenzie & International Association of Privacy Professionals, 2012).  For example, in Canada, the Office of the Information and Privacy Commissioner of Ontario was established under provincial privacy legislation.  In Hong Kong, there are criminal penalties for direct marketing.    

In order to comply with legislation, covered organizations create a variety of policies, standards and procedures.  In some countries, the legislation specifies the need for a Chief Privacy Officer (CPO) or equivalent role; for example, Schedule 1, clause 4.1 of Canada's Personal Information Protection and Electronic Documents Act (PIPEDA) requires a designated individual accountable for compliance.  

In other organizations, privacy is part of another group (security, compliance or legal, for example).  Organizational privacy policies are typically managed through traditional program management procedures that are not specific to privacy: an accountable person, an assigned budget, and a program of regular training and awareness (American Institute of Certified Public Accountants, Generally Accepted Privacy Principles).   

Together, these activities make up a privacy management program run by the CPO (or equivalent).  Once the program is up and running, several mechanisms may be used not only to evaluate the efficacy of day-to-day operations but also to identify new potential privacy impacts on data subjects (as required under legislation).  

Typically, a data subject has no visibility into organizational privacy practices unless legislation requires it.  Data subjects face an increasingly complex computational environment that they must negotiate in order to adequately protect themselves.  In parallel, both Government and private sector organizations face increased external scrutiny from the press and regulatory bodies around the world.  

While there are some technical and policy solutions, to date there is no codified and / or institutionalized mechanism for representing privacy to a data subject.

Privacy IS Important

The world heard for years from a variety of (mostly) Silicon Valley tech executives that privacy was 'dead', didn't matter, or that 'really, people don't care'.  Setting aside the fact that the people who yell the loudest about privacy 'being dead' are the same ones who profit from its death by the billions, privacy does in fact serve four well-established functions in society.

  1. It enables personal autonomy: the ability of an individual to control when information is released to the public.  
  2. It allows individuals to deviate from social or institutional norms.  
  3. It allows for self-evaluation.  
  4. It encourages communication by allowing for limited and protected circumstances of disclosure (Bland, 1968).  

Privacy can also refer to features in physical architecture, such as a 'privacy fence' (Abu-Gazzeh, 1995; Booher & Burdick, 2005; Mustafa, 2010; Witte, 2003).  It can represent a set of engineering requirements for an information management system, such as 'privacy requirements' (Anton, Earp, & Young, 2009; He, Antón, & others, 2003; Kavakli, Kalloniatis, Loucopoulos, & Gritzalis, 2006; Omoronyia, Cavallaro, Salehie, Pasquale, & Nuseibeh, 2013).  It has even been trademarked as a marketing feature, as with 'privacy by design' (Cavoukian, 2009, 2012; Cavoukian & others, 2009; Duncan, 2007; Felten, 2012).  

Frankly, the existence of an ever-growing plethora of privacy legislation across the world further suggests an interest in privacy (DeCew, 1997).

So, What’s the Real Problem?

Privacy is difficult because it is legislated.  

Legislation is open to interpretation, differs from jurisdiction to jurisdiction, and people have different understandings of concepts such as consent.  This is further complicated by the evolution of networked communications, or ubiquitous computing (or AI).

Nonetheless, computational technologies are a valid means of addressing these issues, if we accept that privacy is important, that it is regulated by different legislation (even within the same geographic jurisdiction), and that organizations do not understand how to apply privacy rights or fulfill obligations under these laws.

Concerns about Definitions

I have discussed and consumed endless definitions for privacy, and conclude with the notion of control over information disclosure as the central concept.  

Some scholars disagree.  

For feminist scholars, the very existence of privacy is a cover for gender inequality (Fox-Genovese, 1992; MacKinnon, 1989).  Some scholars prefer a group notion of privacy over the individual-rights notion (Altman, 1975).  These are valid arguments, and they are not undermined by the methodology or formal model presented herein.  

This work does not seek out a specific definition of privacy; it merely acknowledges the phenomenon and the associated interest.  Inspired by Marsh, it also proposes an 'end-state' approach wherein we attempt to define a formal model that behaves the same regardless of definition or outcome.  

We can test the formal model against multiple definitions from multiple sources and observe privacy from any point of view.

Concerns with the Method

The very subjectivity of privacy is one of the reasons that previous studies have withered.  Attempting to base research on one or more definitions of privacy is inherently limiting.  

Think of privacy.  

How would you visualize it?  

How would you seek to explain your expectations and guidelines for information disclosure?  

When asked individually, it is difficult enough for us to conceive of a descriptor for privacy that is clear or concise, let alone consistent.  Instead, I start with a notional idea of privacy that is not discipline-specific: that we as individuals have an interest in privacy.  Even so, the model may suffer from being closely linked to one concept versus another.  

The advantage is we are well aware of these biases, and can surface them here.  

As well, the model can be adjusted as it is tested and refined.  It is possible, then, to devise tests for each definition to determine whether the formal model does indeed bring people closer to an acceptable representation of privacy and their expectations thereof.

The Trickiness of Values

I chose to represent privacy as a continuous variable over a specific range.  

While my diagrams (last post) present the overall schema, my model stays within an even narrower range here, [+1,+9], by focusing on the positive valuation of privacy as a human value.  In a formal model, it is demonstrable that small differences yield significant outcomes (more or less privacy overall, for example).  The notion that the value of privacy is somewhat inflexible, or more important in context, is consistent with this level of sensitivity.   

Different individuals may map their subjective assessments of privacy onto the scale differently; for example, a value of 5 might be high privacy to me, while to another data subject it might be very low privacy.  To that extent, my model sets aside the need for stratification at the outset; this may be a topic for future work. 
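
Here is a minimal sketch of that value range in Python, assuming my own field names and clamping rule; the model itself only fixes the range [+1, +9] and the fact that valuations are subjective.

```python
from dataclasses import dataclass

PRIVACY_MIN, PRIVACY_MAX = 1.0, 9.0  # the model's positive valuation range

@dataclass
class PrivacyValuation:
    """A data subject's subjective privacy valuation in a given context."""
    subject: str
    context: str
    value: float  # continuous, constrained to [+1, +9]

    def __post_init__(self) -> None:
        if not PRIVACY_MIN <= self.value <= PRIVACY_MAX:
            raise ValueError(f"value must lie in [{PRIVACY_MIN}, {PRIVACY_MAX}]")

# The same number can mean different things to different subjects; the model
# deliberately does not stratify these at the outset:
alice = PrivacyValuation("alice", "health records", 5.0)  # 'high' to Alice
bob = PrivacyValuation("bob", "health records", 5.0)      # 'very low' to Bob
```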

Once testable, the formal model may allow observation of anomalies or consistencies in the behaviour of privacy that were not previously observable because they were untestable (and therefore not subject to Popper's refutability principle).  It becomes possible to identify privacy behavioural norms.  To that end, experiments have been designed to test the model with data subjects, to observe behaviour and present those results (later).

This ‘implementation’ of sorts is a first step towards the possibilities presented by using values for formal models in privacy.

Social Thresholds for Privacy

Privacy research across disciplines touches on similar themes: spheres of activity, control, and the individual versus the group.  Inherent in these themes is the notion of values, and the determination that one 'has privacy' or 'does not have privacy'.  This threshold is contextually dependent, consistent with Nissenbaum's theory, on the amount of overall privacy available (Nissenbaum, 2009).  Adapting Marsh's thresholds for trust (Marsh, 1994), the figure below illustrates this for privacy. 

Threshold for Privacy
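
In code, the threshold reduces to a simple contextual predicate.  This is a sketch of the idea in the figure, loosely adapted from Marsh's trust thresholds; the function name and numbers are mine and purely illustrative.

```python
def has_privacy(valuation: float, threshold: float) -> bool:
    """A data subject 'has privacy' when their valuation meets the
    contextually determined threshold (cf. Nissenbaum's contextual integrity)."""
    return valuation >= threshold

# The same valuation clears the bar in one context and not another:
print(has_privacy(5.0, threshold=3.0))  # True: casual browsing, low bar
print(has_privacy(5.0, threshold=7.0))  # False: health data, higher bar
```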

An absence of privacy is not the same as being the subject of total surveillance, which suggests a negative valuation of privacy is possible.  My work is focused on the positive valuation of privacy.  Historically, we each made a determination about our own degree of privacy based on a number of factors, such as those described earlier.  Technology has changed the availability of those factors, and added new ones (described later). 

The notion of this threshold nonetheless underpins the value of privacy.