These issues may arise because organizations have treated privacy as a problem to be solved (Baker, 2009, 2012; Dribben, 2012; Nissenbaum, 1998; Orcutt, 2012; Pope, 2010; Solis, 2013; Tavani, 2005). Even the current mechanisms for evaluating and measuring privacy (see the overview in the Case Study) focus on organizational activities.
However, the multitude of legislation that creates this complex environment also presents an opportunity. These laws, together with the cases and orders associated with enforcing them, describe the actors (organizations and roles) and the rules by which they may manage PI. These rules are predictable and are generally expressible as if-then statements. For example: ‘If an organization of type ABC collects information of type X, then the information must be protected.’
In computer science, rules of this kind can be represented in a finite state machine (FSM).
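The if-then rule above can be sketched as a minimal FSM in Python. The state names, event label, and transition table here are illustrative assumptions, not part of the thesis; the point is only that a legal rule maps naturally onto a (state, event) → state transition.

```python
from enum import Enum

# Hypothetical privacy states for a piece of personal information (PI).
class State(Enum):
    UNPROTECTED = "unprotected"
    PROTECTED = "protected"

# Transition table: (current state, event) -> next state.
# The event name encodes the example rule: an organization of
# type ABC collecting information of type X triggers protection.
TRANSITIONS = {
    (State.UNPROTECTED, "org_ABC_collects_type_X"): State.PROTECTED,
}

def step(state, event):
    """Apply one event; events with no matching rule leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = State.UNPROTECTED
state = step(state, "org_ABC_collects_type_X")  # now State.PROTECTED
```

A fuller model would add more states and events drawn from the statutes and enforcement orders, but the transition-table structure would be the same.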
An FSM model can calculate privacy from the perspective of the data subject. Inputs can be derived from a set of factors that together characterize privacy. Some weighting of the inputs may be required, and the model will include pre-defined factor sets (some binary, some ordinal). While some inputs may not be calculable exactly, representative measurement allows a number to be assigned that differentiates one state from another. This thesis conjectures, and sets out to demonstrate, that such a representation can be used in multiple environments: as a mobile app in a networked computing environment (‘ubiquitous computing’), or integrated into a company’s consent process, providing greater transparency for the data subject than the consent box presented by Google and others.
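The weighted-input idea can be illustrated with a short sketch. The factor names, values, and weights below are hypothetical placeholders (none come from the thesis); the sketch only shows how binary and ordinal factors could be combined into a single number used to differentiate states.

```python
# Illustrative factor set: some binary (0/1), some ordinal (e.g. 0-3),
# each with an assumed weight. All names and weights are hypothetical.
FACTORS = {
    "consent_given":       {"value": 1, "weight": 0.4},  # binary
    "data_sensitivity":    {"value": 2, "weight": 0.3},  # ordinal, 0-3
    "third_party_sharing": {"value": 0, "weight": 0.3},  # binary
}

def privacy_score(factors):
    """Weighted sum: a representative measurement, not an exact calculation,
    intended only to distinguish one privacy state from another."""
    return sum(f["value"] * f["weight"] for f in factors.values())

score = privacy_score(FACTORS)  # 1*0.4 + 2*0.3 + 0*0.3 = 1.0
```

Such a score could then serve as the input that drives transitions between states in the FSM.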
In the chapters that follow, I return to the abstract themes of privacy before reviewing scholarship that bears directly on our formal model. I also incorporate a review of existing practitioner tools and methods, evaluating them for effectiveness.