While the Popp and Poindexter approach is common within the CS domain, it fails to recognize instances where security and privacy do not converge and may in fact conflict. Although the authors highlight the typical privacy protections (privacy appliances, data transformations, anonymization, selective revelation, immutable audit, and self-reporting data), they fail to demonstrate an understanding, as Hecker et al. (2008) do, that the best privacy protection is to minimize collection. Other policy research fails to consider the ethical considerations associated with privacy research, suggesting that the individual's role is minimal.
In addition, the authors do not discuss the business purpose behind the programs – a critical legislated privacy requirement is the justification for personal information collection – nor do they question the factual evidence that supported the development of IAO and TIA.
Some similarities exist in ontological approaches.
Each specifies some method of formalized representation of legal requirements, which is significant in privacy – the only legislated area of CS. They all follow the same steps, outlined by Hecker et al., in the creation of a privacy ontology: (1) define a glossary of terms; (2) define static model concepts, including resources, entities, and relationships; (3) identify safeguards to protect resources; and (4) identify the processes that apply.
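The four steps above can be sketched in code. This is a minimal illustrative sketch, not Hecker et al.'s actual ontology; all class names, glossary entries, and the `collect` process are hypothetical, chosen only to show how the steps fit together.

```python
# Illustrative sketch of the four ontology-building steps.
# All names here are hypothetical, not Hecker et al.'s model.
from dataclasses import dataclass, field

# Step 1: define a glossary of terms.
GLOSSARY = {
    "resource": "An item of personal information to be protected.",
    "entity": "An actor that collects, holds, or uses a resource.",
    "safeguard": "A control that protects a resource.",
}

# Step 2: define static model concepts - resources, entities,
# and the relationships between them.
@dataclass
class Safeguard:
    name: str

@dataclass
class Resource:
    name: str
    # Step 3: safeguards identified to protect this resource.
    safeguards: list = field(default_factory=list)

@dataclass
class Entity:
    name: str
    holds: list = field(default_factory=list)  # relationship: entity holds resources

# Step 4: identify the processes that apply. Here, a collection
# process that enforces the legislated purpose requirement noted above.
def collect(entity: Entity, resource: Resource, justification: str) -> None:
    """Record that an entity collects a resource, requiring a stated purpose."""
    if not justification:
        raise ValueError("Collection requires a justification (business purpose).")
    entity.holds.append(resource)

# Usage: a merchant collecting an email address under a stated purpose.
email = Resource("email_address", safeguards=[Safeguard("anonymization")])
merchant = Entity("merchant")
collect(merchant, email, justification="order confirmation")
```

The point of the sketch is that step (4) is where the purpose-justification requirement discussed earlier can be made enforceable rather than merely documented.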
Problems arise upon closer examination.
Hecker et al. note that the very purpose of Web 2.0 – information dissemination – is the antithesis of privacy. They explore how a generic privacy ontology can be used to remake the architecture of e-commerce transactions to be privacy friendly and encourage capitalism, but they do not address the core question: what is the possibility of re-architecting the Internet as we know it, so that Web-based transactions simply did not require the transfer of personal information at all?