The Institute for Advanced Legal Studies hosted an excellent seminar by Professor Michael Birnhack from the Faculty of Law at Tel Aviv University, who was talking about "A Quest for a Theory of Privacy".
He pointed out that while we're all very worried about privacy, we're not really sure what should be done. It might be better to pause and review the legal "mess" around privacy and then try to find an intellectually consistent way forward. This seems like a reasonable course of action to me, so I listened with interest as Michael explained that for most people, privacy issues are becoming more noticeable with Facebook, Google Buzz, airport "nudatrons", Street View, CCTV everywhere (particularly in the UK) and so on. (I'm particularly curious about the intersection between new technologies, such as RFID tags and biometrics, and public perceptions of those technologies, so I found some of the discussion very interesting indeed.)
Michael is part of the EU PRACTIS research group, which has been forecasting technologies that will have an impact on privacy (good and bad: PETs and threats, so to speak). They use a roadmapping technique that is similar to the one we use at Consult Hyperion to help our clients plan their strategies for exploiting new transaction technologies, and it is reasonably accurate within a 20-year horizon. Note that for our work for commercial clients, we use a 1-2 year, 2-5 year and 5+ year roadmap. No-one in a bank or a telco cares about the 20-year view, even if we could predict it with any accuracy, and given that I've just read the BBC correspondents' informed predictions for 2011 and they don't mention, for example, what's been going on in Tunisia and Egypt, I'd say that's pretty difficult.
One key focus that Michael rather scarily picked out is omnipresent surveillance, particularly of the body (data about ourselves, that is, rather than data about our activities), with data acted upon immediately, but perhaps it's best not to go into that sort of thing right now!
He struck a definite chord when he said that it might be the new business models enabled by new technologies that are the real threat to privacy, not the technologies themselves. These mean that we need to approach a number of balances in new ways: privacy versus law enforcement, privacy versus efficiency, privacy versus freedom of expression. Trying to set these balances via the courts, without first trying to understand what privacy is, may take us in the wrong direction.
His idea for working towards a solution was plausible and understandable. Noting that privacy is a vague, elusive and contingent concept, but nevertheless a fundamental human right, he said that we need a useful model to start with. We can make a simple model by bounding a triangle with technology, law and values: this gives three sets of tensions to explore.
Law-Technology. It isn't as simple as saying that law lags technology. In some cases, law attempts to regulate technology directly, sometimes indirectly. Sometimes technology responds against the law (e.g., anonymity tools) and sometimes it co-operates (e.g., PETs, a point that I thought I might disagree with Michael about until I realised that he doesn't quite mean the same thing as I do by PETs).
Technology-Values. Technological determinism is wrong, because technology embodies certain values (with reference to the Social Construction of Technology, SCOT). Thus (as I think repressive regimes around the world are showing) it's not enough just to have a network.
Law-Values, or in other words jurisprudence, finds courts choosing between different interpretations. This is where Michael got into the interesting stuff from my point of view, because I'm not a lawyer and so I don't know the background of previous efforts to resolve tensions on this line.
Focusing on that third set of tensions, then, in summary: from Warren and Brandeis' 1890 definition of privacy as the right to be let alone, there have been many more attempts to pick out a particular bundle of rights and call them privacy. Alan Westin's 1967 definition was privacy as control: the claim of individuals, groups or institutions to determine for themselves when, how and to what extent information about them is communicated to others.
This is a much better approach than the property-right approach, where disclosing or not disclosing, "private" and "public", are the states of data. Think about the example of smart meters, where data outside the home provides information about how many people are in the home, what time they are there and so on. This shows that the public/private, in/out, home/work barriers are not useful for formulating a theory. The alternative that he put forward considers the person, their relationships, their community and their state. I'm not a lawyer, so I probably didn't understand the nuances, but this didn't seem quite right to me, because there are other dimensions around context, persona, transaction and so on.
The idea of managing the decontextualisation of self seemed solid to my untrained ear and eye, and I could see how this fitted with the Westin definition of control, taking on board the point that privacy isn't property and it isn't static (because it is technology-dependent). I do think that choices about identity ought, in principle, to be made on a transaction-by-transaction basis, even if we set defaults and delegate some of the decisions to our technology, and the idea that different personae, or avatars, might bundle some of these choices seems practical.
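To make that idea of personae bundling disclosure choices a little more concrete, here is a minimal sketch in Python. Everything in it (the attribute names, the `Persona` class, the example personae) is hypothetical and mine, not from the talk; it just illustrates privacy-as-control: each persona carries default disclosure choices, which can still be overridden for a particular transaction.

```python
# Hypothetical sketch: a persona bundles default per-attribute disclosure
# choices, which remain overridable transaction by transaction.

from dataclasses import dataclass, field


@dataclass
class Persona:
    name: str
    # attribute name -> disclose by default? (undeclared attributes default to False)
    defaults: dict = field(default_factory=dict)

    def disclose(self, attributes, overrides=None):
        """Return only the attributes this persona chooses to reveal,
        after applying any per-transaction overrides."""
        choices = {**self.defaults, **(overrides or {})}
        return {k: v for k, v in attributes.items() if choices.get(k, False)}


me = {"name": "Alice", "age": "34", "email": "alice@example.com"}

shopper = Persona("shopper", {"email": True})  # reveals email only by default
gamer = Persona("gamer", {})                   # reveals nothing by default

print(shopper.disclose(me))                          # {'email': 'alice@example.com'}
print(gamer.disclose(me, overrides={"name": True}))  # {'name': 'Alice'}
```

The point of the sketch is simply that "private" and "public" are not states of the data itself: the same attribute set yields different disclosures depending on which persona, and which per-transaction choices, are in play.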
Michael's essential point is, then, that a theory of privacy formulated by examining definitions, classifications, threats, descriptions, justifications and concepts around privacy from scratch will be based on the central notion of privacy as control, rather than secrecy or obscurity. As a technologist, I'm used to the idea that privacy isn't about hiding data or not hiding it, but about controlling who can use it. Therefore Michael's conclusions from jurisprudence connect nicely with my observations from technology.
An argument that I introduced in support of his position during the questions draws on previous discussions around the real/virtual boundary, noting that the lack of control in physical space means the end of privacy there, whereas in virtual space it may thrive. If I'm walking down the street, I have no control over whether I am captured by CCTV or not. But in virtual space, I can choose which persona to launch into which environment, which set of relationships and which business deals. I found Michael's thoughts on the theory behind this fascinating, and I'm sure I'll be returning to them in the future.
These are the personal opinions of Consult Hyperion and its guests and should not be misunderstood as representing the opinion of its clients or suppliers. To discuss how any of the technologies discussed in this post can benefit your business, please contact Consult Hyperion.