RamShchoryFirstPaper 2 - 27 Apr 2015 - Main.EbenMoglen
You might have considered some other solutions. You could look at Terms of Service; Didn't Read, for example. Approaches that depend on state regulation work only one state at a time, but what can be done through technical and shared social means, without depending on regulation, will be everywhere at once.
There is also no reason why entities cannot emit their privacy policies in a standard machine-readable format, allowing user agents (browsers and other web clients for people) to interpret those policies in user-focused terms, including by offering users an interface to accept or reject sites' and services' offers to them based on the policies that accompany those offers.
Failure to consider technical as well as political and legal solutions to problems will, for reasons I have suggested, lead to incomplete analysis in every case. The best way to improve this draft, in my opinion, is to strengthen its technical side, which will result in the compression of some arguments and the removal of others from the present draft.
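By way of illustration only, here is a minimal sketch of the kind of policy-reading user agent described above, written in TypeScript. The policy format, the field names (dataCollected, sharedWithThirdParties, retentionDays and so on) and the evaluatePolicy function are invented for this example and do not correspond to any existing standard; the sketch only shows that, once policies are published in a predictable structure, checking them against a user’s standing preferences is mechanical.

    // A hypothetical machine-readable privacy policy, as a site might publish it.
    // Field names and values are invented for illustration; no existing standard is assumed.
    interface PolicyDeclaration {
      dataCollected: string[];          // e.g. "email", "location", "browsing-history"
      sharedWithThirdParties: boolean;
      usedForAdvertising: boolean;
      retentionDays: number | "indefinite";
    }

    // The user's own preferences, configured once in the user agent.
    interface UserPreferences {
      refuseThirdPartySharing: boolean;
      refuseAdvertisingUse: boolean;
      maxRetentionDays: number;
      neverCollect: string[];
    }

    type Verdict = { accept: boolean; objections: string[] };

    // The user agent compares the site's declared policy with the user's
    // preferences and reports, in plain terms, why an offer should be refused.
    function evaluatePolicy(policy: PolicyDeclaration, prefs: UserPreferences): Verdict {
      const objections: string[] = [];

      for (const item of policy.dataCollected) {
        if (prefs.neverCollect.includes(item)) {
          objections.push(`collects "${item}", which you never share`);
        }
      }
      if (policy.sharedWithThirdParties && prefs.refuseThirdPartySharing) {
        objections.push("shares your data with third parties");
      }
      if (policy.usedForAdvertising && prefs.refuseAdvertisingUse) {
        objections.push("uses your data for advertising");
      }
      if (policy.retentionDays === "indefinite" || policy.retentionDays > prefs.maxRetentionDays) {
        objections.push(`keeps your data longer than ${prefs.maxRetentionDays} days`);
      }

      return { accept: objections.length === 0, objections };
    }

    // Example: the user agent could surface this verdict before the user
    // clicks "I agree", or refuse the offer automatically.
    const verdict = evaluatePolicy(
      { dataCollected: ["email", "location"], sharedWithThirdParties: true,
        usedForAdvertising: true, retentionDays: "indefinite" },
      { refuseThirdPartySharing: true, refuseAdvertisingUse: false,
        maxRetentionDays: 365, neverCollect: ["location"] }
    );
    console.log(verdict);

On that assumption, the user agent rather than the user does the reading, and the “I agree” moment can be informed by a short, user-focused verdict instead of a wall of legal text.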
RamShchoryFirstPaper 1 - 06 Mar 2015 - Main.RamShchory
Informed Consent to the Use of Private Data
-- By RamShchory - 06 Mar 2015
The Problem – Websites and Apps’ Privacy Policies Do Not Actually Provide Either Notice or Choice
Notice and Choice
The concept of notice and choice is the legal framework designed to ensure that the user of an app or a website (“the software”) is aware of, and agrees to, the fact that she is supplying private information to the software’s operator, and to the uses made of that information.
As a matter of practice, the user receives notice of these uses – including possible privacy infringements – through a privacy policy, and chooses either directly, by a trivial click on an “I agree” button, or implicitly, by merely continuing to use the website.
However, neither real notice nor, consequently, real choice can be found. There is no notice because, although privacy policies may contain the relevant information, they are not practically understandable. To illustrate: it has been estimated that an average user would have to dedicate a month each year to reading all the lengthy privacy policies she encounters, and in fact few do. Moreover, people often mistakenly think that the very existence of a privacy policy means the operator has to keep their information private, but this is clearly not the case. Finally, because these are complex legal documents, many users simply lack the tools to comprehend them (and sometimes they are simply written in a different language).
Analogizing to the informed-consent principles of tort law: when one cannot understand what it is one is consenting to, that is, when one is not properly informed, the consent is an artificial agreement. How can you agree to something – all the more so to something that can potentially hurt you – when you do not understand what that thing is?
The Dangers
The obvious solution to this problem is, of course, to make the notice understandable. However, the mere understanding of the information provided in the privacy policy, and the consent given to the operator to use the information, are not enough: the user must also truly understand the potential consequences and the dangers that may follow from that infringement of her privacy, approved or otherwise.
A Possible Solution – Warning the User of the Possible Dangers
I can think of two possible ways both to notify a user of what others can do with her information and to ensure that she is aware of the dangers.
The Prospectus Way
A first approach would be to use the warning methods adopted in the securities context, that is, a declaration by the company of the risk factors an investor – or, in our case, a user – might incur if she chooses to invest or use. This “prospectus” way of warning differs from today’s privacy policies because it requires the operator to refer to the risks directly. However, it seems insufficient for our purpose, as it would suffer from the same comprehension shortcomings: being too long and written in professional language.
The Cigarette Way
A different way is to borrow the plain-packaging warning approach designed to warn cigarette consumers of the product’s health hazards, that is, to demand a clear, blunt, straightforward and even frightening warning of what might happen and what damage might be caused if the user chooses to agree to the terms of the privacy policy. If one truly believes that a real danger lies in giving away private information, why not shout it from the rooftops? Why not give a clear warning of what might happen and who might do what, and let the user decide whether she wants to take the risk for the rewards it will bring?
Analysis of the Cigarette Solution
Although we might expect this kind of solution to deal with the unawareness problem efficiently, it has substantial costs. First, such a blunt warning might over-deter people from using economically efficient software. Even if we ignore the potential damage to the software’s owners and the political difficulties they may cause in their reluctance to go along with such a solution (which cannot by themselves justify ruling out a desirable regulation), if people are too afraid to use the internet, we might be harming desirable technological development. Second, it is unclear how such a warning would be designed and articulated to users, as the nature of the potential damage is less clear than that of cigarettes. Finally, such a warning might be perceived as an overreaction and be taken lightly, thus achieving the very opposite of what we set out to accomplish. People often do not see the potential harms of a privacy breach as a problem (at least not their own problem), and depicting it bluntly as a very serious one may create antagonism and a reluctance to get to the bottom of the dangers.
So what can we do? I suggest shifting the focus of notice and choice away from attempting to convey the whole message – including both the immense possibilities and the serious dangers – to every user in every piece of software. Rather, I think a simple, clear and thought-provoking message would be much more effective. If a website or an app were required to convey such a simple message, it could prompt the user to give a second thought to the infringement of her privacy and to the potential damage that can come with waiving it. By raising consumers’ awareness, a solution of that sort can inspire thought about this subject and support the rise of an important public debate.
It is unclear whether users would then choose to put pressure on software operators, demand better protection of their privacy, or even demand a prohibition on the use of private data, or whether they would simply ignore the danger and continue their everyday use, benefiting from what technology offers. But as long as people get to think about, discuss and debate this important issue, we are already in a better place than the one we are in today, where everybody simply clicks “I agree”.