Is the age of privacy over? What is at stake when we lose our privacy? How does a lack of privacy affect security, democracy, and society? Maria Armoudian speaks with Helen Nissenbaum, Michael Patrick Lynch, Bruce Schneier, and Joshua Fairfield.

Helen Nissenbaum is a Professor of Information Science at Cornell Tech. She is an expert in privacy and privacy law and is the author of Privacy in Context: Technology, Policy, and the Integrity of Social Life.

Michael Patrick Lynch is a Professor of Philosophy at the University of Connecticut. He is an expert in democracy and the ethics of technology and is the author of The Internet of Us: Knowing More and Understanding Less in the Age of Big Data.

Bruce Schneier is an Adjunct Lecturer in Public Policy at Harvard’s Kennedy School. He is an expert in computer security and privacy and is the author of Click Here to Kill Everybody: Security and Survival in a Hyper-connected World.

Joshua A.T. Fairfield is a Professor of Law at Washington and Lee University. He is an expert in law and technology and is the author of Owned: Property, Privacy, And The New Digital Serfdom.


This interview has been edited for clarity and length.


Maria Armoudian: I want to begin with the concept of privacy itself, why we need privacy, and the compromises we are facing in today’s internet age. Michael Patrick Lynch, how would you contextualise this?

Michael Patrick Lynch: Well, the question of why we need privacy gets to the heart of why we want to protect information, whether it is our own or anybody’s at this point in time. And in general, we think that information is power, knowledge is power, and the ability to collect information, to distribute it, to store it, to consume it, all those activities lead to the ability to control our environment and often control people. So, abstractly speaking, from the point of view of a philosopher, the reason we are interested in privacy often is connected to the more general interest that we have in protecting and controlling information because of its ability to be used to pursue ends that we may or may not approve of.

MA: What are some examples that you can think of?

MPL: Examples range across the sorts of information that are collected about us as we leave our digital trails. Our phones and all the apps that everyone uses are constantly streaming and collecting data about their users, including ourselves: data about where we are, what we are doing, what we are searching. And that goes for all the digital platforms and search engines we use as well. So in a sense, a concrete example is the world of the internet around us, which is a completely personalised information-collecting environment. And it works in part because these companies are able to track the sorts of data that we leave behind. We are interacting in our digital life and our digital environment in ways that allow our information to be used both for benign purposes and sometimes for less benign ones, such as to get us to believe one thing rather than another.

MA: Let’s bring Bruce Schneier in. Bruce, I believe you wrote about privacy as an inherent human right and need. How so?

Bruce Schneier: I can give two answers to your question. The first is that privacy is very much about autonomy and dignity, about being in control of how you present yourself to the world. And privacy violations are a violation of that: how did they know that? How did they get that information when I didn’t tell them? How did I lose control of how I present myself? And that is fundamental to humanity. The way I present myself to my family, my friends, and my work colleagues is different, not because there is something to hide but because each is a different context, and privacy supports my ability to do that. Now I want to give another, very different answer: privacy is essential to human progress. If you think about how we progress as a society, as a species, it is by trying new things and deciding that they are okay. A few years ago, gay marriage was approved in the US; it is legal in all fifty states, and it went from impossible to inevitable. In order to make that transition, it had to have been tried in secret in the past; people were able to experiment, and eventually it was accepted by society and became the norm. We are seeing the same thing with marijuana legalisation. Without privacy we lose the ability as individuals to experiment with change.

MA: How is it that the experimentation contrasts with the public knowledge? Are you suggesting that there has to be a long period of private experimentation?

BS: Imagine that one hundred years ago, fifty years ago, there was no privacy. Gay sex is illegal and anyone who has it goes to jail; the same with marijuana. There is no experimentation, there is nobody who tries pot and says, ‘You know, that wasn’t that bad’, and then tells their friends so that the counterculture builds. If there is no ability to act in private, then anything that is frowned upon today will never be experimented with successfully and society won’t change; we stagnate.

MA: Would you agree Joshua Fairfield?

Joshua Fairfield: I think that is one of the many different ways we use the word privacy. Those uses are loosely grouped around a set of things that humans really care about: our human condition is that we are lone individuals who have to function in a group in order to get anything done in our language, in our culture, and in our society. And so we feel significant anxiety about managing this balance between the group and the individual. One of the many things that we have done with privacy is to try to police that boundary. We don’t have a coherent set of rules that governs everything in privacy, but what we do have is people’s growing unease, the sense that they have lost control of the data flowing from the devices that surround them into this ecosystem.

MA: You have called privacy a public good. How would you explain that for us?

JF: Well, for one thing privacy is something that we need. It is like water or air: it is the kind of thing that humans thrive on when they have it, and you can see that humans don’t do well when they don’t. If you cram too many people into crowded housing conditions, they are not happy. Similarly, with a number of other kinds of surveillance, people simply don’t like it; they don’t thrive when oppressed. But even more than that, for what we call public goods, such as clean air or clean water, often what we see is a tension between what I do and what other people do. What I meant when I said that privacy is a public good is that one of the big problems online is not what we say about ourselves, it is what other people say about us. We are all busy contributing data about each other by carrying devices and by buying items, creating a sort of network of constant dragnet surveillance. We provide lots of good information about each other. I will give you an example: I have never given any internet service my birthday. It is a useless protest; they know it perfectly well anyway, because my wife entered it into her calendar, and at that point they have those details. So when I say privacy is a public good I mean it is like clean water: it is the kind of thing that we have to cooperate to have, and if each of us acts selfishly then it is something that none of us has.

MA: So that actually speaks to the concept of societal trust and the need for trust. Is privacy connected to trust, and why is trust important? Joshua, do you want to start?

JF: Alright, so the relationship between privacy and trust is complicated. On the one hand, privacy is very much ‘I don’t trust you, therefore I am keeping control of certain pieces of information, often about myself, away from you, because I fear you could use them against me, even in a court of law, or I might just be embarrassed’. So if I don’t trust you, I don’t reveal these things. On the other hand, a deep level of trust arises in a society in which people do attend to each other’s needs for privacy, and in that sense the two go hand in hand. For example, trust in government might deepen if we had reasonable rules in place governing when and where the government can access this massive pool of corporately collected data. So the two go hand in hand, but they are not the same thing: I might exercise privacy rights because I don’t trust you, and I might trust a government more when it recognises those rights.

MA: Let’s also talk about a concept Helen Nissenbaum wrote about, which is contextual integrity. What do you mean by that, Helen?

Helen Nissenbaum: I think that question really relates back to what you were discussing before, so I want to address that: what is privacy? Why privacy? And, very importantly, why now? I am pleased that Josh raised the point about clean air, because you could ask a similar question about clean air: how come we were not worried about clean air centuries ago? The answer in that case is that we became aware of it when the level of pollution became so great that it affected people’s lives, and we realised we had been taking something for granted. The theory of contextual integrity is offered as an understanding of what privacy is and why it is valuable. The approach I take is to say that privacy, insofar as it is valuable, is about the appropriate flow of information in society, acknowledging that information flow in society is very important. What causes anxiety is when we see disruptive flows of information: when the assumptions about who can have what information about us, under what conditions, for how long, and what they can do with that information are upended, then we get concerned that privacy is being violated. And because of these enormous technological advances, suddenly the assumptions that we have lived by are being challenged. The approach I take to privacy is a little bit different: I don’t necessarily think that in all cases people have a right to control information; we don’t necessarily have, at all times, the right to choose how we present ourselves to others. Instead I would say, ‘Okay, privacy according to contextual integrity is about the appropriate flow’, and the appropriate flow of information serves individual interests but also promotes vital societal goods and the common good…Privacy can support social institutions, not only the benefit of the individual. So that is what the project of contextual integrity is all about: it maps the concept of the appropriate flow of information as an interpretation of privacy.

MA: Bruce Schneier, how do you respond to this idea?

BS: Yeah, it makes perfect sense, and Helen has spent a lot of time mapping this out. It is a really good way to think about the value of privacy. What you asked at the beginning is what privacy is, and it doesn’t necessarily mean that it is always good. As a society we are constantly balancing the benefits of sharing information against the benefits of keeping it private, and we are going to be balancing that throughout this century; I think it is going to be one of the big discussion points of this century. Take medicine: it is incredibly private information, yet it will be incredibly valuable if it sits in large databases that researchers have access to. Someone mentioned location tracking: we all use Google Maps, which tells us how to get places based on the fact that everyone who uses the app is under surveillance. So privacy doesn’t necessarily mean you always get to have it; sometimes society’s needs outweigh your individual needs. But privacy, I think, is certainly about autonomy and how we present ourselves to the world. We might not always get to choose that; there are a lot of times I don’t get to choose how I present myself to the world, but there are times I do. And it is not about trust: there are things I don’t tell my parents, not because I don’t trust them but because they are my parents; there are things I don’t tell my work colleagues, not because I don’t trust them; and there are things I am not going to say on this show, not because I don’t trust you, but because those are not the right contexts for whatever that disclosure might be.

MA: Michael Patrick Lynch, you have also talked about dehumanisation. It sounds somewhat similar to what Bruce is saying; would you agree?

MPL: In many ways I do. I certainly agree that there is a distinction to be made between the importance of trust and the importance of privacy; we are not talking about the same thing with those two concepts, although they are related. Let me say one thing I think is really important to emphasise in Helen’s account of contextual integrity. One of the things she was saying is that privacy is valuable, insofar as it is valuable, with regard to the appropriate flow of information. And Helen, one of the things I thought was really interesting is that in passing you said that part of the problem we are facing now is that the norms that determine what is appropriate and inappropriate information flow in a variety of contexts are now unsettled. I think that is a really important piece of the puzzle we are facing today: a variety of norms, norms about who should know, how we come to know, what knowing actually is, what it means for me to really have your information. Those sorts of questions are actually making a difference in how we resolve these issues, because it is not clear how we are supposed to answer them in a lot of contexts. And as a result, people can become legitimately confused about whether it is a good thing that people can share information that can then be used by others.

In response to your question about dehumanisation, I think that is a big question. The reason I think a lack of privacy is related to dehumanisation is that when you start to strip away people’s privacy, and this includes their informational privacy, you begin to remove control from them. As Helen said, control is not all there is to privacy; there are times when control is not the issue. But sometimes it is, and when it is, when we strip away people’s ability to make choices with regard to their information, we are slowly stripping away their humanity in a certain way. In the same way, when we put a prisoner in a situation where they have no privacy, where they are visible at all times, we come to look at them not as a person capable of making choices but as an object to be viewed and controlled, not as an autonomous human. That is the fundamental connection I think we see between privacy and dehumanisation, and the concept Joshua mentioned earlier, which is autonomy.

MA: Let’s step now into how much surveillance we are dealing with. What are the worst possibilities or aspects of a loss of privacy, of spying?

BS: I often dislike worst-case examples because then we are just discussing extremes. I think the worst case is the fiery death of everyone on the planet, plus a zombie apocalypse, and at the same time some hot-zone pandemic.

MA: That sounds pretty bad.

BS: It sounds pretty bad. What we are talking about really is surveillance capitalism: capitalism redesigning itself to collect and commoditise our information. And we are almost, though not quite, under ubiquitous surveillance. Our smartphones are incredibly powerful surveillance devices that we willingly put in our pockets every morning. They know where we work, when we sleep, when we get up; they know so many things. So really think about the datafication of all of that being used largely without your knowledge and largely against your interests. The point of surveillance is not to tell you things you want to know but to sell you things you don’t want, and that is the underlying business model: persuasion and control. Governments have those same aims in mind, and that is really the way to think about this ubiquitous surveillance.

MA: That kind of links into something Joshua has also written about: the possibility of digital peasantry, of becoming digital peasants owned by software and advertising companies. How can we understand this concept of feudal security, Joshua?

JF: I think the main historical path to follow is this: one of the big developments of contract law was to move us away from cooperation by status. A local lord would say, ‘You are going to work for me’, and the peasant would say, ‘Well, I would prefer not to do that’, and the response would be, ‘You are a peasant, go do it’. Over many years we developed a system whereby people negotiate the basis for human cooperation: we sign a contract saying I will work for you. That is a well-known shift, what we call the shift from status to contract. But the problem is that we have now shifted all the way back from contract to status again, in the sense that when you click ‘I Agree’, you are not only agreeing that you do not own the device; because of some strange twist of legal fate, you are also agreeing to two other incredibly important things. The first is that you give up your legal rights: you now must go to arbitration, and you often won’t see justice done in that context. The second is that you give up rights to the data streams, the flows of data off the device; you have no control over that. This is caused by a weird confluence of arbitration law, contract law, and intellectual property law. But the end result is that instead of being owners of these devices, we are in fact sources of data to be harvested under this trifecta of arbitration law, online contract law, and intellectual property law.

MA: Helen Nissenbaum?

HN: I want to go back to a point that Michael raised about norms and norm change. The way I think about these norms, which are information-flow norms, is as settled accommodations that we have arrived at. These are moments where we have had to trade privacy off against some other important aim or value. When we share information with physicians, for example, or with our parents, these habits are settled accommodations that are not arbitrary: we have arrived at them over many years, and they are often in place because they promote certain ends or serve certain functions in society. When technology comes along and is completely disruptive, that is the problem we have been confronting over the past few decades. I think the worst thing we ever did was take on board the idea that control involves a two-way contract between whoever the collector is and the data subject. Because once that data reaches some party’s hands, we all know there is this enormous back end of analysis, and then decisions are made about me, even trivial ones. The apocalypse is one thing, but if you apply for a loan and you don’t get it, that could be apocalyptic for your life; or you apply for a job and you don’t get it, and all this back-end activity really defies the norms that have been in place, that have protected us. In the US context, we have a Constitution and Bill of Rights, which essentially recognise that governments are very different in how powerful they are, so we need to put protections in place for individuals, and one of those protections is to limit the information that governments can gather. But when technology comes along and disrupts the settled accommodation, suddenly you have powerful actors that have control over your fate, and you have no idea. The autonomy you have lost isn’t about this particular flow of information from my device; it is about much larger things. So we need to look at the norms; this is something we haven’t done very much of, and it needs to happen.

MA: I wonder, what would some of those policies look like if governments were really trying to protect our privacy? I think this goes to two different strands: one is the sort of corporate data mining we have talked about, and the other is of course surveillance-state data collection. Helen Nissenbaum, what would good public-interest policy look like if privacy were our priority?

HN: First of all, let me say what I would not do, which is figure out better ways of expressing privacy policies. The path we have gone down for the past few decades, which may even have come from the early days of the code of fair information practice principles, creates policy by insisting that it is a two-way relationship between the individual and the data collector, and I don’t think we are going to get a vital good from that kind of relationship. But then the question is what we could do. My own approach tends to be contextual, and I actually think that if we did that kind of analysis well we could achieve much more, because we would be asking questions like ‘What are the values that healthcare should serve?’, ‘What are the values that education should serve?’, and ‘What should the constraints on data flows be, so that we can avoid harming individual interests and at the same time serve these important societal goals?’ So that would be the direction, and then you could have people like myself working with the area experts, who would advise us on the information that is needed in the public interest.

MA: Joshua Fairfield, what would you say to this?

JF: Well, I think that for reasonable policy, a lot of the future of this debate depends on whether or not Europe makes it as a polity and continues to have a strong effect on privacy law worldwide. [Europe] has certainly advanced and tested, for the better part of thirty years, a number of different approaches, and some of them speak to our evolving understanding of this new data environment: things like purpose limitations, which help address the problem that when people sell their data they don’t tend to price in the fact that the data is going to be sold and resold and passed on to third-, fourth-, and fifth-order buyers and so on. If that happens, we lose the plot. So one thing we can practically do is look at the EU and decide whether we want that or not. The other thing that is going to have to change, which a number of people have alluded to, is what it means to consent to anything online, the idea of autonomy: ‘No, it is okay, this person can do X, Y, or Z because I said it is fine, I consented’. And the problem is that everyone knows the current system is broken and nobody has a really good idea of what to do about it. If I had to pick one big policy debate, and it goes really deep, we can say at a simple level that consent is opt-out on one side of the Atlantic and opt-in on the other. We are going to have to come to an agreement about what it means to agree to something, in principle or in part, online, and that debate is only just getting started.

MA: Bruce Schneier?

BS: So, consent is very much the old way of dealing with computers: we had our laptops and our cell phones, we visited websites, we visited things, and we could consent or not. Yes, that is broken, but that world is gone now. You walk into a store, there are cameras; there is no ability to consent. There is an airplane flying over your city collecting surveillance data; there is no button to consent. There is no button to consent when you are tracked by your phone. Computers are now in our environment, and the notion of notice and consent is failing in a major way. When we look at regulating privacy and regulating data, we have to look at all aspects of it, and I did write a book on this, Data and Goliath, where I talked about ways to regulate data collection, data usage, data reuse, data storage, data deletion, and data accuracy. These are all going to be medium-term fixes for what Helen rightly says is a much deeper conversation, one that we need to have but probably won’t any time soon. So we really need to look at the entire data ecosystem and figure out what we can do. And in some areas, we are not going to get privacy. I mean, you can imagine a future where it would be illegal not to put your medical records in the giant global database everyone is using for research, because withholding them would harm everybody. But in some cases your data is only necessary for a short time; I mentioned maps before, where the data is good for ten minutes, after which it is useless. So different data types have different rules, and we will be debating this for the rest of the century.

MA: Helen Nissenbaum?

HN: I just want to respond to Bruce. In the case you gave, going forward, given what we know about diseases and environmental toxins and so forth, it may be required that individuals provide data towards the greater social good, and that could be quite compatible with privacy: we could argue that that flow of information is an appropriate one and therefore compatible with privacy. As for the old-fashioned way computer scientists used to think of privacy, where the release of any data at all counts as a loss, for better or for worse we should use the term secrecy for that, so we don’t confuse ourselves about what we are giving up. Giving up information doesn’t mean giving up privacy.

MA: Michael Patrick Lynch?

MPL: I think the exchange between Helen and Bruce there was really helpful, because it underlines something that has been running through this conversation: on the surface we have been talking about privacy, but again and again we return to more basic, fundamental issues about the flow of information, the flow of data, and the knowledge we glean from that data. Knowledge is power, and one thing we know about power is that it corrupts; the more knowledge we are able to glean from each other’s activities online and offline, the more we open ourselves to the corruption of the power that comes with that knowledge and information.

I think one of the real issues here, which has been dawning on me not just in this conversation but in conversations like it recently, is that the concept of privacy itself is complicated but super valuable, just like the concepts of consent and autonomy. These concepts, perhaps unsurprisingly given the unsettled nature of the norms, are often too rough to function for different purposes. And perhaps because they do not fit perfectly into the keyhole we now want them to fit into, they are not unlocking the puzzle we are really concerned with, which concerns the flow and the control we have not of our privacy but of information in general. That is the problem I think our society is facing right now. And if I could add one other note, which brings us back to the very question of why all this matters: we have talked about surveillance capitalism, which is a super important idea, but the other thing to think about is the weaponisation of information. Information increasingly sits in pools, so to speak, sloshing around out there on the boundaries of the internet, and those pools can increasingly be weaponised, not just by totalitarian governments but by allegedly democratic governments like the US. Witness what has happened with the theft of various tools that the NSA developed to operate on our information: what was taken from the NSA has been used by nefarious actors to, for example, shut down the city of Baltimore. Those sorts of activities are just one illustration of how information can be super dangerous, and therefore the sorts of questions we are asking are vital for us to answer as humans and as democratic citizens.

MA: What would you add to that Joshua Fairfield?

JF: Well, I think that final reference to democracy underscores this entire debate, and it is well worth putting out into the open. It is worth exploring the possibility that the idea that privacy is dead, or that privacy can’t be regulated reasonably, is not true on the facts; we have regimes that can do this. What I think we are hearing, though, is the surveillance capitalism profit model, and it definitely does not want to be regulated. With that comes the idea that maybe it is not simply that this can’t be done by law; maybe we are being told it can’t be done by law by people who profit from us not having reasonable protections in place. And I don’t want to take a political swing here at all, but I had the joy of heading up a conference, and one of the books presented there was on the rise of different populist movements worldwide, and I have to say that a number of governments are coming to power worldwide that would deeply misuse this structure of commercial surveillance once it gets repurposed for government ends. So I do think the two issues flow into one another, and I have often said at talks that any culture that refuses to get this question right isn’t going to exit the century as a liberal democracy.

MA: Since we probably cannot settle the policy question, given all the complications you have laid out, let’s turn to something called obfuscation. Helen Nissenbaum, talk to us about our own power to use obfuscation. What can you tell us about it?

HN: I do want to preface this, because I love that initiative. We call it data obfuscation just to differentiate it from other kinds of obfuscation. But I want to preface my answer by saying that the solution really has to come from policy approaches writ large; if we carry on the way we are carrying on at the moment, data obfuscation is not going to be a solution. The way I think about data obfuscation is that it is one weapon in the toolbox. Right now we have large companies that have grown ever larger in a vacuum of regulation, and they have a monopoly on large quantities of data. Those of us in the field have been following this and shouting about it, but the people who are in power at the moment, and I don’t mean governmental power but the commercial entities, have had an absolute free run and, as far as I am concerned, have been effectively unregulated. What data obfuscation does is allow individuals to use a tool that gets at a little weakness in some of these surveillance systems. These systems are so hungry for data that we don’t really have the power to resist and hide ourselves. But what we can do is feed them fake information; we can provide a lot of information that obfuscates certain activities. Now, it doesn’t apply everywhere, but the reason we think of it as a tool is that the big, powerful advertising players have been able to walk away from any kind of regulation or community agreement because there is nothing on the other side; there is no downside to them behaving as badly as they like. So the idea of data obfuscation is to set up systems whereby people protest by feeding the system inaccurate information, so that the companies counting on this kind of analytic power cannot be certain that the inferences they draw are accurate. We have built a few demonstration projects, and unfortunately one of them has a very sad ending: Google’s Chrome Web Store simply banned us, and there was nothing we could do about it. Yes, we can fight back using data obfuscation, but unless we get protection for this ability, it is not clear how long we can do it for.

MA: So when you say data obfuscation by giving inaccurate data, what would be a concrete example of how to thwart a data-gathering system? Would it be regularly feeding it inaccurate data, like putting really silly things into search engines?

HN: I don’t know about silly things. We do have one product called TrackMeNot: you can download it, and it sends fake queries to the search engines you use. There is a lot of pushback; the community argues that the companies will detect it and just erase the inaccurate queries, and so forth. Another example, the one that got banned, is AdNauseam, and what it does is click on all the ads on any website. Now again, there is so much data that companies like Facebook and Google are getting; they are not just getting it from your behaviour, they are often buying masses of data from each other and from data brokers, so this is a bit of a drop in the bucket. It is not a solution, it is just a tool.
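
To make the mechanism concrete, here is a minimal sketch of what a TrackMeNot-style obfuscator does in principle: it issues plausible decoy searches at irregular intervals so that a profiler cannot tell which queries are genuine. This is an illustration only, not the extension’s actual code; the decoy phrases, search URL, and timing below are assumptions, and the real tool runs inside the browser and draws its phrases from live sources.

```python
# A minimal sketch of TrackMeNot-style query obfuscation, for illustration
# only. The decoy terms, search URL, and timing are assumptions, not the
# real extension's implementation (which runs inside the browser).
import random
import time
import urllib.parse
import urllib.request

# Hypothetical pool of innocuous decoy phrases.
DECOY_TERMS = [
    "weather forecast", "banana bread recipe", "bus timetable",
    "houseplant care", "local library hours", "crossword clues",
]

SEARCH_URL = "https://www.google.com/search?q="  # any search engine endpoint

def send_decoy_query() -> None:
    """Issue one fake search so real queries are hidden in the noise."""
    query = random.choice(DECOY_TERMS)
    url = SEARCH_URL + urllib.parse.quote(query)
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        urllib.request.urlopen(req, timeout=10).read()
        print(f"sent decoy query: {query!r}")
    except OSError as exc:  # network errors, HTTP errors, timeouts
        print(f"decoy query failed: {exc}")

if __name__ == "__main__":
    # Fire decoys at irregular intervals; fixed timing would make the
    # fake traffic trivial for the profiler to filter out.
    while True:
        send_decoy_query()
        time.sleep(random.uniform(30, 300))
```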

MA: Any other ideas for how to deal with this as a society? Josh Fairfield, what do you say?

JF: I think there is one other big decision to be made, and again you can look at the difference between the US and a number of countries in the EU and then make your choice. Everybody wants some degree of protection at the interface between what Google knows and what the government can get its hands on. There are two big models. In the US model, the government is supposed to be restricted in terms of the data it collects and the reasons for which it collects it, but private companies are free to gather almost whatever they want; it is sort of like the Wild West and has been for too long. So in the American system, private companies gather what they want, and the government can then get access to it, either through arrangements smoothed by payments or because they can require disclosure by law. The EU system is different: there, governments are fairly unrestrained in terms of what data they can directly see, and it is the private companies that are more restricted in what they can gather and retain. And I will say this: it has maybe been a more interesting way to do it than the American system, or at least the American system has had this big wound in the middle of its constitutional order, which is that the government is supposed to be restricted but is not really, because it can get its hands on anything a third party has. So with that in mind, I think that is a big policy decision we are going to have to make overall: what is the default here?

MA: Bruce Schneier?

BS: So, you asked what people can do without government help, and the short answer is nothing. This data is not under our control. Advice like ‘don’t have an email address’ or ‘don’t have a cell phone’ is kind of stupid advice for living in the twenty-first century. This is what the market gives us, and without government intervention, this is what we have: a corporate surveillance state that the government piggybacks off. It comes down to how we act collectively as citizens, and that is what is missing here. It is hard to say that in the US, which is very small-government and libertarian, but it is not working for us, and if we want to fix this, we need government to push back. There is not going to be a series of things you can do; certainly things like data-poisoning tricks are helpful, but they work around the edges, they are not going to solve the problem in the main. Government is what has been missing and what we need.

MA: Final words Michael Patrick Lynch?

MPL: What can we do? Well, one thing we can do is educate each other. I share the cynicism many of my colleagues have about our ability, without government, to do very much to address the problem directly, to take direct policy action. There is one exception: I think we need not just to educate ourselves individually but to lobby for and support efforts to educate people more broadly about how the internet actually works, how the flow of information is managed, what our phones are actually doing, what personalisation is, and simply the basic facts of the digital ecosphere that surrounds us all. Without that effort it is going to be very difficult to get people to understand the importance of coming up with policy.


This interview originally aired on the Scholars’ Circle. To access our archive of episodes and download this interview, click here.

For more of our audio and visual content, check out our YouTube channel, or head to the University of Auckland’s manuscripts and archives collection.

Disclaimer: The ideas expressed in this discussion reflect the views of the guests and not necessarily the views of The Big Q. 
