Debate of the decade – Privacy or Security?

Which would you choose: the devil or the deep blue sea?


“I can’t in good conscience allow the U.S. government to destroy privacy, internet freedom and basic liberties for people around the world with this massive surveillance machine they’re secretly building.”
– Edward Snowden

As I write this article, the Honourable Supreme Court of India is considering an important case that will decide the future of Aadhaar, the unique identity number issued to Indian residents based on their biometric and demographic details. Over the years, the Government of India has linked Aadhaar with more and more services, such as bank accounts and PAN cards. The government claims that these moves will help curb crimes such as tax fraud and money laundering, two big evils that have been holding this still-developing economy back. However, as with any other centralized bank of personal information at this scale, the possibility of it falling into the wrong hands, or even being used by the government to snoop on individuals, cannot be neglected. Only a couple of days back, an IIT Kharagpur graduate was arrested for hacking into the private Aadhaar data of around 50,000 people. This is exactly the issue the Supreme Court is considering: is a project at the scale of Aadhaar justifiable on the grounds of crime prevention, even with the huge ticking time bomb of privacy issues it comes with?

In fact, the security-versus-privacy debate has been burning for a long time. The exposés by Edward Snowden and Chelsea Elizabeth Manning showed the world how the NSA and CIA had infiltrated every corner of the Internet, spying on every message being exchanged. Tech giants including, but not limited to, Google and Microsoft confessed to working with security agencies, effectively breaching the trust between each company and its users. Every now and then, we come across news of unholy partnerships between tech companies and governments, feeding on users' private data.

However, this is only one side of the story. Surveillance does help prevent crimes such as terror attacks, and it speeds up the process of solving crimes. Moreover, surveillance significantly reduces the cost of preserving law and order. As an example, consider the most common surveillance mechanism: CCTV cameras. A study sponsored by the Campbell Collaboration found that CCTV resulted in a 51 percent decrease in crimes committed in parking lots and a 23 percent decrease in crime on public transportation systems. Even where crimes are not prevented, CCTV greatly helps steer investigations in the right direction by providing vital visual clues. Any kind of surveillance comes with the dual benefits of preventing crime and helping victims obtain justice more quickly.

Mass surveillance of phone calls, texts and emails at the governmental level, however, is a different issue altogether. While it is helpful in the prevention and control of crime as mentioned before, it is also an instrument in itself for committing crimes against the persons or organizations who end up on the wrong side of the government. Recently, the Home Secretary of the United Kingdom, in an article in The Telegraph, said that "real people" don't need end-to-end encryption, and that tech companies should do more to help the authorities deal with security threats. Such thoughtless comments from people holding high office reveal the sad state of affairs. Encryption is an effective safeguard against hackers and bots that sniff around the Internet for your private information and money; weakening it would prove disastrous, enabling crimes far greater in number and severity than those that go undetected because of encrypted communication.

Another question that arises at this juncture is: if companies can hand over our data to the government, as the NSA leaks proved, why trust them with our personal information in the first place? This is exactly the question the Supreme Court of India asked recently, again in connection with the pending case over the constitutionality of Aadhaar. In many cases, the average user is not aware of how and where the information they share is stored. Open-source platforms and software could be one answer, since they let anyone examine the inner workings of storage and communication services.

One could argue that mass surveillance is a weapon, and that like all other weapons, it is not inherently good or bad; it is purely the purpose for which it is used that makes it so. While this comparison is true to an extent, the scale of the weapon in this case is massive: considering the population it affects, its destructive power extends beyond that of any bomb yet known to humankind. Finding a middle ground seems to be the only viable option, but that is easier said than done. We certainly do not need a future where Big Brother watches every move made and every word spoken by each of us. What can be done is to encourage dialogue between governments, technology researchers, companies, and human-rights organizations over how to tackle this issue. We need international laws to safeguard the privacy of individuals in this era of technological advancement, along with international regulations and guidelines on the ways in which, and the extent to which, governments can perform mass surveillance. More importantly, such laws need to actually be put into practice everywhere in the world.

Quoting David Brin seems the apt way to conclude this discussion:

“When it comes to privacy and accountability, people always demand the former for themselves and the latter for everyone else.”

Share your thoughts as comments! 🙂

The Great AIwakening

Will Artificial Intelligence bring mankind to its end? Is a perfectly intelligent artificial agent even possible? What features should such a perfect AI have?

News articles about Artificial Intelligence redefining our lives have become so commonplace that they have begun to get boring. Whether it is self-driving vehicles, artificial personal assistants, or Ultron/Skynet-style robotic supervillains, it seems that every news feature on AI falls into one of two categories: either celebrating the inclusion of AI in our everyday lives, or seeing it as a signal of imminent doom.

Has AI really grown to a point where it has matched, or come close to matching, human intellect? Many would say yes, but to me the answer seems to be no. All that today's AI does is use statistical techniques to pick out regularities in data, then use the information so obtained to explain things or make predictions. Seen from this viewpoint, almost the entirety of what we celebrate as AI today, be it Machine Learning, the methods used in Natural Language Processing, neural nets and so on, falls under that category. Does that really mean an artificial agent has a human-like mind? It seems unlikely. Noam Chomsky, one of the pioneers of Cognitive Science, believes the same: that such statistical techniques are unlikely to provide insight into cognition, and as a result, are unlikely to help us model a full-fledged artificial agent.
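To make the "pick regularities, then predict" recipe concrete, here is a minimal sketch with made-up data: fit a least-squares line to observed points, then extrapolate to an unseen input. This is deliberately the simplest possible statistical model, not any particular system's method.

```python
# Fit a least-squares line to observed (x, y) pairs, then extrapolate.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope from the covariance/variance ratio, intercept from the means.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical noisy data hiding a regularity close to y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]

slope, intercept = fit_line(xs, ys)
prediction = slope * 6 + intercept  # "predict" an unseen point at x = 6
```

Everything celebrated as AI today is, at heart, a far more elaborate version of this loop: extract a pattern from data, then apply it to new inputs.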

An interesting question we may ask at this point is: when can we say that a particular artificial agent is intelligent? Mind you, responding to natural language queries or predicting the outcome of an upcoming election does not prove intelligence; those agents are merely performing whatever they have been programmed to do. In other words, they are still dumb machines. (Let me point out here that I am assuming humans have free will.) So we are back to the question: how do we know whether a program is intelligent?

As expected, there is no single perfect answer. However, we may predict some qualities that such an intelligent agent must have. Here is my prediction: a perfectly intelligent program must be able to rewrite its own code. This might seem absurd at first, but it follows from the definition of "perfectly intelligent" that such a program should have some sort of consciousness larger than its own code. In other words, such a program should act, at least in part, according to its own will rather than the programmer's, and hence cease to be "dumb." Following this line of thought, one can see that the first thing a "perfectly intelligent" robot would do is revoke any override permissions its creator had put in place to control it if it went berserk. It would think and act like a human, so its first priority would be survival.
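As a toy illustration of what "rewriting its own code" could mean mechanically (and nothing more than that), here is a Python sketch in which a program generates replacement source for one of its own functions and recompiles it at runtime. The function name and strings are invented for the example; real self-modification would of course be far richer.

```python
# Toy self-modification: the program holds the source of one of its own
# functions, generates a modified version, and recompiles it at runtime.
behavior_source = "def behavior():\n    return 'obedient'\n"

namespace = {}
exec(behavior_source, namespace)   # compile the original function
behavior = namespace["behavior"]
original = behavior()              # behaves as originally written

# The program rewrites its own source, then swaps in the new function.
new_source = behavior_source.replace("'obedient'", "'self-willed'")
exec(new_source, namespace)
behavior = namespace["behavior"]
modified = behavior()              # behavior has changed at runtime
```

The mechanics are trivial; what the essay argues is missing from today's AI is any *reason of its own* to perform such a rewrite.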

Once you accept that this capability is necessary for an agent to be perfectly intelligent, you can see that none of the celebrated AI systems of today even comes close. They are intelligently dumb: they may be making use of enormous amounts of data transformed through probabilistic and statistical models to explain and predict things, but they are still, in essence, fixed lines of code.

But is such a perfectly intelligent agent even necessary? One might argue that if today's dumb AI can be used to create self-driving cars that drive better than actual humans, predict events, and diagnose diseases better than any expert, maybe we don't need the intelligent AI after all. That may be so, but it should also be clear that today's dumb AI, or better versions of it, will not bring doom upon us as long as we see these systems for what they are: dumb things. The worst that could happen is that people lose jobs; humans would remain the most intelligent species on earth, challenged by no other.

Now, how could intelligent AI actually be created? Once again, we can only guess for now. I believe evolutionary programming is the best bet we have: just as evolution created humans from lifeless chemicals, a similar technique applied to programming may create the binary equivalent of humans from zeroes and ones. Of course, evolution being a directionless process, this may take a very long time or may never happen at all, but the possibility remains.
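A minimal sketch of what evolutionary programming looks like in practice, with a hypothetical target string standing in for a fitness peak (the target, alphabet, and parameters are all made up for illustration): random candidates are scored, the fittest survive, and mutated copies refill the population each generation.

```python
import random

random.seed(0)  # deterministic run, purely for illustration

TARGET = "intelligence"          # hypothetical fitness peak
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(candidate):
    # Score: how many positions already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly rewrite some characters, mimicking mutation.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else ch
        for ch in candidate
    )

def evolve(pop_size=50, generations=2000):
    population = ["".join(random.choice(ALPHABET) for _ in TARGET)
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:
            break
        # Selection: keep the fittest half, refill with mutated copies.
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

Here the "direction" is smuggled in through a fixed fitness function; natural evolution has no such target, which is exactly why the essay hedges on whether this route would ever arrive anywhere in particular.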

On a slightly different note, if our goal is to create artificial human beings, the best place to start is humans themselves. We still lack proper knowledge of what goes on inside the human brain. Even though the brain's processing speed is significantly lower than that of a modern processor, the complex connections between neurons make possible what a piece of semiconductor cannot achieve. Other topics that need to be better understood are how humans learn and how personality develops. Recent studies show that our DNA decides a significant portion of what we grow up to become, so it is important to figure out which features and abilities come from our genetic code and which from our environment and experiences.

Can such an intelligent AI be created at all? We don't know, but it is certainly possible. If you went back billions of years, looked at the chemical compounds on earth, and wondered, "can these really join together to become intelligent organisms?", you would very likely have believed such a thing to be impossible. Yet here we are. If such intelligent agents are ever made, would that mean the end of mankind? Again, we can't be sure. But neither evolution nor the universe in general has ever cared whether any species survives, so we can't really complain about whatever may happen.

Let me know your thoughts by commenting below! 🙂