
Privacy in the Digital Age

A controversial educational exercise at St John’s University highlighted the difference between the public perception of privacy and the legal definition of the concept. A law professor at the university asked students to go out into public places and listen to other people’s conversations. The goal of the exercise, which was an optional assignment, was to illustrate the disconnect between the common perception of privacy in public spaces and the legal reality. Students were instructed to see how much information they could find out about a person using only Google as a resource. Many students were surprised by the results, with some reporting that they were able to determine a person’s identity. The lesson became a topic of debate after the professor wrote a New York Times op-ed about it. Much of the debate centered on the ethics of asking students to surveil strangers.

Even though the students’ actions were completely legal, they still constituted a violation of privacy. The people being surveilled believed their privacy was protected by obscurity, a norm that relies on the information being shared being hard for others to understand or obtain. The protections provided by obscurity can be rendered obsolete by the ease with which information gleaned from “private” conversations in public places can be cross-referenced with the huge amount of data on the internet. Despite the shortcomings revealed by the surveillance exercise, privacy via obscurity is still useful when you consider the cost of collecting the information. Unless you are the target of a determined actor, it is very unlikely that someone would be willing to invest the time required to surveil you long enough to find any interpretable information. Communities that are subject to mass surveillance, where anybody can be considered a target, do not have the luxury of relying on obscurity. Fully aware of how transparent their public spaces are, people in these communities experience a fear and uncertainty that inhibits their actions in public.

I took issue with both the methods and conclusions of the aforementioned educational exercise. I don’t believe that the educational value of the exercise merited the non-consensual violation of privacy the students engaged in. The idea that privacy in public is protected not by law but by societal norms could be taught in a different way. The professor concludes the op-ed by saying that the most significant actions people can take are to respect the privacy of others and to watch what they say in public. The first action is already a societal norm, and the second implies that we should act as if we are under mass surveillance.

Privacy in the digital age is not dead, but its preservation will require work. Beyond the time and effort needed to conduct surveillance, the discomfort felt by the students points to another privacy protection. Breaking these societal norms creates an internal conflict that stems from the fact that most people value the ability to have privacy in public places. Resisting the proliferation of facial recognition and other dangerous surveillance technologies is one way we can preserve the norms of privacy. Facial recognition, by dramatically decreasing the cost of surveillance, has the potential to create mass surveillance conditions that reshape our societal norms.

Blockchain and Interoperability

Blockchain’s attributes of immutability and decentralization are key to its ability to store data in a way that is both secure and accessible. While cryptocurrencies and NFTs (non-fungible tokens) have seen a recent rise in popularity, blockchain still has many potential uses that could leave a large mark on a range of industries.

The unique attributes of blockchain offer novel solutions to modern problems. Its security guarantees can help connect the many data silos that exist across industries. For example, a single patient’s health records may be stored in multiple databases, yet a doctor needs access to the patient’s full medical history to provide the best possible care. Storing healthcare records on a blockchain would allow that data to be easily accessed by a doctor while keeping it secure from tampering. Blockchain’s ability to provide a safe platform for sensitive data to flow lets it offer unique answers to many kinds of problems. Electric vehicle batteries can no longer power vehicles once they degrade to 80% of their capacity. To ensure these batteries find a second life, blockchain can be used to support a circular economy: tracking batteries on a blockchain is a cost-efficient way to trace the supply chain and prevent waste.
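To make the tamper-evidence idea a little more concrete, here is a minimal sketch in Python of a toy hash-chained ledger. It is my own illustration, not any particular blockchain platform, and the record fields are hypothetical; a real system would add consensus and replication across nodes on top of this basic structure.

```python
import hashlib
import json


def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class ToyLedger:
    """Append-only chain of records; each block commits to the one before it."""

    def __init__(self):
        self.blocks = []  # list of (record, prev_hash, hash) tuples

    def append(self, record: dict) -> None:
        prev_hash = self.blocks[-1][2] if self.blocks else "0" * 64
        self.blocks.append((record, prev_hash, block_hash(record, prev_hash)))

    def verify(self) -> bool:
        """Recompute every hash; any edited block breaks the chain after it."""
        prev_hash = "0" * 64
        for record, stored_prev, stored_hash in self.blocks:
            if stored_prev != prev_hash or block_hash(record, prev_hash) != stored_hash:
                return False
            prev_hash = stored_hash
        return True


# Hypothetical patient records contributed by two different providers.
ledger = ToyLedger()
ledger.append({"patient": "p-001", "provider": "clinic-a", "entry": "allergy: penicillin"})
ledger.append({"patient": "p-001", "provider": "hospital-b", "entry": "x-ray: clear"})
print(ledger.verify())  # True

ledger.blocks[0][0]["entry"] = "allergy: none"  # tamper with an old record
print(ledger.verify())  # False - the chain no longer validates
```

The point of the sketch is simply that once records are chained by hashes, quietly rewriting history becomes detectable, which is what makes shared records usable across data silos.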

I find blockchain’s security applications particularly interesting. One idea I would like to explore is using blockchain to decentralize the access control mechanisms of a database. A key issue with a traditional centralized access control mechanism is that it is a single point of failure. Decentralizing access control by distributing the stored data can provide a number of safeguards. With access logs stored as transactions on the blockchain, they become immutable, so an attack cannot be hidden after the fact. Decentralization also makes data retrieval harder for an attacker, because they would need to compromise a majority of the nodes on the blockchain.
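As a rough sketch of that idea (again my own illustration, with hypothetical node and event names): access events are appended to independently held copies of the log, and comparing digests across the copies exposes any single node whose history has been rewritten, since a lone attacker cannot alter the majority.

```python
import hashlib
import json
from collections import Counter


def log_digest(entries: list) -> str:
    """Fold a node's access-log entries into a single chained digest."""
    digest = "0" * 64
    for entry in entries:
        payload = digest + json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
    return digest


# Hypothetical access-log replicas held by three independent nodes.
event = {"user": "dr_lee", "action": "read", "record": "p-001"}
nodes = {"node_a": [event], "node_b": [event], "node_c": [event]}

# An attacker who controls only node_c tries to erase the access event.
nodes["node_c"] = []

# Each node publishes its digest; the majority digest exposes the outlier.
digests = {name: log_digest(entries) for name, entries in nodes.items()}
majority_digest, _ = Counter(digests.values()).most_common(1)[0]
tampered = [name for name, d in digests.items() if d != majority_digest]
print(tampered)  # ['node_c'] - the rewritten log is detected and ignored
```

A real deployment would use a proper consensus protocol rather than this simple majority-of-digests check, but the safeguard is the same: hiding an access event requires compromising most of the nodes, not just one.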

AI Solutions to Human Problems

Last year I watched a very thought-provoking documentary series called Pandora’s Box: A Fable from the Age of Science. This six-part series, originally broadcast by the BBC in 1992, examines the consequences of political and technocratic rationalism. The first episode focuses on how the Soviet Union used rational, scientific methods to plan its economy and society. Scientists in the early Soviet era believed they could teach citizens to work and behave in a rational manner. Embracing the view that society could function like a machine, engineers created a rational plan dictating what every single factory in the country would produce. Strict adherence to this plan led to absurd outcomes as society tried to meet the benchmarks and quotas it was assigned. The episode concludes with the original grand plan having devolved into nothing more than a pointless and elaborate ritual.

Many of the game-changing technologies that came up in our class discussion rely on AI. Large quantities of data and complex algorithms are used to create newer, improved models. The limitless potential and grand promises of these AI-powered solutions reminded me of the documentary. The lesson I took from it was not that we should abandon rational solutions entirely, but I do think it is important to be cautious about relying too heavily on AI to solve our problems. AI can help uncover our blind spots, but it can also entrench them further. Amazon came under fire a few years ago when its experimental recruitment tool was found to be discriminatory. The model had been trained on data from the company’s existing workforce, so the tool effectively worked to maintain the status quo. Creating a model that accounts for every possible factor is very difficult; Amazon abandoned its tool because, even after remedying the original flaw, its engineers could never be certain that other biases would not surface in the future.

The trolley dilemma is often brought up when talking about the ethics of autonomous vehicles. Taking life-and-death decisions out of human hands is very dangerous. Another episode of the documentary focuses on the United States’ use of game theory and systems analysis in its nuclear strategy during the Cold War. It cites the Cuban Missile Crisis as a case where following the rational suggestions of the analysts would have been the wrong move: the president’s threat of mutually assured destruction was considered irrational by the analysts, but it ultimately brought the crisis to an end.

I think some of the high expectations people have of AI solutions should be tempered. AI, by virtue of being created by humans, can never truly eradicate human error. Still, its ability to reduce human error can bring significant improvements to society, and I look forward to seeing how the technology is further explored and deployed.