The Singularity: With technology becoming increasingly prevalent in the 21st century, CHS students and staff evaluate its effects on one’s privacy and security
October 27, 2017
“Hey Alexa, what’s the weather today?”
“Today it is 67 degrees.”
“When will technology ever take over?”
“I’m sorry; I didn’t quite catch that.”
Almost every household across the United States has some form of technology, and many now include some form of Artificial Intelligence (AI). Some believe this exponential increase in technology ownership is a good thing, as it fuels innovation. Others believe it is a negative trend, as it can invade privacy or cause people to become “detached” from reality.
And then there are those who believe in “the Singularity,” a hypothetical day in which AI will finally take over and the human race will become nonexistent.
To argue the validity of these statements would be pointless; instead, one must evaluate the effects of technology on society and the possible future it presents.
For some people, like sophomore Nathan Willman, AI like the Amazon Echo, or “Alexa,” presents a new frontier for technology.
He said, “Now (Alexa has) an update where you can get apps on it. I’ve gotten apps like Ted talk, so I can listen to it… There’s also games, and my friend has an Alexa so I can call her on it.”
Willman also noted the smart assistant has a new feature that lets the user play back recorded audio. While he said it can feel a bit creepy at times, he said he has never feared Alexa invading his privacy.
Furthermore, computer science teacher Domingo David said he does not believe AI plays a large role in privacy, as it is mainly up to how users choose to utilize certain technologies.
“I don’t know if it’s because of AI or if it’s because, as a society, we do end up putting more (information) online, but I don’t think AI has much to do with (hacking). I think it’s mainly just people,” David said.
David later said he also believed that while the overall level of cybersecurity can be said to have neither increased nor decreased, it has definitely been moving in the right direction.
“People are just merely catching onto this trend (of cybersecurity),” he said, “but it can be considered ramped up because more and more people are using and aware of it.”
According to a 2017 study conducted by Pew Research Center, however, internet users have less and less faith in companies’ ability to protect their information, with 49 percent of those surveyed noting a perceived decrease in personal information security compared to five years ago.
Moreover, the Yahoo and Equifax security breaches did not help boost internet users’ trust. According to Quartz Media, more than 140 million records from Equifax were compromised; however, that is a small number in comparison to Yahoo’s record-breaking breach, in which over 3 billion records were compromised. Both of these breaches came to light just this year.
Madison Rosen, Cybersecurity Club president and senior, said she agrees with this study’s findings.
Rosen said via email, “Cyber-attacks have been more frequent and more destructive. The dynamic nature of technology makes it hard to keep pace with hackers. Hackers have the advantage, since they only have to get in once but we must be right 100 percent of the time.”
While the advent of Alexa has made checking the weather or playing music easier than ever before, according to Wired, these devices may also present a scary reality: You are always being watched, or, at least, listened to.
DOWN THE RABBIT HOLE
One incident in early March of 2017 highlights this new development. In the trial of James Andrew Bates, who was charged with first-degree murder, Amazon ultimately decided to hand over audio files recorded by the smart assistant device in Bates’s home, according to the Associated Press.
According to CNET, the audio files provided proof that Bates had been with friends during the time of the murder discussing “football.”
This raises the question: Just how much is Alexa really dedicated to your privacy?
Lorrie Faith Cranor, professor of computer science and of engineering & public policy at Carnegie Mellon University, said via email, “Smart assistants that collect our personal data certainly can be used against us. Whether companies are actually using them for purposes that benefit the company and not their owners remains to be seen.”
Rosen said she also believes one must first understand how stored information will be utilized before one can answer the question of whether AI is truly safe.
Even with these recent developments, however, the same 2017 Pew Research Center study reported only 30 percent of adults truly worry about their cybersecurity.
Willman said he believes cybersecurity is comparable to a figurative wall that protects one’s personal information.
However, Rosen said she believes cybersecurity entails more than just protection of information.
“Cybersecurity is the body of technologies, processes and practices designed to protect networks, computers, programs and data from attack, damage or unauthorized access,” she said.
Moreover, David said he thinks cybersecurity is mainly affected by users themselves.
“The user definitely has a lot to do with cybersecurity,” David said. “If somebody used ‘password’ as their password, there’s nothing we can really do about it, but at the same time, the people who are designing the systems should be able to design against dumb users, if you want to call them that.”
Even with the potential threats technology may present to the future of cybersecurity, Willman said AI and technological advancements are coming a lot faster than many expect, and people should be ready for them.
On Sept. 12, 2017, Phil Schiller, Apple’s senior vice president of worldwide marketing, announced plans for Apple’s new iPhone X, which features new facial recognition software.
According to an NPR article published in September 2017, the implications of this new software may not be all that positive, as past experiences with facial recognition have not been ideal.
David said that while he agrees the new software may not be safer, he believes it will work just as well.
Willman, however, said he believes facial recognition will increase safety measures.
“I think (facial recognition software) is a lot safer… I think it’s a step towards the future,” he said.
Even so, with the release of the new iPhone X scheduled for Nov. 3, the true implications of these new privacy measures remain to be seen.
A LIFE IN DIGITAL
It is almost impossible to imagine a future without technology; it has become an integral part of our economy and our livelihood.
David said, “Technological innovation is always a good thing. Whether or not we have to learn how to use certain technology, we should always be looking forward… I am waiting for the day when my car can drive me places. Can you imagine where I could go if I didn’t have to drive?”
Nevertheless, Cranor said she believes this way of thinking may be shortsighted.
“Generally, innovation is good,” she said, “but it is always important to think about the implications of what we are doing.”
Rosen said she also sees both pros and cons of technological advancement.
“Technical innovation is required to advance society. There are so many ways technology has benefitted mankind, from medical advances to farming, but we have (become) more and more reliant on technology, which, if weaponized, can cause great harm,” Rosen said via email.
According to a 2016 article published by The International Association of Privacy Professionals, the key to increasing internet privacy is understanding the core structure of AI as well as how it works.
Cranor and Rosen offer a simpler approach: Learn the proper precautions to take yourself.
“If you are not careful, you can expose your personal information and people may be able to break into your online accounts,” Cranor said. “Individuals can take some steps to protect themselves, for example choosing strong passwords and making sure they don’t use the same password on different accounts.”
Rosen said she also believes people should always be aware of AI and the potential future it presents.
She said, “Technology isn’t inherently good or bad – the way in which the technology is used determines its consequences. AI is no different; it can have great applications, but, if abused, could be used to invade our privacy.”