z3r0trust Privacy Newsletter #16
*Note: This article was originally published by the author on September 27, 2020. This article is also available in Spanish here.
“There are only two industries that refer to their customers as ‘users’, technology and the illegal drugs trade…” — Edward Tufte
It is with great delight that I have decided to resurrect the z3r0trust Privacy Newsletter from retirement. I’ve written several privacy-themed pieces, but this series is by far my favorite and the most well-recognized among readers. You may have wondered why I decided to retire it in the first place. Part 15 was originally published on December 22, 2019. The new year was bearing down upon us and I felt it was perhaps a good time to end the series and move on to other writing projects I wanted to explore. Looking back at my decision, it’s not that I regret retiring the series, but I might’ve done better just putting it on the shelf for a time.
So much is happening in the world of digital privacy. It’s too much for one monthly newsletter, awesome though it may be, to attempt to capture. However, I’ll do my best to cover all of the important developments. I know there is much more digital privacy awareness and activism work that needs to be done and I am happy to help spread that awareness. We are at a crucial point in history where we either have to fight for our privacy rights or they will be erased by surveillance capitalism forever. I believe there is no going back once we reach that point. I expect to publish a new monthly installment of Becoming Virtually Untraceable toward the end of each month for the foreseeable future.
Law Enforcement Organizations (LEOs) are attacking crime with tenacity using whatever technology they can afford to get their hands on. I think that LEO and intelligence agency use of technology is a good thing as long as the information is collected in accordance with laws that no person or organization is above. The rights afforded by the Constitution of the United States (or insert whatever country you live in if it has a Constitution) are not open to debate or misinterpretation by police or government agents. They are there to protect citizens, and the government and police are supposed to respect those rights, not find ways to circumvent them or pass legislation that even temporarily diminishes any of those Constitutional rights.
Technology helps police and intelligence agencies solve crimes, prevent cyber espionage and terrorist attacks, and catch sexual predators trafficking in child sexual abuse material (CSAM). That is one of the good results that comes from this pairing. What I don’t agree with, however, is police and intelligence agencies using technology against innocent civilians in the name of “solving crimes” or “national security,” phrases that often get thrown around by these types of agencies as they step all over Constitutional rights like the Fourth Amendment. There is a real problem when you have police departments and federal agencies deploying Stingrays, devices that trick cell phones into connecting to them so that police can slurp up all the data from people’s cell phones.
It is the same for facial recognition systems that routinely misidentify people of color or geofences that provide LEOs with way more information than they need. Sometimes, innocent people’s phone calls and data are collected by police for no other reason than their phone was in the wrong place at the wrong time. This is alarming because it could happen to any of us at any time. We are supposed to be protected against unlawful searches and seizures of our personal property (e.g., your phone data, house, car, other belongings) by the Fourth Amendment.
Police agencies have been getting geofence warrants approved by judges and presenting them to Tech companies like Google, Apple, Uber, and Twitter. These warrants require the companies to turn over any user data that falls within a specific time/date range and a specific geographic area, which is known as a ‘geofence.’ Authorities can then search this data and do whatever they want within the terms of the warrant, but who is to say that police or intelligence agencies stop where they are supposed to? And who else are they sharing that private data with as part of the good faith spirit of law enforcement cooperation? The FBI, DHS, CBP, ICE, ATF, DEA, NSA, CIA, USMS, USSS, or the IRS?
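For a sense of how broad these warrants are, here is a minimal sketch of the kind of filter a geofence warrant effectively asks a provider to run over its location logs. All field names, coordinates, and users below are invented for illustration; real providers obviously store and query this data very differently.

```python
from datetime import datetime

def in_geofence(record, lat_range, lon_range, start, end):
    """Return True if a location record falls inside the warrant's
    bounding box AND its time window (a toy model of a geofence)."""
    return (lat_range[0] <= record["lat"] <= lat_range[1]
            and lon_range[0] <= record["lon"] <= lon_range[1]
            and start <= record["time"] <= end)

# Hypothetical provider location logs (invented data):
logs = [
    {"user": "suspect?",  "lat": 38.9007, "lon": -77.0365, "time": datetime(2020, 9, 1, 14, 5)},
    {"user": "bystander", "lat": 38.9009, "lon": -77.0361, "time": datetime(2020, 9, 1, 14, 7)},
    {"user": "elsewhere", "lat": 40.7128, "lon": -74.0060, "time": datetime(2020, 9, 1, 14, 6)},
]

hits = [r["user"] for r in logs
        if in_geofence(r, (38.900, 38.901), (-77.037, -77.036),
                       datetime(2020, 9, 1, 14, 0), datetime(2020, 9, 1, 15, 0))]
print(hits)  # ['suspect?', 'bystander'] -- the innocent bystander is swept up too
```

The point of the sketch: the filter matches everyone inside the box during the window, which is exactly how innocent people’s data ends up in police hands.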
Alphabet soup was good as a childhood lunch entrée. Alphabet agency acronym soup is not good when it comes to protecting private American citizens’ phone data. It always comes back to the question of who is watching the watchers? We need the watchers [police] to some extent to maintain a peaceful society by enforcing established laws. However, when do their incremental encroachments upon privacy cross the line to the point where they become too much? Well, as it would happen, two federal judges have decided that police have crossed the line with geofence warrants recently, deciding that they violate Fourth Amendment rights (Fussell, 2020).
Unfortunately, we live in a time where some governments think it’s okay to violate the Constitution and spy on their citizens without probable cause or shut off the Internet to disrupt protests. The current administration in the U.S. has even gone as far as to label the loosely affiliated Black Lives Matter “group” a terrorist group?!? When did protesting against fascism make you a terrorist? I guess when the fascists are in power, huh? Not everyone who associates with ANTIFA uses violence to protest. Many ANTIFA supporters silently watch from the sidelines tweeting and posting on social media sites about the social injustices of the current administration. Hoping desperately that something will change. A new presidential election, a new hope for positive change. Will it be enough though or is a more radical, violent approach needed? I don’t condone violence but I can’t deny that our country was founded on it to win our freedom.
These governments enact policies and use strongarm tactics to control the media to limit outside information, control the narrative, and monitor our communications. It shows how desperate the leadership of this country is to control people and just how desperately out of touch with reality they are. If you’re not a gun-toting, redneck, immigrant-hating, white Anglo-Saxon Protestant (WASP) Republican, then you are the enemy! Wait, what now? That’s right.
It’s black or white for some people in the U.S. But this isn’t just about racism, fascism, or politics: your political affiliation and who you ultimately vote for have a lot to do with whether the United States moves toward privacy protections or toward further erosion of all of your rights. Belarus has been experiencing Internet blackouts following citizen protests over what citizens call a rigged election. We shudder to think that such a thing could happen in America, but I wouldn’t be surprised to see it happen. Just keep in mind that everything you do online is being tracked and monitored for monetary gain or government spying purposes.
Tech companies are worried over shifts in privacy rules, oh no! What will they do since their business models revolve around the collection and selling of private user data? Maybe they could develop other, more ethical ways to support their business models, like charging customers a small fee to use their services and thereby letting them opt out of data collection. I think for many, however, trusting that Tech companies would actually do what they promise is a no-go.
Researcher Bob Diachenko discovered in August 2020 that video gaming hardware manufacturer Razer leaked 100,000+ gamers’ personally identifiable information (PII) online, including payment transaction details (minus credit card numbers), items purchased, email addresses, and physical addresses. The data breach appears to have been the result of a misconfigured Elasticsearch cluster stored in the Cloud that was indexed by search engines like Google, Bing, and Yahoo. Diachenko also discovered that Telmate, a service owned by Global Tel Link and used by US prison inmates to communicate with people outside of prison, leaked tens of millions of private messages and call logs in August 2020, as well as other PII such as text/photo messages, voicemail, prisoner date of birth, facility ID, full name, gender, offense, account balance, recipient email address, driver’s license number, physical address, and IP address.
George Floyd’s medical records were inappropriately accessed posthumously by health system employees, primarily healthcare providers who already had access to his records, in yet another example of a HIPAA violation.
Mailfire, an email marketing tool used by 70 different adult dating and e-commerce websites, exposed PII including details about users’ romantic preferences. Security researchers at vpnMentor discovered that 320 million individual records, representing 882 GB of data, had been exposed. The PII includes full names; ages and dates of birth; gender; email addresses; location data; IP addresses; profile pictures uploaded by users; and profile bio descriptions. Perhaps more alarming, the leak also exposed conversations between users on the dating sites as well as email content. The leak was apparently due to a misconfigured Elasticsearch (Cloud) server, which again raises the question of just how safe it is to store any data in the Cloud. Very few organizations seem able to figure out how to properly configure Cloud storage like Elasticsearch servers or AWS S3 buckets.
At a certain point though, I feel strongly that Cloud Service Providers (C-SP) should be held accountable for not ensuring that clients properly configure the security settings of their containers. The Service Level Agreements (SLA) that clients must sign to use C-SP services absolve the C-SP of any responsibility for misconfigured containers. I think this is a faulty approach to Cloud security. Ultimately, the data owner is responsible for the security of their data, but the C-SP should also have a shared responsibility to help secure it. It is their fucking Cloud platform after all.
Take AWS: how many headlines do we have to read about breached or leaky S3 buckets before the government finally steps in and creates legislation around this business segment? It is ridiculous. There is a shared security responsibility no matter how we want to argue it. This is also why a lot of IT security professionals still don’t trust hosting their data in the Cloud. It is, after all, “just somebody else’s computer.” If the status quo is allowed to continue, the Cloud business model would appear destined to fail in the long term, because data security is not being achieved at an acceptable rate of success.
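Many of these “leaky bucket” incidents come down to a single access-control entry granting read access to everyone on the internet. As a rough sketch, the snippet below audits a hand-built ACL offline; the dictionary shape loosely mirrors what AWS APIs like boto3’s `get_bucket_acl()` return, but no AWS call is made and the example data is invented:

```python
# URI that S3 uses to denote "anyone on the internet"
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_grants(acl: dict) -> list:
    """Return the grants in a bucket ACL that expose data to the public."""
    return [
        g for g in acl.get("Grants", [])
        if g["Grantee"].get("URI") == ALL_USERS
    ]

# A misconfigured ACL of the kind behind so many breach headlines:
leaky_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    ]
}

bad = public_grants(leaky_acl)
print(f"{len(bad)} public grant(s) found: {[g['Permission'] for g in bad]}")
```

A scheduled check like this, run by either the customer or the provider, is the sort of shared-responsibility safeguard the paragraph above argues for.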
The Veterans Affairs (VA) department’s Inspector General is investigating a data breach of 46,000 veteran records that the public has only recently been informed of, but for which timeline details have not yet been forthcoming from the VA. As one report put it: “Security experts said the relatively low number of impacted accounts — in comparison the 2015 U.S. Office of Personnel Management (OPM) breach affected 21.2 million — suggested the VA’s internal monitoring might have quickly detected something was awry so the agency could mitigate before hackers tampered with far more records.” What? No, that is not how this is supposed to work.
Intrusion Detection Systems and Intrusion Prevention Systems (IDS/IPS) are supposed to automatically alert system administrators whenever an intrusion event occurs, and an IPS will go a step further to take measures to deny access to intruders. This is the opposite of failing secure; this is failing open. Take what you want. “The federal government has a bigger responsibility to protect the systems they use to transact their business because the potential for damage is much higher,” commented Brandon Hoffman, CISO at Netenrich. This CISO doesn’t seem to believe that a data breach of “only” 46K individual veterans is a big deal since it could’ve been much worse. Thank goodness; I feel so relieved knowing that as a veteran myself. Give me a break.
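The IDS-versus-IPS distinction can be sketched in a few lines: both generate an alert, but only the IPS takes action to deny the intruder further access. The detection rule, thresholds, and field names below are toy inventions for illustration, not any real product’s logic:

```python
def inspect_event(event, blocklist, mode="ids"):
    """Minimal IDS/IPS sketch. An IDS only alerts on a suspicious event;
    an IPS alerts AND actively blocks the offending source address."""
    alerts = []
    if event["failed_logins"] > 5:          # toy detection rule
        alerts.append(f"ALERT: brute force from {event['src']}")
        if mode == "ips":                   # the IPS goes one step further
            blocklist.add(event["src"])     # deny further access
    return alerts

blocklist = set()
event = {"src": "198.51.100.7", "failed_logins": 9}

print(inspect_event(event, blocklist, mode="ids"))  # alert raised...
print(blocklist)                                    # ...but nothing blocked (IDS)
print(inspect_event(event, blocklist, mode="ips"))  # alert raised again...
print(blocklist)                                    # ...and the source is now blocked (IPS)
```

An IDS that alerts but is never acted upon, as the VA incident suggests, leaves the system effectively failing open.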
Online gaming titan Activision was reportedly breached on September 20, compromising over 500,000 accounts. Activision has yet to issue an official statement about the breach, but the prudent action for users immediately following a breach like this is to change their passwords, enable two-factor authentication (2FA), and, if any debit/credit cards or other payment information was tied to the account, keep a close eye on their statements.
Major Privacy-related Lawsuits
Irish Privacy Watchdog & Facebook Square Off in Court Over Data Transfer Rights
The Irish Data Protection Commission has decided that Facebook must halt trans-Atlantic data transfers because the standard US-EU contractual clauses are invalid for whatever mechanism Facebook uses to perform those transfers. This comes on the heels of the EU’s top court striking down the US-EU Privacy Shield over concerns that EU citizens’ data isn’t safe once in U.S. control. I think that is a valid concern given the revelations that have come out of classified data leaks, those from Snowden in particular.
People think that just because their data is protected by HTTPS and Transport Layer Security (TLS) encryption, it can’t be inspected. The truth is that certain portions of each network packet are left unencrypted so the message can be routed properly, and the packets may still be subjected to deep packet inspection by firewalls along the route they travel before reaching the destination IP address. The origin and destination IP addresses are visible even in encrypted packet transmissions, and some valuable intelligence can be gleaned from just that information alone.
The TCP port number the data is being sent to at the destination IP address is another information giveaway, even when the application data, or payload, is encrypted. The size of the packets and the frequency, times, and dates of transmissions are also collection points that can reveal valuable information to intelligence analysts. The payload itself SHOULD be safe from decryption, but there are tools and methods that can be used to analyze, and in some cases decrypt, certain types of transport encryption, such as Wireshark packet captures (pcap) or Microsoft’s Message Analyzer, for starters.
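To make the point concrete, here is a small sketch that hand-crafts a minimal IPv4/TCP packet around an “encrypted” payload and then reads back the metadata that must remain in cleartext for routing to work. The addresses come from reserved documentation ranges, and the payload bytes are just stand-in ciphertext:

```python
import socket
import struct

def parse_ip_tcp_header(packet: bytes):
    """Parse the cleartext IPv4 and TCP header fields of a packet.
    Even when the payload is TLS-encrypted, these fields stay readable
    so routers and firewalls can deliver (and observe) the packet."""
    ihl = (packet[0] & 0x0F) * 4               # IPv4 header length in bytes
    src_ip = socket.inet_ntoa(packet[12:16])   # bytes 12-15: source IP
    dst_ip = socket.inet_ntoa(packet[16:20])   # bytes 16-19: destination IP
    # The TCP header follows the IP header; its first 4 bytes are the ports.
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return src_ip, dst_ip, src_port, dst_port

# Hand-craft a minimal 20-byte IPv4 header (checksum zeroed for simplicity):
ip_header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 0, 0, 64, 6, 0,               # version/IHL ... TTL, protocol=TCP
    socket.inet_aton("192.0.2.10"),            # source IP (visible)
    socket.inet_aton("203.0.113.5"),           # destination IP (visible)
)
tcp_ports = struct.pack("!HH", 54321, 443)     # ports (visible); 443 = HTTPS
payload = b"\x9f\x03\x1c\x55\xaa"              # opaque ciphertext; size/timing still leak

packet = ip_header + tcp_ports + payload
print(parse_ip_tcp_header(packet))  # ('192.0.2.10', '203.0.113.5', 54321, 443)
```

An observer who never breaks the encryption still learns who is talking to whom, on which service (port 443), how much, and when.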
At the macro, 10,000-foot view of big-data, bulk network traffic, products such as Cisco’s Encrypted Traffic Analytics or Juniper’s Sky Advanced Threat Protection can reveal anomalous network behavior or be used to monitor traffic from various regions, entities, or even individuals if need be. This is the world we live in now: the technology we have can take in that macro view or drill down to the micro level to watch Sam Smith in Virginia send encrypted message packets to someone calling themselves Ali Baba in Yemen.
SSL/TLS proxy servers used by many third-party network security products decrypt communications at one end of the proxy to inspect the packets and then re-encrypt them before sending them along the path to the destination IP address. When this is done, there is potential for the once-encrypted data to be collected and siphoned off in an unencrypted state before being re-encrypted. It’s not a stretch to imagine an intelligence agency in any modernized country performing this kind of collection. This is why you want to use End-to-End Encrypted (E2EE) products like Signal as much as possible, and it’s also why organizations like the DOJ, the FBI, and other government agencies are so angry that they can’t get Big Tech to create encryption backdoors for them to spy on everyone with. But even with E2EE products, there are still vulnerabilities that could potentially be exploited, as Will Dormann of Carnegie Mellon University (CMU) (i.e., the university that the FBI paid $1M to develop a Tor exploit) explains.
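The proxy’s privileged position can be modeled in a few lines. This toy uses XOR in place of real TLS cryptography, which is emphatically NOT real crypto, purely to show where the plaintext becomes visible to the middlebox:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for TLS encryption (NOT real cryptography)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def intercepting_proxy(ciphertext: bytes, client_key: bytes, server_key: bytes) -> bytes:
    """Model of a TLS-terminating proxy: it holds the client-side key,
    so the traffic is plaintext to it before being re-encrypted onward."""
    plaintext = xor_cipher(ciphertext, client_key)   # decrypt for inspection
    print("proxy saw:", plaintext.decode())          # <- the siphon point
    return xor_cipher(plaintext, server_key)         # re-encrypt and forward

client_key, server_key = b"k1", b"k2"                # invented demo keys
msg = b"meet at midnight"

on_the_wire = xor_cipher(msg, client_key)            # client -> proxy leg
to_server = intercepting_proxy(on_the_wire, client_key, server_key)
assert xor_cipher(to_server, server_key) == msg      # server still decrypts fine
```

Both endpoints see a working “encrypted” session, yet the middlebox read every word; true E2EE removes that siphon point because no intermediary ever holds a usable key.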
Privacy Legislation Developments
In the case of United States v. Moalin, four members of the Somali diaspora (a diaspora being a population scattered from its original geographic homeland) were charged by the US government (USG) with conspiring to send $10,900 to Somalia to support a foreign terrorist organization. They appealed, and the case raised questions about the USG’s bulk collection of US citizens’ communications.
The government was found to have possibly violated the Fourth Amendment when it collected the telephony metadata of millions of Americans, including at least one of the defendants, pursuant to the Foreign Intelligence Surveillance Act (FISA). This metadata obtained by the USG is “fruit of the poisonous tree” because it was illegitimately collected under an authority that violates the Constitutional rights of US citizens. Of course, this is the opinion of the Ninth Circuit federal court, which is widely recognized as the most liberal of all the federal circuit courts.
It is unlikely that the Supreme Court of the United States (SCOTUS) would hold the same opinion, especially considering that the number of conservative Justices continues to increase with the current Administration’s SCOTUS appointments. I would not expect anything substantial to come of this, but it is an important, privacy-related legal development that you should be aware of.
Republican Senators Introduce Federal Privacy Law
The “Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act” or SAFE DATA Act was introduced on September 17, 2020, by Senate Commerce Chairman Roger Wicker (R-MS), with co-sponsors Sens. John Thune (R-SD), Marsha Blackburn (R-TN), and Deb Fischer (R-NE). As usual, however, the bill ultimately falls short of providing consumer privacy protections desperately needed in the US. Geez, I wonder why that is? Could it perhaps be due to corporate lobbyists?
Digital privacy experts have criticized the bill. Sara Collins, Policy Counsel at Public Knowledge, said,
“…the SAFE DATA Act requires transparency instead of mandating accountability for the ways companies handle, and mishandle, personal data. Consumers need and deserve better than a federal ‘ceiling’ of lax privacy protections.” Additionally, Collins also stated that “Critically, the SAFE DATA Act does not have adequate controls to prevent companies from invasively tracking each internet user’s every move online and lacks provisions that give users meaningful control over the data collected on them, and how that data is used, and does not go far enough to protect civil rights online… The SAFE DATA Act also affords businesses with too many opportunities to self-regulate.”
America, we’ve got to elect better politicians who will do the will of the People instead of just paying us lip service. So many of these politicians are compromised in their values; they receive kickbacks from the companies that stand to gain financially from the U.S. not passing strong consumer privacy protections. They argue it will hurt their business models, but what about the rights of American citizens? Are we not to have a say in the matter, or is this country only about the rights of businesses to fuck us over? Money talks and bullshit walks; these politicians are fooling no one with these soft privacy bill proposals.
The Facial Recognition System Saga Continues to Spiral Out-of-Control
Error-prone Facial Recognition System (FRS) technology led to the arrest of Michael Oliver in Detroit when he had done nothing wrong, and he has decided to sue the Detroit Police Department because the FRS is flawed when identifying people of color. This wasn’t the first time it’s happened in Detroit, though. In June 2020, during the national uproar over the wrongful death of George Floyd, another man named Robert Williams was misidentified by the same FRS technology and arrested for a crime he didn’t commit. Artificial Intelligence (AI) is nowhere close to where it needs to be to accurately identify people, especially people of color and particularly African Americans.
However, that hasn’t stopped police departments and federal agencies all over the US from paying for these services to “help” solve crimes and track people. ICE is a major customer of these FRS technology services, which it uses to track down illegal immigrants. Irrespective of your feelings about illegal immigration, I think most reasonably-minded people can agree that AI-powered FRS misidentification can have very serious negative consequences for innocent victims. Even just being arrested for a crime you didn’t commit can have tangible repercussions from employers, who may fire an employee to protect the image of their business.
There are very few regulations prohibiting police departments’ use of facial recognition system technology, so there is no way for us to know how often it is being used or to challenge its accuracy. It’s only after someone has been wrongfully arrested, questioned, and possibly taken to trial for a crime they didn’t commit that a defense attorney is allowed to view any evidence provided by prosecutors. By then, the individual, who could be any of us if you think about it [let that sink in for a moment], will have had their reputation damaged by the false arrest and might have lost their employment. That’s just for starters; consider the irreparable damage to that innocent person’s reputation among friends and family. Imagine being arrested as a sexual predator (when you’re actually not one) and then having to prove your innocence because some AI-powered FRS technology misidentified you.
Police departments often buy this technology with federal grants that protect them from having to disclose what they spent the money on but that’s not all. Often, these FRS tech vendors will give free trial use periods to police departments so they can see if they would like to purchase it. But what government agency is regulating these facial recognition system technology vendors to ensure that the products they are selling are actually accurate? Police and federal agencies are already using it to identify criminals and illegal immigrants. Wouldn’t accuracy verification be an important first step in this practice? It would seem logical to me.
Activists belonging to the immigrant rights group Mijente are protesting the surveillance company Palantir as it moves into its new Denver, Colorado headquarters from its old San Francisco Bay Area location. Mijente has been engaged in a #NoTechforICE campaign since 2018, and like Clearview AI, Palantir is another cringy company that makes digital privacy advocates like me lose sleep at night. These companies are allowed to continue selling massively privacy-invasive technologies that are being used in bad ways to track humans like cattle. They are exploiting our use of smartphones and the GPS location data that our apps share. Even Palantir recognizes it is vastly unpopular; in its S-1 filing it wrote,
“criticism of our relationships with customers could potentially engender dissatisfaction among potential and existing customers, investors, and employees with how we address political and social concerns in our business activities.”
It is incredibly disheartening to think about how many different ways Americans are being surveilled, tracked, monitored, and exploited by both commercial and government entities in an increasingly aggressive manner, with no mind paid whatsoever to the Constitutional rights afforded to US citizens by the First and Fourth Amendments. Should SCOTUS ever hand down a major decision in favor of Fourth Amendment privacy protections for American citizens, these businesses wouldn’t appeal; they would simply shut down US operations and move internationally. The entire time, they would cry about how privacy rights have ruined their ability to do business in the US, blah, blah, blah… I can already hear it now, and though it hasn’t happened yet, it makes me want to vomit beforehand. The writing is on the wall.
App Privacy Exposure
Customs and Border Protection (CBP) reportedly purchased unlimited access to a national Automated License Plate Reader (ALPR) tracking database managed by a company called Vigilant, covering vehicles located anywhere in the US. Are we beginning to take notice of all of these new companies specializing in tracking cars, people, and activity popping up all over? If not, you should be. It should be a cause for alarm. As ACLU senior staff attorney Nate Wessler put it, “An agency tasked with stamping passports and searching for contraband at the border has no business buying unfettered access to the location data of hundreds of millions of Americans.”
This should be raising red flags and alarm bells all over the place with lawmakers in Washington DC, but so far it is just crickets because Donald Trump continually creates drama to distract from all of the illegal shit he is allowing these agencies to get away with. Trump couldn’t care less about your Constitutional rights. He has repeatedly made fascist, autocrat-like comments to the effect that the media is the enemy of the state, he has encouraged protestors at his political rallies to be roughed up, and he has even said he won’t concede defeat for a peaceful transition of power if he loses the election.
This is an extremely dangerous time to be alive as an American, and it is only exacerbated by global climate change and high-profile cases like the George Floyd killing by police that the current Trump administration is using to justify all of these illegal tactics and programs. Some of these affronts to personal freedom, like the First Amendment right to say whatever you want, are being stripped away from Americans through the privacy exposures of the very apps they use. Say you mention on Facebook or Twitter that you hate Trump and wish he’d be charged with treason and sent to prison for the rest of his life.
That post could potentially be added instantly to a watch list monitored by the US Secret Service and the FBI. Maybe that turns out to sync up with the no-fly list also? Who knows how the USG uses that data? What I do know is that it is an example of how metadata is tracked and monitored by authorities and how it can be used against citizens of the US. The apps are helping to make this a reality, the very apps we love to use to keep in touch with friends and family, for entertainment, for news, for expressing ourselves. Be careful about what you say online.
Exam Proctoring Apps Are a Privacy and Security Nightmare
Students from numerous colleges and universities are voicing concerns over the exam proctoring apps that their higher education institutions force them to use. However, little action is being taken by schools to reduce the privacy and security risks, or the bias, that several exam proctoring apps contain or introduce to students’ devices. Student bodies at universities around the nation have created petitions with signatures numbering in the thousands (e.g., the University of Texas at Dallas has 6,300 signatures on a petition to stop using the Honorlock app) because the apps record their faces, driver’s licenses, and network and device information. There have been numerous reports that the exam proctoring technology is flawed: anytime a student looks away from the screen, the app assumes the student is cheating.
Once again, we have a newer digital technology being used to monitor human behavior that is potentially flawed. Sound familiar? Oh yeah, AI-powered facial recognition systems, huh? Not only are these exam proctoring apps “blatant violations of our privacy as students,” as students at the University of Texas stated, but some are reportedly highly inaccurate in their behavior analysis algorithms. The Proctorio app, which records students in their rooms/homes while they take exams, was cited by over 4,500 students at California State University Fullerton as being “creepy and unacceptable.” In July 2020, the ProctorU exam proctoring app suffered a data breach that resulted in 444,000 users having their Personally Identifiable Information (PII) leaked.
Students at Florida International University stated that the Honorlock app requires a webcam and microphone, which makes testing difficult for students living in dorms or elsewhere with limited access to quiet places. Miami University students said that the Proctorio app’s eyeball tracking “discriminates against neurodivergent students, as it tracks a student’s gaze, and flags students who look away from the screen as ‘suspicious.’” This, too, “negatively impacts people who have ADHD-like symptoms.”
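The gaze heuristic the students describe can be reduced to something like the following toy rule (the threshold, field values, and data are invented for illustration), which shows how a student who simply glances away while thinking gets flagged:

```python
def flag_suspicious(gaze_samples, max_away_ratio=0.1):
    """Toy version of a proctoring gaze heuristic: flag an exam as
    'suspicious' if the student looks away from the screen for more
    than a fixed fraction of the sampled moments."""
    away = sum(1 for g in gaze_samples if g == "away")
    return away / len(gaze_samples) > max_away_ratio

# One student stares at the screen nearly the whole exam:
focused_student = ["screen"] * 95 + ["away"] * 5
# Another glances away while thinking -- common with ADHD-like symptoms --
# and trips the same rule despite doing nothing wrong:
thinking_student = ["screen"] * 80 + ["away"] * 20

print(flag_suspicious(focused_student))   # False
print(flag_suspicious(thinking_student))  # True -- flagged just for looking away
```

A fixed threshold like this has no way to distinguish cheating from thinking, which is exactly the discrimination the petitions describe.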
Students have had some success with signed petitions but the underlying theme with privacy invasions, like these proctoring apps represent, is that we have to stand up and protest against them or nothing will be done about it. Just as all it takes for evil to prevail is for good men to do nothing, all it takes for your privacy to disappear completely is for us to say and do nothing when authorities attempt to take it away from us. If we don’t defend our rights, who will? Organizations like the Electronic Frontier Foundation (EFF) and the ACLU are only so big. They can’t stand up to every injustice.
We have to do some things for ourselves. One very effective way to do that is to vote with our wallets which means when we find out that a particular company, university, government agency, or organization is doing something wrong, we stop giving any of our money to that entity so that they get the message really quick. That is the quickest way to show these corrupt business executives and authorities that it won’t be tolerated. Much, much faster than waiting for corrupt politicians or government watchdog agencies to do anything about it.
The FBI is Concerned Ring Doorbell Cams Warn Residents of Police Raids
In more fallout from the BlueLeaks cache that resulted from the DDoSecrets hack, the FBI expressed concern in a technical analysis bulletin published in November 2019 that Amazon’s Ring doorbell cameras are alerting residents right before police raids, while also noting that such devices present opportunities that can be exploited. How ironic and predictable that law enforcement agencies would express concern over private citizens’ use of home security cameras.
LEOs are concerned that their officers/agents could be targeted by sniper fire or that occupants could flush evidence down the toilet before LEOs get inside to stop them. These are valid concerns, but do they rise to a level more important than Fourth Amendment privacy rights? I would predictably argue that they don’t; what’s more, the police could ask Amazon to cooperate, or compel it with a search warrant, to hand over Ring camera video footage for particular addresses.
As if we are supposed to care that they can’t execute a SWAT raid without occupants inside a residence being tipped off first. Oh well, that is the way it goes sometimes. Do you think that when I was in the Marines we always achieved the element of surprise during attacks? Of course not. Sometimes you just have to use overwhelming force under the cover of darkness and do your best to surprise the enemy by busting in doors at 3 am when most people are sleeping. Additionally, technology already exists, and LEOs will end up adopting it, that can jam signals in the vicinity of a particular operation.
The use of signal-jamming technology would be highly controversial, but we should expect to see some police departments try to get away with using it. I have read about instances in which police hacked into a home's WiFi camera system to monitor what was going on inside. That is beyond creepy and should not be legal without probable cause and a warrant. But cops don't need to hack your live camera feed if they can get a warrant to access your cloud-stored video footage. Nothing new... Our privacy rights in the US mean very little at the present time. All it takes is loosely defined "probable cause" and a search warrant signed by a judge for your privacy rights to disappear entirely.
Amazon is Trying to Worm its Way Into Apartments With Alexa for Residential
How would you like to rent an apartment with a pre-installed Alexa home assistant that gives audible tours of the unit to new tenants, but that is also capable of listening to and recording audio, which Amazon says will be deleted daily? That is exactly what Amazon is proposing landlords do with its latest privacy-invasive offering, 'Alexa for Residential.'
“And it’s through this mass adoption of Alexa devices that consumers (and the army of human beings Amazon hires to listen to consumers) can collectively train Amazon’s voice recognition system, which is then monetized through Amazon’s primary source of profits: Amazon Web Services.”
Not only is this a terribly bad idea for home privacy, but what happens when the landlord remotely shuts off your air conditioning or turns off the lights to comply with some state-encouraged energy-saving appeal? Late on rent this month? Oh, sorry. You're locked out of your apartment until you pay up. I can easily see this blatant misuse of technology being challenged in court soon. Internet of Things (IoT) security and privacy is a hot mess of a nightmare currently.
Amazon, despite its online shopping convenience, has become another one of those quintessential evil corps within the last few years. It is trying to monopolize several industries and is hurting smaller businesses, underpaying workers, and outright pursuing legal action against any employee who dares to speak out against its corrupt business practices. Employees have tried to unionize, but Amazon has illegally pursued and fired employees for it. This just goes to show you that Jeff Bezos, the richest man in the world, is a plutocrat who cares nothing about his employees. He could easily afford to make Amazon the best-paying employer in the world, but instead he chooses to be greedy and pocket most of Amazon's earnings in tax-sheltered, offshore bank accounts.
Google Formally Bans Stalkerware Apps From the Play Store, Whoa!
While it may seem like a positive step in the right direction for digital privacy, Google's recent ban on stalkerware apps was done for the sake of appearances only. The tech giant left a gaping hole in its 'formal' ban, which goes into effect on October 1, 2020: the only change developers have to make is to re-brand their app as a child-tracking app, and Google will allow it in the Play Store. I fail to see how that is going to protect children from being tracked by smartphone apps. This should be vigorously enforced as a COPPA violation, but once again our esteemed USG has enacted more laws than it could ever hope to enforce. We can't rely on tech companies to self-police. It doesn't work; they will interpret laws in whatever way allows them to keep making profits, missing the intent of the law altogether. COPPA is a law that is supposed to protect against the collection of personal data of minors under 13 years of age. Guess what, Sherlock? A kid's physical location is personal information. We call these 'token gestures' on the part of government. Laws don't mean anything if they're not enforced. Why even bother?
Facebook Performing Privacy-Invasive Tech Experiments
Project Aria is a Facebook initiative that involves sending one hundred employees into society wearing "smart" glasses equipped with microphones and video cameras that will record everything the wearers come into contact with, in order to identify potential ethical and privacy issues that could result from using this technology in Augmented Reality (AR) products.
“We built Project Aria with privacy in mind and we’ve put provisions in place around where and how we’ll collect data as well as how it will be processed, used, and stored,” Andrew Bosworth, Facebook’s vice president of augmented and virtual reality, tweeted that day.
Sure, we'll take Facebook at its word on that, right? Facebook has by far the worst privacy track record in tech as a result of numerous high-profile scandals like the Cambridge Analytica debacle. Why is it even allowed to test this type of technology on the public? I think AR technology is fascinating and could be very useful, but I don't agree with allowing tech companies to run around filming and recording everything they can and uploading it to their servers for analysis. That is wrong; it is a direct violation of every recorded person's privacy. They do not have consent to do so in private spaces, and recording people without their consent is illegal in certain US states.
California happens to be a two-party consent state when it comes to the legal recording of conversations, but once again there is a legal loophole that Facebook is exploiting here: if the conversation takes place in a public area where it could easily be overheard, then recording it is permitted.
Featured Privacy Tactics, Techniques, & Procedures
Paranoid Smart Speaker Blocker
Paranoid has developed a privacy add-on for smart speakers and home assistants of every type, called Paranoid Home, that blocks the smart speaker's ability to eavesdrop while still allowing the device to respond to normal voice commands. Unfortunately, Paranoid did not send me a sample of their product, so I was not able to independently verify any of their privacy claims. Still, this is the type of privacy-enabling tech we need to see more of, considering companies like Amazon have hired subcontractors to listen to what goes on in our homes and market that data to third-party companies. Paranoid Home essentially works by blocking all audio unless it first hears the safeword, "Paranoid." The device will offer three configuration options, so customers can choose between low, medium, or ultra-high privacy control. It retails for $49 USD, or $129 for a 3-pack. I might just have to try one of these out. It sounds very promising.
The Monero cryptocurrency is the focus of the Internal Revenue Service's attempt to buy tools capable of tracing Monero transactions, which, to me, just screams incompetence and desperation on the part of authorities. Similarly, we are reminded of when the FBI paid Carnegie Mellon University to develop an exploit for Tor, of the infiltration and takedown of the infamous Playpen pedophilia Dark Web site, and of the FBI reportedly paying Cellebrite for an exploit to unlock the San Bernardino terrorist shooter's iPhone. We only hear about the positive, crime-solving applications of such exploit tools from government officials and law enforcement authorities. However, we also have to wonder whether these tools are being used inappropriately or illegally, without search warrants. Who is controlling them to ensure they aren't being misused?
New Podcast Recommendation
If you're looking for a new tech privacy podcast to listen to, check out the Center for Humane Technology <humanetech.com/podcast>, which recently produced a really good documentary on Netflix called The Social Dilemma, released on September 9, 2020. I highly suggest you watch it, and maybe check out their podcast as well. It is an eye-opener.
Low-Tech Private Life Evasion & Anonymity Tip
Thieves use wireless Radio Frequency (RF) signal boosters to amplify a vehicle key fob's signal from nearby, for example through a window of your home or apartment; these are termed relay attacks, and newer cars ship with key fobs that include additional security measures against them. RF signals are not blocked by glass or drywall. Concrete has to be a certain thickness, with aluminum shielding on the inside wall, to block RF signals, and even then there can be minor RF emissions. For secure facilities, that is one of the things inspectors look at during accreditation, and special equipment exists to test for exactly that sort of leakage.
What can you do to prevent relay attacks? Easy: store your key fobs in a Faraday bag. Test the bag to ensure it actually stops RF transmission by attempting to unlock your car with the fob sealed inside the bag while inside your home. If the car doesn't unlock, you can reasonably assume the bag works, though, as we know, nothing is 100% safe or secure. Thankfully, these types of relay attacks don't happen often, and when they do, they are usually committed repeatedly by car thieves working a particular geographic region. Local news stations will pass along warnings issued by local police departments.
That concludes September 2020's edition of Becoming Virtually Untraceable. Remember to try your best to operate with a trust-nothing-and-no-one mentality, or at least only trust very few people with certain information. Always attempt to verify information before trusting and relying on it to be accurate. Think about how many times you've been burned when you haven't performed this simple step. The less information you share with the world online, the more private your life will be. Not all tech is bad tech, but many current applications of modern technology are super-counterproductive to user privacy.
Big Tech has designed a global infrastructure where we, the consumer, are the product because they offer free usage of their services. It is a bullshit concept that government lawmakers continue to allow in practice because they get kickbacks from the Tech industry. The solution is not to become a Luddite and abandon all technology, but instead to force Tech to be implemented in a privacy-conscious manner. The only way we can do that is by demanding meaningful privacy legislation and voting with our wallets.
Sometimes this is where being a hacker comes in handy: you have to get creative to hack your privacy. Smartphones are not a necessity; they are a luxury, and an even bigger security and privacy liability than a social media website like LinkedIn is. Social media accounts are also not a necessity. None of this stuff is. It's all just luxuries. We don't NEED any of it. We choose to use it for convenience or entertainment, but we got along fine before it all existed.
If privacy-invasive tech continues to creep into every nook and cranny of our personal lives, the option to delete it all, deactivate my smartphone, and stay off the digital grid as much as possible is always there. However, realize that you will still be monitored via satellites, CCTV cameras, ALPRs, and FRS technology. So, good luck with that approach! I'd rather embrace the technology and figure out how to make it more secure and private. The odds are stacked against us ever having true privacy as long as corporations keep lobbying the politicians who create the laws that continue to suppress your privacy rights. Until next time, friends.
***Trust No One. Verify Everything. Leave No Trace.***
Additional Privacy Resources
*Privacy-related articles also published by the author can be found here.