Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.
–Edward Snowden
As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought. The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices. “The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

That which must not be said

Examples of words or word sequences that provide false triggers include:

Alexa: “unacceptable,” “election,” and “a letter”
Google Home: “OK, cool,” and “Okay, who is reading”
Siri: “a city” and “hey jerry”
Microsoft Cortana: “Montana”
Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers to “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers.
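The kind of measurement described above can be approximated with a small batch-testing harness: play clips that do not contain the real wake word into an on-device detector and count spurious activations. The sketch below is a minimal Python harness under that assumption; the detector callable, the clips directory, and the frame length are placeholders, not the researchers' actual tooling.

```python
# Hypothetical harness for counting accidental wake-word activations.
# The detector callable is an assumption -- plug in whichever on-device
# engine is being evaluated; the clips must not contain the real wake word.
import wave
from pathlib import Path
from typing import Callable, Iterator

FRAME_MS = 30  # analysis window length, an arbitrary choice for this sketch


def pcm_frames(path: Path, frame_ms: int = FRAME_MS) -> Iterator[bytes]:
    """Yield fixed-length PCM chunks from a WAV file."""
    with wave.open(str(path), "rb") as wav:
        frame_len = int(wav.getframerate() * frame_ms / 1000)
        while True:
            chunk = wav.readframes(frame_len)
            if not chunk:
                break
            yield chunk


def count_false_triggers(clips_dir: Path,
                         detector: Callable[[bytes], bool]) -> dict:
    """Run every clip through the detector and count spurious activations."""
    results = {}
    for clip in sorted(clips_dir.glob("*.wav")):
        results[clip.name] = sum(detector(frame) for frame in pcm_frames(clip))
    return results
```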
Amazon Echo’s Alexa, Google Assistant and Cortana are always listening to obey your commands. Utter the wake word – Alexa, Cortana or Hey Google – and they spring into life. But what if you don’t want your voice recorded and stored on Amazon’s servers forevermore?
Thankfully, all these services give you the option of deleting your data. Here’s how to remove your voice data collected by the Amazon Echo, Google Home, Siri and Cortana….
If we give in to the sheer gigantic sweep of Facebook and the convenience it creates, and feed all our collective information into its ever-more-intelligent algorithms; if news is read and messages are sent primarily within the Facebook network so that each of these interactions sows new data points in our profiles; and if we build up thousands upon thousands of these innocuous-seeming interactions over years and years, and those interactions are overlaid with face-recognized images, marketing data from online purchases, browsing histories and, now, GPS-tracked driving data, is this total bartering of privacy worth the buy-in to Zuckerberg’s “supportive,” “safe,” “informed,” “civically engaged,” global community?
What we fear is a future in which potent personal data is combined with increasingly sophisticated technology to produce and deliver unaccountable personalized media and messages at a national scale. Combined with data-driven emerging media technologies, it is clear that the use of behavioral data to nudge voters with propaganda-as-a-service is set to explode. Imagine being able to synthesize a politician saying anything you type and then upload the highly realistic video to Facebook with a fake CNN chyron banner. Expect early versions of these tools to be available before 2020. At the core of this is data privacy, or as they more meaningfully describe it in Europe, data protection. Unfortunately, the United States is headed in a dangerous direction on this issue. President Trump’s FCC and the Republican party radically deregulated ISPs, freeing them to monetize and sell data about their own paying customers. Anticipate this administration further eroding privacy protections, as it confuses the public interest with the interests of business, even though privacy is perhaps the only issue that about 95% of voters agree on, across every partisan and demographic segment, according to HuffPo/YouGov. We propose three ideas to address these issues, which are crucial to preserving American democracy.
When we open up data, are we empowering people to come together? Or to come apart? Who defines the values that we should be working towards? Who checks to make sure that our data projects are moving us towards those values? If we aren’t clear about what we want and the trade-offs that are involved, simply opening up data can — and often does — reify existing inequities and structural problems in society. Is that really what we’re aiming to do?
The U.S. government reported a five-fold increase in the number of electronic media searches at the border in a single year, from 4,764 in 2015 to 23,877 in 2016. Every one of those searches was a potential privacy violation. Our lives are minutely documented on the phones and laptops we carry, and in the cloud. Our devices carry records of private conversations, family photos, medical documents, banking information, information about what websites we visit, and much more. Moreover, people in many professions, such as lawyers and journalists, have a heightened need to keep their electronic information confidential. How can travelers keep their digital data safe? This guide (updating a previous guide from 2011) helps travelers understand their individual risks when crossing the U.S. border, provides an overview of the law around border search, and offers a brief technical overview of securing digital data.
We don’t take our other valuables with us when we travel—we leave the important stuff at home, or in a safe place. But Facebook and Google don’t give us similar control over our valuable data. With these online services, it’s all or nothing. We need a ‘trip mode’ for social media sites that reduces our contact list and history to a minimal subset of what the site normally offers. Not only would such a feature protect people forced to give their passwords at the border, but it would mitigate the many additional threats to privacy they face when they use their social media accounts away from home. Both Facebook and Google make lofty claims about user safety, but they’ve done little to show they take the darkening political climate around the world seriously. A ‘trip mode’ would be a chance for them to demonstrate their commitment to user safety beyond press releases and anodyne letters of support. The only people who can offer reliable protection against invasive data searches at national borders are the billion-dollar companies who control the servers. They have the technology, the expertise, and the legal muscle to protect their users. All that’s missing is the will.
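As a rough illustration of what a ‘trip mode’ could mean in practice, the sketch below reduces an account to an explicitly approved travel subset. The data shapes, field names, and whitelist are hypothetical; no platform currently exposes anything like this.

```python
# Hypothetical "trip mode": reduce contacts and history to a minimal,
# explicitly approved subset before travel. The Account shape is invented
# for this sketch and does not correspond to any real platform's API.
from dataclasses import dataclass


@dataclass
class Account:
    contacts: list   # contact identifiers
    history: list    # post or message identifiers, oldest first


def trip_mode(account: Account, travel_whitelist: set,
              history_limit: int = 0) -> Account:
    """Return a minimized view of the account for border crossings."""
    return Account(
        contacts=[c for c in account.contacts if c in travel_whitelist],
        history=account.history[-history_limit:] if history_limit else [],
    )
```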
How many potentially incriminating things do you have lying around your home? If you’re like most people, the answer is probably zero. And yet police would need to go before a judge and establish probable cause before they could get a warrant to search your home. What we’re seeing now is that anyone can be grabbed on their way through customs and forced to hand over the full contents of their digital life.
First communication became digitized and free to everyone. Then, when clean energy became free, things started to move quickly. Transportation dropped dramatically in price. It made no sense for us to own cars anymore, because we could call a driverless vehicle or a flying car for longer journeys within minutes. We started transporting ourselves in a much more organized and coordinated way when public transport became easier, quicker and more convenient than the car. Now I can hardly believe that we accepted congestion and traffic jams, not to mention the air pollution from combustion engines. What were we thinking?
First they took over communication. I don’t believe what I hear anymore. I only trust what I see out there in the streets. Then, when they took over the energy grid and fuel supply, things started to move quickly. Transportation became increasingly restricted. It made no sense for us to use cars anymore, since their control systems wouldn’t let us go anywhere inside the city anyway. And the militias control the countryside, so with a bit of skin pigmentation, there’s no telling whether you’ll end up as labor or food. I wonder what those flying cars look like from the inside. The only things that fly around here are the autonomous police drones. Forget about using public transportation. Unless you want to get tased. Or shot. Their facial recognition software is not good at distinguishing dark faces, so they may well confuse you with a known threat. Now, I can hardly believe that we were once allowed to move freely about the city, not to mention not being watched by persistent, omnipresent security systems. Sometimes I use the sewers when I need to go somewhere far away. They haven’t rigged them up with cameras yet, I think. I guess the smell is deterrence enough for most people. It’s hard to wash off that journey.
Last Thursday, with friends and colleagues from Open Rights Group, I spent a few hours at the Adult Provider Network’s Age Verification Demonstration (“the demo”) to watch demonstrations of technologies which attempt to fulfil Age Verification requirements for access to online porn in the UK. Specifically: Age Verification (“AV”) is a requirement of part 3 of the Digital Economy Bill that seeks to “prevent access by persons under the age of 18” to “pornographic material available on the internet on a commercial basis”. There are many contentious social and business issues related to AV[…] there are many open questions and many criticisms of the Digital Economy Bill’s provisions; but to date there appears to have been no critical appraisal of the proposed technologies for AV, and so that is what I seek to address in this posting.
Facial Weaponization Suite protests against biometric facial recognition–and the inequalities these technologies propagate–by making “collective masks” in workshops that are modeled from the aggregated facial data of participants, resulting in amorphous masks that cannot be detected as human faces by biometric facial recognition technologies. The masks are used for public interventions and performances. One mask, the Fag Face Mask, generated from the biometric facial data of many queer men’s faces, is a response to scientific studies claiming that sexual orientation can be determined through rapid facial recognition techniques. Another mask explores a tripartite conception of blackness: the inability of biometric technologies to detect dark skin as racist, the favoring of black in militant aesthetics, and black as that which informatically obfuscates. A third mask engages feminism’s relations to concealment and imperceptibility, taking veil legislation in France as a troubling site that oppressively forces visibility. A fourth mask considers biometrics’ deployment as a security technology at the Mexico-US border and the nationalist violence it instigates. These masks intersect with social movements’ use of masking as an opaque tool of collective transformation that refuses dominant forms of political representation.
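For context, the kind of off-the-shelf detector such masks are designed to defeat can be sketched in a few lines with OpenCV's bundled Haar-cascade face detector. This is only an illustrative stand-in, not the specific biometric systems the project targets, and the image path is a placeholder.

```python
# Illustrative face detector (OpenCV's bundled Haar cascade), a stand-in for
# the biometric systems the masks are designed to evade. Requires opencv-python.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def contains_detectable_face(image_path: str) -> bool:
    """Return True if the detector finds at least one face in the image."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# An "amorphous" mask succeeds when this returns False: the wearer registers
# as no face at all rather than as an unidentified one.
```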
In western liberal democracies (where Tor is overwhelmingly based, and by raw numbers, largely serves) human-rights advocacy has better optics than privacy. But the opposite is true in the regions that Tor aims to serve. Privacy empowers the individual. Empowering the individual naturally dovetails with human rights, so it’s plausible that greater human rights is a natural byproduct of privacy advocacy. However, Tor’s pivot from “Privacy Enthusiasts” to “Human Rights Watch for Nerds” substantially increases the risk of imprisonment to those operating a Tor relay or using the Tor Browser Bundle from less HR-friendly regions.
Over the past several years, Marlinspike has quietly positioned himself at the front lines of a quarter-century-long war between advocates of encryption and law enforcement. Since the first strong encryption tools became publicly available in the early ’90s, the government has warned of the threat posed by “going dark”—that such software would cripple American police departments and intelligence agencies, allowing terrorists and organized criminals to operate with impunity. In 1993 it unsuccessfully tried to implement a backdoor system called the Clipper Chip to get around encryption. In 2013, Edward Snowden’s leaks revealed that the NSA had secretly sabotaged a widely used crypto standard in the mid-2000s and that since 2007 the agency had been ingesting a smorgasbord of tech firms’ data with and without their cooperation. Apple’s battle with the FBI over Farook’s iPhone destroyed any pretense of a truce.
Imagine if there were an alternate dystopian reality where law enforcement was 100% effective, such that any potential law offenders knew they would be immediately identified, apprehended, and jailed. How could people have decided that marijuana should be legal, if nobody had ever used it? How could states decide that same sex marriage should be permitted, if nobody had ever seen or participated in a same sex relationship? The cornerstone of liberal democracy is the notion that free speech allows us to create a marketplace of ideas, from which we can use the political process to collectively choose the society we want. Most critiques of this system tend to focus on the ways in which this marketplace of ideas isn’t totally free, such as the ways in which some actors have substantially more influence over what information is distributed than others. The more fundamental problem, however, is that living in an existing social structure creates a specific set of desires and motivations in a way that merely talking about other social structures never can. The world we live in influences not just what we think, but how we think, in a way that a discourse about other ideas isn’t able to. Any teenager can tell you that life’s most meaningful experiences aren’t the ones you necessarily desired, but the ones that actually transformed your very sense of what you desire. We can only desire based on what we know. It is our present experience of what we are and are not able to do that largely determines our sense for what is possible.
But the well-publicized success stories obscure the fact that familial DNA searches can generate more noise than signal. “Anyone who knows the science understands that there’s a high rate of false positives,” says Erin Murphy, a New York University law professor and the author of Inside the Cell: The Dark Side of Forensic DNA. The searches, after all, look for DNA profiles that are similar to the perpetrator’s but by no means identical, a scattershot approach that yields many fruitless leads, and for limited benefit. In the United Kingdom, a 2014 study found that just 17 percent of familial DNA searches “resulted in the identification of a relative of the true offender.”
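The signal-to-noise problem is easy to see with back-of-the-envelope arithmetic. The numbers below are assumptions chosen only to illustrate the base-rate effect, not figures from the study or the UK data.

```python
# Illustrative base-rate arithmetic for familial (partial-match) DNA searches.
# All numbers are assumptions for the sketch, not figures from any study.
database_size = 1_000_000            # profiles searched (assumed)
true_relatives_in_db = 1             # often zero or one
p_coincidental_partial_match = 1e-4  # assumed per-profile false-match rate

expected_false_leads = database_size * p_coincidental_partial_match
print(f"expected coincidental leads: {expected_false_leads:.0f}")  # ~100
print(f"true relatives, at best:     {true_relatives_in_db}")
# With on the order of a hundred spurious leads per genuine relative, most
# searches chase noise -- consistent in spirit with the 17 percent figure above.
```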
Under the first-of-its-kind legislation proposed in New York, drivers involved in accidents would have to submit their phone to roadside testing from a textalyzer to determine whether the driver was using a mobile phone ahead of a crash. In a bid to get around the Fourth Amendment right to privacy, the textalyzer allegedly would keep conversations, contacts, numbers, photos, and application data private. It would solely say whether the phone was in use prior to a motor-vehicle mishap. Further analysis, which might require a warrant, could be necessary to determine whether such usage was via hands-free dashboard technology and to confirm the original finding.
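Stripped of the forensics, the claim amounts to a yes/no check over a time window, roughly as in the sketch below; the event log format and the five-minute window are invented for illustration, not how any actual textalyzer works.

```python
# Toy version of the textalyzer claim: report only *whether* the phone was in
# use shortly before the crash, not what it was used for. The usage-event log
# and the window length are invented for this sketch.
from datetime import datetime, timedelta


def phone_in_use_before(crash_time: datetime,
                        usage_events: list,
                        window_minutes: int = 5) -> bool:
    """True if any interaction event falls inside the pre-crash window."""
    window_start = crash_time - timedelta(minutes=window_minutes)
    return any(window_start <= t <= crash_time for t in usage_events)
```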
Over a year ago, an anonymous source contacted the Süddeutsche Zeitung (SZ) and submitted encrypted internal documents from Mossack Fonseca, a Panamanian law firm that sells anonymous offshore companies around the world.
Lovecraft’s concern was vast, alien entities who have no knowledge of, or concern for, the human race. Our modern-day concerns are about vast, alien entities who have total, invasive, privacy-destroying knowledge of the minutiae of the human race - and still have no concern for us.
For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals […] the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.
According to the source code, XKeyscore automatically marks users as extremists if they search the internet for anonymization tools such as Tor or Tails, thanks to the global surveillance of search queries. Yet it is precisely these tools that are popular with many groups who depend on anonymity, such as lawyers, human rights activists, and journalists around the world. They, in turn, are specifically targeted and spied on by the NSA, including the contents of their e-mails.
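Mechanically, this kind of flagging is nothing more than rule-based matching over observed search queries. The sketch below illustrates the idea in plain Python; it does not reproduce XKeyscore's actual rule syntax, and the term list is an assumption based on the tools named in the report.

```python
# Crude illustration of rule-based flagging of search queries, as described in
# the leaked source. This is plain Python, not XKeyscore's own rule language,
# and the watch list is an assumption based on the tools named above.
import re

FLAGGED_TERMS = {"tor", "tails"}


def is_flagged(search_query: str) -> bool:
    """True if the query contains any watched term as a whole word."""
    words = set(re.findall(r"[a-z0-9]+", search_query.lower()))
    return bool(words & FLAGGED_TERMS)
```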
Over the course of the last three years, through leading panels and strategy sessions at the Allied Media Conference, and informed by the work of the Tactical Tech Collective, we’ve learned that conversations about safety and security are most successful when they are grounded in discussion about what we envision, know, and practice. The following set of questions offers a starting point for grassroots organizers interested in applying a contextual security framework to their organizing.
If there is a single word to describe Google, it is “absolute.” The Britannica defines absolutism as a system in which “the ruling power is not subject to regularized challenge or check by any other agency.” In ordinary affairs, absolutism is a moral attitude in which values and principles are regarded as unchallengeable and universal. There is no relativism, context-dependence, or openness to change.
During the past decade, the NSA has secretly worked to gain access to virtually all communications entering, leaving, or going through the country. A key reason, according to the draft of a top secret NSA inspector general’s report leaked by Snowden, is that approximately one third of all international telephone calls in the world enter, leave, or transit the United States. “Most international telephone calls are routed through a small number of switches or ‘chokepoints’ in the international telephone switching system en route to their final destination,” says the report. “The United States is a major crossroads for international switched telephone traffic.” At the same time, according to the 2009 report, virtually all Internet communications in the world pass through the US. For example, the report notes that during 2002, less than one percent of worldwide Internet bandwidth—i.e., the international link between the Internet and computers—“was between two regions that did not include the United States.”
“As regards illegal activity—people will break the law regardless of whether we know their names online or not. Laws already exist for those cases; we don’t need more. The general principles should not change simply because there is new, or widely misunderstood, technology.”
Right now, all of the places we can assemble on the web in any kind of numbers are privately owned. And privately-owned public spaces aren’t real public spaces. They don’t allow for the play and the chaos and the creativity and brilliance that only arise in spaces that don’t exist purely to generate profit. And they’re susceptible to being gradually gaslighted by the companies that own them.
When it comes to talking about social media, it’s easy to get trapped in utopian and dystopian rhetorics. My goal is not to go down one of these rabbit holes, but rather, to critically interrogate our participation in the culture of fear. Many of you are technologists, designers, pundits, and users. How are we contributing to or combating the culture of fear? What are our responsibilities with regard to the culture of fear? What kinds of things can and should we do?
Google’s harvesting of e-mails, passwords and other sensitive personal information from unsuspecting households in the United States and around the world was neither a mistake nor the work of a rogue engineer, as the company long maintained, but a program that supervisors knew about, according to new details from the full text of a regulatory report. The report, prepared by the Federal Communications Commission after a 17-month investigation of Google’s Street View project, was released, heavily redacted, two weeks ago. Although it found that Google had not violated any laws, the agency said Google had obstructed the inquiry and fined the company $25,000. On Saturday, Google released a version of the report with only employees’ names redacted.
So Sweden has granted private corporate interests – the copyright industry – more extensive powers than the Police, in terms of cracking down on the Net and making dissent and civil disobedience dangerous.