As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”

That which must not be said

Examples of words or word sequences that produce false triggers include:

Alexa: “unacceptable,” “election,” and “a letter”
Google Home: “OK, cool,” and “Okay, who is reading”
Siri: “a city” and “hey jerry”
Microsoft Cortana: “Montana”
Our setup was able to identify more than 1,000 sequences that incorrectly trigger smart speakers. For example, we found that, depending on the pronunciation, «Alexa» reacts to the words “unacceptable” and “election,” while «Google» often triggers on “OK, cool.” «Siri» can be fooled by “a city,” «Cortana» by “Montana,” «Computer» by “Peter,” «Amazon» by “and the zone,” and «Echo» by “tobacco.” In our paper, we analyze a diverse set of audio sources, explore gender and language biases, and measure the reproducibility of the identified triggers.
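The trade-off Kolossa describes can be made concrete with a small sketch. The snippet below is purely illustrative: the phrases, confidence scores, and threshold value are hypothetical stand-ins for the proprietary acoustic models and tuning the assistants actually use.

```python
# Illustrative sketch of a threshold-based wake-word detector.
# All confidence scores below are hypothetical, not real model outputs.

WAKE_THRESHOLD = 0.55  # hypothetical value; vendors tune this to be "forgiving"

# (phrase heard, hypothetical model confidence that it was the wake word)
observations = [
    ("alexa",            0.97),  # genuine wake word, spoken clearly
    ("alexa (over TV)",  0.62),  # genuine wake word against background noise
    ("a letter",         0.58),  # sound-alike from TV dialog
    ("election",         0.56),  # sound-alike, pronunciation-dependent
    ("unrelated speech", 0.12),
]

for phrase, confidence in observations:
    triggered = confidence >= WAKE_THRESHOLD
    action = "WAKE (audio may be uploaded)" if triggered else "ignore"
    print(f"{phrase:>18}: confidence={confidence:.2f} -> {action}")

# Raising WAKE_THRESHOLD would reject the sound-alikes above, but would
# also reject the genuine-but-noisy wake word: the "once too often rather
# than not at all" trade-off the researchers describe.
```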
What we fear is a future in which potent personal data is combined with increasingly sophisticated technology to produce and deliver unaccountable personalized media and messages at a national scale. Combined with data-driven emerging media technologies, it is clear that the use of behavioral data to nudge voters with propaganda-as-a-service is set to explode. Imagine being able to synthesize a politician saying anything you type and then upload the highly realistic video to Facebook with a fake CNN chyron banner. Expect early versions of these tools to be available before 2020. At the core of this is data privacy, or, as it is more meaningfully described in Europe, data protection. Unfortunately, the United States is headed in a dangerous direction on this issue. President Trump’s FCC and the Republican party radically deregulated ISPs, allowing them to sell and monetize the data of paying customers. Anticipate this administration further eroding privacy protections, as it confuses the public interest with the interests of business, even though privacy is the only issue on which about 95% of voters agree, across every partisan and demographic segment, according to HuffPo/YouGov polling. We propose three ideas to address these issues, which are crucial to preserving American democracy.
In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers, reducing power consumption by manipulating computer servers and related equipment such as cooling systems. The system uses a technique similar to the one DeepMind’s software used to teach itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York. The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said. The savings translate into a 15 percent improvement in power usage effectiveness, or PUE, Google said in a statement. PUE measures how much electricity Google uses for its computers versus supporting infrastructure such as cooling systems.
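For readers unfamiliar with the metric, the PUE arithmetic is easy to sketch. The figures below are illustrative, not Google’s actual numbers, and the 15 percent improvement is modeled here as a 15 percent cut in the non-IT overhead, which is one plausible reading of the statement.

```python
# Worked example of the PUE (power usage effectiveness) arithmetic.
# All kWh figures are illustrative, not Google's actual numbers.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (1.0 = no overhead)."""
    return total_facility_kwh / it_equipment_kwh

it_load  = 1000.0  # energy consumed by the computers themselves
overhead = 120.0   # cooling, power distribution, and other infrastructure

before = pue(it_load + overhead, it_load)         # 1120 / 1000 = 1.120
after  = pue(it_load + overhead * 0.85, it_load)  # 15% less overhead -> 1.102

print(f"PUE before: {before:.3f}")
print(f"PUE after:  {after:.3f}")
```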
Lovecraft’s concern was vast, alien entities who have no knowledge of, or concern for, the human race. Our modern-day concern is vast, alien entities who have total, invasive, privacy-destroying knowledge of the minutiae of the human race, and still have no concern for us.
So let’s address our children as though they are our children, and let us revel in the fact that they are playing and painting and creating; using their first box of crayons, while we proud parents put every masterpiece on the fridge. Even if we are calling them all “nightmarish” (a word I really wish we could stop using in this context). DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. That means we need to teach these children, and nurture them. Code for the recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotion recognition lately, so let’s continue to capitalize on them; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing an AGI human concepts like love and possessiveness, and then showing the deep difference between the two.
Data is currency, and consumers are willing to hand over their information in exchange for “free or convenience,” Schneier said. Companies such as Facebook and Google want the data so that they can sell more stuff. Users hand it over to play games, to get email, or for some other benefit. “I like to think of this as a feudal model. At the most fundamental level, we are tenant farming for companies like Google. We are on their land producing data,” he said. By handing the data over, users have an expectation of trust that Google, Facebook, and other data brokers will do the right thing with the personal data. However, this becomes a power play when governments get involved. Governments don’t need to collect the data themselves when corporations are already doing it. “The NSA woke up and said ‘Corporations are spying on the Internet, let’s get ourselves a copy,’” Schneier said. Most NSA surveillance “piggybacks” on what the companies are already doing, he said.
There is simply no squaring the moral ambition of the “Don’t Be Evil” motto of Google founders Larry Page and Sergey Brin with funding for a group that promotes “The Many Benefits of Increased Atmospheric CO2.” ALEC is exactly who Google Chairman Eric Schmidt was talking about when he said at a recent Google symposium: “You can lie about the effects of climate change, but eventually you’ll be seen as a liar.”