Posts tagged security
privacytools.io is a socially motivated website that provides information for protecting your data security and privacy.
We noticed that this extension was distributed through a compromised Swiss security company website. Unsuspecting visitors to this website were asked to install this malicious extension. The extension is a simple backdoor, but with an interesting way of fetching its C&C domain. The extension uses a bit.ly URL to reach its C&C, but the URL path is nowhere to be found in the extension code. In fact, it will obtain this path by using comments posted on a specific Instagram post. The one that was used in the analyzed sample was a comment about a photo posted to the Britney Spears official Instagram account.
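The excerpt doesn't describe the encoding itself. As a hypothetical sketch of the general technique (steganographically addressing a C&C server through innocuous public comments), the snippet below scans comments for characters wrapped in an invented `(x)` marker and concatenates them into a bit.ly path. Every detail here, including the marker scheme and the sample comments, is made up for illustration; the real extension used its own obfuscated matching logic.

```python
import re

BITLY_BASE = "https://bit.ly/"
# Hypothetical marker: path characters hidden in parentheses inside a comment.
MARKER = re.compile(r"\(([A-Za-z0-9])\)")

def extract_path(comments):
    """Scan comments in order; the first one containing marked
    characters yields the short-URL path to the C&C."""
    for text in comments:
        chars = MARKER.findall(text)
        if chars:
            return BITLY_BASE + "".join(chars)
    return None

comments = [
    "love this photo!!",
    "so (2) pretty (k) wow (d) omg (7) queen (x)",
]
print(extract_path(comments))  # https://bit.ly/2kd7x
```

The appeal of the scheme is deniability: the comment looks like ordinary fan chatter, and the operator can re-point the C&C at any time by posting a new comment rather than re-registering infrastructure.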
Hackers exploiting malicious software stolen from the National Security Agency executed damaging cyberattacks on Friday that hit dozens of countries worldwide, forcing Britain’s public health system to send patients away, freezing computers at Russia’s Interior Ministry and wreaking havoc on tens of thousands of computers elsewhere. The attacks amounted to an audacious global blackmail attempt spread by the internet and underscored the vulnerabilities of the digital age. Transmitted via email, the malicious software locked British hospitals out of their computer systems and demanded ransom before users could be let back in — with a threat that data would be destroyed if the demands were not met. By late Friday the attacks had spread to more than 74 countries, according to security firms tracking the spread. Kaspersky Lab, a Russian cybersecurity firm, said Russia was the worst-hit, followed by Ukraine, India and Taiwan. Reports of attacks also came from Latin America and Africa.
The U.S. government reported a five-fold increase in the number of electronic media searches at the border in a single year, from 4,764 in 2015 to 23,877 in 2016. Every one of those searches was a potential privacy violation. Our lives are minutely documented on the phones and laptops we carry, and in the cloud. Our devices carry records of private conversations, family photos, medical documents, banking information, information about what websites we visit, and much more. Moreover, people in many professions, such as lawyers and journalists, have a heightened need to keep their electronic information confidential. How can travelers keep their digital data safe? This guide (updating a previous guide from 2011) helps travelers understand their individual risks when crossing the U.S. border, provides an overview of the law around border search, and offers a brief technical overview to securing digital data.
We don’t take our other valuables with us when we travel—we leave the important stuff at home, or in a safe place. But Facebook and Google don’t give us similar control over our valuable data. With these online services, it’s all or nothing. We need a ‘trip mode’ for social media sites that reduces our contact list and history to a minimal subset of what the site normally offers. Not only would such a feature protect people forced to give their passwords at the border, but it would mitigate the many additional threats to privacy they face when they use their social media accounts away from home. Both Facebook and Google make lofty claims about user safety, but they’ve done little to show they take the darkening political climate around the world seriously. A ‘trip mode’ would be a chance for them to demonstrate their commitment to user safety beyond press releases and anodyne letters of support. The only people who can offer reliable protection against invasive data searches at national borders are the billion-dollar companies who control the servers. They have the technology, the expertise, and the legal muscle to protect their users. All that’s missing is the will.
How many potentially incriminating things do you have lying around your home? If you’re like most people, the answer is probably zero. And yet police would need to go before a judge and establish probable cause before they could get a warrant to search your home. What we’re seeing now is that anyone can be grabbed on their way through customs and forced to hand over the full contents of their digital life.
Until a few years ago, machine learning algorithms simply did not work very well on many meaningful tasks like recognizing objects or translation. Thus, when a machine learning algorithm failed to do the right thing, this was the rule, rather than the exception. Today, machine learning algorithms have advanced to the next stage of development: when presented with naturally occurring inputs, they can outperform humans. Machine learning has not yet reached true human-level performance, because when confronted by even a trivial adversary, most machine learning algorithms fail dramatically. In other words, we have reached the point where machine learning works, but may easily be broken.
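The claim that working models "may easily be broken" can be made concrete with the fast gradient sign method, the textbook adversarial attack: perturb each input feature by a small step in the direction that most changes the model's output. Below is a minimal NumPy sketch on a toy logistic-regression "model"; the weights and input values are invented for illustration.

```python
import numpy as np

# Toy "model": logistic regression with fixed, made-up weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    return 1 / (1 + np.exp(-(x @ w + b)))  # P(class = 1)

x = np.array([1.0, 0.2, 0.3])
# For a linear model the gradient of the logit w.r.t. the input is just w,
# so the fast gradient sign method perturbs each feature by eps * sign(w).
eps = 0.4
x_adv = x - eps * np.sign(w)   # step that pushes the logit down

print(predict(x))      # confidently class 1
print(predict(x_adv))  # nearly identical input, now classified as class 0
```

Each feature moves by at most 0.4, yet the prediction flips; for deep networks the same trick works with perturbations small enough to be invisible to a human.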
Last Thursday, with friends and colleagues from Open Rights Group, I spent a few hours at the Adult Provider Network’s Age Verification Demonstration (“the demo”) to watch demonstrations of technologies which attempt to fulfil Age Verification requirements for access to online porn in the UK. Specifically: Age Verification (“AV”) is a requirement of part 3 of the Digital Economy Bill that seeks to “prevent access by persons under the age of 18” to “pornographic material available on the internet on a commercial basis”. There are many contentious social and business issues related to AV[…] there are many open questions and many criticisms of the Digital Economy Bill’s provisions; but to date there appears to have been no critical appraisal of the proposed technologies for AV, and so that is what I seek to address in this posting.
Tim O'Reilly writes about the reality that more and more of our lives – including whether you end up seeing this very sentence! – is in the hands of “black boxes” – algorithmic decision-makers whose inner workings are a secret from the people they affect.
O'Reilly proposes four tests to determine whether a black box is trustable:
1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.
2. Success is measurable.
3. The goals of the algorithm’s creators are aligned with the goals of the algorithm’s consumers.
4. The algorithm leads its creators and its users to make better long-term decisions.
O'Reilly goes on to test these assumptions against some of the existing black boxes that we trust every day, like aviation autopilot systems, and shows that this is a very good framework for evaluating algorithmic systems.
But I have three important quibbles with O'Reilly’s framing. The first is absolutely foundational: the reason that these algorithms are black boxes is that the people who devise them argue that releasing details of their models will weaken the models’ security. This is nonsense.
For example, Facebook tweaked its algorithm to downrank “clickbait” stories. Adam Mosseri, Facebook’s VP of product management, told TechCrunch, “Facebook won’t be publicly publishing the multi-page document of guidelines for defining clickbait because ‘a big part of this is actually spam, and if you expose exactly what we’re doing and how we’re doing it, they reverse engineer it and figure out how to get around it.’”
There’s a name for this in security circles: “Security through obscurity.” It is as thoroughly discredited an idea as is possible. As far back as the 19th century, security experts have decried the idea that robust systems can rely on secrecy as their first line of defense against compromise.
The reason the algorithms O'Reilly discusses are black boxes is because the people who deploy them believe in security-through-obscurity. Allowing our lives to be manipulated in secrecy because of an unfounded, superstitious belief is as crazy as putting astrologers in charge of monetary policy, no-fly lists, hiring decisions, and parole and sentencing recommendations.
So there’s that: the best way to figure out whether we can trust a black box is to smash it open, demand that it be exposed to the disinfecting power of sunshine, and give no quarter to the ideologically bankrupt security-through-obscurity court astrologers of Facebook, Google, and the TSA.
Then there’s the second issue, which is important whether or not we can see inside the black box: what data was used to train the model? Or, in traditional scientific/statistical terms, what was the sampling methodology?
Garbage in, garbage out is a principle as old as computer science, and sampling bias is a problem that’s as old as the study of statistics. Algorithms are often deployed to replace biased systems with empirical ones: for example, predictive policing algorithms tell the cops where to look for crime, supposedly replacing racially biased stop-and-frisk with data-driven systems of automated suspicion.
But predictive policing training data comes from earlier, human-judgment-driven stop-and-frisk projects. If the cops only make black kids turn out their pockets, then all the drugs, guns and contraband they find will be in the pockets of black kids. Feed this data to a machine learning model and ask it where the future guns, drugs and contraband will be found, and it will dutifully send the police out to harass more black kids. The algorithm isn’t racist, but its training data is.
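The feedback loop described above is easy to simulate. In the sketch below (all numbers invented), two neighborhoods have identical underlying contraband rates, but a biased patrol history concentrates stops in one of them; a naive "predictive" model trained on the resulting find counts then sends patrols straight back to the over-policed neighborhood.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true contraband rate.
TRUE_RATE = {"A": 0.1, "B": 0.1}

def patrol(allocation, n_stops=1000):
    """Return contraband counts given how stops are allocated."""
    found = {"A": 0, "B": 0}
    for hood, share in allocation.items():
        for _ in range(int(n_stops * share)):
            if random.random() < TRUE_RATE[hood]:
                found[hood] += 1
    return found

# Biased history: 90% of stops happen in neighborhood A.
history = patrol({"A": 0.9, "B": 0.1})

# Naive "predictive" model: allocate future patrols in proportion
# to where contraband was found before.
total = sum(history.values())
prediction = {hood: history[hood] / total for hood in history}
print(prediction)  # A dominates, so the patrols go back to A
```

Nothing in the model is racist; it faithfully reproduces the bias in its training data, which is exactly the point.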
There’s a final issue, which is that algorithms have to have their models tweaked based on measurements of success. It’s not enough to merely measure success: the errors in the algorithm’s predictions also have to be fed back to it, to correct the model. That’s the difference between Amazon’s sales-optimization and automated hiring systems. Amazon’s systems predict ways of improving sales, which the company tries: the failures are used to change the model to improve it. But automated hiring systems blackball some applicants and advance others, and the companies that make these systems don’t track whether the excluded people go on to be great employees somewhere else, or whether the recommended hires end up stealing from the company or alienating its customers.
I like O'Reilly’s framework for evaluating black boxes, but I think we need to go farther.
Over the past several years, Marlinspike has quietly positioned himself at the front lines of a quarter-century-long war between advocates of encryption and law enforcement. Since the first strong encryption tools became publicly available in the early ’90s, the government has warned of the threat posed by “going dark”—that such software would cripple American police departments and intelligence agencies, allowing terrorists and organized criminals to operate with impunity. In 1993 it unsuccessfully tried to implement a backdoor system called the Clipper Chip to get around encryption. In 2013, Edward Snowden’s leaks revealed that the NSA had secretly sabotaged a widely used crypto standard in the mid-2000s and that since 2007 the agency had been ingesting a smorgasbord of tech firms’ data with and without their cooperation. Apple’s battle with the FBI over Farook’s iPhone destroyed any pretense of a truce.
A poorly named Wi-Fi hotspot sparked a security scare on a Qantas flight and prompted about 50 terrified passengers to refuse to fly. The hotspot name, “Mobile Detonation Device”, was spotted by a female passenger who saw it on her phone’s Wi-Fi menu before the plane left Melbourne airport.
Void pantographs work by exploiting the limitations and features of copying equipment. A scanner or photocopier will act as a low-pass filter on the original image, blurring edges slightly. It will also not be perfectly aligned with the directions of the document, causing aliasing. Features smaller than the resolution will also not be reproduced. In addition, human vision is sensitive to luminance contrast ratio. This means that if a grey region consists of a grid of very small dark dots the filtering will produce a lighter grey, while a region of larger dots will be affected differently (“big-dot-little-dot”). This makes it possible to see a pattern that previously was invisible.
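The big-dot-little-dot effect can be sketched in a few lines: model the copier as a box blur (the optical low-pass filter) followed by binarization (the copier's limited tonal resolution), and two regions with identical average ink coverage reproduce very differently. The 1-D strip and parameter values below are illustrative, not a model of any real device.

```python
import numpy as np

def copier(strip, blur=3, threshold=0.5):
    """Crude copier model: box blur, then binarize. 1.0 = dark ink."""
    kernel = np.ones(blur) / blur
    blurred = np.convolve(strip, kernel, mode="same")
    return (blurred > threshold).astype(float)

# Two strips with the SAME average ink coverage (25% dark pixels):
small_dots = np.tile([1, 0, 0, 0], 10).astype(float)             # background tint
big_dots   = np.tile([1, 1, 0, 0, 0, 0, 0, 0], 5).astype(float)  # hidden "VOID" marks

print(copier(small_dots).mean())  # 0.0: fine dots blur below threshold and vanish
print(copier(big_dots).mean())    # 0.25: coarse dots survive the copy
```

On the original, both regions read as the same gray; on the copy, the fine-dot background drops out while the coarse-dot lettering remains, which is how the word appears.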
qaul.net implements a redundant, open communication principle, in which wireless-enabled computers and mobile devices can directly form a spontaneous network. Text messaging, file sharing and voice calls are possible independent of internet and cellular networks. Qaul.net can spread like a virus, and an Open Source Community can modify it freely. In a time of communication blackouts in places like Egypt, Burma, and Tibet, and given the large power outages often caused by natural disasters, qaul.net has taken on the challenge of critically examining existing communication pathways while simultaneously exploring new horizons.
every device is a target for colonization, as each successfully exploited target is theoretically useful as a means of infiltrating another possible target. Port scanning and downloading banners to identify which software is operating on the target system is merely the first step of the attack (Figure 8). Top secret documents from the NSA seen by Heise demonstrate that the involved spy agencies follow the common methodology of online organized crime (Figure 9): reconnaissance (Figure 10) is followed by infection (Figure 11), command and control (Figure 12), and exfiltration (Figure 13). The NSA presentation makes it clear that the agency embraces the mindset of criminals. In the slides, they discuss techniques and then show screenshots of their own tools to support this criminal process (Figures 14, 15, and 16).
Over the course of the last three years, leading panels and strategy sessions at the Allied Media Conference, and informed by the work of the Tactical Tech Collective, we’ve learned that conversations about safety and security are most successful when they are grounded in discussion about what we envision, know, and practice. The following set of questions offers a starting point for grassroots organizers interested in applying a contextual security framework to their organizing.
Facebook and Google seem very powerful, but they live about a week from total ruin all the time. They know the cost of leaving social networks individually is high, but en masse, becomes next to nothing. Windows could be replaced with something better written. The US government would fall to a general revolt in a matter of days. It wouldn’t take a total defection or a general revolt to change everything, because corporations and governments would rather bend to demands than die. These entities do everything they can get away with — but we’ve forgotten that we’re the ones that are letting them get away with things.
If ZunZuneo looks ridiculous in retrospect, it’s because 2011 is a different country. We now know U.S. security apparatus may threaten the “open Internet” as much as an oppressive government, if not more. Clinton’s speeches as secretary of state dwell on freedom of expression but not freedom from surveillance, and now—following the NSA revelations—we have a good idea why. Beyond all this, as sociologist Zeynep Tufecki writes, it’s likely that the failure of ZunZuneo will threaten online activism abroad, even if it’s not associated with the U.S. government.
“About six years ago I found a discussion forum online where users were sharing techniques for accessing various devices that were all networked through the internet. A large part of the discussion surrounded the ability to access unsecured webcam control panels, which had at some point been indexed through the search robots at Google. Interestingly, even control panels that required a password were sometimes very easily bypassed by a default user & password combination from the original device settings. At some point I started making screen captures [with] the webcams I was able to access. Sometimes it would be an image of a dog in a cage, or a tired employee behind a cash register in a convenience store… fairly uneventful moments, but every camera that successfully loaded felt like I was viewing a portal into another world, a space only accessible through digital means.
Using this methodology, I eventually accessed the control panel for this camera, which offered almost complete pan & tilt options, a 21x optical zoom, focus control, and exposure adjustments. The level of control was unparalleled compared to the other cameras I was accessing.
Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are – and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.
When Chelsea Manning (formerly Bradley Manning) was thirteen, the US government announced it had launched “Operation Infinite Justice”. Operation Infinite Justice sought to punish the perpetrators of the September 11, 2001 attacks, destroy Al Qaeda, and end the reign of the Taliban. Operation Infinite Justice was renamed “Operation Enduring Freedom” after protests from Islamic scholars, who argued that God, not the US government, was the arbiter of justice. But freedom, it seemed, was something the United States could give and take away. When Manning was fourteen, the Bush administration announced that detainees in Guantanamo did not deserve protection under the Geneva conventions and that torture was justified. When Manning was fifteen, the US invaded Iraq in response to fabricated reports that Saddam Hussein had weapons of mass destruction. When Manning was sixteen, US soldiers tortured and sodomised prisoners in Iraq’s Abu Ghraib prison. When Manning was seventeen, a movement emerged to prosecute the Bush administration for war crimes. Nothing really came of it. When Manning was nineteen, she joined the army.
So, hacker culture is kind of at a crossroads. For a long time it was totally cool that, you know what, I don’t really want to be political, because I just like to reverse code and it’s a lot of fun, and I don’t really have time for politics cause I spend thirteen hours a day looking at Shell code and socialism takes too long. That was great for a while, but we don’t get to be apolitical anymore. Because if you’re doing security work, if you’re doing development work and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in great part to prop up existing companies and existing power structures. Who here has worked for a security consultancy? Not that many people, ok. I don’t know anybody who has worked for a security consultancy where that consultancy has not done work for someone in the defense industry. There are probably a few, and I guarantee you that those consultancies that have done no work that is defense industry related, have taken an active political position, that we will not touch anything that is remotely fishy. If you’re apolitical, you’re aiding the enemy.
Bitcoin’s resilience comes from a property I refer to as Too Big To Regulate. Put simply, it’s easier to tell ten people to behave, than ten thousand. So if we want a system that’s impossible to regulate, get the power in the hands of ten thousand rather than ten. But there are some factors in Bitcoin that are not Too Big To Regulate. There’s only a few parties that turn bitcoin (which teleports) into dollars (which buy stuff). There can, and will be more, but the quantity of these critical nodes is not set by Bitcoin itself.
All disruptive technologies upset traditional power balances, and the Internet is no exception. The standard story is that it empowers the powerless, but that’s only half the story. The Internet empowers everyone. Powerful institutions might be slow to make use of that new power, but since they are powerful, they can use it more effectively. Governments and corporations have woken up to the fact that not only can they use the Internet, they can control it for their interests. Unless we start deliberately debating the future we want to live in, and the role of information technology in enabling that world, we will end up with an Internet that benefits existing power structures and not society in general.
A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University’s Harvey Molotch helpfully brings a sociologist’s perspective to the subject in his new book Against Security.
Disruption created by intentional generation of fake GPS signals could have serious economic consequences. This article discusses how typical civil GPS receivers respond to an advanced civil GPS spoofing attack, and four techniques to counter such attacks: spread-spectrum security codes, navigation message authentication, dual-receiver correlation of military signals, and vestigial signal defense. Unfortunately, any kind of anti-spoofing, however necessary, is a tough sell.
For correspondents who report from conflict zones or on underground activism in repressive regimes, the risks are extremely high. Recently, two excellent investigative series—by The Wall Street Journal and Bloomberg News—and the release of a large trove of surveillance industry documents by Wikileaks dubbed “The Spy files,” provided a glimpse of just how sophisticated off-the-shelf monitoring technologies have become. Western companies have sold mass Web and e-mail surveillance technology to Libya and Syria, for instance, and in Egypt, activists found specialized software that allowed the government to listen in to Skype conversations. In Bahrain, meanwhile, technology sold by Nokia Siemens allowed the government to monitor cell-phone conversations and text messages.