Posts tagged security

Inside a low budget consumer hardware espionage implant

espionage, security, DIY, NSA, USB, GSM, S8, 2018

A while back Joe Fitz tweeted about the S8 data line locator. He referred to it as “Trickle down espionage” because it is reminiscent of NSA spying equipment. The S8 data line locator is a GSM listening and location device hidden inside the plug of a standard USB data/charging cable. It supports the 850, 900, 1800 and 1900 MHz GSM frequencies. Its core idea is very similar to the NSA/CSS COTTONMOUTH product line [1], in which an RF device is hidden inside a USB plug; such hidden devices are referred to as implants. The device itself is marketed as a location tracker for use in cars, where a thief would not be able to identify the USB cable as a tracking device. Its malicious use cases, however, cannot be denied, especially since it features no GPS, making its location reporting very coarse (1.57 km deviation in my tests). It can, for example, be called to listen to a live audio feed from a small microphone within the device, and it can be programmed to call back if the sound level surpasses a 45 dB threshold. The fact that the device can be repackaged in its sliding case after configuration (i.e. inserting a SIM) without any noticeable marks on the packaging suggests its use-case: covert espionage.
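The sound-activated call-back behaviour described above amounts to a simple level trigger. A minimal sketch, assuming a calibration constant and float sample format that are illustrative rather than anything known about the S8 firmware:

```python
import math

CALIBRATION_DB = 90.0   # assumption: full-scale input corresponds to ~90 dB
THRESHOLD_DB = 45.0     # the vendor's stated trigger level

def sound_level_db(samples):
    """Estimate the sound level of a block of float samples in [-1, 1]."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return CALIBRATION_DB + 20.0 * math.log10(max(rms, 1e-9))

def should_call_back(samples):
    """Trigger a call-back when the level exceeds the 45 dB threshold."""
    return sound_level_db(samples) > THRESHOLD_DB

# a loud 440 Hz tone versus near-silence, sampled at 8 kHz
loud = [0.5 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
quiet = [0.001 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
assert should_call_back(loud) and not should_call_back(quiet)
```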

via https://ha.cking.ch/s8_data_line_locator/

The Spycraft Revolution

espionage, security, progress, technology, Foreign-Policy

The world of espionage is facing tremendous technological, political, legal, social, and commercial changes. The winners will be those who break the old rules of the spy game and work out new ones. They will need to be nimble and collaborative and—paradoxically—to shed much of the secrecy that has cloaked their trade since its inception. The balance of power in the spy world is shifting; closed societies now have the edge over open ones. It has become harder for Western countries to spy on places such as China, Iran, and Russia and easier for those countries’ intelligence services to spy on the rest of the world. Technical prowess is also shifting. Much like manned spaceflight, human-based intelligence is starting to look costly and anachronistic. Meanwhile, a gulf is growing between the cryptographic superpowers—the United States, United Kingdom, France, Israel, China, and Russia—and everyone else. Technical expertise, rather than human sleuthing, will hold the key to future success.

via https://foreignpolicy.com/2019/04/27/the-spycraft-revolution-espionage-technology/

Turla’s watering hole campaign: An updated Firefox extension abusing Instagram

botnet, characters, security, instagram, Britney-Spears

We noticed that this extension was distributed through a compromised Swiss security company website. Unsuspecting visitors to this website were asked to install this malicious extension. The extension is a simple backdoor, but with an interesting way of fetching its C&C domain. The extension uses a bit.ly URL to reach its C&C, but the URL path is nowhere to be found in the extension code. In fact, it will obtain this path by using comments posted on a specific Instagram post. The one that was used in the analyzed sample was a comment about a photo posted to the Britney Spears official Instagram account.
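The dead-drop resolver idea, deriving a hidden short-URL path from innocuous-looking comments, can be sketched in a few lines. The real extension hashed each comment and extracted the path with a custom regex; the `#`-marker scheme below is purely hypothetical and only illustrates the concept:

```python
import re

def extract_c2_path(comments):
    """Toy dead-drop resolver: scan comments for one whose characters,
    filtered by a regex, yield a short-URL path. Turla's actual
    hash-and-regex encoding differed from this hypothetical scheme."""
    for c in comments:
        # keep the single character following each '#' token
        hidden = "".join(re.findall(r"#(\w)", c))
        if len(hidden) >= 6:
            return "https://bit.ly/" + hidden
    return None

comments = [
    "love this pic!!",
    "#2 #k #d #7 #v #q smile to everyone",
]
assert extract_c2_path(comments) == "https://bit.ly/2kd7vq"
```

The appeal for the attacker is that the C&C address never appears in the extension's code, and taking down a comment on a celebrity's post is far harder to justify than seizing a domain.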

via https://www.welivesecurity.com/2017/06/06/turlas-watering-hole-campaign-updated-firefox-extension-abusing-instagram/

Hackers Hit Dozens of Countries Exploiting Stolen N.S.A. Tool

security, ransomware, malware, NSA, NHS, windows, 2017

Hackers exploiting malicious software stolen from the National Security Agency executed damaging cyberattacks on Friday that hit dozens of countries worldwide, forcing Britain’s public health system to send patients away, freezing computers at Russia’s Interior Ministry and wreaking havoc on tens of thousands of computers elsewhere. The attacks amounted to an audacious global blackmail attempt spread by the internet and underscored the vulnerabilities of the digital age. Transmitted via email, the malicious software locked British hospitals out of their computer systems and demanded ransom before users could be let back in — with a threat that data would be destroyed if the demands were not met. By late Friday the attacks had spread to more than 74 countries, according to security firms tracking the spread. Kaspersky Lab, a Russian cybersecurity firm, said Russia was the worst-hit, followed by Ukraine, India and Taiwan. Reports of attacks also came from Latin America and Africa.

via https://www.nytimes.com/2017/05/12/world/europe/uk-national-health-service-cyberattack.html

Digital Privacy at the U.S. Border: Protecting the Data On Your Devices and In the Cloud

EFF, privacy, security, travel, US, USA, data, borders

The U.S. government reported a five-fold increase in the number of electronic media searches at the border in a single year, from 4,764 in 2015 to 23,877 in 2016. Every one of those searches was a potential privacy violation. Our lives are minutely documented on the phones and laptops we carry, and in the cloud. Our devices carry records of private conversations, family photos, medical documents, banking information, information about what websites we visit, and much more. Moreover, people in many professions, such as lawyers and journalists, have a heightened need to keep their electronic information confidential. How can travelers keep their digital data safe? This guide (updating a previous guide from 2011) helps travelers understand their individual risks when crossing the U.S. border, provides an overview of the law around border search, and offers a brief technical overview to securing digital data.

via https://www.eff.org/wp/digital-privacy-us-border-2017

Social Media Needs A Travel Mode

politics, privacy, security, Maciej-Cegłowski, social-media, six-degrees, surveillance

We don’t take our other valuables with us when we travel—we leave the important stuff at home, or in a safe place. But Facebook and Google don’t give us similar control over our valuable data. With these online services, it’s all or nothing. We need a ‘trip mode’ for social media sites that reduces our contact list and history to a minimal subset of what the site normally offers. Not only would such a feature protect people forced to give their passwords at the border, but it would mitigate the many additional threats to privacy they face when they use their social media accounts away from home. Both Facebook and Google make lofty claims about user safety, but they’ve done little to show they take the darkening political climate around the world seriously. A ‘trip mode’ would be a chance for them to demonstrate their commitment to user safety beyond press releases and anodyne letters of support. The only people who can offer reliable protection against invasive data searches at national borders are the billion-dollar companies who control the servers. They have the technology, the expertise, and the legal muscle to protect their users. All that’s missing is the will.

via http://idlewords.com/2017/02/social_media_needs_a_travel_mode.htm

I’ll never bring my phone on an international flight again. Neither should you.

Medium, privacy, security, state surveillance, USA, authoritarianism, search and destroy, borders, border security, law, lawlessness

How many potentially incriminating things do you have lying around your home? If you’re like most people, the answer is probably zero. And yet police would need to go before a judge and establish probable cause before they could get a warrant to search your home. What we’re seeing now is that anyone can be grabbed on their way through customs and forced to hand over the full contents of their digital life.

via https://medium.freecodecamp.com/ill-never-bring-my-phone-on-an-international-flight-again-neither-should-you-e9289cde0e5f

Breaking things is easy

machine-learning, security, modeling, model, data, ML, 2016

Until a few years ago, machine learning algorithms simply did not work very well on many meaningful tasks like recognizing objects or translation. Thus, when a machine learning algorithm failed to do the right thing, this was the rule, rather than the exception. Today, machine learning algorithms have advanced to the next stage of development: when presented with naturally occurring inputs, they can outperform humans. Machine learning has not yet reached true human-level performance, because when confronted by even a trivial adversary, most machine learning algorithms fail dramatically. In other words, we have reached the point where machine learning works, but may easily be broken.

via http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html

A Sequence of Spankingly Bad Ideas

Medium, age verification, ORG, digital economy, surveillance, security, privacy, bad ideas, Alec Muffett

Last Thursday, with friends and colleagues from Open Rights Group, I spent a few hours at the Adult Provider Network’s Age Verification Demonstration (“the demo”) to watch demonstrations of technologies which attempt to fulfil Age Verification requirements for access to online porn in the UK. Specifically: Age Verification (“AV”) is a requirement of part 3 of the Digital Economy Bill that seeks to “prevent access by persons under the age of 18” to “pornographic material available on the internet on a commercial basis”. There are many contentious social and business issues related to AV[…] there are many open questions and many criticisms of the Digital Economy Bill’s provisions; but to date there appears to have been no critical appraisal of the proposed technologies for AV, and so that is what I seek to address in this posting.

via https://medium.com/@alecmuffett/a-sequence-of-spankingly-bad-ideas-483cecf4ba89

Rules for trusting “black boxes” in algorithmic control systems

algorithmics, trust, black boxes, security, decision making, prediction, data, machine learning, ethics

mostlysignssomeportents:

Tim O'Reilly writes about the reality that more and more of our lives – including whether you end up seeing this very sentence! – is in the hands of “black boxes” – algorithmic decision-makers whose inner workings are a secret from the people they affect.

O'Reilly proposes four tests to determine whether a black box is trustable:

1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.

2. Success is measurable.

3. The goals of the algorithm’s creators are aligned with the goals of the algorithm’s consumers.

4. Does the algorithm lead its creators and its users to make better longer term decisions?

O'Reilly goes on to test these assumptions against some of the existing black boxes that we trust every day, like aviation autopilot systems, and shows that this is a very good framework for evaluating algorithmic systems.

But I have three important quibbles with O'Reilly’s framing. The first is absolutely foundational: the reason that these algorithms are black boxes is that the people who devise them argue that releasing details of their models will weaken the models’ security. This is nonsense.

For example, Facebook tweaked its algorithm to downrank “clickbait” stories. Adam Mosseri, Facebook’s VP of product management, told TechCrunch, “Facebook won’t be publicly publishing the multi-page document of guidelines for defining clickbait because ‘a big part of this is actually spam, and if you expose exactly what we’re doing and how we’re doing it, they reverse engineer it and figure out how to get around it.’”

There’s a name for this in security circles: “Security through obscurity.” It is as thoroughly discredited an idea as is possible. As far back as the 19th century, security experts have decried the idea that robust systems can rely on secrecy as their first line of defense against compromise.

The reason the algorithms O'Reilly discusses are black boxes is because the people who deploy them believe in security-through-obscurity. Allowing our lives to be manipulated in secrecy because of an unfounded, superstitious belief is as crazy as putting astrologers in charge of monetary policy, no-fly lists, hiring decisions, and parole and sentencing recommendations.

So there’s that: the best way to figure out whether we can trust a black box is to smash it open, demand that it be exposed to the disinfecting power of sunshine, and give no quarter to the ideologically bankrupt security-through-obscurity court astrologers of Facebook, Google, and the TSA.

Then there’s the second issue, which is important whether or not we can see inside the black box: what data was used to train the model? Or, in traditional scientific/statistical terms, what was the sampling methodology?

Garbage in, garbage out is a principle as old as computer science, and sampling bias is a problem that’s as old as the study of statistics. Algorithms are often deployed to replace biased systems with empirical ones: for example, predictive policing algorithms tell the cops where to look for crime, supposedly replacing racially biased stop-and-frisk with data-driven systems of automated suspicion.

But predictive policing training data comes from earlier, human-judgment-driven stop-and-frisk projects. If the cops only make black kids turn out their pockets, then all the drugs, guns and contraband they find will be in the pockets of black kids. Feed this data to a machine learning model and ask it where the future guns, drugs and contraband will be found, and it will dutifully send the police out to harass more black kids. The algorithm isn’t racist, but its training data is.
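The feedback loop described above can be made concrete with a toy simulation. The numbers here are invented for illustration: identical true contraband rates in two neighbourhoods, but a 10x historical bias in how often patrols stop people in one of them:

```python
import random

random.seed(0)

# assumption for the demo: the true contraband rate is identical
# in both neighbourhoods
TRUE_RATE = {"A": 0.1, "B": 0.1}

def patrol(neighbourhood, n_stops):
    """Count 'finds' over n_stops random stops in a neighbourhood."""
    return sum(random.random() < TRUE_RATE[neighbourhood]
               for _ in range(n_stops))

# historical (biased) policing: ten times more stops in neighbourhood A
finds = {"A": patrol("A", 1000), "B": patrol("B", 100)}

# a naive "data-driven" model sends future patrols wherever past finds
# were highest, so it inherits the sampling bias despite identical
# underlying rates
assert finds["A"] > finds["B"]
```

Nothing in the model is racist in itself; the skew comes entirely from where the training data was collected, which is exactly the sampling-bias point above.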

There’s a final issue, which is that algorithms have to have their models tweaked based on measurements of success. It’s not enough to merely measure success: the errors in the algorithm’s predictions also have to be fed back to it, to correct the model. That’s the difference between Amazon’s sales-optimization and automated hiring systems. Amazon’s systems predict ways of improving sales, which the company tries: the failures are used to change the model to improve it. But automated hiring systems blackball some applicants and advance others, and the companies that make these systems don’t track whether the excluded people go on to be great employees somewhere else, or whether the recommended hires end up stealing from the company or alienating its customers.

I like O'Reilly’s framework for evaluating black boxes, but I think we need to go farther.

http://boingboing.net/2016/09/15/rules-for-trusting-black-box.html

Meet Moxie Marlinspike, the Anarchist Bringing Encryption to All of Us

encryption, privacy, security, Moxie, Wired, technology, anarchism, USA

Over the past several years, Marlinspike has quietly positioned himself at the front lines of a quarter-century-long war between advocates of encryption and law enforcement. Since the first strong encryption tools became publicly available in the early ’90s, the government has warned of the threat posed by “going dark”—that such software would cripple American police departments and intelligence agencies, allowing terrorists and organized criminals to operate with impunity. In 1993 it unsuccessfully tried to implement a backdoor system called the Clipper Chip to get around encryption. In 2013, Edward Snowden’s leaks revealed that the NSA had secretly sabotaged a widely used crypto standard in the mid-2000s and that since 2007 the agency had been ingesting a smorgasbord of tech firms’ data with and without their cooperation. Apple’s battle with the FBI over Farook’s iPhone destroyed any pretense of a truce.

via https://www.wired.com/2016/07/meet-moxie-marlinspike-anarchist-bringing-encryption-us/

Wi-Fi hotspot named ‘detonation device’ causes bomb scare at Melbourne airport

technology, terror, terrorism, security, security-theatre

A poorly named Wi-Fi hotspot sparked a security scare on a Qantas flight and prompted about 50 terrified passengers to refuse to fly. The hotspot name, Mobile Detonation Device, was spotted by a female passenger who saw it on her phone’s Wi-Fi menu before the plane left Melbourne airport.

via http://www.telegraph.co.uk/news/2016/05/02/wi-fi-hotspot-named-detonation-device-causes-bomb-scare-at-melbo/

Void pantograph

wikipedia, reproduction, void pantograph, copywrongs, security, copy, photocopy

Void pantographs work by exploiting the limitations and features of copying equipment. A scanner or photocopier will act as a low-pass filter on the original image, blurring edges slightly. It will also not be perfectly aligned with the directions of the document, causing aliasing. Features smaller than the resolution will also not be reproduced. In addition, human vision is sensitive to luminance contrast ratio. This means that if a grey region consists of a grid of very small dark dots the filtering will produce a lighter grey, while a region of larger dots will be affected differently (“big-dot-little-dot”). This makes it possible to see a pattern that previously was invisible.
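The “big-dot-little-dot” effect can be simulated in a few lines: two patterns with identical ink coverage respond differently once the copier's limited resolution low-passes them. Block averaging below is a stand-in for the scanner's optics, and the dot sizes are chosen purely for illustration:

```python
import numpy as np

def tile(pattern, shape):
    """Repeat a small dot pattern to fill the given image shape."""
    p = np.array(pattern, dtype=float)
    return np.tile(p, (shape[0] // p.shape[0], shape[1] // p.shape[1]))

small = tile([[1, 0], [0, 0]], (8, 8))       # 25% coverage, 1-px dots
big = tile([[1, 1, 0, 0],
            [1, 1, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], (8, 8))           # 25% coverage, 2x2-px dots

assert small.mean() == big.mean() == 0.25    # identical grey to the eye

def copy_machine(img, k=2):
    """Low-pass filter: average k x k blocks (the copier's resolution)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

# after copying, the fine dots wash out to uniform grey...
assert np.allclose(copy_machine(small), 0.25)
# ...while the coarse dots survive as saturated black blocks, so a
# toner-style threshold makes the hidden pattern reappear
assert (copy_machine(big) > 0.5).any()
assert not (copy_machine(small) > 0.5).any()
```

On a real document, the coarse dots spell “VOID”: invisible against the fine-dot background in the original, but the only thing dark enough to survive the copy.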

https://en.wikipedia.org/wiki/Void_pantograph

qaul.net – قول

activism, OSS, art, network, hacktivism, security, decentralised, decentre

qaul.net implements a redundant, open communication principle, in which wireless-enabled computers and mobile devices can directly form a spontaneous network. Text messaging, file sharing and voice calls are possible independent of internet and cellular networks. Qaul.net can spread like a virus, and an Open Source Community can modify it freely. In a time of communication blackouts in places like Egypt, Burma, and Tibet, and given the large power outages often caused by natural disasters, qaul.net has taken on the challenge of critically examining existing communication pathways while simultaneously exploring new horizons.
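The spontaneous-network idea can be illustrated with a toy flood relay: every node forwards a message once to its radio neighbours, so reachability needs no tower or ISP. qaul.net's actual mesh routing is more sophisticated; this sketch only shows the principle:

```python
from collections import deque

def flood(adjacency, source):
    """Breadth-first flood of a message through an ad-hoc mesh:
    each node relays once; returns the set of nodes reached."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for peer in adjacency.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

# phones A-B-C, each only in radio range of its neighbours, no tower
mesh = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
assert flood(mesh, "A") == {"A", "B", "C"}
```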

http://www.qaul.net/text_en.html

NSA/GCHQ: The HACIENDA Program for Internet Colonization

internet, network, security, NSA, GCHQ, exploit, 0day, ORB, HACIENDA, organized crime

every device is a target for colonization, as each successfully exploited target is theoretically useful as a means to infiltrating another possible target. Port scanning and downloading banners to identify which software is operating on the target system is merely the first step of the attack (Figure 8). Top secret documents from the NSA seen by Heise demonstrate that the involved spy agencies follow the common methodology of online organized crime (Figure 9): reconnaissance (Figure 10) is followed by infection (Figure 11), command and control (Figure 12), and exfiltration (Figure 13). The NSA presentation makes it clear that the agency embraces the mindset of criminals. In the slides, they discuss techniques and then show screenshots of their own tools to support this criminal process (Figure 14, 15 and 16).
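The first step the article describes, port scanning and banner grabbing, is almost trivially simple, which is part of the point. A minimal banner grab, demonstrated here against a throwaway local service announcing a fake SSH banner rather than any real target:

```python
import socket
import threading

def grab_banner(host, port, timeout=2.0):
    """Connect and read whatever the service announces about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(256).decode(errors="replace").strip()
        except socket.timeout:
            return ""

# throwaway local service standing in for a scanned host
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()
banner = grab_banner("127.0.0.1", port)
assert banner == "SSH-2.0-OpenSSH_7.4"
```

A banner like this immediately tells the scanner which software, and often which exploitable version, is listening, which is why HACIENDA catalogues them at national scale.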

http://www.heise.de/ct/artikel/NSA-GCHQ-The-HACIENDA-Program-for-Internet-Colonization-2292681.html

Put Away Your Tinfoil Hat: Security in Context

security, privacy, contextual security, tactical tech

Over the course of the last three years, leading panels and strategy sessions at the Allied Media Conference, and informed by the work of the Tactical Tech Collective, we’ve learned that conversations about safety and security are most successful when they are grounded in discussion about what we envision, know, and practice. The following set of questions offer a starting point for grassroots organizers interested in applying a contextual security framework to their organizing.

https://www.alliedmedia.org/news/2014/05/30/put-away-your-tinfoil-hat-security-context

Everything Is Broken

security, computers, rant, 0days, NSA, facebook, google, culture, everything is broken, human rights

Facebook and Google seem very powerful, but they live about a week from total ruin all the time. They know the cost of leaving social networks individually is high, but en masse, becomes next to nothing. Windows could be replaced with something better written. The US government would fall to a general revolt in a matter of days. It wouldn’t take a total defection or a general revolt to change everything, because corporations and governments would rather bend to demands than die. These entities do everything they can get away with — but we’ve forgotten that we’re the ones that are letting them get away with things.

https://medium.com/message/81e5f33a24e1

Meet the Company That Secretly Built ‘Cuban Twitter’

the atlantic, zunzuneo, state, security, NSA, US, surveillance, politics, business, cuba, tunisia

If ZunZuneo looks ridiculous in retrospect, it’s because 2011 is a different country. We now know U.S. security apparatus may threaten the “open Internet” as much as an oppressive government, if not more. Clinton’s speeches as secretary of state dwell on freedom of expression but not freedom from surveillance, and now—following the NSA revelations—we have a good idea why. Beyond all this, as sociologist Zeynep Tufecki writes, it’s likely that the failure of ZunZuneo will threaten online activism abroad, even if it’s not associated with the U.S. government.

http://www.theatlantic.com/technology/archive/2014/04/the-fall-of-internet-freedom-meet-the-company-that-secretly-built-cuban-twitter/360168/

Our Newfound Fear of Risk

Schneier, risk, security, reductionism, perception

Some of this fear results from imperfect risk perception. We’re bad at accurately assessing risk; we tend to exaggerate spectacular, strange, and rare events, and downplay ordinary, familiar, and common ones. This leads us to believe that violence against police, school shootings, and terrorist attacks are more common and more deadly than they actually are – and that the costs, dangers, and risks of a militarized police, a school system without flexibility, and a surveillance state without privacy are less than they really are.

https://www.schneier.com/blog/archives/2013/09/our_newfound_fe.html

Justice in a “Nation of Laws”: The Manning verdict

al jazeera, manning, freedom, security, espionage, terrorism, corruption, justice, corporatism

When Chelsea Manning (formerly Bradley Manning) was thirteen, the US government announced it had launched “Operation Infinite Justice”. Operation Infinite Justice sought to punish the perpetrators of the September 11, 2001 attacks, destroy Al Qaeda, and end the reign of the Taliban. Operation Infinite Justice was renamed “Operation Enduring Freedom” after protests from Islamic scholars, who argued that God, not the US government, was the arbiter of justice. But freedom, it seemed, was something the United States could give and take away. When Manning was fourteen, the Bush administration announced that detainees in Guantanamo did not deserve protection under the Geneva conventions and that torture was justified. When Manning was fifteen, the US invaded Iraq in response to fabricated reports that Saddam Hussein had weapons of mass destruction. When Manning was sixteen, US soldiers tortured and sodomised prisoners in Iraq’s Abu Ghraib prison. When Manning was seventeen, a movement emerged to prosecute the Bush administration for war crimes. Nothing really came of it. When Manning was nineteen, she joined the army.

http://www.aljazeera.com/indepth/opinion/2013/08/2013823123822366392.html

Ethics and Power in the Long War

ethics, power, surveillance, Eleanor Saitta, Dymaxion, hackers, security, intelligence, centralisation

So, hacker culture is kind of at a crossroads. For a long time it was totally cool that, you know what, I don’t really want to be political, because I just like to reverse code and it’s a lot of fun, and I don’t really have time for politics cause I spend thirteen hours a day looking at shellcode and socialism takes too long. That was great for a while, but we don’t get to be apolitical anymore. Because if you’re doing security work, if you’re doing development work and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in a great part to prop up existing companies and existing power structures. Who here has worked for a security consultancy? Not that many people, ok. I don’t know anybody who has worked for a security consultancy where that consultancy has not done work for someone in the defense industry. There are probably a few, and I guarantee you that those consultancies that have done no work that is defense industry related, have taken an active political position, that we will not touch anything that is remotely fishy. If you’re apolitical, you’re aiding the enemy.

https://noisysquare.com/ethics-and-power-in-the-long-war-eleanor-saitta-dymaxion/

Let’s Cut Through the Bitcoin Hype

bitcoin, security, dan kaminsky

Bitcoin’s resilience comes from a property I refer to as Too Big To Regulate. Put simply, it’s easier to tell ten people to behave, than ten thousand. So if we want a system that’s impossible to regulate, get the power in the hands of ten thousand rather than ten. But there are some factors in Bitcoin that are not Too Big To Regulate. There’s only a few parties that turn bitcoin (which teleports) into dollars (which buy stuff). There can, and will be more, but the quantity of these critical nodes is not set by Bitcoin itself.

http://www.wired.com/opinion/2013/05/lets-cut-through-the-bitcoin-hype/

Power And The Internet

internet, security, society, power, politics, corporatism, government

All disruptive technologies upset traditional power balances, and the Internet is no exception. The standard story is that it empowers the powerless, but that’s only half the story. The Internet empowers everyone. Powerful institutions might be slow to make use of that new power, but since they are powerful, they can use it more effectively. Governments and corporations have woken up to the fact that not only can they use the Internet, they can control it for their interests. Unless we start deliberately debating the future we want to live in, and the role of information technology in enabling that world, we will end up with an Internet that benefits existing power structures and not society in general.

https://www.schneier.com/essay-409.html

Book Review: Against Security

book, review, security, security theatre, fear, perception, schneier

A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University’s Harvey Molotch helpfully brings a sociologist’s perspective to the subject in his new book Against Security.

https://www.schneier.com/blog/archives/2012/12/book_review_aga.html

GPS Spoofing

attack, spoofing, anti-spoofing, security, technology, gps

Disruption created by intentional generation of fake GPS signals could have serious economic consequences. This article discusses how typical civil GPS receivers respond to an advanced civil GPS spoofing attack, and four techniques to counter such attacks: spread-spectrum security codes, navigation message authentication, dual-receiver correlation of military signals, and vestigial signal defense. Unfortunately, any kind of anti-spoofing, however necessary, is a tough sell.
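Of the four countermeasures listed, navigation message authentication is the easiest to sketch. The toy below uses a shared-key HMAC; real GNSS proposals use broadcast-friendly schemes such as TESLA-style delayed key disclosure, so treat this purely as an illustration of the verification step, with made-up message contents:

```python
import hashlib
import hmac

# assumption for the demo only: a shared key; real broadcast systems
# cannot hand every receiver a symmetric key like this
SHARED_KEY = b"ground-segment-key"

def sign_nav_message(msg: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Ground segment attaches an authentication tag to the message."""
    return hmac.new(key, msg, hashlib.sha256).digest()

def receiver_accepts(msg: bytes, tag: bytes,
                     key: bytes = SHARED_KEY) -> bool:
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

genuine = b"ephemeris:prn=17,toe=345600,sqrtA=5153.79"
tag = sign_nav_message(genuine)
assert receiver_accepts(genuine, tag)

# a spoofer can forge the message bits, but not a valid tag
forged = b"ephemeris:prn=17,toe=345600,sqrtA=9999.99"
assert not receiver_accepts(forged, tag)
```

Authentication only protects the data bits, though: it does nothing against replay or meaconing of authentic signals, which is why the article pairs it with signal-level defenses.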

http://www.gpsworld.com/gnss-system/signal-processing/straight-talk-anti-spoofing-12471

The spy who came in from the code

surveillance, intelligence, security, journalism, crypto, activism

For correspondents who report from conflict zones or on underground activism in repressive regimes, the risks are extremely high. Recently, two excellent investigative series—by The Wall Street Journal and Bloomberg News—and the release of a large trove of surveillance industry documents by Wikileaks dubbed “The Spy files,” provided a glimpse of just how sophisticated off-the-shelf monitoring technologies have become. Western companies have sold mass Web and e-mail surveillance technology to Libya and Syria, for instance, and in Egypt, activists found specialized software that allowed the government to listen in to Skype conversations. In Bahrain, meanwhile, technology sold by Nokia Siemens allowed the government to monitor cell-phone conversations and text messages.

http://www.cjr.org/feature/the_spy_who_came_in_from_the_c.php?page=all