The wreck of the SS Maheno can be found on the east coast of Fraser Island in Queensland, Australia. The ship — which was washed…

dailyoverview:

The wreck of the SS Maheno can be found on the east coast of Fraser Island in Queensland, Australia. The ship — which was washed ashore by a cyclone in 1935 — was an ocean liner that made regular crossings between New Zealand and Australia in the early 20th century. The 5,000-ton steel-hulled ship has slowly disintegrated over the years and remains a popular tourist attraction.

Instagram: https://bit.ly/2HbBaeF

25°16'01.6"S, 153°14'18.8"E

Source imagery: Andreas Dress (@andreasdress)

A history of Singapore, explained in 10 dishes

singapore, food, culture, history, Malay, Chinese, Indian, Eurasian, Peranakan, Hainanese, British

Singaporeans are obsessed with food. We can expound ceaselessly on where to find the best bak chor mee (minced meat noodles) and will queue for hours for a good yong tau foo (surimi-stuffed tofu and vegetables). Perhaps because most of us are descendants of immigrants thrust into an artificial construct of a nation, or maybe because we live in a country that is constantly renewing and rebuilding, one of the few tangible things that connects us to the past and our cultural identity is food. There are many facets of Singaporean cuisine: Malay, Chinese, Indian, Eurasian (a fusion of European and Asian dishes and ingredients), Peranakan (combining Chinese and Malay food traditions), and catch-all Western, which usually means old-school Hainanese-style British food—a local version of Western food adapted by chefs from the southern Chinese province of Hainan, who worked in British restaurants or households.

via https://roadsandkingdoms.com/2019/a-history-of-singapore-in-10-dishes/

A Programmers Take on “Six Memos for the Next Millennium”

calvino, writing, programming, six-memos, Six-Memos-for-the-Next-Millennium, 2019

The reason why I’m writing about [Six Memos for the Next Millennium] is that while I think that they are great memos about writing, the more I think about them, the more they apply to programming. Which is a weird coincidence, because they were supposed to be memos for writers in the next millennium, and programming is kind of a new form of writing that’s becoming more important in this millennium. Being a game developer, I also can’t help but apply these to game design. So I will occasionally talk about games in here, but I’ll try to keep it mostly about programming.

via https://probablydance.com/2019/03/09/a-programmers-take-on-six-memos-for-the-next-millenium/

Society of Amateur Radio Astronomers

radio, astronomy, howto, radio-telescope

There are lots of ways to get involved in radio astronomy, but they are rarely obvious and do not always offer the immediate gratification of looking through an optical telescope. Most radio telescope packages involve some construction and software set-up by the user, and that can be time-consuming and frustrating, especially if there are no clear instructions to guide the amateur. Nonetheless, it is a very rewarding intellectual endeavor that can keep you busy to the end of your life. Beginners usually purchase one of three types of radio telescope, each costing less than $200.

via http://www.radio-astronomy.org/node/248

Northern Territory Intervention (2007)

Australia, Invasion, racism, NT, NTER, ADF, RDA, John-Howard, Indigenous-affairs, 2007, 2017

In 2007, then prime minister John Howard and his Indigenous affairs minister, Mal Brough, launched the Northern Territory Emergency Response (NTER) into remote Indigenous communities. With no warning, and no consultation, the federal government moved swiftly to seize control of many aspects of the daily lives of residents in 73 targeted remote communities. It implemented coercive measures that would have been unthinkable in non-Indigenous communities. The deployment of uniformed members of the Australian Defence Force into the communities to establish logistics was designed to send a clear message of disruption and control. The government’s suspension of the Racial Discrimination Act raised further cause for concern. Township leases were compulsorily acquired over Aboriginal-owned land by the Commonwealth for a five-year period. And the permit system administered by Aboriginal land councils to control access to Aboriginal land was revoked. Medical teams were flown in to conduct compulsory health checks on children. Signs were posted declaring bans on alcohol and pornography in township areas. Income management was applied to all community residents receiving welfare payments, and income support payments were linked to satisfactory school attendance. The successful Community Development Employment Projects program was abolished, and employees were forced onto unemployment benefits. The police presence was increased in prescribed communities. And customary law was no longer allowed to be considered in bail applications and sentencing in criminal court cases.

via https://theconversation.com/ten-years-on-its-time-we-learned-the-lessons-from-the-failed-northern-territory-intervention-79198

A Three-Day Expedition To Walk Across Paris Entirely Underground

adventure, Paris, catacombs, cataphiles, urban-mobility, traverse, underground

To wander through the catacombs is to feel yourself inside of a mystery novel, full of false walls and trapdoors and secret chutes, each leading to another hidden chamber, containing another surprise. Down one passageway, you might find a chamber containing a sprawling Boschian mural that cataphiles had been gradually embellishing for decades; down another, you might see a life-size sculpture of a man half inside a stone wall, as though stepping in from the beyond; down yet another, you might encounter a place that upends your very sense of reality. In 2004, a squadron of cataflics on patrol in the quarries broke through a false wall, entered a large, cavernous space, and blinked in disbelief. It was a movie theater. A group of cataphiles had installed stone-carved seating for twenty people, a large screen, and a projector, along with at least three phone lines. Adjacent to the screening room were a bar, lounge, workshop, and small dining room. Three days later, when the police returned to investigate, they found the equipment dismantled, the space bare, except for a note: “Do not try to find us.”

via https://longreads.com/2019/03/13/a-three-day-expedition-to-walk-across-paris-entirely-underground/

On myths of smart technology, complex infrastructural networks and ecological crisis: @anabjain will be in conversation with…

IFTTT, Twitter, Superflux


(via http://twitter.com/Superflux/status/1106161062183923714)

“This final panorama embodies what made our Opportunity rover such a remarkable mission of exploration and discovery,” said…

NASA, Mars, Opportunity, panorama, photography, 2019

“This final panorama embodies what made our Opportunity rover such a remarkable mission of exploration and discovery,” said Opportunity project manager John Callas of NASA’s Jet Propulsion Laboratory in Pasadena, California. “To the right of center you can see the rim of Endeavour Crater rising in the distance. Just to the left of that, rover tracks begin their descent from over the horizon and weave their way down to geologic features that our scientists wanted to examine up close. And to the far right and left are the bottom of Perseverance Valley and the floor of Endeavour Crater, pristine and unexplored, waiting for visits from future explorers.”

(via https://www.jpl.nasa.gov/news/news.php?feature=7348 )

The reason autopilots work well for flight control (since 1914!) is that high complexity and low complexity times are (normally)…

IFTTT, Twitter, yaneerbaryam


(via http://twitter.com/yaneerbaryam/status/1105633448872591361)

Project: start, restart, plan, replan, repair, finish, review, refactor, abandon, backburner Habit: start, rehearse, refine,…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1105623516664090624)

You need 90% energy in processes that won’t terminate till you die, but aren’t habits or projects. Each also has an associated…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1105348647959461889)

The “Tragedy of the Commons” was invented by a white supremacist based on a false history, and it’s toxic bullshit

mostlysignssomeportents:

In a brilliant Twitter thread, UCSB political scientist Matto Mildenberger recounts the sordid history of Garrett Hardin’s classic, widely cited 1968 article “The Tragedy of the Commons,” whose ideas are taught to millions of undergrads, and whose precepts are used to justify the privatization of public goods as the only efficient way to manage them.

Hardin’s paper starts with a history of the English Commons – publicly held lands that were collectively owned and managed – and the claim that commons routinely fell prey to the selfish human impulse to overgraze one’s livestock on public land (and that even non-selfish people would overgraze their animals, knowing that their more-selfish neighbors would do so even if they didn’t).

But this isn’t what actually happened to the Commons: they were stable and well-managed until other factors (e.g. rich people trying to acquire even more land) destabilized them.

Hardin wasn’t just inventing false histories in a vacuum. He was, personally, a nasty piece of work: a white supremacist and eugenicist, and the Tragedy of the Commons paper is shot through with this vile ideology, arguing that poor people should not be given charity lest they breed beyond their means (Hardin also campaigned against food aid). Hardin was a director of the Federation for American Immigration Reform and the white nationalist Social Contract Press, and co-founded anti-immigrant groups like Californians for Population Stabilization and The Environmental Fund.

Mildenberger argues that Hardin was a trumpist before Trump: He served on the board of the Federation for American Immigration Reform (FAIR), whose talking points often emerge from Trump’s mouth.

(Hardin quotes that didn’t make it into his seminal paper: “Diversity is the opposite of unity, and unity is a prime requirement for national survival” and “My position is that this idea of a multiethnic society is a disaster…we should restrict immigration for that reason.”)

As Mildenberger points out, this isn’t a case where a terrible person had some great ideas that outlived them: Hardin’s Tragedy of the Commons was a piece of intellectual fraud committed in service to his racist, eugenicist ideology.

What’s worse: the environmental movement elevates Hardin to sainthood, whitewashing his racism and celebrating “The Tragedy of the Commons” as a seminal work of environmental literature. But Hardin is no friend of the environment: his noxious cocktail of racism and false history is used to move public lands into private ownership or stewardship, (literally) paving the way for devastating exploitation of those lands.

By contrast, consider Nobelist Elinor Ostrom’s Governing the Commons, whose groundbreaking insights on the management of common resources are a prescription for a better, more prosperous, more egalitarian future.

https://boingboing.net/2019/03/07/scientific-fraud.html

Word of the day: “micro-season” - in the classical Japanese calendar the year is divided into 72 five-day micro-seasons or kō….

IFTTT, Twitter, RobGMacfarlane


(via http://twitter.com/RobGMacfarlane/status/1105000494244814849)

Due to blatant peer pressure, I’ve accepted the challenge by @changeist to post the covers of 7 books that I love & recommend: …

IFTTT, Twitter, honorharger


(via http://twitter.com/honorharger/status/1104590576186417153)

Review: DARK EMU

m1k3y:

Dark Emu by Bruce Pascoe is a powerful, compelling work that achieves its dual aims: showing just how complex and well-developed the civilisation managing the Australian continent was prior to European contact and subsequent colonisation, and the lengths that have been taken to erase this vital pre-history from the collective minds of the current occupying Australian civilisation.

It brings…

View On WordPress

There’s a family of interesting related arguments: “You are not enough people” (Vonnegut theory of marital conflict) “Reality…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1104523976603185153)

Inexplicably I’ve accepted the challenge by @akrishnan23 to post the covers of 7 books that I love/recommend: no explanations,…

IFTTT, Twitter, changeist


(via http://twitter.com/changeist/status/1104434888487383040)

It takes a bot to know one?

lewisandquark:

A couple of weeks ago, I wrote about GPT-2, a text-generating algorithm whose huge size and long-term analysis abilities mean that it can generate text with an impressive degree of coherence. So impressive, in fact, that its programmers at OpenAI have only released a mini version of the model for now, worried that people may abuse the full-size model’s easy-to-generate, almost-plausibly-human text. 

(below: some text generated by mini-GPT-2, in response to the prompt in italics)

This was a fantastic recipe for chocolate cake with raspberry sauce! I only made a couple of changes to the recipe. First, I added vanilla candles instead of meringues for a more mild and exotic fragrance. Once again, I only used 1 tsp of vanilla syrup for clarity. Second, the chocolate cake whipped cream was tempered by an additional 1 tsp of canola oil. The regular vegan whipped cream is soothing and makes it pleasing to the hungry healthiest person I know!

In the meantime, as OpenAI had hoped, people are working on ways to automatically detect GPT-2’s text. Using a bot to detect another bot is a strategy that can work pretty well for detecting fake logins, video, or audio. And now, a group from MIT-IBM Watson AI lab and Harvard NLP has come up with a way of detecting fake text, using GPT-2 itself as part of the detection system.

The idea is fairly simple: GPT-2 is better at predicting what a bot will write than what a human will write. So if GPT-2 is great at predicting the next word in a bit of text, that text was probably written by an algorithm - maybe even by GPT-2 itself.
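
In code, that test is roughly: run the passage through GPT-2 and, at each position, check how highly the model ranked the word that actually came next. Here’s a minimal sketch of that scoring step, assuming the Hugging Face transformers library and the small public GPT-2 model (GLTR’s own implementation differs in its details):

# Score each token of a passage by how highly GPT-2 ranked it among all
# possible next tokens. Mostly low ranks (the model saw the words coming)
# suggest bot-written text; lots of high-rank surprises look more human.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    ranks = []
    for pos in range(ids.shape[1] - 1):
        actual_next = ids[0, pos + 1]
        # Where did the actual next token fall in the model's sorted predictions?
        order = torch.argsort(logits[0, pos], descending=True)
        rank = (order == actual_next).nonzero().item()
        ranks.append((tokenizer.decode(actual_next), rank))
    return ranks

for token, rank in token_ranks("This was a fantastic recipe for chocolate cake"):
    print(f"{token!r}: rank {rank}")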

There’s a web demo that they’re calling Giant Language model Test Room (GLTR), so naturally I decided to play with it.

First, here’s some genuine text generated by GPT-2 (the full-size model, thanks to the OpenAI team being kind enough to send me a sample). Green words are ones that GLTR thought were very predictable, yellow and red words are less predictable, and purple words are ones the algorithm definitely didn’t see coming. There are a couple of mild surprises here, but mostly the AI knew what would be generated. Seeing all this green, you’d know this text is probably AI-generated.

HERMIONE: So, you told him the truth?
Snape: Yes.
HARRY: Is it going to destroy him? You want him to be able to see the truth.
Snape: [turning to her] Hermione, I-I-I'm not looking for acceptance.
HARRY: [smiling] No, it's-it's good it doesn't need to be.
Snape: I understand.
	[A snake appears and Snape puts it on his head and it appears to do the talking. 	It says 'I forgive you.']
HARRY: You can't go back if you don't forgive.
Snape: [sighing] Hermione.
HARRY: Okay, listen.
Snape: I want to apologize to you for getting angry and upset over this.
HARRY: It's not your fault.
HARRY: That's not what I meant to imply.
	[Another snake appears then it says 'And I forgive you.']
HERMIONE: And I forgive you.
Snape: Yes.

Here, on the other hand, is how GLTR analyzed some human-written text, the opening paragraph of the Murderbot Diaries. There’s a LOT more purple and red. It found this human writer to be more unpredictable.

I could have become a mass murderer after I hacked my governor module, but then I realized I could access the combined feed of entertainment channels carried on the company satellites. It had been well over 35,000 hours or so since then, with still not much murdering, but probably, I don’t know, a little under 35,000 hours of movies, serials, books, plays, and music consumed. As a heartless killing machine, I was a terrible failure.

But can GLTR detect text generated by another AI, not just text that GPT-2 generates? It turns out it depends. Here’s text generated by another AI, the Washington Post’s Heliograf algorithm that writes up local sports and election results into simple but readable articles. Sure enough, GLTR found Heliograf’s articles to be pretty predictable. Maybe GPT-2 had even read a lot of Heliograf articles during training.

[image: GLTR’s analysis of a Heliograf-generated article]

However, here’s what it did with a review of Avengers: Infinity War that I generated using an algorithm Facebook trained on Amazon reviews. It’s not an entirely plausible review, but to GLTR it looks a lot more like the human-written text than the AI-generated text. Plenty of human-written text scores in this range.

The Avengers: Infinity War is a movie that should be viewed on its own terms, and not a tell-all about The Hulk.  I have always loved the guys that played Michael Myers, and of all the others like Angel and Griffin, Kim back to Bullwinkle, and Edward James Olmos as the Lion. Special mention must go to the performances of Robert De Niro and Anthony Hopkins.  Just as I would like to see David Cronenberg in a better role, he is a treat the way he is as Gimli.Also there is the evil genius Bugs Bunny and the amazing car chase scene that has been hailed as THE Greatest Tank Trio of All Time ever (or at least the last one).  With Gary Oldman and Robert Young on the run and almost immediate next day in the parking lot to be his lover, he tries to escape in a failed attempt at a new dream.  It was a fantastic movie, full of monsters and beasts, and makes the animated movies seem so much more real.

And here’s how GLTR rated another Amazon review by that same algorithm. A human might find this review to be a bit suspect, but, again, the AI didn’t score this as bot-written text.

The Harry Potter File, from which the previous one was based (which means it has a standard size liner) weighs a ton and this one is huge! I will definitely put it on every toaster I have in the kitchen since, it is that good.This is one of the best comedy movies ever made. It is definitely my favorite movie of all time. I would recommend this to ANYONE!

What about an AI that’s really, really bad at generating text? How does that rate? Here’s some output from a neural net I trained to generate Dungeons and Dragons biographies. Whatever GLTR was expecting, it wasn’t fuse efforts and grass tricks.

instead was a drow, costumed was toosingly power they are curious as his great embercrumb, a fellow knight of the area of the son, and the young girl is the agents guild, as soon as she received astering the grass tricks that he could ask to serve his words away and he has a disaster of the spire, but he was super connie couldn't be resigned to the church, really with the fuse effort to fit the world, tempting into the church of the moment of the son of the gods, there was what i can contrive that she was born into his own life, pollaning the bandit in the land. the ship, i decided to fight with the streets. he met the ship without a new priest of pelor like a particularly bad criters but was assigned as well.as he was sat the social shape and his desire over the river and a few ways that had been seriously into the fey priest. abaewin was never taken in the world. he had told me this was lost for it, for reason, and i cant know what was something good clear, but she had attack them 15, they were divided by a visators above the village, but he went since i was so that he stayed. but one day, she grew up from studying a small lion.

But I generated that biography with the creativity setting turned up high, so my algorithm was TRYING to be unpredictable. What if I turned the D&D bio generator’s creativity setting very low, so it tries to be predictable instead? Would that make it easier for GLTR to detect? Only slightly. It still looks like unpredictable human-written text to GLTR.
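
(The “creativity setting” is what’s usually called sampling temperature. A minimal sketch of how that knob typically works, not my bio generator’s exact code: divide the network’s output scores by the temperature before sampling, so low values concentrate probability on the top guesses and high values flatten the distribution toward long shots.)

import torch

def sample_next_token(logits, temperature=1.0):
    # logits: the model's raw scores for every possible next token.
    # Low temperature -> predictable top picks; high temperature -> more
    # "creative" long-shot tokens like fuse efforts and grass tricks.
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()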

he is a successful adventurers of the city and the lady of the undead who would be able to use his own and a few days in the city of the city of bandits. he was a child to be a deadly in the world and the goddess of the temple of the city of waterdeep. he was a child for a few hours and the incident of the order of the city and a few years of research. she was a child in a small village and was invited to be a good deal in the world and in the world and the other children of the tribe and the elven village and the young man was exiled in the world. he was a child to the forest to the local tavern and a human bard, a human bard in a small town of his family and the other two years of a demon in the world.

GLTR is still pretty good at detecting text that GPT-2 generates - after all, it’s using GPT-2 itself to do the predictions. So, it’ll be a useful defense against GPT-2 generated spam.

But, if you want to build an AI that can sneak its text past a GPT-2 based detector, try building one that generates laughably incoherent text. Apparently, to GPT-2, that sounds all too human.

For more laughably incoherent text, I trained a neural net on the complete text of Black Beauty, and generated a long rambling paragraph about being a Good Horse. To read it, and GLTR’s verdict, enter your email here and I’ll send it to you.

Towards a general theory of “adversarial examples,” the bizarre, hallucinatory motes in machine learning’s all-seeing eye

mostlysignssomeportents:

For several years, I’ve been covering the bizarre phenomenon of "adversarial examples" (AKA "adversarial perturbations"), these being often tiny changes to data that can cause machine-learning classifiers to totally misfire: imperceptible squeaks that make speech-to-text systems hallucinate phantom voices, or tiny shifts to a 3D image of a helicopter that make image-classifiers hallucinate a rifle.
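
To make the attack concrete: here’s a minimal sketch of the classic white-box “fast gradient sign” trick for crafting one of these perturbations (an illustrative textbook example, not any particular paper’s method). You nudge every pixel a tiny step in the direction that increases the classifier’s loss, so the image looks unchanged to a human but the prediction can flip.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    # image: a (1, C, H, W) tensor scaled to [0, 1]; true_label: a (1,) tensor.
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()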

A friend of mine who is a very senior cryptographer of longstanding esteem in the field recently changed roles to managing information security for one of the leading machine learning companies: he told me that he thinks all machine-learning models may have lurking adversarial examples and that it might be impossible to eliminate these, meaning that any use of machine learning where the owners of the system are trying to do something that someone else wants to prevent might never be secure enough for use in the field – that is, we may never be able to make a self-driving car that can’t be fooled into mistaking a STOP sign for a go-faster sign.

What’s more, there are tons of use-cases that seem non-adversarial at first blush, but which have potential adversarial implications further down the line: think of how the machine-learning classifier that reliably diagnoses skin cancer might be fooled by an unethical doctor who wants to generate more billings, or nerfed down by an insurer that wants to avoid paying claims.

My MIT Media Lab colleague Joi Ito (previously) has teamed up with Harvard’s Jonathan Zittrain (previously) to teach a course on Applied Ethical and Governance Challenges in AI, and in reading the syllabus, I came across Motivating the Rules of the Game for Adversarial Example Research, a 2018 paper by a team of Princeton and Google researchers, which attempts to formulate a kind of unified framework for talking about and evaluating adversarial examples.

The authors propose a taxonomy of attacks, based on whether the attackers are using “white box” or “black box” approaches to the model (that is, whether they are allowed to know how the model works), whether their tampering has to be imperceptible to humans (think of the stop-sign attack – it works best if a human can’t see that the stop sign has been altered), and other factors.

It’s a fascinating paper that tries to make sense of the to-date scattershot adversarial example research. It may be that my cryptographer friend is right about the inevitability of adversarial examples, but this analytical framework goes a long way to helping us understand where the risks are and which defenses can or can’t work.

If this kind of thing interests you, you can check out the work that MIT Media Lab students are doing with Labsix, a student-only, no-faculty research group that studies adversarial examples.

https://boingboing.net/2019/03/08/hot-dog-or-not.html

Correcting the Record on the First Emoji Set

allthingslinguistic:

The Emojipedia blog has an important update in emoji history news!

Until now, Japanese phone carrier Docomo has most widely been credited as the originator of what we know as emoji today. It turns out that might not be the case, and today we are correcting the record.

SoftBank, the carrier that partnered with Apple to bring the iPhone to Japan in 2008, released a phone with support for 90 distinct emoji characters in 1997. For the first time, these are now available on Emojipedia.

The 90 emojis from SoftBank in 1997 predate the set of 176 emojis released by Docomo in 1999, which until now have most commonly been cited (including by Emojipedia) as being the first.

Not only was the 1997 SoftBank emoji set released earlier than the first known date of the Docomo emoji set (in “1998 or 1999”); one of the most iconic emoji characters, now encoded as 💩 U+1F4A9 PILE OF POO in the Unicode Standard, also originated in this release.

Unless or until we find evidence that Docomo had an emoji set available prior to this release, we hereby issue a correction that the original emoji set is from SoftBank in Japan in 1997, with designer/s unknown.

Read the whole post for more emoji history.


There exists a series of underground bunkers with no entrances or exits of any kind scattered across North America, each of…

IFTTT, Twitter, ThePatanoiac


(via http://twitter.com/ThePatanoiac/status/1103915020369313792)

ONLY THE BEGINNING OF ANOTHER STRANGENESS - what happens when AI meets the alien consciousnesses that already live amongst us? I…

IFTTT, Twitter, jamesbridle


(via http://twitter.com/jamesbridle/status/1103645179444162560)

I’m in the non-apocalyptic camp of climate change believers. I don’t believe even the worst case will make the world…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1103524374374580225)

I am SO thrilled to announce Atmospheric Memory, an epic new production by Rafael Lozano Hemmer (@errafael) curated by…

IFTTT, Twitter, Macroscopist


(via http://twitter.com/Macroscopist/status/1103647117963350017)

AI-created fine art could enliven galleries with visual aesthetics that humans couldn’t foresee, or it could become a…

IFTTT, Twitter, ibogost


(via http://twitter.com/ibogost/status/1103416245116911619)

Actually, lemme try mapping the 8 metaphors Mechanistic: GTD Brain: BASB/PKM Organism: Blitzkrieg model Culture: Improv theater…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1103408460769554437)

‘The documents were written on hundreds of strips of bamboo, about the size of chopsticks, that seemed to date from 2,500 years…

IFTTT, Twitter, justinpickard


(via http://twitter.com/justinpickard/status/1102843081706139648)

Something I’ve been meaning to say about The Tragedy of the Commons. Bear with me for a small thread on why our embrace of…

IFTTT, Twitter, mmildenberger


(via http://twitter.com/mmildenberger/status/1102604887223750657)

Went to a talk last week by someone in the Texas Bureau of Economic Geology & it’s safe to say that many in the oil & gas…

IFTTT, Twitter, PeterBrannen1


(via http://twitter.com/PeterBrannen1/status/1102671126851870721)

A summary of Octavia E. Butler’s advice for writers: 1. “Read omnivorously…” 2. “Forget about talent.” 3. "Write, every day,…

IFTTT, Twitter, tamaranopper


(via http://twitter.com/tamaranopper/status/1102043025780457477)

’[D]rugs do not impart wisdom…any more than the microscope alone gives knowledge.They provide the raw materials of wisdom&are…

IFTTT, Twitter, PeterSjostedtH


(via http://twitter.com/PeterSjostedtH/status/1102177814386753537)

Crop Yield Prediction Gold. Predictive systems are being heavily invested in by DARPA to predict social unrest via crop yield…

IFTTT, Twitter, FRAUD_la


(via http://twitter.com/FRAUD_la/status/1102129267780055045)

’The “electronic” society is a special society contained within the wider “geometric” society … the geometric society is a…

IFTTT, Twitter, PeterSjostedtH


(via http://twitter.com/PeterSjostedtH/status/1102007047044894720)

Documentary wants: Throbbing Gristle Crass KLF Adrian Sherwood and On U Sound 80s Liverpool Scene Napalm Death The Damned Warp…

IFTTT, Twitter, eops


(via http://twitter.com/eops/status/1101609811622457345)

Damn, my GTD game has completely fallen apart in the last couple of years. My workflow is a global quantum entangled state of…

IFTTT, Twitter, vgr


(via http://twitter.com/vgr/status/1101928398329339904)

’Government funding and tourism revenues … have their limitations. But the mayor of Easter Island has come up with an innovative…

IFTTT, Twitter, justinpickard


(via http://twitter.com/justinpickard/status/1101763119339241472)

My touchpoint is that the Air Force measured many dimensions of lots of people and realised there is no ‘average’ human they…

IFTTT, Twitter, debcha


(via http://twitter.com/debcha/status/1101540289490112512)