Posts tagged classification

Unidentifiable fossils: palaeontological problematica

Science, palaeontology, classification, problematica, species, evolution

There is a detailed vocabulary used to describe organisms which defy classification and a system of nomenclature to denote confidence limits on probable or speculative affinities, but they are generally grouped together as “problematica”. A handy grab-bag of misfits that have exasperated or eluded scientists, ready for future generations to have a go at. In museums, problematica specimens reside in drawers and cabinets equivalent to the ubiquitous drawer of odds and sods that most people have in the kitchen.
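The "system of nomenclature to denote confidence limits" mentioned above is known as open nomenclature. As a hedged illustration (the qualifier meanings are standard taxonomic usage; the `label` helper and the `Hallucigenia` example are my own illustrative choices, not from the article), it can be sketched as a lookup table:

```python
# Standard "open nomenclature" qualifiers taxonomists use to hedge
# identifications of problematic specimens.
OPEN_NOMENCLATURE = {
    "cf.": "confer -- compare with; identification probable but unconfirmed",
    "aff.": "affinis -- related to, but distinct from, the named species",
    "?": "placement in this taxon is doubtful",
    "sp. indet.": "species indeterminate -- cannot be identified to species",
    "incertae sedis": "of uncertain taxonomic placement",
}

def label(genus, qualifier, species="sp."):
    """Format a hedged identification, e.g. 'Hallucigenia cf. sparsa'."""
    assert qualifier in OPEN_NOMENCLATURE, f"unknown qualifier: {qualifier}"
    return f"{genus} {qualifier} {species}"

# A classic problematicum, labelled with a hedged identification.
tag = label("Hallucigenia", "cf.", "sparsa")
```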

via https://www.theguardian.com/science/2018/jul/26/unidentified-fossils-palaeontological-problematica

Research Blog: Wide & Deep Learning: Better Together with TensorFlow

deep-learning, wide-learning, machine-learning, generalisation, learning, classification

The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven’t seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can’t fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization? It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
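The key idea, jointly summing the logits of a wide linear model over sparse features and a deep network over embeddings, can be sketched in plain numpy. This is a minimal illustration of the architecture's shape, not Google's implementation; all dimensions and weights here are arbitrary, untrained placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 10       # number of distinct sparse (categorical) feature values
EMBED_DIM = 4    # embedding size feeding the deep tower
HIDDEN = 8       # hidden units in the deep tower

# Wide part: one weight per sparse feature value -- pure memorization.
w_wide = rng.normal(0, 0.1, VOCAB)

# Deep part: embed each feature value, then a small MLP -- generalization.
embeddings = rng.normal(0, 0.1, (VOCAB, EMBED_DIM))
W1 = rng.normal(0, 0.1, (EMBED_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 1))

def wide_and_deep_logit(feature_ids):
    """Joint logit for one example given its active sparse feature ids."""
    wide_logit = w_wide[feature_ids].sum()      # memorized per-feature weights
    emb = embeddings[feature_ids].mean(axis=0)  # pooled embedding
    hidden = np.maximum(emb @ W1, 0)            # ReLU layer
    deep_logit = float(hidden @ W2)
    # Joint training sums the two logits (it is not an ensemble of two
    # separately trained models).
    return wide_logit + deep_logit

def predict_proba(feature_ids):
    return 1.0 / (1.0 + np.exp(-wide_and_deep_logit(feature_ids)))

p = predict_proba([2, 5, 7])  # probability for an example with 3 active features
```

Because both towers feed one logit, a single loss backpropagates into the wide weights and the deep weights together, which is what lets the memorized exceptions coexist with the generalized rules.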

via https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html

DeepFool: A simple and accurate method to fool deep neural networks

neural-networks, DeepFool, DeepDream, CNN, classification, machine-learning, deep-learning

State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. In this paper, we fill this gap and propose the DeepFool framework to efficiently compute perturbations that fool deep networks and thus reliably quantify the robustness of arbitrary classifiers.
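For an affine (linear) binary classifier the DeepFool projection has a closed form: the minimal L2 perturbation moves the input straight to the decision boundary along the weight vector. A minimal sketch of that base case (the deep-network version iterates this linearized step; the weights and input below are arbitrary examples):

```python
import numpy as np

def deepfool_linear(x, w, b, overshoot=1e-4):
    """Minimal L2 perturbation flipping a binary affine classifier
    f(x) = sign(w.x + b). For the affine case the projection is exact:
    r = -f(x) * w / ||w||^2 (the shortest vector to the hyperplane)."""
    f = w @ x + b
    r = -f * w / (w @ w)
    return (1 + overshoot) * r  # tiny overshoot to actually cross the boundary

w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])

r = deepfool_linear(x, w, b)
flipped = np.sign(w @ (x + r) + b) != np.sign(w @ x + b)
```

For a deep network, DeepFool linearizes the classifier around the current point, applies this step, and repeats until the predicted label changes; the accumulated perturbation norm is then the robustness estimate.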

via http://gitxiv.com/posts/iZpWrvKdXmtRaQusa/deepfool-a-simple-and-accurate-method-to-fool-deep-neural

Why Do Taxonomists Write the Meanest Obituaries?

biology, taxonomy, classification, ICN, ICZN, history, openness, names

This tension between freedom and stability was long ago formalized in two sets of official and binding rules: the International Code of Zoological Nomenclature (ICZN), which deals with animals, and the International Code of Nomenclature for algae, fungi, and plants (ICN). Periodically updated by committees of working taxonomists, these documents set out precise, legalistic frameworks for how to apply names both to species and to higher taxa. (The animal and plant codes operate independently, which means that an animal can share a scientific name with a plant, but not with another animal, and vice versa.) While this freedom opens up a valuable space for amateur contributions, it also creates a massive loophole for unscrupulous, incompetent, or fringe characters to wreak havoc. That’s because the Principle of Priority binds all taxonomists into a complicated network of interdependence; just because a species description is wrong, poorly conceived, or otherwise inadequate, doesn’t mean that it isn’t a recognized part of taxonomic history. Whereas in physics, say, “unified theories” scrawled on napkins and mailed in unmarked envelopes end up in trashcans, biologists, regardless of their own opinions, are bound to reckon with the legacy of anyone publishing a new name. Taxonomists are more than welcome to deal with (or “revise”) these incorrect names in print, but they can’t really ignore them.

via http://nautil.us/issue/35/boundaries/why-do-taxonomists-write-the-meanest-obituaries

The NSA’s SKYNET program may be killing thousands of innocent people

Ars Technica, SKYNET, machine learning, warfare, murder, classification, NSA, CIA, USA, metadata

Many facts about the SKYNET program remain unknown, however. For instance, do analysts review each mobile phone user’s profile before condemning them to death based on metadata? How can the US government be sure it is not killing innocent people, given the apparent flaws in the machine learning algorithm on which that kill list is based? “On whether the use of SKYNET is a war crime, I defer to lawyers,” Ball said. “It’s bad science, that’s for damn sure, because classification is inherently probabilistic. If you’re going to condemn someone to death, usually we have a ‘beyond a reasonable doubt’ standard, which is not at all the case when you’re talking about people with ‘probable terrorist’ scores anywhere near the threshold. And that’s assuming that the classifier works in the first place, which I doubt because there simply aren’t enough positive cases of known terrorists for the random forest to get a good model of them.”
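Ball's objection is, at bottom, a base-rate argument: when true positives are vanishingly rare, even a classifier with a low false-positive rate produces a "hit list" dominated by innocents. A quick back-of-the-envelope sketch (every number below is hypothetical, chosen only to illustrate the arithmetic, not taken from the article):

```python
# Base-rate arithmetic for a rare-positive classifier.
# All figures are hypothetical illustrations.
population = 55_000_000        # hypothetical monitored population
true_positives_total = 100     # hypothetical number of actual positives
base_rate = true_positives_total / population

true_positive_rate = 0.5       # hypothetical 50% recall
false_positive_rate = 0.001    # hypothetical 0.1% false-alarm rate

flagged_true = population * base_rate * true_positive_rate
flagged_false = population * (1 - base_rate) * false_positive_rate

# Precision: of everyone the classifier flags, what fraction is a true positive?
precision = flagged_true / (flagged_true + flagged_false)
```

With these numbers the flagged list contains roughly a thousand false positives for every true one, which is why "probable terrorist scores anywhere near the threshold" cannot support a beyond-a-reasonable-doubt standard.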

via http://arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/

Inceptionism: Going Deeper into Neural Networks

image classification, google research, neural networks, feedback, perception, classification

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer maybe looks for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations–these neurons activate in response to very complex things such as entire buildings or trees. One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana.
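The "start with noise, tweak towards a class" procedure is gradient ascent on the input with the weights frozen. A toy sketch of that update rule, using a single random linear layer as a stand-in for a trained network (a real Inceptionism run uses a deep CNN and backpropagates through it, but the input-space update is the same idea; all dimensions and weights here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)

PIXELS, CLASSES = 16, 3
# Toy stand-in for a trained classifier: one linear layer over a
# flattened "image". Its weights stay frozen throughout.
W = rng.normal(0, 1, (PIXELS, CLASSES))

def class_score(img, target):
    """Pre-softmax logit of the target class."""
    return img @ W[:, target]

def dream(target, steps=100, lr=0.1):
    """Gradient ascent on the INPUT image to maximize a class logit."""
    img = rng.normal(0, 0.01, PIXELS)   # start from near-random noise
    for _ in range(steps):
        # For a linear layer, d(logit)/d(img) is just the weight column;
        # a deep net would get this gradient via backpropagation.
        img += lr * W[:, target]
        img = np.clip(img, -1, 1)       # keep pixels in a valid range
    return img

noise = rng.normal(0, 0.01, PIXELS)     # an untouched noise image, for comparison
banana_ish = dream(target=0)            # image the toy model "sees" as class 0
```

After the loop, the synthesized image scores far higher on the target class than raw noise does, which is exactly the visualization trick the post describes.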

via http://googleresearch.blogspot.be/2015/06/inceptionism-going-deeper-into-neural.html