To celebrate Halloween we trained a net that creates endless vignettes about murdering humans, torture, necrophilia—kinda funny and campy like Evil Dead—using one of the greatest datasets ever: Cannibal Corpse lyrics
😵🗡️🤖🔪😵🗡️🤖
Neural network generating death metal, via livestream 24/7.
🤖Audio generated with modified SampleRNN trained on Cannibal Corpse
🤖Lyrics generated with pretrained 117M GPT2 fine-tuned on Cannibal Corpse
🤖Meat images generated with BigGAN interpolations in the #butchershop latent space
🤖You can generate all kinds of gross stuff on artbreeder https://artbreeder.com/i?k=ff84821d51…
🤖Vocals separated using Wave-U-Net (yup it separates death growls)
🤖Read more about our scientific research into eliminating humans from music https://arxiv.org/abs/1811.06633
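For a sense of what those BigGAN interpolations involve, here is a minimal sketch of latent-space interpolation, assuming a generic pretrained generator; the `generate_image` call and the 128-dimensional latent size are placeholders, not the actual Dadabots setup.

```python
# Minimal sketch of latent-space interpolation between two random points.
# `generate_image` is a placeholder for any pretrained generator (e.g. a
# BigGAN-style model); the 128-dim latent size is an assumption.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between latent vectors z0 and z1."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_start, z_end = rng.standard_normal(128), rng.standard_normal(128)

# Walk the latent space in 16 steps; feed each point to the generator.
frames = [slerp(z_start, z_end, t) for t in np.linspace(0.0, 1.0, 16)]
# images = [generate_image(z) for z in frames]  # placeholder generator call
```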
Browser-based idle game by Frank Lantz lets you control an AI that runs a paperclip company.
There are no instructions, but if you are familiar with the game AdVenture Capitalist you will probably work things out (and get drawn in the same way, lured by the growing numbers and impatient to level things up).
GPT-2 displays a broad set of capabilities, including the ability to generate conditional synthetic text samples of unprecedented quality, where we prime the model with an input and have it generate a lengthy continuation. In addition, GPT-2 outperforms other language models trained on specific domains (like Wikipedia, news, or books) without needing to use these domain-specific training datasets. On language tasks like question answering, reading comprehension, summarization, and translation, GPT-2 begins to learn these tasks from the raw text, using no task-specific training data. While scores on these downstream tasks are far from state-of-the-art, they suggest that the tasks can benefit from unsupervised techniques, given sufficient (unlabeled) data and compute.
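The priming-and-continuation behaviour described above is easy to try with the released 117M weights. A minimal sketch using the Hugging Face transformers library (a tooling assumption, not what the quoted post used):

```python
# Prime GPT-2 with an input and let it generate a continuation.
# Uses the Hugging Face `transformers` wrapper around the released 117M
# ("gpt2") weights; the library choice is an assumption, not from the source.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a lengthy continuation conditioned on the prompt.
output_ids = model.generate(
    **inputs,
    max_length=120,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```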
In Hungary, Latvia, and Greece, travelers will be given an automated lie-detection test—by an animated AI border agent. The system, called iBorderCtrl, is part of a six-month pilot led by the Hungarian National Police at four different border crossing points. “We’re employing existing and proven technologies—as well as novel ones—to empower border agents to increase the accuracy and efficiency of border checks,” project coordinator George Boultadakis of European Dynamics in Luxembourg told the European Commission. “iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit.”
Some of these skilled lawyers did question whether their profession could ever entirely trust automation to make skilled legal decisions. A small number suggested they would be sticking to “reliable” manual processes for the immediate future. However, most of the participants stressed that high-volume and low-risk contracts took up too much of their time, and felt it was incumbent on lawyers to automate work when, and where, possible. For them, the study was also a simple, practical demonstration of a not-so-scary AI future. Still, the lawyers stressed that undue weight should not be put on legal AI alone. One participant, Justin Brown, emphasized that humans must use new technology alongside their lawyerly instincts. He says: “Either working alone is inferior to the combination of both.”
Our notions of what it means to have a mind have too often been governed by assumptions about what it means to be human. But there is no necessary logical connection between the two. There is often an assumption that a digital mind will either be, or aspire to be, like our own. We can see this at play in artificial beings from Pinocchio to the creature in Mary Shelley’s Frankenstein to 2001: A Space Odyssey’s HAL to Data from Star Trek: The Next Generation. But a machine mind won’t be a human-like mind — at least not precisely, and not intentionally. Machines are developing a separate kind of interaction and interrelation with the world, which means they will develop new and different kinds of minds, minds to which human beings cannot have direct access. A human being will never know exactly what it’s like to be a bot, because we do not inhabit their modes of interaction.
Heraud researched the scourges of agriculture: hypoxic dead zones in the Gulf of Mexico and Baltic Sea, the colony collapse of bees, soil degradation, and human health problems from allergies to cancers. “Everything tied back to the blind, rampant, broadcast spraying of chemicals,” Heraud says. He and Redden figured they could teach machines to differentiate between crops and weeds, then eliminate the weeds mechanically or with targeted doses of nontoxic substances. The two first considered hot foam, laser beams, electric currents, and boiling water. They’d market the robot to organic farmers, who spend heavily on chemical-free weeding methods including mechanical tillage, which can be both fuel-intensive and damaging to soil. After months of research, they faced a disappointing truth: There was no way around herbicides. “Turns out zapping weeds with electricity or hot liquid requires far more time and energy than chemicals—and it isn’t guaranteed to work,” Heraud says. Those methods might eliminate the visible part of a weed, but not the root. And pulling weeds with mechanical pincers is a far more time-intensive task for a robot than delivering microsquirts of poison. Their challenge became applying the chemicals with precision.
“The video, called “Alternative Face v1.1”, is the work of Mario Klingemann, a German artist. It plays audio from an NBC interview with Ms Conway through the mouth of Ms Hardy’s digital ghost. The video is wobbly and pixelated; a competent visual-effects shop could do much better. But Mr Klingemann did not fiddle with editing software to make it. Instead, he took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm. His computer spat it out automatically after being force fed old music videos of Ms Hardy. It is a recording of something that never happened.”
AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). However, the way these components are combined is novel and not exactly standard. In particular, AlphaGo uses an SL (supervised learning) policy to initialize the learning of an RL (reinforcement learning) policy that gets perfected with self-play, which they then estimate a value function from, which then plugs into MCTS that (somewhat surprisingly) uses the (worse, but more diverse) SL policy to sample rollouts. In addition, the policy/value nets are deep neural networks, so getting everything to work properly presents its own unique challenges (e.g. the value function is trained in a tricky way to prevent overfitting). On all of these aspects, DeepMind has executed very well. That being said, AlphaGo does not by itself use any fundamental algorithmic breakthroughs in how we approach RL problems.
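For a feel of how the policy and value nets plug into the tree search, here is an illustrative sketch of PUCT child selection, the rule AlphaGo-style MCTS uses to balance the value estimate against the policy prior; the Node layout and constants are placeholders, not DeepMind's code.

```python
# Illustrative PUCT child selection: combine the mean value estimate Q with
# an exploration bonus driven by the policy network's prior P. Constants and
# the Node layout are placeholders, not DeepMind's implementation.
import math
from dataclasses import dataclass

C_PUCT = 1.5  # exploration constant (illustrative value)

@dataclass
class Node:
    prior: float                 # P(s, a) from the policy net
    visit_count: int = 0         # N(s, a)
    value_sum: float = 0.0       # sum of backed-up values

    def q(self) -> float:        # mean action value Q(s, a)
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(children: dict) -> tuple:
    """Pick the move maximizing Q + U, where U grows with P and shrinks with N."""
    total_visits = sum(c.visit_count for c in children.values())
    def puct(c: Node) -> float:
        u = C_PUCT * c.prior * math.sqrt(total_visits) / (1 + c.visit_count)
        return c.q() + u
    return max(children.items(), key=lambda kv: puct(kv[1]))

# Toy usage: two candidate moves with priors from a policy net.
children = {"move_a": Node(prior=0.7, visit_count=10, value_sum=4.0),
            "move_b": Node(prior=0.3, visit_count=2, value_sum=1.5)}
print(select_child(children)[0])  # the less-visited but promising move wins
```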
The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.
Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.
What we’ve found, over and over, is an industry willing to invest endless resources chasing “delight” — but when put up to the pressure of real life, the results are shallow at best, and horrifying at worst. Consider this: Apple has known Siri had a problem with crisis since it launched in 2011. Back then, if you told it you were thinking about shooting yourself, it would give you directions to a gun store. When bad press rolled in, Apple partnered with the National Suicide Prevention Lifeline to offer users help when they said something Siri identified as suicidal. It’s not just crisis scenarios, either. Hell, Apple Health claimed to track “all of your metrics that you’re most interested in” back in 2014 — but it didn’t consider period tracking a worthwhile metric for over a year after launch.
Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products due to the tremendous complexity of tasks they have to carry out, as well as their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will they become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field under study for millennia, can we ever suppose that our current, subjective ethical notions should be taken for granted, and used for products that will make decisions on our behalf in real-world situations?
Nearly all of the most valuable companies throughout history derived their value from strong network effects. If there is one motif in American economic history it is network effects. Every railroad made the railroad network more valuable, every telephone made the telephone network more valuable, and every Internet user made the Internet network more valuable. But no hedge fund has ever harnessed network effects. Negative network effects are too pervasive in finance, and they are the reason that there is no one hedge fund monopoly managing all the money in the world. For perspective, Bridgewater, the biggest hedge fund in the world, manages less than 1% of the total actively managed money. Facebook, on the other hand, with its powerful network effects, has a 70% market share in social networking. The most valuable hedge fund in the 21st century will be the first hedge fund to bring network effects to capital allocation.
With transfer learning, we take a pretrained model that was trained on a large, readily available dataset for a completely different task (with the same kind of input but a different output). We then look for layers whose outputs are reusable features and use the output of such a layer as input features to train a much smaller network with far fewer parameters. This smaller network only needs to learn the relations specific to your problem, having already learnt general patterns in the data from the pretrained model. This is how a model trained to detect cats can be reused to reproduce the work of Van Gogh.
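A minimal sketch of that recipe, using PyTorch and a ResNet-18 backbone purely as illustrative choices: freeze the pretrained layers, reuse their output as features, and train only a small new head.

```python
# Transfer learning sketch: reuse a pretrained backbone as a frozen feature
# extractor and train only a small task-specific head. PyTorch/torchvision
# and the ResNet-18 backbone are illustrative choices, not from the source.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False           # keep the pretrained features fixed

num_features = backbone.fc.in_features    # output size of the reused layer
backbone.fc = nn.Linear(num_features, 5)  # new head for a 5-class problem

# Only the new head's parameters are updated during training.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```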
Over the course of the next few months, we will be launching a prototype of the research already completed in statistical fact checking and claim detection. So far, our work has been in identifying claims in text by the named entities they contain, determining which economic statistics those claims are about, and verifying whether they are “fact-checkable”. At the moment, we can only check claims that can be validated by known statistical databases — we built our system on Freebase (a collaborative fact database built in part from Wikipedia), and will be migrating it to new databases such as EUROSTAT and the World Bank Databank.
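As a rough illustration of entity-based claim detection (spaCy and the heuristic below are assumptions, not Full Fact's actual pipeline), one could flag sentences that pair a named entity with a figure as candidates for checking against a statistical database:

```python
# Rough sketch of entity-based claim detection: flag sentences that pair a
# named entity with a quantity as "fact-checkable" candidates. spaCy and
# this heuristic are illustrative assumptions, not Full Fact's pipeline.
import spacy

nlp = spacy.load("en_core_web_sm")

def checkable_claims(text: str):
    """Yield sentences containing both an organisation/place and a figure."""
    doc = nlp(text)
    for sent in doc.sents:
        labels = {ent.label_ for ent in sent.ents}
        has_subject = labels & {"ORG", "GPE", "NORP"}
        has_figure = labels & {"PERCENT", "MONEY", "CARDINAL", "QUANTITY"}
        if has_subject and has_figure:
            yield sent.text

text = ("Unemployment in the UK fell to 4.1% last quarter. "
        "The weather was lovely all week.")
print(list(checkable_claims(text)))  # only the first sentence qualifies
```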
As we make algorithms that can improve themselves — stumbling first steps on the road to artificial intelligence — how should we regulate them? Should we require them to tell us their every step […] Or should we let the algorithms run unfettered? Nara Logics’ Jana Eggers […] suggests that a good approach is to have algorithms explain themselves. After all, humans are terrible at tracking their actions, but software has no choice but to do so. Each time a machine learning algorithm generates a conclusion, it should explain why it did so. Then auditors and regulators can query the justifications to see if they’re allowed. On the surface, this seems like a good idea: Just turn on logging, and you’ll have a detailed record of why an algorithm chose a particular course of action, or classified something a certain way. […] There’s a tension between transparent regulation of the algorithms that rule our futures (having them explain themselves to us so we can guide and hone them) and the speed and alacrity with which an unfettered algorithm can evolve, adapt, and improve better than others. Is he who hesitates to unleash an AI without guidance lost? There’s no simple answer here. It’s more like parenting than computer science: Giving your kid some freedom, and a fundamental moral framework, and then randomly checking in to see that the kid isn’t a jerk. But simply asking to share the algorithm won’t give us the controls and changes we’re hoping to see.
From the very beginning, since Archillect was made to find images by following a certain relational structure, I had to trust that Archillect would have a certain character in what she found and shared, which would create an almost personal profile. This is the reason I wanted to present Archillect as a person rather than a random bot. As people perceived Archillect as a character, a personality, they also contributed to the project through the ways they interacted with the project as a result of this perception. This was important to me.
For the foreseeable future, “artificial intelligence” is really just a term to describe advanced analysis of massive datasets, and the models that use that data to identify patterns or make predictions about everything from traffic patterns to criminal justice outcomes. AI can’t think for itself — it’s taught by humans to perform tasks based on the “training data” that we provide, and these systems operate within parameters that we define. But this data often reflects unhealthy social dynamics, like race and gender-based discrimination, that can be easy to miss because we’ve become so desensitized to their presence in society.
Modern research has become so specialized that our notion of impact is sometimes siloed. A world-class clinician may be rewarded for inventing a new surgery; an AI researcher may get credit for beating the world record on MNIST. When two fields cross, there can sometimes be fear, misunderstanding, or culture clashes. We’re not unique in history. By 1944, the foundations of quantum physics had been laid; soon after came, dramatically, the detonation of the first atomic bomb. After the war, a generation of physicists turned their attention to biology. In the 1944 book What is Life?, Erwin Schrödinger referred to a sense of noblesse oblige that prevented researchers in disparate fields from collaborating deeply, and “beg[ged] to renounce the noblesse”.
The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.
In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. It uses a similar technique to DeepMind software that taught itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York. The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said. The savings translate into a 15 percent improvement in power usage efficiency, or PUE, Google said in a statement. PUE measures how much electricity Google uses for its computers, versus the supporting infrastructure like cooling systems.
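For intuition on the metric: PUE is total facility power divided by the power that reaches the IT equipment itself, so improvements come from shrinking the cooling and distribution overhead. The numbers below are invented purely to show the arithmetic, and the 15 percent figure is read here as a cut in that overhead (one possible interpretation):

```python
# PUE = total facility power / IT equipment power; 1.0 would be perfect.
# The figures below are made up purely to illustrate the arithmetic.
it_load_kw = 1000.0      # power consumed by the computers themselves
overhead_kw = 180.0      # cooling, power distribution, lighting, ...

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")                      # 1.18

# A 15% cut in the non-IT overhead (one reading of "15 percent improvement"):
improved_pue = (it_load_kw + overhead_kw * 0.85) / it_load_kw
print(f"Improved PUE = {improved_pue:.2f}")    # 1.15
```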
It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis, and not only does the main character explicitly reference alchemy as his inspiration but it’s partially credited for sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions’ alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power. And abiogenesis has also been an object of fascination for a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but the historical context is driven by the same goal as the alchemists: create artificial life. Or more accurately, to create an artificial human. Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled “Don’t Be Human”. Meanwhile, possibly the largest test of an intended-to-be-humanlike - and friendlike - bot is going on via the Chinese chat service WeChat.
This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.
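Since that story starts with the Perceptron, here is a minimal sketch of Rosenblatt's 1958 learning rule on toy data (purely illustrative):

```python
# Minimal perceptron (Rosenblatt, 1958): a single thresholded unit whose
# weights are nudged toward every misclassified example. Toy AND-gate data.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])          # logical AND is linearly separable

w = np.zeros(2)
b = 0.0
lr = 0.1

for epoch in range(20):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)              # threshold activation
        update = lr * (target - pred)           # zero if prediction is right
        w += update * xi
        b += update

print(w, b)                                     # learned weights and bias
print([int(w @ xi + b > 0) for xi in X])        # reproduces AND: [0, 0, 0, 1]
```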
“Compared with the accuracy of various human judges reported in the meta-analysis, computer models need 10, 70, 150, and 300 Likes, respectively, to outperform an average work colleague, cohabitant or friend, family member, and spouse (gray points) […]
Automated, accurate, and cheap personality assessment tools could affect society in many ways: marketing messages could be tailored to users’ personalities; recruiters could better match candidates with jobs based on their personality; products and services could adjust their behavior to best match their users’ characters and changing moods; and scientists could collect personality data without burdening participants with lengthy questionnaires. Furthermore, in the future, people might abandon their own psychological judgments and rely on computers when making important life decisions, such as choosing activities, career paths, or even romantic partners. It is possible that such data-driven decisions will improve people’s lives”
So let’s address our children as though they are our children, and let us revel in the fact they are playing and painting and creating; using their first box of crayons, and we proud parents are putting every masterpiece on the fridge. Even if we are calling them all “nightmarish”–a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.
Since we’re agent engineers, my husband and I tend to think agents are great. Also, we’re lazy and stupid by our own happy admission — and agents make us a lot smarter and more productive than we would be if we weren’t “borrowing” our brains from the rest of the internet. Like most people, whatever ambivalence we feel about our agents is buried under how much better they make our lives. Agents aren’t true AI, though heavy users sometimes think they are. They are sets of structured queries, a few API modules for services the agent’s owner uses, sets of learning algorithms you can enable by “turning up” their intelligence, and procedures for interfacing with people and things. As you use them they collect more and more of a person’s interests and history — we used to say people “rub off” on their agents over time.
Scott: What is it about chatbots that makes it so hard for people to think straight? Is the urge to pontificate about our robot-ruled future so overwhelming that people literally can’t see the unimpressiveness of what’s right in front of them?
This is a game in which a sentient closet-based AI locks four girls in a room (with giant metal barriers) because one of them smudged her make-up, and forces them to repeatedly apply lipstick and eyeliner to freakishly giant doll heads until he is satisfied. That’s not my arch interpretation of events. That’s what actually happens.
Amidst the swirling maelstrom of technological progress so often heralded as the imminent salvation to all our ills, it can be necessary to remind ourselves that humanity sits at the center, not technology. And yet, we extrude these tools so effortlessly as if secreted by some glandular Technos expressed from deep within our genetic code. It’s difficult to separate us from our creations but it’s imperative that we examine this odd relationship as we engineer more autonomy, sensitivity, and cognition into the machines we bring into this world. The social environment, typified by the contemporary urban landscape, is evolving to include non-human actors that routinely engage with us, examining our behaviors, mediating our relationships, and assigning or revoking our rights. It is this evolving human-machine socialization that I wish to consider.