Some of these skilled lawyers did question whether their profession could ever entirely trust automation to make skilled legal decisions. A small number suggested they would stick to “reliable” manual processes for the immediate future. Most participants, however, stressed that high-volume, low-risk contracts took up too much of their time, and felt it was incumbent on lawyers to automate work when, and where, possible. For them, the study was also a simple, practical demonstration of a not-so-scary AI future. Even so, the lawyers cautioned that undue weight should not be put on legal AI alone. One participant, Justin Brown, insisted that humans must use new technology alongside their lawyerly instincts. He says: “Either working alone is inferior to the combination of both.”
Heraud researched the scourges of agriculture: hypoxic dead zones in the Gulf of Mexico and Baltic Sea, the colony collapse of bees, soil degradation, and human health problems from allergies to cancers. “Everything tied back to the blind, rampant, broadcast spraying of chemicals,” Heraud says. He and Redden figured they could teach machines to differentiate between crops and weeds, then eliminate the weeds mechanically or with targeted doses of nontoxic substances. The two first considered hot foam, laser beams, electric currents, and boiling water. They’d market the robot to organic farmers, who spend heavily on chemical-free weeding methods including mechanical tillage, which can be both fuel-intensive and damaging to soil. After months of research, they faced a disappointing truth: There was no way around herbicides. “Turns out zapping weeds with electricity or hot liquid requires far more time and energy than chemicals—and it isn’t guaranteed to work,” Heraud says. Those methods might eliminate the visible part of a weed, but not the root. And pulling weeds with mechanical pincers is a far more time-intensive task for a robot than delivering microsquirts of poison. Their challenge became applying the chemicals with precision.
It can be really boring trying to program new sounds on complicated synthesizers. So why not get a computer to do it for you? By a process FoAM call artificial evolution, the Midimutant can grow new sounds on hardware synthesizers from a starting sound you give it. Developed, apparently, with Aphex Twin, the Midimutant is shown in the video below generating sounds on a Yamaha TX7, a synth no one in their right mind would want to spend time programming by hand.
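The post doesn't spell out how Midimutant's "artificial evolution" works, but the general shape of such systems is a genetic algorithm over synth parameters. Here is a minimal sketch under two stated assumptions: a patch is a vector of 0-127 MIDI parameter values, and `fitness` can score similarity to a target sound (the real Midimutant judges rendered audio from the synth itself; this stand-in compares parameters directly).

```python
import random

def fitness(patch, target):
    # Higher is better: negative squared distance in parameter space.
    # Stand-in for comparing rendered audio against the target sound.
    return -sum((a - b) ** 2 for a, b in zip(patch, target))

def mutate(patch, rate=0.1):
    # Randomly replace some parameters with fresh MIDI values (0-127).
    return [random.randrange(128) if random.random() < rate else p for p in patch]

def evolve(seed, target, population=20, generations=100):
    # Start from heavy mutations of the seed patch, keeping the seed itself.
    pool = [list(seed)] + [mutate(seed, 0.5) for _ in range(population - 1)]
    for _ in range(generations):
        pool.sort(key=lambda p: fitness(p, target), reverse=True)
        survivors = pool[: population // 2]  # elitism: the best always survive
        pool = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pool, key=lambda p: fitness(p, target))
```

Because the best patch of each generation always survives, fitness can only improve over the run; swapping the toy fitness function for an audio-similarity measure is where the real work lives.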
So, will there still be enough jobs for everyone a few decades from now? Anybody who fears mass unemployment underestimates capitalism’s extraordinary ability to generate new bullshit jobs. If we want to really reap the rewards of the huge technological advances made in recent decades (and of the advancing robots), then we need to radically rethink our definition of “work.”
Badiou notes that the positive programme of Inventing the Future is organised around three points — full automation, universal basic income, and a “post-work” society — and that the first two of these points really depend on the third (automation as the means, UBI as the necessary consequence). He therefore addresses his critique to this nexus of ideas.
The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot to helping refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as “the world’s first robot lawyer”, giving free legal aid to users through a simple-to-use chat interface. The chatbot, using Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. For those in the UK, it helps them apply for asylum support.
Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products, given the tremendous complexity of the tasks they have to carry out and their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will ethics become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field of study for millennia, can we ever suppose that our current, subjective ethical notions should be taken for granted, and used for products that will make decisions on our behalf in real-world situations?
For hundreds of years the now-extinct turnspit dog, also called Canis Vertigus (“dizzy dog”), vernepator cur, kitchen dog and turn-tyke, was specially bred just to turn a roasting mechanism for meat. And weirdly, this animal was a high-tech fixture for the professional and home cook from the 16th century until the mid-1800s. Turnspit dogs came in a variety of colors and were heavy-set, often with heterochromatic eyes. They were short enough to fit into a wooden wheel contraption that was connected to ropes or chains, which turned the giant turkey or ham on a spit for the master of the house.
It was puzzling. I remembered the suggestion that they were outsourcing the email responses to call centers in the Philippines, but these emails were coming from someone not only familiar with construction administration, but with this specific project. That meant the only possible explanation was that they had some artificial intelligence software learn the project inside out by reading my email and automatically generating responses.
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.
The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown and unseen, with impacts that take enormous effort to detect.
Moreover, the public prosecutor’s office suddenly proved wonderfully art-literate, saying that “the overriding interest in a public debate and the questions that the ‘Random Darknet Shopper’ raises justified the possession of the ecstasy”. The German curator Inke Arns wrote on Facebook: “The Swiss public prosecutor seems to be a good art critic.” And Marina Galperina, editor-in-chief of the New York online magazine Hopes&Fears, tweeted: “Swiss prosecutor: It’s OK to buy MDMA online! (as long as you’re a bot in an art project.)”
Apple autocorrect of the morning ‘neo-’ for 'bro-’ (screencap for evidence)
It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis, and not only does the main character explicitly reference alchemy as his inspiration but the novel is partially credited for sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions’ alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power. And abiogenesis has also been an object of fascination in a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but the historical context is driven by the same goal as the alchemists: to create artificial life. Or, more accurately, to create an artificial human. Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled “Don’t Be Human”. Meanwhile, possibly the largest test of an intended-to-be-humanlike - and friendlike - bot is going on via the Chinese chat service WeChat.
Located on the futurist left end of the political spectrum, fully automated luxury communism (FALC) aims to embrace automation to its fullest extent. The term may seem oxymoronic, but that’s part of the point: anything labeled luxury communism is going to be hard to ignore. “There is a tendency in capitalism to automate labor, to turn things previously done by humans into automated functions,” says Aaron Bastani, co-founder of Novara Media. “In recognition of that, then the only utopian demand can be for the full automation of everything and common ownership of that which is automated.” Bastani and fellow luxury communists believe that this era of rapid change is an opportunity to realise a post-work society, where machines do the heavy lifting not for profit but for the people.
I believe that it is correct to view luxury communism from a utopian perspective, not in the sense of something that is impossible but in the sense of something that attempts to open up the sense of future possibilities as opposed to a mere repetition of present conditions. Partially this is to act as a critique of the present, partially to act as a spur towards an open future. Indeed, the use of the term ‘communism’ implies a radical alternative future vision, one that is subversive of the present and, yes, even utopian. It is here that I think that fully automated luxury communism, by putting too much faith in capitalist technology overcoming scarcity and the need for labour, fails to imagine a more general transformation of social relations. To avoid this tendency, and to encourage thinking about the overcoming of the paradoxes and miseries of capitalism, we need to seriously engage in utopian experimentation in future possibilities.
But, ‘human interfaces’ are actually quite costly to maintain. People are alive, and thus need food, sick leave, maternity leave and education. They also have a troublesome awareness of exploitation and an unpredictable ability to disobey, defraud, make mistakes or go rogue. Thus, over the years corporate managers have tried to push the power balance in this hybrid model towards the machine side. In their ideal world, bank executives would get rid of as many manual human elements as possible and replace them with software systems moving binary code around on hard drives, a process they refer to as 'digitisation’. Corporate management is fond of digitisation – and other forms of automation – because it is a force for scale, standardisation and efficiency – and in turn lowers costs, leading to enhanced profits.
TIM is a platform for ideas worth automating. TIM generates talks on a broad spectrum of topics, based on the texts of slightly more coherent talks given under the auspices of his more famous big brother, who shall not be named here.
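The post doesn't say how TIM generates its talks, but a common approach for this kind of parody generator is a word-level Markov chain trained on transcripts: record which word follows each pair of words in the source talks, then walk that table to babble out a new one. A minimal sketch, with all function names my own:

```python
import random
from collections import defaultdict

def train(text, order=2):
    # Map each run of `order` words to the words observed to follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, order=2):
    # Start from a random state and repeatedly sample a plausible next word.
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        options = chain.get(tuple(out[-order:]))
        if not options:
            break  # dead end: no observed continuation
        out.append(random.choice(options))
    return " ".join(out)
```

Every generated word comes from the training corpus, so the output sounds locally plausible while being globally incoherent — which is rather the point of a TIM talk.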
It seems uncontroversial that these systems may individually lower costs to users in a short-term sense. Nevertheless, while startup culture is fixated upon using digital technology to narrowly improve short-term efficiency in many different business settings, it is woefully inept at analysing what problems this process may accumulate in the long term. Payments startups, for example, see themselves as incrementally working towards a ‘cashless society’: a futurist buzzword laden with positive connotations of hypermodern efficiency. It describes the downfall of something 'old’ and archaic – cash – but doesn’t actually describe what rises up in its place. If you like, 'cashless society’ could be reframed as 'a society in which every transaction you make will have to be approved by a private intermediary who can watch your actions and exclude you.’
It may be fortuitous that the trolley problem has trickled into the world of driverless cars: It illuminates some of the profound ethical—and legal—challenges we will face ahead with robots. As human agents are replaced by robotic ones, many of our decisions will cease to be in-the-moment, knee-jerk reactions. Instead, we will have the ability to premeditate different options as we program how our machines will act. For philosophers like Lin, this is the perfect example of where theory collides with the real world—and thought experiments like the trolley problem, though they may be abstract or outdated, can help us to rigorously think through scenarios before they happen. Lin and Gerdes hosted a conference about ethics and self-driving cars last month, and hope the resulting discussions will spread out to other companies and labs developing these technologies.
This one-shot poison (which is harmless to everything else on the reef) is what makes autonomous robotic sea star control possible, since it means that a robot can efficiently target individual sea stars without having to try and keep track of which ones it’s injected already so it can go back and repeat the process nine more times. At Queensland University of Technology in Australia, a group of researchers led by Matthew Dunbabin and Peter Corke spent the last decade working on COTSBot,* which has been specifically designed to seek out and murder crown-of-thorns sea stars as mercilessly and efficiently as possible.
Kofuku-ji temple’s chief priest, Bungen Oi, offers a prayer during the funeral for 19 pet robot dogs on January 26. The dogs, created by Sony, were first-generation Aibo robots from June 1999 that had artificial intelligence and developed personalities and learned from their owners. Toshifumi Kitamura/AFP/Getty
The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one. But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.
The Random Darknet Shopper is an automated online shopping bot which we provide with a budget of $100 in Bitcoins per week. Once a week the bot goes on a shopping spree in the deep web, where it randomly chooses and purchases one item and has it mailed to us. The items are shown in the exhibition «The Darknet. From Memes to Onionland» at Kunst Halle St. Gallen. Each new object adds to a landscape of traded goods from the Darknet.
“It’s 4:24am and I’m in bed watching a documentary about a chimpanzee named Nim.” “It’s 2.19pm and I am pretty sure I am still drunk.” “It’s 11:51pm and all I want is an entire pumpkin pie okay.” “It’s 10:12am and THERE ARE STILL NO BISCUITS. Wtf is this anarchy!?” “It’s 9:57am and kyle and I are sobbing while watching cheaper by the dozen 2.” “It’s 1:31pm and he hasn’t texted me back from last night. I give up.” “its 10:29am and i already want pizza.” “It’s 3.11am and I’m sober in Burger King. What’s happening?” “It’s 9:23am and I can’t wait to taste wines tonight!” “it’s 2:32pm and I woke up like 5 minutes ago.” “Its 2:50am and I’m still doing homework.” “It’s 11:29am and I’ve only just realised I’ve had my t-shirt on backwards the whole morning.” “It’s 3.12am and I’m cooking supernoodles. what my life.” “It’s 7:32am and I am listening to R Kelly very loudly. Where did it all go wrong?”
The government-financed Thai Delicious Committee, which oversaw the development of the machine, describes it as “an intelligent robot that measures smell and taste in food ingredients through sensor technology in order to measure taste like a food critic.” In a country of 67 million people, there are somewhere near the same number of strongly held opinions about Thai cooking. A heated debate here on the merits of a particular nam prik kapi, a spicy chili dip of fermented shrimp paste, lime juice and palm sugar, could easily go on for an hour without coming close to resolution.
Since we’re agent engineers, my husband and I tend to think agents are great. Also, we’re lazy and stupid by our own happy admission — and agents make us a lot smarter and more productive than we would be if we weren’t “borrowing” our brains from the rest of the internet. Like most people, whatever ambivalence we feel about our agents is buried under how much better they make our lives. Agents aren’t true AI, though heavy users sometimes think they are. They are sets of structured queries, a few API modules for services the agent’s owner uses, sets of learning algorithms you can enable by “turning up” their intelligence, and procedures for interfacing with people and things. As you use them they collect more and more of a person’s interests and history — we used to say people “rub off” on their agents over time.
The exact and approximate spots Kerouac traveled and described are taken from the book and parsed by the Google Direction Service API. The result is a huge set of driving directions, 55 pages long. The chapters match those of the original book. All in all, as Google shows, the journey takes 272.261667 hours (for 17,527 miles).
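The arithmetic behind that total can be sketched briefly. This assumes a response shaped like the Google Directions API's: a route is a dict with a "legs" list, each leg carrying duration in seconds and distance in metres under a "value" key (the function name here is my own):

```python
def route_totals(route):
    # Sum per-leg durations and distances across the whole route,
    # then convert seconds to hours and metres to miles.
    seconds = sum(leg["duration"]["value"] for leg in route["legs"])
    metres = sum(leg["distance"]["value"] for leg in route["legs"])
    return seconds / 3600.0, metres / 1609.344
```

Concatenating the legs of every chapter's route and running them through something like this is all it takes to arrive at a single hours-and-miles figure for the whole book.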
A mild-mannered man says his life was completely ruined after Google’s autocomplete feature convinced the government he was building a bomb.
Though he intended to search the web for “How do I build a radio-controlled airplane”, Jeffrey Kantor, then a government contractor, says the search engine auto-completed his request, turning it into “How do I build a radio-controlled bomb?”
Before he realized Google’s error, Kantor had already pressed enter, sparking a chain reaction he says resulted in months of harassment by government officials leading up to his eventual termination.
Project Nightjar: camouflage data visualisation (via http://www.pawfal.org/dave/blog/2013/11/project-nightjar-camouflage-data-visualisation-and-possible-internet-robot-predators/)
The drone operates less like an RC plane and more like a Roomba. You can define an area of interest on a laptop, beam it to the eBee, and then just toss the drone in the air where it will autonomously collect imagery. Within 40 minutes, the drone took 225 photos covering 100 acres from an altitude of 120 meters. Larger areas of 2,500 acres and more are possible, but this was sufficient for our needs.
In the dusty red earth of Western Australia, robot trucks haul iron ore. The trucks themselves weigh about 500 tons when loaded — they are truly massive. They operate more or less on their own, navigating mining roads connecting the sprawling Pilbara iron mines with a guidance system provided by global positioning satellites, radars and lasers. It’s part of $13 billion mining operation by Rio Tinto, one of the world’s largest mining firms.
For one British university, what began as a time-saving exercise ended in disgrace when a computer model set up to streamline its admissions process exposed - and then exacerbated - gender and racial discrimination. As detailed here in the British Medical Journal, staff at St George’s Hospital Medical School decided to write an algorithm that would automate the first round of the admissions process. The formula used historical patterns in the characteristics of candidates whose applications were traditionally rejected to filter out new candidates whose profiles matched those of the least successful applicants. By 1979 the list of candidates selected by the algorithm was a 90-95% match for those chosen by the selection panel, and in 1982 it was decided that the whole initial stage of the admissions process would be handled by the model. Candidates were assigned a score without their applications having passed before a single pair of human eyes, and this score was used to determine whether or not they would be interviewed. Quite aside from the obvious concern a student would have on finding out that a computer was rejecting their application, a more disturbing discovery was made: the admissions data used to build the model showed bias against women and people with non-European-sounding names.
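The mechanism is easy to reproduce in miniature. The following is an illustrative toy, not St George's actual formula: hypothetical features, invented history, and a crude weighting rule, purely to show how fitting a scorer to past panel decisions bakes the panel's bias into the score.

```python
# Each row: (female, non_european_name, exam_grade, panel_accepted).
# This invented history penalises women and non-European names.
HISTORY = [
    (0, 0, 0.9, 1),
    (0, 0, 0.7, 1),
    (1, 0, 0.9, 0),
    (0, 1, 0.9, 0),
    (1, 1, 0.8, 0),
    (0, 0, 0.5, 0),
]

def learned_weight(feature_index):
    # Crude "learning": how much having the feature changed the
    # historical acceptance rate.
    with_feature = [row[3] for row in HISTORY if row[feature_index] == 1]
    without = [row[3] for row in HISTORY if row[feature_index] == 0]
    return sum(with_feature) / len(with_feature) - sum(without) / len(without)

def score(female, non_european_name, exam_grade):
    # Grade plus the weights the model absorbed from past decisions.
    return (exam_grade
            + learned_weight(0) * female
            + learned_weight(1) * non_european_name)
```

Two candidates with identical grades now receive different scores purely because of the demographic flags — exactly the pattern the BMJ investigation uncovered, with no human ever intending it at scoring time.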
“The possible introduction of LARs (lethal autonomous robots) raises far-reaching concerns about the protection of life during war and peace,” Mr Heyns said. “If this is done, machines and not humans, will take the decision on who is alive or dies.” Mr Heyns presented a report on his research and called for a worldwide moratorium on the production and deployment of such machines, while nations figured out the knotty legal and ethical issues.
The Queen’s Android (via http://bit.ly/Krr44d)
“This famous android was a collaborative effort by two Germans. Clockmaker Peter Kintzing created the mechanism and joiner David Roentgen crafted the cabinet; the dress dates from the 19th century. Automatons were in circulation and aroused much curiosity. Roentgen probably sent the tympanum to the French court and Marie-Antoinette bought it in 1784. The queen, aware of its perfection and scientific interest, had it deposited in the Academy of Sciences cabinet in 1785. The tympanum is a musical instrument that plays eight tunes when the female android strikes the 46 strings with two little hammers. Tradition has it that she is a depiction of Marie-Antoinette.”
Bot activity on Wikipedia entries about global warming (via http://nearfuturelaboratory.com/pasta-and-vinegar/2012/04/16/bot-activity-on-wikipedia-entries-about-global-warming/)