If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.
—Alan Kay, from A Conversation with Alan Kay
“In this effort to ‘simplify’ these routines by making the office paperless, Zuboff found that the implementation of computers wound up eradicating the basis of the clerks’ situated knowledge. Suddenly, making changes to a client’s account meant simply inputting data in an order that was constrained by the computer itself. Work became a process of filling in blanks; there was no longer anywhere for the clerks to experience decision-making in their jobs. What Zuboff observed was that as intellectual engagement with the work went down, the necessity of concentration and attention went up. What the computer did was make the work so routine, so boring, so mindless, that clerical workers had to physically exert themselves to be able to focus on what they were even doing. This transition, from work being about the application of knowledge to work being about the application of attention, turned out to have a profound physical and psychological impact on the clerical workers themselves.”
For people who want their computer, networking and information history ready to go, Ted Nelson’s 1977 work “Selected Papers” is now up. It contains various writings from 1965-1977 and convinced Autodesk to fund the Xanadu Project.
Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input-output pairs of examples) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent. It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze, and visualize data that feeds neural networks.
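To make the idea concrete, here is a minimal sketch of that workflow in plain NumPy: the “specification” is a toy dataset of input-output pairs, the “program space” is the weights of a tiny two-layer network, and the “search” is backpropagation plus gradient descent. The dataset, architecture, and hyperparameters are illustrative choices, not anything from the original post.

```python
import numpy as np

rng = np.random.default_rng(0)

# Specification: input-output pairs (a toy XOR dataset).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# The searchable program space: weights of a tiny two-layer network.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # Forward pass: run the current candidate program.
    h = sigmoid(X @ W1 + b1)
    y_hat = sigmoid(h @ W2 + b2)

    # Backpropagation: gradients of the squared error w.r.t. the weights.
    d_y = 2 * (y_hat - Y) / len(X) * y_hat * (1 - y_hat)
    dW2, db2 = h.T @ d_y, d_y.sum(axis=0)
    d_h = d_y @ W2.T * h * (1 - h)
    dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

    # Gradient descent: move through weight space toward a program
    # that satisfies the specification.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(y_hat, 2))  # should approach [[0], [1], [1], [0]]
```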
Our software is bullshit, our literary essays are too long, the good editors all quit or got fired, hardly anyone is experimenting with form in a way that wakes me up, the IDEs haven’t caught up with the 1970s, the R&D budgets are weak, the little zines are badly edited, the tweets are poor, the short stories make no sense, people still care too much about magazines, the Facebook posts are nightmares, LinkedIn has ruined capitalism, and the big tech companies that have arisen are exhausting, lumbering gold-thirsty kraken that swim around with sour looks on their face wondering why we won’t just give them all our gold and save the time. With every flap of their terrible fins they squash another good idea in the interest of consolidating pablum into a single database, the better to jam it down our mental baby duck feeding tubes in order to make even more of the cognitive paté that Silicon Valley is at pains to proclaim a delicacy. Social media is veal calves being served tasty veal. In the spirit of this thing I won’t be editing this paragraph.
This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of Silicon Valley. Promoted in magazines, books, TV programmes, websites, newsgroups and Net conferences, the Californian Ideology promiscuously combines the free-wheeling spirit of the hippies and the entrepreneurial zeal of the yuppies. This amalgamation of opposites has been achieved through a profound faith in the emancipatory potential of the new information technologies. In the digital utopia, everybody will be both hip and rich.
Not surprisingly, this optimistic vision of the future has been enthusiastically embraced by computer nerds, slacker students, innovative capitalists, social activists, trendy academics, futurist bureaucrats and opportunistic politicians across the USA. As usual, Europeans have not been slow in copying the latest fad from America. While a recent EU Commission report recommends following the Californian free market model for building the information superhighway, cutting-edge artists and academics eagerly imitate the post-human philosophers of the West Coast’s Extropian cult.[3] With no obvious rivals, the triumph of the Californian Ideology appears to be complete.
But, in truth, it’s not that difficult to understand Ethereum, blockchains, Bitcoin and all the rest — at least the implications for people just going about their daily business, living their lives. Even a programmer who wants a clear picture can get a good enough model of how it all fits together fairly easily. Blockchain explainers usually focus on some very clever low-level details like mining, but that stuff really doesn’t help people (other than implementers) understand what is going on. Rather, let’s look at how the blockchains fit into the more general story about how computers impact society.
Rest in peace, HyperCard. It was one of the most important applications in the history of personal computing, in my humble opinion, and responsible for the “amazing bloom” of ideas and applications noted by Ben Hyde and Matt Jones. I made a few things with it, and I’m pretty sure they weren’t in the ‘amazing bloom’ class — but I can certainly say HyperCard was a massive influence on who I am now. (Ed. This article was originally published at cityofsound.com on 4th April 2004.)
Twine is a tool that lets you make point-and-click games that run in a web browser—what a lot of people refer to as “choose your own adventure” or CYOA games. It’s pretty easy to make a game, which means that the Twine community is fairly big and diverse.
There are a lot of tools that you can use to do information architecture and to sketch out processes: Visio, PowerPoint, Keynote, or OmniGraffle, for example. In the programming world, some people use UML tools to draw pictures of how a program should operate and then turn that into code, and a new breed of product-prototyping apps is blurring the line between design and code, too. But it has always bummed me out that when you draw a picture on a computer it is, for the most part, just a picture. Why doesn’t the computer make sense of those boxes and arrows for you? Why is it so hard to turn a picture of a web product into a little, functional website?
This is a huge topic — why are most digital documents not presented as dynamic programs? (One good recent exploration of the subject is Bret Victor’s “Up and Down the Ladder of Abstraction.”) And in some ways the Twine interface is a very honest testing and prototyping environment, because it is so good at modeling choices (as in, choose your own adventure).
In our work developing and understanding these near-term applications, we’ve developed a framework to help us work with the algorithms concretely. Today the Software and Applications team at Rigetti is excited to share the description of this work in our whitepaper, A Practical Quantum Instruction Set Architecture. We have focused on a simple model that is compatible with the types of devices that are likely to be available first.
Five years ago, Matthew Kirschenbaum, an English professor at the University of Maryland, realized that no one seemed to know who wrote the first novel with the help of a word processor. He’s just published the fruit of his efforts: Track Changes, the first book-length story of word processing. It is more than a history of high art. Kirschenbaum follows how writers of popular and genre fiction adopted the technology long before vaunted novelists did. He determines how their writing habits and financial powers changed once they moved from typewriter to computing. And he details the unsettled ways that the computer first entered the home. (When he first bought a computer, for example, the science-fiction legend Isaac Asimov wasn’t sure whether it should go in the living room or the study.)
It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis, and not only does the main character explicitly reference alchemy as his inspiration, but the novel is partially credited with sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions’ alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power. And abiogenesis has also been an object of fascination in a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but the historical context is driven by the same goal as the alchemists: to create artificial life, or, more accurately, an artificial human. Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled “Don’t Be Human”. Meanwhile, possibly the largest test of an intended-to-be-humanlike - and friendlike - bot is going on via the Chinese chat service WeChat.
This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.
The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one. But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.
Some people identify the birth of virtual reality in rudimentary Victorian “stereoscopes,” the first 3D picture viewers. Others might point to any sort of out-of-body experience. But to most, VR as we know it was created by a handful of pioneers in the 1950s and 1960s. In 1962, after years of work, filmmaker Mort Heilig patented what might be the first true VR system: the Sensorama, an arcade-style cabinet with a 3D display, vibrating seat, and scent producer. Heilig imagined it as one in a line of products for the “cinema of the future.”
This post is a crash course on the notation used in programming language theory (“PL theory” for short). For a much more thorough introduction, I recommend Types and Programming Languages by Benjamin C. Pierce and Semantics Engineering with PLT Redex by Felleisen, Findler, and Flatt. I’ll assume the reader is an experienced programmer but not an experienced mathematician or PL theorist. I’ll start with the most basic definitions and try to build up quickly.
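For a flavor of the notation such a crash course builds up to (an illustrative example in the standard style of the PL literature, not an excerpt from the post), here is the typing judgment “in context Γ, expression e has type τ” and the inference rule for function application:

```latex
% Judgment form: \Gamma \vdash e : \tau
% Rule for application: if e1 is a function from \tau_1 to \tau_2
% and e2 has type \tau_1, then applying e1 to e2 yields \tau_2.
\[
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1\; e_2 : \tau_2}
\]
```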
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices and even claims that every physical event is a computation. In this paper we introduce a formal framework that can be used to determine whether or not a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, drawing the comparison with the use of mathematical models to represent physical objects in experimental science. This powerful formulation allows a precise description of the similarities between experiments, computation, simulation, and technology.
Programming is complicated. Different programs have different abstraction levels, domains, platforms, longevity, team sizes, etc., ad infinitum. There is something fundamentally different between the detailed instructions that go into, say, computing a checksum and the abstractions involved in defining the flow of data in any medium-sized system. I think that the divide between coding the details and describing the flow of a program is so large that a programming language could benefit immensely from keeping them conceptually separate. This belief has led me to design a new programming language - Glow - that has this separation at its core.
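Since the excerpt shows no Glow syntax, here is a rough Python illustration of the separation being argued for: low-level “detail” code (the checksum) kept apart from a higher-level “flow” description that only says how data moves between stages. The checksum and pipeline are invented examples, not taken from Glow itself.

```python
# Detail level: step-by-step instructions for one small computation.
def checksum(data: bytes) -> int:
    total = 0
    for byte in data:
        total = (total + byte) & 0xFF
    return total

def split_into_chunks(data: bytes, size: int = 4):
    return [data[i:i + size] for i in range(0, len(data), size)]

# Flow level: a declarative list of named stages saying only how data
# moves through the system, with no low-level instructions of its own.
pipeline = [
    ("split", split_into_chunks),
    ("checksum each chunk", lambda chunks: [checksum(c) for c in chunks]),
    ("combine", lambda sums: sum(sums) & 0xFF),
]

def run(stages, value):
    for _name, stage in stages:
        value = stage(value)
    return value

print(run(pipeline, b"hello, glow"))  # a single combined checksum byte
```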
In Yugoslavia in the 1980s, computers were a rare luxury. A ZX Spectrum or Commodore 64 could easily cost a month’s salary, and that’s if you could even get through the tough importation laws. Then in 1983, while on holiday in Risan, Voja Antonić dreamt up plans for a new computer, a people’s machine that could be built at home for a fraction of the cost of foreign imports. The Galaksija was born, and with it a computer revolution.
Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use.
If you attempt to make sense of Engelbart’s design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart’s intent. Engelbart hated our present-day systems. If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent. The least important question you can ask about Engelbart is, “What did he build?” By asking that question, you put yourself in a position to admire him, to stand in awe of his achievements, to worship him as a hero. But worship isn’t useful to anyone. Not you, not him. The most important question you can ask about Engelbart is, “What world was he trying to create?” By asking that question, you put yourself in a position to create that world yourself.
By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.
Multics is no longer produced or offered for sale; Honeywell no longer even makes computers. People edit on computers on their desktops so cheap and fast that not only do redisplay algorithms no longer matter, but the whole idea of autonomous redisplay in a display editor is no longer a given (although autonomous redisplay’s illustrious child, WYSIWYG, is now the standard paradigm of the industry). There is now no other kind of editor besides what we then called the “video editor”. Thus, all of the battles, acrimony, and invidious or arrogant comparisons in what follows are finished and done with, and to be viewed in the context of 1979 – this is a historical document about Multics and the evolution of an editor. It is part of the histories of Multics, of Emacs, and of Lisp.
To formulate a theory about a future society both very modern and not dominated by industry, it will be necessary to recognize natural scales and limits. We must come to admit that only within limits can machines take the place of slaves; beyond these limits they lead to a new kind of serfdom. Only within limits can education fit people into a man-made environment: beyond these limits lies the universal schoolhouse, hospital ward, or prison. Only within limits ought politics to be concerned with the distribution of maximum industrial outputs, rather than with equal inputs of either energy or information. Once these limits are recognized, it becomes possible to articulate the triadic relationship between persons, tools, and a new collectivity. Such a society, in which modern technologies serve politically interrelated individuals rather than managers, I will call “convivial.”
We present what we argue is the generic generalization of Conway’s “Game of Life” to a continuous domain. We describe the theoretical model and the explicit implementation on a computer.
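For reference, here is a minimal step of the original, discrete Game of Life in NumPy (toroidal boundaries); the paper’s contribution, not shown here, is to generalize this rule to a continuous domain, replacing the binary grid and eight-cell neighbor count with continuous analogues.

```python
import numpy as np

def life_step(grid):
    # Count live neighbors by summing the 8 shifted copies of the grid.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Example: a glider on a 10x10 torus.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)
print(grid)
```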