Posts tagged computing
“this is not a hall of shame. the intent is to awaken you to many of the peculiarities and weirdness of computers. hopefully, after reading these articles, you will have learned a lot and will embrace chaos.”
Blackle Mori (@suricrasia) - 2021
But, in truth, it’s not that difficult to understand Ethereum, blockchains, Bitcoin and all the rest — at least the implications for people just going about their daily business, living their lives. Even a programmer who wants a clear picture can get a good enough model of how it all fits together fairly easily. Blockchain explainers usually focus on some very clever low-level details like mining, but that stuff really doesn’t help people (other than implementers) understand what is going on. Rather, let’s look at how the blockchains fit into the more general story about how computers impact society.
“Small corrections to the programmed sequence could be done by patching over portions of the paper tape and re-punching the holes in that section.”
Rest in peace, HyperCard. It was one of the most important applications in the history of personal computing, in my humble opinion, and responsible for the “amazing bloom” of ideas and applications noted by Ben Hyde and Matt Jones. I made a few things with it, and I’m pretty sure they weren’t in the ‘amazing bloom’ class — but I can certainly say HyperCard was a massive influence on who I am now. (Ed. This article was originally published at cityofsound.com on 4th April 2004.)
Twine is a tool that lets you make point-and-click games that run in a web browser—what a lot of people refer to as “choose your own adventure” or CYOA games. It’s pretty easy to make a game, which means that the Twine community is fairly big and diverse.

There are a lot of tools that you can use to do information architecture and to sketch out processes: Visio, PowerPoint, Keynote, or OmniGraffle, for example. In the programming world, some people use UML tools to draw pictures of how a program should operate and then turn that into code, and a new breed of product prototyping apps is blurring the line between design and code, too. But it has always bummed me out that when you draw a picture on a computer it is, for the most part, just a picture. Why doesn’t the computer make sense of those boxes and arrows for you? Why is it so hard to turn a picture of a web product into a little, functional website?

This is a huge topic—why are most digital documents not presented as dynamic programs? (One good recent exploration of the subject is Bret Victor’s “Up and Down the Ladder of Abstraction.”) And in some ways the Twine interface is a very honest testing and prototyping environment, because it is so good at modeling choices (as in, choose your own adventure).
In our work developing and understanding these near-term applications, we’ve developed a framework that helps us work with the algorithms concretely. Today the Software and Applications team at Rigetti is excited to share a description of this work in our whitepaper [A Practical Quantum Instruction Set Architecture]. We have focused on a simple model that is compatible with the types of devices likely to be available first.
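To give a concrete sense of what an instruction set like this operates on—this is a minimal sketch, not Rigetti’s Quil or its simulator—here is the classic two-gate Bell-state program (a Hadamard on qubit 0, then a CNOT from qubit 0 to qubit 1) run on a tiny pure-Python statevector simulator:

```python
# A hedged sketch of the kind of program a quantum instruction set describes:
# prepare a Bell state on two qubits, simulated as a 4-entry statevector
# where bit k of the index is the value of qubit k.
import math

def apply_h(state, qubit):
    """Hadamard on `qubit`: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2."""
    s = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if (i >> qubit) & 1 == 0:
            j = i | (1 << qubit)
            a, b = state[i], state[j]
            new[i], new[j] = s * (a + b), s * (a - b)
    return new

def apply_cnot(state, control, target):
    """Flip `target` wherever `control` is 1 (a swap of amplitudes)."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i] = state[i ^ (1 << target)]
    return new

# |00>  ->  H(0)  ->  CNOT(0, 1)  gives  (|00> + |11>) / sqrt(2)
state = [1.0, 0.0, 0.0, 0.0]
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)
```

The point of an instruction set architecture is exactly this separation: the program is just the two-gate sequence, while the device (or simulator) decides how to realize it.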
Five years ago, Matthew Kirschenbaum, an English professor at the University of Maryland, realized that no one seemed to know who wrote the first novel with the help of a word processor. He’s just published the fruit of his efforts: Track Changes, the first book-length story of word processing. It is more than a history of high art. Kirschenbaum follows how writers of popular and genre fiction adopted the technology long before vaunted novelists did. He determines how their writing habits and financial powers changed once they moved from typewriter to computing. And he details the unsettled ways that the computer first entered the home. (When he first bought a computer, for example, the science-fiction legend Isaac Asimov wasn’t sure whether it should go in the living room or the study.)
It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis, and not only does the main character explicitly reference alchemy as his inspiration but the novel is partially credited for sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions’ alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power. And abiogenesis has also been an object of fascination for a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but historically the work is driven by the same goal as the alchemists: to create artificial life. Or, more accurately, to create an artificial human. Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled “Don’t Be Human”. Meanwhile, possibly the largest test of an intended-to-be-humanlike - and friendlike - bot is going on via the Chinese chat service WeChat.
This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.
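The 1958 starting point of that story is compact enough to sketch here. This is a minimal illustration of Rosenblatt’s perceptron learning rule (not code from the article): nudge the weights toward each misclassified example until a linearly separable dataset—logical OR, in this toy case—is classified correctly.

```python
# Minimal perceptron (Rosenblatt, 1958): a linear threshold unit trained by
# moving its weights toward misclassified examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, label) pairs with label in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                 # 0 if correct, +1 or -1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn OR: output 1 unless both inputs are 0.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The limitation that helped trigger the AI Winter the series describes is visible in the same ten lines: swap the OR labels for XOR and no weights will ever converge, since a single threshold unit can only draw one line.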
564_0206.jpg (via http://flic.kr/p/tHpFzD )
The Osborne 1 among the Mujahideen
The algorithmic metaphor is just a special version of the machine metaphor, one specifying a particular kind of machine (the computer) and a particular way of operating it (via a step-by-step procedure for calculation). And when left unseen, we are able to invent a transcendental ideal for the algorithm. The canonical algorithm is not just a model sequence but a concise and efficient one. In its ideological, mythic incarnation, the ideal algorithm is thought to be some flawless little trifle of lithe computer code, processing data into tapestry like a robotic silkworm. A perfect flower, elegant and pristine, simple and singular. A thing you can hold in your palm and caress. A beautiful thing. A divine one. But just as the machine metaphor gives us a distorted view of automated manufacture as prime mover, so the algorithmic metaphor gives us a distorted, theological view of computational action.
Some people identify the birth of virtual reality in rudimentary Victorian “stereoscopes,” the first 3D picture viewers. Others might point to any sort of out-of-body experience. But to most, VR as we know it was created by a handful of pioneers in the 1950s and 1960s. In 1962, after years of work, filmmaker Mort Heilig patented what might be the first true VR system: the Sensorama, an arcade-style cabinet with a 3D display, vibrating seat, and scent producer. Heilig imagined it as one in a line of products for the “cinema of the future.”
This post is a crash course on the notation used in programming language theory (“PL theory” for short). For a much more thorough introduction, I recommend Types and Programming Languages by Benjamin C. Pierce and Semantic Engineering with PLT Redex by Felleisen, Findler, and Flatt. I’ll assume the reader is an experienced programmer but not an experienced mathematician or PL theorist. I’ll start with the most basic definitions and try to build up quickly.
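As a taste of the notation in question—using the standard simply-typed presentation found in Pierce’s book—typing judgments are written Γ ⊢ e : τ (“in context Γ, expression e has type τ”), and rules are stacked as inference rules, premises over conclusion. The rule for function application, for example:

```latex
% T-App: if e1 is a function from tau1 to tau2, and e2 has type tau1,
% then applying e1 to e2 yields tau2.
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
      \qquad \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1 \; e_2 : \tau_2}
```

Reading a handful of rules like this one is most of what the notation demands; the rest of the post builds vocabulary for the pieces (contexts, judgments, metavariables).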
Computing is a high-level process of a physical system. Recent interest in non-standard computing systems, including quantum and biological computers, has brought this physical basis of computing to the forefront. There has been, however, no consensus on how to tell if a given physical system is acting as a computer or not, leading to confusion over novel computational devices, and even claims that every physical event is a computation. In this paper we introduce a formal framework that can be used to determine whether or not a physical system is performing a computation. We demonstrate how the abstract computational level interacts with the physical device level, drawing the comparison with the use of mathematical models to represent physical objects in experimental science. This powerful formulation allows a precise description of the similarities between experiments, computation, simulation, and technology.
Programming is complicated. Different programs have different abstraction levels, domains, platforms, longevity, team sizes, etc ad infinitum. There is something fundamentally different between the detailed instructions that go into, say, computing a checksum and the abstractions involved in defining the flow of data in any medium-sized system. I think that the divide between coding the details and describing the flow of a program is so large that a programming language could benefit immensely from keeping them conceptually separate. This belief has led me to design a new programming language - Glow - that has this separation at its core.
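Glow itself isn’t shown here, but the divide the author describes can be illustrated in familiar terms. In this hypothetical Python sketch (all names are mine, not Glow’s), the checksum is detail-level code—every step spelled out—while the pipeline is flow-level code that only wires stages together:

```python
# Hypothetical illustration (not Glow) of detail-level vs flow-level code.

# Detail level: an explicit, step-by-step checksum over bytes.
def checksum(data: bytes) -> int:
    total = 0
    for byte in data:
        total = (total + byte) % 256
    return total

# Flow level: only the shape of the computation, details abstracted away.
def pipeline(source, *stages):
    for stage in stages:
        source = map(stage, source)
    return list(source)

packets = [b"hello", b"world"]
checksums = pipeline(packets, checksum)
```

The argument in the post is that these two kinds of code are different enough that a language could treat them as distinct things rather than expressing both with the same constructs.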
In Yugoslavia in the 1980s, computers were a rare luxury. A ZX Spectrum or Commodore 64 could easily cost a month’s salary, and that’s if you could even get through the tough importation laws. Then in 1983, while on holiday in Risan, Voja Antonić dreamt up plans for a new computer, a people’s machine that could be built at home for a fraction of the cost of foreign imports. The Galaksija was born, and with it a computer revolution.
Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use.
If you attempt to make sense of Engelbart’s design by drawing correspondences to our present-day systems, you will miss the point, because our present-day systems do not embody Engelbart’s intent. Engelbart hated our present-day systems. If you truly want to understand NLS, you have to forget today. Forget everything you think you know about computers. Forget that you think you know what a computer is. Go back to 1962. And then read his intent. The least important question you can ask about Engelbart is, “What did he build?” By asking that question, you put yourself in a position to admire him, to stand in awe of his achievements, to worship him as a hero. But worship isn’t useful to anyone. Not you, not him. The most important question you can ask about Engelbart is, “What world was he trying to create?” By asking that question, you put yourself in a position to create that world yourself.
By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers–whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.
Multics is no longer produced or offered for sale; Honeywell no longer even makes computers. People edit on computers on their desktop so cheap and fast that not only do redisplay algorithms no longer matter, but the whole idea of autonomous redisplay in a display editor is no longer a given (although autonomous redisplay’s illustrious child, WYSIWYG, is now the standard paradigm of the industry). There is now no other kind of editor besides what we then called the “video editor”. Thus, all of the battles, acrimony, and invidious or arrogant comparisons in what follows are finished and done with, and to be viewed in the context of 1979 – this is a historical document about Multics and the evolution of an editor. It is part of the histories of Multics, of Emacs, and of Lisp.
Even if we win the right to own and control our computers, a dilemma remains: what rights do owners owe users?
To formulate a theory about a future society both very modern and not dominated by industry, it will be necessary to recognize natural scales and limits. We must come to admit that only within limits can machines take the place of slaves; beyond these limits they lead to a new kind of serfdom. Only within limits can education fit people into a man-made environment: beyond these limits lies the universal schoolhouse, hospital ward, or prison. Only within limits ought politics to be concerned with the distribution of maximum industrial outputs, rather than with equal inputs of either energy or information. Once these limits are recognized, it becomes possible to articulate the triadic relationship between persons, tools, and a new collectivity. Such a society, in which modern technologies serve politically interrelated individuals rather than managers, I will call “convivial.”
We present what we argue is the generic generalization of Conway’s “Game of Life” to a continuous domain. We describe the theoretical model and the explicit implementation on a computer.
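For reference, the discrete rules being generalized fit in a few lines. This is the standard Conway step (not the continuous model the paper develops), computed over a set of live cells:

```python
# One generation of Conway's Game of Life over a set of live (x, y) cells:
# a cell is alive next step if it has exactly 3 live neighbours, or has
# exactly 2 and is already alive.
from collections import Counter

def step(live):
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "blinker" oscillates between a vertical and a horizontal bar.
blinker = {(0, -1), (0, 0), (0, 1)}
```

The paper’s move is to replace the integer neighbour count and the hard birth/survival thresholds with smooth equivalents, so that the same dynamics make sense on a continuous domain.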