“One of the PL community questions that has been bugging me for a long time is what is and what isn’t a programming language […] the way we construct what is a programming language is social, groups decide what is in and out […] If we want to study this phenomenon, we cannot do that in the realm of PL itself, you will need theories about how social constructs work, and that is where feminism can help!”
If you look at software today, through the lens of the history of engineering, it’s certainly engineering of a sort—but it’s the kind of engineering that people without the concept of the arch did. Most software today is very much like an Egyptian pyramid with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves.
—Alan Kay, from A Conversation with Alan Kay
A common perspective is that types are restrictions. Static types restrict the set of values a variable may contain, capturing some subset of the space of “all possible values.” Under this worldview, a typechecker is sort of like an oracle, predicting which values will end up where when the program runs and making sure they satisfy the constraints the programmer wrote down in the type annotations. Of course, the typechecker can’t really predict the future, so when the typechecker gets it wrong—it can’t “figure out” what a value will be—static types can feel like self-inflicted shackles. But that is not the only perspective.
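To make the “self-inflicted shackles” concrete, here is a small sketch in Python type-hint style (the `parse` function is invented for illustration): the programmer can see the value will be an int, but a static checker, unable to predict the future, only knows the declared union and demands a runtime check.

```python
from typing import Union

def parse(token: str) -> Union[int, str]:
    # An int for numeric tokens, the raw string otherwise.
    return int(token) if token.isdigit() else token

value = parse("42")
# We can see value will be an int, but a checker like mypy only knows
# Union[int, str], so `value + 1` is rejected until we narrow the type:
if isinstance(value, int):
    print(value + 1)
```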
Probabilistic modeling and inference are core tools in diverse fields including statistics, machine learning, computer vision, cognitive science, robotics, natural language processing, and artificial intelligence. To meet the functional requirements of applications, practitioners use a broad range of modeling techniques and approximate inference algorithms. However, implementing inference algorithms is often difficult and error prone. Gen simplifies the use of probabilistic modeling and inference by providing modeling languages in which users express models, and high-level programming constructs that automate aspects of inference. Like some probabilistic programming research languages, Gen includes universal modeling languages that can represent any model, including models with stochastic structure, discrete and continuous random variables, and simulators. However, Gen is distinguished by the flexibility that it affords to users for customizing their inference algorithm. It is possible to use built-in algorithms that require only a couple of lines of code, as well as to develop custom algorithms that are better able to meet scalability and efficiency requirements. Gen’s flexible modeling and inference programming capabilities unify symbolic, neural, probabilistic, and simulation-based approaches to modeling and inference, including causal modeling, symbolic programming, deep learning, hierarchical Bayesian modeling, graphics and physics engines, and planning and reinforcement learning.
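(Gen itself is embedded in Julia, and none of the names below are Gen’s API; but the split it automates, between a generative model and an inference program, can be illustrated with a minimal importance-sampling sketch in Python.)

```python
import random

def model():
    # Prior: the coin's unknown bias, uniform on [0, 1].
    return random.random()

def likelihood(p, data):
    # Probability of the observed flips given bias p (Bernoulli).
    prob = 1.0
    for flip in data:
        prob *= p if flip else 1.0 - p
    return prob

def posterior_mean(data, n=100_000):
    # Importance sampling: weight prior samples by the likelihood.
    samples = [model() for _ in range(n)]
    weights = [likelihood(p, data) for p in samples]
    return sum(p * w for p, w in zip(samples, weights)) / sum(weights)

data = [True, True, True, False, True]    # four heads, one tail
print(posterior_mean(data))               # ~0.71, the Beta(5, 2) mean
```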
Bayesian Methods for Hackers is designed as an introduction to Bayesian inference from a computational/understanding-first, mathematics-second point of view. Of course, as an introductory book, we can only leave it at that: an introductory book. The mathematically trained may cure the curiosity this text generates with other texts designed with mathematical analysis in mind. For the enthusiast with less mathematical background, or one who is not interested in the mathematics but simply the practice of Bayesian methods, this text should be sufficient and entertaining.
The reason why I’m writing about [Six Memos for the Next Millennium] is that while I think that they are great memos about writing, the more I think about them, the more they apply to programming. Which is a weird coincidence, because they were supposed to be memos for writers in the next millennium, and programming is kind of a new form of writing that’s becoming more important in this millennium. Being a game developer, I also can’t help but apply these to game design. So I will occasionally talk about games in here, but I’ll try to keep it mostly about programming.
Programming time, dates, timezones, recurring events, leap seconds… everything is pretty terrible. The common refrain in the industry is “Just use UTC! Just use UTC!” And that’s correct… sort of. But if you’re stuck building software that deals with time, there’s so much more to consider.
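A concrete example of the “sort of”: a future event expressed in local time does not pin down a single UTC instant, because the UTC offset itself changes with daylight saving time. A small Python sketch, with dates chosen arbitrarily for illustration:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# A recurring "9:00 AM in New York" meeting has no single UTC time,
# because the UTC offset flips with daylight saving time.
tz = ZoneInfo("America/New_York")
before = datetime(2024, 3, 8, 9, 0, tzinfo=tz)   # EST, UTC-5
after = datetime(2024, 3, 12, 9, 0, tzinfo=tz)   # EDT, UTC-4

print(before.astimezone(ZoneInfo("UTC")))  # 2024-03-08 14:00:00+00:00
print(after.astimezone(ZoneInfo("UTC")))   # 2024-03-12 13:00:00+00:00
```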
Luna is a WYSIWYG visual and textual, purely functional data processing language. Its goal is to revolutionize the way people are able to gather, understand and manipulate data. Luna targets domains where data processing is the primary focus, including data science, machine learning, IoT, bioinformatics, computer graphics or architecture. Each domain requires a highly tailored data processing toolbox, and Luna provides both a unified foundation for building such toolboxes as well as a growing library of existing ones. At its core, Luna delivers a powerful data flow modeling environment and an extensive data visualization and manipulation framework.
Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input-output example pairs) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent. It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze and visualize data that feeds neural networks.
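The “search by optimization” loop is small enough to sketch end to end. A toy Python example (no neural network, just a two-parameter program space) that recovers an unknown program from input-output pairs via stochastic gradient descent:

```python
import random

# "Constraints on behavior": input-output pairs from an unknown program y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-10, 11)]

# Search a 2-parameter continuous program space (w, b) with SGD.
w, b, lr = random.random(), random.random(), 0.01
for _ in range(1000):
    x, y = random.choice(data)   # stochastic: one example at a time
    err = (w * x + b) - y
    w -= lr * err * x            # gradient of squared error w.r.t. w
    b -= lr * err                # gradient w.r.t. b

print(w, b)  # converges near (3, 1): the "program" has been found
```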
Our software is bullshit, our literary essays are too long, the good editors all quit or got fired, hardly anyone is experimenting with form in a way that wakes me up, the IDEs haven’t caught up with the 1970s, the R&D budgets are weak, the little zines are badly edited, the tweets are poor, the short stories make no sense, people still care too much about magazines, the Facebook posts are nightmares, LinkedIn has ruined capitalism, and the big tech companies that have arisen are exhausting, lumbering gold-thirsty kraken that swim around with sour looks on their face wondering why we won’t just give them all our gold and save the time. With every flap of their terrible fins they squash another good idea in the interest of consolidating pablum into a single database, the better to jam it down our mental baby duck feeding tubes in order to make even more of the cognitive paté that Silicon Valley is at pains to proclaim a delicacy. Social media is veal calves being served tasty veal. In the spirit of this thing I won’t be editing this paragraph.
“Texture v.2 is getting interesting now, reminds me of fabric travelling around a loom. Everything apart from the DSP is implemented in Haskell. The functional approach has worked out particularly well for this visualisation — because musical patterns are represented as functions from time to events (using my Tidal EDSL), it’s trivial to get at future events across the graph of combinators. Still much more to do though.”
Why do we need perceptually uniform color spaces? Because working with color in code is different from working with color in traditional design tools. Traditional tools encourage designers to think in manual workflows, with the color picker as the primary way of choosing color combinations. In this scenario, designers use their eyes to decide whether a color is right or wrong, and the RGB values play no role in that decision. Code is different, because programming languages encourage designers to think about colors as numbers or positions within the chosen color model. This skill is hard to learn if the numbers do not correspond with the output. Perceptually uniform color spaces allow us to align the numbers in our code with the visual effect perceived by our viewers.
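To see how far the numbers can drift from the visual effect, compare two colors that share the same 50% “lightness” in HSL: pure blue (240°, 100%, 50%) and pure yellow (60°, 100%, 50%). A small Python sketch converting sRGB to CIE L*, the scale on which equal numbers are supposed to look equally light:

```python
def srgb_to_Lstar(r, g, b):
    # Linearize 0-1 sRGB components, take relative luminance, convert to CIE L*.
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (r, g, b)]
    Y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    return 116 * Y ** (1 / 3) - 16 if Y > 0.008856 else 903.3 * Y

# Both colors sit at 50% "lightness" in HSL, yet they are nowhere near
# equally light to the eye:
print(srgb_to_Lstar(0, 0, 1))  # pure blue:   L* ~ 32
print(srgb_to_Lstar(1, 1, 0))  # pure yellow: L* ~ 97
```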
The idea of a programming language that can be molded by its users—I like the phrase language extensibility—is almost as old as our oldest programming languages, given the history of macros in Lisp. So why isn’t everyone already using macros to extend languages? Like garbage collection, macros may seem like a cool idea in principle, but with too much overhead to be practical (but with the overhead in program understanding, instead of program execution). Like first-class functions, macros add an extra dimension to code that may seem too mind-twisting for an average programmer. And like a type system, the theory behind hygienic macros may seem too daunting to be worth the extra guarantees that hygiene provides. Maybe so. But Beautiful Racket makes the case that the time for language extensibility has come. That’s why this book is important. It’s not an abstract argument about the benefits of macros or a particular style of macros. Instead, this book shows you, step by step, how to use Racket’s macro system on real problems and, as a result, get a feel for its benefits.
If you’re already a coder: Glitch makes every other development environment feel lonely and old-fashioned, as coding starts to feel more like simultaneous editing in Google Docs and less like the chore of reviewing pull requests. Everything you create is automatically deployed in realtime onto cloud servers, so there’s no provisioning of servers or management of infrastructure, just the joy of creating. If you’ve never coded before: Glitch is the place to start. We’ve got a friendly and welcoming community (we don’t tolerate people being jerks) and you start by remixing apps that already work, running on real web servers that you don’t have to learn how to manage. If you do get stuck, anyone in the Glitch community can come in and offer to help, just as easy as raising your hand.
It’s hard to pin down what Processing is, precisely. I admit, it can be confusing, but here it is: it’s both a programming environment and a programming language, but it’s also an approach to building a software tool that incorporates its community into the definition. It’s more accurate to call Processing a platform — a platform for experimentation, thinking, and learning. It’s a foundation and beginning more than a conclusion. Processing was (and still is) made for sketching and it was created as a space for collaboration. It was born at the MIT Media Lab, a place where C. P. Snow’s two cultures (the humanities and the sciences) could synthesize. The idea of Processing was to expand this synthesis out of the Lab and into new communities with a focus on access, distribution, and community. Processing is what it is today because of the initial decisions that Ben and I made back in 2001 and the subsequent ways we’ve listened to the community and incorporated contributions and feedback since the beginning. Processing was inspired by the programming languages BASIC and Logo in general, and specifically by John Maeda’s Design By Numbers, C++ code created by the Visual Language Workshop and Aesthetics and Computation Group at the MIT Media Lab, and PostScript. Processing wasn’t pulled from the air, it was deeply rooted in decades of prior work.
A modern data scientist often has to work on multiple platforms with multiple languages. Some projects may be in R, others in Python. Or perhaps you have to work on a cluster with no GUI. Or maybe you need to write papers with LaTeX. You can do all of that with Emacs and customize it to do whatever you like. I won’t lie, though: the learning curve can be steep, but I think the investment is worth it.
Mathematical notation provides perhaps the best-known and best-developed example of language used consciously as a tool of thought. Recognition of the important role of notation in mathematics is clear from the quotations from mathematicians given in Cajori’s A History of Mathematical Notations [2, pp.332,331]. Nevertheless, mathematical notation has serious deficiencies. In particular, it lacks universality, and must be interpreted differently according to the topic, according to the author, and even according to the immediate context. Programming languages, because they were designed for the purpose of directing computers, offer important advantages as tools of thought. Not only are they universal (general-purpose), but they are also executable and unambiguous. Executability makes it possible to use computers to perform extensive experiments on ideas expressed in a programming language, and the lack of ambiguity makes possible precise thought experiments. In other respects, however, most programming languages are decidedly inferior to mathematical notation and are little used as tools of thought in ways that would be considered significant by, say, an applied mathematician.
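Iverson’s point about executability is easy to demonstrate: stated in a programming language, a mathematical conjecture becomes a precise experiment a machine can run. A trivial (non-APL) illustration in Python:

```python
# "The sum of the first n odd numbers is n squared" as an executable statement:
conjecture = all(sum(range(1, 2 * n, 2)) == n * n for n in range(1, 1000))
print(conjecture)  # True: a thousand precise thought experiments, run instantly
```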
Truffle is a framework for writing interpreters with annotations and small bits of extra code in them which, when Truffle is paired with its sister project Graal, allow those interpreters to be converted into JIT-compiling VMs … automatically. The resulting runtimes have peak performance competitive with the best hand-tuned language-specific compilers on the market. For example, the TruffleJS engine which implements JavaScript is competitive with V8 in benchmarks. The RubyTruffle engine is faster than all other Ruby implementations by far. The TruffleC engine is roughly competitive with GCC. There are Truffle implementations of other languages in various stages of completeness.
In our work developing and understanding these near-term applications, we’ve developed a framework to help us work with the algorithms concretely. Today the Software and Applications team at Rigetti is excited to share the description of this work in our whitepaper [A Practical Quantum Instruction Set Architecture]. We have focused on a simple model that is compatible with the types of devices that are likely to be available first.
“It used to be the case that people were admonished to not ‘re-invent the wheel’. We now live in an age that spends a lot of time ‘reinventing the flat tire!’”
Eve is designed for live programming. As the user makes changes, the compiler is constantly re-compiling code and incrementally updating the views. The compiler is designed to be resilient and will compile and run as much of the code as possible in the face of errors. The structural editor restricts partially edited code to small sections, rather than rendering entire files unparseable. The pointer-free relational data model and the timeless views make it feasible to incrementally compute the state of the program, rather than starting from scratch on each edit. We arrived at this design to support live programming but these properties also help with collaborative editing.
The basic unit of meaning in Escher is the reflex. A reflex is a named black-box computational device, which interfaces with other objects in the linguistic environment through a set of named valves. Metaphorically, the valves can be viewed as labeled communication pipes coming out of the black-box.
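Escher’s real implementation aside, the reflex-and-valves picture is simple enough to sketch. A hypothetical Python rendering in which valves are labeled queues; none of these names come from Escher:

```python
from queue import Queue

class Reflex:
    """A named black box whose only interface is a set of named valves."""
    def __init__(self, name, valve_names):
        self.name = name
        self.valves = {v: Queue() for v in valve_names}  # labeled pipes

# A doubling device with one input valve and one output valve:
double = Reflex("double", ["in", "out"])
double.valves["in"].put(21)
double.valves["out"].put(double.valves["in"].get() * 2)
print(double.valves["out"].get())  # 42
```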
This post is a crash course on the notation used in programming language theory (“PL theory” for short). For a much more thorough introduction, I recommend Types and Programming Languages by Benjamin C. Pierce and Semantics Engineering with PLT Redex by Felleisen, Findler, and Flatt. I’ll assume the reader is an experienced programmer but not an experienced mathematician or PL theorist. I’ll start with the most basic definitions and try to build up quickly.
Programming is complicated. Different programs have different abstraction levels, domains, platforms, longevity, team sizes, etc. ad infinitum. There is something fundamentally different between the detailed instructions that go into, say, computing a checksum and the abstractions involved in defining the flow of data in any medium-sized system. I think that the divide between coding the details and describing the flow of a program is so large that a programming language could benefit immensely from keeping them conceptually separate. This belief has led me to design a new programming language, Glow, that has this separation at its core.
Storm is a free and open source distributed realtime computation system. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing. Storm is simple, can be used with any programming language, and is a lot of fun to use.
To commemorate this famous event, commonly known as the mother of all demos, SRI held a 40th anniversary celebration at Stanford today. As a small tribute to the innovative ideas that made up the demo, it is fitting to mention some of the programming languages that were used by Engelbart’s team. A few were mentioned in passing at the event today, making me realize that they are not that widely known.
Extempore is a programming language and runtime environment designed with live programming in mind. It supports interactive programming in a REPL style, compiling and binding code just-in-time. Although Extempore has its roots in ‘live coding’ of audiovisual media art, it is suitable for any task domain where dynamic run-time modifiability and good numerical performance are required. Extempore also has strong timing and concurrency semantics, which are helpful when working in problem spaces where timing is important (such as audio and video).
One important step towards a more systematic approach to online update is to make the dimension of interaction explicit. This is one of the things I’ve focused on in my own research, which I call interactive programming, although that term has probably already been laid claim to. I allow the user to step sideways in time, into a “counterfactual” execution where it is “as though” the program had been written differently from the outset. Inspired by Demaine et al.’s retroactive data structures, which are imperative data structures that permit modifications to the historical sequence of operations performed on them, I’ll refer to this notion of online update as retroactive update. Retroactive update allows the “computational past” to be changed. Self-adjusting computation (SAC) is another system based on retroactive update. SAC explores another crucial aspect of online update: efficient update, via an algorithm called change propagation. SAC’s commitment to retroactivity appears in the correctness of change propagation, which is defined as consistency with a from-scratch run under the modified code.
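As a toy illustration of the idea behind change propagation (not Acar’s actual SAC algorithm): cells remember their dependents, and editing an input re-runs only the computations downstream of it, with a result that must match a from-scratch run.

```python
class Cell:
    def __init__(self, value=None, compute=None, deps=()):
        self.value, self.compute, self.dependents = value, compute, []
        for d in deps:
            d.dependents.append(self)
        if compute is not None:
            self.value = compute()

    def set(self, value):
        # Change an input "retroactively", then propagate downstream only.
        self.value = value
        self._propagate()

    def _propagate(self):
        # A real implementation would schedule updates so that shared
        # dependents are not recomputed twice; this toy just recurses.
        for d in self.dependents:
            d.value = d.compute()
            d._propagate()

x = Cell(value=2)
y = Cell(value=3)
s = Cell(compute=lambda: x.value + y.value, deps=(x, y))
t = Cell(compute=lambda: s.value * 10, deps=(s,))
x.set(7)        # only s and t are recomputed; y is untouched
print(t.value)  # 100, as if the program had said x = 7 all along
```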
While building rules.io we found ourselves connecting to lots of APIs. We also found ourselves building user interfaces that we knew would eventually connect to an API of our users’ choosing – but we wouldn’t know which API until runtime. Working with APIs in this very dynamic way led us to build some interesting technology, and gave us some fresh perspectives on how best to use API-based services from web and mobile applications.
Last time, we talked about an interesting generalization of Conway’s Game of Life, walked through the details of how it was derived, and investigated some strategies for discretizing it. Today, let’s go even further and finally come to the subject discussed in the title: Conway’s Game of Life for curved surfaces.
“I have repeatedly been confounded to discover just how many mistakes in both test and application code stem from misunderstandings or misconceptions about time. By this I mean both the interesting way in which computers handle time, and the fundamental gotchas inherent in how we humans have constructed our calendar — daylight savings being just the tip of the iceberg.”
Anyone who has tried to edit code on the iPad through a traditional text view knows that it doesn’t work well. Editing source code character by character is a concept wedded to the keyboard, and it is inappropriate for the iPad, a device with no keyboard. Lisping abandons this model and allows you to edit your code via the parse tree. Rather than manipulating ranges of characters, Lisping focuses on selecting, creating and moving syntax elements, a task ideally suited to the iPad’s touchscreen interface, and also more than a little bit fun.
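The parse-tree idea is easy to picture in miniature: represent code as a tree, and make edits by moving whole syntax elements instead of character ranges. A toy Python sketch (not Lisping’s implementation):

```python
def parse(src):
    # Minimal s-expression reader: "(+ 1 (* 2 3))" -> ['+', '1', ['*', '2', '3']]
    tokens = src.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        if tokens[i] == "(":
            node, i = [], i + 1
            while tokens[i] != ")":
                child, i = read(i)
                node.append(child)
            return node, i + 1
        return tokens[i], i + 1
    return read(0)[0]

tree = parse("(+ 1 (* 2 3))")
# A structural edit: swap two syntax elements, never touching raw characters.
tree[1], tree[2] = tree[2], tree[1]
print(tree)  # ['+', ['*', '2', '3'], '1']
```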