“Cascadian Sigils Used for Warding Away Drones from Structures and Fields” - Courtesy of Institute of Atemporal Studies. Exhibit at Weird Shift Con 3013. #WSC

I’m about to say the kind of thing that gets me shunned from “serious” conversations in 3…2…1…

These are a fantastic thought experiment, but wouldn’t they do better if they A) incorporated the language of those most affected by said drone strikes and B) were actually installed in said structures and fields? Just… sigil efficacy and all…

I want to know if it’s possible to make graphics, like QR codes, that affect drones once their cameras process the pattern.

Though Interdome DID learn me a thing about the Cascadian Language (http://interdome.tumblr.com/post/52737713908), Teratocybernetics’ question still stands. I think the answer is “Yes, If…” and As A Science Fiction Writer, I would like to know the parameters of that “IF.”

Everyone likes reblogging this particular post (thanks!) but the full exposition of Cascadian Drone Sigils went up as part of Murmuration, and I highly suggest checking that out for many more details about the sigils and their historical use in Cascadia.

http://murmurationfestival.tumblr.com/post/54186398655/cascadian-drone-sigils-an-instance-of-drone-culture

As for the question about QR codes, that is technically unlikely. The drones would have to be loaded with software to detect a particular code, and as drones (at least the military drones of Operation Green Perimeter) are designed to record video for human eyes, it is unlikely there is any sort of “machine vision” software installed. The video is just video to the drone, not information. Patterns have meaning to human eyes but not to computers, unless those computers are programmed to detect them.
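To make that distinction concrete, here is a toy sketch (mine, not from the thread; the frame and marker are invented) of what “being programmed to detect a pattern” means. To a plain video pipeline a frame is just a grid of numbers; finding a marker in it requires purpose-built code like this:

```python
# A tiny "frame" of pixel values and a marker pattern to search for.
# Without the detection function below, the frame is just numbers.
FRAME = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
MARKER = [
    [1, 1],
    [1, 1],
]

def find_marker(frame, marker):
    """Return (row, col) of the first occurrence of marker, else None."""
    mh, mw = len(marker), len(marker[0])
    for r in range(len(frame) - mh + 1):
        for c in range(len(frame[0]) - mw + 1):
            if all(frame[r + i][c + j] == marker[i][j]
                   for i in range(mh) for j in range(mw)):
                return (r, c)
    return None

print(find_marker(FRAME, MARKER))  # -> (1, 1)
```

A drone that only relays video never runs anything like `find_marker`; the pattern passes through it unrecognized.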

However, if there were some kind of facial recognition used on the video feed, there might be a way of affecting these tracking algorithms (see William Gibson’s “ugly shirt” in Zero History). There is always a way of confusing an algorithm, if someone figures out what bugs it contains.

A better way of messing with a drone’s camera is to fuck with the data it is meant to pick up: dazzling it with a laser or an IR source. This is messing with the camera sensor itself rather than with any computer algorithm, and every drone, of course, relies upon its camera sensor.

It should be noted that drone sigils are not understood to affect the drones visually. That is, while they are placed visibly, they are not necessarily meant to be visible to the drone itself. They will work on the drones regardless, according to the makers. The relationship between the sigil and the drone is not based upon the drones’ cameras, it seems.

I think the Ugly Shirt is more the idea here, with QR being (for me) the closest out-in-the-world analog. Something that is of a class of things that a drone or other semi-autonomous agent is programmed to recognise, but which is devised in such a way as to play on the drone’s flaws, and cause alterations in the system.

A visual virus for drones.

Also, for those who maybe don’t know sigil theory, sigils never need to be seen, unless they Need To Be Seen. Their existence, as Interdome indicates, is a thing that allows the unconscious will and intent of the sigil worker to contact and interact with the numinous/collective unconscious/superflow/whatever, and find the path of least resistance to obtaining those results in the physical world.

Let me explain you a thing. We have four major classifications of languages (indifferent to medium, in general) which correspond to four major classifications of machines: regular, context-free, context-sensitive, and recursively enumerable. There’s this thing in high-quality software engineering and computer architecture called code completion, where things like NASA software are formally proven to have no bugs. Other industries use code-completion proofs too: Eternal Darkness on the GameCube has no unhandled bugs; it’s NASA-grade software and provably bug-free.

Let’s go back a bit. Regular languages correspond to finite state machines; what is a finite state machine? The switch that operates the light bulb in your room? That’s a finite state machine; tell me, do you know how to hack an arbitrary light switch? In general, you can’t unless you are physically located at the switch and capable of switching it. We can talk about the metalanguage of the light switch which is the power grid, but we’d need to ask questions about the security and degree of bug-freeness of the power grid.
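The light switch can be written out as an actual finite state machine; this toy sketch (mine, not from the thread) shows why its behavior is exhaustively knowable:

```python
# A light switch modeled as a two-state finite state machine.
# The full transition table IS the machine: every (state, event)
# pair has a fixed, known outcome, so nothing unexpected can happen.
TRANSITIONS = {
    ("off", "toggle"): "on",
    ("on", "toggle"): "off",
}

def step(state, event):
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "off"
for event in ["toggle", "toggle", "toggle"]:
    state = step(state, event)
print(state)  # prints: on
```

There is no input you can feed this machine that produces a state outside the table, which is the sense in which a pure finite state machine is “internally secured.”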

For now, we can say that finite state machines are internally secured; they will never produce unexpected behavior within themselves, so they are generally not hackable. It is highly likely the camera systems of the drones are finite state machines that are isolated from the critical systems of the drone’s operation. Even if they’re integrated into the critical operations of the drone, we have three classifications of languages which can be rendered totally decidable: regular, context-free, and context-sensitive. If the behavior of a system is rendered totally decidable from the point of view of the drone operators, then there isn’t a bug to exploit. That’s code completion. Even if something got into the system, the system can be compartmentalized and individual subsystems can be reset; it is probable that the designers of the drones take advantage of virtual machines to isolate and control critical systems, so the drones can’t be compromised, in part or totally, by QR-code equivalents.
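The “totally decidable” point can be illustrated in miniature: with a finite state space you can enumerate every reachable state and check that all of them are handled. The states below are invented for illustration, not taken from any real drone:

```python
from collections import deque

# Toy model: a camera subsystem as a finite state machine.
# States and transitions are made up for the sake of the sketch.
TRANSITIONS = {
    "idle":      {"power_on": "streaming"},
    "streaming": {"glitch": "error", "power_off": "idle"},
    "error":     {"reset": "idle"},   # every fault path leads back to a reset
}

def reachable_states(start):
    """Breadth-first enumeration of every state the machine can reach."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for nxt in TRANSITIONS.get(state, {}).values():
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Code completion in miniature: verify that no reachable state falls
# outside the handled set, i.e. there is no undefined behavior.
handled = set(TRANSITIONS)
assert reachable_states("idle") <= handled
```

Because the enumeration terminates, the check is a proof over the whole model; that is exactly what stops being possible once a system climbs past the decidable language classes.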

The major question in asking about a system isn’t in general the system itself but the metasystem which controls it: in many cases, the people who design, build, and operate the systems. If the designers are incompetent then the system will have vulnerabilities, and I can attest to the general incompetence of the security and IT community, so I figure there are vulnerabilities, but I highly doubt they are in the camera system. The camera system is in large part for human eyes and, in large part, controlled by human hands, even if only remotely. The machine is likely to be built in at least three separable component systems: the camera feed (totally finite state), the eye-recognition system (not necessarily finite state, but likely with only read-only access to the camera feed), and the interface (likely a feedback system between the control surfaces of the drone, the joystick system over an encrypted R/C transceiver, and the eye-recognition system).

The vulnerable system is likely to be a component within the interface of the drones; hence the articles we’ve seen over the years about engineers in other countries trying to hijack drones. The drones rely on a radio signal to operate; in the absence of that signal, they are unguided and vulnerable to crashing. A more nuanced vulnerability could probably be located by an evolutionary algorithm analyzing feedback between the control surfaces of the drones, their radio signals, and the sensor system. Each component has to react in a particular way, so a computer should in general be able to isolate causal influences to each system. A human operator influences the movement of the drone, which makes some of the control movements unpredictable, but that unpredictability is precisely the criterion by which we can rule out the human influence. For autopilot and auto-tracking behaviors, there has to be a non-trivial feedback relationship between the sensor systems and the control surfaces: feed a drone sensory information and watch it react. Chaos likely exists somewhere in the autonomic system, so with sufficient sensory flashes you can probably get the system to throw an epileptic fit. Shutting off the sensory systems would not generally be an option, and neither would severing the feedback between the sensory systems and the control surfaces. But we’re not talking about a QR code or a single-system vulnerability like the camera’s; the vulnerability is likely dynamic, distributed, hard for human engineers to predict, and probably in the class of NP complexity.
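As a toy sketch of the evolutionary approach described above (everything here, including the hidden “trigger” pattern and the simulated drone, is invented for illustration), a search can climb toward a flash sequence that provokes a non-random response:

```python
import random

# Simulated drone autopilot: it reacts only when a hidden flash
# pattern appears in its sensor feed. The trigger is made up.
TRIGGER = [1, 0, 1, 1]

def drone_responds(flashes):
    """True if the trigger occurs anywhere in the flash sequence."""
    return any(flashes[i:i + len(TRIGGER)] == TRIGGER
               for i in range(len(flashes) - len(TRIGGER) + 1))

def fitness(flashes):
    """Longest prefix of the trigger found in the sequence -- the
    non-random response signal the evolutionary search climbs."""
    best = 0
    for i in range(len(flashes)):
        k = 0
        while (i + k < len(flashes) and k < len(TRIGGER)
               and flashes[i + k] == TRIGGER[k]):
            k += 1
        best = max(best, k)
    return best

def evolve(seq_len=8, pop_size=30, generations=80, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(seq_len)]
           for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        history.append(fitness(pop[0]))
        # Elitism: keep the best half, refill with mutated copies.
        survivors = pop[:pop_size // 2]
        children = [[b ^ (rng.random() < 0.1) for b in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness), history

best, history = evolve()
```

Because of the elitism, the best fitness per generation never regresses; the search only ever narrows in on sequences the simulated drone reacts to, which is the candidate-whittling behavior described above.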

This tracks with what I was thinking: The more autonomous the system, the more likely an exploit based on sensory data. That complexity and dynamism of exploit isn’t really a problem, per se, but just requires a viable, repeatable Starting point. …You know, for the Science Fiction Stories We’re Writing.

The hard part is designing and building the sensory projection array that would allow for dynamic, real-time analysis of the drone’s reactions to sensory flashing. Once you have a couple of those arrays, you could deal with drones in your airspace pretty easily. Use the Japanese technique of looking for differential atmospheric conditions to triangulate the position of the drones; start with a randomized sequence of coherent and decoherent EMF and sound flashes; the evolutionary algorithm does statistical analysis looking for non-random drone responses and whittles down to sets of regular languages, mediated in sensory flash sequences, which might have some causal effect on the drone’s control surfaces or active systems. Candidates are strobed, and candidates that fail to produce an effect within some specified range of precision are eliminated. No need to worry generally about damaging the drones with high-powered burst transmissions, because blinding or deafening the drones isn’t an undesirable consequence in general, unless you only want to analyze them rather than bring them down in your airspace.
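The triangulation step can be sketched generically (a bearing-intersection toy with made-up station positions, standing in for the atmospheric technique mentioned above): two ground stations each measure a bearing to the drone, and the intersection of the two bearing lines estimates its position.

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two rays, each given as an (x, y) origin plus a bearing
    in radians measured from the +x axis. Returns the crossing point."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + t*d1 == p2 + s*d2 for t via 2x2 cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Toy check: drone actually at (3, 4); stations at (0, 0) and (6, 0).
x, y = triangulate((0, 0), math.atan2(4, 3), (6, 0), math.atan2(4, -3))
print(round(x, 6), round(y, 6))  # -> 3.0 4.0
```

Two arrays are the minimum for a fix; a third would let you reject bad bearings and track altitude as well.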

Hmm. Yes.

world’s ugliest t-shirt style