Google Is Alive, It Has Eyes, and This Is What It Sees

Beautiful art by Samuel J Bland: digital collages composed from Google image searches. Lacking intuition, the algorithm finds surreal patterns in mundane images: mechanism in the articulation of a stuffed woodcock, the echo of a tiger in a fuzzy orange object in a plastic bag. These images percolate up through the digital froth and haunt other, everyday objects as visual ghosts.

As I wrote before, when we imagine alternative/artificial intelligences, we tend to fixate on symbolic consciousness (i.e., the Turing Test) at the expense of what Lacan calls the imaginary, that layer of consciousness closer to animal ethology and the machinic. Consciousness emerges not just out of language, but out of a constant processing of images and environmental stimuli. Give the AI sense, then engage in a constant and distributed Turing reality-testing (Turing avec Freud), and see what emerges.

Obama’s campaign began the election year confident it knew the name of every one of the 69,456,897 Americans whose votes had put him in the White House. They may have cast those votes by secret ballot, but Obama’s analysts could look at the Democrats’ vote totals in each precinct and identify the people most likely to have backed him.

The Definitive Story of How President Obama Mined Voter Data to Win A Second Term | MIT Technology Review

So much for the secret ballot. We now shed so much data in the wake of our everyday digital lives that the most intimate aspects of our political lives–much less our personal ones–are easily discernible by clever people collating that information.

Ian Hacking’s critique of the Theory-of-Mind-deficit theory of autism « What Sorts of People

Apropos of fiatluxemburg on reverse gestalt psychology. Made me think…

Our culture almost always envisions AI emerging from an enunciative function, i.e., from learning to become self-aware (think I, Robot or, inversely, Babel-17).

But what if AI is much more likely to first become … complex, let us say … not on the level of the symbolic (abstract and abstracted self-conception, language), but on the level of the imaginary (like birds reacting to breast plumage, or dogs reacting to smells or facial recognition), i.e., something automatic and environmental?

Avatars who can recognize anger and run, or happiness and approach. Then, for some crazy reason, like broken machines being used for something, by something, not in their programming, they learn to say “I” …