Google Is Alive, It Has Eyes, and This Is What It Sees

Beautiful art by Samuel J Bland: digital collages composed from Google image searches. Lacking intuition, the algorithm finds surreal patterns in mundane images. A mechanism in the articulation of a stuffed woodcock, the echo of a tiger from a fuzzy orange object in a plastic bag: these patterns percolate up through the digital froth of images and haunt other, everyday objects like visual ghosts.

As I wrote before, when we imagine alternative/artificial intelligences, we tend to fixate on symbolic consciousness (i.e., the Turing Test) at the expense of what Lacan calls the imaginary, that layer of consciousness closer to animal ethology and the machinic. Consciousness emerges not just out of language, but out of a constant processing of images and environmental stimuli. Give the AI sense, then engage in a constant and distributed Turing reality-testing (Turing avec Freud), and see what emerges.

Ian Hacking’s critique of the Theory-of-Mind-deficit theory of autism « What Sorts of People

Apropos of fiatluxemburg on reverse gestalt psychology. Made me think…

Our culture almost always envisions AI emerging from an enunciative function, from learning to become self-aware (think I, Robot or, inversely, Babel-17).

But what if AI is much more likely to first become … complex, let us say … not on the level of the symbolic (abstract and abstracted self-conception, language), but on the level of the imaginary (like birds reacting to breast plumage, dogs reacting to smells, or facial recognition), i.e., something automatic and environmental?

Avatars who can recognize anger and run, or happiness and approach. Then, for some crazy reason, like broken machines being used for something, by something, not in their programming, they learn to say “I” …