Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, April 08, 2009

If I Only Had a Brain

In my prior post, I record another thrilling episode in my ongoing exchange with some superlative futurologists over at Michael Anissimov's place, and of course that exchange has continued still further onward. I had said, you will recall, "I guess I find it rather perplexing that anybody here would think I would find this development surprising [Wired reported on the development of a complex new computer program and its uses], or that superlative futurologists see this sort of thing as a 'vindication' of their faith-based perspective. Of course, for True Believers there are only vindications when all is said and done, after all."

To this, "Thom" replied: "EVERYTHING you say, everywhere I look, is exactly that. Something like deducing physics from raw data by a computer 'is just a superlative fantasy example.' As far as I understand you. If not, what is your point then?"

What, you think readers of poetry have never seen a calculator? Look, you quite palpably don't understand me. And I honestly don't think you are really interested in understanding me, or in grasping the point at hand. Even if you didn't read any of the thousands upon thousands of words in which I've elaborated my superlativity critique (Michael, to his credit, linked to it from the beginning -- even if he remains unconvinced he understands there is a context out of which my present comments arise) I've already provided plenty of arguments over the course of our own conversation that complicate the facile gloss you attribute to me here.

"Roko," on the other hand, keeps demanding from me what he calls "the facts" and "rigorous arguments" over and over again. He seems to be quite bolstered by this joyless ritual, which no doubt is its primary objective. Needless to say, the demand for "facts" here actually looks to me to constitute a radical circumscription of the kinds of attention one can usefully and intelligently pay to the discourses he identifies with, as a way of insulating himself from discomfiting criticism.

Not that any of you will listen or care, any more than you ever do, but once again, this time with feeling: Superlativity, in my view, is a discourse, not a research program. It relies for its force and intelligibility on the citation of other, specifically theological/transcendentalizing discourses; it is a way of framing a constellation of descriptions taken for facts, embedding them into a narrative that solicits personal identification and forms the basis for moralizing advocacy.

I don't think there is any way of talking about this sort of thing that "Roko" would regard as "fact-based," "rigorous," and not "name calling" according to his reckoning, not so much because the critique is counter-factual, sloppy, or essentially name-calling but because it is pitched at a level of generality and attuned to concerns he has swept off the field of consideration from the get-go. This circumscription, by my lights, is an indictment of "Roko" and not of me.

Lots of intelligent people care about and attend to the world differently than superlative futurologists seem to do, and this matters for Robot Cultists in at least two ways that they don't seem to grasp. First, since even technoscientific progress is not just a matter of a socially-culturally-politically indifferent accumulation of inventions but a process fueled and distributed by norms, laws, expectations, and vicissitudes in social struggles, they simply won't get anything remotely like what they want in the way of useful technoscientific change unless they gain more understanding of and sensitivity to the sorts of issues I raise and which they seem to think they can simply define or ignore out of existence. Second, given their apparent faith that technoscience is about to nudge us all via "The Singularity" into something Utterly Unimaginably Other, and given that in their heart of hearts they pine to be rubbing elbows with post-biological superintelligences of the Robot God variety and so on, surely they should find some pause in the fact that they are apparently suffused with revulsion and incomprehension at the confrontation with even a conventional intelligence like mine that differs from their own simply in a handful of assumptions and aspirations and matters of style. Doesn't this set off any "be careful what you wish for" buzzers in their brains?

Michael, for his part, dredges up the barnacle-encrusted "Sokal Affair," and then declares: "Lots of smart people (including very liberal smart people) in SF make fun of Berkeley professors, so this derision is hardly limited to the 'futurological rodeo.'" I must say that I am a bit surprised that Michael would want to cast a net of derision over the whole cohort of Berkeley professors, or even over the whole cohort of Berkeley professors in the humanities, given how many of them are incomparably more accomplished in the eyes of the world than I am. But, yes, it's true, as Michael says, certain rather embarrassingly ignorant people like to laugh at Berkeley, as they do "educated elites" more generally. I understand that in Indianapolis a couple of years ago there was a billboard announcing the wish of some of the good real Americans living there that America should "Bomb Berkeley," for example.

So, too, certain rather embarrassingly ignorant misguided people, caught up in the storm-churn of disruptive technoscientific change, seem to have gotten the idea that software is going to spit out a Robot God either to solve all their problems for them or end the world, that either a perfectly efficacious medicine or a brain scan that is somehow the same thing as them rather than a snapshot of them is going to spit out an invulnerable immortalization of them, and that nanoscale technique is going to spit out a superabundance through which we can circumvent the impasse of stakeholder politics in a still-finite world we share with an ineradicable diversity of peers.

I am quite content to count myself among the number who laugh at the know-nothings and futurologists and not at the Berkeley professoriate as a cohort. So are many of the people on whom superlative futurologists depend for the actual scientific and political progress they go on to sensationalize and hyperbolize in order to sell their scam or cope with their fear of death and contingency or indulge their social or bodily alienation or lose themselves in wish-fulfillment fantasies inspired by science fiction or try to gain some sense of purchase, however delusive, on their precarious inhabitation of a dangerously unstable corporate-militarized world or whatever it is that made them personally become silly Robot Cultists in the first place.

16 comments:

Anonymous said...

You're right Dale, few will listen to you about superlativity, now or ever. Meaning, few will start saying "yes Dale, you're right, transhumanist discourse is wrong or silly or harmful." To the contrary, a lot of serious, bright, and thoughtful people will likely continue to see transhumanist discourse as having value no matter how many times you repeat your critique.

That's not to say your critique is not valuable. It improves the quality and sophistication of the discourse, if not immediately and in response to every word you say, then certainly over time. So cheer up, just don't hope or expect that the result of all your criticism will be to bring a bunch of ideas crashing down around you while you laugh in victory. It just doesn't work that way.

Anonymous said...

I don't think advances in evolutionary algorithms advance or detract from the notion of The Singularity. It is an impressive development, but it doesn't prove that any of the implicit assumptions in the Singularity premise are true.

Dale Carrico said...

You're right Dale, few will listen to you about superlativity, now or ever.

Well, it's a philosophical critique and only a minority of people engage in such philosophical critique. By the way, this isn't an elitist condemnation of the majority -- people can be thoughtful without being philosophical, strictly speaking, after all.

Meaning, few will start saying "yes Dale, you're right, transhumanist discourse is wrong or silly or harmful." To the contrary, a lot of serious, bright, and thoughtful people will likely continue to see transhumanist discourse as having value no matter how many times you repeat your critique.

Ah, poor little Robot Cultist. Nearly everybody who comes into contact with the transhumanists decides they are silly and wrong and dismisses them on the spot as kooks, and quite rightly so. No doubt some people will continue to be drawn to superlativity, for reasons like the ones I mention at the end of this post: namely, "in order to sell their scam[s] or cope with their fear of death and contingency or indulge their social or bodily alienation or lose themselves in wish-fulfillment fantasies inspired by science fiction or try to gain some sense of purchase, however delusive, on their precarious inhabitation of a dangerously unstable corporate-militarized world[.]"

Only a vanishingly small minority of people are transhumanist-identified or avowed singularitarians and so on. Thoughtfulness is not exactly the quality these people share in my view.

Most people don't take the superlative futurologists and Robot Cultists seriously enough in the first place to understand why I devote the time I do to critiquing them. I don't think many grasp that superlative futurology is a symptom and clarifying extreme expression of corporate-militarist developmental discourse more generally, and that such futurology is the quintessential ideological expression of neoliberalism.

I do think it is regrettable that I have not managed to attract more attention from critics of corporate-militarism, but not convincing a few dumb boys who fetishize their toys to give up their Robot Cult is hardly any kind of abiding regret where the critique of superlativity is concerned.

Anonymous said...

As the original anonymous in this thread, I think you really sidestepped my point.

Very few people find transhumanist discourse compelling, but this is not because of you; it is because of the nature of the discourse. Everyone acknowledges this stuff is way out there.

My point was a completely personal one: you, Dale Carrico, aren't having the impact that you seem to fancy you are. Your impact is in improving the transhumanist discourse, not bringing it down.

Dale Carrico said...

I say that something that looks to me stupid and dangerous is stupid and dangerous and then I say why. I don't have any expectation that I could singlehandedly "bring down" even anything so marginal and silly as the Robot Cultists, nor certainly overcome through critique the neoliberal ideology that seems to me essentially futurological in ways that Robot Cultism speaks to, but I must admit I don't see the Robot Cultists "improving" particularly, either, certainly not as a consequence of their taking me of all people seriously. As far as I can see they can't even read me.

Anonymous said...

If you can't see it you're not paying attention. Michael Anissimov and the others who spar with you regularly really love that you've devoted yourself to full-time transhumanism criticism, although they probably wouldn't admit it.

They say you can judge a man's importance by the quality of his enemies. If that's true then you've certainly upped the reputation of transhumanists -- which is not saying terribly much for either you or transhumanism, but everyone has to start somewhere. You're the loyal opposition, Dale. Nothing wrong with that; I was just curious whether you were aware of the symbiotic relationship you have with them.

Anonymous said...

But getting off of the endlessly interesting subject of Dale Carrico for a moment, what do you mean when you say "the neoliberal ideology that seems to me essentially futurological in ways that Robot Cultism speaks to"? Could you name me some of the neoliberal ideologues I might have read in The New Republic or seen on TV so I know what you are talking about?

Dale Carrico said...

David Harvey's A Brief History of Neoliberalism provides a good and quite readable introduction to the topic. But there are countless thousands upon thousands of books and articles that take it up for anybody who is inclined to explore.

Not only do I not agree with you that the people who "are paying attention" see intelligent machines or embryonic soon-to-be-intelligent machines thronging the world (the number of leprechauns in the world is comparable); nor do I agree that the number of actually published and cited consensus scientists who expect to live to see a world in which nanofactories have delivered superabundance or NBIC med-techs have delivered superlongevity would overcrowd a hotel elevator were they to share it; indeed, I suspect that those who are paying attention are noticing that, rather than superlative technologies on the horizon, we are confronted with a world in which we are losing the capacity to build and maintain infrastructure of a kind we built a century ago, while we teeter on the brink of collapse as we reap the legacies of extractive-industrial-petrochemical technoscience boosted by futurological cheerleaders as clueless in mid-century as the superlative futurologists are now.

And, no, I don't agree with you that transhumanists are thrilled at my criticisms.

jimf said...

> And, no, I don't agree with you that transhumanists are thrilled
> at my criticisms. . .

You're certainly right about that. They would shut you up if they
could. No one is permitted to make these sorts of criticisms in
forums under the control of transhumanists themselves, even though
they're pretty obvious criticisms when you think about it -- if
folks as divergent as you, me, John Bruce, Mike Darwin, Paulina Borsook,
Jaron Lanier, and -- who else? -- Annalee Newitz? can independently
arrive at awfully similar conclusions about the >Hists, then
ya gotta think something's going on here.

That's another characteristic that screams CULT! They're so
damned thin-skinned that they screech in outrage and agony
when any "outsider" dares to question **any part** of what
Richard Jones recently called their "belief package".

And they call that "science" with a straight face!

Anonymous said...

Dale, when I said "if you can't see it you're not paying attention," I meant "see" the attention that transhumanists give you. They are 90% of your readership, and they pay attention because you spend so much time attacking them. I don't know what transhumanist types would read all day if your blog wasn't here.

It's just like how half of Rush Limbaugh's audience is reportedly comprised of liberals who want to know what he's saying. Not a difficult thing to figure out, if you are paying attention.

I didn't mean that you were blind not to notice the imminent coming of robotopia. Really, who gives a fuck? That's one of the least interesting parts of transhumanist discourse, yet it attracts a disproportionate amount of attention from both you and (admittedly) the movement adherents.

Anonymous said...

You refer me to a book? Thanks a bunch, like I'm going to read that.

From the wikipedia entry on Neoliberalism:

"In the United States, neoliberalism can also refer to a political movement in which members of the American left (such as Michael Kinsley, Robert Kaus, Mickey Kaus, and Randall Rothenberg) endorsed some free market positions, such as free market economics and welfare reform."

And on the opposite side... Noam Chomsky. I think I get it.

I think you'll find transhumanist tendencies in both the neolib and neocon branches of the political spectrum. It's the paleolibs and cons who seem to have the biggest problem with the thrust of the discourse.

Dale Carrico said...

transhumanists... are 90% of your readership

I don't think that is true, and I would find it truly depressing if it were.

Dale Carrico said...

You refer me to a book? Thanks a bunch, like I'm going to read that.

I'm going to assume you're yanking my chain.

jimf said...

> I didn't mean that you were blind not to notice the imminent
> coming of robotopia. Really, who gives a fuck? That's one of the
> least interesting parts of transhumanist discourse, yet it
> attracts a disproportionate amount of attention from both you
> and (admittedly) the movement adherents.

Well I, for one, give a fuck.

For me (at least, once upon a time), the coming of "robotopia"
was **the** most plausible element of >Hism (Ayn Randism,
cryonics, and even molecular nanotechnology were almost
below my radar back then).

From reading Moravec's _Mind Children_ when it came out back
in '88, through Eliezer Yudkowsky's Web-published "Staring into
the Singularity", to the '99 pair of books (Moravec's
_Robot: Mere Machine to Transcendent Mind_ and Kurzweil's
_The Age of Spiritual Machines_), it's been the plausibility
and prospects (or lack thereof) for artificial intelligence
that's been at the core of the >Hist agenda.

The AI-fueled mechanism for the "Singularity" -- as Damien
Broderick put it:

"If machine-minds outstrip ours, and then continue to evolve or
rewrite their own intellects at the rate of a new generation per year,
and then per month, and then per day..."

is the hope that the AI dreamers have nursed in private
(and even occasionally in public) since the heyday of Marvin Minsky
in the 60s.
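Broderick's compounding-generations image can be sketched as a toy calculation. Everything below is an illustrative assumption (a fixed shrink factor for the interval between generations), not a claim from the post or from Broderick:

```python
# Toy model of Broderick's "generation per year, then per month, then per
# day" image: each machine generation arrives after an interval that
# shrinks by a fixed factor. All numbers here are illustrative assumptions.

def generations_within(horizon_years, first_interval=1.0, shrink=0.5):
    """Count how many generations fit within `horizon_years` when the
    interval between generations shrinks by `shrink` each time."""
    t, interval, count = 0.0, first_interval, 0
    while t + interval <= horizon_years:
        t += interval
        interval *= shrink
        count += 1
    return count

# With intervals halving, the total elapsed time converges to 2 years
# (a geometric series), so the generation count grows without bound as
# the horizon approaches 2 -- the mathematical germ of the "singularity"
# analogy Vinge later drew.
print(generations_within(1.9))    # 4 generations
print(generations_within(1.999))  # 10 generations
```

The point of the sketch is only the geometric-series intuition: under the shrinking-interval assumption the process packs infinitely many "generations" into a finite span, which is what licenses borrowing the word "singularity" from mathematics.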

The superficial plausibility of AI, together with the fruits of
Moore's Law (which **everybody** has gotten to see first-hand during
the past 15 years) made that rocket-ride seem almost ready to
launch.

Sure, there's biotech (and >Hism a la Stapledon's _Odd John_
or that 60s _Outer Limits_ episode "The Sixth Finger") but biotech --
despite the explosion of knowledge since the discovery of
the structure of DNA in the 50s -- just hasn't seemed to translate into progress
at the doctor's office or the drug store with anything like
the reliability of the succession of consumer goodies available
at CompUSA (back when CompUSA was, uh, in business).

jimf said...

> The AI-fueled mechanism for the "Singularity" -- as Damien
> Broderick put it:
>
> "If machine-minds outstrip ours, and then continue to evolve or
> rewrite their own intellects at the rate of a new generation per year,
> and then per month, and then per day..."
>
> is the hope that the AI dreamers have nursed in private
> (and even occasionally in public) since the heyday of Marvin Minsky
> in the 60s.

"In order for a program to improve itself substantially it would
have to have at least a rudimentary understanding of its own
problem-solving process and some ability to recognize an
improvement when it found one. There is no inherent reason
why this should be impossible for a machine. Given a model of its
own workings, it could use its problem-solving power to work
on the problem of self-improvement. . .

Once we have devised programs with a genuine capacity for
self-improvement a rapid evolutionary process will begin. As
the machine improves both itself and its model of itself,
we shall begin to see all the phenomena associated with the
terms 'consciousness,' 'intuition,' and 'intelligence' itself.
It is hard to say how close we are to this threshold, but once
it is crossed the world will not be the same."

-- Minsky, "Artificial Intelligence," _Scientific American_,
Vol. 215, No. 3 (September 1966), p. 257

(cited in Hubert L. Dreyfus' _What Computers
**Still** Can't Do_ [Part I "Ten Years of Research in
Artificial Intelligence (1957 - 1967)", Chapter 2, "Phase II
(1962 - 1967) Semantic Information Processing"], pp. 135 - 136).

jimf said...

> [Marvin Minsky wrote, in a 1966 _Scientific American_ article:]
>
> "It is hard to say how close we are to this threshold, but once
> it is crossed the world will not be the same."

Vernor Vinge, the math professor and SF author, can be given credit
only for 1) drawing the (admittedly clever) analogy between this
"threshold" and a mathematical "singularity", thereby extending the
usage of the latter term and 2) using the concept in some decent
SF (the "Across Realtime" suite).

The idea of self-improving machines goes back to Samuel Butler's
"The Book of the Machines" in the 1872 _Erewhon_ (inspired, they
say, by Butler's reading of Darwin's 1859 _Origin of Species_).
The "Butlerian Jihad" (against AIs) in Frank Herbert's _Dune_ is
presumably an homage to _Erewhon_.

Just came across an interesting thread on Charlie Stross's blog
that mentions the Vingean Singularity as an example of a genre-changing
SF trope.

http://www.antipope.org/charlie/blog-static/2008/05/why_the_fermi_paradox_isnt_mor.html

> > We run out of engineerable scale, and exponentiation-by-getting-small
> > ends. Thereafter, computation merely grows polynomially, as a sphere of
> > computronium expanding at light speed.
>
> Hans Moravec already covered this in his strange work _Robot: Mere Machine
> to Transcendent Mind_, a sort of paean to the perils of prediction (although
> I don't think it was intended that way). It starts out rigorously logical
> and practical and then goes off the rails in remarkable style, with
> chapter 4 talking about easily foreseeable household robots, chapter 5
> dragging in such things as the economics of complete mechanization,
> space elevators, and bush robots, and chapter 6 opening with
> 'a bubble of Mind expanding at near lightspeed' and dragging in fundamental
> limits to computation and the Bekenstein bound, the use of time machines
> for computation and much else... by chapter 7, he's pointing out that
> if you look at it right a randomly selected rock is an intelligent being
> (in fact an unlimited number of different intelligent beings), and one
> has to wonder if the whole book is one marvellous piss-take.

> Actually, "Robot" is just a recapitulation of Moravec's earlier and
> even more interesting book "Mind Children" (1990).
>
> If he'd gone the same way as Vernor Vinge, they'd have co-founded a major
> movement in SF.

> I believe Kurzweil is very good at publicizing the ideas that were common
> currency on the Extropians mailing list in the mid to late nineties.
> Ken MacLeod and I hung out there in the very early 90s. I give him very
> little credit for inventing those ideas, because I'd seem 'em all a long
> time before he started writing about them.
>
> (Hint: the stuff that came out in "Lobsters", in 1999, was partially
> catalysed by me dipping back into my old stomping grounds six or seven
> years later.)
>
> Minsky is ... well, he devoted his career to pushing the idea of
> procedural AI, and aside from one howling mistake he did very well -- but
> I think he was trying to till barren soil.
>
> Vernor has had the one huge idea in his career, which is as much as most
> first-rank SF writers can ever hope for. (This says nothing about his
> career as a computer scientist ;-)

> Clarke didn't really do fiction about satellites, which is what I was
> getting at -- to invent a **fictional** trope (and one that you can
> quantify in scientific terms) is rare. Larry Niven got there with
> Ringworld (the quintessential Big Dumb Object), for example. Greg Bear
> got there with genetics-as-a-computational-process in "Blood Music".
> William Gibson gave us cyberspace, while Ursula le Guin took the
> pre-existing assumptions of class and gender in SF and stuck several
> sticks of dynamite under them (in, for example, "The Dispossessed"
> and "The Left Hand of Darkness").
>
> But a surprising number of SF writers never have a major idea in their
> entire career.