Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, October 31, 2007

Okay, It's Hallowe'en

[Found via Queerty]

Why Pay Attention to Marginal Techno Discourses?

Over on his blog "Question Technology," Kevin Arthur posted two quotations without much comment (and I'm providing even less, but sometimes little comment is necessary), and their juxtaposition struck me as rather evocative given my latest heated exchanges with transhumanists and singularitarians and such. The title of the post is "Techno-Globalization Hype -- from the 1920s."

The first quotation comes from that social progressive paragon Henry Ford (irony-impaired technobooster ignoramuses lurking hereabouts, please insert smiley with emphatically gasping mouth here), writing in his memoir My Philosophy of Industry, published in 1929:
"Machinery is accomplishing in the world what man has failed to do by preaching, propaganda, or the written word. The aeroplane and wireless know no boundary. They pass over the dotted lines on the map without heed or hindrance. They are binding the world together in a way no other system can. The motion picture with its universal language, the aeroplane with its speed, and the wireless with its coming international programme -- these will soon bring the world to a complete understanding. Thus may we vision a United States of the World. Ultimately it will surely come!"

Notice here, Superlative Technoboosters, first the evocation of the figure of the preacher now secularized through technoscience, second the conjuration of the transcension of all limits, third the facile misconstrual of the parochial as the universal, fourth the easy transition from an overassured predictive judgment of "soon" to the messianic cadences of "Ultimately it will surely come!" These dance steps are by now as dusty and dull and routinized as a stiff minuet, but it pays to be reminded that they have been dull for an awfully long time -- ever heard of Jacques de Vaucanson, you triumphalist Singularitarians and scientistic reductionists? Hence the second quotation, in which George Orwell reacts to just such tired rhetoric in a 1944 column:
"Reading recently a batch of rather shallowly optimistic 'progressive' books, I was struck by the automatic way people go on repeating certain phrases which were fashionable before 1914. Two great favourites are the 'abolition of distance' and the 'disappearance of frontiers'. I do not know how often I have met with statements that 'the aeroplane and the radio have abolished distance' and 'all parts of the world are now interdependent'.

This is an especially enjoyable quote to unearth since I recently had a set of exchanges with would-be Singularitarian guru Eliezer Yudkowsky in which he rather hilariously seemed to imagine himself a latter-day avatar of Orwell (except, you know, as an authoritarian High Priest awaiting the arrival of the Robot God). This leads me, by way of conclusion, to a reminder of why I devote so much attention to the marginal and curious sub(cult)ural futurisms of transhumanism, singularitarianism, extropianism, techno-immortalism, and so on in the first place.

For one thing, these sub(cult)ures offer up discourses that -- despite their abiding marginality and extremity in the strict sense -- combine attitudes and formulations favoring technocratic elitism over democratic deliberation, technofixes over engagement with structural social problems (like unsustainable industry and hyper-consumerism, like anti-social hyper-individualism, like the interminable cycles of violence maintained by militarism, like the corrupt and antidemocratizing manufacture of consent via broadcast mediation), foreground reductionist explanatory vocabularies that valorize instrumental rationality over moral, aesthetic, ethical, and political rationalities (thereby deranging them all -- including instrumental rationality itself), and insist on an impoverishment and naturalization of the "developmental" imaginary to terms expressive of corporate-military competitiveness -- all in ways that are too readily appropriated (sometimes in slightly diluted forms) by incumbent elite interests I oppose as a champion of democracy, sustainability, and social justice.

Just as bad, as I have repeatedly tried to show, they cite and mobilize transcendental vocabularies that activate irrational passions at precisely the moment when planetary technodevelopmental social struggles demand clear deliberation, evoking the inevitabilities of providential discourse, the rapturous totalities of apocalyptic and transcension discourses, the acquiescence to authority of Priestly discourses, and so on, all of which I abhor as barriers to nonviolent deliberation and contestation of the terms of ongoing technoscientific change I demand as a champion of democracy.

For another thing -- apart from this practical point of their sometimes disproportionate influence on public discourse as an especially nice fit with incumbent politics (even if many incumbents would publicly disassociate themselves from the letter of Superlative formations) -- it is also true that one can sometimes understand the dynamics and categories and relations in more prevailing discourses by locating especially pure variations of these discourses at their extremes or at their vital cores.

It is immensely clarifying to one's understanding of the market fundamentalism of neoliberal globalization (the most influential political ideology of the last thirty years, amounting to the incumbent conventional wisdom governing the mostly dreadful domestic and international policies of my entire adult lifetime) to study the especially clear-eyed anarcho-capitalist formulations of David Friedman (son of neoliberal luminary Milton) and the especially unabashed narratives of Ayn Rand (best-selling reactionary crap-novelist), even if the mainstream wonks, pundits, legislators, and functionaries who actually implement the neoliberal vision on the ground are little likely to invoke these uncompromising figures at the vital core of their discourse, or necessarily even to have read them.

So, too, one arrives at an especially clear understanding of the curious aspirations, distractions, emphases, aporias, and tics that drive privileged technocentric and technocratic public discourses (from the apologias for various devastating extractive industries to the technophilic neoliberalism of Tom Friedman or the technophilic neoconservatism of Glenn Reynolds), by focusing on the workings of the Superlative Technology Discourses that express undiluted, if possibly more deluded, forms of these aspirations, distractions, emphases, aporias, and tics.

As I have said before, the technophilic libertopian cult of Extropianism was incomparably more interesting and influential as a symptom, or even condensed essence, of the irrational exuberance of the "Long Boom" digerati of the 1990s of which it was also a part, than it ever was on its own terms, and it was as a symptom, as a signifier of those denser, more qualified prevailing tendencies that Extropianism attracted the critical attention it did from technocultural scholars and others. The same remains true of my own interest in Superlativity in our own historical moment.

Both of the quotations above may be found in David Edgerton's book The Shock of the Old: Technology and Global History since 1900, by the way, which Arthur recommends in his blog post (I haven't read it myself, but it looks quite interesting).

Two Faces of Populism

Adapted from a pithy one-liner in a post by Digby earlier today:

When the rich get richer -- with all the exploitation, corruption, and discontent that inevitably follow from such confiscatory wealth concentration -- uninformed people blame those who are below them on the economic scale, while more informed people scrutinize those who are above them on the economic scale. This is what distinguishes, more than anything else, the racist reactionary face of right-wing populism (say, Lou Dobbs or Pat Buchanan) from the anti-corporatist, anti-militarist face of left-wing populism.

Richard Jones on "The Uses and Abuses of Speculative Futurism"

Over at his blog Soft Machines, Richard Jones has expanded and clarified his own discussion of "Superlativity" and "Speculative Futurism" in response to some criticisms he has received (and some he has observed me receiving lately).

He expresses a perplexity that I have to admit feels very much like my own when he declares: "I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible."

Later in the comments section he makes a comparable claim about the faith in an Artificial General Intelligence that organizes the Singularitarian sub(cult)ure: "[D]iscussions of likely futures… go well beyond making lists of plausible technologies to consider the socio-economic realities that determine whether technologies will actually be adopted. One also needs to recognise that some advances are going to need conceptual breakthroughs whose nature or timing simply cannot be predicted, not just technology development (I believe AGI to be in this category)."

Needless to say, I think that the claims of so-called "Technological Immortalists" that the personal lives of people now living might plausibly be immortalized by recourse to emerging genetic therapies, "uploading" selves into informational forms, or cryonic suspension also belong to this category.

Taken together, these three basic technodevelopmental derangements constitute what I have described elsewhere as the key super-predicated idealized "outcomes" that drive much contemporary Superlative Technology Discourse: Superintelligence, Superlongevity, Superabundance. Thus schematized, it isn't difficult to grasp that Superlative Technology Discourse will depend for much of its intuitive force on its citation and translation of the omni-predicated terms omniscience, omnipotence, and omnibenevolence that have long "characterized" godhood for those who aspire to "know" it, but now updated into hyperbolic pseudo-scientific "predictions" for those who would prosthetically aspire to "achieve" it in their own persons.

It is interesting to note that these idealizations also organize the Sub(cult)ural Futurist formations to which I also direct much of my own Critique: In place of open futures arising out of the unpredictable contestation and collective effort of a diversity of stakeholders with whom we share and are building the world in the present that becomes future presents, Sub(cult)ural Futurists substitute idealized outcomes with which they have identified and which they seek to implement unilaterally in the world. This identificatory gesture in Sub(cult)ural Futurists tends to be founded on an active dis-identification with the present (and with futurity as future presents open in the way the political present is open) and identification with idealized futures in which they make their imaginative home, as well as on an active correlated dis-identification with that portion of the diversity of stakeholders -- with whom they share the world in fact, and any actual futures available to that world -- whom they take to oppose their implementation of that idealized future.

And so, Richard Jones writes: "The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking." And it isn't surprising that one can reel off with utter documentary ease a host of curious marginal sub(cult)ural self-identifications affirmed by most of the prominent participants in Superlative Technology Discourse -- Extropians, Transhumanists, Singularitarians, Immortalists (it seems to me there are further connections to Randian Objectivism, and illuminating resonances one discerns in comparing them to Scientology, Raelians, and even Mormonism) -- nor surprising to stumble on conjurations of tribal "outsiders" against which Sub(cult)ural Futurists imagine themselves arrayed -- "Luddites," "Deathists," "Postmodernists," and so on.

To those who charge that his critique (and by extension, my own) amounts to a straightjacketing of speculative imagination, Jones offers up this nice response with which I have quite a lot of sympathy:
[M]y problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.

And as I have pointed out elsewhere myself, these Superlative and especially Sub(cult)ural Futurisms tend to have
a highly particular vision of what the future will look like, and [are] driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and are described as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. [The] highly specific vision in Stiegler's story ["The Gentle Seduction"] is one and the same with the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published… in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there… sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children…), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?

Jones goes on to remind us, crucially, just how much "futurism is not, in fact, about the future at all -- it’s about the present and the hopes and fears that people have about the direction society seems to be taking now." This is why it can be so illuminating to treat futurological discourse generally as symptomatic rather than predictive, and it also explains, when we make the mistake of taking it at "face value" as a straightforwardly predictive exercise, "precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits." When tech talk turns Superlative, I fear, we are relieved of the necessity to wait: the cheap laughs and groaners are abundantly available already in the present.

Tuesday, October 30, 2007

Today's Random Wilde

As long as war is regarded as wicked, it will always have its fascination. When it is looked upon as vulgar, it will cease to be popular.

Monday, October 29, 2007

Debating Singularitarians

I keep thinking I'll take a break from the discussion of Technological Superlativity, but all the energetic conversation keeps pulling me back in. This is upgraded and adapted from the Comments to yesterday's post. There's quite a lot of interesting and contentious debate happening there, well worth reading. As always, thanks for all the comments, everyone.

Friend of Blog Michael Anissimov points out that: SIAI [The Singularity Institute for Artificial Intelligence for those who don't know -- something like Robot Cult Ground Zero as far as I can make out, and with some rather prominent "transhumanist" folks on board] works towards Friendly (through whatever means works, something other than mathematical-deductive if necessary) seed AGI because the people in the organization see it as a high moral priority.

Michael, how's about a nice definition of "Friendly Seed AGI" for the kids at home? In a nice little sentence or two, without frills. The terminology isn't at all widespread, as you know -- especially among professional and academic computer scientists. Of course, I have my own sense of what this "high moral priority" amounts to, in fact, but I'd like to hear it from you. As extra credit, I'd be curious if you could define "intelligence" (a concept on which "Friendly Seed AGI" depends) in a comparably pithy way.

Michael continues: This is humanity's first experience of stepping beyond...

Cue the music.

The question is not "if" intelligence enhancement technologies will be available, but "when".

Actually, there is still quite palpably a question of "if" when we are talking about whether the Strong Program of AI (in any of its current variations) will bear fruit. And, yes, Virginia, one can say that while still maintaining that human intelligence is an entirely worldly non-supernatural phenomenon. By the way, quite apart from the fact that the question of "if" actually does remain on the table for anybody with any sense, at least enough so to prompt caveats in one's pronouncements on the topic, there also remain questions as to when the question of "when" might as well amount to the question of "if" due to the timescales and complexities involved.

Now there's nothing at all wrong with these conventional human patterns,

Gosh, that's big of you. And I for one would like to thank our future Robot Overlords...

but we have to note [we are compelled to note, by some unspecified necessity -- believe me, it isn't logic] that the introduction of enhancement technology is bound to [again the conjuration of necessity, certainty -- bound to by what exactly? where from? One wonders.] throw the existing order out of whack.

"The Existing Order Out of Whack." A heady vision, to be sure. You say, "out of whack," I notice, as if to suggest complete transformation, derangement, confusion, unpredictability, but my guess is you think you have a pretty clear idea of how it's gonna go down, Michael, when all is said and done. This reminds me of my reading of the short story "A Gentle Seduction," a few days ago -- Jack claims that the Singularity will involve unprecedented unfathomable change, but the truth is he already knows everything that will come to pass with the clarity of an Old Testament Prophet, and hence confronts the scrambling of everybody else's world with relative equanimity. This is how True Believers always feel about their Pet Raptures, of course.

What I enjoy about this spectacle is all the misplaced certainty and necessity of the phrasing, never with much in the way of admitting how freighted all of these pronouncements are by caveats, qualifications, unintended consequences, sweeping ignorance of fields of relevant knowledge, indifference to historical vicissitudes, and so on. All that stuff is bracketed away or perhaps never even enters the Singularitarian Mastermind in the first place, and only the stainless steel trajectory to Singularity luminously remains.

It shouldn't be hard to imagine

Of course not. We've all read sf, watched sf movies and tv shows, seen the ubiquitous iconography on commercials, etc. No, it isn't hard to "imagine" at all.

that enhanced humans or AGIs won't get to the point of being substantially smarter than the smartest given humans.

Smarter -- how? Of just what does this smartness consist? How many dimensions can it have? How do they relate to one another? How does its embodiment enable and delimit it? But bracketing all that stuff for a moment, I have to wonder why Singularitarians seem to be so little interested in the actually existing fact on the ground that there is greater "intelligence" in cooperation, in non-duressed functional division of labor, in digital networked p2p production already, here and now in the real world. Why not devote yourself to unleashing the intelligence humans already palpably demonstrate a capacity for, a desire for, in the service of shared problems that are all around us?

Why do you think I advocate a basic income guarantee? Of course it's the right thing to do, of course it provides a basic democratizing stake protecting all people from exploitation by elites, but it would also function to subsidize citizen participation in p2p networks, creating, editing, criticizing, organizing in the service of freedom.

So much of the Superlative Singularitarian Robot God discourse just looks to me like a funhouse mirror in which symptomatic hopes and fears are being expressed (fine as far as that goes -- it's like literature in that respect, to the study and teaching of which, you may have noticed, I have devoted no small part of my life), superficially and parasitically glomming on to a few scattered software and robotic security problems, handwaving some generalized millennial anxieties about technological change via some utterly conventional science fiction tropes, and then selling the moonshine -- whether earnestly or cynically is a matter of "if the shoe fits, wear it" -- to the masses, tossing misleading, oversimplifying, usually anti-democratizing frames out into public discourse, hoping to skew budgetary priorities away from more urgent needs, likely deranging the form funding and research take as they apply themselves to actual problems of malware and automation (rather as cybernetic totalist ideology deranges coding practices already), and so on.

If intelligence enhancement tech really does produce a superintelligence, then we have a moral duty to maximize the probability that said superintelligence cares about humanity as a whole, not itself or any narrow group of humans. Otherwise the outcome could be grim. A few thousand Europeans enslaved native populations of millions with "only" somewhat more advanced technology

If if if if if if if if if if if if if if if if -- and then, miraculously, the conjuration of global devastation and enslavement. You just cannot know how clownish and cartoonish this appears outside the bubble of True Belief. A scenario that once caveated is diminished into near total irrelevance is instead hyperbolized back into pseudo-relevance. You might as well be talking about when Jesus comes or when the flying saucers arrive.

Now, before you inevitably misread the substantial force of what I am saying here as yet more evidence of my lack of vision and imagination, or my lack of scientificity and know-how (these two critiques are the most common ones so far -- I wonder if their proponents have noticed that they are making literally opposite claims)... let me stress that to the extent that software actually can and does produce catastrophic social impacts (networked malware, infowar utilities, automated weapons systems, asymmetric surveillance and data-manipulation and so on) these are actual problems that should be addressed with actual programs on terms none of which are remotely clarified by the Superlative Discourse or the sf iconography of entitative post-biological superintelligent AI or eugenicized intelligence-"enhancement."

It's not that I utterly "discount" the "5% risk" you guys constantly use to justify passing the collection plate at your Robot God Revival Meetings (even though, truth be told, you pull that number out of your asses and can't yet even define basic terms -- "intelligence" "friendliness" -- on which you depend, at least not to the satisfaction of Non-Believers in the very fields you claim to lead as visionary sooper-geniuses), nor do I claim "certainty" that nothing like the scenarios that preoccupy your attention "will" or "can" come to pass.

My critique of the Singularitarian Variation of Superlativity has never taken that form. That's because, not to put too fine a point on it, I don't think you guys are ready for prime time critique in that vein. While you are playing at being scientists, most of your discourse has far too much Amway and Heinlein in it to really qualify for that designation by the standards I am familiar with (bad news for you: I have actually taught courses in the history and philosophy of science -- I know that will come as a shock to those of you guys who like to dismiss me as an effete elite aesthete too muzzy-headed to grasp the hard chrome dildo of your True Science -- and I fear that by my lights what you guys are spinning is scarcely science, apart from some occasional nibbling at the edges).

But, anyway, again it's not that I dismiss your various likelihood and timeline estimations (considering them more as a line of hype than real efforts at science in the main), so much as that I think such risks as one can actually reasonably attribute to networked malware and lethal automation and the like are best addressed by people concerned with present, actually emerging, and palpably proximately upcoming technodevelopmental capacities rather than uncaveated and hyperbolic Superlative idealizations freighted with science fiction iconography and symptomatic of the pathologies of agency very well documented in association with technology discourse in general.

(Some advice: one day, when the mood strikes you, you might read some Adorno and Horkheimer, Heidegger, Arendt, Kuhn, Ellul, Marcuse, Foucault, Feyerabend, Winner, Latour, Tenner, Haraway, Hayles, Noble, for some sense of the things people know about you that you don't seem to know very well about yourselves as technocentrics. You won't agree with all of it, as neither do I, but if you take it seriously you will come out of the experience feeling a bit embarrassed about the unexamined assumptions on which Superlative Technology Discourses always fatally depend.)

So, the idea is to "get them while they're young": create superintelligences with altruistic goal systems. SIAI is the only organization pursuing this goal in a structured manner.

Yes, yes, I know. The idea is that the Singularitarians are the Good Guy sooper-geniuses in the thankless role of saving humanity from the Bad Guy sooper-geniuses who by design or through accident will create the Bad Robot God who will destroy or enslave us, while you want to get there first and create the Good Robot God who will solve all of our problems ('cause, he's infinitely "smarter," see, since "smartness" is a reductively instrumental problem-solving capacity and problems are reductively solvable through the implementation of instrumental rationality) and save the world. It's like a Robot God arms race, a race against time, urgent, in fact nothing is more urgent once you "grasp" the stakes, hell, billions of lives are at stake, etc etc etc etc etc. Complete and utter foolishness. But, of course, very "serious." Very "serious" indeed.

Look, if an SIAI Singularitarian looked to be on the verge of creating anything remotely like its Robot God (just bracketing for a moment the deranging conceptual entanglements of talking in these terms in the first place), you can be sure the Secret Lab or what have you would be closed down immediately and the people involved thrown in jail as net-criminals or even terrorists (and a good job too) as far as I can tell. Otherwise, it would only be corporate-militarists themselves who would have the resources and the will and the authorization to create such a "thing." They certainly would use advanced lethal automation and malware and infowar utilities for malign purposes (as they do almost everything already).

If you were really serious about Unfriendly Robot Gods or their non-superlatized real-world analogues, you would be engaging in education, agitation, and organizing to diminish the role of hierarchical formations like the military in our democratic society -- demanding an end to secret budgets and ops, making war unprofitable, supporting international war crimes and human rights tribunals, and so on. That anti-militarist politics coupled with support of international and multilateral projects to monitor and police the propagation of networked malware, stringent international conventions on automated weapons systems, and similar politics is what a Technoprogressive sounds like on this topic.

Nothing is clarified by the introduction of Superlative Discourse to such deliberation, only the activation of irrational passions in general, and an increased vulnerability to hyperbolic sales-pitches and terror discourse of a kind that incumbent interests use to shove absurdly expensive centralized programs down our throats to nobody's benefit but their own.

Sunday, October 28, 2007

Today's Random Wilde

It is always the unreadable that occurs.

How to Account for Today's Steven A. Boylan Incident? Option A: Idiot, or Option B: Idiot

It sure looks to me like the personal spokesman for General David Petraeus -- current Commander of the historically unprecedented Iraq debacle -- just sent Glenn Greenwald a crazy, petty, intimidating, embarrassingly dumb e-mail and then petulantly refused to admit it once his hand got caught in the cookie jar.

So, either we discover yet again that the Bush Administration has put an incompetent idiot in an important post because they demand loyalty over competence, as evidenced by his writing of the actual letter in the first place; or we discover yet again that the Bush Administration has put an incompetent idiot in an important post because they demand loyalty over competence, as evidenced by his subsequent response -- which demonstrates either an unbelievably oafish, obnoxious willingness to lie (badly) or an apparent lack of concern that somebody is, on his implausible, hastily patched-together explanation, posting outrageous e-mails from the account of the personal spokesman for General David Petraeus to prominent journalists.

My Sadly "Outdated Heuristics"

On a blog called Transhuman Goodness my Superlative Technology Critique has come in for rebuke, in terms that are becoming depressingly familiar. From the post, "Imagination Is Banned," I give you Roko:

I take issue with... Dale and [likeminded others] when they want to stop people from letting their imaginations run wild

Once again, my sad failure of imagination is exposed. You should know that I am actually trying to institute a national holiday to precisely this effect, Stamp Out All Imagination Day. "Ban" is exactly the right word to evoke when confronted with my critiques of sub(cult)ural futurisms, for, indeed, my plans call for Black Helicopters and ThoughtCrime tribunals and the whole nine. Oh! If there's one thing I just can't stand it's somebody with an original thought or any stray insight that might warp my fragile little mind. (As with yesterday's post, I fear you will have to wade through some snark to get to argumentative substance. For substance, you might do better to peruse the Summary texts.)

and instead focus attention only onto things which will happen for certain (or almost for certain) and which will happen soon. They are telling us to take our heads out of the clouds and get our noses back down to the grindstone… But I think that they should acknowledge the value of what they have started calling the "superlative" perspective.

If people confined their conversation to certainties we'd all have a dull time of it indeed. This response to criticism certainly feels quite odd, directed at a lover of William Burroughs and Donna Haraway, and at a blog the most consistent feature of which is the posting of paradoxical aphorisms by Oscar Wilde. For whom but the faithful does it feel like censorship to encounter criticism? For whom but the cultist does the exposure to rejection threaten the sense of self so much that it feels like imagination itself is under threat? For whom but the snake-oil salesman does the exposure of pretensions to knowledge where they have not been earned provoke such hysterical defensiveness?

By all means keep your heads in the clouds; it's not for me to straightjacket your wild hopes or flights of fancy, even the ones that I think are stupid. I don't have the power to "ban" anything, nor obviously would I covet such a power. What an odd thing to say! What a weird response to criticism! By all means be a poet, a pervert, a pleasure hound, please, but just don't expect me to pretend that any of that makes you a policy wonk or, far worse, a Priest who deserves his collection plate. Superlative Technocentrics should realize that if what they are looking for are the pleasures of a literary salon or an amateur futurological blue-skying convention they should just own up to that honestly and run with it, drop the outmoded and self-marginalizing identity politics, the policy think-tank pretensions, the obvious cult paraphernalia, and the endless defensive pseudo-science. If it's Imagination you really think you're defending, just try being an aesthete or a philosopher for real, and stop sounding so much like a corrupt lobbyist trying to squirrel away some cash for a Bridge to Nowhere or a salesman hawking boner pills and 80s virtual reality rigs in Vegas.

If you accept that technological change is accelerating, you must also accept that our prediction horizon is becoming ever shorter. 1000 years ago there was no need to wonder about what technologies may (or may not) arrive in the next 5-10 years, because technology moved so slowly that one would always have ample warning before anything even moderately new arrived on the scene…

First of all, no, I do not accept that technological change is "accelerating." Indeed, as I have often written before, this actually seems to me a manifestly absurd thing to say when quite obviously some "lines" of technoscientific change are accelerating for now, some are stalling, some appear to be failing altogether, some are combining in unexpected ways yielding jolts and leaps, some are ripe for opportunistic appropriations that will send them off who knows where, and so on.

The Superlative obsession with Acceleration and even "Acceleration of Acceleration" (which really cracks me up as an especially egregious line of hype) may well reflect what the deepening instability of expanding neoliberal financialization of the economy looks like to its beneficiaries (whether real, imaginary, or just short-term), but I don't think it is a very good figure to capture the actual dynamics and complexities of contemporary technodevelopmental churn. All these technodevelopmental arrows hiking hyperbolically up up up the futurological charts at futurological congresses depend for their morphologies on all sorts of definitional hanky-panky, absurd levels of technological determinism and autonomism, utter indifference to the always absurdly uneven distribution of actual developmental costs, risks, and benefits involved, and so on.

I quite understand that the whole Accelerationalism move does conjure up a kind of providential current you can pretend to be riding if you happen to hanker after reassurance in the face of unintended technodevelopmental consequences or want a rationale for conquest or prefer not to have to explain yourself too much or clean up after your own messes, and it has a nicely bolstering ring to it, kinda sorta like Manifest Destiny did to the people with their hands on the triggers. Believe me, I get it, I get it.

I myself am not so much interested in the whole acceleration of acceleration model (except for a laugh) as I am in the idea of an ever deepening democratization of technodevelopmental social struggle. I want the costs, risks, and benefits of technodevelopmental change to reflect the stakes and the say of the actual diversity of stakeholders to that change. I want this because it is right, because it expands the responsiveness of emerging developmental compromise formations we must cope with together come what may, and because it expands the intelligence of those formations precisely because it better reflects a diversity of valuable perspectives. The emphasis on "acceleration" too easily becomes an alibi for elitist circumventions of democracy (skip down to your own metaphorical evocation of your role at the head of the vanguard, at the top of the mast, among the visionaries, and so on): the risks are too urgent, the benefits too great, ends justify means, it's easier to ask forgiveness than to get permission, blah blah blah, all the same old tired reactionary elitist shit -- except, of course, you know, it's the future! Yes, I propose that there is a loose complementarity between this Superlative acceleration fixation and anti-democratization, between a democracy emphasis like my own and a suspicion of general accelerationalism.

If you refuse to try and look over that horizon, you may end up getting a very nasty shock. Dale is quick to belittle the concept of AGI as a “Robot God”… so presumably he thinks we should not waste our time working on it or thinking about it. After all, it’s over the prediction horizon. It’s “Idle speculation”. But Dale is using outdated heuristics; if we follow his advice, we might find out that he is mistaken the hard way.

If Professor Zed's goofog exhales its brittle brown breath from the sinister stack poking up from his deep underground jungle lab, a billion innocent scouts might well die. How can you justify not devoting resources to finding Zed and stamping out the goofog or working up a decent anti-goofog goophage? Sure, this is all probably bullshit, but just think about the scale of destruction I'm invoking here… Even if the goofog has only a 5% chance of coming true, won't you feel stupid as you gasp your last breath that you didn't spend a billion on the Technotastic Institute for Stamping out Goofog and Other Arbitrary Awfulnesses? A billion dollars is chicken feed compared to the destruction of every mammal and mollusk on planet earth, even you can see that surely?

Look, did I say one billion? Dig, this is a thought-experiment, I can ratchet up the death count interminably. Yay! It's Imagination! What if I said two billion? What about five billion? Five billion souls might be spared by one billion Dead Presidents sliding over to these bright boys from the TISGOAA here.

If we follow the advice of those timid types who shun the shattering scenarios of TISGOAA and devote their energies instead to neglected treatable diseases in the overexploited world, switching to decentralized renewable energy provision, ending war profiteering, bolstering up international law and standards, and providing basic income guarantees… well, we might just find out they are mistaken the hard way when that goofog comes barreling down on our asses and then, boy howdy, won't we all wish we had listened more to the Brain Trust over at TISGOAA!

Most people will just throw out ideas that sound silly to them without a moment’s thought. As a mathematician and scientist, I have been trained not to do this. Theory of mechanics where time is not absolute? That sounds silly, but it’s actually true. Atoms which are in two places at once or even everywhere at once? Sounds silly but is also true.

Yes, yes, I know, I know, you're all Einstein and Tesla and the Wright Brothers and possibly Ayn Rand, too, all condensed into one radioactively brainy scientastic package. What "sounds silly" to me after the less than "a moment's thought" that elapsed while I was writing thousands upon thousands of words over years of time on these subjects of Murderous Robot Gods, nanoscale swarm weapons reducing the earth to goo, or nanoabundance owned by the rich nevertheless installing a post-scarcity gift-society for all, rejuvenation pills offering sexy immortal lives to billions now living, and uploads into digital networks bequeathing anybody who craves eternity an angelic informational existence -- all that stuff that "sounds silly" to me sounds instead like the soundest science and the most serious foresight to them as knows the Truths of the Elect.

Transhumanists look over the horizon… In the ship of society, we are like the man in the crow's nest. If we say that we see something like AGI or Advanced Nanotechnology over that horizon, don’t take it as a certainty, because there’s a good chance that we’re wrong. But at least take the idea as a serious possibility, and start making contingency plans.

You're right, you guys really are awesome, if you do say so yourselves.

Saturday, October 27, 2007

My Failures of Imagination

Upgraded and adapted from Comments. I fear my response to long-time Friend of Blog Giulio Prisco is a bit testy, but I've been told that my writing is clearer when I am in this temper, and I've also been told that my Superlative Critique is presented in terms that are too abstruse in general, whatever its usefulness, so here goes:

I criticize your intolerance for those who, while basically agreeing with you on the points above, have ideas different from yours on other, unrelated things, and affirm their right to think with their own head.

I distinguish instrumental, moral, aesthetic, ethical, and political modes of belief. (I spell out this point at greater length here, among other places.) Rationality, for me, consists not only in asserting beliefs that comport with the criteria of warrant appropriate to each mode, but also in applying to each of our different ends the mode actually appropriate to it.

I'm perfectly tolerant of aestheticized or moralized expressions of religiosity, for example, but I keep making the point that religiosity (even in its scientistic variations), when misapplied to domains, ends, and situations for which it is categorically unsuited, creates endless mischief.

Superlativity as a discourse consists of a complex of essentially moral and aesthetic beliefs mistaking themselves for, or overreaching to encompass, other modes of belief.

This sort of thing is quite commonplace in fundamentalist formations, as it happens, and one of the reasons religiosity comes up so often in discussions of Superlativity is that many people already have a grasp of what happens when fundamentalist faiths are politicized or pseudo-scientized, and so the analogy (while imperfect in some respects) can be a useful way to get at part of the point of the critique of Superlativity.

Because, my friend, you will never persuade me that one who finds intellectual or spiritual pleasure in contemplating nanosanta-robot god-superlative technology-etc. cannot be a worthy political, social and cultural activists.

This line is total bullshit, and I'm growing quite impatient with it. I don't know how else to say this, I feel like throwing up my hands. Look, I'm a big promiscuous fag, a theoryhead aesthete, and an experimentalist in matters of, well, experiences available at the, uh, extremes as these things are timidly reckoned among the charming bourgeoisie. Take your pleasures where you will, I say, and always have done. Laissez les bons temps rouler. I'm a champion of multiculture, experimentalism, and visionary imagination, and that isn't exactly a secret given what I write about endlessly here and elsewhere.

But -- now read this carefully, think about what I am saying before you reply -- if you pretend your religious ritual makes you a policy wonk expect me to call bullshit; if you demand that people mistake your aesthetic preferences and preoccupations for scientific truths expect me to call bullshit; if you go from pleasure in to proselytizing for your cultural and subcultural enthusiasms expect me to call bullshit; if you seek legitimacy for authoritarian circumventions of democracy in a marginal defensive hierarchical sub(cult)ural organization or as a way to address risks you think your cronies see more clearly than the other people in the world who share those risks and would be impacted by your decisions, all in the name of "tolerance," expect me to call bullshit.

"I can believe in Santa Claus and Eastern Bunny if I like, and still agree with you on political issues."

No shit, Sherlock. I've never said otherwise.

But -- if you form a Santa cult and claim Santa Science needs to be taught in schools instead of Darwin, or if you become a Santa True Believer who wants to impose his Santa worldview across the globe as the solution to all the world's problems, or if you try to legitimize the Santalogy Cult by offering up "serious" policy papers on elf toymaking as the real solution to global poverty and then complain that those who expose this as made-up bullshit are denying the vital role of visionaries and imagination and so on, well, then that's a problem. (Please don't lose yourself in the details of this off-the-cuff analogy drawn from your own comment, by the way; I'm sure there are plenty of nit-picky disanalogies here, I'm just making a broad point that anybody with a brain can understand.)

Unless, of course, you persuade me that the two things are really incompatible.

I despair of the possibility of ever managing such a feat with you. (Irony-impaired readership, insert smiley here.)

I will gladly take the Robot God and Easter Bunny then.

Take Thor for all I care. None of them exist, and any priesthood that tries to shore up political authority by claiming to "represent" them in the world I will fight as a democrat opposed to elites -- whether aristocratic, priestly, technocratic, oligarchic, military, "meritocratic" or what have you. I can appreciate the pleasures and provocations of a path of private perfection organized through the gesture of affirming faith in a Robot God, Thor, or the Easter Bunny. I guess.

I have no "trouble" with spirituality, faith, aestheticism, moralism in their proper place, even where their expressions take forms that aren't my own cup of tea in the least. I've said this so many times by now that your stubborn obliviousness to the point is starting to look like the kind of conceptual impasse no amount of argument can circumvent between us.

Perhaps you guys are so scared of "superlative technology discourse" because you are afraid of falling back into the old religious patterns of thought, that perhaps you found difficult to shed.

I've been a cheerful nonjudgmental atheist for twenty-four years. It wasn't a particularly "difficult" transition for me, as it happens. Giving up pepperoni when I became a vegetarian was incomparably more difficult for me than doing without God ever was. And I'm not exactly sure what frame of mind you imagine I'm in when I delineate my Superlative Discourse Critiques when you say I'm "so scared." I think Superlativity is wrong, I think it is reckless, I think it comports well with a politics of incumbency I abhor, I think it produces frames and formulations that derange technodevelopmental discourse at an historical moment when public deliberation on technoscientific questions urgently needs to be clear. It gives me cause for concern, it attracts my ethnographic and critical interest. But "so scared"? Don't flatter yourself.

Some of us, yours truly included, never gave much importance to religion. So we feel free to consider interesting ideas for their own sake, regardless of possible religious analogies.

You are constantly claiming to have a level of mastery over your conscious intentions and expression that seems to me almost flabbergastingly naïve or even deluded. It's very nice that you feel you have attained a level of enlightenment that places you in a position to consider ideas "for their own sake," unencumbered, one presumes, by the context of unconscious motives, unintended consequences, historical complexities, etymological sedimentations, figural entailments, and so on. I would propose, oh so modestly, that no one deserves to imagine themselves enlightened in any useful construal of the term who can't see the implausibility of the very idea of the state you seem so sure you have attained.

Friday, October 26, 2007

MundiMuster: Tell Harry Reid, "No Immunity for Lawbreaking Companies"!

[via NoRetroactiveImmunity.com]
The Senate is considering a bill that would grant immunity to any telecom company that assisted in the administration's illegal wiretapping. Chris Dodd promised to put a hold on any such bill, and Joe Biden and Barack Obama pledged to uphold it. We believe that any bill coming before the Senate that includes provisions for so-called 'amnesty' for large companies involved in illegally spying on Americans should be opposed, and have authored a letter to this effect addressed to Majority Leader Reid. You can co-sign it below. The letter will also be sent to Senate Democratic leadership and the Senate Judiciary Committee members. You can read the full text of the letter here.

Today's Random Wilde

Friendship is far more tragic than love. It lasts longer.

A Superlative Schema

In the first piece I wrote critiquing Superlative Technology Discourse a few years ago, "Transformation Not Transcendence," I argued that
It pays to recall that theologians never have been able comfortably to manage the reconciliation of the so-called omnipredicates of an infinite God. Just when they got a handle on the notion of omnipotence, they would find it impinging on omniscience. If nothing else, the capacity to do anything would seem to preclude the knowledge of everything in advance. And of course omnibenevolence never played well with the other predicates. How to reconcile the awful with the knowledge of it and the power to make things otherwise is far from an easy thing, after all… As with God, so too with a humanity become Godlike. Any “posthuman” conditions we should find ourselves in will certainly be, no less than the human ones we find ourselves in now, defined by their finitude. This matters, if for no other reason, because it reminds us that we will never transcend our need of one another.
My point in saying this was to highlight the incoherence in principle of the superlative imaginary, to spotlight what looks to me like the deep fear of finitude and contingency (exacerbated, no doubt, by the general sense that we are all of us caught up in an especially unsettling and unpredictable technoscientific storm-churn) that drives this sort of hysterical transcendental turn, and to propose in its stead a deeper awareness and celebration of our social, political, and cultural inter-dependence with one another to cope with and find meaning in the midst of this change.

Of course, there is no question that no technology, however superlative, could deliver literally omni-predicated capacities, nor is it immediately clear even how these omni-predicates might function as regulative ideals given their basic incoherence (although this sort of incoherence hasn't seemed to keep "realists" from claiming interminably that vacuous word-world correspondences function as regulative ideals governing warranted assertions concerning instrumental truth, so who knows?). Rather like the facile faith of a child who seeks to reconcile belief with sense by imagining an unimaginable God as an old man with a long beard in a stone chair, Superlativity would reconcile the impossible omni-predicated ends to which it aspires with the terms of actual possibility through a comparable domestication: of Omniscience into "Superintelligence," of Omnipotence into "Supercapacitation" (especially in its "super-longevity" or techno-immortalizing variations), of Omnibenevolence into "Superabundance."

In such Superlative Technology Discourses, it will always be the disavowed discourse of the omni-predicated term that mobilizes the passion of Superlative Techno-fixations and Techno-transcendentalisms and organizes the shared identifications at the heart of Sub(cult)ural Futurisms and Futurists. Meanwhile, it will be the disavowed terms of worldly and practical discourses that provide all the substance on which these Superlative discourses finally depend for their actual sense: Superintelligence will have no actual substance apart from Consensus Science and other forms of warranted knowledge and belief, Supercapacitation (especially the superlongevity that is the eventual focus of so much "enhancement" talk) will have no actual substance apart from Consensual Healthcare and other forms of public policy administered by harm-reduction norms, Superabundance will have no actual substance apart from Commonwealth and other forms of public investment and private entrepreneurship in the context of general welfare. In each case a worldly substantial reality -- and a reality substantiated consensually, peer-to-peer, at that -- is instrumentalized, hyper-individualized, de-politicized via Superlativity in the service of a transcendental project re-activating the omni-predicates of the theological imaginary.

As with most fundamentalisms -- that is to say, as with all transcendental projects that redirect their energies to political ends to which they are categorically unsuited -- whenever Superlativity shows the world its Sub(cult)ural "organizational" face, it will be the face of moralizing it shows, driven by the confusion of the work of morals/mores with that of ethics/politics, a misbegotten effort to impose the terms of private-parochial moral or aesthetic perfection onto the terms of public ethics (which formally solicits universal assent to normative prescriptions), politics (which seeks to reconcile the incompatible aspirations of a diversity of peers who share the world), and science (which provisionally attracts consensus to instrumental descriptions).

Very schematically, I am proposing these correlations:

OMNI-PREDICATED THEOLOGICAL / TRANSCENDENTAL DISCOURSE

Omniscience
Omnipotence
Omnibenevolence

SUPER-PREDICATED SUPERLATIVE DISCOURSE

Superintelligence
Supercapacitation (often amounting to Superlongevity)
Superabundance

WORLDLY SUBSTANTIAL (democratizing/p2p) DISCOURSE

Reasonableness -- that is to say, the work and accomplishments of Warranted Beliefs applied in their proper plural precincts: scientific, moral, aesthetic, ethical, political, legal, commercial, etc.
Civitas -- that is to say, the work and accomplishments of Consensual Culture, where culture is presumed to be co-extensive with the prosthetic, and health and harm-reduction policy are construed as artifice.
Commonwealth -- that is to say, the work and accomplishments of collaborative problem-solving, public investment, and private entrepreneurship in the context of consensual civitas.

On the one hand, the Super-Predicated term in a Superlative Technology Discourse always deranges and usually disavows altogether -- while nonetheless, crucially, depending on -- the collaboratively substantiated term in a Worldly Discourse correlated with it; on the other hand, it activates the archive of figures, frames, irrational passions, and idealizations of the Omni-Predicated term in a Transcendental Discourse (usually religious or pan-ideological) correlated with it. The pernicious effects of these shifts are instrumental, ethical, and political in the main, but quite various in their specificities.

That complexity accounts for all the ramifying dimensions of the Superlativity Critique one finds in the texts collected in my Superlative Summary at this point. I would like to think one discerns in my own formulations some sense of what more technoscientifically literate and democratically invested worldly alternatives to Superlativity might look like. In these writings, I try to delineate a perspective organized by a belief in technoethical pluralism, an insistence on a substantiated rather than vacuous scene of informed, nonduressed consent, the consensualization of non-normative experimental medicine (as an elaboration of the commitment to a politics of Choice) and the diversity of lifeways arising from these consensual practices, the ongoing implementation of sustainable, resilient, experimentalist, open, multicultural, cosmopolitan models of civilization, the celebration and subsidization of peer-to-peer formations of expressivity, criticism, credentialization, and the collaborative solution of shared problems, and, through these values and for them, a deep commitment to the ongoing democratization of technodevelopmental social struggle -- using technology (including techniques of education, agitation, organization, legislation) to deepen democracy, while using democracy (the nonviolent adjudication of disputes, good accountable representative governance, legible consent to the terms of everyday commerce, collective problem-solving, peer-to-peer, ongoing criticism and creative expressivity) to ensure that technology benefits us all, as Amor Mundi's signature slogan more pithily puts the point.

It should go without saying that there simply is no need to join a marginal Robot Cult as either a True Believer or would-be guru to participate in technodevelopmental social struggle peer-to-peer, nor to indulge in the popular consumer fandoms, digital plutocratic financial and developmental frauds, or pseudo-scientific pop-tech infomercial entertainment of more mainstream futurology. There is no need to assume the perspective of a would-be technocratic elite. There is nothing gained in identifying with an ideology that you hope will "sweep the world" or provide the "keys to history." There is nothing gained in claiming to be "pro-technology" or "anti-technology" at a level of generality at which no technologies actually exist. There is nothing gained in forswearing the urgencies of today for an idealized and foreclosed "The Future," nor in dis-identifying with your human peers so as to better identify with imaginary post-human or transhuman ones. There is nothing gained in the consolations of faith when there is so much valuable, actual work to do, when there are so many basic needs to fulfill, when there is so much pleasure and danger in the world of our peers at hand. There is nothing gained by an alliance with incumbent interests to secure a place in the future when these incumbents stand exposed now as having no power left but the power to destroy the world and open futurity altogether.

The Superlative Technology Critique is not finally a critique about technology, after all, because it recognizes that "technology" is functioning as a surrogate term in the discourses it critiques; the evocation of "technology" functions symptomatically in these discourses and sub(cult)ures. The critique of Superlativity is driven first of all by commitments to democracy, diversity, equity, sustainability, and substantiated consent. I venture to add that it is driven by a commitment to basic sanity, sanity understood as a collectively substantiated worldly and present concern itself. The criticisms I seem to be getting come largely from people who would either deny the relevance of my political, social, and cultural emphasis altogether (a denial that likely marks them as unserious as far as I'm concerned) or who disapprove of my political commitment to democracy, my social commitment to commons, and my cultural commitment to planetary polyculture (a disapproval that likely marks them as reactionaries as far as I'm concerned). There is much more for me to say in this vein, and of course I will continue to say it as best I can, and everyone is certainly free and welcome to contribute to or to disdain my project as you will, but I am quite content with the focus my Critique has assumed so far and especially with the enormously revealing responses it seems to generate.

Sanewashing Superlativity (For a More Gentle Seduction)

In his latest deft response to my "so-called [?] Superlative Technology Critique," Michael Anissimov reassures his readers that "[Richard] Jones and Carrico are both wrong [to say] that transhumanists have a 'strong, pre-existing attachment to a particular desired outcome.' A minority of transhumanists maybe, but not a majority." Since Michael isn't a muzzy literary type like me ("Is this critique postmodernism or something?" wonders one of Michael's readers anxiously in his Comments section; "No, no: It's Marxism," another reader grimly corrects her), we can be sure that when Michael insists that "a majority" of "transhumanists" have no strong attachments to particular desired outcomes, well, no doubt he has crunched all the relevant numbers before saying so. Who am I to doubt him?

"What transhumanists want is for humanity to enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products," Anissimov patiently explains. Since there are hundreds of millions of people who would surely cheerfully affirm such vacuities (among them, me) and yet after over twenty years of organizational effort the archipelago of technophilic cult organizations that trumpet their "transhumanism" -- so-called! -- has never managed yet to corral together more than a few thousand mostly North Atlantic white middle-class male enthusiasts from among these teeming millions to their Cause, one suspects that there may be some more problematic transhumanistical content that is holding them back. Contrary to the rants about a dire default "Deathism" and "Luddism" in the general populace one hears from some transhumanists exasperated that their awesome faith, er, "movement," has not yet swept the world, I will venture to suggest that it isn't actually a rampaging general desire for short unhealthy unsafe unfree lives of poverty or feudalism that keeps all these people from joining their fabulous Robot Cult.

Back in 1989 Marc Stiegler wrote a short story entitled "The Gentle Seduction" that has assumed a special place in the transhumanist sub(cult)ural imaginary. In the opening passage one of the main characters, Jack, asks the other main character -- who never gets a name, interestingly enough, and is referred to merely pronominally as "her" and "she" -- the following portentous question: "Have you ever heard of Singularity?" "She" hasn't, of course, and Jack explains the notion with relish:
"Singularity is a time in the future as envisioned by Vernor Vinge. It'll occur when the rate of change of technology is very great -- so great that the effort to keep up with the change will overwhelm us. People will face a whole new set of problems that we can't even imagine." A look of great tranquility smoothed the ridges around his eyes.

It is very curious that after the conjuration of such a looming, unimaginably transformative and overwhelming change Jack would become tranquil rather than concerned, as any sensible person, however optimistic, would be at such a prospect -- but of course the reason for this is that he is lying. Already we have been told that when he speaks "of the future… [it was as if] he could see it all very clearly. He spoke as if he were describing something as real and obvious as the veins of a leaf..." Of course, in Superlative discourses, especially in the Singularitarian variations that would trump history through a technodevelopmental secularization of the Rapture, the term "unimaginable" is deployed rather selectively: to invest pronouncements with an appropriately prophetic cadence or promissory transcendentalizing significance, or to finesse the annoying fact that while Godlike outcomes are presumably certain, the ways through all those pesky intermediary technical steps and political impasses that stand between the way we live now and all those marvelous Godlike eventualities remain conspicuously uncertain.

The future that Jack "sees" so clearly, as it happens, is not one he characterizes in Anissimov's reassuringly mainstream terms; that is to say, as a future in which people "enjoy healthier, longer lives and higher standards of living provided by safe, cheap, personalized products." No, Jack insists, in his future "you'll be immortal." But, wait, there's more. "You'll have a headband… It'll allow you to talk right to your computer." He continues: "[W]ith nanotechnology they'll build these tiny little machines -- machines the size of a molecule… They'll put a billion of them in a spaceship the size of a Coke can, and shoot it off to an asteroid. The Coke can will rebuild the asteroid into mansions and palaces. You'll have an asteroid all to yourself, if you want one." Gosh, immortality alone on an asteroid stuffed with mansions and jewels and a smart AI to keep you company. How seductive (see story title)! Even better is this rather gnomic addendum, a favorite of would-be gurus everywhere: "'I won't tell you all the things I expect to happen,' he smiled mischievously, 'I'm afraid I'd really scare you!'" Father Knows Best, eh? And it's hard not to like the boyish oracularity of that "mischievous smile." As the story unfolds, we discover that Jack likely refers here to the fact that "she" will eventually download her consciousness into a series of increasingly exotic, and eventually networked, robot "bodies" and then utterly disembodied informational forms.

The story is a truly odd and symptomatic little number -- definitely an enjoyable and enlightening read for all that -- juxtaposing emancipatory rhetoric in a curious way to the sort of reactionary details one has come to expect from especially American technophilic discourse. (For some of the reasons why, Richard Barbrook and Andy Cameron's The Californian Ideology always repays re-reading, as does Jedediah Purdy's The God of the Digerati, and Paulina Borsook's excellent book Cyberselfish, which entertainingly provides a wealth of supplementary detail.) The very first sentence mobilizes archetypes so bruisingly old-fashioned (but, you know, it's the future!) as to make you blush even if you have never heard of eco-feminism: "He worked with computers; she worked with trees." By sentence two we are squirming with discomfort: "She was surprised that he was interested in her. He was so smart; she was so… normal." ("Normal" people aren't "smart"? "Normal" people should feel privileged when our smart betters deign to notice us?) Later in the story, progress and emancipation and even revolution are drained of social struggle and political content altogether and reduced to a matter of shopping for ever more powerful gizmos offered for sale in catalogues -- elaborate robots, rejuvenation pills, genius pills, brain-computer interfaces, robot bodies, the promised asteroid mansions, and so on. Politics as consumption, how enormously visionary. One also detects in the story a discomforting insinuation of body-loathing, rather like the hostility to the "meat body" one encounters in some Cyberpunk fiction, from the initial curious fact that Jack and the unnamed protagonist sleep together but never have sex (an odd detail in a story that so clearly means to invoke the conventions of romantic love), to the way the emancipatory sequence of technological empowerments undergone by the protagonist is always phrased as a series of relinquishments -- of her morphology, of her body, of embodiment altogether, of narrative selfhood by the end -- each relinquishment signaled by the repeated refrain, "it just didn't seem to matter," where it is a loss of matter that fails to matter.

Be all that as it may, the specific point I would want to stress here is that "The Gentle Seduction" has a highly particular vision of what the future will look like, and is driven by an evangelical zeal to implement just that future. It is a future with a highly specific set of characteristics, involving particular construals of robotics, artificial intelligence, nanotechnology, and technological immortality (involving first genetic therapies but culminating in a techno-spiritualized "transcendence" of the body through digitality). These characteristics, furthermore, are described as likely to arrive within the lifetimes of lucky people now living and are described as inter-implicated or even converging outcomes, crystallizing in a singular momentous Event, the Singularity, an Event in which people can believe, about which they can claim superior knowledge as believers, which they go on to invest with conspicuously transcendental significance, and which they declare to be unimaginable in key details but at once perfectly understood in its essentials. This highly specific vision in Stiegler's story is one and the same with the vision humorously documented in Ed Regis's rather ethnographic Great Mambo Chicken and the Transhuman Condition, published the following year, in 1990, and in Damien Broderick's The Spike, published twelve years later, and, although the stresses shift here and there, sometimes emphasizing connections between cybernetics and psychedelia (as in early Douglas Rushkoff), sometimes emphasizing robotics and consciousness uploading (as in Hans Moravec's Mind Children -- whose work is critiqued exquisitely in N. Katherine Hayles's How We Became Posthuman), sometimes emphasizing Drexlerian nanotechnology (as in Chris Peterson's Unbounding the Future), or longevity (as in Brian Alexander's Rapture), it is fairly astonishing to realize just how unchanging this vision is in its specificity, in its ethos, in its cocksure predictions, even in its cast of characters. Surely a vision of looming incomprehensible transformation should manage to be a bit less… static than transhumanism seems to be?

Although Anissimov wants to reassure the world that transhumanists have no peculiar commitments to particular superlative outcomes, one need only read any of them for any amount of time to see the truth of the matter. Far more amusing than his denials and efforts at organizational sanewashing, however, is his concluding admonishment of those -- oh, so few! -- transhumanists or Singularitarians who might be vulnerable to accusations of Superlativity: "If any transhumanists do have specific attachments to particular desired outcome," Anissimov warns, "I suggest they drop them — now." Well, then, that should do it. "The transhumanist identity," he continues, "should not be defined by a yearning for such outcomes. It is defined by a desire to use technology to open up a much wider space of morphological diversity than experienced today." It is very difficult to see how a transhumanist "identity" would long survive being evacuated of its actual content apart from a commitment to something that looks rather like mainstream secular multicultural pro-choice attitudes that seem to thrive quite well, thank you very much, without demanding people join Robot Cults. The truth is, of course, that this is all public relations spin on the part of a Director of the Singularity Institute for Artificial Intelligence (Robot Cult Ground Zero) and co-founder of The Immortality Institute (a Technological Immortalist outfit), and all-around muckety muck and bottle washer in the World Transhumanist Association (Sub(cult)ural Superlativity Grand Central Station), and so on. Although one can be sure that none of the sub(cult)ural futurists among his readership will really take Michael up on his suggestion to icksnay on the azycray obotray ultcay stuff in public places, at least he has posted something to which he can regularly refer whenever sensible people gently suggest, from time to time, that he and his friends are sounding a little bit nuts on this or that burning issue concerning Robot Gods Among Us, the Pleasures of Spending Eternity Uploaded into a Computer, or coping with the Urgent Risks of a World Turned into Nano-Goo.

I will remind my own readers that Extropians, Dynamists, Raelians, Singularitarians, Transhumanists, Technological Immortalists, and so on have formed a number of curious subcultures and advocacy organizations which I regularly castigate for their deranging impact on technodevelopmental policy discourse and for the cult-like attributes they seem to me to exhibit. Since these organizations and identity movements are really quite marginal as far as their actual memberships go, it is important to stress that, apart from some practical concerns I have about the damaging and rather disproportionate voice these Superlative Sub(cult)ural formulations have in popular technology discourse and in public technoscientific deliberation, it is really the way these extreme sub(cult)ures represent and symptomize especially clearly what are more prevailing general attitudes toward, and broader tendencies exhibited in, technodevelopmental change that makes them interesting to me and worthy of this kind of attention.

Wednesday, October 24, 2007

Today's Random Wilde

The growing influence of women is the one reassuring thing in our political life.

Richard Jones Critiques Superlativity

Over on the blog Soft Machines yesterday, Richard Jones -- a professor of physics, science writer, and currently Senior Advisor for Nanotechnology for the UK's Engineering and Physical Sciences Research Council -- offered up an excellent (and far more readable than I tend to manage to be) critique of Superlative Technology Discourses, in a post with the nicely portentous title “We will have the power of the gods”. Follow the link to read the whole piece; here are some choice bits:
"Superlative technology discourse… starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress combating the diseases of aging. The discussion leaps ahead of the issues that such technologies might give rise to at the present and in the near future, and goes straight on to a discussion of the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening….

[T]his renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups -- such as transhumanists and singularitarians -- with a strong, pre-existing attachment to a particular desired outcome -- an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking….

The difficulty that this situation leaves us in is made clear in [an] article by Alfred Nordmann -- “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor…

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science -- areas directly relevant to the technologies under discussion -- in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.

More like this, please.

Superlativity as Anti-Democratizing

Upgraded and adapted from Comments:

Friend of Blog Michael Anissimov said: Maybe "superlative" technologies have a media megaphone because many educated people find these arguments persuasive.

There is no question at all that many educated people fall for Superlative Technology Discourses. It is very much a discourse of reasonably educated, privileged people (and also, for that matter, mostly white guys in North Atlantic societies). One of the reasons Superlativity comports so well with incumbent interests is that many of its partisans either are such incumbents themselves or identify with them.

However, again, as I have taken pains to explain, even people who actively dis-identify with the politics of incumbency might well support such politics inadvertently through their conventional recourse to Superlative formulations, inasmuch as these lend themselves so forcefully to anti-pluralistic reductionism, to elite technocratic solutions and policies, to the naturalization of neoliberal corporate-military "competitiveness" and "innovation" and such as the key terms through which technoscientific "development" can be discussed, to a special vulnerability to hype, groupthink, and True Belief, and so on, all of which tend to conduce to incumbent interests and reactionary politics in general.

If a majority…

Whoa, now, just to be clear: The "many" of your prior sentence, Michael, represents neither a "majority" of "educated" people (on any construal of the term "educated" I know of), nor a "majority" in general.

If a majority decides to allocate research funds towards Yudkowskian AGI and Drexlerian MNT, who would you be to question the democratic outcome?

Who would I be to question a democratic outcome? Why, a democratic citizen with an independent mind and a right to free speech, that's who.

I abide by democratic outcomes even where I disapprove of them from time to time, and then I make my disapproval known and understood as best I can in the hopes that the democratic outcome will change for the better -- or, if I fervently disapprove of such an outcome, I might engage in civil disobedience and accept the criminal penalty involved to affirm the law while disapproving the concrete outcome. All that is democracy, too, in my understanding of it.

In the past, Michael, you have often claimed to be personally insulted by my suggestions that Superlative discourses have anti-democratizing tendencies -- you have wrongly taken such claims as equivalent to the accusation that Superlative Technocentrics are consciously Anti-Democratic, which is not logically implied in the claim at all (although I will admit that the evidence suggests that Superlativity is something of a strange attractor for libertopians, technocrats, Randroids, Bell Curve racists, and other such anti-democratic dead-enders). For me, structural tendencies to anti-democratization are easily as troubling as, or more troubling than, explicit programmatic commitments to anti-democracy (which are usually marginalized into impotence in reasonably healthy democratic societies soon enough, after all). When you have assured me that you are an ardent democrat in your politics yourself, whatever your attraction to Superlative technodevelopmental formulations, I have tended to take your word for it.

But when you seem to suggest that "democracy" requires that one "not question" democratic outcomes, I find myself wondering why on earth you would advocate democracy on such terms. It's usually only reactionaries, after all, who falsely characterize democracy as "mob rule" -- and they do so precisely because they hate democracy and denigrate common people (with whom they dis-identify). Actual democratically-minded folks tend not to characterize their own views in such terms. Democracy is just the idea that people should have a say in the public decisions that affect them -- for me, democracy is a dynamic, experimental, peer-to-peer formation.

Because that [AGI/MNT funding] is what is likely going to happen in the next couple decades.

Be honest: if you had been who you are now twenty years ago, would you have said the same? What could happen in twenty years' time to make you say otherwise?

I personally think it is an arrant absurdity to think that majorities will affirm specifically Yudkowskian or Drexlerian Superlative outcomes by name in two decades. Of the two, only Drexler seems to me likely to be remembered at all, on my reckoning (don't misunderstand me: I certainly don't expect to be "remembered" myself; I don't think that is an indispensable measure of a life well-lived, particularly).

On the flip side, it seems to me that, once one has dropped the Superlative-tinted glasses, one can say that representatives democratically accountable to majorities are already making funding decisions that support research and development into nanoscale interventions and sophisticated software. I tend to be well pleased by that sort of thing, thank you very much. If one is looking for Robot Gods or Utility Fogs, however, I suspect that in twenty years' time one will find them on the same sf bookshelves where one properly looks for them today, or looked for them twenty years ago.

Tuesday, October 23, 2007

One Wonders

Southern California is on fire. One wonders if the California National Guard might have something else they could be doing right about now rather than being forced to kill and die in a catastrophic, completely unpopular, illegal war and occupation based on lies in Iraq.

Today's Random Wilde

To be really mediæval one should have no body. To be really modern one should have no soul. To be really Greek one should have no clothes.

Monday, October 22, 2007

Zombie Trojan Elephant Not Dead Yet?

[via Calitics] Keep your eyes open, the scoundrels may be at it again:
[T]he dirty trick initiative [to split Dem-friendly California's electoral votes -- and only California's electoral votes of all the States in the Union, ya'know, fer Democracy! -- and so skew the national election to stealth yet another unpopular bloodthirsty crony capitalist idiot thug Republican into the White House so that our long National Nightmare can continue on] appears to have some new life. It is unclear quite how feasible it is for them to gather enough signatures to get it on the ballot, or if they really have the money to give it a shot. The bottom line is that there are paid signature gatherers out there trying to get names… Dave Gilliard, a Republican consultant in Sacramento who was involved in shepherding the recall petition against Gray Davis to the ballot, is reportedly involved… The deadline for qualifying the initiative for the June 2008 ballot (so that it could take effect before next year's presidential election) is Nov. 13, although such deadlines can be pushed…. Bottom line is that we should know within days if they actually have real money. Stay tuned.

My Precarity Piece

I've reworked my Precarity and Experimental Subjection essay (one of the handful of Amor Mundi posts I'm proudest of, actually) and would like to solicit comments and suggestions from you all.

Sunday, October 21, 2007

Antar in Libertopia: A Technoprogressive Dialogue on the Constitution of Liberty, Excerpted and Slightly Adapted from Kim Stanley Robinson's Blue Mars

I have been re-reading (for possibly the fifth time, I've lost count over the years) Kim Stanley Robinson's magisterial Mars Trilogy. I wanted to refresh my sense of the trilogy's structure and themes, since I'm hoping to teach a course next year in the Rhetoric Department at Berkeley devoted entirely to Robinson's three Green trilogies, the Mars Trilogy, the most recent Science in the Capital Trilogy, and his earlier Three Californias.

Anyway, I was reading a section of "A New Constitution," from Volume Three, this morning in the bath (hey, it's lazy Sunday!). In this chapter the plotting of the book is more or less suspended, and the narrative is propelled forward by little but the dynamic interplay of the characters we have grown to love as they argue politics and ideas for page after page.

In the section that I'm talking about, a stretch of pages is devoted to an exchange between one of the First Hundred -- a superannuated first-generation settler on Mars, well over a century old in consequence of medical techniques he himself had a hand in creating -- and a much younger Mars-born delegate to the Constitutional Convention that has been convened in the aftermath of a Revolutionary upheaval. The exchange functions very much in the manner of a Platonic Dialogue, with the elder Vlad in the role of Socrates, and Antar the misguided youngster ripe for elenchus. The subject of the Dialogue is one that is especially dear to my heart, the skewering of libertopian idiocies and their assumptions and the proposal of sensible (techno)progressive alternatives to them.

I have undertaken to reproduce the rhetorical and theoretical content of the exchange as such a Socratic dialogue, properly speaking, by removing plot-specific references and narrative devices, and focusing simply on the dialogic delineation of ideas. Antar remains as a character, but, as is usual in such dialogues, functions as little more than an ineffectual bowling pin for Vlad's extended formulations. I'd like to think that, were this a more proper Socratic dialogue, the exchange would end with Vlad having seduced Antar not only to his way of thinking but into his bed, earning a nice healthy fuck from the youthful Antar to let everybody know there are no hard feelings all around (one suspects from the concluding rant of Alcibiades in the Symposium that Socrates would enjoy inverting the usual Greek terms of intercourse in such a circumstance: but that controversial, or possibly wishful, claim of mine is one for another time). As it happens, Robinson takes the plot elsewhere, or at any rate does not record whether or not Vlad and Antar found their way to an exchange of fluids in the aftermath of this exchange of ideas (one strongly suspects not!).

Needless to say, I strongly recommend that anyone who has not read Robinson's trilogy do so, especially if they find technoprogressive formulations appealing, are interested in the utopian green imaginary, or simply crave science fiction that manages to be both "hard" and "literary" at once, as these rather dim designations put it. Come what may, on to Antar in Libertopia!


Delegates at the Table of Tables at the Martian Constitutional Congress squabble over the legal institutionalization of the provisions of eco-economics formulated by Vlad Taneev and Marina Tokareva. These provisions governed the Pre-Revolutionary Economic Underground, and many delegates want to codify them into the new Constitution. Some delegates complain that eco-economics is too radical because it impinges on local autonomy; others object because they have more faith in traditional capitalist economics than in any new system. Antar speaks often for this last group.

Antar: This new economy that's being proposed is a radical and unprecedented intrusion of government into business….

Vlad: What you said about government and business is absurd…. Governments always regulate the kinds of business they allow. Economics is a legal matter, a system of laws. So far, we have been saying… that as a matter of law, democracy and self-government are the innate rights of every person, and that these rights are not to be suspended when a person goes to work. You… do you believe in democracy and self-rule?

Antar: Yes!

Vlad: Do you believe in democracy and self-rule as the fundamental values that government ought to encourage?

Antar: Yes!

Vlad: Very well. If democracy and self-rule are the fundamentals, then why should people give up these rights when they enter the workplace? In politics we fight like tigers for freedom, for the right to elect our leaders, for freedom of movement, choice of residence, choice of what work to pursue -- control of our lives, in short. And then we wake up in the morning and go to work, and all those rights disappear. We no longer insist on them. And so for most of the day we return to feudalism. That is what capitalism is -- a version of feudalism in which capital replaces land, and business leaders replace kings. But the hierarchy remains. And so we still hand over our lives' labor, under duress, to feed rulers who do no real work.

Antar: Business leaders work. And they take the financial risks --

Vlad: The so-called risk of the capitalist is merely one of the privileges of capital.

Antar: Management --

Vlad: Management is a real thing, a technical matter. But it can be controlled by labor just as well as by capital. Capital itself is simply the useful residue of the work of past laborers, and it could belong to everyone as well as to a few. There is no reason why a tiny nobility should own the capital, and everyone else therefore be in service to them. There is no reason they should give us a living wage and take all the rest we produce. No! The system called capitalist democracy was not really democracy at all. That is why it was able to turn so quickly into the [corporatist] system, in which democracy grew ever weaker and capitalism ever stronger. In which one percent of the population owned half of the wealth, and five percent of the population owned ninety-five percent of the wealth. History has shown which values were real in that system. And the sad thing is that the injustice and suffering caused by it were not at all necessary, in that the technical means have existed since the eighteenth century to provide the basics of life to all.

So. We must change. It is time. If self-rule is a fundamental value, if simple justice is a value, then they are values everywhere, including in the workplace where we spend so much of our lives…. [E]veryone's work is their own, and the worth of it cannot be taken away… [T]he various modes of production belong to those who created them, and to the common good of the future generations… [T]he world is something we all steward together… [W]e have developed an economic system that can keep all those promises. That has been our work… In the system we have developed, all economic enterprises are to be small cooperatives, owned by their workers and by no one else. They hire their management, or manage themselves. Industry guilds and co-op associations will form the larger structures necessary to regulate trade and the market, share capital, and create credit.

Antar: These are nothing but ideas. It is utopianism and nothing more.

Vlad: Not at all… The system is based on [historical] models… and its various parts have all been tested… and have succeeded very well. You don't know about this partly because you are ignorant, and partly because [corporatism] itself steadfastly ignored and denied all alternatives to it. But most of our microeconomy has been in successful operation for [decades] in the Mondragon region of Spain… These organizations were the precursors to our economy, which will be democratic in a way capitalism never even tried to be.

[Another delegate objects from the opposite direction, noting that eco-economics seems to be abandoning the very possibility of the gift-economy.]

Vlad: Pure gift exchange co-exist[s] with a monetary exchange, in which neoclassical market rationality, that is to say the profit mechanism, [is] bracketed and contained by society to direct it to serve higher values, such as justice and freedom. Economic rationality is simply not the highest value. It is a tool to calculate costs and benefits, only one part in a larger equation concerning human welfare. The larger equation is called a mixed economy, and that is what we are constructing here. We are proposing a complex system, with public and private spheres of economic activity. It may be that we ask people to give, throughout their lives, about a year of their work to the public good… That labor pool, plus taxes on private co-ops for use of the land and its resources, will enable us to guarantee the so-called social rights we have been discussing -- housing, health care, food, education -- things that should not be at the mercy of market rationality. Because la salute non si paga, as the Italian workers used to say, Health is not for sale!

Antar: So nothing will be left to the market.

Vlad: The market will always exist. It is the mechanism by which things and services are exchanged. Competition to provide the best product at the best price, this is inevitable and healthy. But… it will be directed by society in a more active way. There will be not-for-profit status to vital life support matters, and then the freest part of the market will be directed away from the basics of existence toward nonessentials, where venture enterprises can be undertaken by worker-owned co-ops, who will be free to try what they like. When the basics are secured and when the workers own their own businesses, why not? It is the process of creation we are talking about.

Jackie [another young delegate, allied to Antar]: What about the ecological aspects of this economy that you used to emphasize?

Vlad: They are fundamental… [T]he land, air, and water… belong to no one… [W]e are the stewards of it for all the future generations. This stewardship will be everyone's responsibility, but in case of conflicts we propose strong environmental courts, perhaps as part of the constitutional court, which will estimate the real and complete environmental costs of economic activities, and help to coordinate plans that impact the environment.

Antar: But this is simply a planned economy!

Vlad: Economies are plans. Capitalism planned just as much as this, and [corporatism] tried to plan everything. No, an economy is a plan.

Antar: It’s simply socialism returned.

Vlad: [This] is a new totality. Names from earlier totalities are deceptive. They become little more than theological terms. There are elements one could call socialism in this system, of course. How else remove injustice from economy? But private enterprises will be owned by their workers rather than being nationalized, and this is not socialism, at least not socialism as it was usually attempted on Earth. And all the co-ops are businesses -- small democracies devoted to some work or other, all needing capital. There will be a market, there will be capital. But in our system workers will hire capital rather than the other way around. It's more democratic that way, more just. Understand me -- we have tried to evaluate each feature of this economy by how well it aids us to reach the goals of more justice and more freedom. And justice and freedom do not contradict each other as much as has been claimed, because freedom in an unjust system is no freedom at all. They both emerge together. And so it is not so impossible, really. It is only a matter of enacting a better system, by combining elements that have been tested and shown to work. This is the moment for that. We have been preparing for this opportunity for… years. And now that the chance has come, I see no reason to back off just because someone is afraid of some old words.