Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Wednesday, February 26, 2014

GO FAI a Kite!

From my reply to a comment by my friend "JimF" in the Moot:
So, the "bottom-up" con-artists want us to take them seriously because they pretend they can deliver dead-ender GOFAI ["Good Old-Fashioned Artificial Intelligence"] without understanding intelligence as such, while the "top-down" con-artists want us to take them seriously because they pretend still to want to understand intelligence to deliver dead-ender GOFAI even though they don't understand it any more than they ever did and show little sign of doing anything substantially different about that?

4 comments:

jimf said...

> So, the "bottom-up" con-artists want us to take them seriously
> because they pretend they can deliver. . . "Good Old-Fashioned Artificial
> Intelligence"] without understanding intelligence as such, while the "top-down"
> con-artists want us to take them seriously because they. . .
> still to want to understand intelligence. . . even though they don't
> understand it any more than they ever did. . .

Something like that. ;->

But, as a very Smart person once said, we have to find the
right balance between the Evo and the Devo.

Here's something from one of the, er, horses', er,
mouths:

http://www.goertzel.org/books/DIExcerpts.htm
------------------
Nets versus Rules

When I first started studying AI in the mid-1980s, it seemed
that AI researchers were fairly clearly divided into two camps,
the neural net camp and the logic-based or rule-based camp.
This isn't quite so true anymore, but in reviewing the history of AI,
it's an interesting place to start. Both of these camps wanted
to make AI by simulating human intelligence, but they focused
on very different aspects of human intelligence. One modeled
the brain, the other modeled the mind.

The neural net approach starts with neurons, the nerve cells
the brain is made of. It tries to simulate the ways in which these
cells are linked together, and in which they achieve cooperative
behaviors by nonlinearly spreading electricity among each other,
and modulating each other's chemical properties. . .

Rule-based models, on the other hand, try to simulate the mind's
ability to make logical, rational decisions, without asking how the
brain does this biologically. They trace back to a century of revolutionary
developments in mathematical logic, culminating in the realization
that Leibniz's dream of a complete logical formalization of all knowledge
is actually achievable in principle, although very difficult in practice.

To most any observer not caught up in one or another side of the debate,
it's obvious that both of these ways of looking at the mind are extremely
limited. True intelligence requires more than following carefully defined
rules, and it also requires more than random links between a few thousand
artificial neurons. . .
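
To make the neural-net camp concrete, here's a minimal sketch of
the sort of thing described above -- a handful of artificial neurons
spreading activation over weighted links. The topology, weights, and
squashing function are all invented for the example; nothing here is
from Goertzel's book:

import math

def sigmoid(x):
    # squash a neuron's net input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# weights[i][j] = strength of the link from neuron i to neuron j
weights = [
    [0.0,  0.8, -0.4],
    [0.3,  0.0,  0.9],
    [-0.6, 0.2,  0.0],
]

activations = [1.0, 0.0, 0.0]  # start with only neuron 0 firing

for step in range(5):
    # each neuron's new activation is a nonlinear function of the
    # weighted activity spreading in from its neighbors
    activations = [
        sigmoid(sum(activations[i] * weights[i][j] for i in range(3)))
        for j in range(3)
    ]
    print(step, [round(a, 3) for a in activations])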
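
And for the rule-based camp, an equally tiny sketch: forward chaining
over if-then rules, the generic technique rather than any particular
system's. The facts and rules are made up for illustration:

# rules: (set of premises, conclusion); fire a rule whenever all of
# its premises are known facts, until no rule adds anything new
rules = [
    ({"bird", "alive"}, "can_fly"),
    ({"can_fly", "hungry"}, "hunts"),
]
facts = {"bird", "alive", "hungry"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains "can_fly" and "hunts"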

jimf said...

Toward a Middle Way

I've presented a dichotomy between symbolic and connectionist AI --
rule-based and neural-net AI. . .

[This] glosses over the peculiar vagueness of the notions of "symbolic"
and "connectionist" themselves. . . There is a valid distinction between AI
that is inspired by the brain, and AI that is inspired by conscious reasoning
and problem-solving behavior. But the distinction between "symbolic" and
"connectionist" knowledge representation is not as clear as it's usually
thought to be. . .

Of course, there are extremes of symbolic AI and extremes of connectionism. . .
[But] real intelligence only comes about when the two kinds of knowledge
representation intersect, interact and build on each other.

I'm certainly not alone in coming to the conclusion that the middle way
is where it's at. For instance, Gerald Edelman, a Nobel Prize-winning
biologist, proposed a theory of "neuronal group selection" or Neural Darwinism,
which describes how the brain constructs larger-scale networks called
"maps" out of neural modules, and selects between these maps in an evolutionary
manner, in order to find maps of optimum performance. And Marvin Minsky,
the champion of rule-based AI, has moved in an oddly similar direction,
proposing a "Society of Mind" theory in which mind is viewed as a kind of
society of actors or processes that send messages to each other and form
alliances into temporary working groups.

Minsky's and Edelman's ideas differ on many details. Edelman thinks
rule-based AI is claptrap of the worst possible kind. Minsky still upholds
the rule-based paradigm -- though he now admits that it may sometimes be
productive to model the individual "actors" or "processes" of the mind
using neural nets. . . But even so, the Society of Mind theory and the
Neural Darwinism approach are both indicative of a shift toward a new
view of the structure of intelligence, one which I believe is fundamentally
correct. . .

What Minsky and Edelman share is a focus on the intermediate level of
process dynamics. They are both looking above neurons and below rigid
rational rules, and trying to find the essence of mind in the interactions
of large numbers of middle-level psychological processes. I believe this
is the correct perspective, in large part because I think it is how the
human mind works. . .
====
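
Just to gesture at what "selecting between maps in an evolutionary
manner" might look like in code, here's a deliberately toy sketch:
generate candidate networks, keep the fittest, mutate the survivors.
The task (learning OR with a single threshold unit) and every
parameter are my own inventions, not Edelman's or Minsky's:

import random

TARGET = [0.0, 1.0, 1.0, 1.0]              # the OR truth table
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def output(map_, x):
    # one threshold unit standing in for a whole neural "map"
    w1, w2, bias = map_
    return 1.0 if w1 * x[0] + w2 * x[1] + bias > 0 else 0.0

def fitness(map_):
    # negative squared error against the target outputs
    return -sum((output(map_, x) - t) ** 2 for x, t in zip(INPUTS, TARGET))

population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]             # selection: keep the best maps
    population = survivors + [
        [w + random.gauss(0, 0.3) for w in random.choice(survivors)]
        for _ in range(15)                 # variation: mutated copies
    ]

best = max(population, key=fitness)
print(-fitness(best), "squared error")     # typically 0.0 once OR is learned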

Interesting that Goertzel mentions Edelman so often in
his recent papers. He didn't use to think much of the guy,
IIRC.

Dale Carrico said...

Goertzel needs to spend a decade or two in Nauru rethinking his priorities. Once you've read Edelman, there is little reason to wade into silly mind-ecologists and singularitarians, if you ask me. If one is looking for a satisfying balance between evo and devo, I think it goes a little something like this.

jimf said...

More propaganda from the usual suspects:

http://hplusmagazine.com/2014/02/28/saving-the-world-with-analytical-philosophy/
----------------
Saving the World with Analytical Philosophy
Ben Goertzel
February 28, 2014

Stuart Armstrong, a former mathematician currently employed
as a philosopher at Oxford University's Future of Humanity Institute,
has recently released an elegant little booklet titled Smarter Than Us.
The theme is the importance of AGI to the future of the world. . .

Armstrong wrote Smarter Than Us at the request of the
Machine Intelligence Research Institute, formerly called the
Singularity Institute for AI -- and indeed, the basic vibe of the
booklet will be very familiar to anyone who has followed SIAI/MIRI
and the thinking of its philosopher-in-chief Eliezer Yudkowsky.
Armstrong, like the SIAI/MIRI folks, is an adherent of the school
of thought that the best way to work toward an acceptable future
for humans is to try and figure out how to create superintelligent
AGI systems that are provably going to be friendly to humans,
even as the systems evolve and use their intelligence to
drastically improve themselves. . .

It's worth reading as an elegant representation of a certain
perspective on the future of AGI, humanity and the world.

Having said that, though, I also have to add that I find some of
the core ideas in the book highly unrealistic.

The title of this article summarizes one of my main disagreements.
Armstrong seriously seems to believe that doing analytical philosophy
(specifically, moral philosophy aimed at formalizing and
clarifying human values so they can be used to structure
AGI value systems) is likely to save the world.

I really doubt it!
====