2016/11/07 David Gelernter, “Consciousness, Computers, and the Tides of Mind”, Econtalk (MP3 audio)

The most destructive analogy of the last 100 years, says @DavidGelernter with @econtalker: “Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer”. In the interview, Gelernter argues that consciousness will never be found in a computer.

The cited passage is visible on Google Books:

In his famous 1950 paper about artificial intelligence, Alan Turing mentions consciousness, in passing, as a phenomenon associated with minds, in some ways mysterious. But he treats it as irrelevant. If you define the purpose of mind as rational thought, then consciousness certainly seems irrelevant. And for Turing, rational thought was indeed the purpose of mind.

Turing’s favorite word in this connection is “intelligence”: he saw the goal of technology not as an artificial mind (with all its unnecessary emotions, reminiscences, fascinating sensations, and upsetting nightmares), but as artificial intelligence, which is why the field has the name it does.

In no sense did this focus reflect narrowness or lack of imagination on Turing’s part. Few more imaginative men have ever lived. But he needed digital computers for practical purposes. Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer—the most important, most influential and (intellectually) most destructive analogy in the last hundred years (the last hundred at least). [emphasis added]

Turing writes in his 1950 paper that, with time and thought, one might well be able to build a digital computer that could “enjoy” strawberries and cream. But, he adds, don’t hold your breath. Such a project would be “idiotic”—so why should science bother? In practical terms, he has a point.

To understand the mind, we must go over the ground beyond logic as carefully as we study logic and reasoning. That’s not to say that rational thought does not underlie man’s greatest intellectual achievements. Cynthia Ozick reminds us, furthermore, of a rational person’s surprise at “how feeling could be so improbably distant from knowing” (Foreign Bodies). It’s much easier to feel something is right than to prove it. And when you do try to prove it, you might easily discover that despite your perfectly decided, rock-solid feeling of certainty, your feelings are total nonsense.

We have taken this particular walk, from the front door to the far end of Rationality Park, every day for the last two thousand years. Why not go a little farther this time, and venture beyond the merely rational?

David Gelernter, The Tides of Mind (2016), Chapter 5

The idea is further explored in the interview.

42:44 Russ Roberts:  [….] So, you are a skeptic about the ability of artificial intelligence to eventually mimic or emulate a brain. So, talk about why. And then why you feel that that analogy is so destructive: because it is extremely popular and accepted by many, many people. Not by me, but by many people, smarter than I am, actually. So, what’s wrong with that analogy, and why is it destructive?

David Gelernter: Well, I think you have to be careful in saying what exactly the analogy is.

On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we’ll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there’s a tremendous amount to be done that we haven’t done yet.

On the other hand, if by emulating the mind you mean achieving consciousness–having feelings, awareness–I think as a matter of fact that computers will never achieve that.

Any program, any software that you deal with, any robot that you deal with will always be a zombie—in the Hollywood and philosophers’ sense of zombie; zombie is a very powerful word in philosophy—in the sense that its behavior might be very impressive. I mean, you might give it a typical mathematics problem to solve, or read it something from a newspaper and ask it to comment, or give it all sorts of tests you can think of, and it might pass with flying colors. You might walk away saying, ‘This guy is smarter than my best friend,’ and, you know, ‘I look forward to chatting with him again.’ But when you open up the robot’s head, there’s nothing in there. There’s nothing inside. There’s no consciousness.

Source

“David Gelernter on Consciousness, Computers, and the Tides of Mind” | Russ Roberts | Nov. 7, 2016 | Econtalk at http://www.econtalk.org/david-gelernter-on-consciousness-computers-and-the-tides-of-mind , MP3 audio downloadable at http://files.libertyfund.org/econtalk/y2016/Gelernterconsciousness.mp3

About

David Ing blogs at coevolving.com, photoblogs at daviding.com, and microblogs at http://ingbrief.wordpress.com .
