2016/11/07 David Gelernter, “Consciousness, Computers, and the Tides of Mind”, Econtalk (MP3 audio)

The most destructive analogy in the last 100 years, says @DavidGelernter with @econtalker: “Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer”. In the interview, Gelernter states his position that consciousness won’t be found in a computer.

The cited passage is visible on Google Books:

In his famous 1950 paper about artificial intelligence, Alan Turing mentions consciousness, in passing, as a phenomenon associated with minds, in some ways mysterious. But he treats it as irrelevant. If you define the purpose of mind as rational thought, then consciousness certainly seems irrelevant. And for Turing, rational thought was indeed the purpose of mind.

Turing’s favorite word in this connection is “intelligence”: he saw the goal of technology not as an artificial mind (with all its unnecessary emotions, reminiscences, fascinating sensations, and upsetting nightmares), but as artificial intelligence, which is why the field has the name it does.

In no sense did this focus reflect narrowness or lack of imagination on Turing’s part. Few more imaginative men have ever lived. But he needed digital computers for practical purposes. Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer—the most important, most influential and (intellectually) most destructive analogy in the last hundred years (the last hundred at least). [emphasis added]

Turing writes in his 1950 paper that, with time and thought, one might well be able to build a digital computer that could “enjoy” strawberries and cream. But, he adds, don’t hold your breath. Such a project would be “idiotic”—so why should science bother? In practical terms, he has a point.

To understand the mind, we must go over the ground beyond logic as carefully as we study logic and reasoning. That’s not to say that rational thought does not underlie man’s greatest intellectual achievements. Cynthia Ozick reminds us, furthermore, of a rational person’s surprise at “how feeling could be so improbably distant from knowing” (Foreign Bodies). It’s much easier to feel something is right than to prove it. And when you do try to prove it, you might easily discover that despite your perfectly decided, rock-solid feeling of certainty, your feelings are total nonsense.

We have taken this particular walk, from the front door to the far end of Rationality Park, every day for the last two thousand years. Why not go a little farther this time, and venture beyond the merely rational?

David Gelernter, The Tides of Mind (2016), Chapter 5

The idea is further explored in the interview.

42:44 Russ Roberts:  [….] So, you are a skeptic about the ability of artificial intelligence to eventually mimic or emulate a brain. So, talk about why. And then why you feel that that analogy is so destructive: because it is extremely popular and accepted by many, many people. Not by me, but by many people, smarter than I am, actually. So, what’s wrong with that analogy, and why is it destructive?

David Gelernter: Well, I think you have to be careful in saying what exactly the analogy is.

On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we’ll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there’s a tremendous amount to be done that we haven’t done yet.

On the other hand, if by emulating the mind you mean achieving consciousness–having feelings, awareness–I think as a matter of fact that computers will never achieve that.

Any program, any software that you deal with, any robot that you deal with will always be a zombie in the sense that–in the Hollywood and philosophers’ sense of zombie–zombie is a very powerful word in philosophy. In the sense that its behavior might be very impressive–I mean, you might give it a typical mathematics problem to solve or read it something from a newspaper and ask it to comment or give it all sorts of tests you can think of, and it might pass with flying colors. You might walk away saying, ‘This guy is smarter than my best friend,’ and, you know, ‘I look forward to chatting with him again.’ But when you open up the robot’s head, there’s nothing in there. There’s nothing inside. There’s no consciousness.

Source

“David Gelernter on Consciousness, Computers, and the Tides of Mind” | Russ Roberts | Nov. 7, 2016 | Econtalk at http://www.econtalk.org/david-gelernter-on-consciousness-computers-and-the-tides-of-mind , MP3 audio downloadable at http://files.libertyfund.org/econtalk/y2016/Gelernterconsciousness.mp3

About

David Ing blogs at coevolving.com , photoblogs at daviding.com , and microblogs at http://ingbrief.wordpress.com .
