2016/11/07 David Gelernter, “Consciousness, Computers, and the Tides of Mind”, Econtalk (MP3 audio)

The most destructive analogy of the last 100 years, says @DavidGelernter with @econtalker: “Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer”. The interview presents his position that consciousness will not be found in a computer.

The cited passage is visible on Google Books:

In his famous 1950 paper about artificial intelligence, Alan Turing mentions consciousness, in passing, as a phenomenon associated with minds, in some ways mysterious. But he treats it as irrelevant. If you define the purpose of mind as rational thought, then consciousness certainly seems irrelevant. And for Turing, rational thought was indeed the purpose of mind.

Turing’s favorite word in this connection is “intelligence”: he saw the goal of technology not as an artificial mind (with all its unnecessary emotions, reminiscences, fascinating sensations, and upsetting nightmares), but as artificial intelligence, which is why the field has the name it does.

In no sense did this focus reflect narrowness or lack of imagination on Turing’s part. Few more imaginative men have ever lived. But he needed digital computers for practical purposes. Post-Turing thinkers decided that brains were organic computers, that computation was a perfect model of what minds do, that minds can be built out of software, and that mind relates to brain as software relates to computer—the most important, most influential and (intellectually) most destructive analogy in the last hundred years (the last hundred at least). [emphasis added]

Turing writes in his 1950 paper that, with time and thought, one might well be able to build a digital computer that could “enjoy” strawberries and cream. But, he adds, don’t hold your breath. Such a project would be “idiotic”—so why should science bother? In practical terms, he has a point.

To understand the mind, we must go over the ground beyond logic as carefully as we study logic and reasoning. That’s not to say that rational thought does not underlie man’s greatest intellectual achievements. Cynthia Ozick reminds us, furthermore, of a rational person’s surprise at “how feeling could be so improbably distant from knowing” (Foreign Bodies). It’s much easier to feel something is right than to prove it. And when you do try to prove it, you might easily discover that despite your perfectly decided, rock-solid feeling of certainty, your feelings are total nonsense.

We have taken this particular walk, from the front door to the far end of Rationality Park, every day for the last two thousand years. Why not go a little farther this time, and venture beyond the merely rational?

David Gelernter, The Tides of Mind (2016), Chapter 5

The idea is further explored in the interview.

42:44 Russ Roberts:  [….] So, you are a skeptic about the ability of artificial intelligence to eventually mimic or emulate a brain. So, talk about why. And then why you feel that that analogy is so destructive: because it is extremely popular and accepted by many, many people. Not by me, but by many people, smarter than I am, actually. So, what’s wrong with that analogy, and why is it destructive?

David Gelernter: Well, I think you have to be careful in saying what exactly the analogy is.

On the one hand, I think AI (Artificial Intelligence) has enormous potential in terms of imitating or faking it, when it comes to intelligence. I think we’ll be able to build software that certainly gives you the impression of solving problems in a human-like or in an intelligent way. I think there’s a tremendous amount to be done that we haven’t done yet.

On the other hand, if by emulating the mind you mean achieving consciousness–having feelings, awareness–I think as a matter of fact that computers will never achieve that.

Any program, any software that you deal with, any robot that you deal with will always be a zombie in the sense that–in the Hollywood and philosophers’ sense of zombie–zombie is a very powerful word in philosophy. In the sense that its behavior might be very impressive–I mean, you might give it a typical mathematics problem to solve or read it something from a newspaper and ask it to comment or give it all sorts of tests you think of, and it might pass with flying colors. You might walk away saying, ‘This guy is smarter than my best friend,’ and, you know, ‘I look forward to chatting with him again.’ But when you open up the robot’s head, there’s nothing in there. There’s nothing inside. There’s no consciousness.

Source

“David Gelernter on Consciousness, Computers, and the Tides of Mind” | Russ Roberts | Nov. 7, 2016 | Econtalk at http://www.econtalk.org/david-gelernter-on-consciousness-computers-and-the-tides-of-mind , MP3 audio downloadable at http://files.libertyfund.org/econtalk/y2016/Gelernterconsciousness.mp3

About

David Ing blogs at coevolving.com , photoblogs at daviding.com , and microblogs at http://ingbrief.wordpress.com .

Posted in Talk Audio Download
