Amazon.ca:Customer reviews: The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do

Customer reviews

4.5 out of 5 stars (137 global ratings)
5 star: 70%
4 star: 18%
3 star: 8%
2 star: 3%
1 star: 1%
The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do
by Erik J. Larson

137 total ratings, 23 with reviews


From Canada

There are 0 reviews and 2 ratings from Canada

From other countries

Gary Shorthouse
5.0 out of 5 stars “Humm I don’t know that one…”
Reviewed in the United Kingdom on May 22, 2021
Verified Purchase
“Humm I don’t know that one…”
We have all heard it from Alexa. Given the level of investment that has gone into such projects, the result is more than disappointing. Having been a keen (amateur) follower of AI, I avidly soaked up Nick Bostrom’s book “Superintelligence”, recommending it to friends. But lately I have grown suspicious of the hype. What are AI and machine learning, after all? As a Go player for many decades, I was astonished at the power of AlphaGo. The “divine move” in game four against Lee Sedol stunned the Go-playing community. A sign of true creativity?

Of course not. And Erik J. Larson spells it all out beautifully. It is a brave book. It meticulously dismantles all the hype and, in easy-to-understand words, tells us the true story of AI and its evolution from Turing to today. Clear, well-paced, insightful and fun to read. One of the best books on the subject I have ever read. And that includes Bostrom’s work. Larson assures us that AI holds nothing to be scared of – and I think he’s right.
5 people found this helpful
Anderson R.
5.0 out of 5 stars The best AI book of 2021 in my opinion.
Reviewed in Brazil on January 3, 2022
Verified Purchase
The best AI book of 2021, in my opinion. Enticing, critical, well-grounded. It is really thought-provoking. Totally recommend it!
William A. Dembski
5.0 out of 5 stars Unseating the Inevitability Narrative
Reviewed in the United States on April 12, 2021
Verified Purchase
Back in 1998, I moderated a discussion at which Ray Kurzweil gave listeners a preview of his then forthcoming book THE AGE OF SPIRITUAL MACHINES, in which he described how machines were poised to match and then exceed human cognition, a theme he doubled down on in subsequent books (such as THE SINGULARITY IS NEAR and HOW TO CREATE A MIND). For Kurzweil, it is inevitable that machines will match and then exceed us: Moore's Law guarantees that machines will attain the needed computational power to simulate our brains, after which the challenge will be for us to keep pace with machines.

Kurzweil's respondents at the discussion were John Searle, Thomas Ray, and Michael Denton, and they were all to varying degrees critical of his strong AI view. Searle recycled his Chinese Room thought experiment to argue that computers don't/can't actually understand anything. Denton made an interesting argument about the complexity and richness of individual neurons, how inadequate our understanding of them is, and how even more inadequate our ability is to model them realistically in a computer. At the end of the discussion, however, Kurzweil's overweening confidence in the glowing prospects for strong AI's future was undiminished. And indeed, it remains undiminished to this day (I last saw Kurzweil at a Seattle tech conference in 2019 -- age seemed to have mellowed his person but not his views).

Erik Larson's THE MYTH OF ARTIFICIAL INTELLIGENCE is far and away the best refutation of Kurzweil's overpromises, but also of the hype pressed by those who have fallen in love with AI's latest incarnation, which is the combination of big data with machine learning. Just to be clear, Larson is not a contrarian. He does not have a death wish for AI. He is not trying to sabotage research in the area (if anything, he is trying to extricate AI research from the fantasy land it currently inhabits). In fact, he has been a solid contributor to the field, coming to the problem of strong AI, or artificial general intelligence (AGI) as he prefers to call it, with an open mind about its possibilities. 

The problem with the field, as he sees it, is captured in the parable of the drunk looking for his keys under a lamppost even though he dropped them far from it, because that's where the light is. In the spirit of this parable, Larson makes a compelling case that actual research on AI is happening in those areas where the keys to artificial general intelligence simply cannot exist. But he takes the parable even one step further: because no theory exists of what it means for a machine to have a cognitive life, he suggests it's not clear that artificial general intelligence even has a solution -- human intelligence may not, in the end, be reducible to machine intelligence. In consequence, if there are keys to unlocking AGI, we're looking for them in the wrong places; and it may even be that there are no such keys.

Larson does not argue that artificial general intelligence is impossible but rather that we have no grounds to think it must be so. He is therefore directly challenging the inevitability narrative promoted by people like Ray Kurzweil, Nick Bostrom, and Elon Musk. At the same time, Larson leaves AGI as a live possibility throughout the book, and he seems genuinely curious to hear from anybody who might have some good ideas about how to proceed. His central point, however, is that such good ideas are for now wholly lacking -- that research on AI is producing results only when it works on narrow problems and that this research isn't even scratching the surface of the sorts of problems that need to be resolved in order to create an artificial general intelligence. Larson's case is devastating, and I use this adjective without exaggeration. 

I've followed the field of AI for four decades. In fact, I received an NSF graduate fellowship in the early 1980s to make a start at constructing an expert system for doing statistics (my advisor was Leland Wilkinson, founder of SYSTAT, and I even worked for his company in the summer of 1987 -- unfortunately, the integration of LISP, the main AI language back then, with the Fortran code that underlay his SYSTAT statistical package proved an intractable problem at the time). I witnessed in real time the shift from rule-based AI (common with expert systems) to the computational intelligence approach to AI (evolutionary computing, fuzzy sets, and neural nets) to what has now become big data and deep/machine learning. I saw the rule-based approach to AI peter out. I saw computational intelligence research, such as conducted by my colleague Robert J. Marks II, produce interesting solutions to well-defined problems, but without pretensions for creating artificial minds that would compete with human minds. And then I saw the machine learning approach take off, with its vast profits for big tech and the resulting hubris to think that technologies created to make money could also recreate the inventors of those technologies.

Larson comes to this project with training as a philosopher and as a programmer, a combination I find refreshing in that his philosophy background makes him reflective and measured as he considers the inflated claims made for artificial general intelligence (such as the shameless promise, continually made, that it is just right around the corner -- is there any difference with the Watchtower Society and its repeated failed prophecies about the Second Coming?). I also find it refreshing that Larson has a humanistic and literary bent, which means he's not going to set the bar artificially low for what can constitute an artificial general intelligence. 

The mathematician George Polya used to quip that if you can't solve a given problem, find an easier problem that you can solve. This can be sound advice if the easier problem that you can solve meaningfully illuminates the more difficult problem (ideally, by actually helping you solve the more difficult problem). But Larson finds that this advice is increasingly used by the AI community to substitute simple problems for the really hard problems facing artificial general intelligence, thereby evading the hard work that needs to be done to make genuine progress. So, for Larson, world-class chess, Go, and Jeopardy playing programs are impressive as far as they go, but they prove nothing about whether computers can be made to achieve AGI.

Larson presents two main arguments for why we should not think that we're anywhere close to solving the problem of AGI. His first argument centers on the nature of inference, his second on the nature of human language. With regard to inference, he shows that a form of reasoning known as abductive inference, or inference to the best explanation, is for now without any adequate computational representation or implementation. To be sure, computer scientists are aware of their need to corral abductive inference if they are to succeed in producing an artificial general intelligence. True, they've made some stabs at it, but those stabs come from forming a hybrid of deductive and inductive inference. Yet as Larson shows, the problem is that neither deduction, nor induction, nor their combination is adequate to reconstruct abduction. Abductive inference requires identifying hypotheses that explain certain facts or states of affairs in need of explanation. The problem with such hypothetical or conjectural reasoning is that the range of hypotheses is virtually infinite. Human intelligence can, somehow, sift through these hypotheses and identify those that are relevant. Larson's point, and one he convincingly establishes, is that we don't have a clue how to do this computationally.

His other argument for why an artificial general intelligence is nowhere near lift-off concerns human language. Our ability to use human language is only in part a matter of syntactics (how letters and words may be fit together). It also depends on semantics (what the words mean, not only individually, but also in context, and how words may change meaning depending on context) as well as on pragmatics (what the intent of the speaker is in influencing the hearer by the use of language). Larson argues that we have, for now, no way to computationally represent the knowledge on which the semantics and pragmatics of language depend. As a consequence, linguistic puzzles that are easily understood by humans and which were identified over fifty years ago as beyond the comprehension of computers are still beyond their power of comprehension. Thus, for instance, single-sentence Winograd schemas, in which a pronoun could refer to one of two antecedents, and where the right antecedent is easily identified by humans, remain to this day opaque to machines -- machines do no better than chance at guessing the right antecedent. That's one reason Siri and Alexa are such poor conversation partners.

THE MYTH OF ARTIFICIAL INTELLIGENCE is not just insightful and timely, but it is also funny. Larson, with an insider's knowledge, describes how the sausage of AI is made, and it's not pretty -- it can even be ridiculous. Larson retells with enjoyable irony the story of Eugene Goostman, the Ukrainian 13-year-old chatbot, who/which through sarcasm and misdirection convinced a third of judges in a Turing test, over a five-minute interaction, that it was an actual human being. No, argues Larson, Goostman did not legitimately pass the Turing test, and computers are still nowhere near passing it, especially if people and computers need to answer rather than evade questions. With mirth, Larson also retells the story of Tay, the Microsoft chatbot that very quickly learned how to make racist tweets, and got him/itself just as quickly retired.

And then there's my favorite, Larson's retelling of the Google image recognizer that identified a human as a gorilla. By itself that would not be funny, but what is funny is what Google did to resolve the problem. You'd think that the way to solve this problem, especially for a tech giant like Google, would be simply to fix the problem by making the image recognizer more powerful in its ability to discriminate humans from gorillas. But not Google. Instead, Google simply removed all references to gorillas from the image recognizer. Problem solved! It's like going to a doctor with an infected finger. You'd like the doctor to treat the infection and restore the finger to full use. But what Google did is more like a doctor just chopping off your finger. Gone is the infection. But -- gosh isn't it too bad -- so is the finger. 

We live in a cultural climate that loves machines and where the promise of artificial general intelligence assumes, at least for some, religious proportions. The thought that we can upload ourselves onto machines intrigues many. So why not look forward to the prospect of doing so, especially since some very smart people guarantee that machine supremacy is inevitable? Larson in THE MYTH OF ARTIFICIAL INTELLIGENCE successfully unseats this inevitability narrative. After reading this book, believe if you like that the singularity is right around the corner, that humans will soon be pets of machines, that benign or malevolent machine overlords are about to become our masters. But after reading this book, know that such a belief is unsubstantiated and that neither science nor philosophy backs it up.
147 people found this helpful
Mike Pool
5.0 out of 5 stars Comprehensive and accessible
Reviewed in the United States on April 11, 2021
Verified Purchase
Excellent book that is accessible, clearly written, and effectively argued. The author's framing of our current thinking about AI in the context of the important work and thought of Gödel and Turing is worth the cover price on its own. He then gives an excellent survey of the history and current state of affairs surrounding AI, (i) incisively undermining the widespread assumption/belief that general AI will emerge soon and naturally from our current strategies, and (ii) demonstrating why this assumption is counterproductive.
13 people found this helpful
Gary G. Forbis
5.0 out of 5 stars A thought provoking read.
Reviewed in the United States on April 22, 2021
Verified Purchase
I have been an armchair hobbyist for decades, so I was familiar with most of the information presented here. Nonetheless, this was an easy read, quite accessible to most people.

I am a technologist at heart, so it may come as a surprise that I fell in love with John Searle’s “Chinese Room” intuition pump when I first read about it in the early 1980s. There is indeed a difference between the real thing and a simulation. It troubles me that both John Searle and Erik Larson seem to think there is something distinctly different between what a computer is doing and what a human is doing. Both are merely engaged in physical processes. All else is attribution by an outside system, including the assertion that a system is using deductive, inductive, or abductive logic. No mapping between physical processes and the attributions by which we claim to understand them changes the physical processes and their properties.
8 people found this helpful
Never Stop Learning
5.0 out of 5 stars Artificial “Intelligence”?
Reviewed in the United States on June 14, 2021
Verified Purchase
A well-developed exposure, with many specific supporting examples, of the popular notion that human intelligence embodied in machinery is inevitable. The book contends that general human intelligence does not even have any underlying credible theory that could lead to the construction of such a machine. And the book asserts that such a theory would be a radical departure from any of the current so-called artificial intelligence applications. My question is whether such a theory would even bring into question a materialistic view of the universe… a thought that is anathema to the current scientific establishment.
6 people found this helpful
Gordon Silverman
5.0 out of 5 stars Laying down the gauntlet
Reviewed in the United States on June 30, 2021
Verified Purchase
Larson lays down the gauntlet in the introduction: “The myth of AI is that its arrival is inevitable.” And he follows through on this challenge. The narrative follows an arc from the fallacies of logic, the foundation of AI algorithms (including Gödel’s incompleteness theorems), to the limits of “imitation” that characterize modern AI systems. A compelling argument is presented that learning systems are actually just narrow problem-solving systems. There is a thorough examination of the quest for the Singularity and its inherent problems and limitations. To date, we have no cognitive understanding of human consciousness, which is essential before we can hope to reach the goal of Artificial General Intelligence (AGI). For those who have embraced the inevitability of AGI, addressing these arguments is essential.
3 people found this helpful
Allen Meece
2.0 out of 5 stars Fractured writing
Reviewed in the United States on September 25, 2021
Verified Purchase
I agree that AI will never be totally fluent in English conversation. Too much of what we say is unsaid and understood.
The author went on and on defining the problem with our grammar: machines can't get their Fortran around it. That feels good, like a protection from their mastery of our lives.
Most of the book read like a haystack of information that was not fluid and clear. Textbookish and unclear sentences. A lot of the jargon was not clarified, making sections of the book undecipherable to the general public. I am a novelist and a naval veteran, having written "Tin Can", a naval novel about my destroyer experience off the coast of Vietnam. I learned that prose has to be polished, edited, and rewritten two or three times if it is to make a good book. That was not done for The Myth of AI. See my novel "Tin Can" by searching "Allen Meece" on this Amazon website.
2 people found this helpful
@BobbyGvegas
5.0 out of 5 stars Thorough, accessible, and a fun read
Reviewed in the United States on October 22, 2021
Verified Purchase
I read all of the reviews. I second, in particular, Bill Dembski's assessment. I can't improve on that. Erik's stuff on C.S. Peirce and "abductive inference" alone was worth the price. Erik is right: we would do well to dial back the AI hyperbole. My personal interest is in ordinary language "argument analysis and evaluation," which I studied in grad school. It's tedious. AI NLU (Natural Language Understanding) is unlikely to materially help anytime soon.

Great Job, Mr. Larson. Excellent.
2 people found this helpful
Laurence Prusak
5.0 out of 5 stars Excellent overview of what AI is and isn't
Reviewed in the United States on July 20, 2021
Verified Purchase
This is a sophisticated and highly intelligent review of what "intelligence" actually is and why no machine will ever think the way humans do. We have intelligence (well, almost all of us) and machines do not. The author provides some very well explained philosophical reasons as to how deduction and induction, the major reasoning dynamics in AI, exclude abduction, which is mainly how we, as humans, understand how things work. This book is a fine remedy to ward off all the techno-utopians selling us the latest device they can devise. Read it and think!
One person found this helpful
