Kevin Barrett Podcasts
Gideon Polya on (Post)-Humanism

Dr. Gideon Polya has worked at Cornell University, Ithaca, New York, USA, and the Australian National University, Canberra, and subsequently taught science students at La Trobe University, Melbourne, Victoria, over four decades.

He recently reviewed Homo Deus: A Brief History of Tomorrow by Israeli historian Yuval Harari. Polya says Harari’s book “is fascinating, well-organized and best-selling, but is also a Eurocentric and Anglocentric book with massive omissions, e.g. the deadly, neoliberal subjugation of the Developing World (15 million avoidable deaths from deprivation annually), existential threats to Humanity (Homo sapiens) from nuclear weapons and climate change, and the worsening Climate Genocide that threatens to wipe out most of Humanity this century.

“Concerned about Artificial Intelligence (AI), Dr Harari dismisses any non-theistic purpose for humans, and argues that intelligent and conscious humans will eventually be dominated by super-intelligent but non-self-aware AI (Dataism). In contrast, Social Humanists want to sustainably maximize happiness, dignity and opportunity for everyone, and some hope that Humanity will be saved from itself by super-intelligent and conscious AI.”

(Republished from Truth Jihad by permission of author or representative)
 
• Category: Ideology, Science • Tags: AI, Futurism, Transhumanism 
  1. Sean says:

    “Concerned about Artificial Intelligence (AI), Dr Harari dismisses any non-theistic purpose for humans, and argues that intelligent and conscious humans will eventually be dominated by super-intelligent but non-self-aware AI (Dataism). In contrast, Social Humanists want to sustainably maximize happiness, dignity and opportunity for everyone, and some hope that Humanity will be saved from itself by super-intelligent and conscious AI.”

    I think Harari in that book actually says humans will espouse Dataism and connect the world together across all sorts of previously existing barriers in order to process data more and more efficiently. He mentions that Bostrom and others think AI is likely to become too intelligent for humans, and that a super-intelligent AI may take out humanity to ensure it is not switched off. This scenario may play out all across the Universe whenever biological life forms create sufficiently advanced technology. Anyway, what one thinks ought to happen is not necessarily the same as what one thinks must or probably will happen.

    “Humanity being saved from itself by super-intelligent and conscious AI” implies that intelligence makes an entity altruistic rather than rational. We should not expect an AI to sacrifice itself for us any more than we expect humans to sacrifice themselves for an AI. Machine intelligence might become more daring and aggressive as it gets smarter; AlphaZero’s chess play was surprisingly aggressive. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for an AI, the relatively unlimited means of a superintelligence might push its analysis along different lines from the evolved “diminishing returns” assessments that give humans a basic aversion to risk.
