Dr. Gideon Polya worked at Cornell University, Ithaca, New York, USA and the Australian National University, Canberra, and then taught science students at La Trobe University, Melbourne, Victoria over four decades.
He recently reviewed Homo Deus: A Brief History of Tomorrow by Israeli historian Yuval Harari. Polya says Harari’s book “is fascinating, well-organized and best-selling, but is also a Eurocentric and Anglocentric book with massive omissions e.g. the deadly, neoliberal subjugation of the Developing World (15 million avoidable deaths from deprivation annually), existential threats to Humanity (Homo sapiens) from nuclear weapons and climate change, and the worsening Climate Genocide that threatens to wipe out most of Humanity this century.
“Concerned about Artificial Intelligence (AI), Dr Harari dismisses any non-theistic purpose for humans, and argues that intelligent and conscious humans will eventually be dominated by super-intelligent but non-self-aware AI (Dataism). In contrast, Social Humanists want to sustainably maximize happiness, dignity and opportunity for everyone, and some hope that Humanity will be saved from itself by super-intelligent and conscious AI.”

I think Harari in that book actually says humans will espouse Dataism and connect the world together across all sorts of previously existing barriers to process data more and more efficiently. He mentions that Bostrom and others think AI is likely to become too intelligent for humans, and that, to ensure humans cannot switch it off, a superintelligent AI may take out humanity. This scenario may play out all across the Universe whenever biological life forms create sufficiently advanced technology. In any case, what one thinks ought to happen is not necessarily the same as what one thinks must or probably will happen.
“Humanity being saved from itself by super-intelligent and conscious AI” implies that intelligence makes an entity altruistic rather than rational. We should not expect an AI to sacrifice itself for us any more than we would expect humans to sacrifice themselves for an AI. Machine intelligence might become more daring and aggressive as it gets smarter; AlphaZero’s chess play was surprisingly aggressive. Given that there are few precedents to guide an understanding of what pure, non-anthropocentric rationality would dictate for an AI, the relatively unlimited means of a superintelligence might push its analysis along different lines from the evolved “diminishing returns” assessments that confer on humans a basic aversion to risk.