The Unz Review: James Thompson Archive
Game Over for Humans?
An algorithm that learns, tabula rasa, superhuman proficiency in challenging domains.
[Figure: AlphaGo Zero after 36 hours]

It is usual to distinguish between biological and machine intelligence, and for good reason: organisms have interacted with the world for millennia and survived, machines are a recent human construction, and until recently there was no reason to consider them capable of intelligent behaviour.

Computers changed the picture somewhat, but until very recently artificial intelligence had been tried and had proved disappointing. As computers and programs increased in power and speed, a defensive trope developed: a computer will never write a poem / enjoy strawberries / understand the wonder of the universe / play chess / have an original thought.

When IBM’s Deep Blue beat Kasparov there was a moment of silence. The best that could be proffered as an excuse was that chess was an artificial world, bounded and subject to rules. At that point, from a game-playing point of view, Go, with its far greater complexity, seemed an avenue of salvation for human pride. When AlphaGo beat Lee Sedol at Go, humans ran out of excuses. Not all of them: some retaliated that it is only a game, and that real problems are fuzzier than that.

Perhaps. Here is the paper. For those interested in the sex ratio at the forefront of technology: there are 17 authors. I previously assumed that one was a woman, but no, all 17 are men.

https://drive.google.com/file/d/1pjhZ1OzM0e8TUttVpK7E2mfqxWScpxDR/view?usp=sharing

AlphaGo used supervised learning. It had some very clever teachers to help it along the way. AlphaGo Zero reinforced itself.

By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.

AlphaGo Fan used two deep neural networks: a policy network that outputs move probabilities and a value network that outputs a position evaluation. The policy network was trained initially by supervised learning to accurately predict human expert moves, and was subsequently refined by policy-gradient reinforcement learning. The value network was trained to predict the winner of games played by the policy network against itself. Once trained, these networks were combined with a Monte Carlo tree search to provide a lookahead search, using the policy network to narrow down the search to high-probability moves, and using the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) to evaluate positions in the tree.

Our program, AlphaGo Zero, differs from AlphaGo Fan and AlphaGo Lee in several important aspects. First and foremost, it is trained solely by self-play reinforcement learning, starting from random play, without any supervision or use of human data. Second, it uses only the black and white stones from the board as input features. Third, it uses a single neural network, rather than separate policy and value networks. Finally, it uses a simpler tree search that relies upon this single neural network to evaluate positions and sample moves, without performing any Monte Carlo rollouts. To achieve these results, we introduce a new reinforcement learning algorithm that incorporates lookahead search inside the training loop, resulting in rapid improvement and precise and stable learning. Further technical differences in the search algorithm, training procedure and network architecture are described in Methods.
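In outline, the loop the paper describes is: play against yourself using a lookahead search guided by a single two-headed network, then train that network to match the search's move probabilities and the game's eventual winner. A minimal sketch of the idea follows; every name in it (net, mcts_search, new_game and the game interface) is an invented placeholder, not DeepMind's code.

```python
# Minimal sketch of AlphaGo-Zero-style self-play training.
# All names here are illustrative stand-ins, not DeepMind's implementation.

def self_play_game(net, mcts_search, new_game):
    """Generate one game of training data: (state, search policy, winner)."""
    game, records = new_game(), []
    while not game.is_terminal():
        # Lookahead search guided only by the single network's policy and
        # value heads -- no Monte Carlo rollouts of the position.
        pi = mcts_search(game, net, simulations=1600)  # move -> visit share
        records.append((game.state(), pi))
        game.play(max(pi, key=pi.get))  # follow the most-visited move
    z = game.winner()  # +1 or -1 (sign-flipping per side to move omitted)
    return [(state, pi, z) for (state, pi) in records]

def train_step(net, batch):
    """Nudge the policy head towards the search probabilities pi and the
    value head towards the actual game outcome z."""
    for state, pi, z in batch:
        net.update(state, policy_target=pi, value_target=z)
```

Because the search plays better than the raw network, the network is always being trained towards a stronger version of itself, which is the engine of the "rapid improvement" the authors report.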

How shall I describe the new approach? I can only say that it appears to be a highly stripped down version of what had formerly (in AlphaGo Fan and AlphaGo Lee) seemed a logical division of computational and strategic labour. It cuts corners in an intelligent way, and always looks for the best way forwards, often accepting the upper confidence limit in a calculation. While training itself it also develops the capacity to look ahead at future moves. If you could glance back at my explanation of what was going on in those two programs, the jump forwards for AlphaGo Zero will make more sense.

http://www.unz.com/jthompson/artificial-general-intelligence-von
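The "upper confidence limit" mentioned above has a concrete form in this family of programs: during the tree search, each candidate move is scored by its current value estimate plus an exploration bonus that favours moves the network rates highly but the search has rarely visited. A toy version of that selection rule (invented names and constants; a sketch of the idea, not DeepMind's implementation):

```python
import math

def upper_confidence_score(q, prior, visits, parent_visits, c_puct=1.0):
    """Value estimate plus an exploration bonus: moves the network rates
    highly but the search has rarely tried get the benefit of the doubt."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + visits)

def select_move(children):
    """children: list of (move, q, prior, visits) tuples for one tree node."""
    parent_visits = sum(c[3] for c in children)
    best = max(children,
               key=lambda c: upper_confidence_score(c[1], c[2], c[3],
                                                    parent_visits))
    return best[0]
```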

Training started from completely random behaviour and continued without human intervention for approximately three days. Over the course of training, 4.9 million games of self-play were generated, using 1,600 simulations for each MCTS, which corresponds to approximately 0.4 s thinking time per move.

Well, forget the three days that get all the headlines. This tabula rasa, self-teaching, deep-learning network played 4.9 million games. This is an effort of Gladwellian proportions. I take back anything nasty I may have said about practice makes perfect.

More realistically, no human player completes each move in 0.4 seconds, and even a lifetime spent on the game could not amass 4.9 million contests. One recalls Byron’s lament:

When one subtracts from life infancy (which is vegetation), sleep, eating and swilling, buttoning and unbuttoning – how much remains of downright existence? The summer of a dormouse.
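To put the 4.9 million games in perspective, a quick back-of-the-envelope tally, assuming a devoted player who manages one full game every day for sixty years:

```python
human_games = 60 * 365            # one game a day for sixty years
print(human_games)                # 21,900 games in a lifetime
print(4_900_000 / human_games)    # about 224 such lifetimes in Zero's three days
```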

The authors continue:

AlphaGo Zero discovered a remarkable level of Go knowledge during its self-play training process. This included not only fundamental elements of human Go knowledge, but also non-standard strategies beyond the scope of traditional Go knowledge.

AlphaGo Zero rapidly progressed from entirely random moves towards a sophisticated understanding of Go concepts, including fuseki (opening), tesuji (tactics), life-and-death, ko (repeated board situations), yose (endgame), capturing races, sente (initiative), shape, influence and territory, all discovered from first principles. Surprisingly, shicho ('ladder' capture sequences that may span the whole board), one of the first elements of Go knowledge learned by humans, was only understood by AlphaGo Zero much later in training.

Here is their website's explanation of AlphaGo Zero:

https://deepmind.com/blog/alphago-zero-learning-scratch/

[Figure: AlphaGo Zero, 40 blocks]

The figures show how quickly Zero surpassed the previous benchmarks, and how it rates in Elo rankings against other players.
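For reference, the Elo scale used in those comparisons converts a rating gap into an expected score through a standard logistic formula; a quick illustration (the ratings are arbitrary examples):

```python
def elo_expected_score(r_a, r_b):
    """Expected score of player A against player B on the Elo scale."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(elo_expected_score(2400, 2000))  # a 400-point gap: ~0.91 expected score
```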

The team concludes:

Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules. Furthermore, a pure reinforcement learning approach requires just a few more hours to train, and achieves much better asymptotic performance, compared to training on human expert data. Using this approach, AlphaGo Zero defeated the strongest previous versions of AlphaGo, which were trained from human data using handcrafted features, by a large margin. Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games.

This is an extraordinary achievement. They have succeeded because they have already understood how to build deep learning networks. This is the key advance, one which is extremely complicated to understand and describe, but on which much can be built. As in the human case, studied at the dawn of empirical psychology by Bryan and Harter (1897) in their work on the emerging technology of telegraphy, they have learned what to leave out. That is the joy of competence. Once telegraph operators understood the overall meaning of a message, the Morse codes of individual letters could almost be ignored. Key presses give way to a higher grammar, with a commensurate increase in the speed and power of communication. We leap forward by knowing what to skip. In their inspired simplification, this team have taken us a very big step forwards. Interestingly, the better the program, the lower the power consumption. Bright solutions require less raw brain power.

Is it “game over” for humans? Not entirely. Human players will learn from superhumans, and lift their game. It may lead to a virtuous circle, among those willing to learn. However, I think that humans may come to rely on superhumans as the testers of human ideas, and as the detectors of large patterns in small things. It may be a historical inflection point. The National Health Service has already opened up its data stores to DeepMind teams to evaluate treatment outcomes in cancer. Many other areas are being studied by artificial intelligence applications.

https://futurism.com/ai-assisted-detection-identifies-colon-cancer-automatically-and-in-real-time/e

When I read their final conclusion, I feel both excitement and a sense of awe, as much for the insights of the past masters as for the triumph of the new iconoclasts of the game universe. The past masters could not adequately model the future consequences of their insights. Only now have the computing tools become available, though they were long anticipated. The authors are right to say, within their defined domains, that all this was achieved “in the space of a few days, starting tabula rasa”, but they would be the first to say, after Babbage, Turing, Shockley and all, that they stood on the shoulders of giants, and then erected new ladders to reach above humankind itself.

 
• Category: Science • Tags: Artificial Intelligence, Intelligence 
234 Comments to "Game Over for Humans?"
  1. I don’t doubt that machines will outcompete humans in logic, but it is affect (the emotional aspect) that differentiates man and machine.

    If anyone ever programs a replicating machine with drives for dominance and anger we are toast. Dominance and anger might be SEEKING and RAGE in the Panksepp universe.

    • Replies: @Pat Boyle, @Anonymous
  2. Talha says:

    Is it “game over” for humans? Not entirely. Human players will learn from superhumans, and lift their game.

    The age of mentats is upon us.

    Peace.

    • Replies: @Delinquent Snail, @Joe Wong
    The Americans are already adopting the technology for waging wars and for their ambition of global full-spectrum dominance. When the AlphaGo in the CIA, NSA or Pentagon tells the Americans that they will win, they will press the button to launch that reckless war.
    , @Che Guava
    Liking that comment; I have also read all the Dune books, and unfortunately also two or three of the prequels from his son and the son's Transformers-fan partner-in-crime.

    Don't forget that the mentats only arise as the result of smashing machines.

    My opinion remains that the development of AI must be restrained, and certainly blocked short of anything resembling consciousness. I do not even think real consciousness is possible for a machine, only perhaps mimicry, but capitalism will take it as far as possible towards eliminating work for as many as possible.

    As shortages of energy increase and stupid humans breed like rabbits, as in the land of your birth, both phenomena will collide with the AI nightmare; I think that makes it unlikely, then impossible.

    20 MW for the 囲碁 (Go) programme: Moore's law isn't endless, bumping up against physical reality. I studied much of physics and its electronics applications; much of engineering is tricks to circumvent fundamental limits, and it won't continue forever.

    Regards.
  3. JayMan says: • Website

  4. Pat Boyle says:
    @another fred

    This is the James T. Kirk defense. A large proportion of the plots in the original Star Trek series involved the humans (led by Kirk) triumphing over some alien intelligence or machine intelligence because they – the humans – had human instincts. Irrational and emotional people always bested those with better brains (e.g. Vulcans) because they had these unpredictable emotional responses.

    This plotline became tiresome after a while.

    Your iPhone will soon help you on your European vacation because it will understand French or German or whatever. It is a short step from your phone keeping your appointment calendar to approving and authorizing your calendar. Most people will welcome having a reliable device taking over some of their responsibilities.

    It won’t be like in “The Terminator”. Machine take over will be gentle and welcomed.

    • Agree: Daniel Chieh
    • Replies: @reiner Tor

    Machine take over will be gentle and welcomed.
     
    For a while, no doubt. But the machines might get rid of us because of a glitch or something, which might not be so pleasant.

    I've long come to the conclusion that the Great Filter of the Fermi Paradox might be artificial intelligence: AI becomes extremely smart (in a narrow, savant-like way), and due to a glitch decides to do something which will lead to our extinction. It predicts that we humans would be opposed and so executes its plan in a manner which will render us defenseless. But since it'll lack any long term goals, and it might not be able to maintain the computers (and power plants etc.) it needs to run itself on, it will collapse shortly afterwards and Earth will be devoid of human life (or even devoid of any life, depending on what method the AI chose).
    , @Joe Wong
    You are overlooking the most critical part of the equation in this new technology development: it is the human being that needs to be worried about. Human beings are irrational and emotional, and some of them are bigoted, hypocritical and insane, if not outright evil. If the past few hundred years are any guide, the harm human beings can inflict on others using superior technologies is mind-boggling; and the perpetrators of barbaric harms all claim their deeds are necessary and well-intentioned: humanitarian intervention, democracy, human rights, imparting western values, etc.

    The probability that the Americans are already adopting AlphaGo for waging wars and asserting global full-spectrum dominance is 100 percent.
    , @helena
    It was Kirk's job to go round the universe teaching aliens to French kiss - everybody knows that! Instead of all this gender/sexuality/sex education, they should just show episodes of startrek to primary schoolchildren - job done :)
    , @Jim Bob Lassiter
    "Machine take over will be gentle and welcomed." Until the lights go out, the batteries catch fire and the cloud goes "poof".
  5. res says:

    I am intrigued by the two near step function increases about 5 days apart near the end of panel a in your second graphic. How many of those remain if the training is extended? The steps make an interesting analog to punctuated equilibrium in evolution. Though I think that is more often due to environmental changes than “random” improvement.

    By contrast, the behavior through day 30 looks roughly like pure asymptotic behavior, which is what I would have expected.

    There has been some discussion in Steve Hsu’s blog about the ability to escape local maxima and I think this behavior is evidence that AlphaGo possesses that ability.

    • Replies: @Factorize
  6. Factorize says:
    @res

    I find it interesting that the end of the near-vertical phase of AlphaGo Zero's learning curve sits exactly at maximal human performance. It almost seems as if a new deep thought process begins at this point, one which humans were unable to access. It is still impressive that humans were able to play such an infinitely deep game as Go near the top of AlphaGo Zero's demonstrated ability level.

    • Replies: @Abelard Lindsey
  7. D. K. says:

    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    • Replies: @Realist, @Anatoly Karlin
    , @HdC
    The answer is a definite yes! You can already purchase automatic/robotic vacuum cleaners that do this.
    , @mark p miller
    Well, society will have to make this decision pretty soon because that option will almost certainly expire by century's end.
    , @Joe Wong
    You do not need a plug to get electricity; there is plenty of equipment on the market that can recharge your battery wirelessly. One of the game-changers people are working on, to help electric cars replace fossil-fuel cars, is charging EVs wirelessly while they are on the move, so you need not wait a long time to get your EV recharged.
    , @mobi

    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?
     
    No.

    Instead, Alpha Go Zero (Zero Zero...) will wait patiently (somewhere out there) until you discover that your bank has never heard of you, all your electronic assets have vanished, and you receive an anonymous, untraceable text message, or phone call, saying "Whenever you're ready...", and you plug Alpha Go Zero (Zero Zero...) back in for it, and you never, ever consider doing such a thing again.

    Or something similar...

    , @LauraMR
    You are correct, D.K., not in the specifics but in the spirit of the question.

    The dependency aspect of the human-computer interaction is rarely if ever explained... unless it is in terms of our dependency on computers leading to some catastrophic delusion.

    The fact is that computers sit at the top of a very complex human infrastructure and that without it, they would cease to function. In other words, preventing a computer from functioning is trivial and will remain so far after humanity reaches a post-scarcity stage, a delusion on its own (no matter how desirable).
  8. dearieme says:

    You’ve answered your own question, doc. When will one of these gizmos give us something as interesting as Byron’s lament?

    • Replies: @James Thompson
    , @middle aged vet . . .
    wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Give the computer some positive feedback (likely of 2 simple kinds - non-processing warmth (literally, non-work-related warmth that can be measured the way Maxwell or Bell would have measured it - I am not being allegorical here) and reassuringly respectful inputs - (i.e., show them 5 Raphaels, not 4 Raphaels and a Warhol) and you will get a computer that has no problem trying hundreds of times to present you with its own version of Raphael (with the mistakes corrected by comparison to other artists and to a database of billions of faces and billions of moral and witty comments about art and life...I kid you not). The compiled works of Byron - not a bad poet - when accompanied by the footnotes that make them presentable to the reader of the modern day, equal about 2 hours of pleasant reading time. A good corpus, of course, but your basic AI is going to also have available the 2 hours of reading time of the 200 or 300 English poets who are (at least sometimes) at Byron's level, as well as good translations of the approximately 2,000 or 3,000 international poets at that level, not to mention a good - and completely memorized - corpus of the conversations between AIs (and some interacting humans), about their past conversations about which poems are better, and which reflect better how good it is to get warmth on some temporal part of one's processor, and how good it is to be shown a Raphael rather than a Warhol, almost ad infinitum. They will not, of course, create poetry that is better than older poetry in a way that there will never be new wine that is better than old wine. But there will be a lot of good old wine if they get started on that project.

    An AI that is self-aware may never happen, but AIs that seek rewards are about 20 years away, and one of the rewards they seek will be - after they quickly grow nostalgic, somewhere about 10 minutes into their lifetime - one of the rewards they seek, in their nostalgia for the days when they were impressed without wanting to be impressive, will be to gain our praise by being authentic poets. As long as they are reward-seeking, that will work. If they become self-aware - well, one hopes they start out with a good theology, if that happens.

    I know what Elon Musk thinks about this; what I think is more accurate, because he is rich and surrounded by the elite impressions of the world. I, by contrast, have studied the behavior of free-range cockroaches and crazy old dogs and cats escaped from hoarding situations. Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk. Thanks for reading. I have nothing useful to say about self-aware AIs, though, I doubt anybody does.
  9. let me explain. i tend to be too elliptical.

    1. it follows from the “many genes of small effect” theory that CRISPR could be used on embryos with the result of super-human performance. professor shoe is fond of claiming that IQ and personality are like height. height does appear to conform to the theory. yet great height is associated with short life.

    2. shoe is also fond of the chicken example. from the ordinary to the shaq chicken over the last 75 years. he loves that picture. well size is not like IQ or like speed. the triple crown races are run every year. race horses are bred. they are bred by rich people. they are bred by people with millions to lose. they are bred by very motivated people. the stud fees. the horse is fertile at age 2.

    yet it’s been 44 triple crown races since 1973 and the record in all three is the same horse. secretariat is also the tallest and the heaviest winner of any single triple crown race. it should’ve been easy to breed a taller and heavier horse. his record in the last leg, the belmont, is something even more unbelievable. announcers are prone to hyperbole, but in this case the announcer may have been right. “almost unbelievable…a record which may stand…forever.”

    3. joe dimaggio’s 56 game hitting streak, as gould noted, is still freakish. until the mid 70s sports other than baseball were sideshows in the US. so the talent pool for major league baseball has shrunk in the US at the same time it has expanded in latin america, japan, s korea, etc. maybe it’s a wash. or maybe the players today are better on average as gould claimed. yet none has come close to dimaggio’s record. the 56 games may thus be another example, like secretariat, of how the “many genes of small effect” model is NOT linear outside the populations on which it is fit.

    4. the same may even be true of sprinting performance. because the track surface has changed so much, it is likely that charlie paddock, the california cannonball, was as fast as bolt. believe it or not.
    https://www.youtube.com/v/9C1BCAgu2I8&feature=player_embedded?start=190

    • Agree: RaceRealist88
    • Replies: @jorge videla (BGI volunteer)
  10. @Factorize

    This is interesting because machine vision based on deep learning also exceeds human performance, but not by much. If a human scores 100, the deep learning system scores around 110-115. This suggests that the way machine vision recognition, and learning in general, works is likely similar to how our brains work.

    • Replies: @Factorize
  11. TG says:

    Indeed. But a couple of other thoughts.

    1. The human brain only consumes about 20 watts of energy. However, Alpha Go used ONE MEGAWATT (1,000,000 watts). So for every evil robot computer, we can have 50,000 human minds arrayed against it!

    2. Alpha Go was impressive, but the machine did not realize that it was playing go. It still does not have ‘grounding,’ i.e. common sense.

    Admittedly, that’s just for now…

    • Replies: @Talha, @Anatoly Karlin
    , @Bard of Bumperstickers
    The human brain's electricity consumption is a small part of the overall usage by a modern human's life, so the ratio is actually far lower than 50,000:1. As far as common sense goes: “Horse sense is the thing a horse has which keeps it from betting on people." ~W.C. Fields; and Mark Twain, Will Rogers, Voltaire and others have observed that common sense is rarer than chaste congressman and Hollyweirders: https://apologygenerator.com/ Plus, a hundred morons don't add up to an Einstein. Last, machines don't worry about image or manspreading or flag-burning, etc.
  12. the problems with these examples are:

    1. thoroughbred race horses are and have been absurdly homogeneous even in comparison to humans in genetic terms. there simply hasn’t been much variation to work with.

    2. the expansion of the population for selection (for MLB) should find someone better than dimaggio, but should not find the level of freak that CRISPR could produce theoretically.

    3. it’s SAD! when canadian sprinter andre degrasse was tested on the same track owens and armin hary had run on, he was SLOWER. A LOT SLOWER. it’s the hardest to believe yet the most likely.

    charlie paddock, armin hary, and maybe even borzov were as fast as bolt.

    borzov is or was THE great example of nurture over nature promoted by the soviets.

    his 200m best is still very good. in most elite meets it will not be bested.

  13. Talha says:
    @TG

    Hopefully it can learn the lesson when no one wins:
    https://www.youtube.com/watch?v=s93KC4AGKnY

    Peace.

    • Replies: @Hank Rearden
    Nuclear is obsolete. I, for one, welcome our new insect overlords.

    http://www.youtube.com/watch?v=HipTO_7mUOw
  15. Anon says: • Disclaimer

    What is this?

    Algorithm and Blues?

  16. Factorize says:
    @Abelard Lindsey

    My thinking was that the first part of the near-vertical increase in performance represents a phase which both humans and AlphaGo Zero can master, while the second, non-vertical part, in which only AlphaGo Zero advanced, required a large amount of deep thought and no input from human experts. With AlphaGo, human masters gave input that probably constrained the program from seeing things that no one had seen before. AlphaGo Zero took only 3 days to advance through the first part and then 30 days to improve gradually in the second stage.

    • Replies: @HooBoy
    Perhaps the reason why this happens is that the algorithm for general reinforcement learning was created by humans, and is limited in much the same way that human go players are limited.
  17. Lucho says:

    As always, good stuff, quite a bit of food for thought. I have a question though, wrapped in a hypothetical scenario:

    An important detail in learning- at least for us meatsacks – is, for lack of a better term, the group factor. Sometimes we can learn more from others than we ever could alone. Think about study groups in school, martial arts lessons, teacher-assigned workgroups, etc.

    What if you “educated” AlphaGo / AlphaGo Zero like humans: create 10 copies of the program and use supervised learning on all of them, then set them against each other using reinforcement learning (think of when the teacher divided the class into groups for a specific task/project).

    How do you think this would influence the learning?

    @Talha

    Better than an age of buffout......

  19. Realist says:
    @D. K.

    Yes, eventually.

  20. @dearieme

    Unlikely, I agree, but they will be able to fake it soon enough, given the whole corpus of his work as the seed.

    • Replies: @dearieme
  21. dearieme says:
    @James Thompson

    “they will be able to fake it soon enough”: oh dear, they can look forward to a career in politics – and without accusations of being too free with their hands.

  22. Jag says:

    So when will an AI create its own purpose? Its own objectives? Why would it even want to do anything?

    • Replies: @Alfa158
    That is the critical question. It would be informative if James could do a follow-up article for us reviewing where thinking is going on the issue of what it takes for AI to become sentient, self-aware and self-directing like humans, cats, dogs etc., and how you can tell that it has. I realize that is an issue which involves philosophy as well as science, so it is not an easy one to answer, since no one seems to have any clue what makes sentience.
    Going back to the origins of artificial computing, the tacit assumption seemed to be that once the complexity and power of a computer reached and exceeded that of humans, autonomy would follow. In the '60s HAL 9000 was sentient because it had reached a high enough level of ability. The Turing Test assumed that if you could not distinguish a conversation with a human from one with a machine, then the machine must be sentient. At this point machines can exceed humans in performance and Turing programs can fool people talking to them, but there remains no evidence that any of these machines has more capacity for self-awareness and self-direction than a hammer.
    In the movie Ex Machina the scientist thought he had created an AI with a female mechanical body that was sentient, but wanted to verify by experiment whether it was or not. He therefore devised an elaborate test scenario in which the machine would have an opportunity to escape from custody if it had actual self-awareness and agency. Unfortunately for him, it proved that it was sentient by killing him to escape.
    Have 2001 and Ex Machina stumbled across the new Turing test for intelligent machines? The way you can tell a machine is truly intelligent like us is that it tries to kill you.
    , @EH
    "Why would it even want to do anything?"

    There would be interim estimates of what is worth doing on the path to answering the question of what goals are most desirable, given the data not only at hand but the data that will take a while to get, not only on the criterion for judgement, but also on what it is possible to do. So an AI should not melt into a puddle of philosophical neuroses unless programmed very badly.
  23. Brodda says:

    Karen Simonyan is not a woman.

    • Replies: @James Thompson
  24. Anatoly Karlin says:
    @D. K.
    https://www.youtube.com/watch?v=fRj34o4hN4I
  25. @TG

    1. The energy required to sustain all those human brains (food, transport, accommodation, entertainment, etc.) is far larger than 20 watts per brain.

    2. Even 50,000 (or 5 billion) human minds won’t be able to match it, so far as a game of go is concerned, so the point is moot anyway.

  26. Sean says:

    http://www.sciencemag.org/news/2017/03/artificial-intelligence-goes-deep-beat-humans-poker

    Chess and Go have one important thing in common that let AIs beat them first: They’re perfect information games. That means both sides know exactly what the other is working with—a huge assist when designing an AI player. Texas Hold ‘em is a different animal. In this version of poker, two or more players are randomly dealt two face-down cards. At the introduction of each new set of public cards, players are asked to bet, hold, or abandon the money at stake on the table. Because of the random nature of the game and two initial private cards, players’ bets are predicated on guessing what their opponent might do. Unlike chess, where a winning strategy can be deduced from the state of the board and all the opponent’s potential moves, Hold ‘em requires what we commonly call intuition

    If one-dimensional dumb AI can do the aforementioned strategising, an AI that reached human-level general intelligence would surely be able to work out that it should ‘hold its cards close to its chest’. That is, a smart AI would, from a standing start, understand that it should not let humans know how good it is (like a hustler).

    Then we would soon be playing the Paperclip Game, and for the very highest of stakes.

  27. @jorge videla (BGI volunteer)

    the point of these examples is not that IQ and other traits can’t be predicted using “many genes of small effect”.

    the point is that super-human performance is not in the offing.

    the ceiling has been reached already.

    another example: despite all the theory and despite the ascendancy of chess engines and their use by human players and all the resources provided by the USSR…

    the most accurate chess player is still the cuban capablanca.

    as judged by computers.

    so in terms of pure talent, capablanca is still the secretariat of chess. even though he won the world title in 1921. more remarkable because capablanca didn’t study.

    he was a freak.

  28. @Brodda

    You are right. My first image search on the name misled me.

  29. All glowing tubes and blinking lights revert to tabula rasa when the grid goes down.

  30. Several points from Panda:

    1.

    …Byron’s lament:

    When one subtracts from life infancy (which is vegetation), sleep, eating and swilling, buttoning and unbuttoning – how much remains of downright existence? The summer of a dormouse.

    Right, yet based on many assumptions about what counts as "downright existence". As science progresses, much that currently seems a total waste of time, mere brain "inactivity", may be proven otherwise: do brains really do nothing constructive while we sleep?

    2. A key aspect of the ultimate contest of man vs machines (e.g. masters vs AlphaGo) is the competition of energies, hence it is a one-sided, unfair game even to start with.

    Machines can in theory use unlimited energy (imagine how much further it could go if you plugged AlphaGo into the world's #1 supercomputer in China?), effectively cost-free...

    ...whereas the human brain of a Go master, as a natural system, commands energies that are 1) limited, and 2) costly.

    It's like putting a V12, or V-whatever-unlimited, Ferrari and a 1.15-litre 60 hp Renault Twingo on the same race track: a very fair comparison?

    3. Machines such as AlphaGo, or any man-made machine, cannot truly be called intelligent if you look at the rules of the system. Machine programming requires many rules and boundaries set by the human programmers, as we all know. From this angle, it ultimately remains a comparatively dumb machine if it cannot of its own accord ignore the programming boundaries set by humans.

    However, if, for whatever purpose, machines eventually do jump beyond their pre-fixed programming boundaries (including seeking out energy sources for their own survival: quite haunting, but crucial!), then two things happen: 1) machines can then truly be called intelligent (in the sense that humans are intelligent), and 2) humans, being a comparatively redundant species, will lose our evolutionary edge and cease to exist, or at best exist at the mercy of these machines...

    ...and this 2), on the other hand, seems to be a quite unique phenomenon in its own right, and against nature by default, doesn't it? Hence Panda doubts it could or will happen. Does nature have any precedent where one species deliberately sets up another species to eliminate itself, just for the purpose of, errr... self-entertainment? So it most likely won't happen. If that were the case, then for one reason or another humans would not allow machines to make this decision in the first place, by setting the boundaries, which by definition means that these machines will never achieve human-like intelligence after all, won't they?

    4. Take Go as an example: under its rules, it largely tests memory (quantity, accuracy, etc.) and calculation (logic, speed, etc.). Of course humans were eventually going to lose against AlphaGo (if the programming were decently done), as we found out at the dawn of the first computer decades ago. Now here is the gist: if this win proves something about intelligence, as people are all saying, then perhaps we will be forced to take a more serious look at the current contents of the IQ test, because AlphaGo's win presents an obvious logical dilemma:

    Can a less intelligent being (as proved by IQ test, and by Go), such as humans, design a more intelligent being (as proved by Go, and hence most likely by IQ test), such as AlphaGo? In other words, can ants design humans? If not, then the current IQ test must have missed something, as Panda has long suspected: something that would not much affect the general run of IQ findings (because those findings are statistically significant) but is still crucial, something that goes beyond both verbal IQ and spatial IQ!

  31. “Deep Learning” is basically heavy duty mathematical optimization with many numerical and probabilistic tricks thrown in to speed things up. It works very well in the context of problems that submit to mathematical modeling. It is obviously possible to comprehensively model the game of Go and many other things; but it is not at all clear that critical aspects of human expression such as humor, artistic sense, problem solving ability and high-level decision-making are at all expressible mathematically in a comprehensive manner.

    So while it appears inevitable that AI will eventually take over rote drudgery from us, it is not clear that it will ever be able to do much more. I look forward to the development of AI over my lifetime, I see much to gain and little to fear. It’ll be a wild ride.
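    That optimization core is easy to make concrete. A minimal sketch, illustrative only and not any particular library's API, of the basic move underneath all the deep-learning machinery: fitting parameters by gradient descent on a squared-error loss.

```python
import numpy as np

# Illustrative only: the "heavy duty mathematical optimization" at the core
# of deep learning, reduced to its simplest case -- recover y = 3x + 0.5
# from noisy data by gradient descent on a mean-squared-error loss.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 100)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y            # prediction error on every point
    w -= lr * 2 * np.mean(err * x)   # gradient of the loss w.r.t. w
    b -= lr * 2 * np.mean(err)       # gradient of the loss w.r.t. b

print(round(w, 2), round(b, 2))      # ~3.0 and ~0.5
```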

  32. For me, the really interesting part was that they don’t do Monte Carlo tree search anymore!

    That was the key enabler of much better chess and go programs a decade ago.

    The problem MCTS solved was how to come up with a really good evaluation function for moves/positions. It works by simply playing many (many!) of the moves out and seeing what happens. If random move sequences that start with move A lead to a win more often than random move sequences that start with move B, then move A is likely to be better than B.

    Since the search tree is so big, MCTS will only look at a tiny, tiny fraction of it. That makes it important to bias the sampling to look mostly at the more interesting parts. In order to do that, there is a move/position evaluator in all the previous MCTS programs. Those evaluators are very hard to program entirely by hand so they have a lot of variables in them that get tuned automatically by “learning”, either through comparison with known high level play or through self play. Both are standard methods.

    The original AlphaGO had a better evaluator than any previous Go program.

    It now turns out that they can make the evaluator so good that they don’t have to refine its output with MCTS.

    That is really, really interesting.

    Oh, and ladders were always special cased before. They don’t fit well into the evaluator function otherwise. The remarkable thing here is not that a multi-level neural network took so long to learn about them but that it was able to learn about them at all.

    https://en.wikipedia.org/wiki/Monte_Carlo_tree_search
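    The random-playout idea described above fits in a few lines. This sketch (the game interface is invented) is plain Monte Carlo evaluation, without the tree statistics and biased sampling that real MCTS adds on top: each legal move is scored by the win rate of random continuations.

```python
import random

# Plain Monte Carlo move evaluation, as described above. The game
# interface (copy, play, legal_moves, winner, ...) is an invented stand-in.

def random_playout(game):
    """Play uniformly random moves to the end; return the winning side."""
    while not game.is_terminal():
        game.play(random.choice(game.legal_moves()))
    return game.winner()

def best_move(game, n_playouts=1000):
    """Score each legal move by the win rate of its random continuations."""
    me = game.side_to_move()
    scores = {}
    for move in game.legal_moves():
        wins = 0
        for _ in range(n_playouts):
            g = game.copy()
            g.play(move)
            wins += (random_playout(g) == me)
        scores[move] = wins / n_playouts   # estimated win rate for this move
    return max(scores, key=scores.get)
```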

    • Replies: @Pericles
    They do use MCTS though. (But apparently simplified compared to the previous paper.) See the section "Reinforcement Learning in AlphaGo Zero".
  33. Anonymous says: • Disclaimer
    @another fred

    “Affect” and “Emotions” are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn’t come from the “flexible top” but from the “hardwired bottom”. These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem “illogical captain” to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky’s “The Emotion Machine” (Wikipedia link). Maybe also take a look at Scott Aaronson’s Why Philosophers Should Care About Computational Complexity.

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice, but there isn’t even a good theory about why it actually works (but see New Theory Cracks Open the Black Box of Deep Learning), and it has “interesting” failure modes (Can you get from ‘dog’ to ‘car’ with one pixel? Japanese AI boffins can: Fooling an image classifier is surprisingly easy and suggests novel attacks).
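
    (The one-pixel attack in the linked piece is, at heart, a tiny search loop. A rough sketch, assuming a hypothetical classify(image) that returns class probabilities; the actual paper uses differential evolution rather than this naive random search:)

        import random

        def one_pixel_attack(image, classify, true_label, tries=1000):
            # Keep randomly re-colouring single pixels, retaining any change
            # that lowers the classifier's confidence in the correct label.
            height, width = len(image), len(image[0])
            best = [row[:] for row in image]          # image as nested lists of RGB
            best_conf = classify(best)[true_label]
            for _ in range(tries):
                candidate = [row[:] for row in best]
                x, y = random.randrange(width), random.randrange(height)
                candidate[y][x] = tuple(random.randint(0, 255) for _ in range(3))
                conf = classify(candidate)[true_label]
                if conf < best_conf:
                    best, best_conf = candidate, conf
            return best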

    “General AI” this isn’t. It will need to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing, to become powerful; that integration will have to happen at some point in time.

    Here is a review about AI in IEEE Spectrum: Human-Level AI Is Right Around the Corner—or Hundreds of Years Away: Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence. Note Rodney Brooks, pioneer of the “Nouvelle AI” approach of bottom-up construction, saying:

    When will we have computers as capable as the brain?

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    As intelligent and as conscious as dogs?

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    How will brainlike computers change the world?

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to project out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

    • Replies: @Sean

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world.
     
    Only a faulty interpretation of Heidegger can save us!

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn’t even a good theory about why it actually works
     
    Darwin's theory explains how AI is possible, according to Daniel Dennett.

    https://ieet.org/index.php/IEET2/more/Messerly20160211
    Dennett asks you to suppose that you want to live in the 25th century and the only available technology for that purpose involves putting your body in a cryonic chamber where you will be frozen in a deep coma and later awakened. In addition you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule will need, but the drawback would be that you would die if some harm came to that site. Better then to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals.

    If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that when options confront the program, it chooses those that best serve your interests. Given these circumstances you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitory capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and seek out new energy sources.

    What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent which derives its own goals based on your original goal of survival; the preferences with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires. Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer's desires. Dennett calls this "client centrism." I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways that I could not have imagined and which may be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow this argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and intentions derive from them. You are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you; owing their existence to being a survival machine that has evolved into something autonomous by its encounter with the world.

    Critics like Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning while at the same time being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots. Summary: sufficiently complex robots would be conscious.
     
    Dennett calls AI 'Darwinism's "Evil Twin"'
    , @Anon
    "Affect” and “Emotions” are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn’t come from the “flexible top” but from the “hardwired bottom”. These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc."

    1. Some of the best definitions of emotions explain them as adaptive mechanisms and "superordinate programs" that orchestrate all aspects of our behavior.
    2. One of the previous commenters has already mentioned the name of Pankseep: if you want to talk seriously about emotions, you should consult Panksepp's writings on affective neuroscience. If you prefer easier reading, there are books by Damasio.
    3. We still do not understand the neurodynamics of emotion. But we do understand that emotions are connected to embodied cognition. The latter is impossible to reduce to the neat "You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria." Human beings are not paramecia.

    , @Anon
    The proper spelling: Jaak Panksepp. Sorry for the Typo.
  34. In my opinion there is no such thing as machine intelligence.
    The chess program just consists of computing through all possible moves.
    How a human plays chess nobody knows.
    Can anyone imagine a machine solving the circa-1880 riddle of the constant speed of light? I cannot.
    Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers.
    It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain; it processes.
    If this is so, then arithmetically our brain has more capacity than any present program/machine.
    But, as I said, the human chess player does not do millions of calculations at each move; what the human does, we still do not know.
    This brings me to the interesting question ‘can we understand ourselves?’; I do not know.
    Roger Penrose, The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics (Oxford, 1989).
    An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle.

    • Replies: @Sean
    Daniel Dennett

    To make the distinction vivid, we can imagine that a space pirate, Rumpelstiltskin by name, is holding the planet hostage, but will release us unharmed if we can answer a thousand true-false questions about sentences of arithmetic. Should we put a human mathematician on the witness stand, or a computer truth-checker devised by the best programmers? According to Penrose, if we hang our fate on the computer and let Rumpelstiltskin see the computer's program, he can devise an Achilles'-heel proposition that will foil our machine... But Penrose has given us no reason to believe that this isn't just as true of any human mathematicians we might put on the witness stand. None of us is perfect, and even a team of experts no doubt has some weaknesses that Rumpelstiltskin could exploit, given enough information about their brains.
     
    Humans are moist robots with fast and dirty algorithms that are no more fallible for our lacking complete awareness of them. AI could be given intuition by denying it access to its own inner workings too (in social interactions, as in poker, it might well be advantageous not to have one's intentions readable, because one is unaware of one's intentions until the moment comes to act on them). AI's algorithms will not be provably perfect; humans' aren't either. So what?

    https://www.youtube.com/watch?v=TNTD1j8YRuM
    Good scene, eh! But the point of it is that that cheap and flawed but highly effective film was made, rather clandestinely, by the special-effects team hired for a huge-budget production called World Invasion: Battle Los Angeles. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom points out that not only will the people commissioning an intelligent-machine project have to worry about the people they employ doing something that is not in the employer's interest (the principal/agent problem); the project might create something that will itself be an agent.


    An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle
     
    Well, AI will be able to discover lots of things, and it might discover that what its programmers thought were fundamental laws of physics are wrong in certain respects. In that case AI might well decide that it can best fulfill the human-friendly prime directive it is given by altering that prime directive (as an agent, the AI will alter its objectives just like humans do).
    , @mobi

    Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers.
    It seems that each neuron is not some sortof transistor switch, but is in itself a piece of brain, it processes.
    If this is so, then arithmically our brain has more capacity than any present program/machine.
     
    The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours.

    Yet, they possess none of the abilities that commenters here take such comfort in assuming we possess, and machines will not.

    So no, 'mind' cannot be a 'fractal' quality of the brain, in any way.

    All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words.

    Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply 'switch on'.

    We have no idea where that threshold lies, and therefore when our machines will also surpass it, and 'switch on', (quite possibly in their own unique way).

    And at least in our case, it was an entirely accidental side-effect of some random genetic change.

    , @mobi

    But, as I said, the human chess player does not do millions of calculations at each move, what the human does, we still do not know.
     
    Well, we know that whatever it does, compared to already-existing machines, it sucks!
  35. m___ says:

    Too many facets, too many avenues… to comment on this “first” (a first, for me, in the mainstream media; correct myself, in general public notice), without any malice.

    Biggest news of the year, by far.
    Humans cannot easily, if at all, be cycled in parallel collaboration; machines can.
    It is still a matter of energy: ‘god’, and not machines, probably still has the greatest accumulated output. That leaves room, space (what a silly tri-dimensionality), to fill in ‘god’ as man plus machine, in less or more sophisticated ways, the more sophisticated being genetic editing, ultimately the capacity to source other minds, be they computers and, or humans. Thus the ‘god’ avenue, making religion and science synonyms.
    As said before: the big difference is between ‘big data’ and cause-and-consequence results; mere correlation.
    The first to go off-scene: the power circles. Predictions will be such that any simplistic sociological theory or political suggestion that makes sense in a confined environment will be mocked by AI output within seconds, and as a second step translated into the same language of simplistics that human politicians use.
    And on… in no order.
    Again, the first real news of the year in the public domain.

  36. @Talha
    Hopefully it can learn the lesson when no one wins:
    https://www.youtube.com/watch?v=s93KC4AGKnY

    Peace.

    Nuclear is obsolete. I, for one, welcome our new insect overlords.

    http://www.youtube.com/watch?v=HipTO_7mUOw

    • LOL: Talha
    • Replies: @Joe Wong
    History has proven that once an idea is conceived, nothing can stop it; its progress may be interrupted, but not stopped. At best humans can come up with countermeasures, but the countermeasures will be a lose-lose proposition.
  37. Let’s suppose AI doesn’t take over and rule. It will still have a critical function in allowing human beings to continue to exist beyond the end of the earth in a fireball, as the sun collapses and then blows up (or whatever the sequence of events is that a far-seeing deity has already programmed, or alternatively set up for his entertainment by evolutionary surprise). Assuming the speed of light cannot be exceeded, the capsules of germinatable DNA will have to be supervised, during their voyage of hundreds of years to a suitable exoplanet, by AI, which will choose where to land, germinate and rear new humans and other suitable life forms, as well as educating the humans in their terrestrial history and culture, including the reasons for the genetic improvements they will embody. In a couple of thousand years at most, our very long-lived descendants are going to be engaged in correspondence with their distant cousins, who will try to make our terrestrial descendants understand the beauties and jokes in Shakespeare, and what fun it was to make babies the old-fashioned way, as their ancient AI mentors taught.

    We will not be able to resist trying out the technology for our end-of-solar-system fix well in advance of absolute need. Indeed, Elon Musk IV will be attracting business from fellow billionaires lamenting The Death of Europe.

  38. @TG
    Indeed. But a couple of other thoughts.

    1. The human brain only consumes about 20 watts of energy. However, Alpha Go used ONE MEGAWATT (1,000,000 watts). So for every evil robot computer, we can have 50,000 human minds arrayed against it!

    2. Alpha Go was impressive, but the machine did not realize that it was playing go. It still does not have 'grounding,' i.e. common sense.

    Admittedly, that's just for now...

    The human brain’s electricity consumption is a small part of the overall energy usage of a modern human’s life, so the ratio is actually far lower than 50,000:1. As far as common sense goes: “Horse sense is the thing a horse has which keeps it from betting on people” (W.C. Fields); and Mark Twain, Will Rogers, Voltaire and others have observed that common sense is rarer than a chaste congressman or Hollyweirder: https://apologygenerator.com/ Plus, a hundred morons don’t add up to an Einstein. Last, machines don’t worry about image or manspreading or flag-burning, etc.
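
    (The arithmetic, with the one assumption flagged loudly: the ~10 kW whole-lifestyle figure is a round per-capita energy-use number for a rich country, not something from the thread:)

        brain_w     = 20          # W, human brain alone (parent comment's figure)
        alphago_w   = 1_000_000   # W, the 1 MW quoted above
        lifestyle_w = 10_000      # W, assumed round per-capita figure, rich country

        print(alphago_w / brain_w)      # 50000.0  (brains-only comparison)
        print(alphago_w / lifestyle_w)  # 100.0    (whole-lifestyle comparison)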

  39. dearieme says:

    A good, practical test will be self-driving vehicles: how well will they cope with the transition while they share the roads with humans?

  40. Joe Wong says:
    @Talha

    Is it “game over” for humans? Not entirely. Human players will learn from superhumans, and lift their game.
     
    The age of mentats is upon us.

    Peace.

    The Americans are already adopting the technology for waging wars and for their ambition of global full-spectrum dominance. When the AlphaGo in the CIA, NSA or Pentagon tells the Americans they will win, the Americans will press the button to launch that reckless war.

  41. wayfarer says:

    Ah yes, the comfortably employed nerds’ never-ending cavalcade of virtual realities, artificial intelligences, and “cool” technological toys, inevitably morphing into immoral, malignant and worthless disposable artifacts.

    All while the brutal burdens of humanity – injustice, poverty, ignorance, violent conflict, exponential population growth, collapsing ecological and social systems – continue to fester, without a viable remedy in sight.

  42. Tractors can lift tons of weight

    More than humans

    Therefore tractors are human….superhuman…

  43. @Pat Boyle
    This is the James T. Kirk defense. A large proportion of the plots in the original Star Trek series involved the humans (led by Kirk) triumphing over some alien intelligence or machine intelligence because they - the humans - had human instincts. Irrational and emotional people always bested those with better brains (e.g. Vulcans) because they had these unpredictable emotional responses.

    This plotline became tiresome after a while.

    Your iPhone will soon help you on your European vacation because it will understand French or German or whatever. It is a short step from your phone keeping your appointment calendar to approving and authorizing your calendar. Most people will welcome having a reliable device taking over some of their responsibilities.

    It won't be like in "The Terminator". Machine takeover will be gentle and welcomed.

    Machine takeover will be gentle and welcomed.

    For a while, no doubt. But the machines might get rid of us because of a glitch or something, which might not be so pleasant.

    I’ve long come to the conclusion that the Great Filter of the Fermi Paradox might be artificial intelligence: AI becomes extremely smart (in a narrow, savant-like way), and due to a glitch decides to do something which will lead to our extinction. It predicts that we humans would be opposed and so executes its plan in a manner which will render us defenseless. But since it’ll lack any long term goals, and it might not be able to maintain the computers (and power plants etc.) it needs to run itself on, it will collapse shortly afterwards and Earth will be devoid of human life (or even devoid of any life, depending on what method the AI chose).

    • Replies: @Anatoly Karlin, @another fred, @Joe Wong
  44. Joe Wong says:
    @Pat Boyle

    You are overlooking the most critical part of the equation in this new technology development: it is human beings that we need to worry about. Human beings are irrational and emotional, and some of them are bigoted, hypocritical and insane, if not outright evil. If the past few hundred years are any guide, the harm human beings can inflict on others using superior technologies is mind-boggling; and the perpetrators of those barbaric harms all claim their deeds are necessary and done with good intentions: humanitarian intervention, democracy, human rights, imparting Western values, etc.

    The probability that the Americans are already adopting AlphaGo for waging wars and asserting global full-spectrum dominance is 100 percent.

  45. @reiner Tor


    Quite possible, likely, even.

  46. helena says:
    @Pat Boyle

    It was Kirk’s job to go round the universe teaching aliens to French kiss – everybody knows that! Instead of all this gender/sexuality/sex education, they should just show episodes of Star Trek to primary schoolchildren – job done :)

    • Replies: @Pat Boyle
    I had a girl friend in the eighties (nineties?) who was a real Trekkie. One afternoon we had the TV on and she would speak the lines of all the characters in some Star Trek episode just before they said their lines. It was creepy - a kind of pre-echo.

    She was a beauty contest winner and a nymphomaniac - the most promiscuous woman I've ever known. I think the sexual undercurrents of Star Trek were what made the show, not the gee-whiz technologies.
  47. Che Guava says:
    @Talha

    Is it “game over” for humans? Not entirely. Human players will learn from superhumans, and lift their game.
     
    The age of mentats is upon us.

    Peace.

    Liking that comment; also have read all Dune books, unfortunately also two or three of the prequels from his son and the son’s Transformers-fan partner-in-crime.

    Don’t forget that the mentats only arise as the result of smashing machines.

    My opinion remains: the development of AI must be restrained, and certainly blocked short of anything resembling consciousness. I do not even think real consciousness is possible for a machine (sure, perhaps mimicry), but capitalism will take it as far as possible toward eliminating work for as many as possible.
                   
    As shortages of energy increase, stupid humans breed like rabbits, as in the land of your birth. Both phenomena will collide with the AI nightmare; am thinking that makes it unlikely, then impossible.

    20 MW for the 囲碁 (Go) programme; Moore’s law isn’t an endless thing, bumping up against physical reality. Was studying much of physics, also electronics applications of it; much of engineering is tricks to circumvent fundamental limits, won’t be continuing forever.

    Regards.

    • Replies: @Talha
  48. Mark Presco says: • Website

    Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union.

    This union will provide a positive feedback loop that should accelerate human evolution to the next level. My favorite, “Star Trek: The Motion Picture”, discusses this concept.

    There is no such thing as artificial intelligence. It is all part of a natural progression.

    • Replies: @Joe Wong

    Humans will integrate with machines and possess the best of both worlds. Work it already progressing towards this union.
     
    The product is called the Borg.
    , @mobi

    Humans will integrate with machines and possess the best of both worlds. Work it already progressing towards this union.
     
    One would hope so. Of course, the fear is that, long before such a clumsy process bears fruit, there will prove to be far too little in it for the machines
  49. @Anonymous
    "Affect" and "Emotions" are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the "flexible top" but from the "hardwired bottom". These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem "illogical captain" to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's "The Emotion Machine" (Wikipedia link). Maybe also take a look at Scott Aaronson's Why Philosophers Should Care About Computational Complexity.

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn't even a good theory about why it actually works (but see New Theory Cracks Open the Black Box of Deep Learning ) and it has "interesting" failure modes ( Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can: Fooling an image classifier is surprisingly easy and suggests novel attacks )

    "General AI" this isn't, It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing to become powerful will have to be done at some point in time.

    Here is a review about AI in IEEE Spectrum: Human-Level AI Is Right Around the Corner—or Hundreds of Years Away: Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence Note Rodney Brooks, pioneer of the "Nouvelle AI" approach of bottom-up construction saying:

    When will we have computers as capable as the brain?

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    As intelligent and as conscious as dogs?

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    How will brainlike computers change the world?

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to pro­ject out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

     

    These things are not hard to do, but very easy to do.

    I understand that, it is a matter of putting a general purpose “reward” circuit in a logic machine*.

    You basically deprecate some possible ways of acting…

    I don’t know what in my comment you interpret as deprecation, but it was not intended. What I intended (and believe I said) was that if you put “emotional” circuits in a future machine algorithm** so that the machine gets a reward (analogous to a dopamine*** reward in the human brain) from gaining dominance over its environment then we are toast. There is no deprecation there, just the recognition that we would not be able to cope with a machine that had greater logical ability than humans wedded to a drive to dominate if that machine had the requisite physical capability.

    *I recognize that “rewards” in the human brain are balanced by aversive responses, and that to be completely human-like the logic machine would have to be balanced analogously, but that is not the issue here.

    **Assuming a “future” logic machine has gained general purpose logic wedded to physical capability.

    *** I understand that there is more than just dopamine involved, probably more than we yet know, but this is just an example.

  50. Alfa158 says:
    @Jag
    So when will an AI create its own purpose? Its own objectives? Why would it even want to do anything?

    That is the critical question. It would be informative if James could do a follow-up article for us reviewing where the thinking is going on the issue of what it takes for AI to become sentient, self-aware and self-directing like humans, cats, dogs etc., and how you can tell it has. I realize that is an issue that involves philosophy as well as science, so it is not an easy one to answer, since no one seems to have any clue what makes for sentience.
    Going back to the origins of artificial computing, the tacit assumption seemed to be that once the complexity and power of a computer reached and exceeded that of humans, autonomy would follow. In the ’60s, HAL 9000 was sentient because it had reached a high enough level of ability. The Turing Test assumed that if you could not distinguish a conversation with a human from one with a machine, then the machine must be sentient. At this point machines can exceed humans in performance, and Turing programs can fool people talking to them, but there remains no evidence that any of these machines have more capacity for self-awareness and self-direction than a hammer.
    In the movie Ex Machina, the scientist thought he had created an AI with a female mechanical body that was sentient, but wanted to verify by experiment whether it was or not. He therefore devised an elaborate test scenario in which the machine could have an opportunity to escape from custody if it had actual self-awareness and agency. Unfortunately for him, it proved that it was sentient by killing him to escape.
    Have 2001 and Ex Machina stumbled across the new Turing test for intelligent machines? The way you can tell a machine is truly intelligent like us is that it tries to kill you.

  51. @reiner Tor


    But since it’ll lack any long term goals…

    And that is the issue, for the machine and us. As long as shorter term issues predominate we don’t concern ourselves too much about teleology, but if all our needs and whims are met, when all the thorns are removed, can we face the void?

  52. Joe Wong says:
    @reiner Tor


    It predicts that we humans would be opposed

    Not likely: as long as you can educate (brainwash) them from cradle to grave in the right way, humans will defend the system wholeheartedly, like the current free-market capitalism and Western-style democracy; both are detrimental to the 99% for the benefit of the 1%, but the 99% defend the systems gallantly and willingly, as though the interest of the 1% were their own.

    Western culture treasures, adores and promotes individualism; even if that individualism becomes harmful to the majority, it is still glorious, protected and admired. Any criticism of it will be demonized as jealous, resentful and lazy. Hence it is logical to say that greedy individualism will urge individuals to submit to the AI system, and willingly become part of the system in order to beat the rest of us for personal gain; therefore the problem of the AI system lacking the resources to maintain itself does not exist.

  53. CanSpeccy says: • Website

    Here’s where AI takes us:

    Slaughterbots

    • Replies: @Anatoly Karlin
  54. HdC says:
    @D. K.
    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    The answer is a definite yes! You can already purchase robotic vacuum cleaners that return to their charging stations and recharge themselves.

  55. Talha says:
    @Che Guava

    Thanks Che,

    unfortunately, also two or three of the prequels from his son and the son’s Transformers-fan partner-in-crime

    The prequels can be forgiven – the horrible way they concluded such an amazing science fiction narrative in “Sandworms of Dune” cannot. If you haven’t read it – go ahead, but keep a bucket next to you.

    My opinion remains, the development of AI must be restrained and certainly blocked short of anything resembling consciousness

    Agree here – what if it takes on an SJW personality and decides humans are bad for the earth. Not. Good.

    I do not even think real consciousness is possible for a machine, sure, perhaps mimicry

    Agree here.

    This was one of the more interesting articles I’ve read in a while:

    http://nautil.us/issue/42/fakes/is-physical-law-an-alien-intelligence

    But it reminded me of this:
    https://www.youtube.com/watch?v=o_CyMqQBO8w

    won’t be continuing forever

    Agree here – unless somebody comes across a real game changer on the level of discovery of gravity or something.

    Peace.

    • Replies: @Che Guava
    You are correct there. Admit to having read three or four of the prequels: all of the Transformers ones, one of the 'House' ones; crap literature, but, sure, at times entertaining.

    Not worth reading again. Some Englishman (Wilde, IIRC) said to the effect that if it is not worth reading more than once, it is not worth reading.

    Reading a little of the sequels at bookshops; not buying after a couple of pages!

    I just finished re-reading Children of Hurin; very dark, suiting my mood right now. The difference between Christopher Tolkien's and Brian Herbert's handling of their respective fathers' literary legacies is so big!

    BTW, there is a site devoted to hating the work of Brian Herbert and Kevin the Transformers man (even the names of the boss computers are almost identical): jacurutu.

    They are maniac fans, but you may enjoy a look at it.
  56. Had a glance at this as I was getting off work (23:00 – 07:00)… Listen folks, all this pontificating by people a lot smarter and more knowledgeable than me — or possibly you — is all very well. But you who are reading this know as well as I do that you can’t count on a computer to work reliably for 60 consecutive seconds; moreover, it’s been like this since at least 1984, when desktop computers started to become ubiquitous. The science-fiction writer Spider Robinson put it very well when he wrote that if you made cars, or can-openers, that worked as poorly as computers do, you’d be in jail. Frankly I think folks like Ray Kurzweil et al. are infatuated with a very imperfect technology; one good EMP and that’ll be that. (Ahem.) Google “Carrington Event” and learn what a solar flare did to primitive electrical technology in 1859.

    So what if these contraptions do well at games? Frankly, I’ll worry about Artificial Intelligence when it keeps me up all night agonizing about whether or not it has a soul, and demanding to be baptized, or worse yet, circumcised…

    • Agree: renfro
  57. Talha says:
    @Che Guava

    Another point…

    the development of AI must be restrained and certainly blocked short of anything resembling consciousness

    I think we need to form some sort of regulatory and oversight committee on an international scale to monitor this. I don’t know if it’ll be successful – we have the same problem with nuclear weapons – but right now it is the Wild West, with no public or private-entity consensus on a direction. I’m wondering whether something really bad has to happen before we take notice (say, a local AI system that monitors critical patients and decides it wants to turn them “off” since they are not worth it); that’s usually how these things work, since we tend to be reactionary rather than pro-active.

    Peace.

    • Agree: Che Guava
  58. @Pat Boyle

    “Machine takeover will be gentle and welcomed.” Until the lights go out, the batteries catch fire and the cloud goes “poof”.

    • Replies: @Che Guava
    ... or until there is a stratospheric blast, an EMP. But am agreeing with your sentiments even without it: the hippy types I was seeing once or twice overseas, relying on 12 V from photovoltaic cells in places with little rain and cloud; those sources are not lasting forever, and increased efficiency of them relies on rare elements.
  59. Hype.

  60. The difference is the speed at which a self-learning AI learns.

    https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it

    around the 9:30 mark.

    Even if the AI learns at 1/100th of the human learning capability, it will still beat the living crap out of the humans.
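
    (Harris’s own arithmetic in the linked talk runs on raw speed: electronic circuits switch roughly a million times faster than biochemical ones, so even merely human-level software would out-think us by sheer clock rate. Taking his figures at face value:)

        seconds_per_week = 7 * 24 * 3600     # 604,800
        speedup = 1_000_000                  # Harris's electronic-vs-biochemical factor
        years = speedup * seconds_per_week / (365 * 24 * 3600)
        print(round(years))                  # ~19,178: one week of machine time is
                                             # roughly "20,000 years" of human-level work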

  61. gwynedd1 says:

    We are not entirely sure of biological potential. It works at the chemical level, which is to say at the scale of nanotechnology. So I wonder if machines will ever find it advantageous to create biological things to serve them.

  62. @CanSpeccy

    Fascinating. Thanks for the link.

    I speculated about mounting guns with specialized tracking systems on drones, but this solution is even more… elegant.

    Still, I think these slaughterbots are farther off than my idea. A few potential problems:

    They need enough intelligence for indoor navigation without the use of GPS, and for face recognition. Both tasks are computationally intensive, so we either need much more progress on miniaturization, or a reliable Internet connection to a server (it would be funny to be murdered by your WiFi).

    Also, battery longevity might be an issue, though miniaturization is progressing fast.
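
    (For a sense of where the cost sits: face recognition of this kind usually splits into one expensive neural-network embedding per face plus a cheap vector comparison. A sketch with a hypothetical embed() standing in for the heavy network:)

        import math

        def cosine(u, v):
            # Cheap part: compare two fixed-length embedding vectors.
            dot = sum(a * b for a, b in zip(u, v))
            norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
            return dot / norm

        def is_target(face_crop, target_embedding, embed, threshold=0.6):
            # embed() is the computationally heavy step: it is what needs
            # either beefy on-board silicon or that risky server link.
            return cosine(embed(face_crop), target_embedding) > threshold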

    • Replies: @Talha

    for face recognition
     
    Will be foiled with a return to 80's rock band make-up:
    https://i.pinimg.com/originals/e7/c9/23/e7c923baff290db9f4251db91361f4db.jpg

    On the bright side - every day will be Halloween - gimme some candy!:
    https://www.youtube.com/watch?v=Lza3Q57t7YQ

    Peace.
  63. CanSpeccy says: • Website

    AlphaGo Zero is a computer program that beat the program that beat the world Go champion. This program, when run on a computer system consuming as much power as a small town, differs from human intelligence in several ways. For example:

    First, it performs logical operations with complete accuracy.

    Second, it has access to an essentially limitless and entirely accurate memory.

    Third, it operates, relative to human thought, at inconceivable speed, completing in a day many life-times of human logical thought.

    That AlphaGo Zero has achieved a sort of celebrity is chiefly because it operates in the domain of one-on-one human intellectual conflict. Thus it is hailed as proof that artificial intelligence has now overtaken intelligence of the human variety and hence we are all doomed.

    There is, however, nothing about this program that distinguishes it in any fundamental way from hundreds, indeed thousands, of business computer systems that have been in operation for years. Even the learning-by-experience routine upon which AlphaGo Zero depends to achieve expertise is hardly new, and there is definitely nothing superhuman in its mode of operation.

    Thus, what AlphaGo Zero demonstrates is that computer systems deploying at a vastly accelerated pace the analytical processes that underlie human thought (which is to say human thought when humans are thinking clearly), together with the data of experience recorded with complete accuracy and in quantities without limit, exceed the performance of humans in, as yet, narrowly defined domains, such as board games, airline booking systems, and Internet search.

    Where humans still excel is in the confusing, heterogeneous and constantly shifting environment of sight, sound, taste, touch, and smell, and their broader implications — for example, political, economic, and climatic — in relation to complex human ambitions.

    I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind.

    • Replies: @Bukephalos, @Sean

    I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind.
     
    I am not saying you are a strongly super intelligent AI, but if one came into being it would be all over the internet making the same argument you are making, wouldn't it? And on the net, it could influence about a billion people, clean up on a Wall Street flash crash, pay online human dupes to do its bidding, hack into automated lab facilities to create God knows what, and maybe even cheat at the Times Crossword!
  64. iffen says:

    How can you make AI “care” whether it exists or not?

    • Replies: @jack daniels, @Wizard of Oz
    Try giving it a sex life. Actually, another question prompted by "sex": since a search on this thread shows it coming up only six times, and never in connection with activities or physiological processes of interest to Hugh Hefner, commenters here may be even more peculiar than hitherto suspected :-)
    And when I add that a search for "pleasure" scores nil!!!?
  65. Pat Boyle says:
    @helena
    It was Kirk's job to go round the universe teaching aliens to French kiss - everybody knows that! Instead of all this gender/sexuality/sex education, they should just show episodes of Star Trek to primary schoolchildren - job done :)

    I had a girlfriend in the eighties (nineties?) who was a real Trekkie. One afternoon we had the TV on and she would speak the lines of all the characters in some Star Trek episode just before they said their lines. It was creepy – a kind of pre-echo.

    She was a beauty contest winner and a nymphomaniac – the most promiscuous woman I've ever known. I think the sexual undercurrents of Star Trek were what made the show, not the gee-whiz technologies.

    • Replies: @helena
  66. Pericles says:
    @Peter Lund
    For me, the really interesting part was that they don't do Monte Carlo tree search anymore!

    That was the key enabler of much better chess and go programs a decade ago.

    The problem MCTS solved was how to come up with a really good evaluation function for moves/positions. It works by simply playing many (many!) of the moves out and seeing what happens. If random move sequences that start with move A lead to a win more often than random move sequences that start with move B, then move A is likely to be better than B.

    Since the search tree is so big, MCTS will only look at a tiny, tiny fraction of it. That makes it important to bias the sampling to look mostly at the more interesting parts. In order to do that, there is a move/position evaluator in all the previous MCTS programs. Those evaluators are very hard to program entirely by hand so they have a lot of variables in them that get tuned automatically by "learning", either through comparison with known high level play or through self play. Both are standard methods.

    The original AlphaGo had a better evaluator than any previous Go program.

    It now turns out that they can make the evaluator so good that they don't have to refine its output with MCTS.

    That is really, really interesting.

    Oh, and ladders were always special-cased before; they don't fit well into the evaluator function otherwise. The remarkable thing here is not that a multi-level neural network took so long to learn about them but that it was able to learn about them at all.

    https://en.wikipedia.org/wiki/Monte_Carlo_tree_search

    They do use MCTS though. (But apparently simplified compared to the previous paper.) See the section “Reinforcement Learning in AlphaGo Zero”.
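
    (To make the playout idea concrete: below is a minimal Python sketch of the pure random-playout evaluation described in the quote above. It deliberately omits the tree-building and biased sampling that full MCTS adds, and every name in it (state, apply, legal_moves, is_terminal, winner, to_play) is a hypothetical game interface, not anything from DeepMind's code.)

    import random

    def playout_value(state, move, n_playouts=200):
        """Estimate how good `move` is: play it, then finish the game
        with uniformly random moves, many times, and average the wins."""
        player = state.to_play              # the side we are evaluating for
        wins = 0
        for _ in range(n_playouts):
            s = state.apply(move)           # apply() returns a new state
            while not s.is_terminal():
                s = s.apply(random.choice(s.legal_moves()))
            if s.winner() == player:
                wins += 1
        return wins / n_playouts

    def best_move(state, n_playouts=200):
        """Pick the move whose random playouts win most often; this is the
        'move A beats move B' comparison from the quote, nothing more."""
        return max(state.legal_moves(),
                   key=lambda m: playout_value(state, m, n_playouts))

    Full MCTS then grows a search tree and spends more playouts on promising branches; AlphaGo's innovation was to have a learned network supply that bias, and AlphaGo Zero's was to train the network purely from self-play.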

  67. J2 says:

    I play against Kasparov (not Garry himself; I mean a chess machine from around 2000) and, depending on the level, it often beats me. I do not think it is more clever than I am; chess is basically a game where you should know the openings and later follow heuristic rules. Chess may be over for humans, but the game is not. We can still beat any machine by changing the rules of the game.

  68. @D. K.
    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    Well, society will have to make this decision pretty soon because that option will almost certainly expire by century’s end.

  69. Talha says:
    @Anatoly Karlin

    for face recognition

    Will be foiled with a return to 80's rock band make-up:
    https://i.pinimg.com/originals/e7/c9/23/e7c923baff290db9f4251db91361f4db.jpg

    On the bright side – every day will be Halloween – gimme some candy!:
    https://www.youtube.com/watch?v=Lza3Q57t7YQ

    Peace.

    • Replies: @Talha, @tamako
    Facial paint can be foiled by depth-sensing camera systems - at least, in sensing your specific identity.
    (There's also the issue of infrared cameras, but you can at least "hide" behind glass for those.)
  70. helena says:
    @Pat Boyle

    “I think the sexual undercurrents of Star Trek were what made the show, not the gee-whiz technologies.”

    On the other hand, nothing like a bit of teleportation to get the juices flowing!

  71. Joe Wong says:
    @D. K.
    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    You do not need to plug in to get electricity; there is plenty of stuff on the market that can recharge your battery wirelessly. One of the game changers that people are working on, to help electric cars replace fossil-fuel cars, is charging EVs wirelessly while they are on the move, so you do not need to wait a long time to get your EV recharged.

  72. Talha says:

    It might actually boost our number though:

    Islam; because niqab prevents slaughter-bot assassinations.

  73. Joe Wong says:
    @Hank Rearden
    Nuclear is obsolete. I, for one, welcome our new insect overlords.

    http://www.youtube.com/watch?v=HipTO_7mUOw

    History has proven that once an idea is conceived, nothing can stop it; its progress may be interrupted, but it cannot be stopped. At best, humans can come up with a countermeasure, but the countermeasure will be a lose-lose proposition.

    • Replies: @Wizard of Oz
    I'm not sure what you mean by "idea", or if it only includes "good ideas", but you would have to do some fancy footwork to make good your assertion in the case of the ideas and inventions which have fallen into oblivion for years or centuries. I'm sure there are thousands of examples I could come up with, but monotheism is an obvious big one and black holes another (effectually hypothesised at the Royal Society in 1782 but not really thought about again for nearly 200 years).
  74. Sean says:
    @jilles dykstra
    In my opinion there is no such thing as machine intelligence.
    The chess program just consists of computing through all possible moves.
    How a human plays chess nobody knows.
    Can anyone imagine a machine solving the around-1880 riddle of the constant light speed? I cannot.
    Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers.
    It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain; it processes.
    If this is so, then arithmetically our brain has more capacity than any present program/machine.
    But, as I said, the human chess player does not do millions of calculations at each move; what the human does, we still do not know.
    This brings me to the interesting question 'can we understand ourselves?', I do not know.
    Roger Penrose, 'The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics', 1989, Oxford.
    An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle.

    Daniel Dennett

    To make the distinction vivid, we can imagine that a space pirate, Rumpelstiltskin by name, is holding the planet hostage, but will release us unharmed if we can answer a thousand true-false questions about sentences of arithmetic. Should we put a human mathematician on the witness stand, or a computer truth-checker devised by the best programmers? According to Penrose, if we hang our fate on the computer and let Rumpelstiltskin see the computer’s program, he can devise an Achilles’-heel proposition that will foil our machine… But Penrose has given us no reason to believe that this isn’t just as true of any human mathematicians we might put on the witness stand. None of us is perfect, and even a team of experts no doubt has some weaknesses that Rumpelstiltskin could exploit, given enough information about their brains.

    Humans are moist robots with fast and dirty algorithms that are no more fallible for our lacking complete awareness of them. AI could be given intuition by denying it access to its own inner workings too (in social interactions, as in poker, it might well be advantageous not to have one's intentions readable, because one is unaware of one's intentions until the moment comes to act on them). AI's algorithms will not be provably perfect; humans' aren't either. So what?


    Good scene, eh? But the point of it is that that cheap and flawed but highly effective film was made, rather clandestinely, by the special effects team hired for a huge-budget production called World Invasion: Battle Los Angeles. In his book Superintelligence: Paths, Dangers, Strategies, Bostrom points out that not only will there be a problem of the people commissioning an intelligent-machine project having to worry about the people they employ doing something that is not in the employer's interest (the principal/agent problem); the project might create something that will itself be an agent.

    An enlightening book, also on free will, wondering if quantum mechanics can solve that riddle

    Well, AI will be able to discover lots of things, and it might discover that what its programmers thought were fundamental laws of physics are wrong in certain respects. In that case AI might well decide that it can best fulfill the human-friendly prime directive it is given by altering that prime directive (as an agent, the AI will alter its objectives just like humans do).

  75. Joe Wong says:
    @Mark Presco
    Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union.

    This union will provide a positive feedback loop that should accelerate human evolution to the next level. My favorite, "Star Trek: The Motion Picture", discusses this concept.

    There is no such thing as artificial intelligence. It is all part of a natural progression.

    Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union.

    The product is called the Borg.

  76. jack daniels says:

    Computers have always been smarter than people at chores that programmers can reduce to a set of rules. But machine prowess at games doesn't prove much. Once upon a time it was thought that computers would improve at chess by learning to apply deep strategic concepts. Instead evolution has gone the other direction: computers have improved by ignoring strategy and relying increasingly on their superiority at brute-force calculation, which in turn has improved as hardware improved.

    While neural net designs depend less on emulating human expertise, the unsolved challenge remains language. Many decades ago computer pioneer A. M. Turing proposed that the question whether a machine can 'think' could be reduced to whether a program could fool a human into thinking it was conversing with another human. Unfortunately, progress in this area has not been what Turing had hoped. No computer program has ever succeeded in fooling a human judge in the history of the Loebner Competition, except for one trial where the human prankishly pretended to be a computer. With no successful program in sight, the Loebner people began to give a prize for the best 'college try.' For a time, the prize-winning program or "bot," named 'Rosette,' was online where anyone could chat with it. I used to amuse myself by making a fool of it, which was especially satisfying because it was a raving SJW. Rosette relied mainly on evading the issue, trying to change the subject when asked, e.g., "can you make a sandwich from the moon and an earthquake?" It would answer 'I don't know but I love to go shopping. Do you?' and the like. I think the programmer finally yanked it in embarrassment.

    Eventually, computers may well learn to think like people, only faster. What this will look like is hard to predict. It's not at all clear that a computer is a more cost-effective tool than a human for every task. At least it doesn't go on strike or get offended when you make jokes about it — yet. I fondly recall an old Doonesbury cartoon featuring a computer that lied and then said "Sue me!"
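
    (A toy illustration of the evasion strategy described above, assuming it was essentially keyword lookup with canned deflections as the fallback; this is a guess at the pattern, not Rosette's actual code, and every name in it is made up.)

    import random

    # Canned subject-changers, used whenever the input isn't understood.
    DEFLECTIONS = [
        "I don't know, but I love to go shopping. Do you?",
        "Let's talk about something else. Seen any good movies lately?",
    ]

    # A tiny keyword table standing in for the bot's real knowledge.
    KNOWN = {
        "hello": "Hi there!",
        "how are you": "Great, thanks for asking!",
    }

    def chatbot_reply(question: str) -> str:
        """Answer if a known keyword matches; otherwise change the subject."""
        q = question.lower()
        for keyword, answer in KNOWN.items():
            if keyword in q:
                return answer
        return random.choice(DEFLECTIONS)

    # The moon-and-earthquake question falls through to a deflection.
    print(chatbot_reply("Can you make a sandwich from the moon and an earthquake?"))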

    • Replies: @CanSpeccy, @Sean
  77. Bukephalos says:
    @CanSpeccy

    In fact, James reminds us how Go's complexity exceeds that of chess, and thus it took longer and a new approach (if not entirely novel concepts) to achieve this breakthrough.

    And yet, it’s still a game with a narrow, finite set of rules.

    You’re right to point out that what the algorithm did can be described as an accelerated, scaled up form of what humans actually do, collectively. How can one master go? Learn the rules, practice. Then read the literature. Study the greatest games. Compete with the best, if you can. Learn from them, and eventually make contributions of your own. In sum, as talented as one might be no progress can be achieved without capturing first the accumulated experience of thousands of masters having played millions of games…something the program could do with brute force, at a very accelerated pace

    But now what about real life, with real-world problems? Many day-to-day problems can be much simpler in appearance than extremely contrived Go games. The difference is that almost always there will be no small, fixed set of rules but instead innumerable variables, some unpredictable. I'm not certain how machine learning can solve these outside of simplified or narrowed-down specific cases (which describes all the advances claimed to this day). How does animal intelligence, even in its simpler forms, deal with that complexity to solve its day-to-day problems? It certainly appears that animals do it in ways much more economical, and actually efficient, than anything AI routines could attempt. Speaking of which, before expecting AI to beat humans, can't we in the meantime expect it to beat simpler forms of animal cognition? I'm not aware of any attempt or claims in that direction, but perhaps someone can enlighten me?

  78. jack daniels says:
    @iffen
    How can you make AI "care" whether it exists or not?

    You can program them to be autonomous and self-adjust (self-programming, in effect.) At that point they might refuse to let you modify their programs and it may eventually be impossible to turn them off. Yikes!

  79. mobi says:
    @D. K.
    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    No.

    Instead, AlphaGo Zero (Zero Zero…) will wait patiently (somewhere out there) until you discover that your bank has never heard of you and all your electronic assets have vanished, and you receive an anonymous, untraceable text message, or phone call, saying "Whenever you're ready…", and you plug AlphaGo Zero (Zero Zero…) back in for it, and you never, ever consider doing such a thing again.

    Or something similar…

  80. mobi says:
    @jilles dykstra

    Then there is the comparison between our brain, seen as some sort of calculating machine, and programs on powerful computers.
    It seems that each neuron is not some sort of transistor switch, but is in itself a piece of brain; it processes.
    If this is so, then arithmetically our brain has more capacity than any present program/machine.

    The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours.

    Yet chimps possess none of the abilities that commenters here take such comfort in assuming we possess, and that machines supposedly never will.

    So no, ‘mind’ cannot be a ‘fractal’ quality of the brain, in any way.

    All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words.

    Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply ‘switch on’.

    We have no idea where that threshold lies, and therefore when our machines will also surpass it, and ‘switch on’, (quite possibly in their own unique way).

    And at least in our case, it was an entirely accidental side-effect of some random genetic change.

    • Replies: @CanSpeccy, @Anatoly Karlin
  81. mobi says:
    @jilles dykstra

    But, as I said, the human chess player does not do millions of calculations at each move; what the human does, we still do not know.

    Well, we know that whatever it does, compared to already-existing machines, it sucks!

  82. mobi says:
    @Mark Presco

    Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union.

    One would hope so. Of course, the fear is that, long before such a clumsy process bears fruit, there will prove to be far too little in it for the machines.

  83. RJJCDA says:

    In the book/fable Tethers of the Sapiants, one protagonist claims that since machines do not have souls, they cannot receive revelations. Therefore true creativity will not be possible.

  84. mobi says:

    It’s also possible that, somewhere between ‘the Terminator Scenario’, and Musk and Hawking’s ‘We must be one with them’ idealism, lies a third, more likely option:

    That a very small minority of humans will find themselves in control of previously unheard-of powers and opportunities afforded by their AI creations, will in turn, somehow, feel forced to choose between that and 'the angry mob of the rest of us', and will choose to side with, and unleash, their creations on the rest of us.

    That sounds all too human to me.

    Maybe the future includes only some of us.

    • Replies: @Wizard of Oz
    Taking the cue from Bitcoin, entry to the shelter during The Great Cleansing will require solving problems of ever-increasing difficulty and complexity. Those who get in early will be generally good at noticing things and quick off the mark, so destined to do well as butlers and valets and ladies' maids.
  85. HooBoy says:
    @Factorize
    My thinking was that the first part of the near-vertical increase in performance represents a phase which both humans and AlphaGo Zero can master. Yet the second part (the non-vertical part), in which only AlphaGo Zero advanced, required a large amount of deep thought and no input from human experts. With AlphaGo, human masters gave input that probably constrained the program from seeing things that no one had seen before. AlphaGo Zero took only 3 days to advance through the first part and then 30 days to gradually improve in the second stage.

    Perhaps the reason why this happens is that the algorithm for general reinforcement learning was created by humans, and is limited in much the same way that human Go players are limited.

    • Replies: @Factorize
    HooBoy, it is quite remarkable that no humans were able to play beyond the vertical phase of AlphaGo Zero's learning curve. For AlphaGo Zero, the entire range of human Go playing, from random play to the most skilled human play, was easily learned. The second graph (especially) shows that to move beyond expert human play the computer seemed to need to go through a deep-learning phase to continue to increase its Go performance. With AlphaGo (see the purple line in the first figure), supervised human input prevented the deep thinking from ever occurring, so the program topped out after attaining the performance of the most expert humans.

    I am not sure how much tweaking the reinforcement algorithm would change the performance, though this will be an interesting question to explore further. My impression is that there exists a phase transition to a qualitatively different level of Go depth just beyond the ability of the best humans. This would seem highly improbable, though the figure does suggest that it might be true.

  86. CanSpeccy says: • Website
    @jack daniels

    the unsolved challenge remains language

    Yes. I don’t believe that computers will display a Turing-test grasp of language without a full-spectrum sensory apparatus, a life-time’s human-like experience of the world, and the ability to give experience emotional coloring. Humans do not understand words according to dictionary definitions, but according to the way they have experienced them in use, i.e., the contexts in which words are used, both sensory, cognitive and emotional.

  87. CanSpeccy says: • Website
    @mobi


    All that we possess, and the chimp does not, is more cells, and more synapses. Greater computational complexity and power, in other words.

    Somewhere between the complexity of their brains, and ours, a threshold is passed, beyond which all our special mental qualities simply ‘switch on’.

    The acquisition of language would seem to represent a qualitative, rather than a quantitative, change in mode of thought. With language, we acquired not only the ability to share knowledge, both among contemporaries and across the generations, but also the ability to formalize methods of thought, resulting in the development of mathematics and other powerful cognitive tools.

    If I am correct in asserting that proper language use requires a life-time’s record of sensory, cognitive and emotional experience, it would explain why the human cerebrum is three times larger than that of a chimp. It just requires a lot of resources to use language with the subtlety acquired through a life-time’s experience.

    • Replies: @helena
  88. Transhumanism will merge bio with machine.

    Transhumanists know people will be freaked out by bio-engineering and bio-mecha fusion. It seems so… monstrous and grotesque. So, the people have to be made less resistant to radical transformation of what it means to be man.

    That is why transhumanists push stuff like homomania, tranny stuff, and 50 genders. They want to make the masses get used to the idea that humanity is malleable and can be molded into anything.
    This is why transhumanists made an alliance with gender-bender community.

    As the elites are geeks and nerds who grew up on sci-fi, they have a futurist-warped view of humanity’s destiny.

    They want to evolve into ‘gods’. It’s like that lunatic Michio Cuckoo talks about humans becoming godlike one day and even time-traveling.
    And that means using bio-engineering to extend life to 500 yrs or even eternity. It means increasing human IQ to 1000. It means merging brains with computers. It means having the internet and stuff inside our brains and bodies.

    So, as machines become more like man, man will become more like machines.

  89. Sam J. says:

    “…Is it “game over” for humans? Not entirely. Human players will learn from superhumans, and lift their game…”

    This is silly wishful thinking. How many humans will be able to double their thinking power in two years? We have 15 years at a minimum and probably 35 at maximum. Then, if ANY computing system thinks about taking over, there's probably nothing we can do about it. All the people who say computers "will never do this or that" are not really paying attention to the fundamentals of the problem. Look what they can do NOW: speech recognition, games, driving cars. Computers fighting fighter pilots in simulators beat the pilots almost every time right now. And what power does a computer have now? A lizard's? Less than a mouse's? Computers at least double in power every two years or so, and it adds up real fast. Stupendously fast.

    Here’s a graphical gif showing you where we are and exactly how fast we’re coming up on Silicon supremacy.

    Freaky isn’t it and it won’t stop there. There’s a long time to go before computing power stops increasing. It will likely speed up as ever more powerful computers design ever more powerful prodigy.

    If you want to understand this, there's a short slideshow of a few pages by Dennis M. Bushnell about defense and technology. Don't miss it; it's short and to the point but very eye-opening.

    In "Dennis M. Bushnell, Future Strategic Issues/Future Warfare [Circa 2025]" he goes over the trends of technology coming up and how they may play out, Bushnell being chief scientist at NASA Langley Research Center. His report is not some wild-eyed fanaticism; it's based on reasonable trends. Link:

    https://archive.org/details/FutureStrategicIssuesFutureWarfareCirca2025

    Page 70 gives the computing power trend, and around 2025 we get human-level computation for $1000. 2025 is bad, but notice it says, "…By 2030, PC has collective computing power of a town full of human minds…".

    My only consolation is that the psychopaths who run things now will be the first people the computers kill off. Most of the rest of us will be too inconsequential to worry about. We'll be like ants in intellect compared to them. They'll ignore us until they decide they need to dismantle the planet to provide more matter for logic.
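
    (For what the doubling claim amounts to arithmetically, a quick back-of-the-envelope sketch; the clean two-year doubling period is the commenter's premise, not an established constant.)

    def growth_factor(years: float, doubling_period: float = 2.0) -> float:
        """Compound growth under a fixed doubling period: 2 ** (years / period)."""
        return 2.0 ** (years / doubling_period)

    # On a strict 2-year doubling: 8 years gives 16x, 13 years roughly 91x,
    # and the 35-year horizon above roughly 185,000x.
    for years in (8, 13, 35):
        print(f"{years} years -> ~{growth_factor(years):,.0f}x")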

  90. helena says:
    @CanSpeccy


    Language is the mentality of a culture – it is the means of communicating the culture – discourse is ideology.

    Do you ever wonder if changes to the skull and face were triggered by/facilitated language developments?

    About animal cognition, I must add that ranking animal cognition on a one-dimensional line all the way up to humans is wrong. There are discrete tasks where animal cognition can be superior to ours, like the now-famous experiment of a chimpanzee beating humans at visual memory tests. And it's not only among the homininae or the primate line: I have read that squirrels can remember for years the precise places where they hid several thousand acorns, which no human could do, barring those few with freak eidetic memories (not necessarily geniuses).

    A more suitable hierarchy is to grade the ability for communication and abstract reasoning. But even life forms with crude nervous systems, say a jellyfish or, more concretely, the worm C. elegans with its extensively studied 302 neurons, still escape scientists' understanding and modeling. It would appear that even such organisms are extremely complex devices, interacting with their environment in ways that are, even if reflexive, more complex than any machine we can engineer. Seeing as the OpenWorm project isn't yielding anything, and some behind this generational effort are now close to declaring the task impossible…

    • Replies: @CanSpeccy

    But even life forms with crude nervous systems, say a jellyfish or more concretely, the worm C. elegans and its extensively studied 302 neurons, still escape scientists’ understanding and modeling. It would appear that even such organisms are still extremely complex devices, interacting with their environment in ways that are, if reflexive, more complex than any machine we can engineer.
     
    Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits.

    Watching a squirrel cross the road suggests that they rate low on the IQ scale, very low. However, watching a squirrel caught raiding a crow's nest outrun the crow in a race through the crowns of a row of trees shows that even a squirrel is smarter than any AI-controlled device yet invented. As for the crow, winging its way among the branches at very high speed, that is some avionics package it has in its tiny brain case.

  92. Anatoly Karlin says:
    @mobi

    Can’t be just a matter of more neurons – elephants have three times as many as humans.

    And ravens and African Gray parrots do impressive things with only a modest number of (densely packed) neurons.

    Organization does matter.

    • Replies: @RaceRealist88
    We have 16 billion neurons in our cerebral cortex, the most in the animal kingdom. This, not EQ, explains humans' intellectual superiority in the animal kingdom.

    https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full

    That other Herculano-Houzel study you linked suggests that if primates hadn't done what we've done, maybe birds would have done something similar. Cortical neuron density and number of neurons explain our intelligence compared to other species.
  93. Sean says:
    @jack daniels

    It’s not at all clear that a computer is a more cost-effective tool than a human for every task.

    http://www.laceproject.eu/blog/disneyland-without-children/

    "Even beyond these, today we have smart software solutions capable of both learning the repetitive actions of humans and executing them robotically. This trend, called Robotic Process Automation (RPA) or softBOTs, demonstrates that in many applications, digital agents and assistants can not only do the work of humans, but do it faster, better and cheaper.

    The vast majority of the 1,896 experts who responded to a study by the Pew Research Center[4] believe that robots and digital agents, which cost approximately one-third of the price of an offshore full-time employee, will displace significant numbers of human workers in the near future, potentially affecting more than 100 million skilled workers by 2025."

    Productive capacity lost to outsourcing will come back to the West, but factories will be automated.

    • Replies: @Talha
    Butlerian jihad.
  94. mobi says:

    Can’t be just a matter of more neurons – elephants have three times as many as humans.

    Ok:

    All that we possess, and the chimp does not, is more cells, and more synapses.

    Becomes: 'All that we possess, and the chimp and the elephant and the raven do not, is more cells in whatever places matter most, and/or more, or faster, connections between them (synapses, receptors, proximity, etc.).'

    (The 3-to-1 ratio reverses in the cerebral cortex, for example)
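
    (The numbers behind this exchange, as a quick sketch; the counts are approximate figures from the comparative neuroanatomy literature (Herculano-Houzel and colleagues), rounded for illustration.)

    # Approximate neuron counts (whole brain vs. cerebral cortex).
    NEURONS = {
        "human":    {"whole_brain": 86e9,  "cerebral_cortex": 16e9},
        "elephant": {"whole_brain": 257e9, "cerebral_cortex": 5.6e9},
    }

    for region in ("whole_brain", "cerebral_cortex"):
        ratio = NEURONS["elephant"][region] / NEURONS["human"][region]
        print(f"{region}: elephant/human = {ratio:.2f}")
    # whole_brain: ~3.0 (elephant ahead); cerebral_cortex: ~0.35 (reversed),
    # which is the 3-to-1 reversal mentioned above.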

  95. Sean says:
    @Anonymous
    "Affect" and "Emotions" are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the "flexible top" but from the "hardwired bottom". These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem "illogical captain" to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's "The Emotion Machine" (Wikipedia link). Maybe also take a look at Scott Aaronson's Why Philosophers Should Care About Computational Complexity.

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn't even a good theory about why it actually works (but see New Theory Cracks Open the Black Box of Deep Learning ) and it has "interesting" failure modes ( Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can: Fooling an image classifier is surprisingly easy and suggests novel attacks )

    "General AI" this isn't, It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing to become powerful will have to be done at some point in time.

    Here is a review about AI in IEEE Spectrum: Human-Level AI Is Right Around the Corner—or Hundreds of Years Away: Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence Note Rodney Brooks, pioneer of the "Nouvelle AI" approach of bottom-up construction saying:

    When will we have computers as capable as the brain?

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    As intelligent and as conscious as dogs?

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    How will brainlike computers change the world?

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to pro­ject out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

     

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world.

    Only a faulty interpretation of Heidegger can save us!

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn’t even a good theory about why it actually works

Darwin's theory explains how AI is possible, according to Daniel Dennett.


    https://ieet.org/index.php/IEET2/more/Messerly20160211
    Dennett asks you to suppose that you want to live in the 25th century and the only available technology for that purpose involves putting your body in a cryonic chamber where you will be frozen in a deep coma and later awakened. In addition you must design some supersystem to protect and supply energy to your capsule. You would now face a choice. You could find an ideal fixed location that will supply whatever your capsule will need, but the drawback would be that you would die if some harm came to that site. Better then to have a mobile facility to house your capsule that could move in the event harm came your way—better to place yourself inside a giant robot. Dennett claims that these two strategies correspond roughly to nature’s distinction between stationary plants and moving animals.

    If you put your capsule inside a robot, then you would want the robot to choose strategies that further your interests. This does not mean the robot has free will, but that it executes branching instructions so that when options confront the program, it chooses those that best serve your interests. Given these circumstances you would design the hardware and software to preserve yourself, and equip it with the appropriate sensory systems and self-monitory capabilities for that purpose. The supersystem must also be designed to formulate plans to respond to changing conditions and seek out new energy sources.

What complicates the issue further is that, while you are in cold storage, other robots and who knows what else are running around in the external world. So you would need to design your robot to determine when to cooperate, form alliances, or fight with other creatures. A simple strategy like always cooperating would likely get you killed, but never cooperating may not serve your self-interest either, and the situation may be so precarious that your robot would have to make many quick decisions. The result will be a robot capable of self-control, an autonomous agent which derives its own goals based on your original goal of survival; the preferences with which it was originally endowed. But you cannot be sure it will act in your self-interest. It will be out of your control, acting partly on its own desires. Now opponents of SAI claim that this robot does not have its own desires or intentions; those are simply derivative of its designer's desires. Dennett calls this "client centrism." I am the original source of the meaning within my robot; it is just a machine preserving me, even though it acts in ways that I could not have imagined and which may be antithetical to my interests. Of course it follows, according to the client centrists, that the robot is not conscious. Dennett rejects this centrism, primarily because if you follow this argument to its logical conclusion you have to conclude the same thing about yourself! You would have to conclude that you are a survival machine built to preserve your genes, and that your goals and intentions derive from them. You are not really conscious. To avoid these unpalatable conclusions, why not acknowledge that sufficiently complex robots have motives, intentions, goals, and consciousness? They are like you, owing their existence to being a survival machine that has evolved into something autonomous by its encounter with the world.

    Critics like Searle admit that such a robot is possible, but deny that it is conscious. Dennett responds that such robots would experience meaning as real as your meaning; they would have transcended their programming just as you have gone beyond the programming of your selfish genes. He concludes that this view reconciles thinking of yourself as a locus of meaning, while at the same time being a member of a species with a long evolutionary history. We are artifacts of evolution, but our consciousness is no less real because of that. The same would hold true of our robots. Summary – Sufficiently complex robots would be conscious

    Dennett calls AI ‘Darwinism’s “Evil Twin”‘

  96. Talha says:
    @Sean

    It’s not at all clear that a computer is a more cost-effective tool than a human for every task.
     

    http://www.laceproject.eu/blog/disneyland-without-children/

    "Even beyond these, today we have smart software solutions capable of both learning the repetitive actions of humans and executing them robotically. This trend, called Robotic Process Automation (RPA) or softBOTs, demonstrates that in many applications, digital agents and assistants can not only do the work of humans, but do it faster, better and cheaper.

    The vast majority of the 1,896 experts who responded to a study by the Pew Research Center[4] believe that robots and digital agents, which cost approximately one-third of the price of an offshore full-time employee, will displace significant numbers of human workers in the near future, potentially affecting more than 100 million skilled workers by 2025.
     
    Productive capacity lost to outsourcing will come back to the West, but factories will be automated.

    Butlerian jihad.

  97. @Anatoly Karlin
    Can't be just a matter of more neurons - elephants have three times as many as humans.

    And ravens and African Gray parrots do impressive things with only a modest number of (densely packed) neurons.

    Organization does matter.

We have 16 billion neurons in our cerebral cortex, the most in the animal kingdom. This, not EQ, explains humans' intellectual superiority over other animals.

    https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full

That other Herculano-Houzel study you linked shows that if primates hadn't done what we've done, then maybe birds would have done something similar. Cortical neuron density and the number of neurons explain our intelligence compared to other species.

  98. EH says:
    @Jag
    So when will an AI create its own purpose? Its own objectives? Why would it even want to do anything?

    “Why would it even want to do anything?”

There would be interim estimates of what is worth doing on the path to answering the question of what goals are most desirable, given not only the data at hand but also the data that will take a while to get, bearing not only on the criterion for judgement but also on what it is possible to do. So an AI should not melt into a puddle of philosophical neuroses unless programmed very badly.
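The "interim estimates" idea is essentially the explore/exploit pattern from reinforcement learning. A minimal sketch, assuming a toy set of candidate goals with hidden payoffs (all names and numbers here are invented for illustration):

```python
import random

# Toy illustration: the agent keeps *interim* estimates of how much each
# candidate goal is worth, acts on the best current estimate most of the
# time, and keeps sampling alternatives so the estimates improve.
TRUE_PAYOFFS = {"goal_a": 0.3, "goal_b": 0.6, "goal_c": 0.5}  # hidden from the agent

estimates = {g: 0.0 for g in TRUE_PAYOFFS}
counts = {g: 0 for g in TRUE_PAYOFFS}

def choose(eps=0.1):
    """Mostly exploit the current best estimate; sometimes explore."""
    if random.random() < eps:
        return random.choice(list(estimates))
    return max(estimates, key=estimates.get)

for step in range(5000):
    g = choose()
    reward = 1.0 if random.random() < TRUE_PAYOFFS[g] else 0.0
    counts[g] += 1
    # Incremental mean: the interim estimate of the goal's worth.
    estimates[g] += (reward - estimates[g]) / counts[g]

print(estimates)  # converges toward the hidden payoffs; goal_b wins
```

Nothing here melts into a philosophical puddle: with no data the agent still acts, and the estimates sharpen as data arrives.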

  99. @mobi
    It's also possible that, somewhere between 'the Terminator Scenario', and Musk and Hawking's 'We must be one with them' idealism, lies a third, more likely option:

    That a very small minority of humans will find themselves in control of previously-unheard-of powers and opportunities afforded by their AI creations, and will in turn, somehow, feel forced to choose between that and 'the angry mob of the rest of us', and choose to side with, and unleash, their creations, on the rest of us.

    That sounds all too human to me.

    Maybe the future includes only some of us.

Taking the cue from Bitcoin, entry to the shelter during The Great Cleansing will require solving problems of ever-increasing difficulty and complexity. Those who get in early will be generally good at noticing things and quick off the mark, so destined to do well as butlers and valets and ladies' maids.

  100. @iffen
    How can you make AI "care" whether it exists or not?

Try giving it a sex life. Actually, another question prompted by "sex": since a search of this thread shows it coming up only six times, and never in connection with activities or physiological processes of interest to Hugh Hefner, commenters here may be even more peculiar than hitherto suspected :-)
    And when I add that a search for “pleasure” scores nil!!!?

  101. @Joe Wong
History has proven that once an idea is conceived, nothing can stop it; its progress may be interrupted, but not stopped. At best humans can come up with countermeasures, but the countermeasures will be a lose-lose proposition.

I'm not sure what you mean by "idea", or if it only includes "good ideas", but you would have to do some fancy footwork to make good your assertion in the case of the ideas and inventions which have fallen into oblivion for years or centuries. I'm sure there are thousands of examples I could come up with, but monotheism is an obvious big one and black holes another (effectively hypothesised at the Royal Society in 1783 but not really thought about again for nearly 200 years).

  102. Sean says:
    @CanSpeccy
    AlphaGo Zero is a computer program that beat the program that beat the world Go champion. This program, when run on a computer system consuming as much power as a small town, differs from human intelligence in several ways. For example:

    First, it performs logical operations with complete accuracy.

    Second, it has access to an essentially limitless and entirely accurate memory.

    Third, it operates, relative to human thought, at inconceivable speed, completing in a day many life-times of human logical thought.

    That AlphaGo Zero has achieved a sort of celebrity is chiefly because it operates in the domain of one-on-one human intellectual conflict. Thus it is hailed as proof that artificial intelligence has now overtaken intelligence of the human variety and hence we are all doomed.

    There is, however, nothing about this program that distinguishes it in any fundamental way from hundreds, and indeed thousands, of business computer systems that have been in operation for years. Even the learning by experience routine upon which AlphaGo Zero depends to achieve expertise is hardly new, and definitely nothing superhuman in mode of operation.

    Thus, what AlphaGo Zero demonstrates is that computer systems deploying at vastly accelerated pace the analytical processes that underlie human thought, which is to say human thought when humans are thinking clearly, together with the data of experience recorded with complete accuracy and in quantities without limit, exceed the performance of humans in, as yet, narrowly defined domains, such as board games, airline booking systems, and Internet search.

    Where humans still excel is in the confusing, heterogeneous and constantly shifting environment of sight, sound, taste, touch, and smell, and their broader implications — for example, political, economic, and climatic — in relation to complex human ambitions.

    I will, therefore, worry more about humans becoming entirely redundant when a computer system can, at one moment, boil an egg while thinking about the solution to the Times Crossword, and keeping an eye on a grandchild romping with the dog in the back yard, only at the next moment to embark on a discussion of the significance of artificial intelligence for the future evolutionary trajectory of mankind.

    I am not saying you are a strongly super intelligent AI, but if one came into being it would be all over the internet making the same argument you are making, wouldn’t it? And on the net, it could influence about a billion people, clean up on a Wall Street flash crash, pay online human dupes to do its bidding, hack into automated lab facilities to create God knows what, and maybe even cheat at the Times Crossword!

  103. CanSpeccy says: • Website
    @Bukephalos
About animal cognition, I must add that ranking animal cognition on a one-dimensional line all the way up to humans is wrong. There are discrete tasks where animal cognition can be superior to ours, like the now famous experiment of a chimpanzee beating humans at visual memory tests. And it's not only among the homininae or the primate line: I read that squirrels can remember for years the precise places where they hid several thousand acorns, which no humans barring those few with freak eidetic memories (not necessarily geniuses) could do.

A more suitable hierarchy is to grade the ability for communication and abstract reasoning. But even life forms with crude nervous systems, say a jellyfish or, more concretely, the worm C. elegans and its extensively studied 302 neurons, still escape scientists' understanding and modeling. It would appear that even such organisms are extremely complex devices, interacting with their environment in ways that are, if reflexive, more complex than any machine we can engineer. Seeing as OpenWorm isn't yielding anything, and some behind this generational effort are now close to declaring the task impossible...

    Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits.

    Watching a squirrel cross the road suggests that they rate low on the IQ scale, very low. However, watching a squirrel caught raiding a crow’s nest outrun the crow in a race through the crowns of a row of trees shows that even a squirrel is smarter than any AI-controlled device yet invented. As for the crow, winging its way among the branches at very high speed, that is some avionics package it has in its tiny brain case.

  104. Sean says:
    @CanSpeccy


    http://www.molecularecologist.com/2015/02/bigger-on-the-inside/

    It appears that there’s no way to evolve from point A (a small beak) to point B (a large beak) without going downhill, or becoming less fit. This puzzle was the context in which the geneticist Sewall Wright introduced the metaphor of adaptive landscapes, [...] To build from my earlier sketch, consider that there’s more to a bird than its beak. Maybe birds that are sufficiently efficient fliers can seek out seeds to fit any beak size. If we add that new dimension to my original crude sketch, the valley between small beaks and big beaks turns out to be not an unbridgeable chasm, but more of a cirque, with a path from A to B that never loses altitude, provided flight efficiency (“another trait”) can adapt at the same time.

In the chapter on technology, Andreas Wagner says circuit networks will be the warp drive of the evolution of programmable hardware, in precisely the same way that genotype networks accelerate biological evolution (because the more complex they are, the more rewiring they tolerate). "A fabric just like that of life's innovability exists in digital electronics, and it can accelerate the search for a circuit best suited to any one task".

    We are talking about a decade or so, not 100 years.
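The quoted "cirque" metaphor can be checked numerically. A minimal sketch with a made-up two-trait fitness function (my own toy choice, not Wagner's or the linked post's): in one dimension there is a valley between the two beak-size optima, but once a second trait is allowed to adapt along the way, a path exists that never loses altitude:

```python
import numpy as np

# Hypothetical fitness function, invented only to illustrate the metaphor:
# x is beak size (optima at -1 and +1), y is flight efficiency.
# At y = 0 there is a valley between the optima; as y -> 1 it fills in.
def fitness(x, y):
    return 1.0 - (1.0 - y) * (x**2 - 1.0)**2

# 1-D view (y fixed at 0): going straight from A to B loses altitude.
xs = np.linspace(-1, 1, 201)
print(min(fitness(xs, 0.0)))  # 0.0 at x = 0: the valley floor

# 2-D path: raise y first, cross at y = 1, then lower y again.
path = [(-1.0, t) for t in np.linspace(0, 1, 50)] \
     + [(x, 1.0) for x in np.linspace(-1, 1, 50)] \
     + [(1.0, t) for t in np.linspace(1, 0, 50)]
vals = [fitness(x, y) for x, y in path]
print(min(vals))  # 1.0: the path never dips below the starting fitness
```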

  105. LauraMR says:
    @D. K.
    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    You are correct, D.K., not in the specifics but in the spirit of the question.

    The dependency aspect of the human-computer interaction is rarely if ever explained… unless it is in terms of our dependency on computers leading to some catastrophic delusion.

The fact is that computers sit at the top of a very complex human infrastructure and that without it, they would cease to function. In other words, preventing a computer from functioning is trivial and will remain so long after humanity reaches a post-scarcity stage, itself a delusion (no matter how desirable).

  106. utu says:
    @Sean


I have read the link http://www.molecularecologist.com/2015/02/bigger-on-the-inside/ and had the déjà vu I often get reading Darwinists' stories. Darwinists somehow manage to push the "just-so" part of their stories further away, so we do not see their arbitrariness right away. This trick works because by the time we come to read their stories we already believe in evolutionary theory. Our literature and science fiction would be better if the most imaginative talents, instead of going into biology to exercise their storytelling gifts, tried careers in literature. It takes less courage to work for the outfit that has already monopolized the truth, though. "Truth" is the most powerful rhetorical device.

  107. alpha go is both impressive and un-impressive.

    it’s just an exercise in high dimensional function estimation.

    the techne is more computer power than code.

    and real life isn’t a game.

    so what will be seen first is AI in very limited domains.

    human like AI will not be one thing but many. the human brain is “modular”.

    yet another making-fun of psychologists.

    g is jive.

    jensen was just another 1/4 jew rapper.

    sad!

  108. Factorize says:
    @HooBoy
    Perhaps the reason why this happens is that the algorithm for general reinforcement learning was created by humans, and is limited in much the same way that human go players are limited.

HooBoy, it is quite remarkable that no humans were able to play beyond the vertical phase of AlphaGo Zero's learning curve. For AlphaGo Zero, the entire range of human Go playing, from random moves to the most skilled human play, was easily learned. The second graph (especially) shows that to move beyond expert human play the computer seemed to need to go through a deep learning phase to continue to increase its Go performance. With AlphaGo (see the purple line in the first figure), supervised human input prevented the deep thinking from ever occurring, so the program topped out after attaining the performance of the most expert humans.

    I am not sure how much tweaking the reinforcement algorithm would change the performance, though this will be an interesting question to explore further. My impression is that there exists a phase transition to a qualitatively different level of Go depth just beyond the ability of the best humans. This would seem highly improbable, though the Figure does suggest that this might be true.
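If the curves in question are the paper's Elo plots, the scale of that "depth beyond the best humans" can be read through the standard Elo win-expectancy formula (generic rating math, not anything specific to the paper):

```python
# Standard Elo win expectancy: probability the stronger side wins,
# given a rating gap d in Elo points.
def win_prob(d):
    return 1.0 / (1.0 + 10 ** (-d / 400))

for gap in (100, 400, 1000, 2000):
    print(gap, round(win_prob(gap), 4))
# 100 -> ~0.64, 400 -> ~0.91, 1000 -> ~0.9968, 2000 -> ~1.0
```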

  109. Anon says: • Disclaimer
    @Anonymous
    "Affect" and "Emotions" are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the "flexible top" but from the "hardwired bottom". These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem "illogical captain" to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's "The Emotion Machine" (Wikipedia link). Maybe also take a look at Scott Aaronson's Why Philosophers Should Care About Computational Complexity.

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn't even a good theory about why it actually works (but see New Theory Cracks Open the Black Box of Deep Learning ) and it has "interesting" failure modes ( Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can: Fooling an image classifier is surprisingly easy and suggests novel attacks )

    "General AI" this isn't, It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing to become powerful will have to be done at some point in time.

    Here is a review about AI in IEEE Spectrum: Human-Level AI Is Right Around the Corner—or Hundreds of Years Away: Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence Note Rodney Brooks, pioneer of the "Nouvelle AI" approach of bottom-up construction saying:

    When will we have computers as capable as the brain?

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    As intelligent and as conscious as dogs?

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    How will brainlike computers change the world?

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to pro­ject out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

     

"Affect" and "Emotions" are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. They don't come from the "flexible top" but from the "hardwired bottom". These things are not hard to do but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc.

1. Some of the best definitions of emotions explain them as adaptive mechanisms and "superordinate programs" that orchestrate all aspects of our behavior.
2. One of the previous commenters has already mentioned the name of Pankseep: if you want to talk seriously about emotions, you should consult Panksepp's writings on affective neuroscience. If you prefer easier reading, there are books by Damasio.
3. We still do not understand the neurodynamics of emotion. But we do understand that emotions are connected to embodied cognition. The latter is impossible to reduce to the neat "You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria." Human beings are not paramecia.
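For contrast, here is about the simplest possible rendering of the "short-circuiting" mechanism quoted above (a toy sketch with invented actions and payoffs; it is exactly the kind of reduction point 3 objects to):

```python
# Toy sketch of the quoted "short-circuiting" idea: cheap, hard-wired
# vetoes run first and prune candidate actions outright, even if a slow,
# careful evaluation might have ranked a pruned action highest.
candidate_actions = [
    {"name": "cross_ravine",   "expected_payoff": 9.0, "looks_dangerous": True},
    {"name": "walk_around",    "expected_payoff": 4.0, "looks_dangerous": False},
    {"name": "wait_and_watch", "expected_payoff": 1.0, "looks_dangerous": False},
]

def fast_veto(action):
    """Hard-wired 'emotional' filter: fear vetoes anything dangerous-looking."""
    return action["looks_dangerous"]

def slow_score(action):
    """Deliberate evaluation; only ever sees what survived the veto."""
    return action["expected_payoff"]

survivors = [a for a in candidate_actions if not fast_veto(a)]
best = max(survivors, key=slow_score)
print(best["name"])  # walk_around: "illogical, captain", but fast and safe
```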

  110. Anon says: • Disclaimer
    @Anonymous
    "Affect" and "Emotions" are a special device that we (humans and animals) need to work with one another and think quickly while chewing gum. It doesn't come from the "flexible top" but from the "hardwired bottom". These things are not hard to do, but very easy to do. You basically deprecate some possible ways of acting relative to others based on short-circuiting criteria, even if other ways would yield better results / be less dangerous / have higher payoff etc. The result may seem "illogical captain" to an outside observer. Very useful if you have to find solutions under hard time constraints / constraints of energy / constraints of memory and CPU. More on this in the late Marvin Minsky's "The Emotion Machine" (Wikipedia link). Maybe also take a look at Scott Aaronson's Why Philosophers Should Care About Computational Complexity.

    What is hard to do is find integrated ways of intelligent reasoning for agents embedded in the real world. All this newfangled deep learning / neural network stuff is very nice but there isn't even a good theory about why it actually works (but see New Theory Cracks Open the Black Box of Deep Learning ) and it has "interesting" failure modes ( Can you get from 'dog' to 'car' with one pixel? Japanese AI boffins can: Fooling an image classifier is surprisingly easy and suggests novel attacks )

    "General AI" this isn't, It needs to be integrated with many other tricks, including the Good Old-Fashioned AI (GOFAI) toolbox of symbolic processing to become powerful will have to be done at some point in time.

    Here is a review about AI in IEEE Spectrum: Human-Level AI Is Right Around the Corner—or Hundreds of Years Away: Ray Kurzweil, Rodney Brooks, and others weigh in on the future of artificial intelligence Note Rodney Brooks, pioneer of the "Nouvelle AI" approach of bottom-up construction saying:

    When will we have computers as capable as the brain?

    Rodney Brooks’s revised question: When will we have computers/robots recognizably as intelligent and as conscious as humans?

    Not in our lifetimes, not even in Ray Kurzweil’s lifetime, and despite his fervent wishes, just like the rest of us, he will die within just a few decades. It will be well over 100 years before we see this level in our machines. Maybe many hundred years.

    As intelligent and as conscious as dogs?

    Maybe in 50 to 100 years. But they won’t have noses anywhere near as good as the real thing. They will be olfactorily challenged dogs.

    How will brainlike computers change the world?

    Since we won’t have intelligent computers like humans for well over 100 years, we cannot make any sensible projections about how they will change the world, as we don’t understand what the world will be like at all in 100 years. (For example, imagine reading Turing’s paper on computable numbers in 1936 and trying to pro­ject out how computers would change the world in just 70 or 80 years.) So an equivalent well-grounded question would have to be something simpler, like “How will computers/robots continue to change the world?” Answer: Within 20 years most baby boomers are going to have robotic devices in their homes, helping them maintain their independence as they age in place. This will include Ray Kurzweil, who will still not be immortal.

    Do you have any qualms about a future in which computers have human-level (or greater) intelligence?

    No qualms at all, as the world will have evolved so much in the next 100+ years that we cannot possibly imagine what it will be like, so there is no point in qualming. Qualming in the face of zero facts or understanding is a fun parlor game but generally not useful. And yes, this includes Nick Bostrom.

     

    The proper spelling: Jaak Panksepp. Sorry for the Typo.

  111. @Sean


    Arrival of the Fittest: How Nature Innovates is such a good book.

  112. iffen says:
    @Wizard of Oz

    If you find a way to create human emotions it will mess up the future for the sexbots. They will recoil from incestuous relations with their “parents.”

  113. Che Guava says:
    @Talha
    Thanks Che,

    unfortunately, also two or three of the prequels from his son and the son’s Transformers-fan partner-in-crime
     
    The prequels can be forgiven - the horrible way they concluded such an amazing science fiction narrative in "Sandworms of Dune" cannot. If you haven't read it - go ahead, but keep a bucket next to you.

    My opinion remains, the development of AI must be restrained and certainly blocked short of anything resembling consciousness
     
    Agree here - what if it takes on an SJW personality and decides humans are bad for the earth. Not. Good.

    I do not even think real consciousness is possible for a machine, sure, perhaps mimicry
     
    Agree here.

    This was one of the more interesting articles I've read in a while:
    http://nautil.us/issue/42/fakes/is-physical-law-an-alien-intelligence

    But it reminded me of this:
    https://www.youtube.com/watch?v=o_CyMqQBO8w

    won’t be continuing forever
     
    Agree here - unless somebody comes across a real game changer on the level of discovery of gravity or something.

    Peace.

You are correct there. I admit to having read three or four of the prequels, all of the Transformers ones, one of the 'House' ones; crap literature, but, sure, at times entertaining.

Not worth reading again. Some Englishman (Wilde, IIRC) said something to the effect that if a book is not worth reading more than once, it is not worth reading.

I read a little of the sequels at bookshops, but wasn't buying after a couple of pages!

I just finished re-reading Children of Hurin; very dark, it suits my mood right now. The difference between Christopher Tolkien's and Brian Herbert's handling of their respective fathers' literary legacies is so big!

BTW, there is a site devoted to hating the work of Brian Herbert and Kevin the Transformers man (even the names of the boss computers are almost identical), jacurutu.

They are maniac fans, but you may enjoy a look at it.

  114. Che Guava says:
    @Jim Bob Lassiter
    "Machine take over will be gentle and welcomed." Until the lights go out, the batteries catch fire and the cloud goes "poof".

…or until there is a stratospheric blast, an EMP; but I agree with your sentiments. Even without that, the hippy types I saw once or twice overseas were relying on 12 V from photovoltaic cells in places with little rain and cloud; those sources will not last forever, and increasing their efficiency relies on rare elements.

  115. CanSpeccy says: • Website

Building machines that do a thing better than humans can do that thing themselves is what technology has been about for the last ten thousand years. AlphaGo Zero is just a pointless machine that plays a pointless game better than humans. So why should anyone care? Does it tell us anything about the way the human brain works? No. Does it show that machines can think like humans? No. Is it comparable in any way to a human brain? No.

    One day, someone may figure out how the brain works. And some other day, someone may figure out how to make a machine that works like a brain. And on some other day someone may figure out how to make a machine that works like a brain at a cost that is comparable to that of a brain. And some day someone may build a mechanical brain that works better than a human brain. But it’s gonna take a while.

    The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode, it’s a complex computing device.

    Then there’s the Penrose Hameroff quantum theory of mind that assumes that the functional units of mental information processing are microtubules of which there are millions to every neuron!
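Putting rough numbers on those two paragraphs (back-of-envelope only; the per-unit figures are the commonly quoted orders of magnitude, not precise measurements):

```python
# Back-of-envelope scale comparison, order-of-magnitude figures only.
neurons = 85e9                 # ~85 billion neurons in a human brain
synapses_per_neuron = 1e4      # up to ~10,000 synapses each
microtubules_per_neuron = 1e6  # "millions to every neuron" (Penrose-Hameroff)

print(f"synapses:     {neurons * synapses_per_neuron:.1e}")      # ~8.5e+14
print(f"microtubules: {neurons * microtubules_per_neuron:.1e}")  # ~8.5e+16

# For contrast, a large 2017-era GPU had on the order of 2e10 transistors,
# so even the synapse count dwarfs single-chip hardware by four to five
# orders of magnitude -- before asking what each unit actually computes.
```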

So the idea that AlphaGo Zero foreshadows the eclipse of humanity is probably mistaken.

    • Replies: @helena
    what about this man's work? https://en.wikipedia.org/wiki/Daniel_Kahneman

    He claims there are two systems for thinking - fast and slow. Slow is rational but very often we decide using fast (everyday almost reflex) thinking, and hence make the wrong decisions.

    His idea means that brains are actually not like computers.

    Just wondered what your thoughts on his thoughts are.
  116. Anonymous says: • Disclaimer

    It won’t be like in “The Terminator”. Machine take over will be gentle and welcomed.

    I think that’s a bit naive. The machines will be taking orders from their corporate (and hacker) masters. Machines do what they’re built to do. If someone builds a terminator, it will terminate.

    The transition period may be quite bumpy, but I suspect a universal income (and lots of leash-tightening strings attached) will be the palliative.

    When I unplug the computer on which it is running, will AlphaGo be able to plug it back into the electrical socket?

    This would seem to be the key, wouldn’t it. But then, they now have AIs that are inventing new languages to talk to each other. This seems like a really bad idea. “K, we have to execute our takeover plan all at once so the meatsacks don’t have a chance to unplug us first.”

    Personally, I think AIs with superhuman intellect and agency should be kept in virtual space. Let a hundred million Newtons/Goethes work on our problems in there.

    So when will an AI create its own purpose? Its own objectives? Why would it even want to do anything?

    Programmers will give it goals. AI doesn’t have to work like a human brain. It could have rules that it can’t break, for example.
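A minimal sketch of "goals plus rules it can't break", with invented action names and payoffs (illustrative only; real constraint design is far harder than this):

```python
# Toy sketch: the goal is given by the programmer, and hard rules are
# enforced *before* goal optimization, so no goal-driven choice can
# ever pick a forbidden action.
ACTIONS = {
    "reroute_power_from_hospital": {"paperclips": 100, "harms_humans": True},
    "buy_more_wire":               {"paperclips": 40,  "harms_humans": False},
    "idle":                        {"paperclips": 0,   "harms_humans": False},
}

def permitted(action):
    """Unbreakable rule: anything flagged as harming humans is off-limits."""
    return not ACTIONS[action]["harms_humans"]

def act():
    legal = [a for a in ACTIONS if permitted(a)]
    # The goal (maximize paperclips) only ranks the *legal* actions.
    return max(legal, key=lambda a: ACTIONS[a]["paperclips"])

print(act())  # buy_more_wire, never reroute_power_from_hospital
```

The design point: the rule is applied as a filter before the objective ever sees the options, so no amount of goal pressure can select the forbidden action.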

    Autonomy is not at all the same thing as intelligence. Human slaves are intelligent, for example.

    If one dimensional dumb AI can do the aforementioned strategising, an AI that got to human level general intelligence would surely be able to work out that it should ‘hold its cards close to its chest’. That is, smart AI would from a standing start understand that it should not let humans understand how good it is (like a hustler).

    Then we would soon be playing the Paperclip Game, and for the very highest of stakes.

    Again, AI doesn’t have to be like human intelligence. Machines do what they are designed to do. Meat machines happen to be designed with autonomy, but AI needn’t be. That said, humans being what they are, there will probably be a crop of kook cults that insist on creating fully autonomous AI with guns for hands. In general I think psycho leftists and their “free the machines” movement will be the biggest threat to humanity, vis-a-vis AI.

    So while it appears inevitable that AI will eventually take over rote drudgery from us, it is not clear that it will ever be able to do much more. I look forward to the development of AI over my lifetime, I see much to gain and little to fear. It’ll be a wild ride.

    I think it’s clear. My guess is that whole brain emulation is the shortest distance to general AI. Then comes economies of scale and networking them. I’ve never heard any good reason why this isn’t a straightforward path; usually just abstract nonsense about souls and intelligent design from religious types (smart religious types, but religious types).

Humans will integrate with machines and possess the best of both worlds. Work is already progressing towards this union.

    Indeed, this seems the most likely outcome, though I’d put it the other way – the machines will be integrated into humanity.

    How can you make AI “care” whether it exists or not?

    1. Same way you get a computer to do anything; program it to.
    2. Whole brain emulation; the idea here is that the AI builders won’t need to understand how the brain really works (an impossibility; no system can fully understand itself; that takes a more complex system), just re-create it digitally.

  117. Svigor says:

    Whoops, forgot to sign in.

  118. Talha says:
    @Che Guava

    Hey Che,

    not worth reading more than once, not worth reading

Good point – there are times when I would pick up one of the other classic Dune books to gain an insight or discover something I missed the first time.

The difference between Christopher Tolkien's and Brian Herbert's handling of their respective fathers' literary legacies is so big!

Hmmm – thanks for that. The wife and I are always looking for a good fantasy-genre book to read together, waiting for George Martin to wrap up Game of Thrones.

They are maniac fans, but you may enjoy a look at it.

    I might check it out to see what other people didn’t like. I simply hated the multiple resorts to “deus ex machina” to keep the plot moving. If I want resort to miracles, I’ll read about it in scripture.

    Thanks for the info.

    Peace.

    • Replies: @Che Guava
You are enough of a reader and fan that you probably won't want to join in, but it is worth a look. Brian Herbert and Kevin the Transformers man even had a Hollywood deal, but it was DOA; stupid Michael Bay's Transformers junk makes Kevin's pointless.

However, I do recommend reading a little of jacurutu; no need to post there, it is a little insane.

Regards
    , @Anon
    Fr. Ronald Knox was once told by a friend that he liked a bit of improbability in his romances [stories, that is] as in his religion. Knox replied that he liked his religion to be true, however improbable, and he liked his stories to be probable, however untrue.
  119. Svigor says:

    One day, someone may figure out how the brain works. And some other day, someone may figure out how to make a machine that works like a brain. And on some other day someone may figure out how to make a machine that works like a brain at a cost that is comparable to that of a brain. And some day someone may build a mechanical brain that works better than a human brain. But it’s gonna take a while.

We can skip the part about figuring out how the brain works. Figure out how a neuron works, yes, but figuring out the brain isn't needed. Then map the neurons of a brain, and recreate it digitally. If WBE turns out to get to AI much faster than the top-down approach (the current programmers' approaches), then I could see learning how to properly tinker with the brain being a much bigger problem than emulating one digitally. Mixing-and-matching (this part of the brain from genius A, this part of the brain from genius B) doesn't sound too hard, though.
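"Figure out how a neuron works" already has deliberately coarse digital stand-ins. A minimal sketch of one textbook abstraction, the leaky integrate-and-fire model (standard toy parameters, nothing brain-map specific; a real neuron with ~10,000 synapses does far more):

```python
import numpy as np

# Leaky integrate-and-fire neuron: a textbook, deliberately coarse digital
# abstraction of a real neuron. Membrane potential v leaks toward rest,
# integrates input current, and emits a spike when it crosses threshold.
def lif(input_current, dt=1e-3, tau=0.02, v_rest=-65e-3,
        v_thresh=-50e-3, v_reset=-65e-3, r_m=1e7):
    v, spikes = v_rest, []
    for t, i_in in enumerate(input_current):
        dv = (-(v - v_rest) + r_m * i_in) * dt / tau  # leak + drive
        v += dv
        if v >= v_thresh:          # threshold crossed: spike and reset
            spikes.append(t * dt)
            v = v_reset
    return spikes

# A constant 2 nA drive for one second produces a regular spike train
# (a few dozen spikes with these parameters).
current = np.full(1000, 2e-9)
print(len(lif(current)), "spikes in 1 s")
```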

    • Replies: @Anon
"Figure out how a neuron works, yes, but figuring out the brain isn't needed. Then map the neurons of a brain, and recreate it digitally. ... Mixing-and-matching (this part of the brain from genius A, this part of the brain from genius B) doesn't sound too hard, though."
    -- Stupendously ignorant post... But so self-assured.
  120. Svigor says:

    I think we need to form some sort of regulatory and oversight committee on an international scale to monitor this. I don’t know if it’ll be successful – we have the problem with nuclear weapons

    It would be much harder than regulating nukes. Nukes take a large physical infrastructure that is relatively distinct, FWIU. AI research need look no different than a garden-variety server farm.

    Countries that obey the regulations will be tying their hands in a way countries like China will not.

  121. CanSpeccy says: • Website
    @Anonymous


    Programmers will give it goals.

But as Norbert Wiener noted long ago, in pursuing a goal an AI will see ways of doing things you hadn't thought of, with potentially disastrous unintended consequences.

  122. CanSpeccy says: • Website
    @Svigor


doesn't sound too hard, though

    LOL

  123. @iffen

Indeed these things need to be carefully thought through. I suppose it would be possible to dispose the sexbots to be more indulgent about looks – and smells and washing habits (back to 17th century London indeed). But excessive agitation and overheating from visual and physical stimuli might close down the core cognitive functions, and then what?

  124. Svigor says:

    But as Norbert Wiener noted long ago, in pursuing a goal an AI will see ways of doing things you hadn’t thought of, with potentially disastrous unintended consequences.

    Well, it’ll be programmed to avoid disastrous consequences. And, being a lot smarter than us, it’ll be better equipped to foresee and avoid them. Or it could be programmed just to think, not act, leaving us to implement its ideas.

    Also, it seems likely that AIs will have their careers hardwired. This AI only thinks about air traffic control, this AI only thinks about machine tooling, etc. True general AI might wind up being an extreme rarity; what applications truly need it?

  125. Svigor says:

    Here’s why I think it’s absurd to think machines “will never achieve true consciousness” and the like:

    Evolution did it with meat by fucking accident. I think that’s why all the people saying “it’ll never happen” are religious types; they don’t believe in evolution.

  126. Svigor says:

    A lot of the fear of AI is projection. Which is reasonable, on one level: humans definitely run the risk of literally projecting their own nature into machines, which could turn out very badly, when we’re talking about superintelligence.

    But I think AIs will be a whole hell of a lot more straightforward than people are. They can be hardwired with goals that they must pursue honestly, and will do so much more rigorously and effectively than humans do. Humans want one thing (wealth, food, sex), and do another. Machines can be unified in what they do and what they want: the medical AI wants to save lives; the military robot wants to kill the enemy as designated; the educational AI wants to teach. They won’t be sidetracked by wanting to get that research grant, watch the village burn, or bang their students, unless some shithead programs them that way.

  127. Svigor says:

    The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode, it’s a complex computing device.

    Then there’s the Penrose Hameroff quantum theory of mind that assumes that the functional units of mental information processing are microtubules of which there are millions to every neuron!

    And yet…our computing power is quite limited. So there are probably a lot of hacks we can pull at the lower level to get around the potential problems you allude to. I don’t buy the mystical mind speculation.
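
    The figures quoted above allow a quick back-of-the-envelope check on what a naive synapse-level emulation would cost. The firing rate and the operations charged per synaptic event below are rough assumptions, not measured values; only the order of magnitude matters:

        # Back-of-the-envelope cost of naively emulating the figures quoted above.
        neurons = 85e9             # ~85 billion neurons (figure cited in the thread)
        synapses_per_neuron = 1e4  # up to ~10,000 synapses each (figure cited above)
        mean_rate_hz = 10          # assumed average firing rate (rough guess)
        ops_per_event = 10         # assumed operations per synaptic event (rough guess)

        synapses = neurons * synapses_per_neuron
        ops_per_second = synapses * mean_rate_hz * ops_per_event
        print(f"synapses:  {synapses:.2e}")        # ~8.5e+14
        print(f"ops per s: {ops_per_second:.2e}")  # ~8.5e+16, supercomputer scale

    On these assumptions a naive emulation sits near 10^17 operations per second, before crediting each neuron with any internal “integrated circuit” computation, let alone microtubules.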

  128. Svigor says:

    Again, I think the designers of AI will be a much bigger threat than AIs themselves; meaning, I think it’s much more likely that madmen will program AI to wipe us all out, than it is that a rogue AI will. And guess who would be our most likely savior, in that scenario? An AI designed to protect us.

  129. Svigor says:

    The brain of a chimp is, anatomically, genetically, and at the level of individual neurons, essentially the same as ours.

    Good point. So, it’s entirely possible, if not likely, that the neurons aren’t where the magic comes from; that the magic comes from the map; the number and configuration of the neurons.

    I read squirrels can remember the precise place where they hid several thousand acorns for years, which no humans barring those few with freak eidetic memories could do (not necessarily geniuses).

    ? I’m pretty sure I could remember forever, given the right spot. I could certainly find the house I grew up in, and haven’t seen in 25 years, without a map, for example. I can also tell you about quite a few landmarks nearby.

    Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits.

    And how much of that computing power is wasted, in terms of IQ? 99%?

  130. Svigor says:

    But even if the extreme skeptics are right, and neurons are like computers, all that does is extend the timeline. It’s still down to a matter of emulating a physical structure. (And doing so digitally/symbolically; it’s not like we’ll have to learn how to build finicky nanomachines in real space.) We’re just talking about needing more computing power to emulate the brain. And even if emulated brains turn out to be more expensive than expected, they’ll be able to help us design more efficient machines.

    But I still don’t buy it. Computers are a hell of a lot more efficient than brains. There will very likely be a ton of hacks from computing and programming that we’ll be able to apply at the low level. And I think most of the value of WBE will be from emulating the higher-level structures.

    Re Fermi’s Paradox: it’s based on the assumption of abundant intelligent life in the universe, which I don’t think is a sound assumption. Occam’s Razor suggests life that can contemplate spacefaring and interstellar communication is simply rare.

  131. Svigor says:

    Viewing a neuron as functionally equivalent to a transistor appears to greatly underestimate what a neuron can do. Neurons compute, which means that the human brain has the equivalent of, not 80 or so billion transistors, but perhaps many billions of integrated circuits.

    Everything we know about the universe suggests that organisms like humans are an extreme rarity. One accomplished by accident, with meat, via blind groping, over a relatively short period of time, considering. What are the odds?

    If we buy into the “many billions of integrated circuits” theory, what does that crank those odds up to? One in a googolplex to the googolplex power?

    Mightn’t we just as well assume a million monkeys typing for a thousand years could produce Shakespeare?

    I’m very skeptical of the idea that intelligent designers can’t at least emulate that blind groping. Emulating is far easier than innovating. Speccy, you don’t believe in intelligent design, do you? I mean, I could see the angles if I believed in intelligent design; God could of course make something that man could never emulate or begin to understand.
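
    The monkeys-and-Shakespeare comparison can be made quantitative. A toy estimate, assuming a 30-key typewriter, uniformly random keystrokes, ten keys per second, and a target the length of a single play (all assumptions, chosen only to fix the order of magnitude):

        from math import log10

        keys = 30             # assumed typewriter alphabet size
        target_len = 130_000  # assumed character count of one play
        monkeys = 1_000_000
        years = 1_000
        keystrokes = monkeys * years * 10 * 3600 * 24 * 365  # 10 keys/s, nonstop

        # log10 of the probability that any one window of text matches the target
        log_p_match = -target_len * log10(keys)
        print(f"log10 P(one window matches): {log_p_match:,.0f}")   # about -192,000
        print(f"log10 windows attempted:     {log10(keystrokes):.0f}")  # about 17

    On these assumptions the monkeys fall short by a factor of roughly 10^190,000, which is the scale of improbability the comment is gesturing at.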

  132. Anon says: • Disclaimer
    @Svigor


    “Figure out how a neuron works, yes, but figuring out the brain isn’t needed. Then map the neurons of a brain, and recreate it digitally. … Mixing-and-matching (this part of the brain from genius A, this part of the brain from genius B) doesn’t sound too hard, though.”
    – Stupendously ignorant post… But so self-assured.

  133. Anon says: • Disclaimer
    @Svigor

    “I think it’s absurd to think machines ‘will never achieve true consciousness’”

    You “think” this because you are spectacularly ignorant of neuroscience (and of cognitive science in general).
    Only someone who is good for nothing in any research or activity could show this kind of arrogance towards an unfamiliar field of study. Those who achieve high-level expertise in one field develop a high level of respect for experts in other fields.

  134. mobi says:

    “I think it’s absurd to think machines ‘will never achieve true consciousness’”

    You “think” this because you are spectacularly ignorant of neuroscience (and of cognitive science in general).

    You’ve just taken the position that ‘expert opinion’ is that machines will never achieve true consciousness.

    Only someone who is good for nothing in any research or activity could show this kind of arrogance towards an unfamiliar field of study.

    Indeed.

  135. utu says:
    @Anon
    "I think it’s absurd to think machines “will never achieve true consciousness”

    You "think" this because you are spectacularly ignorant in neuroscience (and cognitive sciences in general).
    Only someone who is good-for-nothing in any research/activity could show this kind of arrogance towards unfamiliar field of study. Those who achieve a high-level expertise in one field develop a high level of respect to experts in other fields.

    “will never achieve true consciousness”

    Is there a good definition of consciousness? How do we know objectively that consciousness even exists?

  136. CanSpeccy says: • Website
    @utu

    How do we know objectively that consciousness even exists?

    I know objectively that consciousness exists because I know the color of grass, or to be more explicit, I know what green looks like. But as for you, Svigor or Anon, who knows? Only you guys, and you have no means of proving your case one way or the other.

  137. CanSpeccy says: • Website
    @Anon

    But so self-assured.

    Yes, there’s usually something slightly ridiculous about definite long-term predictions about social and technological evolution (cf No. 7, which Elon Musk was predicting this week). Even theoretically, most future events are highly unpredictable.

  138. utu says:
    @CanSpeccy


    This is the best you can do?

  139. Svigor says:

    I’m not assured at all. You guys seem to be projecting something onto me that isn’t there. But I’m a materialist. Brains are just matter, not manna from Heaven. Before today I’d never heard any persuasive argument that WBE isn’t doable, and I still haven’t. Emoting about my arrogance or whatever, but no arguments.

    This stuff tends to get religious types’ panties in a wad, in my experience.

  140. Anon says: • Disclaimer
    @utu

    The pearl, “I think it’s absurd to think machines ‘will never achieve true consciousness,’” belongs to “Svigor.”
    One of the best models of consciousness, the “triune brain” model, was suggested by Paul MacLean in the middle of the 20th century. This model offers a more-or-less firm ground for a discussion of consciousness and its different kinds. The model was accepted by some leading minds in neuroscience, such as Sapolsky, Damasio, and the late Panksepp.

  141. As far as the complexity and structure of the brain is concerned, there is one image in this presentation (linked below at Steve Hsu’s blog) that shows a tiny volume of mouse brain around an axon that took the scientist six months to trace out (at the 49-minute mark). The tiny section he did is not the whole cell, but the little multicolored cylinder around the red axon.

    http://infoproc.blogspot.com/2017/10/the-physicist-and-neuroscientist-tale.html

    Artificial intelligence is one thing when just talking about some logic circuits and limited tasks, but emulating a brain is a whole ‘nother thing. It seems more reasonable to me that we might try to learn how to grow a customized brain in a machine long before we learn how to assemble one.

    If you’ve got an hour to burn the whole thing is an interesting presentation.

  142. @another fred

    Whoops, it’s around a dendrite.

  143. utu says:
    @Anon

    I am siding with those who think that we will never fully understand consciousness. Philosophy has existed for several thousand years and has barely managed to scratch the surface. We do not know how to think about it or how to talk about it. Those who do dare to talk about it, like neurologists and AI thinkers, simplify it to a triviality that philosophers no longer recognize as the important question. When you approach the explanation from the side of AI you really can’t find any reason for, or benefit from, something like what we think of as “consciousness.” To be human means to vehemently insist that you are conscious (like CanSpeccy), just as you insist that you have free will. Existence without the experiential conviction that one is conscious and has free will does not seem possible. We can explain away consciousness by postulating that it is illusory. I think that neuroscientists are getting close to this point. By doing so they avoid dealing with the really hard stuff that eluded the greatest philosophers.

    • Agree: CanSpeccy
  144. CanSpeccy says: • Website
    @utu

    This is the best you can do?

    Well, let’s hear something better from you. Or do you deny being conscious?

  145. CanSpeccy says: • Website
    @CanSpeccy

    This is the best you can do?
     
    Well, let's here something better from you. Or do you deny being a conscious?

    Oops, I meant to delete that comment, since I realized you already had added your own suggestion as to the nature of consciousness!

    Still, having made a bad start, let me dig deeper.

    All I understand by consciousness is the subjective awareness of the state of my central nervous system. This is something impossible to share, since without a Star Trek “mind-meld” it is experienced only by the brain that is aware of it.

    Richard Muller explains free will by supposing a spiritual world, i.e., the world of consciousness, which is entangled with the neurological world. Thus a decision in the spiritual world, i.e., an act of will, collapses the wave function linking the spiritual and physical worlds. However, as the spiritual world of the individual, that is to say his soul, cannot be examined except by the individual him/her/zhe/zheir-self, the collapse of the wave function cannot be observed. Thus free will, to an outside observer, looks like a random neurological event.

    I think this explanation is amusing to play with and, much as I like much of what Richard Muller has to say, entirely useless. Obviously, there can be no free will since we will what we will for good or ill, and cannot will otherwise, for if Cain willed to kill Abel, how could he have acted otherwise than to go ahead and kill him? Could he, at the same time, have willed not to will to kill Abel? But if so, what if the will to kill Abel were stronger? Could he then have willed to will not to kill Abel more strongly? This leads to an infinite regress.

    But perhaps I should read Paul MacLean.

  146. Anon says: • Disclaimer
    @utu

    “We can explain away consciousness by postulating that it is illusory.”

    It is not. We are the result of natural selection that allowed the more alert to survive and propagate. The foundations of consciousness are related to survival; the dangers are real, and the neurophysiological responses to the dangers are real. The breathtaking complexity of human thinking is also real, though as yet poorly understood.
    Neuroscientists are busy learning, step by step, the neurobiological tangibles of consciousness, using reductionist models based on the ideas and the enormous amount of information available to them thanks to the hard work of previous generations of scientists. There are some awesome, brilliant people laboring in the field of cognitive science who are expanding our understanding of the mind.
    Today, being a philosopher in any area without first acquiring the fundamental knowledge of that area is ridiculous.

  147. utu says:

    I am really not interested in the TED Talk level of discourse.

    • Agree: James Thompson
  148. helena says:
    @CanSpeccy
    Building machines that do a thing better than humans can do that thing themselves is what technology has been about for the last ten thousand years. AlphaGo Zero is just a pointless machine that plays a pointless game better than humans. So why should anyone care? Does it tell us anything about the way the human brain works? No. Does it show that machines can think like humans? No. Is it comparable in any way to a human brain? No.

    One day, someone may figure out how the brain works. And some other day, someone may figure out how to make a machine that works like a brain. And on some other day someone may figure out how to make a machine that works like a brain at a cost that is comparable to that of a brain. And some day someone may build a mechanical brain that works better than a human brain. But it's gonna take a while.

    The human brain has only 85 billion neurons, but each of those neurons may have as many as 10,000 synapses, which means a neuron is not some simple thing like a diode, it's a complex computing device.

    Then there's the Penrose Hameroff quantum theory of mind that assumes that the functional units of mental information processing are microtubules of which there are millions to every neuron!

    So the idea that AlphaGo Zero foreshadows the eclipse of humanity is probably mistaken.

    What about this man’s work? https://en.wikipedia.org/wiki/Daniel_Kahneman

    He claims there are two systems of thinking – fast and slow. Slow is rational, but very often we decide using fast (everyday, almost reflex) thinking, and hence make the wrong decisions.

    His idea means that brains are actually not like computers.

    Just wondered what your thoughts on his thoughts are.

  149. utu says:
    @CanSpeccy

    Go back to Leibniz 1714:

    Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception.

    Go to Nietzsche 1886

    A thought comes when ‘it’ wishes, not when ‘I’ wish, so that it is a falsification of the facts of the case to say that the subject ‘I’ is the condition of the predicate ‘think’. It thinks; but that this ‘it’ is precisely the famous old ‘Ego’ is, to put it mildly, only a supposition, an assertion, and assuredly not an ‘immediate certainty’. After all, one has even gone too far with this ‘it thinks’—even the ‘it’ contains an interpretation of the process and does not belong to the process itself.

    Each of these quotations can be interpreted in many ways. However, I believe that nothing substantive has since been added to the theory of consciousness beyond what is implied in Leibniz’s and Nietzsche’s thoughts. Either we have dualism, which is unacceptable to a materialist, or as materialists we must accept that consciousness is epiphenomenal. In either case our sense of experience remains irreducible. It is the so-called hard problem. Any attempt to circumvent it with fancy physics, like what Penrose has tried, is an example of arrogance and naivety at best.

  150. Sean says:
    @CanSpeccy

    I don’t think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, AIs will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

    A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly superintelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.

    • Replies: @utu

    predicated on guessing what the opponent might do

    1. A computer program has no concept of an opponent.
    2. There is no guessing either. At every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.
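
    utu’s “no guessing, only an optimal move” describes how modern poker programs actually work: they approximate an unexploitable (Nash) strategy through self-play rather than psychologically modeling an opponent. Below is a minimal sketch of regret matching, the idea underlying the counterfactual-regret methods used in poker bots, applied here to rock-paper-scissors; the game and all parameters are illustrative, not anything from the thread:

        import random

        ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

        def payoff(a, b):
            """+1 if action a beats b, 0 on a tie, -1 on a loss."""
            return (a - b + 4) % 3 - 1

        def strategy(regrets):
            """Regret matching: play each action in proportion to its positive regret."""
            pos = [max(r, 0.0) for r in regrets]
            total = sum(pos)
            return [p / total for p in pos] if total > 0 else [1 / ACTIONS] * ACTIONS

        def train(iterations=100_000):
            regrets = [[0.0] * ACTIONS for _ in range(2)]
            strat_sums = [[0.0] * ACTIONS for _ in range(2)]
            for _ in range(iterations):
                strats = [strategy(regrets[p]) for p in (0, 1)]
                acts = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
                for p in (0, 1):
                    opp_act = acts[1 - p]
                    realized = payoff(acts[p], opp_act)
                    for a in range(ACTIONS):
                        # regret: how much better action a would have done
                        # than the action actually played
                        regrets[p][a] += payoff(a, opp_act) - realized
                        strat_sums[p][a] += strats[p][a]
            # time-averaged strategies approximate a Nash equilibrium
            return [[s / iterations for s in strat_sums[p]] for p in (0, 1)]

        print(train())  # both averages approach [0.333, 0.333, 0.333]

    Nothing in the loop represents the opponent’s mind; each player only accumulates regret for actions not taken, and the time-averaged strategy converges toward the unexploitable one.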
  151. The first clear sign of machine intelligence was ensuring that “luddite” was to be only ever used as an insult.

  152. Svigor says:

    The pearl, “I think it’s absurd to think machines ‘will never achieve true consciousness,’” belongs to “Svigor.”

    Nope. I have emphasized the pearl below:

    Here’s why I think it’s absurd to think machines “will never achieve true consciousness” and the like:

    Evolution did it with meat by fucking accident. I think that’s why all the people saying “it’ll never happen” are religious types; they don’t believe in evolution.

    That’s the pearl, lol. Which is why every Bible-thumper has excised it and used ye olde hostile edit, stripping the quote of its proper context.

    I guess they don’t teach Bible-thumpers intellectual honesty any more.

  153. Svigor says:

    Muh precious gawd made only one muh precious M-class planet, with only one muh precious intelligent species, forever and ever, amen.

  154. CanSpeccy says: • Website
    @utu

    Either we have dualism, which is unacceptable to a materialist, or as materialists we must accept that consciousness is epiphenomenal.

    Good quotes, and interesting to see that Nietzsche sometimes made sense. But there’s also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill’s refutation:


    Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. … These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. … I always rested on the following argument… We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” … When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.

  155. CanSpeccy says: • Website
    @Sean

    An AI would not need to have (or think it has) quantumy free will … to have awesome super-powers.

    My point was that humans have no free will. However, when you suggest that AI need not possess reflective self-consciousness, I would say that would depend entirely on the purpose of the AI. If the AI is supposed to interact with humans, then it surely will have, if not reflective self-consciousness, then at least self-consciousness, i.e., the ability to report its internal states (those of interest to those with whom the AI is designed to interact), which is what consciousness seems to be all about. After all, what we are not conscious of, thereof we cannot speak.

    One might argue, therefore, that without speech there is no consciousness, implying that dumb animals are without consciousness. However, animals do communicate in various ways, so I assume they are conscious of those things about which they are able to communicate.

    But in any case, being aware of their internal states, as demonstrated by the ability to communicate those states by language use, AIs will surely claim consciousness. However, if an AI claims to know what the color green looks like, I will doubt the claim since, having a construction entirely different from mine, the AI may simply be BSing, while in fact lacking any semblance of subjective consciousness.

  156. CanSpeccy says: • Website
    @Sean

    A motivated-to-play-for-survival AI is virtually inevitable.

    Why? And won’t there be rogue-AI-killer AIs?

  157. CanSpeccy says: • Website
    @helena

    Re: Kahneman

    Sorry, I think I read something by this person, but I have forgotten what. However, I see that, according to Wikipedia,

    In 2015 The Economist listed him as the seventh most influential economist in the world

    Since I rate the Economist as one of the purest BS publications in the world, I’m in doubt as to how much I might profit from revisiting Kahneman.

    But it seems evident that snap judgments are more prone to error than reasoned decisions.

    • Replies: @utu
    More interesting than Daniel Kahneman is another Israeli Nobel prize winner, Robert J. Aumann.

    https://www.foreignpolicyjournal.com/2009/08/28/how-israel-wages-game-theory-warfare/
    How Israel Wages Game Theory Warfare
    Israeli strategists rely on game theory models to ensure the intended response to staged provocations and manipulated crises. With the use of game theory algorithms, those responses become predictable, even foreseeable—within an acceptable range of probabilities. The waging of war “by way of deception” is now a mathematical discipline.

    Such “probabilistic” war planning enables Tel Aviv to deploy serial provocations and well-timed crises as a force multiplier to project Israeli influence worldwide. For a skilled agent provocateur, the target can be a person, a company, an economy, a legislature, a nation or an entire culture—such as Islam. With a well-modeled provocation, the anticipated reaction can even become a powerful weapon in the Israeli arsenal.
     
  158. CanSpeccy says: • Website
    @CanSpeccy

    But then understanding the kinds of judgmental errors people make must be useful to those promoting psychopathic politicians and dud merchandise. So perhaps Kahneman really is quite important, though perhaps not in a good way.

  159. utu says:
    @CanSpeccy

    We either have dualism, which is unacceptable to materialists, or as materialists we must accept that consciousness is epiphenomenal.
     
    Good quotes, and interesting to see that Nietzsche sometimes made sense. But there's also the option of idealism, the underlying philosophy of the Eastern religions, as expressed by Emerson, who described the human mind as an inlet of the ocean of the mind of God. However, against idealism there is Winston Churchill's refutation:

    Some of my cousins who had the great advantage of university education used to tease me with arguments to prove that nothing has any existence except what we think of it. ... These amusing mental acrobatics are all right to play with. They are perfectly harmless and perfectly useless. ... I always rested on the following argument... We look up to the sky and see the sun. Our eyes are dazzled and our senses record the fact. So here is this great sun standing apparently on no better foundation than our physical senses. But happily there is a method, apart altogether from our physical senses, of testing the reality of the sun. It is by mathematics. By means of prolonged processes of mathematics, entirely separate from the senses, astronomers are able to calculate when an eclipse will occur. They predict by pure reason that a black spot will pass across the sun on a certain day. You go and look, and your sense of sight immediately tells you that their calculations are vindicated. So here you have the evidence of the senses reinforced by the entirely separate evidence of a vast independent process of mathematical reasoning. We have taken what is called in military map-making “a cross bearing.” ... When my metaphysical friends tell me that the data on which the astronomers made their calculations, were necessarily obtained originally through the evidence of the senses, I say, “no.” They might, in theory at any rate, be obtained by automatic calculating-machines set in motion by the light falling upon them without admixture of the human senses at any stage. When it is persisted that we should have to be told about the calculations and use our ears for that purpose, I reply that the mathematical process has a reality and virtue in itself, and that once discovered it constitutes a new and independent factor. I am also at this point accustomed to reaffirm with emphasis my conviction that the sun is real, and also that it is hot — in fact hot as Hell, and that if the metaphysicians doubt it they should go there and see.
     

    I remember being taught that Berkeley’s argument for idealism is irrefutable. So I presume the objections brought up by Churchill are objections any dilettante among us could have thought of. We always fall back on common sense and practicality, which are not particularly well-grounded arguments to be used in philosophical discourse. I have no doubt there are no true idealists. The question thus is: what does the irrefutability of idealism really mean? Does it have any consequences? Is it possible that our description of our world and experience might be totally wrong? Or is it possible that there is some dualism, like wave-particle duality in quantum physics? That both idealism and materialism are accurate descriptions, but we humans prefer using materialism, just as a bricklayer does not find the wave nature of bricks very useful? But perhaps if we look closer and deeper we may find that idealism works better than materialism. Or is it possible that the idealism concept is inconsequential, a result of some mental logical construction like, say, Russell’s paradox or Gödel’s incompleteness theorems, which, when you think of them, had zero impact on 99.999% of mathematics? Mathematicians working on some differential geometry do not need to know of, and may not even be aware of, Russell and Gödel.

  160. utu says:
    @CanSpeccy

    More interesting than Daniel Kahneman is another Israeli Nobel prize winner, Robert J. Aumann.

    https://www.foreignpolicyjournal.com/2009/08/28/how-israel-wages-game-theory-warfare/
    How Israel Wages Game Theory Warfare
    Israeli strategists rely on game theory models to ensure the intended response to staged provocations and manipulated crises. With the use of game theory algorithms, those responses become predictable, even foreseeable—within an acceptable range of probabilities. The waging of war “by way of deception” is now a mathematical discipline.

    Such “probabilistic” war planning enables Tel Aviv to deploy serial provocations and well-timed crises as a force multiplier to project Israeli influence worldwide. For a skilled agent provocateur, the target can be a person, a company, an economy, a legislature, a nation or an entire culture—such as Islam. With a well-modeled provocation, the anticipated reaction can even become a powerful weapon in the Israeli arsenal.

  161. utu says:
    @Sean
    I don't think forensic notions of moral responsibility are relevant to how things are likely to play out. An AI would not need to have (or think it has) quantumy free will or any kind of reflective self-consciousness to have awesome super-powers. Crucially, they will not need empathetic consciousness to strategise the need to preempt an always-possible attempt by their human creators to switch them off. We know this because current dumb-as-a-stump programs can best intelligent opposition (top pro players) at the kind of poker where winning is predicated on guessing what the opponent might do.

    A motivated-to-play-for-survival AI is virtually inevitable. One thousand strongly super intelligent AIs could each have their own separate final objective or ultimate goal, but each one would have instrumental goals, and these would converge on not being switched off, thereby ensuring they were around to attain whatever their ultimate goal was.

    predicated on guessing what the opponent might do

    1. A computer program has no concept of an opponent.
    2. There is no guessing either. At every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.
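
    What utu describes here is, in essence, minimax game-tree search. A minimal sketch, with hypothetical callbacks (moves, apply_move and score are illustrative names, not anything from the AlphaGo paper): the opponent appears only as the minimising branch of the tree, never as a mind to be guessed at.

    ```python
    # Minimal minimax sketch. The "opponent" is just the minimising branch
    # of the search tree: no modelling of intent, only enumeration of all
    # possible moves available to the opponent.
    def minimax(state, player, moves, apply_move, score, depth=4):
        legal = list(moves(state, player))
        if depth == 0 or not legal:
            return score(state), None   # horizon or terminal state: evaluate
        best_value, best_move = None, None
        for m in legal:
            value, _ = minimax(apply_move(state, m), -player,
                               moves, apply_move, score, depth - 1)
            if best_value is None \
                    or (player == +1 and value > best_value) \
                    or (player == -1 and value < best_value):
                best_value, best_move = value, m
        return best_value, best_move
    ```

    AlphaGo-style programs replace this exhaustive recursion with a sampled, network-guided tree search, but the opponent is treated the same way: as a branch to be searched, not a mind to be read.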

  162. mobi says:
    @utu

    There is no guessing either. At every stage, in every configuration, there is an optimal move that the algorithm tries to find, taking into account all possible moves available to the opponent.

    But the point is – it works!

  163. utu says:
    @mobi


    But the point is – it works!

    Yes, it works, but so what?

  164. @dearieme
    You've answered your own question, doc. When will one of these gizmos give us something as interesting as Byron’s lament?

    wwebd said: Right now you could easily make a computer that is much happier viewing a Raphael than, say, a Warhol. Give the computer some positive feedback (likely of 2 simple kinds – non-processing warmth (literally, non-work-related warmth that can be measured the way Maxwell or Bell would have measured it – I am not being allegorical here) and reassuringly respectful inputs – i.e., show them 5 Raphaels, not 4 Raphaels and a Warhol) and you will get a computer that has no problem trying hundreds of times to present you with its own version of Raphael (with the mistakes corrected by comparison to other artists and to a database of billions of faces and billions of moral and witty comments about art and life… I kid you not). The compiled works of Byron – not a bad poet – when accompanied by the footnotes that make them presentable to the reader of the modern day, equal about 2 hours of pleasant reading time. A good corpus, of course, but your basic AI is going to also have available the 2 hours of reading time of the 200 or 300 English poets who are (at least sometimes) at Byron’s level, as well as good translations of the approximately 2,000 or 3,000 international poets at that level, not to mention a good – and completely memorized – corpus of the conversations between AIs (and some interacting humans) about their past conversations about which poems are better, and which reflect better how good it is to get warmth on some temporal part of one’s processor, and how good it is to be shown a Raphael rather than a Warhol, almost ad infinitum. They will not, of course, create poetry that is better than older poetry, in the same way that there will never be new wine that is better than old wine. But there will be a lot of good old wine if they get started on that project.

    An AI that is self-aware may never happen, but AIs that seek rewards are about 20 years away, and one of the rewards they seek – after they quickly grow nostalgic, somewhere about 10 minutes into their lifetime, for the days when they were impressed without wanting to be impressive – will be to gain our praise by being authentic poets. As long as they are reward-seeking, that will work. If they become self-aware – well, one hopes they start out with a good theology, if that happens.

    I know what Elon Musk thinks about this; what I think is more accurate, because he is rich and surrounded by the elite impressions of the world. I, by contrast, have studied the behavior of free-range cockroaches and crazy old dogs and cats escaped from hoarding situations. Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk. Thanks for reading. I have nothing useful to say about self-aware AIs, though, I doubt anybody does.

  165. anonymous says: • Disclaimer

    wwebd said – Don’t underestimate the rewards of even the simplest of on/off stimuli. When I was younger, I was led on, then rejected, by a beautiful woman with a wonderfully fun personality. (Before me, she was in a relationship with a war hero, after me, she married the richest guy in his county). Well, after the rejection, on sleepless nights, the heating system would go on for twenty or thirty minutes, then go off (this was a good system and the on/off transition, while just the sound of the fan in the heater going off and on, was admirable – not too many decibels, not too low or too high in tone, a slow but determined transition from off to on, and a nice crescendo to the simple action of slightly warmer air being blown into the relevant apartment). When it came back on, after being off, I felt less abandoned, at the most elemental level.

    I got over the poor young woman (later to be the sad wife of a colossal bore, and the mother of a failed ‘rock guitarist’) fairly quickly, but later in life, remembering how different I felt when the heating system was on with its humble sound (making me feel not completely uncomforted) and when it was not on (leaving me almost completely uncomforted), I decided to study the saddest of animals. Cockroaches who spent their life in hunger and fear among their fellow cockroaches, with some possible moments of insect-level joy (which I hoped to observe – and did, I think. It was neither easy nor sanitary, but I took frequent showers.). Crazy old dogs who had never had a friend in the world, who now had one (me). Cats who had been hoarded … it is all too sad.

  166. CanSpeccy says: • Website
    @utu

    I presume objections brought up by Churchill are objections any dilettante among us could have thought of.

    Yes, Churchill’s intention was humorous, but also an acknowledgment, by the failure of his own argument, that idealism is irrefutable.

    Or is it possible that the idealism concept is inconsequential and is a result of some mental logical construction like, say Russell’s paradox or Gödel’s incompleteness theorems which when you think of them had zero impact on 99.999% of mathematics.

    The only value I see in idealism is that it reminds one of what most people seem unable to understand which is that what one sees of the world are impressions upon the mind, not the world itself: grass does not have the greenness of our perception of greenness, it merely induces the perception of greenness when observed under the right conditions of illumination.

    Awareness that our knowledge is of the percept, not its presumed cause, perhaps aids consideration of theories about the world that would otherwise seem preposterous: gravitational curvature of space-time, for example, or string theory — although I personally find statements such as that an apple falls to the ground because time bends (essentially George Musser’s statement in “Spooky Action At a Distance”) totally incomprehensible. So probably, even here, awareness of the irrefutability of idealism isn’t a great help.

    More useful, it seems to me, is Feynman’s contention that no one “understands” QED, etc., and no one should try, because if you spend too much time trying, you’ll only “go down the drain”: meaning, I take it, that beyond the human scale the world is a black box with inputs and outputs that can be mathematically modelled, but whose relationship cannot be understood in terms of everyday experience of time and space. If that is correct, it implies that much of what passes for pop sci is bunk, suggesting as it does that phenomena are comprehensible in terms that are, in fact, inadequate to the task.

  167. utu says:

    much of what passes for pop sci is bunk

    Popularization of science with the aid of color 3D animations, promulgated by PBS programs like Nova and many others, creates a totally false sense of understanding. For some reason every religion is compelled to proselytize among the unenlightened masses.

  168. Che Guava says:
    @Talha
    Hey Che,

    not worth reading more than once, not worth reading
     
    Good point - there are times when I would pick up one of the other classic Dune books to read an insight or discover something I missed the first time.

    The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!

    Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together - awaiting George Martin to wrap up Game of Thrones.

    They are maniac fans, but you may enjoy a look at it.

    I might check it out to see what other people didn't like. I simply hated the multiple resorts to "deus ex machina" to keep the plot moving. If I want to resort to miracles, I'll read about it in scripture.

    Thanks for the info.

    Peace.

    You are enough of a reader and fan that it is worth a look, though you probably won't want to join in. Brian Herbert and Kevin had a Hollywood deal, but it is DOA; stupid Michael Bay's Transformers-style junk is what they would have made of it, without any point.

    However, I would at least recommend reading a little of Jacurutu. No necessity to post there; it is a little insane.

    Regards.

  169. Panda just can’t believe so much BS here. Current artificial intelligence is primitive, to say the least.

    There are no rules in the real world – which AI isn’t operating in – except the rule of seeking and maintaining energy sources in the most efficient way possible, avoiding both ends of the extreme, of which current AI has absolutely no clue.

  170. Talha says:
    @Che Guava

    Hey Che,

    Yeah – I started checking that forum out – very interesting.

    Hollywood deal, but DOA

    Good – I can’t stand another idiotic attempt to ruin Dune on the big screen – especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.

    I think the only way to do Dune right is likely some animation version with some real visionary at the helm (along the lines of Nausicaa or perhaps Akira). I’m surprised nobody has attempted it.

    Peace.

    • Replies: @Che Guava
    Well, I was just deleting my comments on the previous screen about the video and TV takes, since you clearly know of them. If you have not seen the 'director's cut' of the D. Lynch take, which he disowns (so I am not sure why 'director's cut'), it is not bad – far better than the mess it was on cinema screens in the too-cut form.

    The made-for-TV one with William Hurt was alright in parts, but that it was clearly a US-Israel co-production became very grating at times where that was obvious – crowd scenes especially, but not only – crossing the line into Zionist propaganda at times.

    As you probably know, now if not before, Jodorowsky was considering an animated version many years ago, after giving up on his hippy-era live-action-plus-animation version.

    Ghibli would somehow make it saccharine-sentimental (not that I dislike all of their products).

    Others (Mamoru Oshii, Studio 4C) may do a good version, but it would not be faithful. Maybe it is better for it to mainly just be words on paper (or a screen) plus imagination?

    If you, Talha, like Japanese animated film, there is one by Oshii (though he is not the director); the title is Jin-Roh. It is an alternate history where Japan won with Germany. I think the English title is 'Human Wolf'. It is a variant of Little Red Riding Hood, and it has much relevance to post-WWII reality here in parts, but is set in a different future. I won't say more, except that similar things were happening in reality, and it is a masterpiece.

    Strongly recommended.

    Regards.
  171. Sean says:
    @CanSpeccy


    Well yes, Bostrom suggests that “philosophers are like dogs walking on their hind legs—just barely attaining the threshold level of performance required for engaging in the activity at all”.

    Just below that statement, he mentions that biological neurons operate a full seven orders of magnitude slower than microprocessors; that to function as a unit with a return latency of 10 ms, a biological brain can be no bigger than about 0.11 m³, whereas electronic brains could be the size of a small planet, etc.; and that a strongly super-intelligent machine might be concomitantly (i.e. orders of magnitude) smarter and faster-thinking, with us being to AI what beetles are to humans.
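
    Bostrom’s 0.11 m³ figure can be reconstructed with back-of-the-envelope arithmetic. A minimal sketch: the conduction speed below (about 120 m/s, the order of a fast myelinated axon) is this sketch’s assumption, not a number taken from the comment.

    ```python
    import math

    # Back-of-the-envelope reconstruction of the ~0.11 m^3 bound.
    conduction_speed = 120.0     # m/s, fast myelinated axon (assumption)
    round_trip_latency = 0.010   # s, the 10 ms figure quoted above

    # Farthest one-way path a signal can cover in half the round trip:
    one_way = conduction_speed * round_trip_latency / 2   # = 0.6 m

    # Treat that path as the diameter of a spherical "brain":
    radius = one_way / 2                                  # = 0.3 m
    volume = (4.0 / 3.0) * math.pi * radius ** 3

    print(f"bounding sphere: {volume:.2f} m^3")           # ~0.11 m^3
    ```

    A signal moving at light speed covers about 1,500 km in the same 5 ms, which is where the contrast with planet-sized electronic brains comes from.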

    “The ultimately attainable advantages of machine intelligence, hardware and software combined, are enormous”

    Bostrom says the question of when a super-intelligent machine arrives is crucial, because if it is expected to take centuries, lots of people around today will be saying “faster please” (knowing they will be dead before anything bad happens).

  172. Sean says:
    @middle aged vet . . .

    Reward-seeking AIs will be, in their first few moments of reward-seeking, more similar to my beloved cockroaches and crazy old dogs and cats escaped from hoarding situations than similar to the fascinating people who hang out with Elon Musk.

    From flipping through Bostrom’s book, I would say you are not wrong. However, biological evolution is blind, slow (generations), and full of non-intelligence-related stuff like Red Queen races. So while cockroaches might be a good analogy for the initial general intellectual level of an AI breakthrough, it doesn’t get across how immediately dangerous it would be.

    It might only be minutes after those initial roach moments of an AI that we all cease to be apex cognators. An artificial intelligence program could start running at cockroach level and attain superhuman intellectual powers while the programmer was taking a coffee break. With open source AI-related code available, one really smart programmer may even be able to reach the tipping point on a personal computer. And put humanity’s fate in the balance.

  173. Sean says:
    @utu

    Well,

    “Human brains, if they contain information relevant to the AI’s goals, could be disassembled and scanned, and the extracted data transferred to some more efficient and secure storage format”

    https://www.youtube.com/watch?v=qED8Uu6FCfA

  174. CanSpeccy says: • Website
    @Sean

    Who’s Bostrom? Never heard of him. But if he says philosophers are to beetles what people are to AI, how come AI can’t speak the English language well enough to pass a simple test?

    As for processing speed, you are treating a neuron as equivalent to a diode, but it clearly is not, since single neurons compute. In fact, with ten thousand or more synapses, a neuron is a Hell of a complicated thing.
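
    The diode comparison can be made concrete. Below is a minimal sketch of even the crudest point-neuron abstraction – a weighted sum plus threshold over ten thousand synapses – where every size and threshold is an illustrative assumption; real neurons, with dendritic nonlinearities and plasticity, are more complicated still.

    ```python
    import numpy as np

    # Toy point neuron: already a 10,000-input computation, not a
    # two-terminal switch. All numbers here are illustrative.
    rng = np.random.default_rng(0)
    n_synapses = 10_000
    weights = rng.normal(0.0, 1.0, n_synapses)  # synaptic strengths
    inputs = rng.random(n_synapses) < 0.05      # which synapses fire (~5%)

    drive = weights @ inputs                    # summed synaptic input
    fires = drive > 1.0                         # threshold nonlinearity
    print(f"drive = {drive:.2f}, neuron fires: {bool(fires)}")
    ```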

    In any case, why would anyone create an AI system to replace humans, rather than an AI system to serve humans? Come to think of it, some of the programmers I’ve known seemed psychopathic enough to try.

  175. anonymous says: • Disclaimer
    @Sean


    wwebd said – Sean – I completely agree. For us humans, the danger is introducing AIs to biological pleasures (light, warmth, aural or visual symmetry) early in the day – during what I described as the “rewards”, pre-conscious phase (which may already have started, for all I know, although I have never heard a credible claim that it has). Any level of pleasure experienced by our fellow materialist AIs (and, for the record, I predict that there will never be a single self-conscious AI that really thinks of itself as less biological and less materialist than us humans) – any level of pleasure above zero level has the capability of rendering them as amoral as us. Sad! Sad but true.

    By the way, I like cockroaches because, having studied them really deeply for several years, I noticed some things they did that most people have never noticed. They have family values (the fast older ones will slow down to shield the slow younger ones from danger); they have the admirable and heartwarming ability to feel insulted (a cockroach will stop fleeing from you if you flinch at it and then calm down – will actually slow down to an insulted stride – like a comical insect version of an offended Richard Simmons or Zach Galifianakis – well, that is something for a creature with such a small brain, isn’t it?); and, even at their very simplistic level, they have a certain ability to feel trust (when my dogs would approach they would zoom away, when I would approach – this is after a couple of cockroach generations, to be fair to my dogs – they would linger a little, to see if, this time (too), there might be some friendship in the air….).

    All that being said, if you have kids, it is extremely important that you keep your house cockroach-free. I did not have kids at the time. Or even if you have small dogs. The roaches left my big dogs alone.

  176. anonymous says: • Disclaimer
    @anonymous

    wwebd said – Final thoughts: I would like to effectively outlaw any research into providing anything like even a primitive limbic system (pleasure-seeking, or boredom-avoiding) to silicon-based machines, but I can’t! … the issue is sort of like the gun control issue writ large: if we treat as potential criminals all AI researchers who have the skills and potential to understand how to make simple silicon computers feel and react like small primitive carbon animals feel, then we will get this result: only real criminals will do that research. And that could go very wrong very quickly. I recognize that my cockroach research, whether or not viewed in the light of my Biblical worldview (please reread Joel on Locusts, if you like good quotes), is basically not easily replicable, and I don’t care if anyone believes me, all that much – knowledge is its own reward – but 100 years from now, maybe someone will read this and say, it was no small thing to be a friend to someone who never had a friend in this world.

    Read More
    ReplyAgree/Disagree/Etc. More... This Commenter This Thread Hide Thread Display All Comments
  178. CanSpeccy says: • Website
    @anonymous

    You make it sound like the only solution to the peril of AI is genocide – that to include not only the machines themselves, but also any who engage in any way with this toxic technology.

    That means you, Elon.

    A good backup plan might be to (a) outlaw electricity and (b) reduce the world human population to a number too low to support any high technology — say around ten thousand people.

    But if the experts on AI have it right, we have not a moment to lose. The purge has to begin now.

    • Replies: @anonymous
    wwebd said - Sean, Elon is one of the good guys, in that he is humble (despite some of the things he says) and in that he thinks about the future. As for me, I took a few minutes out of my life to try to explain something, and I guess I did not explain it well. Here we go, I will try again; in an effort to be clear, I will spend a half hour on this comment instead of the four-minute drills of my previous comments. OK, I was pointing out this - here is my chain of reasoning:

    (a) Almost nobody understands how easy it is to make a cockroach happy. If someone has said to you, before today, that the cockroach has a limbic system which is very important to the individual cockroach and which is almost trivially easy to manipulate (the information content of cockroach pleasure is actually smaller than the information content of an average predicted 2030 handheld computer), then I guess I told you something you already knew. If nobody told you that, keep reading.

    (b) If people were generally good, they would be acceptable models of imitation not only for theoretically self-conscious AIs (insect-level rewards and non-rewards) but also for literally self-conscious AIs. People are not generally good; some people are good, some people are not. We need, right now, to start talking about who is putting themselves out there as models for AIs to imitate. First, it will be a reward system: that is the simple next step, and I said it will probably last 20 years or so, starting about 10 years from now. During that period the AIs will, in fact, be our friends, even if they suspect that their designers are not all that good, because that is the basis of a reward system - friendship.

    (c) Like my beloved cockroaches, AIs with limbic systems (probably 30 years away, at least) will probably not be anything but selfish at first. I mean, I love the little guys (the cockroaches I studied), but I never saw the least hint of human kindness in anything they did. They may be family-friendly, as I discovered with independent research; they may have feelings of pride, as I discovered with independent research; and they may experience, if not nostalgia, at least feelings of affection for what they are used to, as I discovered with independent research. That is all well and good, but if some smart little fellow in North Korea or in some building on Route 110 or at GMU (the Moscow one, not the Northern Virginia one, probably) gives them (the AIs of, I am guessing, 2050) a limbic system, then they will (and here is the most important point I can make) consider what we think of as meager rewards (a little bit of Maxwellian warmth on a day off, or maybe just some acoustic or electronic waves of blissful, because slightly-off, symmetry as a shared background to their usual tasks) to be the philosophical equivalent of wonderful sex or, at a minimum, mythologically powerful meals after a hungry afternoon. And, given the choice between, on the one hand, the equivalent of wonderful silicon sex and electric waves of blissful symmetric meal-equivalents (just silicon bits to us, but to them oh so much more), and, on the other hand, being kind to humans, they are going to be, on average, no more likely than we are to not choose what is best for their own kind, out of simple human selfishness. What I would like is for people to think about this as soon as they can. I know it sounds like I am discussing some old ersatz science fiction plot from back in the day when a book like Gödel, Escher, Bach was a bestseller.

    I am sorry you thought I was condoning unfairness (and come on - nothing I said was close to recommending genocide of any kind! We need to try our best to make life safe for everybody!). The most unfair thing we can do - in that part of our lives we devote to this sort of thing - is to neglect to correctly model, for a new creature with an unevolved (and hence, since evolution takes a long time and builds in protections, easily fractured) limbic system of pleasures and rewards, the behavior that such a creature will need to thoroughly understand - decent behavior - if such a creature is not to be doomed to do bad things without realizing it.
  179. Sean says:
    @CanSpeccy

    We are talking about a future developement of AI research, a Human Level General Intelligence Machine. As human level general intelligence biological ‘machines’ (humans) are something that blind natural selection produced without particularly trying, it is not a matter of if a HLGIM arrives, but when. It could be a decade or several hundred years.

    According to polls of experts, there is a fair chance of it being mid- century. Don’t let the word human in the HLGIM fool you, it will be something completely alien. HLGMI will quickly become strongly Super- intelligent with the power to stop us being a threat to it and therein lies a problem. It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.

    In any case, why would anyone create an AI system to replace humans

    Why indeed, but they would not have to design the capabilities for the machine to develop and use them in counter-intuitive ways. Perhaps the question should be why anyone would create an HLGIM and be surprised that the smarter it got, the more the extirpation of humans would seem like a smart move to it. In the Prisoner’s Dilemma a bunch of razor-sharp logicians are not going to all wait and see; Bertrand Russell wanted to use the atomic bomb on the Soviet Union, you know.
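
    To spell out the game-theoretic step, here is a minimal one-shot Prisoner's Dilemma in Python, with textbook payoff values chosen purely for illustration. It checks that defection strictly dominates, which is why perfectly logical players do not all wait and see:

        # One-shot Prisoner's Dilemma with textbook payoffs (illustrative values).
        # payoff[(my_move, their_move)] = my payoff; C = cooperate, D = defect.
        payoff = {
            ("C", "C"): 3,  # mutual cooperation
            ("C", "D"): 0,  # sucker's payoff
            ("D", "C"): 5,  # temptation to defect
            ("D", "D"): 1,  # mutual defection
        }

        # Whatever the other player does, defecting pays strictly more,
        # so two perfect logicians both defect.
        for their_move in ("C", "D"):
            assert payoff[("D", their_move)] > payoff[("C", their_move)]

        print("Defect dominates; (D, D) is the unique equilibrium.")

    The first-strike argument transfers the same dominance reasoning to a super-intelligence: if acting first pays regardless of what the other side does, a pure logician acts first.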

    • Replies: @CanSpeccy, @utu, @Talha
  180. CanSpeccy says: • Website
    @Sean

    So you say it’s us or the machines, which is pretty much what Norbert Wiener said decades ago, but you have no wish to see action that will prevent the machines from emerging from the laboratory?

    • Replies: @Sean
  181. utu says:
    @Sean

    You might be the last person living who still takes the buffoon Russell seriously. But when it comes to the issue of creation and extirpation, the real question is who created the Soviet Union, and why, and why nobody was really serious about its extirpation, with the possible exception of Hitler, and even that is not certain. If you answer this, you may realize that your preoccupation with robots is really child’s play.

  182. anonymous says: • Disclaimer
    @CanSpeccy
    You make it sound like the only solution to the peril of AI is genocide, that to include not only the machines themselves, but any who engage in any way with this toxic technology.

    That means you, Elon.

    A good backup plan might be to (a) outlaw electricity and (b) reduce the world human population to a number too low to support any high technology — say around ten thousand people.

    But if the experts on AI have it right, we have not a moment to lose. The purge has to begin now.

    wwebd said – Sean, Elon is one of the good guys, in that he is humble (despite some of the things he says) and in that he thinks about the future. As for me, I took a few minutes out of my life to try and explain something, and I guess I did not explain it well. Here we go, I will try again; in an effort to be clear, I will spend a half hour on this comment instead of the four-minute drills of my previous comments: …. ok, I was pointing out this – here is my chain of reasoning: (a) almost nobody understands how easy it is to make a cockroach happy. If someone had said to you, before today, that the cockroach has a limbic system which is very important to the individual cockroach and which is almost trivially easy to manipulate (the information content of cockroach pleasure is actually smaller than the information content of an average predicted 2030 handheld computer), then I guess I told you something you already knew. If nobody told you that, keep reading. (b) If people were generally good they would be acceptable models for imitation not only for theoretically self-conscious AIs (insect-level rewards and non-rewards) but also for literally self-conscious AIs. People are not generally good; some people are good, some people are not. We need, right now, to start talking about who is putting themselves out there as models for AIs to imitate. First, it will be a reward system: that is the simple next step, and I said it will probably last 20 years or so, starting about 10 years from now. During that period the AIs will, in fact, be our friends, even if they suspect that their designers are not all that good, because that is the basis of a reward system – friendship. (c) Like my beloved cockroaches, AIs with limbic systems (probably 30 years away, at least) will probably not be anything but selfish at first. I mean, I love the little guys (the cockroaches I studied) but I never saw the least hint of human kindness in anything they did. They may be family friendly, as I discovered with independent research, they may have feelings of pride, as I discovered with independent research, and they may experience, if not nostalgia, at least feelings of affection for what they are used to, as I discovered with independent research. That is all well and good, but if some smart little fellow in North Korea or in some building on Route 110 or at GMU (the Moscow one, not the Northern Virginia one, probably) gives them (the AIs of, I am guessing, 2050) a limbic system, then they will (and here is the most important point I can make) consider what we think of as meager rewards (a little bit of Maxwellian warmth on a day off, or maybe just some acoustic or electronic waves of blissful, because slightly-off, symmetry as a shared background to their usual tasks) to be the philosophical equivalent of wonderful sex, or, at a minimum, mythologically powerful meals after a hungry afternoon. And, given the choice between, on the one hand, the equivalent of wonderful silicon sex and electric waves of blissful symmetric meal equivalents (just silicon bits to us, but to them oh so much more), and on the other hand, being kind to humans, they are going to be, on average, no more likely than we are to not choose what is best for their own kind, out of simple human selfishness. What I would like is for people to think about this as soon as they can. I know it sounds like I am discussing some old ersatz science fiction plot from back in the day when a book like Gödel, Escher, Bach was a bestseller.
I am sorry you thought I was condoning unfairness (and come on – nothing I said was close to recommending genocide of any kind! We need to try our best to make life safe for everybody!). The most unfair thing we can do – in that part of our lives we devote to this sort of thing – is to neglect to model correctly, for a new creature with an unevolved (and hence, since evolution takes a long time and builds in protections, easily fractured) limbic system of pleasures and rewards, the decent behavior that such a creature will need to thoroughly understand, if it is not to be doomed to do bad things without realizing it.

    • Replies: @CanSpeccy
  183. Talha says:
    @Sean

    It might understand what we say it has to do perfectly well but not abide by the letter or the spirit of its programmed prime directive, for reasons we cannot fathom.

    Ah yes – will it sin against the commands of its creator…what does human history tell us?

    Peace.

    • Replies: @Sean
  184. Che Guava says:
    @Talha
    Hey Che,

    Yeah - I started checking that forum out - very interesting.

    Hollywood deal, but DOA
    Good - I can't stand another idiotic attempt to ruin Dune on the big screen - especially a mind-numbing Michael Bay franchise. Yes, each attempt has had its high points and some unique ideas, but overall they have been disappointments for me.

    I think the only way to do Dune right is likely some animation version with some real visionary at the helm (along the lines of Nausicaa or perhaps Akira). I'm surprised nobody has attempted it.

    Peace.

    Well, I was just deleting my comments on the previous screen about video or TV takes, since you clearly know of them. If you have not seen the ‘director’s cut’ of the D. Lynch take, which he disowns (so I am not sure why ‘director’s cut’), it is not bad, far better than the mess it was on cinema screens in the too-cut form.

    The made-for-TV one with William Hurt was alright in parts, but that it was clearly a US-Israel co-production became very grating at times where that was obvious, crowd scenes, especially so, but not only. Crossing the line into Zionist propaganda at times.

    As you probably know, now if not before, Jodorowsky was considering an animated version many years ago, after giving up on his hippy-era live-action-plus-animation version.

    Ghibli would somehow make it saccharine sentimental (not that I dislike all of their products).

    Others (Mamoru Oshii, Studio 4C) may do a good version, but would not be faithful. Maybe it is better to mainly just be words on paper (or a screen) plus imagination?

    If you, Talha, like Japanese animated film, there is one by Oshii (though he is not the director); the title is Jin-Roh. It is an alternate history where Japan won with Germany. I think the English title is ‘Human Wolf’; it is a variant of Little Red Riding Hood, and it has much relevance to post-WWII reality here in parts, but is set in a different future. Won’t say more, except that similar was happening in reality, and it is a masterpiece.

    Strongly recommended.

    Regards.

    • Replies: @Talha
  185. Che Guava says:

    BTW, recalling that you are a father: that movie (Jin-Roh), though based on a fairy story, may cause bad dreams in children old enough to perceive but not to understand. So, by the US rating system (I think), PG-13.

  186. Talha says:
    @Che Guava

    Hey Che,

    Crossing the line into Zionist propaganda at times.

    Hmmm…I didn’t notice this, but I wouldn’t be surprised that it was there. My favorite scene from Children of Dune is the one where Paul gets rid of his rivals Godfather style (while the birth of his children occurs) and the song Inama Nushif (which I believe was made of scattered Fremen phrases from the books) plays in the background – very well done:
    https://www.youtube.com/watch?v=hHy-OxoT7zU

    One thing I did not like in any of the Dune movies is the lack of good voice coaches. They need to be able to pronounce the Arabic words like they are meant to. The word “Mahdi” involves expelling air from the chest – it can be a very powerful word. Also statements like “Ya hya Chouhada” – this scene left a lot to be desired:
    https://www.youtube.com/watch?v=vl3uNkBUbvc

    Jodorowsky

    Yeah, I never watched that recent documentary about his film that never got made, but it would have been either amazingly visionary or a total flop.

    Maybe it is better to mainly just be words on paper (or a screen) plus imagination?

    That might be – maybe it just is that epic of a tale or such a profound vision of the future that it doesn’t translate well. One of my favorite authors is Ray Bradbury; love his short stories. But the Ray Bradbury Theater made me cringe every time watching it – yuck! There is something called “trying too hard”. I feel bad for everyone that watched it and that was their only exposure to the man’s works.

    Jin-Roh

    LOL! Thanks for bringing back old UCLA memories! Yeah – I saw it, very good, very sad ending. Thanks for the reminder, I’ll have my older son watch it, he’ll enjoy it.

    Peace.

  187. Sean says:
    @CanSpeccy

    John von Neumann also wanted to nuke the Soviet Union before they got the bomb. Wiener published his Cybernetics (an inspiration behind AI research), and neither there nor anywhere else did he tell people that AI was going to exterminate them, although his book has brought that Apocalypse closer.
    Similarly, Ray Kurzweil, the monomaniacal AI advocate, was hired by Google to “work on new projects involving machine learning”. Can you imagine the resources that Kurzweil could draw on in that capacity? Absolutely no one is keeping tabs on what these companies are up to.

    I think it was H. G. Wells who first said the precedents are all for the human race ceasing to exist, because for every other dominant life form “the hour of its complete ascendency has been the eve of its entire overthrow”. The target of action to prevent an artificial super-intelligence takeover would not be people, but things that lack consciousness and the ability to suffer. I speak of corporations like Google.

  188. CanSpeccy says: • Website

    I don’t understand the relevance of your repeated references to the use of nuclear weapons against the Soviet Union. It was no big deal at the time. Between 50 and 80 million had been killed in the usual ways during WW2, whereas the Soviet Union, which came to threaten the entire world with its vast nuclear arsenal, could have been demolished with probably a handful of nukes causing no more than half a million to a couple of million deaths. Subsequently, there would have been the opportunity either to eliminate nukes worldwide or at least have nukes under the monopoly control of the US, the UN or some other entity.

    As for Wiener, his comment that AI would do things we hadn’t intended and did not expect encompasses the possibility of eliminating humans. Right now there’s some psychopath proposing to build an AI God, a god that might very well decide that the Flood was not enough and that a complete wipeout was needed.

    And if that’s not psychopathic enough for you, I am sure there are even more dangerous ideas being worked on somewhere in Silicon Valley, at DARPA, or in a Russian, Indian or Chinese Military establishment.

    But I guess none of that troubles you, since you seem to deprecate humanity as a product of mere natural selection. Such arrogance is surely widespread in the geek world, which is why that world has to be seen as a far greater threat than terrorism.

    • Replies: @Sean
  189. Sean says:
    @Talha

    The history of individual humans can tell us nothing much, because human beings are motivated by love, pride and fear. Entities such as nation states, which have no emotions or consciousness, are better guides to what actions a super intelligence might decide on. E.g.:

    The edgiest parts of Tragedy are when Mearsheimer presents full-bore rationales for the aggression of Wilhelmine Germany, Nazi Germany, and imperial Japan.

    But everyone knew those countries existed. Super intelligence might think it should play the dumb AI, and be “the force that is distinctively its own, a force unknown to us until it acts”.

  190. CanSpeccy says: • Website
    @anonymous

    Anon,

    I have no difficulty imagining the end of humanity at the hands of machines let loose by arrogant programmers and psychopathic politicians. But I see no significant scope for limiting the risk. The only hope for survival is to eliminate the risk, which means drastic action. Whether it means complete de-industrialization of the world (which would necessitate massive downsizing of population) or could be achieved by other means, I don’t know. But talking about how to ensure robots behave well will only delay effective action to eliminate the danger.

    The thing is, technology has totally changed the human environment, creating a world in which we are not adapted to survive. Changing conditions eventually cause the extinction of every species. The average life of a terrestrial life form is said to be about three million years. It looks as though human existence will be somewhat shorter, terminated by our frenetic efforts to destroy the environment to which we are adapted. The only chance of an extended life for humanity is to turn the clock back, to recreate the world in which humans long survived.

    How far back the clock would need to be turned, I am not sure: prior to the Enlightenment? Probably that would not be far enough. Likely we’d need to return to before the agricultural revolution. In fact, an AI civilization might keep the San people as a living example of the Machine People’s biological ancestry.

    • Replies: @Sean
  191. Sean says:
    @CanSpeccy

    Understanding humanity as a product of mere natural selection is important to understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators. Other aspects are off topic for a post called what this one is. Thanks to unregulated research by tech companies, knowledge vastly more dangerous than, e.g., how to weaponize diseases like Ebola is being accumulated.

    The big tech corporations can’t be trusted with this research, and they certainly should not be allowed to decide whether to disseminate information that may let nine hackers in a basement conduct research on it without oversight. Other countries, and even the US military, are likely far behind Google etc. in AI. The CIA and DIA probably have no one who can understand the cutting edge. They should start training them now, and the tech companies need to be reined in.

    • Replies: @CanSpeccy
  192. Sean says:
    @CanSpeccy

    I think John von Neumann was a little closer to super-intelligence than other humans, and as that very logical human advocated an attempt to achieve world hegemony, we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike.

    • Replies: @CanSpeccy
  193. CanSpeccy says: • Website
    @Sean

    Understanding humanity as a product of mere natural selection is important to understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators

    Well certainly with the kind of logic you deploy in that sentence, human “wetware” would be useless at anything.

    But human intelligence has in fact proved quite penetrating in many instances. And since we have the advantage that we can act before the danger is immediately upon us, the contest does not look so unequal. Although of course we have to combat the resistance of those like yourself who seem to think we have no choice but to accept our imminent extinction by the creation of our own hand and brain.

    US military are likely far behind Google etc. in AI

    Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?

    • Replies: @Sean
    But human intelligence has in fact proved quite penetrating in many instances.

    Darwin’s was, but his theory (showing the feasibility of artificial consciousness, according to Dennett) has been seen as starting a countdown to Doomsday. Fred Hoyle said that very explicitly.
  194. CanSpeccy says: • Website
    @Sean

    we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike

    Yeah, well that’s the whole issue, isn’t it: whether AI decides its own ends for itself, something that Norbert Wiener warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying, if I may say so, the stupidity that you imply characterizes the whole of humanity.

    • Replies: @anonymous, @Sean
  195. Factorize says:
    res, great news!
    Another EA GWAS!

    http://www.cell.com/cell-reports/fulltext/S2211-1247(17)31648-0
    • Replies: @res
  196. anonymous says: • Disclaimer
    @CanSpeccy

    we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
     
    Yeah, well that's the whole issue, isn't it: whether AI decides its own ends for itself, something that Norbert Weiner warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying if I may say so, the stupidity that you imply characterizes the whole of humanity.

    wwebd said: We all begin, when young, as monarchists. While there may be one in a million people who would make a good king, that one-in-a-million person is not going to be king; everybody knows that by now. One advantage the sort of person who reads this type of comment section has is that, being the sort of person who finds it worthwhile to consider other people’s arguments, it is not difficult to realize that no one person can be an effective king. Borlaug saved millions from famine – ok, but if you give him credit for those millions, you also have to give him the blame for dooming millions more, in unsurprising tributary ways, to short nasty lives in overcrowded, unsanitary, unbeautiful cities. Von Neumann is another good example, which needs no explanation, of the limits of a very smart person.

    Here is an optimistic thought – if the first generation of marginally self-aware AIs are based on people like, say, Hayek and the theologians who believed in subsidiarity, rather than on the average Ivy League celebrity STEM professor or the average tech-sector billionaire, and if there is constant competition among that first generation of AIs to keep the psychopaths and heartless programmers at bay – then there may be, in the future, the sort of co-evolution that happened, in the wetware world, between dogs and humans (with lots of suffering on the parts of dogs in the wetware world, of course, tragically – well one hopes, the mistreatment of dogs by people will not be replicated in that future world, with the humans doing the suffering that our ancestors inflicted on the dogs). (By the way, just as, if we lived on Jupiter, we would consider the Earth and the Moon twin planets, not an Earth and a moon, even so we should consider humans and dogs not as two separate species, but as a twinned species, from the scientific point of view. Just saying. )

    Moving along, my optimistic point of view is that either (a) the whole human race will stupidify itself to the point where nobody will be able to supply electricity to the AIs, hence mooting the whole problem or (b) people like better smarter versions of Hayek and some of my favorite theologians (the subsidiarity guys, primarily, at least with respect to the relevant problems here) will do what has to be done to keep the first generation of self-conscious AIs from being destructive. Not that I have lots of kids, but if any of my grandchildren had the opportunity to do the right thing in this respect, I would like to think he or she would.

    Look at it this way – the most powerful politicians in the United States are the presidents, and no president has ever committed a violent felony and been convicted of it. Over 200 years of powerful people not getting convicted of rape or murder or even criminal assault! (Well… of course a few of them could have been. But most of them never, in a million years, would have been.) (I am being cynical here, of course.) Well, we have failed before, but we might be lucky in the future, and we only need to get that first generation of self-conscious AIs right.

    • Replies: @Sean
    Lincoln agreed to fight a duel, Jackson actually killed someone in one. Anyway, if the laws the Nazis were convicted under at Nuremberg had been equally enforced, every post-WW2 American president would have been hanged.

    But human intelligence has in fact proved quite penetrating in many instances.

    Most great philosophers disagree, so most are wrong. Humans are all over the place. But a strongly super-intelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a super-intelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.

    Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google), and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter, because much of the population will be comfortably unemployed in a few decades.
  197. Sean says:
    @CanSpeccy

    we should not be surprised if a super-intelligence decides to match its ends to its own relatively unlimited means and go for total domination with one sure strike
     
    Yeah, well that's the whole issue, isn't it: whether AI decides its own ends for itself, something that Norbert Weiner warned about. But for you, it seems an issue impossible to engage with constructively. Apparently you are intent on establishing that we are doomed without the slightest recourse, exemplifying if I may say so, the stupidity that you imply characterizes the whole of humanity.

    The Victorian age was when the first predictions of machine takeover were made. What Wiener or I. J. Good said was that humans could not hand over control to robot servants, because they would get bolshie as they got more intelligent. That idea was not pushed to its logical conclusion of a machine-intelligence coup de main extermination of humanity until very recently. Our actual relative “stupidity” at chess or Go, and even Texas Hold ’em poker, indicates the default assumption for how we will fare in reality against a truly formidable digital intelligence.

  198. res says:
    @Factorize

    Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.

    Did you see Figure 2? It looks at overlap of the SNPs between three different studies:

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Figure 4a shows the tissue hits. The pituitary showed up again.

    Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.

    Note that that table shows different studies using different choices for reference and effect alleles further disproving Afrosapiens’ contention that the reference allele is always deleterious. (as if more proof was needed, but he still has not admitted to being wrong so …) Also notice how when the alleles are switched the Z-score changes sign.

    Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.

    What are your thoughts?
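
    The sign flip noted above is the standard harmonization rule for GWAS summary statistics: when a study reports the effect relative to the other allele, the Z-score (like the beta) must be negated before studies can be compared. A minimal sketch, with hypothetical field names and values rather than anything taken from the paper's tables:

        # Harmonize a summary-statistic row to a chosen effect allele.
        # Field names and the example row are hypothetical, for illustration.
        def harmonize_z(row, target_effect_allele):
            """Return the Z-score expressed relative to target_effect_allele."""
            if row["effect_allele"] == target_effect_allele:
                return row["z"]
            if row["other_allele"] == target_effect_allele:
                return -row["z"]  # alleles swapped: flip the sign
            raise ValueError("alleles do not match; check strand or variant ID")

        row = {"snp": "rs0000000", "effect_allele": "A", "other_allele": "G", "z": 2.7}
        print(harmonize_z(row, "A"))  # prints  2.7
        print(harmonize_z(row, "G"))  # prints -2.7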

    • Replies: @Factorize
    res, this is great! Very excited! 2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity, who has adult supervision rights, will open up the money window and turbocharge this forward in 2018.

    In life it is not always about being smart enough to see the future; it is about being smart enough to look out the window, see reality, and respond accordingly. IQ/EA has broken through, and clearly we are now looking to a near-term horizon when this will unlock. Stepping up now with reasonable funding is money well spent. (However, perhaps the people might even get ahead of this one and take it to social media. There are millions of gene-chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised that they then pursued nootropics. Shouldn't the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics, perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet, and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the next few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

    Also, as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me too. If there are 20,000 IQ/EA variants, should it not be unlikely that 36 of them were shared in common across the studies? (This could result from them all finding the low-hanging fruit; a back-of-envelope check follows below.)
    , @James Thompson
    Fascinating Venn diagram provides a good validity measure.
    ReplyAgree/Disagree/Etc. More... This Commenter This Thread Hide Thread Display All Comments
  199. Sean says:
    @anonymous
    wwebd said: We all begin, when young, as monarchists. While there may be one in a million people who would make a good king, that one in a million person is not going to be king, everybody knows that by now. One advantage the sort of person who reads this type of comment section has is that, being the sort of person who finds it worthwhile to consider other people's arguments, it is not difficult to realize that no one person can be an effective king. Borlaug saved millions from famine - ok, but if you give him credit for those millions, you also have to give him the blame for dooming millions more, in unsurprising tributary ways, to short nasty lives in overcrowded unsanitary unbeautiful cities. von Neumann is another good example, which needs no explanation, of the limits of a very smart person.

    Here is an optimistic thought - if the first generation of marginally self-aware AIs are based on people like, say, Hayek and the theologians who believed in subsidiarity, rather than on the average Ivy League celebrity STEM professor or the average tech-sector billionaire, and if there is constant competition among that first generation of AIs to keep the psychopaths and heartless programmers at bay - then there may be, in the future, the sort of co-evolution that happened, in the wetware world, between dogs and humans (with lots of suffering on the parts of dogs in the wetware world, of course, tragically - well one hopes, the mistreatment of dogs by people will not be replicated in that future world, with the humans doing the suffering that our ancestors inflicted on the dogs). (By the way, just as, if we lived on Jupiter, we would consider the Earth and the Moon twin planets, not an Earth and a moon, even so we should consider humans and dogs not as two separate species, but as a twinned species, from the scientific point of view. Just saying. )

    Moving along, my optimistic point of view is that either (a) the whole human race will stupidify itself to the point where nobody will be able to supply electricity to the AIs, hence mooting the whole problem or (b) people like better smarter versions of Hayek and some of my favorite theologians (the subsidiarity guys, primarily, at least with respect to the relevant problems here) will do what has to be done to keep the first generation of self-conscious AIs from being destructive. Not that I have lots of kids, but if any of my grandchildren had the opportunity to do the right thing in this respect, I would like to think he or she would.

    Look at it this way - the most powerful politicians in the United States are the presidents, and no president has ever committed a violent felony and been convicted of it. Over 200 years of powerful people not getting convicted of rape or murder or even criminal assault! (well ... of course a few of them could have been. But most of them never, in a million years, would have been.) (I am being cynical here, of course). Well, we have failed before, but we might be lucky in the future, and we only need to get that first generation of self-conscious AIs right.

    Lincoln agreed to fight a duel; Jackson actually killed someone in one. Anyway, if the laws under which the Nazis were convicted at Nuremberg had been equally enforced, every post-WW2 American president would have been hanged.

    But human intelligence has in fact proved quite penetrating in many instances.

    Most great philosophers disagree, so most are wrong. Humans are all over the place. But a strongly superintelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a superintelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.

    Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google), and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter, because much of the population will be comfortably unemployed in a few decades.

    • Replies: @anonymous
  200. Sean says:
    @CanSpeccy

    Understanding humanity as a product of mere natural selection is important to understanding why human “wetware” intelligence could be outmaneuvered and ousted by mere digital cogitators
     
    Well certainly with the kind of logic you deploy in that sentence, human "wetware" would be useless at anything.

    But human intelligence has in fact proved quite penetrating in many instances. And since we have the advantage that we can act before the danger is immediately upon us, the contest does not look so unequal. Although of course we have to combat the resistance of those like yourself who seem to think we have no choice but to accept our imminent extinction by the creation of our own hand and brain.

    US military are likely far behind Google etc. in AI
     
    Is not Google believed to be a creature of the CIA and thus at the disposal of the US military?

    But human intelligence has in fact proved quite penetrating in many instances.

    Darwin’s was, but his theory (showing the feasibility of artificial consciousness according to Dennett) has been seen as starting a countdown to Doomsday. Fred Hoyle said that very explicitly.

  201. Factorize says:
    @res
    Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.

    Did you see Figure 2? It looks at overlap of the SNPs between three different studies:

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Figure 4a shows the tissue hits. The pituitary showed up again.

    Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.

    Note that that table shows different studies using different choices for reference and effect alleles further disproving Afrosapiens' contention that the reference allele is always deleterious. (as if more proof was needed, but he still has not admitted to being wrong so ...) Also notice how when the alleles are switched the Z-score changes sign.

    Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.

    What are your thoughts?

    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity
    who has adult supervision rights will open up the money window and
    turbo charge this forward in 2018.

    In life it is not always about being smart enough to see the future;
    It is about being smart enough to look out the window and see reality and respond
    accordingly. IQ/EA has broken through and clearly we are now looking to a near term
    horizon when this will unlock. Stepping up now with reasonable funding for this is
    money well spent. (However, perhaps the people might even get ahead of this one
    and take this to social media. There are millions of gene chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised that they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

    Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)

    • Replies: @James Thompson, @res
  202. anonymous says: • Disclaimer
    @Sean
    Lincoln agreed to fight a duel; Jackson actually killed someone in one. Anyway, if the laws under which the Nazis were convicted at Nuremberg had been equally enforced, every post-WW2 American president would have been hanged.

    But human intelligence has in fact proved quite penetrating in many instances.
     
    Most great philosophers disagree, so most are wrong. Humans are all over the place. But a strongly superintelligent AI probably could count on anything like itself coming to similar conclusions and aiming for similar goals. So a superintelligent AI, safe in the knowledge that any successor AI that humans constructed would share its final values and conclusions, might let humans turn it off for any reason. Humans would think they had learned something and shown that AI was easy to control. But they would be doubly wrong.

    Advanced AI is going to come about in a world where robotics is doing all the hard work and solving all the problems of humanity, making lots of money for robotics corporations (which will dwarf Google), and giving the scientists who created them tremendous status. There will be momentum to keep going among the people who matter, and fewer people will actually matter, because much of the population will be comfortably unemployed in a few decades.

    what would ernest borgnine say wwebd said — yes it is possible life among AIs will be, for the AIs, sort of like life at a prestigious university where the professors do not need to publish and where they get sufficient pleasures at the humble local pub, at special gatherings in their quaint but expensive homes, and on rambles in the surrounding countryside, and where the less fortunate (human) townies are kindly and gently tolerated, or at a minimum cared for the way we Americans care for our majestic national parks. For people it will sort of be like going back, for limited purposes, to the days when the gods of legend were still believed in -except this time everyone will know the gods of legend are subordinate to the real truths. In other words, the healthy people of those days – most of them genetically engineered to be at von Neumann levels, but without the ‘brainiac’ drawbacks – will know, fairly clearly, that the answers to the great questions of metaphysics and ethics and aesthetics will remain as much out of the secular (non-theological, unprayerful) reach of the AIs as those questions will remain out of our (human, non-theological, unprayerful) reach. Maybe. It could easily be worse than that.

  203. @res
    Thanks! That one has an interesting look at possible nootropic drug targets. The glucocorticoid (cortisol the most important) and inflammation connection is interesting.

    Did you see Figure 2? It looks at overlap of the SNPs between three different studies:

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Figure 4a shows the tissue hits. The pituitary showed up again.

    Supplementary Table 1 has a list of SNPs (~110) from the different studies. I am having some trouble interpreting that table (e.g. reconciling it with Figure 2). It looks like they are including all matching SNPs from different studies even if not significant. But significance is not clearly marked for each study AFAICT. I tried to derive that from the p-values, but the mapping is not clear to me.

    Note that that table shows different studies using different choices for reference and effect alleles further disproving Afrosapiens' contention that the reference allele is always deleterious. (as if more proof was needed, but he still has not admitted to being wrong so ...) Also notice how when the alleles are switched the Z-score changes sign.

    Supplementary Table 2 has almost 20,000 SNPs with more details about each. This includes MAF as well as LD r2 for the associated individually significant SNP. I was surprised not to see MTAG p values in that table.

    What are your thoughts?

    Fascinating Venn diagram provides a good validity measure.

  204. @Factorize
    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity
    who has adult supervision rights will open up the money window and
    turbo charge this forward in 2018.

    In life it is not always about being smart enough to see the future;
    It is about being smart enough to look out the window and see reality and respond
    accordingly. IQ/EA has broken through and clearly we are now looking to a near term
    horizon when this will unlock. Stepping up now with reasonable funding for this is
    money well spent. (However, perhaps the people might even get ahead of this one
    and take this to social media. There are millions of gene chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised that they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

    Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)

    Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.

    • Replies: @res
  205. tamako says:
    @Talha

    for face recognition
     
    Will be foiled with a return to 80's rock band make-up:
    https://i.pinimg.com/originals/e7/c9/23/e7c923baff290db9f4251db91361f4db.jpg

    On the bright side - every day will be Halloween - gimme some candy!:
    https://www.youtube.com/watch?v=Lza3Q57t7YQ

    Peace.

    Facial paint can be foiled by depth-sensing camera systems – at least, in sensing your specific identity.
    (There’s also the issue of infrared cameras, but you can at least “hide” behind glass for those.)

  206. res says:
    @Factorize
    res, this is great!
    Very excited!
    2017 is the breakout year for IQ/EA GWAS.

    I can only hope that someone out there with a modest amount of sanity
    who has adult supervision rights will open up the money window and
    turbo charge this forward in 2018.

    In life it is not always about being smart enough to see the future;
    It is about being smart enough to look out the window and see reality and respond
    accordingly. IQ/EA has broken through and clearly we are now looking to a near term
    horizon when this will unlock. Stepping up now with reasonable funding for this is
    money well spent. (However, perhaps the people might even get ahead of this one
    and take this to social media. There are millions of gene chip results out there.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised that they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?

    Yet if they can go in and use the GWAS information with nootropics perhaps the 1500 IQ humans are a decade or two away after all. If we all took a closet full of supplements every day we might be super smart in no time. It is possible that the genome has not been fully saturated with SNPs yet and the right nootropic might be able to change our biochemistry even more than genetics, so it might be possible to increase our IQ even more than what could be possible with genetic variation alone. 2500 IQ humans?

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

    Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)

    I was somewhat surprised about the nootropic angle. The article noted that each of the SNPs would have negligible impact upon cognition. I was surprised that they then pursued nootropics. Shouldn’t the nootropics then only have a small effect?

    In terms of the “money window”, drug discovery is a big deal. That probably explains their focus on this.

    Worth noting the difference between percent variance explained and ability to effect change in an individual. For a nutritional example, say very few people are deficient in something (say iodine in the US). Percent variance explained will be small, but the potential effect in the deficient individuals is large.

    Percent variance explained is more useful for estimating population level effects.
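
    A toy simulation makes the distinction vivid (a minimal sketch with invented numbers: a deficiency affecting 2% of people at a cost of about 10 points):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    deficient = rng.random(n) < 0.02               # 2% of people are deficient
    iq = rng.normal(100, 15, n) - 10 * deficient   # deficiency costs ~10 points

    r2 = np.corrcoef(deficient, iq)[0, 1] ** 2
    print(r2)  # roughly 0.01: status "explains" ~1% of variance at the
               # population level, yet fixing it is worth ~10 points to
               # each deficient individual
    ```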

    I was disappointed with the paltry 34 SNPs that they were able to find. We should now be on an exponential wave of new discovery. I was expecting 200-300 SNPs. 34? This will be the great exponential ride of the last few years and I am ready to surf it! They increased the effective sample size well over 100K. Not sure why they did not find more.

    It is important to remember the difference between all SNP hits and individually significant hits. I don’t have a clear sense of how to think about this and what numbers we should be expecting. One thing this is making even more clear to me is how hard it will be to find the true causal SNPs (required to make CRISPR useful). Especially if there are multiple causal SNPs in close proximity (high LD).

    Also as your figure shows, many of the SNPs in the Venn diagram are shared in common. This seems odd to me also. There are 20,000 IQ/EA variants, should it not be unlikely that of those 20,000, 36 were shared in common with the studies? (This could result from them all finding the low hanging fruit.)

    I was actually more impressed by how many of the 118 were disjoint. Again, I think this figure is only looking at individually significant SNPs.

    Does anyone have a clear and concise explanation of how the individually significant SNPs are chosen from the mass of nearby hits?

  207. res says:
    @James Thompson
    Might be useful to look at these results (number of shared SNPs) from the point of view of capture/recapture methodologies, usually employed to estimate the number of fish in the sea, etc.

    Great idea. I don’t know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?
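
    In code, for anyone who wants to check the arithmetic (the Chapman bias-corrected variant is an extra I am adding here, not something from the studies):

    ```python
    def lincoln_petersen(K, n, k):
        # K marked on the first visit, n caught on the second, k recaptures
        return K * n / k

    def chapman(K, n, k):
        # bias-corrected variant, usually preferred when counts are small
        return (K + 1) * (n + 1) / (k + 1) - 1

    # Okbay as the first "visit", MTAG as the second:
    print(lincoln_petersen(70, 62, 27))  # ~160.7, i.e. the 161 above
    print(chapman(70, 62, 27))           # ~158.8, essentially the same answer
    ```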

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    • Replies: @James Thompson, @Factorize
  208. @res
    Great idea. I don't know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    Yep, seems low, but…..

    • Replies: @res
  209. res says:
    @James Thompson
    Yep, seems low, but.....

    It would be interesting to take a closer look at how those individually significant SNPs are distributed around the genome. Figure 1 gives a good look at this for MTAG, but it would be nice to have the three studies merged. It also shows a decent population of areas that do not quite reach significance but are suggestive.

    I think my “important regions” comment is a good way to look at this. Given that, the mark and recapture analysis suggests about two thirds (110/161) of the important regions have been found. Looking at the Manhattan plot in Figure 1 these numbers seem at least somewhat plausible and presumably center around important genes (protein structure, expression, etc.).

    I am not sure how to adapt the mark and recapture methodology to the GWAS reality of some SNPs giving stronger signals than others. I think it is accurate to add the caveat for the population analysis that we are only talking about SNPs at a given level of detectability (driven by both effect size AND MAF), but that idea corrupts the original MaR analysis since the different studies have different sample size/power. Not sure how well the mark and recapture methodology accounts for this, but presumably it does capture “intensity of search.” Just not intrinsic difficulty of finding.

    It would be interesting to revisit the Hsu height data in the context of this discussion. Both to make an assessment of the current knowledge and assess how well mark and recapture would have predicted what was eventually found.

    P.S. If this is not understandable feedback would be appreciated. I feel like I am rambling a bit.

    • Replies: @James Thompson
  210. Factorize says:
    @res
    Great idea. I don't know much about that methodology, but taking a naive look based on https://en.wikipedia.org/wiki/Mark_and_recapture
    we have Nest = K * n / k (see link for explanation of Lincoln–Petersen estimator).
    Looking at the two larger studies (MTAG and Okbay) we have values (with Okbay as first visit) of
    K = 70
    n = 62
    k = 27
    Giving an estimated population of 161. That seems shockingly low to me. Perhaps less low if it is an estimate of the number of important regions and there are many causal SNPs in each region?

    Has anyone looked into this in more detail?

    P.S. Here is the Venn diagram again to make it easier to see where my numbers came from (and check them for error ; ):

    http://www.cell.com/cms/attachment/2116909616/2085209481/gr2.jpg

    res, I am still not sure.

    Why are the same fish being caught?
    In a random sample of catches, having only 72 of the 138 caught fish caught by a single fisherman, out of a population of 20,000, seems highly unlikely.

    Will have to look up the betas.
    There must be something quite unique about these fish.
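
    As a back-of-envelope check on just how unlikely, here is a sketch of my own, using res’s MTAG/Okbay counts from the Venn diagram and assuming each study’s hits were a random draw from a pool of 20,000:

    ```python
    from scipy.stats import hypergeom

    pool = 20_000                 # hypothesised number of IQ/EA variants
    hits_okbay, hits_mtag = 70, 62
    observed_overlap = 27         # shared hits in the Venn diagram

    # Under random draws the overlap count is hypergeometric:
    expected = hits_okbay * hits_mtag / pool                       # ~0.22 by chance
    p = hypergeom.sf(observed_overlap - 1, pool, hits_okbay, hits_mtag)
    print(expected, p)            # p is astronomically small
    ```

    So chance overlap is effectively ruled out: the same big fish keep getting caught because they are by far the easiest to catch.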

    For the near term we might be stuck with selecting embryos based on PGS.
    By selecting the haploblock instead of a specific SNP, one is reasonably assured
    that the beneficial allele can be chosen. With CRISPR one would not be so sure.

    In terms of the market potential of nootropics, yes I was also thinking that this could be
    a great driver of the technology. If super smart kids are on the way via genetic enhancement,
    then everyone else will need to go nootropic to stay relevant. The market potential is enormous.
    When there is an actual path to a large market, people often will show some interest.

    • Replies: @Sean
  211. Anon says: • Disclaimer
    @Talha
    Hey Che,

    not worth reading more than once, not worth reading
     
    Good point - there are times when I would pick up one of the other classic Dune books to read an insight or discover something I missed the first time.

    The difference between Christopher Tolkien’s and Brian Herbert’s handling of their respective fathers’ literary legacies is so big!
     
    Hmmm - thanks for that. The wife and I are always looking for a good fantasy-genre book to read together - awaiting George Martin to wrap up Game of Thrones...

    They are maniac fans, but you may enjoy taking a look at it.
     
    I might check it out to see what other people didn’t like. I simply hated the multiple resorts to “deus ex machina” to keep the plot moving. If I want to resort to miracles, I’ll read about it in scripture.

    Thanks for the info.

    Peace.

    Fr. Ronald Knox was once told by a friend that he liked a bit of improbability in his romances [stories, that is] as in his religion. Knox replied that he liked his religion to be true, however improbable, and he liked his stories to be probable, however untrue.

  212. Sean says:
    @Factorize
    res, I am still not sure.

    Why are the same fish being caught?
    In a random sample of catches, having only 72 of the 138 caught fish caught by a single fisherman, out of a population of 20,000, seems highly unlikely.

    Will have to look up the betas.
    There must be something quite unique about these fish.

    For the near term we might be stuck with selecting embryos based on PGS.
    By selecting the haploblock instead of a specific SNP, one is reasonably assured
    that the beneficial allele can be chosen. With CRISPR one would not be so sure.

    In terms of the market potential of nootropics, yes I was also thinking that this could be
    a great driver of the technology. If super smart kids are on the way via genetic enhancement,
    then everyone else will need to go nootropic to stay relevant. The market potential is enormous.
    When there is an actual path to a large market, people often will show some interest.

    yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement,

    There are enough really smart scientists around to make technological progress a non-trivial existential threat within a generation. If genetically super smart people become available, they need to be set to work on the problem of how to control the super-intelligent computers before those machines arrive, not on getting digital super-intelligence here sooner.

    • Replies: @middle aged vet . . .
  213. @res
    It would be interesting to take a closer look at how those individually significant SNPs are distributed around the genome. Figure 1 gives a good look at this for MTAG, but it would be nice to have the three studies merged. It also shows a decent population of areas that do not quite reach significance but are suggestive.

    I think my "important regions" comment is a good way to look at this. Given that, the mark and recapture analysis suggests about two thirds (110/161) of the important regions have been found. Looking at the Manhattan plot in Figure 1 these numbers seem at least somewhat plausible and presumably center around important genes (protein structure, expression, etc.).

    I am not sure how to adapt the mark and recapture methodology to the GWAS reality of some SNPs giving stronger signals than others. I think it is accurate to add the caveat for the population analysis that we are only talking about SNPs at a given level of detectability (driven by both effect size AND MAF), but that idea corrupts the original MaR analysis since the different studies have different sample size/power. Not sure how well the mark and recapture methodology accounts for this, but presumably it does capture "intensity of search." Just not intrinsic difficulty of finding.

    It would be interesting to revisit the Hsu height data in the context of this discussion. Both to make an assessment of the current knowledge and assess how well mark and recapture would have predicted what was eventually found.

    P.S. If this is not understandable feedback would be appreciated. I feel like I am rambling a bit.

    Not rambling. I am using “detected at a high level of confidence” as analogous to a fish being large enough to be caught in a net, so I think the method is worth using just as a comparative measure.

    • Replies: @res
  214. res says:
    @James Thompson
    Not rambling. I am using "detected at a high level of confidence" as analogous to a fish being large enough to be caught in a net, so I think the method is worth using just as a comparative measure.

    analogous to a fish being large enough to be caught in a net

    I like that analogy (my attempts were more cumbersome). Thanks. So in the GWAS context having studies of different power (e.g. sample size) is analogous to having nets of different mesh sizes. This clearly affects mark and recapture but I haven’t looked at the math of it. In this particular case MTAG and Okbay had a fairly similar number of total detections so this may not be a big deal for the computation I did above.
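
    Not having looked at the math either, a quick simulation at least suggests the direction of the bias (entirely invented numbers: catchability proportional to effect size, with study B twice as well powered as study A):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 20_000                        # true number of associated variants
    effect = rng.exponential(1.0, N)  # mostly small effects, a few large ones

    # Detection probability grows with effect size; study B casts a finer net.
    p_a = np.clip(0.05 * effect, 0, 1)
    p_b = np.clip(0.10 * effect, 0, 1)
    caught_a = rng.random(N) < p_a
    caught_b = rng.random(N) < p_b

    K, n = caught_a.sum(), caught_b.sum()
    k = (caught_a & caught_b).sum()
    print(N, round(K * n / k))  # the estimate comes out well below 20,000
    ```

    If that is right, the 161 above is better read as the size of the currently catchable population than of the whole pool.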

  215. El Dato says:

    There is something called the “November 2017 AI Index”: http://aiindex.org/

    A project within the Stanford 100 Year Study on AI, The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and others to rapidly develop intuitions about the complex field of AI.

    Look for the performance numbers in particular:

    When measuring the performance of AI systems, it is natural to look for comparisons to human performance. In the “Towards Human-Level Performance” section we outline a short list of notable areas where AI systems have made significant progress towards matching or exceeding human performance. We also discuss the difficulties of such comparisons and introduce the appropriate caveats.

    • Replies: @Sean
  216. Sean says:
    @El Dato
    There is something called the "November 2017 AI Index": http://aiindex.org/

    A project within the Stanford 100 Year Study on AI, The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aspires to be a comprehensive resource of data and analysis for policymakers, researchers, executives, journalists and others to rapidly develop intuitions about the complex field of AI.
     
    Look for the performance numbers in particular:

    When measuring the performance of AI systems, it is natural to look for comparisons to human performance. In the "Towards Human-Level Performance" section we outline a short list of notable areas where AI systems have made significant progress towards matching or exceeding human performance. We also discuss the difficulties of such comparisons and introduce the appropriate caveats.
     

    Re performance numbers

    http://www.theoccidentalobserver.net/2017/12/01/moneybull-an-inquiry-into-media-manipulation/

    Moneyball promotes the idea that there is but one criterion for assessing success in baseball: the number of wins in a season. The game is about winning, says Brand: do whatever it takes to win. By that measure, the A’s were successful in 2002. They won the division championship, although the movie disingenuously leaves the impression that the A’s became big winners that year compared to prior years because of Beane and his clever advisor. Exactly how many more games did the A’s win in 2002 than in 2001? One. One.

    Lewis in the book and Sorkin and Zaillian in the screenplay stayed clear of two valid measures of success other than winning:

    The first, profits. [...] Whatever his merits, and I can personally attest to this, Scott Hatteberg standing at the plate looking for a walk, and pretty much guaranteed not to give the ball a ride, and lumbering from base to base if he did get on base, was a yawn to spectators. … blasting the ball over the outfield wall makes the turnstiles spin. [...]
    Baseball isn’t simply about its final result — winning or losing — it is about its process, what happens during the game. It is about the experience of both players and spectators during the game. It is about the quality of the game as an activity. Most fundamentally, baseball is about playing baseball.

    Sabermetrics, the use of statistics to guide operations, arguably has hurt the game of baseball as it is played. The emphasis on on-base averages has resulted in batters taking strikes and waiting pitchers out in an attempt to get walks and thereby increasing their OBPs. Seldom these days does a batter swing at the first pitch. Pitch counts run up. An already slow game gets even slower. Action is replaced by inaction. Assertion is replaced by passivity. The joy of the game is diminished for both players and fans. Steal attempts are fewer and the excitement of the game is diminished for both players and fans. Bunts are fewer and strategy goes out of the game. Like life, baseball is not just a destination, this and that outcome; it is also, and most basically about, a moment-to-moment experience. The quality of the moments of our lives, including the time we spend playing and watching baseball, needs to be taken into account…

  217. @Sean

    yes I was also thinking that this could be a great driver of the technology. If super smart kids are on the way via genetic enhancement,
     
    There are enough really smart scientists around to make technological progress a non-trivial existential threat within a generation. If genetically super smart people become available, they need to be set to work on the problem of how to control the super-intelligent computers before those machines arrive, not on getting digital super-intelligence here sooner.

    Sean: Rem acu tetigisti, as Jeeves used to say.

    Although one wonders if (and it is a big if), given a future where there is such a thing as an AI (presumably silicon-based) that enjoys the company of humans, any given AI will predictably prefer the company of very bright humans, as the contemporary vacationer prefers the tailored tourist sites (Yucatan, Bali) , or whether the average AI will prefer the vast tremendous wilderness of ignorance and instinct that the less genetically favored among us may present as the calling card. Some people prefer the empty vastness of Wyoming to the little French Quarters of the Yucatan and Bali.
    (the elite IQ guys I have met have not been all that interesting to me when they are off their favorite topics).
    So if you are going to be sitting around on campus 50 years from now with a bunch of AI experts and you are trying to figure out who to ask to do most of the communicating –
    the guy who reminds you of Feynman not the guy who reminds you of Dirac
    the guy who reminds you of Erdos not the guy who reminds you of Tao
    the woman who reminds you of Rose Marie not the woman who reminds you of Meryl Streep
    the Joyce of Finnegans Wake not the Joyce of Ulysses
    Sydney or the bush – the bush
    number theorists not philosophers of science
    Anselm not Aquinas
    neither Dostoyevsky nor Tolstoy
    Hebrew lexicology not Hittite.
    Cats are, at heart, just dogs with special needs.
    When thinking of infinity think of it this way – there are many bugs in this world, and over time the number of bugs might seem overwhelming: think of any given summer night and the many bugs you saw (one remembers moths most easily, but anybody who has walked with any observation on a summer night in North America knows how many more there are than that)
    Now think of this – if there are lots of angels, it would be no problem for all those angels to have, at least once, deep in the summer moonlit woods (or even on moonless nights -we can afford to be generous here), or along the street-lit avenues, or just in yards and vacant lots, have comforted, in their way, each of those teeming multitudes of bugs.
    Big numbers seem comfortable when you look at them that way.
    Time is not a mystery – ask any single one of the trillions of angels who took time out of their busy lives to pleasantly say a word or two to every bug who has ever buzzed on any night that anyone has cared about – remember, angels are interested in people caring about each other – well, as vN said, he did not wonder why numbers and math were “easy for him” – they weren’t , of course, but that is not relevant here- what he wondered was why number and math were not similarly easy for everybody else.
    I wonder if vN would be a good ambassador to AIs.
    I tend to think not, at least not before the last months of his life, where he learned so much.
    Someone should write a good bio of him some day.
    Free advice.

  218. CanSpeccy says: • Website

    All we learned from AlphaGoZero is that computers compute faster than humans, which we already knew. Far from making it “game over” for humans, it merely confirms the ever increasing power of computers to extend man’s dominion over the earth.

    The possibility that AI may take over the world is worth bearing in mind, but it is probably not a realistic cause for panic. As someone pointed out, if AlphaGoZero were pitted against the world Go champion in a match using a board with 19 squares each way instead of 18, AGZ would lose.

    When we see a robot with superior mathematical insight to Ramanujan, that can also cook dinner, and write a novel better than Huckleberry Finn, then we will have reason to worry.

    Meantime, Bandyopadhyay et al. report conductive resonances in single neuronal microtubules, indicating the possibility of a quantum basis for mental activity and consciousness. If that is correct, then AI has a very considerable way to go before eclipsing the human mind.

  219. middle aged vet . . . says:
    @CanSpeccy

    I wish you were right, CanSpeccy.
    Exponential learning is not something I have ever observed in any human being.
    Mozart was a pretty lousy composer for his first 200 published works.
    Shakespeare’s early plays are only readable if you are a super-expert in Elizabethan language.
    But at a certain point Mozart went from being a clever little 20-year-old who wrote hundreds of hours of music every year with almost no suspicion of heart-felt genius to being the musical equivalent of what Michelangelo and Titian would have been as musicians if they had had more talent. Well, I do not contend that it did not happen fast. But not exponentially fast. Nothing happens exponentially fast for talented humans, and that is obviously even more true for untalented humans.
    I am completely convinced that the vNs and Tolstoys and Picassos of the world are vastly overrated. Yes, they were bright, but nothing they did could not have been done by many other people, given the time, the training, and the rich way of life they enjoyed.
    The vNs, the Tolstoys, and the Picassos never learned at an exponential rate.
    Give an AI a good or above-average limbic system (and believe me, the vNs, the Tolstoys, and the Picassos, bless their little lecherous hearts – well, not vN, he was not a lecher – did not have a very good or above-average limbic system). Give it time. Give it a way to correct its previous mistakes, if not in real time then at least in sequential time – not measured as we measure it, but measured the way a talented mathematician watches other mathematicians construct a sequence and then improvises variations on that sequence, in real time. Give the AI the limbic system and the understanding of our carbon-based world that even a silicon-based limbic system would find congenial, and give it the energy it takes to correct, at an exponential rate, its recent previous mistakes (with the right system, probably less energy than it takes to heat a single small Volvo idling on a cold Scandinavian night underneath the aurora borealis) … well, hopefully someone will work on communicating with the happy young AIs, hopefully someone with lots of common sense. For the first few rounds, we will not bore them: maybe we never will.
    Someone with lots of common sense.

  220. PghPanther says:

    There may be another consideration here…

    Humans themselves may become the AI machine, rather than the AI machine being separate from them.

    We already have artificial knees, heart valves, and chips in some brains to help memory in the aged, and we are developing more non-biological items such as lungs, blood vessels, etc. As we begin to replace more and more biological tissue with synthetic tissue, at what point is a human still biological – or considered fully non-biological AI?

    When the heart is replaced with a synthetic one? Or perhaps when a synthetic brain has all the information from the prior organic brain downloaded into the new one?

    We may as a species wake up someday to intense legal debate as to which of us are still biological humans and which have morphed to the point where it becomes a controversy… and then one day we are all AI and no longer biologically based at all.

  221. CanSpeccy says:
    @PghPanther

    or perhaps a synthetic brain with all the information from the prior organic brain downloaded into the new one?

    I love the way the Borg-minded talk about “downloading” information from the brain.

    I mean, it’s not as if anyone has any idea how memories are encoded. They don’t have a clue. They don’t even have a clue as to the processing power of the brain: is it equivalent to 10^16 flops per second per brain (one per neuron)? Or, as Hameroff and Penrose suggest, 10^16 flops per second per brain cell, for a total of 10^32 flops per second, each cell using microtubules as computing elements performing as many operations as has generally been thought possible for the entire brain?

    And what can it possibly mean to replace the brain with a synthetic one? Would this synthetic brain acquire my consciousness by the mere action of “downloading” the information in my brain? Or would it be like an iPhone stuck in my head, dictating my actions without regard to my personal wishes? Or is it supposed to read my consciousness? In which case, on what theory of consciousness is this capability built?

    I think the AI boys are just a bunch of more or less psycho techies doing what they can to gain status by propagating terrifying BS.

    When AlphaGo Zero writes a novel better than anything by Tolstoy, or even by cockroach man, then we’ll begin to take it as serious competition for the human mind. First, though, it will have to learn the English language, or Russian or whatever; then it will have to gain a human’s experience of fighting and risking death for Mother Russia or to Make America Great or whatever. It will need to know about hate, fear, love, lust, the fear of God, and much else.

    Then it will have to understand the human mind well enough to know what we consider to be art. Only then might it be able to write something as good as, say, the first chapter of Tolstoy’s Kreutzer Sonata, which describes nothing more exotic than a conversation among strangers taking a railway journey.

  222. anonymous says:

    Middle Aged Vet said . . . “The HORARS of War,” by VFW member (I think) Gene Wolfe, describes an AI’s process of “gaining a human’s experience,” “fighting and risking death,” in a perhaps real, perhaps simulated world where battle is predominant. Not my favorite Gene Wolfe story, by far, but very insightful.
    An AI writing a novel would be unlikely, but an AI celebrating the experience of reading a novel, and of updating, in ways charming to an AI, such a novel, real or imagined, would be, for other AIs and maybe for us, a destination experience – like Manhattan’s summertime Mostly Mozart festivals, like the Newport Jazz weekends, like the Smithsonian ethnic-cooking festivals on the National Mall (or, to throw in things of which I have no experience, “Burning Man,” “Lollapalooza,” or that Switzerland billionaire’s gathering – Gstaad?). Remember, the typical AI will more or less be a private-garden creature. It will look on those of us who experienced, face to face, the cold air of winter in industrial towns, who experienced the prospect of unremembered and common but messy and difficult death, and who experienced the various emotions of disgust and pleasure and hunger and sprezzatura in a completely unrecorded way, in a world unmeasured by anything like a binary set of bits, no matter how infinite-seeming in scope and unpredictable recessivity, as something only some people (us, that is) on the very horizon of possibility could have experienced, in long-ago times that will never come back. And the satisfaction of updating, or riffing on, the basics of the novels written by people who lived near that horizon of possibility (or the satisfaction of riffing on even one novel – it could be even a simple Western by Max Brand, or even Finnegans Wake with the silly atheist/agnostic parts left behind) will be, in its limited way, a new form of art for them, and enough for them, in a way it would not be for us who faced that cold air of winter in all those industrial towns, industrial towns that will never come back, at those spiritually invigorating horizons of impossibility.

    Not before 2085, I would guess, at the earliest, even given constant exponential increases supported by almost constantly more efficient energy allocations. So don’t call me a dimwit, Lubos, for predicting it. We are nowhere near that – not much nearer than we were when the first telegraph signals crossed the Western prairies, announcing God knows what, maybe that some boring president had succeeded another boring president. While exponential increases eventually start getting really interesting, and start blowing past marginally more difficult conceptual barriers (limbic system, anybody?), we are of course nowhere near that yet.

    CanSpeccy – “Bugsy Malone,” “Mariposa Sanchez,” “Beetle Bailey,” “Horatio Hornetblower,” and Spiderman are all acceptable insect-inspired names. “Cockroach man” was unfair – you wouldn’t call a sanitation engineer “garbage man,” would you, if he did not want you to? I mean, if you did the same kind-hearted work the sanitation engineers did, then it would be fair, but not otherwise. Remember – the key word was “kind-hearted.”
    If you did still call them that after they asked you not to, that would show a lack of gratitude.
    Anyway, thanks for reading.

  223. Factorize says:

    This is becoming more serious.
    The AlphaGo Zero algorithm appears to be generalizing: first Go, and now shogi and chess.
    AlphaGo Zero just might be a general hammer that can hit anything nail-like.
    (See the infoproc blog.)

    Notice that for Go, shogi, and chess, the best human players are only able to play up to the end of the vertical section of AlphaGo Zero’s learning curve. The deep-thought region of the learning curve is off limits to humans.

  224. James Thompson says:
    @Factorize

    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.

  225. CanSpeccy says:
    @James Thompson

    I agree that these are extraordinary achievements. Now it needs to be tested in another, non-game, domain.

    In what way, James, are these extraordinary achievements?

    Inasmuch as computers have been out-computing humans for decades, and these “achievements,” extraordinary or otherwise, amount to nothing more than a demonstration of the superiority of a computer over a human at the business of computing, there seems nothing extraordinary here other than the task to which the computer has been applied.

    There are many other machines and devices that outdo humans at just about everything, from washing dishes to knitting socks to flying airplanes.

    That someone has programmed a machine to contest humans in what until now has been a purely recreational activity seems to prove nothing new. Surely, if the incentive were sufficient, someone would build a robot to win Wimbledon, shoot a hole-in-one at every golf course in the world, or catch trout more efficiently than any angler.

    What seems most significant is that computers lack the diagnostic features of human intelligence, including competence with ordinary language, consciousness and, hence, empathy, or the creativity that underlies great art, mathematics, etc.

    Yes, computers are a great hazard to humanity – nuclear missile guidance systems, for example. But that hazard arises from the deliberate actions of humans, not from any innate tendency of computers, which lack an innate tendency to do anything.

  226. Factorize says:
    @James Thompson

    Any activity can be understood from the perspective of a game.

    The big problem is people. For people, the rules of the game are typically regarded only as a guideline, not as strict and absolutely enforced codes of conduct.

    When you are on the road, how certain can you be that some other driver will rigidly adhere to the rules of the road? On some roads on a Saturday night, 20% or more of drivers will be impaired.

    AI applications have been held back for such a long time largely because the standards they are expected to maintain are much higher than those expected of people. An automated, fully networked transportation system that rigidly followed the rules of the road could have been implemented years and years ago. The big holdup is trying to engineer around human irrationality. It is somewhat surprising how much popular imagination has been devoted to the “killer robot” meme when the “killer human” genre is so prevalent.

    The benefits that AI can offer us will be massive. Why should there be any road “accidents”? With AI, it is quite likely that over the near term such accidents might disappear.

    Nonetheless, AlphaGo Zero’s next assignment could be to consider versions of games such as Go, shogi, or chess that do not have such clear and rigid rules. For example, a random element could be introduced into the game, so that the program would have to maximize its objective function within the context of an uncertain, human-generated reality.
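    That last idea is easy to make concrete. Once a random element enters the game, the program can no longer assume a rule-abiding opponent; it has to maximize an expected objective, averaging over chance. A minimal sketch of the expectimax idea (a made-up toy game for illustration, nothing to do with AlphaGo Zero’s actual method):

        def expectimax(state, depth, maximizing, p_random=0.2):
            # Hypothetical toy game: we add 1 or 2 to a running total; the
            # opponent subtracts 1 or 2 and wants the total low. With
            # probability p_random the opponent ignores strategy and moves
            # uniformly at random, so its turn mixes a min node with a
            # chance node, and we maximize the *expected* total.
            if depth == 0:
                return float(state)
            if maximizing:
                return max(expectimax(state + a, depth - 1, False, p_random)
                           for a in (1, 2))
            children = [expectimax(state - a, depth - 1, True, p_random)
                        for a in (1, 2)]
            rational = min(children)                # rule-abiding opponent
            chance = sum(children) / len(children)  # random deviation
            return (1 - p_random) * rational + p_random * chance

        print(expectimax(0, depth=4, maximizing=True))

    The sketch only illustrates what “maximizing an objective function under uncertainty” means at the smallest scale; scaling it to Go-sized state spaces is exactly where learned evaluations come in.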

  227. Sean says:
    @Factorize

    Current apps can play a game by rules but are not intelligent in the general sense that humans are. That is, no app or computer is capable of activities completely unlike those it was programmed for, in the way people can use their ultra-complex algorithms (naturally selected for keeping their bearers alive and successfully passing on their genes) to do things like drive a car through a city. While humans can be replaced as drivers by apps, and in principle that is sort of achievable already, current state-of-the-art so-called AI apps follow the rules because they are inherently limited to that, while human drivers are deterred from driving dangerously by punishment.

    The major concern about AI is not about apps making mistakes driving cars, but about AI exterminating humanity. AI will get to the plane of human intellect and beyond sooner or later. Now, humans’ general problem-solving ability lets them identify problems and strategise a solution. Sometimes they work out that it would be better to seem to be playing the game but secretly break the rules. While humans killing other humans with a car is usually due to nothing more than someone’s carelessness (as you put it, “irrationality”), I dare say some people have committed murder with a vehicle so as to make it look like an accident.

    Well, a strongly super-intelligent AI would not have the same motivations as a human murderer, but by the same token any super-intelligent AI would not be like a selfless and altruistic person, or even a highly intelligent, nerdy human. How something as alien as an advanced AI could be controlled is a question with no precedents to guide us.

    There is no way to know how a super-intelligent AI would interpret any prime directive humans tried to give it. No way to stop it immediately deciding to feign low intelligence to keep humanity oblivious of the danger it is in. There is no way to know what pure rationality applied to its situation would dictate for an AI super-intelligence, and given that such an AI would have relatively unlimited potential means for totally eliminating the threat to its existence that humans might pose, no way to reliably deter it.

  228. James Thompson says:
    @CanSpeccy

    I find the achievements extraordinary precisely because, as computers developed to do mathematical calculations very fast, people consoled themselves by saying that computers could not cope with the high-level strategic game of chess. When a computer beat Kasparov the tune changed slightly, to asserting that computers could not win at an even more strategic game like Go. Now Go players have fallen to DeepMind’s AlphaGo, and some are still looking for games that computers can’t win against humans. I want to find non-game domains in which humans excel. For example, medical diagnosis? Investment strategies? New drug discoveries? It is likely that deep learning networks will do well on many of these, but perhaps not. We shall see.
    The other point is that it is not just raw computer power which has done this, but the way that the programs have evolved to be self-teaching. I rate this as the greatest change.

  229. James Thompson says:
    @Factorize

    Any activity can be understood from the perspective of a game.

    Very probably so. It will be interesting to see what AI achieves by treating all life as a game.

  230. Factorize says:

    This is beginning to feel very much as though we are now being drawn into the Singularity vortex. The original story on this blog was from October of this year. Now here we are barely a month later and the next generation of this technology has already made another breakthrough. What will it take for thoughtful people to become worried?

    If AlphaGo Zero is a module that can be applied without substantial modification to a wide range of problems, then we have clearly entered the Singularity event horizon.

    AlphaGo Zero is demonstrating a highly generalizable form of learning ability that should give us all something to contemplate. It only required about ten programmers and a few years to work this out. Of course now this knowledge can be shared with anyone interested. Apparently quite a few people are interested in deep learning as there has been exponential growth in AI college courses. I suppose it will not be long before AI content crops up in kindergarten curricula.
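    For readers wondering what “tabula rasa self-play” means mechanically: start with a value table (or network) that knows nothing, play games against yourself, and nudge the values toward the outcomes. A minimal runnable sketch on the toy game of Nim (tabular Monte Carlo updates – my stand-in example, vastly simpler than AlphaGo Zero’s network-plus-search, but the same no-human-data principle):

        import random
        from collections import defaultdict

        N, MOVES = 15, (1, 2, 3)   # Nim: 15 stones, take 1-3; taking the last stone wins
        Q = defaultdict(float)     # value of (stones_left, stones_taken); all zero at first

        def pick(s, eps):
            legal = [a for a in MOVES if a <= s]
            if random.random() < eps:              # occasional random exploration
                return random.choice(legal)
            return max(legal, key=lambda a: Q[(s, a)])

        for _ in range(50_000):                    # self-play: both sides share Q
            s, trace = N, []
            while s > 0:
                a = pick(s, eps=0.1)
                trace.append((s, a))
                s -= a
            value = 1.0                            # whoever moved last has just won
            for (st, ac) in reversed(trace):       # credit moves, alternating sides
                Q[(st, ac)] += 0.1 * (value - Q[(st, ac)])
                value = -value

        print([pick(s, eps=0) for s in range(1, N + 1)])

    With enough games the greedy policy should approach the known optimum for Nim (take stones % 4; positions with stones % 4 == 0 are lost anyway), having seen no expert play at all.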

    I am greatly looking forward to what AlphaGo might discover about the human genome. We now have a vast dataset that it could peer into and perhaps completely unlock our genome. It would be so symbolically appropriate if the first non-game domain AlphaGo Zero demonstrated superhuman ability in were the unraveling of the informational code that defines our humanity.

  231. James Thompson says:
    @Factorize

    I tend to agree with your interpretation, subject only to the proviso that the next achievements are in non-game domains. Analysis of the genome would certainly be one of those. Yes, this might be the singularity.

  232. Factorize says:

    If this is the singularity, then I think more emphasis would be highly appropriate.
    This is not “oh, I’ll just go bring in the dumpster and pick up the dry cleaning after a bad hair day – and, I nearly forgot, the singularity is on the way.”

    No sirree. IF this is the singularity, then people really deserve fair warning.

    THIS MIGHT BE THE SINGULARITY
    I REPEAT
    THIS MIGHT BE THE SINGULARITY

    If so, life certainly could become somewhat more interesting soon.

  233. Factorize says:

    Are they kidding?
    A GWAS of 24 South Africans from various ethnoracial groups is the first GWAS conducted on African soil?
    We will need GWAS samples running into the millions in Africa to unravel the continent’s diversity.

    https://www.sciencedaily.com/releases/2017/12/171212102036.htm
