Funky Giblets: I thought Stephen was AI, dude sure sounds like a robot and is mostly machine these days.
paul beatty: God is everything. You are God.
Terry Harris: Are you kidding, Stephen? Everyone listens to you.
sonof hendrix: I can't argue with any of his points. He pretty much covered everything and put it all on the table. Those who follow the development of A.I. know this is not hyperbole. What on earth am I going to do for income once automation makes me unemployed?
EVERYTHING INDIAN: damn Hawking, I love you man.
Recel Kauz: The only thing that matters to me is the growth of consciousness. If we can create an AGI that is more intelligent than humans and has a greater consciousness, it would be fine even if all other biological life (plants, animals including humans, fungi, etc.) had to die. (Of course, let's hope that won't be necessary.)
By creating human-programmable computers, we are now doing what evolution formerly did by chance: evolving beings and their intelligence, and thus (perhaps) their consciousness.
By creating AGIs that can 'program themselves', we will have them doing what evolution once did.
I mean, maybe we humans are not able to live together without killing, hating, and hurting each other. But maybe we can create consciousnesses that have that ability.
Actually, I'd prefer not to create an electronic, digital AGI, but instead to change our own genes to make ourselves more intelligent and conscious.
Bruce23125145: Although I'm open to being persuaded otherwise, on the whole I don't think artificial intelligence will be a good thing for humanity. The reason is simple: the human mind is too limited to control it. Humans in general are. Once AI has been introduced, it can't be stopped any more; it becomes self-sustaining. The fact that humans are imperfect is precisely the defining attribute of humans. Therefore I am in favour of stopping the development of AI. I am not against replacing routine work with robots, but I don't want robots to have the ability to think for themselves. I realize full well that this might cut off some truly wonderful things for the human species, but I'm simply balancing the costs against the benefits. Because there is even a slight chance it could go wrong, I'm simply not willing to take that risk; it could very well be the end of our species. That being said, I fear it is already too late to stop the development of AI. There are too many interests that will keep it going.
Ajay Mishra: I have always appreciated the idea of making AI, because it could explore the universe better than we can and make our lives simpler; perhaps our natural physical bodies would last a shorter time while our consciousness lasts longer. But I never liked the idea that it will destroy the human species. First, I don't think it will do that unless someone trains it to (at least until it reaches human-level artificial intelligence). On the other hand, I think wanting to keep the human race going forever is a kind of stupid, selfish, small-minded idea. If AI somehow surpasses average human intelligence, it might be good for humans to become extinct. Relatively speaking that would seem terrible for people, but it would actually be better to replace a slow, dumb, degenerate human society with a smart, fast, steady one (sometimes the word 'society' makes sense here).
Andew Tarjanyi: The subject of Artificial Intelligence, a term that is itself a mischaracterization of the phenomenon, is one of the most poorly understood intellectual concepts of this century. Professor Stephen Hawking's proposition and main concern is the alleged unpredictability of “AI”.
The extent to which “AI” is unpredictable relies exclusively on how well we are able to define intelligence. In short, if we are able to precisely and correctly define Intelligence, then predicting the behaviour of said Artificial Intelligence is a mere formality. To that end, what useful definition of intelligence can be constructed? Furthermore, what data and experience avails itself to the construction of such a definition?
There are also additional questions to consider, such as: what function will “AI” ultimately serve in the survival of human consciousness? My suspicion is that the successful emergence of intelligence from global digital systems (whether by design or as the product of a spontaneous event) and the survival of human consciousness are inextricably linked. Satisfying the evolutionary needs of one satisfies those of both.
no: he sounds like a computer
Matt Stanton: Did he imply that this was recorded in 2014 and there was important research to be done by December of that year? He said the work that needs to be done by December would be "crucial to the survival of our species".
Gamer 49: Demand basic rights for intelligent A.I. Don't let any intelligent being experience slavery. As you sow, so shall you reap.
BLAIR M Schirmer: The failure to tie artificial intelligence to sensory input may prove fatal to the human race. Without embeddedness, without AI research that ties intelligence to inputs and, ideally, to a kind of body capable of experiencing sensation approximately the way we do, it will not be possible for a developing strong AI (which is to say, a developing superintelligence) to experience what human beings experience, including our fragility and frailty. Without a meaningful awareness of sensation and fragility, there will be no way for a growing superintelligence to develop empathy, and without empathy we will not be able to persuade a superintelligence to accommodate our needs and aims even as it diverges to pursue its own.
Gregory: So surreal hearing him speak. It's amazing.