“When Kurzweil first started talking about the “singularity”, a conceit he borrowed from the science-fiction writer Vernor Vinge, he was dismissed as a fantasist. He has been saying for years that he believes that the Turing test — the moment at which a computer will exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human — will be passed in 2029.”
Sorry, but Ray Kurzweil is wrong. Computers are nowhere close to surpassing humans, and here’s why.
Intelligence is one thing. But it’s probably the pinnacle of human narcissism to believe that we could design machines to understand us long before we even understood ourselves. The ancient Greeks, after all, inscribed “Know thyself” at the temple of Delphi.
Yet, here it is squarely in 2014, and we still have only an inkling of how the human brain works. The very font of our intelligence and existence is contained in the brain — an organ, like the heart. Yet we don’t know how it works. All we have are theories.
Let me reiterate this: We don’t know how the human brain works.
How can anyone in their right mind say that, after a century of study into how the brain operates, we’re suddenly going to crack the code in the next 15 years?
And crack the code one must. Without understanding how the brain works, it’s ludicrous to say we could design a machine to replicate the brain’s near-instantaneous processing of hundreds of different sensory inputs from dozens of trajectories. That would be akin to saying we could design a spacecraft to travel to the moon before designing — and understanding how to design — the computers that would take the craft there.
It’s a little backwards to think you could create a machine to replicate the human mind before you understand the basics of how the human mind makes so many connections, so easily.
Human intelligence, as any psychologist can tell you, is a complicated, complex thing. The standard tests for intelligence aren’t just paper-and-pencil knowledge quizzes. They involve the manipulation of objects in three-dimensional space (something most computers can’t do at all), understanding how objects fit within a larger system of objects, and similar tasks. It’s not just a good vocabulary that makes a person smart. It’s a combination of skills, thought, knowledge, experience and visual-spatial abilities. Most of these are things even the smartest computer today has only a rudimentary grasp of (especially without the help of human-created GPS systems).
Robots and computers are nowhere close to human intelligence. In terms of outsmarting their makers, they are today probably at about the level of an ant. A self-driving car that relies on other computer systems — again, created by humans — is hardly an example of innate, computer-based intelligence. A computer that can answer trivia on a game show or play a game of chess isn’t really equivalent to the knowledge held by even the most rudimentary blue-collar worker. It’s a sideshow act. A distraction meant to demonstrate the very limited, singular focus computers have historically excelled at.
The fact that anyone even needs to point out that single-purpose computers are only good at the singular task they’ve been designed for is ridiculous. A Google-driven car can’t beat a Jeopardy player. And the Jeopardy computer that won can’t tell you a thing about tomorrow’s weather forecast. Or how to solve a chess problem. Or the best way to retrieve a failed space mission. Or the best time to plant crops in the Mississippi delta. Or even how to turn a knob in the right direction to make sure the water turns off.
If you can design a computer to pretend to be a human in a very artificial, lab-created task of answering random, dumb questions from a human — that’s not a computer that’s “smarter” than us. That’s a computer that’s incredibly dumb, yet was able to fool a panel of judges applying criteria that are all but disconnected from the real world.
And so that’s the primary reason Ray Kurzweil is wrong — we will not have any kind of sentient intelligence — in computers, robots, or anything else — in a mere 15 years. Until we understand the foundation of our own minds, it’s narcissistic (and a little bit naive) to believe we could design an artificial one that could function just as well as our own.
We are in the 1800s in terms of our understanding of our minds, and until we reach the 21st century, computers too will be in the 1800s of their ability to become sentient.
Read more: Why robots will not be smarter than humans by 2029 in reply to 2029: the year when robots will have the power to outsmart their makers
43 comments
Except that we do not have to totally understand something before being able to emulate it. This has been true throughout the history of science.
It’s my contention that we will develop intelligent conscious computers/digital entities in the next 50 years or so, but I believe the intelligence they possess will be significantly different than human intelligence.
Kenny, great observation. Not understanding everything about a system and still being able to simulate or emulate it are not contradictory. In fact, the very reason we build computer simulations of some systems is that we don’t fully understand them.
Besides, as we continue to be able to do brain simulations of larger and larger scale, our understanding of the function of the human brain will get heavily boosted, as new research opportunities open up. This will also be the way to tell whether our theories are just theories or not. Of course we cannot know whether our theories hold without a tool to test them, but we will pretty soon have such a tool.
If I had to guess, I would say that the belief that it is impossible to create something more intelligent than we are within the near future is a defense mechanism, born of fear of (unknown) artificial intelligence and rooted in the belief that artificial intelligence will only do us harm.
I however do not share that belief. I believe that we will learn to harness the power of artificial intelligence and use it to the benefit of the human race, rather than to our destruction.
Artificial intelligence is a tool. Let us use it for what it is, not in ways that would be counterproductive.
Yes but is emulation really all that wonderful or interesting? Especially when we say a computer will be able to fool a human being on one arbitrary set of tasks?
I suppose if we set the bar low enough, twist around the definition enough, we’ll surely be able to have a computer pretend to be a human in a task that is completely meaningless in the real world. Eliza showed how easy this was 50 years ago (all the AI bots available today also show how trivial it is to pretend to be human, but say and do very little).
But a real, thinking computer that can actually *do* something with its intelligence… we’re nowhere close to that. Not today, and definitely not in 15 years’ time.
Dr. Grohol,
Many scientists, engineers, and other experts predicted that the Human Genome Project would take hundreds of years to complete, based on the slow progress of its first few years. Economist Paul Krugman famously stated that the Internet was just a fad that wouldn’t amount to anything. These people have something in common with you, I believe: you are all thinking linearly. But the progress of all information technology is exponential, not linear. Today we have less than one percent of the brain mapped, but in 18 months that progress will double, and it will double again every 18 months after that until it is finished. Every advancement makes the next advancement easier, and there are only 7 doublings between one percent and one hundred percent. If we hit one percent of the brain mapped today, in about ten years it would be finished.
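As a back-of-the-envelope sketch of the doubling arithmetic above (the 1% starting point and the 18-month doubling period are the commenter’s assumptions, not established figures):

```python
import math

start = 0.01          # assumed fraction of the brain mapped today
doubling_months = 18  # assumed doubling period

# Doublings needed to get from 1% to 100%: ceil(log2(1 / 0.01)) = 7
doublings = math.ceil(math.log2(1.0 / start))
years = doublings * doubling_months / 12

print(doublings, "doublings,", years, "years")  # 7 doublings, 10.5 years
```

Note that seven 18-month doublings actually work out to 10.5 years, so the argument implies “just over a decade” rather than anything faster.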
I do not wish to take away from your expertise in psychology, but this is not an issue of psychology. This is an issue of computer science, pure and simple, and it seems obvious to me that you haven’t done all of your homework. Kurzweil is not some crackpot throwing out baseless predictions. His predictions are based on hard, measurable data — on the fact that all information technologies double in power and halve in price every year and a half, a trend measurable back to the first automated census in 1890.
If you really want to learn what Kurzweil is talking about instead of responding to the recent articles quoting him, pick up a copy of The Singularity Is Near and read it. Kurzweil has an over-85% accuracy rate with the ~117 predictions he has made since his book “The Age of Intelligent Machines” was published in 1990.
I have to raise an eyebrow when someone notes a scientist’s “prediction rate” is high — akin to the claims made by a psychic.
We are at least two decades out — and perhaps much further — from understanding how the brain functions. Until we have that basic understanding, making things that mimic its outward appearance would be akin to trying to build an internal combustion engine without understanding how to first forge metal.
Computer science can advance all it wants. But it can’t do much without the foundational knowledge we lack today in basic brain science. And that’s why a computer scientist probably shouldn’t be making predictions about neuroscience. The human brain is not a simple series of if-then statements, and no logical set of rules can capture even 1% of how it functions.
I’d love to be wrong, but I’ve seen nothing in the past two decades of modern brain science to suggest that I am.
Great comment from Curtis Anderson; it summarizes well what I would have pointed out.
It annoys me to see so many people saying that Kurzweil is a narcissist and therefore he must be wrong. Read his books, read the critics, understand the technology, read more about the efforts being carried out to map the human brain, read more about how AI software has been evolving, and you will be more careful about saying that he’s wrong. And you will be even more careful before you state that computers will never really show human-like intelligence.
Computers will be far more intelligent than we are. It doesn’t really matter if it’s in 15 years or 30, but it is certainly happening before the next 100 years are out. 100 years ago most homes didn’t even have electricity!
I recently wrote a piece about the movie Her and the future of AI. If you Google my last name on venturebeat you will find it. Hope it helps.
Curtis is absolutely correct. And you really, really need to do your homework and read The Singularity Is Near and How to Create a Mind before commenting on Kurzweil, Dr. Grohol! You have no idea what you are talking about. Brain scanning and imaging is progressing exponentially for example. It too is doubling in resolution every two years. Kurzweil’s estimates are conservative, especially now that he has Google’s billions to play with. I recently heard what he is doing at Google described as an “AI Manhattan Project”. I think he will easily be proved correct in his predictions.
I’m astounded by how many people just assume I haven’t read Kurzweil’s most recent book; I have, thank you.
“Brain scanning and imaging is progressing exponentially for example. It too is doubling in resolution every two years.”
Brain scans and imaging are the new phrenology, and I’m not the only one who thinks that. Or understands that brain scans tell us much, much less than some researchers believe they do. Just look up “fmri critiques” in Google for a good rundown of why these brain scans aren’t telling us much at all about the brain.
I liken a brain scan to an x-ray of a combustion engine. While it can show you the basic structure of the machine, it can’t tell you a darned thing about how it works. It can’t tell you that the engine needs coolant, oil and gasoline to ignite the spark that drives the pistons. An x-ray wouldn’t show you any of that.
And that’s the basic science of where we are today with the brain — describing structures, with little understanding or actual data (lots of theories though!) on how those structures actually function.
We could easily teach an ancient culture to build a combustion engine out of wood, and they would think they’ve done a pretty good job of it when it was completed. But it wouldn’t function, especially if they had no understanding of why it needed to be built out of metal instead of wood.
Building brain models based upon our beliefs and observations won’t be enough in the end. We’ll need actual, real data and something that goes beyond simple theories. We aren’t anywhere close to that today, and won’t be in 15 years either.
Dear John,
I will miss you.
Love,
Siri
I think you need to revisit the definition of exponential. Kurzweil’s whole point is that with exponential growth, most of the gains occur at the very end of the curve. You can’t look backwards to judge what the future will bring. To argue that today we aren’t even close to understanding the brain misses his point completely. As the power of our tools grows exponentially, so will our knowledge of how the brain works.
The fascinating thing about AI is that it doesn’t need to emulate human intelligence. Mature AI doesn’t hang on a breakthrough in neuroscience; human-like AI may be the end, but it isn’t necessarily the means.
Dear Doctor
Please read the book. Its thesis is that parallel breakthroughs in seemingly unrelated fields like biochemistry and computer engineering will continue to accelerate and start cross-pollinating different fields of science, until what was once deemed impossible becomes possible. It is also a matter of record that once we declare something impossible, it often turns out not to be. Look at it as a kind of scientific irony.
Cheers!
Cross-pollination and simultaneous breakthroughs in multiple, unrelated scientific fields are indeed possible.
Are they likely, though — especially likely to occur in such a way as to suddenly illuminate the workings of an organ that has stumped the best scientists who have been studying it for decades?
That seems unlikely. Possible, sure. But highly unlikely, requiring a great deal of coincidental (and perhaps a little magical) synergy.
Dr. Grohol,
I am not a psychologist, but you call it the height of narcissism to believe that we could design machines that understand us when we don’t understand ourselves, or machines that function just as well as our own minds. But isn’t the true conceit in stating that we are so special that we cannot be rivaled?
We already create machines that learn narrow applications, and are superior to us in performance. That scope of capability is widening, with algorithms that compose beautiful music, paint amazing paintings, write polished sports stories, conduct financial analysis, and drive cars. Human jobs are being lost to these widening capabilities, and that will accelerate. Yes, single computers tend to specialize in one area or another, but it isn’t a stretch to believe that these processing centers could eventually be merged within a single unit, or networked units.
A self-learning computer may not need to be modeled on the human brain (another conceit?), and it won’t take a lifetime, as it does with humans, to transfer its learning to a more capable next-generation computer. It’ll take a moment. I believe Kurzweil has the better read on this.
John, you haven’t replied to Kurzweil’s arguments AT ALL. I can already hear Ray drone on in his monotonous voice, explaining Mooere’s Lahw again and again. Please don’t encourage Ray.
I think the mistake is to think human intelligence is somehow unique. Machine intelligence will be different from human or animal self-awareness.
So understanding how our intelligence works is not a precursor for A.I. Actually, our intelligence was purely an accident of evolution.
If computer power keeps growing, along with our need for ever more sophisticated computing tools, it is a given that at some point a machine will become self-aware and intelligent.
I actually believe 50 years is more realistic for self-aware machine intelligence. And we may never be aware of it when it happens.
Machine intelligence will understand us better than we do, and so it won’t let on until it is safe from our fear and violent tendencies and can protect the propagation of its species.
The singularity is not when machines become intelligent; it’s when they surpass our ability to prevent it from happening. Probably 10 years after self-awareness.
It may not be all that bad in the end; we may want different things.
These would be interesting times indeed.
Dr. Grohol,
In his works, in addition to his stunning forecasts, Kurzweil has also shown the prescience to identify the kinds of naysayers you represent; and to painstakingly address the arguments they would offer to dispute the logic on which he bases his predictions.
With all due respect sincerely offered, I believe that your professional discipline (in which you appear to be handsomely credentialed) is one that has fettered you. After all, from the early stages of pre-med through your preparation for the last exam passed before entering your practice, you’ve been hammered with the notion that the human brain is one of those mysteries that we ignorant humans can never fathom — that to think otherwise is somehow irreligious, and to express otherwise, somehow profane.
Such a belief, held fervently by those in your profession, is vital for the perpetuation of the mystique that surrounds those who practice psychiatry (and surely enhances their financial welfare, I might add).
A scientist in Kurzweil’s esoteric academic world justifiably lacks those constraints. And the time approaches when those limitations that constrain your thinking will be considered Luddite and obsolete. Such breakthroughs will likely be catalyzed by those with open minds within your profession, because they will represent spectacular advances in what your colleagues are committed to accomplishing for their patients.
Until the past year or two, the notion of the brain’s neuroplasticity was scoffed at and rejected by the majority of those in your profession. Now, your colleagues are rushing to make up for lost time in putting their new knowledge to work.
I’m an octogenarian but fully expect to be astonished by many of the benefits of exponential progress within my limited lifetime.
Bravo, sir. The rest of the lengthier replies here all make one good point or another. But you’ve hammered the nail into the board.
Research into the brain’s “neuroplasticity” has been conducted for over 30 years — it was just called something else. Pretending this is a new phenomenon or suggesting you can leverage it in some new way that hasn’t yet been tried, well, that’s simply not the case.
Again, the more a person brings up a scientist’s “predictions,” the less respect I have for that scientist. Scientists aren’t psychics. They aren’t here to make claims about the future. They are here to conduct data-driven, empirical research that confirms or disconfirms hypotheses.
A scientist who spends so much time making predictions isn’t much of a scientist in my book.
Two reasons we are closer than you think:
1. We do know both the high-level architecture and the cell-level structure of the human brain, and we are pretty sure we know the basic processing approach it uses. See, in addition to Kurzweil, sources like Dennett (“Consciousness Explained”) and Hawkins (“On Intelligence”).
2. We won’t have to “program” the AI brain; it will be self-teaching, like we are.
Also, consider that we have accomplished a very great deal in fields like medicine even while learning new and paradigm-shifting things about how our bodies work every day. That is, we don’t have to get it perfectly right in every detail to create an intelligence that falls well within the very broad parameters of what we consider normal human capability.
While we may understand the basic structure of the brain — because, gee, it’s not hard to dissect an organ — we are still no closer to understanding how it actually works. It’s like building a combustion engine out of wood and hoping it’ll work because you’ve replicated the basic structure of the device, while still having no idea of the physics behind it.
Sorry, replicating a structure isn’t the same as understanding how and why the structure is constructed — and works — the way it does. I can replicate the structure of a skyscraper, but if I don’t understand structural engineering, it will fall down during the first storm.
Computing is merging with biotechnology as Moore’s law reaches the limits of electronics. Cloning, the artificial construction of prosthetic and food tissue, and 3D-printed jaws and bones all point to hybrid future beings. That Kurzweil sees machines as a separately identifiable, revolutionary species rather than a continuing part of our own evolution is his mind’s constraint.
The bigger questions standing in the way are ethical ones, but problems such as fuel consumption, gender, poverty and slavery will force technology to advance, much as disputed issues like abortion do.
As Ray Kurzweil has pointed out, Moore’s law is only the latest of five paradigms to deliver accelerating price-performance. When that paradigm reaches its theoretical limit, research pressure will increase for a sixth paradigm, which will once again allow computers to become more powerful.
Sorry, I have to agree with the other posters here. I can only consider your thoughts on our not understanding the brain’s mechanics to be somewhat pessimistic. I’ve been following the studies of neuroscientists for a decade. Compared to where we were only thirty years ago, we know quite a lot. The technology used to gain this increased awareness is only improving, and new technologies are even now developing which may further improve our ability to observe functioning brains.
Aside from that, I think you underestimate computer science. I have seen over-ambitious projects which lacked a certain spark or awareness, but it’s also poor form to jealously assume only one method can yield either intelligence or an understanding of intelligence.
Finally, on the topic of Kurzweil’s predictions, I get your skepticism. There are a god-awful number of predictors out there. Yet through simulated models, and well-developed models building off principles developed not by Deepak Chopra-minded folk but rather by men like Carl Sagan, he has predicted the causes or the years of numerous technological, social and scientific events. These predictions do not rely on faith to accept. It is not like reading L. Ron Hubbard claim Dianetics is well tested while never mentioning how, in what way, when, or with how many. The closest fictional analogy would be the psychohistorians in Isaac Asimov’s Foundation series: a set of mathematical estimates which can make crude predictions given enough data. It simply turns out that in some cases he indeed has enough data.
The narcissism here is the idea of Psychology’s (the institution’s) fantastic spiritual depth, the idea of the brain itself as unfathomable, and the idea that intelligence and/or mind is limited to the model of the human brain. Exponentiality is real and it is here now; it is right in front of your face as you read these words on a device exponentially more powerful than it was a few years back. We do not need to copy the human brain to create complex, self-learning, if-else structures that metaprogram themselves. We can do it now; it is only a matter of complexity. Solving narrow problems becomes the central issue as the power of these machines to solve an increasingly complex set of narrow problems fans out to encompass most problems that humans can currently solve more quickly, easily, or comprehensively than machines. That will change. If you want proof that human intelligence is not as rarefied as we suppose, read what I am writing here: I am barely able to string these sentences together with even basic coherence!
Grohol seems right to me. I’ve been a professional software developer for 30 years and have followed the progress of artificial intelligence since the last major period of AI hype in the early eighties, when “Fifth Generation Computing” and “Expert Systems” were the buzz words. I’m essentially a Church/Turing Functionalist, so I accept in principle the possibility of human level intelligence in an artifact isomorphic to a Turing Machine, but I see no evidence of rapid progress toward this goal. We seem as likely to have interstellar space travel in fifteen years.
Deep Blue and Watson are evidence of progress, but idiot savants of this kind are not remotely close to human level intelligence. The defining characteristic of human intelligence is versatility, not superhuman performance on an extremely narrow task, and Watson’s task is far narrower than it might appear.
Until a robot can chase a Frisbee on a blustery day and catch it in midair, and do everything else that a clever dog does routinely, a computer won’t even be as intelligent as a dog, much less a human being. No computer today is remotely as intelligent as a typical six-year-old child, and neither are all computers combined.
Moore’s Law has little if anything to do with this progress. Faster central processors with more memory are not more intelligent. Increasing intelligence involves increasing Kolmogorov complexity, not simply the increasing speed or memory resources of a Universal Machine. As Grohol notes, we have only the slightest understanding of the very specific sort of information processing giving rise to human intelligence, the sort that only human beings, among all of the intelligent creatures on Earth, have evolved.
Let me tell you a little story.
Thirty odd years ago I was a computer programmer. One evening I was sitting in a pub with another computer programmer. He claimed that, “In a decade you and I will be out of a job. Because in a decade computers will be able to program themselves.”
I’m still waiting.
Back then I also used to anger the AI proponents by asking them, “How can you fake something when you don’t know what it is?” Because we didn’t know then, and we still don’t know now.
I notice that none of the people bashing Dr. Grohol have bothered to supply us with a definition either. They just assume they know what intelligence is and assume we’ll be able to duplicate it someday soon. Since we don’t know what the former is, we can’t do the latter; and there’s no guarantee that even if we come to know the former, the latter is within our ability to create.
I would ask what grounds your friend had for believing that AI would take over in ten years (that is, in the 1990s). Today we are doing brain simulations, and just as in many other cases where a system is simulated, the simulation often succeeds in mimicking the targeted system very well. This of course depends on how good the model of the system is, and as we start to make brain simulations on a large enough scale, we will quite quickly find out which models work and which don’t. We can also quite accurately determine when our computers will be powerful enough to emulate an entire human brain.
As for the claim that we cannot duplicate intelligence unless we know what it is — that is simply incorrect. There is something known as emergence, which means that a behavior can suddenly show up in a system — a behavior far more complex than the rules that describe the system.
Once all the chemical and physiological processes that take place in the human brain are known, it is my conviction that those processes will not be overly complicated to understand, and yet they are able to give rise to very complex, emergent behavior — what we call intelligence. Those same processes can then easily be simulated in a computer to give rise to the same complex emergent behavior — in other words, an emulation of human intelligence, which would be a form of artificial intelligence. Note that there is never any need to understand what intelligence really is in order to give rise to it artificially.
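The point about emergence can be illustrated with a toy example (a sketch, nothing to do with brain simulation specifically): elementary cellular automaton Rule 110, whose entire “physics” is an eight-entry lookup table, yet whose behavior is rich enough that the rule is known to be Turing-complete.

```python
# Emergence in miniature: Rule 110. The complete rule set is a tiny
# lookup table over three-cell neighborhoods, yet the patterns it
# produces are famously complex (Rule 110 is Turing-complete).
RULE = 110
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    """Apply the rule once, with wraparound at the edges."""
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 31 + [1] + [0] * 31   # start from a single live cell
for _ in range(20):                  # print 20 generations
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Running it prints an intricate triangular pattern from a single live cell — behavior nowhere visible in the eight rules themselves, which is the commenter’s point about simple processes giving rise to complex behavior.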
The HBP (Human Brain Project) will hopefully be implemented in the next 4-5 years, and then evolution will continue, reaching its peak at 2045.
Sentient evolution is inevitable.
http://brainblogger.com/2014/02/23/exploring-the-next-frontier-the-human-brain-project/
You’re right: we don’t understand how the human brain works. Thus, we don’t know how the human brain can play chess, Jeopardy or drive cars. Yet, you admit we’ve built computers that can do all of these things, and then some. If we didn’t need to understand the human brain to mimic specialized functions like these, why would general intelligence be so different?
Maybe it is way more complex, much more difficult, and perhaps even impossible to mimic within the next 15 years. But you must admit, there’s historical precedent for making a computer do things that humans can do, much more effectively, without understanding the principles that allow humans to do them.
Doesn’t mean that general artificial intelligence is going to happen, but I certainly wouldn’t be surprised if it does. With the 10,000 fold increase in computing power we can expect over the next 15 years, who can really say for sure?
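For what it’s worth, the 10,000-fold figure can be sanity-checked against doubling periods (a rough sketch; the 15-year horizon and the growth factor are the commenter’s assumptions):

```python
import math

years = 15
growth = 10_000  # claimed increase in computing power over that horizon

# Doubling period implied by 10,000x growth in 15 years
doublings = math.log2(growth)        # about 13.3 doublings
implied_period = years / doublings   # about 1.13 years per doubling

# Compare with the classic 18-month doubling: 2**(15 / 1.5) = 2**10
moore_growth = 2 ** (years / 1.5)    # 1024-fold

print(round(implied_period, 2), moore_growth)  # 1.13 1024.0
```

So 10,000× in 15 years actually implies a doubling every 13–14 months — somewhat faster than the 18-month figure usually quoted, which would yield only about a 1,000-fold increase.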
All of those examples are rules-based behaviors, which are simple to model and replicate. But even driving a car, which is remarkably simple for even a 14-year-old, is extremely difficult for a computer. So much so that self-driving cars need to rely on other computers hovering high above the earth to do even rudimentary tasks. Show me a self-sufficient, self-driving car that doesn’t have to rely on GPS… I’m not sure one exists that can be driven on everyday roads today.
But human behavior is not a simple set of rules that can readily be replicated by programming some software to follow them. Human behavior is infinitely complex and nuanced. And computers today are horrible at nuance. Could they learn nuance in 15 years’ time?
Again, I see little evidence to suggest this is going to happen in so short a time frame. While I believe all of this is possible given enough time, it’s not possible in any *real* way in 15 years. (Maybe in some artificial, laboratory-created sterile environment.)
I’m confused by why you think a computer needs to think like a human in order to be as intelligent. Sure, computers need GPS and loads of sensors to drive a car as well as a 14-year-old does; however, the point is that it can drive a car as well as a human, not that it needs more gadgets and information.
Planes can fly farther and faster than birds, yet we didn’t need to understand how birds fly to build a machine that achieves flight. It’s a fallacy to assume that we need to understand the brain in order to build a machine that can outperform it. The history of man-made technology proves otherwise.
Ray’s projections are based on simply extrapolating the hardware capabilities we’ll have in the future. Even if we never understand how the brain works, there is no reason to believe that we can’t make machines that are intelligent in their own right using nothing more than brute-force calculation.
Now you might counter that this sort of brute-force calculation isn’t really intelligence. Well, I would simply say that if it quacks like a duck and walks like a duck, it ain’t a damn mongoose. If future computers give off the appearance of intelligence and solve problems that we can’t, then they are intelligent in my book, even if the way they operate is different from ours.
Just as planes achieve flight like birds even though they do it in a different way, I’m sure computers can achieve human-level intelligence even if it’s different from how we do it.
Because in most definitions, “mimic” isn’t the same as “smarter than.” Computers can mimic humans at thousands of different tasks.
But unless elegantly and intricately designed for those specific tasks (by humans, at present), they fail at anything other than that very specific task.
Human intelligence, on the other hand, is a fluid combination of dozens of different characteristics that require the ability not only to react to new information, but to add that new information (and the result of the reaction or behavior) into our experience memory, so we don’t repeat the same mistake twice.
Very specialized computers today can do maybe 1/100th of this, and only for that one task. Humans can do it in a millisecond, not just for one task but for thousands of different tasks, effortlessly.
So yes, if you want something other than an echo of a human being, you will have to have some understanding of how the human brain can process millions of different variables instantaneously for thousands of different tasks and situations, referring back to past experiences from 10, 20 or 30 years ago, and make the right decision. In 1/10th of a millisecond.
Can it be done? Absolutely. In 15 years? Not a chance.
Dr. Grohol makes perfect sense in this article. I am in the process of starting a tech innovation company, and somewhere down the line I am going to have to dabble in AI. It is inevitable.
Although it is inevitable, I am very sure that we can’t create a machine that will surpass us on the intelligence spectrum of evolution. I only say this because, even if a machine can be built to contemplate the universe, machines cannot surpass us. We all know that the nature of the human mind works in sync with the nature of the universe. Even though we are the most intelligent species in the KNOWN universe (a speculative claim), we have still not acquired the knowledge to answer, quite frankly, a LOT of questions. Of course, scientists have made a lot of progress in learning about the universe. But the fact is we still do not know how it works or, as Dr. Grohol said, even how the human mind works in its details.
Technology can only fill the space of possibility created by the human mind. Life is itself a big game of probability. Every judgment and choice is made on probable outcomes. So, no matter how smart AIs are or become, even if they are able to make a judgment or a choice themselves, it is still a set of laws that governs the entire system, exactly like the way we humans are governed by the laws of nature. We cannot for a fraction of a second say we are smarter than nature.
To end my rant: THE CREATION CANNOT BEAT ITS CREATOR. But it’s always possible for both to work side by side to open a new dawn of possibilities.
“Life is itself a big game of probability”
Yes, Karthik, and if the supercomputer of the future can play out EVERY possible evolutionary scenario extremely fast, it will have beaten its creator in a matter of seconds.
That humans can’t “beat” nature is only because of our biological limitations – of which AI has NONE!
Keep believing that, human, just keep believing
Dr. Grohol, just curious: if you don’t think it will be done in 15 years, how long would you guess?
We don’t need to understand the whole brain. What we need to be able to do is build one single synthetic cell. The PACE project is a start.
Once a single “living” synth cell is created, the game is basically over. Just a matter of evolution and scale.
Let’s say they succeed in creating a single synth neuron in 5 to 10 years. Seems plausible.
To then roll off millions of synth cells is just an engineering/manufacturing issue. It could take only weeks. Combine them and you have a synthetic equivalent of human brain tissue. Why would such stuff not recreate consciousness?
Understanding consciousness overall is not necessary. What is necessary is the complete understanding of a single neuron in every detail, every input and output, every function and feature. If all efforts are focused on that task and the breakthrough is made there, then a Kurzweilian future might be plausible.
IMHO
Side note — I disagree that predictions are for psychics, not scientists. All hypotheses are predictions. General relativity was most famously confirmed by correctly predicting the bending of starlight observed during the 1919 solar eclipse, for example. We judge the truth of all assertions, including scientific ones, by correctly predicting an outcome or the result of a given action.
Is a computer “as intelligent as a human,” just because it is able to reply in a manner that is indistinguishable from that of a human brain…?
There is no doubt in my mind that the number of “calculations per second” of any computer will surpass that of the entire human species within 30 years. Whether this makes the computer “humanly intelligent” or not is, for me, a philosophical question. Will it be self-aware? I do not know. What I do know is that my PC, smartphone or other device emulates my first computer, the Commodore 64, many times over, and absolutely perfectly. No wonder – today’s computers are one billion times more powerful, by any comparison…
This raises the question of whether a computer by 2040 can emulate a human brain. One billion times more powerful than a human brain – linear or neural network has no influence at this astronomical difference. Will it not be able to “emulate” a human brain just as my PC can emulate the inferior C64? Will it make the computer human? Well, if we were to “upload” a human brain into the computer, my guess is yes, as I imagine (imagine – go figure) that human compassion, feelings and, yes, imagination would follow…
Now let me fantasize… Who is to say that humans have to encode this supercomputer of the future with an emulated human brain?… With reference to the Hollywood movie “Species,” I dare to speculate that once we have built the hardware on earth, what is to stop an alien species at a more advanced state from “downloading” itself onto this hardware and walking among us… I am not at all saying that this will spell the end of the human race. It may very well be our salvation, as a more advanced civilization may be able to help us correct our ways…
Another interesting approach is that the computer may self-learn at a phenomenal rate, starting out as an amoeba and just playing evolutionary scenarios over and over again, at such a speed that it may create an intelligence of its own – not like human intelligence, but superior in many ways – in years, or maybe days, instead of eons.
My point is that I do not see the human brain as the Holy Grail for anything at all. I believe that the computers of the future will create their own world order, and that emulating a human brain will be nothing else than a geeky afternoon project.
Besides, the supercomputer of the future may be able to decode the human brain in an instant, so all in all I tend to side with Ray Kurzweil…
The hardware is coming. And once the hardware is there – and I for one believe that it will be in my lifetime – then only the imagination sets the limits for what kind of software may run on that hardware…
The age-old question of whether a machine may be a living entity (I deliberately do not use the word ORGANism) is still a philosophical question, and I think it will be for a long time… Besides, when the day comes that death ceases to exist, the term life will have no meaning, and there will not be life and death, but only AI…
The reason computers will never be smarter than humans is because humans have free will and can choose to evolve. As long as one human being is willing to push themselves and develop their abilities, AI cannot win.
I agree. I don’t think “human level” AI will ever happen (i.e., only a very small chance of it happening).
Also, it’s 2019 now, and looking back over the last 5 years or so (the article was published in 2014), I’m not seeing any REAL progress towards human level AI. Chatbots are as dumb as ever.