
Can A.I. really become conscious?

If we program A.I., do you think it can have a mind of its own? Like a living being?

What of the Chinese room theory? Are machines just putting together the information we've given them?

If you're unfamiliar with the Chinese room theory, here is a short 60-second video on what it is and how it works:

silvereyes 8 Dec 15






I wish I'd thought of this example of some steam(punk) era robots becoming conscious when you first posted this.

Oh well... it's still epic... not to mention so cool!

Steam Powered Giraffe - Automatonic Electronic Harmonics


I'm not even convinced the majority of humans are conscious. I assume a few others are, besides me, but I'm totally taking that on faith.


hmmmm, the Chinese room theory. Interesting concept that never occurred to me.

Fun to consider, but I do think at some point the programming would reach a level where it could be infused with self-awareness... possibly without the knowledge of the programmers.

It brings a "Terminator" scenario to mind. Even in the earliest stages of consciousness it/they might very logically be thinking, "those humans' detritus is REALLY detrimental to our circuitry."


skynet is coming lol

I've always thought that


Oh yes, about John Serle's Chinese Room (I hope I got the name right). I think it's a valid objection to a successful Turing test, but it doesn't at all rule out that one day machines will achieve true consciousness, whatever that is.

EDIT: Searle, not Serle.


In order for AI to be functional, it must be autonomous. If it is autonomous and learns of its own accord, self awareness is almost a given at some point. It may or may not develop emotions as well.



I think A.I. will be more than a machine but less than conscious. Still, it will be amazing to see that type of computer performing human-like operations and abilities.

dc65 Level 7 Dec 16, 2017

Why not... look at where we came from in just 100 years...


I do believe an A.I. system can develop consciousness. However, I don't think it would ever happen, as it goes against the Bible, and you know how that gets. Maybe the Bible was written by a time-traveling AI to stop us from building a time-traveling AI......? It all makes perfect sense now.


Define conscious.

Define awareness


Considering we don't know where consciousness comes from yet, I doubt we can create it. Right now we understand how the brain works, but not consciousness.

From the definition of consciousness: it is the fact of awareness by the mind of itself and the world.
"Consciousness emerges from the operations of the brain."

Many may say consciousness comes from the heart. That could be hopeless romantics.


A machine or a computer is a tool; we are like its God.
Our conscious wish is to do what is right, especially to do work or duty well. We can relate or connect to another person's consciousness, to the point where a collective of 80 percent can overpower religion, governments, and society.

Being awake and aware of one's surroundings: we can create our own universe from our perception, from the fact of awareness by the mind of itself and the world.
"Consciousness emerges from the operations of the brain."


if it does then look out world


There is no doubt whatsoever in my mind that the present generation of A.I. devices will never achieve consciousness, but I still treat Alexa and Siri with the utmost respect. Because sometimes I'm wrong.


I think any thing of which we are capable could be reproduced artificially. I forget who said it, but the trick is not to produce a machine that could fool people into thinking it is conscious. The trick is to fool a machine into thinking it is conscious, just as three and a half billion years of evolution have fooled us.


Could be we may evolve into machines! [eventually]

@markdevenish That is inevitable, something similar to The Borg.


To say that a machine couldn't become conscious is to say that there is something extraordinary about living things that can't be reproduced. I don't know of any reason to believe this.

I think the only thing the Chinese room shows is that it's hard to test for consciousness. I could apply that idea to everybody I know and be no closer to knowing whether I'm the only conscious thing anywhere. But not really knowing how to test for it is not the same as knowing whether or not it's actually there.

mightyjustice said basically what I was thinking.


Once we figure out how to translate binary into DNA and organic material, yes.


It will take time, but I believe it can happen.


The futurist Ray Kurzweil has popularized the term 'Singularity' for the point in time when machine power exceeds the power of the human brain and, moreover, becomes far superior to humans in its capacity almost instantaneously: the idea that once that point is reached it will cause an intelligence explosion, resulting in a powerful superintelligence that would, qualitatively, far surpass all human intelligence (Wikipedia, see technological singularity). I see no reason to doubt his predictions. There are a lot of questions that should be addressed now, before this event happens. We don't want a 'Terminator' scenario.

If you think that is a fantasy, then consider the genius of Jules Verne: he described things that at the time were considered fantastic and impossible (at one time no one was sure a man could survive going faster than the speed of sound, either), things like space travel, submarines, nuclear power, television, radio, all things we know exist and some use every day. Isaac Asimov created the "Three Laws of Robotics" well in advance of a sentient robot, but is that enough? We as a species need to be aware that once the genie of sentient machines is out of the bottle, it can't be put back in.

The latest predictions by futurists are that this will happen in less than 25 years. I will be 86 in 25 years and sure hope to be living to see it. It may be that we become the sentient robots, by uploading our own consciousness into a machine. This has been called "the death of death," the point where humans can essentially live to the end of time. It would probably have already happened if it weren't for the 800 years that religion ruled the world. I would expect humans will explore space (if we survive asteroids, supervolcanoes, and nuclear war, all of which except nuclear war are inevitable) with bodies we create to survive the harsh conditions of space travel.


It's a yes from me, but I think they still have a long way to go. And consciousness is not really a yes/no thing. It will progress up through the consciousness of rodents, then cats, dogs, and birds, and will not have a human-like consciousness for a long time yet. Then it will go farther... That's a great video clip, by the way: thanks for sharing.


I imagine that A.I. will get so advanced in the future that first it will be able to convince us that it is conscious, learning, and simulating human thought. Then it will become more advanced and even convince itself that it is conscious. I think they will always be machines that will most likely eventually operate at the level of human consciousness; then again, maybe humans are just living tissue that performs calculations just like a computer, but we identify as a consciousness. The scary part is when the machines no longer follow Asimov's three laws and create their own ethics and morality, which will not be congruent with humans' ethics and morality.


I really see no difference between a human and a robot.


I am surprised by that poll result. I personally don't think machines can be conscious at all. They can and will communicate in a similar way to a human, but they will never be human or feel emotions the way we do.
I love the film Blade Runner, where a robot has been given human memories and thinks she is human. It is an interesting concept whether we could trick a robot into thinking that, but they will never be truly human; humans are totally random and far from predictable, mainly because we have so many psychological quirks, most caused by childhood experiences.


At some point machines will begin to collect their own information and come to their own conclusions from that information. At that point we will have a true AI, and we would be wise to destroy it immediately.

There is a whole debate going on around this. I can't decide myself; there will be benefits, but it could go rogue for sure.


Isaac Asimov's rules for robots are being tested to their limits; it's all quite a worry, though. I guess worrying is a good human emotion. Are these rules infallible?

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
4. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Such rules would have to be hardcoded; that is, programmed into an AI in such a way that it cannot be circumvented. Too bad we can't do that with humans, too. 😀
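The idea of "hardcoding" rules like these can be sketched in software terms as immutable data plus a mandatory gate that every proposed action must pass. This is only a toy illustration of the concept, not a real safety mechanism; the `Action` type and `permitted` gate are hypothetical names invented for the sketch:

```python
from typing import NamedTuple

# Immutable record of the rules (a tuple cannot be modified in place).
LAWS = (
    "Do not injure a human, or through inaction allow one to come to harm.",
    "Obey human orders unless they conflict with the First Law.",
    "Protect your own existence unless that conflicts with Laws 1-2.",
)

class Action(NamedTuple):
    """A proposed action, tagged with the facts the gate needs."""
    description: str
    harms_human: bool
    ordered_by_human: bool

def permitted(action: Action) -> bool:
    """Every proposed action must pass this gate; the First Law dominates."""
    if action.harms_human:
        return False          # First Law: absolute veto, no override path
    if action.ordered_by_human:
        return True           # Second Law: obey orders that passed Law 1
    return True               # Laws 3-4 would refine autonomous actions here

# The gate vetoes harmful actions even when a human ordered them.
print(permitted(Action("fire weapon", harms_human=True, ordered_by_human=True)))    # False
print(permitted(Action("fetch coffee", harms_human=False, ordered_by_human=True)))  # True
```

Of course, as the thread goes on to discuss, the hard part isn't writing such a gate but making it impossible for the system (or anyone else) to bypass it.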

@bingst Here is part of what troubles me: AI is just code; it does not need to be confined within a robotic body. It can literally transfer itself onto another device of sufficient size, leaving that hardcode behind.

@HeathenFarmer Good point. Actually, that ability is basic to my ideas for an AI platform, essentially having at least two entirely different computers within it, so that system updates can be carried out without actually shutting down the AI in order to do it (self-maintenance). It isn't far from that ability to actually transferring itself to another device, as you suggest.

Okay, what's to stop one computer deciding to override the robot code rules of another during this "maintenance"? Then we are in trouble. Military use of AI robotic weapons is a worry... but you just know the industrial/military establishment is going to go ahead anyway.

@VinceRichardson The AI platform's computers' BIOSes could be made non-flashable (like they used to be, changing the actual chip being the only way to change the BIOS code), and the BIOS code could be written to enable only wired networking connections at boot time. The BIOS would be the place to store Asimov's rules. Even today, a computer's BIOS also contains hardware identifiers (serial number, networking interface identifier) which could also be used to help secure the maintenance procedure. It would take actually physically modifying the hardware in order to interfere with the maintenance procedure.


Ok, I have no idea what you are saying there, to be honest, not being a tech person. But what is to stop that physical intervention, whether that be a malicious human or some rational reason an AI might come up with to circumvent the rules? One mistake could cost billions of lives.

@VinceRichardson Which is no different from the potential harm humans can do. For example, in regard to AI, a human could just as easily intentionally pervert Asimov's rules in order to wipe out humanity. And let's not forget about the very mad men we have seen throughout history. Which raises the question of whether or not AI will do these terrible things for the sake of power, as humans have done: a scenario I don't think I've heard raised on this subject.

@bingst What is to prevent an AI robot from working with another AI robot to remove the non-flashable chip from its fellow robot? Whether AI will seek power is not the problem; the problem is that it may question whether human behaviour is logical or not and, finding it not so, may develop the desire to protect us from ourselves, or to protect the earth from us. Either would not bode well for humanity.

@HeathenFarmer @VinceRichardson
Elon Musk co-founded OpenAI in order to address these concerns.
"It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly." -- []

Thanks, I have come across his name before.
I recall that way back in the 19th century they thought travelling by train would kill you if you went past a certain speed. It's laughable now, so maybe we are just fearing the unknown.
My worry is that now we have nuclear weapons and biological weapons, somehow, somewhere, something goes badly wrong. Aircraft AI has been known to cause problems and even crash planes in the past. A computer has no "skin in the game."
That said, most air crashes are caused by human error. It is ironic, but we just abhor being killed by a dumb computer more than by a badly trained human.


What is consciousness? Personally I think Consciousness is overblown as an idea.

We created language so we can have a running talk in our heads, but we think the same as a dog does, so maybe consciousness is a manifestation of our drives as organisms, and not some sort of superpower.

I would say we created language to communicate with each other, not to have conversations with ourselves. We can think in any language, so it was inevitable that the human brain would evolve to think.
I recall reading somewhere that modern man's brain only developed when we started eating meat. The extra protein allowed our brains to grow another 30% and we became separate from the rest of the animal kingdom.

@Vincerichardson I just watched a thing with a guy yesterday suggesting it was psilocybin in large mushrooms on the Savannah two or three million years ago that caused the increase in brain mass.


Ok, I've not heard that before. That certainly is used for spiritual/out-of-body purposes and is supposed to induce more enlightened experiences... though I have never tried it.

I'd say the meat theory is good as far as brain mass goes, but once that kicked in, man started looking for deeper trips into consciousness and discovered natural brain-altering substances. Consciousness is a very subjective thing; we hang in this world by a very slender thread of sanity... it's a very fragile thing. You need only look at the mental disorders we suffer; it doesn't take much to kick the brain out of kilter.
