Agnostic.com


Artificial intelligence is incompatible with religion.

Artificial intelligence challenges many ideas that religions have long held, like the existence of a soul or the centrality of humans in the universe. A technological singularity is something no religion has ever predicted; none has made room for a sentient other, especially not one created by people.

Most religions assert that consciousness is somehow special to humans. The existence of an alternate form of consciousness implies that it is a product of matter, energy, and information rather than of some higher phenomenon.

Religious authorities have done very little to consider the implications of AI for their religions, which leaves them vulnerable to the societal changes that will inevitably take place.

Beyond undermining metaphysical interpretations of scripture, AI will likely erode the remaining reasons for religion. Once we are equipped with AI tools, mysteries that are currently beyond our reasoning will become accessible to our understanding. Religion will lose its grip on people as a means of control, especially once there is a very real entity that has a tangible effect on our lives.

Although there are a few avenues through which religion can still survive, I think AI will deal one of the most damaging blows to religion as a whole.

Happy_Killbot 7 May 17

16 comments


0

That could well be true. We are, however, not as autonomous as we perceive ourselves to be. Generally, as a species we seem to be sheeple, needing some type of guiding hand. With the demise of religion, what do you see as 'The Good Shepherd' leading his/her/its flock onward?

There is a bit of a contradiction in the terms you are using here. If we need a guiding hand, then we as humans are not autonomous. However, if we as humans are not autonomous, we cannot provide that guiding hand. Humans do guide other humans; therefore at least some of us are autonomous.

That out of the way, people form hierarchies based on a wide diversity of topics, interests, and ideas, and at any given time most people belong to several different guiding structures. Your boss may provide you with guidance on how to do your job, but you may be in charge of organizing a neighborhood watch, and you may assist at your local soup kitchen, where you are not in charge and make no decisions about what happens there. Religion represents only a single group you may be involved in, and as long as there is some opportunity for those who want to lead to lead and those who want to follow to follow, it really doesn't matter what that group is organized around.

@Happy_Killbot History shows, however, that those who want to lead become tyrannical. It's not just religion. Have a look at Plato's Allegory of the Cave for a reasonable and objective model.

People will always need people to guide them. Religion is one of those areas, the same as my organising the Neighbourhood Watch. Religion can only fade away. It can’t be removed.

0

As I recall, AI is simply a compilation and organization of many human reactions to specific problems. A searchable database, if you will. While its output may be made to resemble human behavior, it is by no means alive or conscious. So there is no point in discussing its effect on religion, any more than there is in discussing a broom's effect on the economy. You either use it or you don't.

Older computer systems worked that way. Emerging methods such as neural networks mimic the way the human brain works to make decisions. They are not conscious yet, but it is only a matter of time.
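
To make that concrete, here is a minimal sketch in Python of what "mimicking the brain" means in practice. The layer sizes and random weights are invented purely for illustration and are not taken from any real system: each "neuron" just sums weighted inputs and applies a threshold-like function, and a decision emerges from many such units stacked together.

import numpy as np

def relu(x):
    # Simple nonlinearity, loosely analogous to a neuron's firing threshold
    return np.maximum(0.0, x)

# Made-up weights for a 3-input, 4-hidden-node, 1-output network
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def forward(inputs):
    # Each "node" sums weighted signals from the previous layer,
    # applies the nonlinearity, and passes the result onward
    hidden = relu(inputs @ W1)
    return hidden @ W2

print(forward(np.array([0.2, -1.0, 0.5])))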

The real question to ask is: are humans conscious?

0

AI will serve damaging blows to religion only if the religious take AI seriously. I love AI. I also see its flaws. My example is that I have a cam on one of my computers and I keep it covered with a lens cap. If I want to use it, my computer tells me it is broken. The only way to fix this is to remove the lens cap and then restart the computer. At that point it tells me it found new hardware. LOL. Simply removing the lens cap is not enough.

0

Artificial intelligences are not conscious, and it may be impossible to ever make one that is. We assume other humans are conscious as a matter of practicality. It is impossible to test for consciousness and therefore impossible to build an AI that we know is conscious.

Actually, some AI researchers believe that certain neural networks have already achieved consciousness. They have observed nodes, basically bits of code that simulate neurons, that are triggered when something the AI has done is a product of its own doing. This implies it is aware of both itself and its environment and can differentiate between the two. This happened without direct programming.

@PraiseXenu The systems that are likely to do this are not the large data driven optimization engines that are currently being applied for data analysis and specific task machine learning, but rather ones that execute agency, or simulate the execution of agency.

The definition I am using is just to be aware of one's surroundings and one's own existence. An insect that runs when disturbed is not conscious; however, an animal that can identify itself in a mirror is.

This is not to be confused with sentience, which includes both consciousness and subjective experience, and which nothing we know of besides humans and the most intelligent animals is considered capable of. No known machine has achieved this.

The level above sentience is sapience. A sapient being is one that has at least human-level awareness, and that awareness is the same, or functionally the same, as the one humans experience. It will be decades or maybe centuries before we have AI with this capability, even accounting for exponential growth.

@Happy_Killbot @PraiseXenu I think you are missing the point. AI may well be conscious, maybe rocks are too, but we can never know because you can’t test for consciousness.

@indirect76 I agree that we cannot test for subjective experience, specifically because it's subjective and therefore not scientific. No one can prove that they are not the only conscious entity and everyone else is a mindless drone. The best we can do is give any entity that claims to be conscious the benefit of the doubt. If you encounter any entity that responds to interrogation about itself similarly to the way a person would, then for all pragmatic purposes it deserves the label conscious.

@PraiseXenu Consciousness is probably the most difficult thing to define. I've only had one to experience, and I could not begin to describe it, let alone test for it.

@Happy_Killbot The thing is, it’s very easy to program a computer to claim to be conscious and it would not even be considered AI in the same sense that we were discussing.

@indirect76 I can take a tape recorder and record someone's voice saying "I am aware" or something like that, and no one is going to claim that is a conscious machine. However, if something can respond to rigorous questioning at a level that is on par with a human (and maybe a little beyond), then we might as well assume it is.

One way you could test for it is to see how a machine feels about some phenomena that humans don't have natural access to, for example radiation, infrared, ultraviolet, magnetic fields, barometric pressure, etc. If a machine can form a subjective opinion about those sorts of things, then we can assume that there is some conscious awareness of that sense beyond human awareness.

We cannot program a subjective experience for something no human has ever experienced. That means that if you had several unique AIs, all using the same sensory apparatus, independently coming to similar conclusions about what they experience, that would be scientifically relevant. You could also do a similar test on the same AI (if it were at least sentient or semi-sapient) where you swap sensors and then ask it if it feels different. It isn't practical to do that to a human.

1

I would trust artificial intelligence long before I would trust the religio-Nazis or humans in general. Humans are easily corrupted by power, have insufferable egos, behave emotionally instead of rationally, and seem to love killing each other... and cannot even govern a planet in a way that keeps it safe from environmental destruction or nuclear destruction!

THHA Level 7 May 17, 2019

The problem with AI is that it tends to learn our biases, and in some cases amplify them. For example, Amazon shut down a recruiting AI because it discriminated against women. It learned to do that from training data that happened to have certain patterns the developers never intended it to learn.

AI tends not only to be rational; it is hyper-rational. Whatever goal it has, it simply pursues that goal.

3

Religion will die a hard death and re-invent itself in different forms. Religion may be largely incompatible with future AI and logic but it is not incompatible with being human and human subconscious needs. Most of the human race may even be genetically predisposed to being religious.

Genes contribute to religious inclination
[newscientist.com]

We humans are hopelessly fucked up.

4

All intelligence is generally incompatible with religion.

0

To Happy_Killbot, since I can't reply directly to your reply: yes, but that's no different than a toaster malfunctioning and electrocuting you.

It's not monster AI taking over the world. Not the same level at all.

Try clearing your cache and restarting your browser to fix the reply/edit problem.

You sort of allude to the answer in your other post. One of the major concerns in AI safety research is making sure an AI won't kill us, and the easiest way to do that is to make sure it has a goal that is aligned with ours. The traffic AI is unlikely to decide to kill everyone because, even though that would stop all the traffic accidents, no one would get to work on time, so it can't let that happen.

It might do something more ridiculous, though. Suppose it realizes it can stop all traffic once everyone is at work so they can't be late, then realizes people have needs, so it figures out a way to feed everyone in the city and meet their other needs, like delivering people what they want, before finally making roads and streets obsolete. Of course it will have to negotiate with the delivery AI that is trying to minimize delivery time and the food-service AIs that are trying to make delicious food quickly. Anything goes; we are dealing with entities that are beyond our comprehension.
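
A toy way to see why the stated goal matters so much is to score two traffic policies under a naive objective that only counts accidents versus one that also values people getting to work. This is a hypothetical sketch; all the numbers and weights are made up for illustration.

# Toy illustration of goal alignment (all numbers are hypothetical).
# Policy A lets traffic flow normally; policy B bans all traffic.
policies = {
    "normal traffic": {"accidents": 5, "on_time_arrivals": 10000},
    "ban all traffic": {"accidents": 0, "on_time_arrivals": 0},
}

def naive_score(p):
    # Only counts accidents, so "ban all traffic" looks perfect
    return -p["accidents"]

def aligned_score(p):
    # Also values people actually getting to work on time
    return -100 * p["accidents"] + p["on_time_arrivals"]

for name, p in policies.items():
    print(name, "naive:", naive_score(p), "aligned:", aligned_score(p))

Under the naive objective the "ban all traffic" policy wins; once commuting is part of the goal, it loses badly.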

I love Sophia the Robot.

0

Maybe a thinking AI would logically conclude that you could turn it off and that by killing you it would remove the threat of you being able to turn it off, but unless it feels, why would it care?

1

For most of my life I’ve been hearing that consciously aware computers are just around the corner. Just a few software tweaks and computers will sit there and be aware of their existence. It hasn’t happened yet and IMO it’s never going to happen. To believe such a thing is no better than a religious faith.

How could the turning on and off of switches cause self-awareness? In like manner, how could the firing of neurons in the brain cause self-awareness? There’s more to the phenomenon of deep conscious awareness than meets the eye. No one even knows what conscious awareness is—how can you make something when you don’t know what that thing is?

AI may be the tool that allows us to understand what consciousness, and maybe even sentience, is. We cannot see bacteria with the naked eye, so how can we know what they are? The answer is a microscope, a tool that allows you to see them. Likewise, a machine that mimics the human brain may be the best tool for understanding how it works.

We already know that certain regions of the brain are responsible for specific functions. What is lacking from the purely material perspective is the energy and information that moves between neurons, and in a computer that is electricity and binary information. You can't look at it as a static system; you have to look at it across time to truly appreciate it, and for it to even be plausible that a machine could be conscious.

There is nothing in physics that says that what you can do with neurons shouldn't be possible with silicon, and the hardware is arguably already there; the only thing we are lacking is the software.

@Happy_Killbot That is not what quantum physicists say. They say that consciousness is primary and that reality arises from consciousness.

[uncommondescent.com]

@WilliamFleming This raises the question of observation in quantum systems. Quantum physics can be hard to understand, so let's do a few simple thought experiments.

There is a widespread idea that reality only exists because people observe it. Let's consider the sun. Right now, light has to travel for over 8 minutes to reach Earth. That means that the sun could have disappeared this instant, and we wouldn't know for another 8 minutes. So how can the sun exist if we can't see it for at least 8 minutes, assuming reality only arises due to observation?
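
For reference, the 8-minute figure follows directly from the average Earth-Sun distance and the speed of light:

$$ t = \frac{d}{c} \approx \frac{1.496 \times 10^{11}\,\text{m}}{3.0 \times 10^{8}\,\text{m/s}} \approx 499\,\text{s} \approx 8.3\ \text{minutes} $$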

There is also the notion of human exceptionalism. Simply put, it says humans are special in their ability to process information and make decisions. The potential existence of an AI could put this to bed in a lot of ways. For example, suppose you simulated a human brain on an advanced computer. Would you expect it to behave exactly like the person on which the brain scan was based?

Now for one of the hardest questions with the most implications. There is a thought experiment that goes like this: a woman is raised in a colorless room, but she has a black-and-white computer that she uses to become an expert on light, knowing everything about wavelength, luminosity, and color. One day the screen glitches and turns red. Did she learn anything?

How that last one is answered is still heavily debated and has a ton of related questions, like whether all people experience color in the same way (the answer is no; I perceive colors differently in each of my own eyes), but I would like to focus on one in particular: would an AI have a conscious experience at all?

If the answer is yes, then it proves that matter, energy, and information give rise to conscious experience, and we can ignore all the woo and go on with our lives. We will need to develop thinking machines to answer these questions, but the fact that we can answer them is exciting enough already.

@Happy_Killbot I think the sun is supposed to exist only in a suspended or virtual state until it is observed. Once the sun’s light is observed it collapses into photons. According to quantum gravity theory there’s no such thing as time. Particles of matter are not “things”, rather they are interactions between covariant quantum fields. Besides that, space is nothing like what we think. So our idea of the sun existing in space and time with the light streaming toward us—that is nothing but an illusory human mind thing. Ultimate Reality is outside our perception and understanding.

The same enigma arises with the double slit experiment. If photons are observed with detectors they manifest as particles every time. Leave the same detectors in place but don’t look at them and the photons manifest as waves.

Human exceptionalism need not come into play under the concept of universal consciousness. The universe is the observer. Our sense of being individuals made out of bodies is illusion. Our essence is consciousness and it’s not my or your consciousness—it’s just consciousness period.

I realize that this does open up the possibility that a computer might exhibit consciousness. If it ever does though, that consciousness will not be somehow generated through the opening and closing of switches. The consciousness rather will be universal consciousness expressing itself electronically, and it will control those switches.

I’ve heard of blind people who are able to create mental pictures of their surroundings by hearing echoes. I think it is fairly well accepted that the reality we experience is created by our own imagination and is merely symbolic. If the woman saw red for the first time I suppose it would be her own private red, a symbol of her choosing.

There’s a lot of difference between qualia and deep conscious awareness. IMO the two aren’t related at all. I have no doubt that robots can have sentience. Awareness though? I lean toward thinking no.

Oddly I have written a short novel called “The Staggering Implications of the Mystery of Existence”, available on the Kindle Store which explores these questions. I don’t recommend it for you because it contains some strong woo. 😟

@WilliamFleming What you say is accurate under the Copenhagen interpretation of quantum mechanics. However, there are dozens of other interpretations that reject it for various reasons, such as many-worlds or the E-8 interpretation, and all have their strengths and drawbacks. Personally, I don't like Copenhagen because it is philosophically corrupt and leads to a lot of misinformation from woo peddlers who confuse observation in quantum mechanics with the observation we do constantly.

I'm going to talk about the double-slit experiment; I'll assume you know what happens based on your post. The first thing to realize is that when you take a quantum measurement, you fundamentally change the state of the quantum system, such that it is impossible to know both the velocity and the position of a particle, due to the Heisenberg uncertainty principle.
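
In its standard form, the uncertainty principle bounds how precisely position and momentum can be known at the same time:

$$ \Delta x \, \Delta p \ge \frac{\hbar}{2} $$

The more precisely you pin down where the particle is, the less precisely you can know its momentum, and vice versa.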

You can think about it like this: if someone throws a tennis ball at you in a dark room you can't know where it is unless it hits you, and if it does you can't tell how fast it is now going because its speed changed after it hit you. When you record information from a quantum particle you have to change its state, and this accounts for the results of the experiment. You get the same results if no one is around to watch the experiment happen, so it isn't dependent on human observation.

As far as robots go, I think consciousness is an easier goal to achieve than sentience, and some researchers believe it has already happened in some neural networks. All consciousness requires is an awareness of oneself and the environment, and some way to differentiate the two. Sentience, on the other hand, I don't know if we will ever be able to truly confirm is occurring, just as it isn't possible to know whether individuals experience existence the same way.

@Happy_Killbot Sentience is nothing but the reception and processing of sensory input. Robotic systems can already do that.

To be consciously aware of one’s existence is something that is profoundly mysterious and inexplicable. It’s that “woo” stuff that you don’t like.

@WilliamFleming Every time I see sentience mentioned in reference to AI it is always about having a subjective experience, usually defined by the awareness of the qualia received from the senses, coupled with the ability to form an opinion about those things or feelings about them.

By your definition, a ball that rolls down a hill after being bumped, or "sensing" the bump, is sentient, because it received information (the bump) and processed it (rolling down the hill).

Consciousness is about being aware of yourself and the environment and being able to distinguish between the two. Like I stated before, some AIs have already demonstrated the ability to perceive both themselves and their environment, and they developed that ability without deliberate human intervention, using an evolutionary neural network.

What this means is a program that can produce and connect "nodes" that mimic neurons in the brain. Each node sends a signal to the next nodes for statistical processing, and evolutionary models can add or remove nodes, adding an extra layer of processing over time. It is beneficial for these AIs to be able to tell when something is a product of their own doing rather than something some other agent did, especially when they interact with each other. That gave them a reason to develop a primitive form of consciousness.
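
As a rough sketch of what "adding or removing nodes over time" can look like, here is a deliberately simplified, hypothetical mutation step in Python. Real evolutionary neural networks (NEAT-style systems, for example) are far more involved; the structure and probabilities below are invented purely for illustration.

import random

# A network here is just a list of hidden-node ids plus connections between them.
# This is a deliberately simplified, hypothetical structure.
network = {"nodes": [0, 1, 2], "connections": [(0, 1), (1, 2)]}

def mutate(net):
    """Randomly add or remove a node, the way an evolutionary model might."""
    if random.random() < 0.5 and len(net["nodes"]) > 1:
        removed = random.choice(net["nodes"])
        net["nodes"].remove(removed)
        # Drop any connections that touched the removed node
        net["connections"] = [c for c in net["connections"] if removed not in c]
    else:
        new_id = max(net["nodes"]) + 1
        source = random.choice(net["nodes"])
        net["nodes"].append(new_id)
        net["connections"].append((source, new_id))
    return net

random.seed(1)
for generation in range(3):
    print(mutate(network))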

@Happy_Killbot Whatever consciousness is created through programming is nothing but mechanistic, rote processing and has nothing to do with conscious awareness. Computers are not aware.

@WilliamFleming A computer as a computer is not aware, the same as a brain as a brain is not aware. It has to do with the software running on those systems. That software can bring the system into awareness. You could rewire a brain to perform only the tasks we use computers for nowadays, like in the video posted by @josh_is_exciting. So doesn't it stand to reason that something totally man-made could do what a brain does, including consciousness or even sentience?

@Happy_Killbot I also don’t think that brains are consciously aware. I lean toward thinking that Universal Consciousness uses bodies.

You could of course be right, but my money is on no aware computers. I guess we’ll just have to wait and see what develops. Anyway, AI is an exciting field.

I’m done. Enjoy your weekend.

2

A la The Matrix, The Terminator, and so on:

I am not in the least bit afraid of a machine that thinks.

I'm afraid of a machine that thinks and feels.

Ignore the Hollywood ideas; it won't be anything like that. Imagine a system designed to control traffic lights and cars in a city, to maximize efficiency and avoid accidents. That's the kind of superintelligence we are talking about here. It doesn't need to be human-like or take a human form, or even have a "body" as you understand it. It is just a pattern of information utilizing energy and matter to manipulate other systems of information, energy, and matter, and more specifically one made by humans.

It doesn't have to have thoughts or feelings to be truly terrifying, in fact it doesn't even need to be able to interact with humans in any meaningful way. A machine that is indifferent to humans is much more dangerous than one that openly hates us.

1

AI, a thinking, man-made, autonomous entity, could mean the extinction of man. Consider that if man were to construct such an entity, he could not endow it with emotion, inasmuch as emotion distorts the understanding and view of reality (e.g., Hitler "felt" that the Jews were an inferior race) and would cause thinking contrary to logic. However, an entity devoid of emotion would operate at a purely logical level. As such, it would become a danger to man, inasmuch as it would most certainly conclude that mankind was a detriment to the world and a threat to AI (we could arbitrarily shut the AIs off), and that therefore the extinction of man would be justified. I do believe that some prominent scientists have already expressed their concern that AI would be a danger to the continued existence of man.

[futureoflife.org]

1

There is nothing to show that artificial intelligence is sentient. Religious people have nothing to worry about for now.

There are scientists working on sentient AI... as I understand it, that is the goal.

1

I don't think AI will ever be "conscious" in the way that we are. Not because of a "soul" but because AI works and is structured differently than our brains are, and also because there is no body attached. A large part of our brain is intimately tied to our body in all kinds of ways. AI will have fantastic results because of increasing computing power and speed, but it won't be the same as us. As far as its effects on religion, we'll see.

There are some researchers who argue that some of their AIs have already achieved consciousness because they have been able to identify when they are part of an outcome and when something happens without their intervention. Some systems have reportedly developed "nodes" (a term used in machine learning that refers to a specific state in a neural system) that identify the system itself, without human intervention. That is to say, they knew themselves and were therefore conscious.

I totally agree!

3

Reality is incompatible with religion.

0

Why don't you say it is an oxymoron?

What is oxymoronic? Intelligence and religion? Artificial and compatible?

Without taking the time to figure it out, what is that -- twelve combinations? And that's ignoring groups larger than pairs.
