
It's not a perfect rule, but it's as close as we've gotten.

TheMiddleWay 8 May 23

12 comments


1

The problem I have with "do unto others as you would have them do unto you" is sadism. I would not recommend a sadist go around hurting people just so someone would hurt them back.

Word Level 8 May 25, 2022
1

"The Jewish Talmud tells of Rabbi Hillel who lived around the time of Jesus. A pagan came to Rabbi Hillel saying that he would convert to Judaism if Hillel could teach him the whole of the Torah while standing on one foot. Rabbi Hillel replied to the pagan, β€œWhat is hateful to yourself, do not do to your fellow man. That is the whole Torah; the rest is just commentary. Go and study it.”" 😎

0

Yep, that seems to be the consensus for a simple rule. Of course, in real life things get messy when not everyone actually adheres to that attitude, so various nuances need to be applied for specific situations.

What is good for you might not be good or healthy for another, especially if there are things you don't know about them. Someone who has made mistakes might like to be forgiven and receive grace, total absolution, but if they have knowingly injured people, what is the deterrent against further injury, and where is the justice for the injured?

Some might like to live in a world under one religion, thinking that would be best for everyone... Some might like to live in a world with no religion. We can't all have our way, so we just have to look at the big picture, boil it down to the basic common denominator of our human values and try to do the right thing for all.

This simple rule is a start but we need to continue to use our minds and our sense of right and wrong, listen to others, speak our minds, and somehow try to make a beautiful harmony out of all the different voices.

0

It isn't as close as we've got. We now have morality accounted for 100% by the Golden Method.

In a single-participant system there is no role for morality as it's all just a matter of harm-benefit calculations for that single participant to get the best out of life.

In a multiple-participant system morality comes into play, but we can convert this into a single-participant system just by imagining that a single participant travels repeatedly back in time to live the lives of all the participants in the system in turn, at which point we reduce morality to the same harm-benefit calculations as in a single-participant system. This covers the entirety of morality, even answering the question of how many participants there should ideally be in the system (when looking at the issue of optimal population size).

This approach is the Golden Method and it will be used in AGI systems (artificial general intelligence) to provide the computational morality they need to ensure that they are safe: they will take over politics and govern everything on that basis.
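
To make that reduction concrete, here's a minimal sketch in Python of how the scoring might look, assuming (purely for illustration - none of these names or numbers come from anywhere official) that each life can be boiled down to a single harm figure and a single benefit figure:

```python
# Minimal sketch of the Golden Method's reduction: a multiple-participant
# system is scored as if one participant lives every life in turn, so the
# moral value of an outcome is just the summed harm-benefit balance of
# all the lives in it. All values here are invented for illustration.

def net_benefit(life):
    """Net balance for one life: benefit minus harm."""
    return life["benefit"] - life["harm"]

def golden_method_score(outcome):
    """Total experienced by the one imagined participant living all lives."""
    return sum(net_benefit(life) for life in outcome)

# Two possible outcomes of some decision, each a list of lives:
outcome_a = [{"benefit": 10, "harm": 2}, {"benefit": 8, "harm": 1}]
outcome_b = [{"benefit": 12, "harm": 9}, {"benefit": 9, "harm": 2}]

# Pick whichever set of lives you would rather live in turn:
best = max([outcome_a, outcome_b], key=golden_method_score)
print(golden_method_score(outcome_a), golden_method_score(outcome_b))  # 15 10
```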

@TheMiddleWay The reason you haven't heard of it is that it's my own discovery (driven by my work on AGI) and I only named it recently. I've tried to get people in the industry interested over the last four years, but none of them have paid any attention to it at all - they just go on pumping out misguided shite about the trolley problem. There's an organisation called MIRI, for example, which was set up to discuss such things as machine ethics, so I discussed my method on their Lesswrong site and found it to be a place populated by some of the most irrational minds on the planet - you simply can't get through to them, or to anyone else.

To illustrate that, I even tore the Mere-Addition Paradox to pieces for them, but they were incapable of recognising (or admitting) that it's based on a colossal mathematical error where Parfit failed to keep count of the available resources in the different cases (A, A+, B- and B - see [en.wikipedia.org] ) - when the extra population is added to A to form A+, a host of additional resources are inadvertently added along with them without being recognised, and had those resources been available in case A, the quality of life for those people would have been considerably higher. The process in the paradox is repeated infinitely, with the latest B becoming the new A so that a new B can then be generated by applying the process again, so the unstated resource additions become infinite too. Had they been taken into account with the original A population, the resources available to them would have been infinite too, making them all rich beyond imagination, so it's one of the biggest mathematical errors you can make. If you want to see what I wrote about that at Lesswrong four years ago, it's here: [lesswrong.com] .

Back at that time I was still trying to pin morality down in the form of a modified golden rule which made it a cross between utilitarianism and negative utilitarianism, but it's hard to define it perfectly: I ended up with it making a distinction between useful harm and superfluous harm, where the former is acceptable as it's a necessary part of accessing pleasure that cancels it out (so, for example, we're happy to risk a broken leg to enjoy skiing, and it is wrong to eliminate harm on a basis that fails to recognise the difference between the two types). It all got complex trying to pin it down, but I included my then-unnamed Golden Method in my writing at Lesswrong as well and pointed out that it accounted for everything more neatly. Since then I've posted it repeatedly on Quora on questions answered by other people who should have contacts at the top to pass it on to, but none of them ever do - they either don't read anything they don't write themselves, or they lack the ability to recognise the answer when it's placed before them.

I'm currently writing a manifesto for the Morality Party which consists of just three paragraphs for the main content, while all the rest of the book consists of appendices in which the Golden Method is applied to a host of political issues to see what it reveals about them.

@TheMiddleWay I show people how it works when they ask and present specific problems to apply it to, but I don't keep a list of links to the places where I've done that. The easiest thing for me to do for you now though is just post the whole of the first draft of appendix 0 from the book which shows a few examples of applying the method (which is first done in paragraph 7):-

Appendix 0: Testing the Golden Method

I don't want to spend too much time writing this section as there are important issues to get to, so I'll try to keep this compact for now - like all the other appendices it can be bulked out later as required. So, let's dive straight into the trolley problem. There are five people standing on one track and looking away from the direction the tram is coming from. There is one person doing the same thing on the other track which goes off down a different street. We are standing next to the points which select the path the tram will take, and we have the ability to switch it from the track with the five people on it (which it is currently set to go down) to the track with only one person on it. If we do that, one person will likely be killed instead of five.

So, why is there any argument about this? If you save five lives and lose one, you're up by four, and that looks like a better outcome. Let's deal with the religious objection first. There are many people who think it's wrong to play God in this way, so they think they should never do anything that causes someone to be killed even if that saves more people, and so they leave it to God. This is just a diversion where people try to shirk the responsibility that's been placed on them by circumstance. If we switch the points, that could be seen as God making the decision just as much as not switching them would be. Whatever God allows to happen, that's what he selects. It is arrogant to imagine that we are even capable of playing God when he is in control of everything. Standing by and doing nothing is equivalent to an action - it's a decision to kill the five, and it's a very active decision that you are making in your head. To switch the points is to allow God to act through you, and he guides the calculations you carry out to decide what you should do. You have a moral obligation to intervene if you can select the better outcome. Either way, you'll regret your action/inaction for the rest of your life if it results in the worse outcome, and it will torture you severely, so quite apart from doing the right thing for others, you need to push the odds in your favour by trying your best to bring about the better outcome for your own peace of mind: you can't be blamed for doing that. That said, if you know you're really stupid and think any action you carry out will likely stuff something up that other people have probably got under control, then it may be best to do nothing in the hope that all is well despite appearances, but you should never just leave it to God to do the right thing when it looks as if the wrong thing is going to happen.

What better objections might there be? Well, the trolley problem has been framed with too much detail to be a pure case in which one death is preferable to five. For a pure case, we'd need to spell out that all the people in the two groups (the five, and the one) are selected at random, and we'd also need to place them in situations where their stupidity plays no role. With the trolley problem, we have five morons standing on a track which a tram is scheduled to go down, while one considerably less stupid individual is standing on the other track, which it is not scheduled to go down and which it has never gone down before at that time of day. Should the lesser fool be killed rather than the five morons who are so thick that they're all bound to find ways to come to grief sooner or later anyway? The trolley problem brings extra baggage of this kind along for the ride, and that leads to different people thinking of different ways to apply blame. If we had randomly selected our five people and one person and then put them in two different rooms, each with a bomb in it, where we can see a timer counting down to zero which will lead to the room with the five in it blowing up unless we press a button to switch the explosion to the other room, we would then have a clean thought experiment where the maths of five versus one does indeed dictate the right answer: we must press the button to play the odds correctly.

Some people have difficulty understanding even the pure case, because they make the objection, "but what if the five people in the first room are old and about to die anyway while the person in the other room is a child?" That could indeed be the case, but they need to understand how odds work: the people are selected randomly and we cannot see them or determine anything about them, so if we run this experiment a million times, there will certainly be occasions on which by pressing the button we end up killing a child and then find out that the other five people are all suffering from advanced dementia, but in most of those million runs of the experiment we'll come out ahead. If we always avoid pressing the button, there will be many occasions where instead of killing just one child we kill two, three, four or five of them while the person in the second room is terminally ill. You need to understand the mathematics of odds in order to get this kind of issue right and not fixate on less common scenarios.
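
To illustrate the point about playing the odds over many runs, here's a toy Monte Carlo sketch of the bombs-in-rooms case. The remaining-life figures are drawn from a made-up uniform distribution, purely for illustration:

```python
import random

# Toy simulation of the pure bombs-in-rooms case: five randomly selected
# people in one room, one in the other. "Cost" is remaining life-years,
# drawn from an invented distribution - the point is only that the
# press-the-button policy wins on average, not in every single run.

def remaining_years():
    return random.uniform(0, 80)  # a random person's remaining life-years

trials = 100_000
lost_if_press = 0.0   # pressing the button kills the one
lost_if_not = 0.0     # not pressing kills the five

for _ in range(trials):
    room_one = [remaining_years() for _ in range(5)]
    room_two = remaining_years()
    lost_if_press += room_two
    lost_if_not += sum(room_one)

print(f"average life-years lost if we press:       {lost_if_press / trials:.1f}")
print(f"average life-years lost if we don't press: {lost_if_not / trials:.1f}")
# Roughly 40 versus 200. Some individual runs kill a child to save five
# dementia sufferers, but over many runs the pressing policy comes out ahead.
```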

The trolley problem provides complications which need to be handled with care. One version of it tries to remove blame by having the people tied to the track by a terrorist, but this brings mind games into play: it makes it more likely that the terrorist has tied a child to one track and five Nazis to the other, and he will laugh his head off if you kill the child to save the five Nazis. In such situations where you both try to second-guess each other, the odds actually end up being close to even as to which option will provide the better outcome, so if you know you're up against an opponent of that kind you should make a random choice. You might think that not switching the points is just as good an answer if the odds are even, but if people do that every time, the terrorist will exploit that predictable pattern, so you have to make a random choice to prevent him from making a gain.

If all that you know is the numbers and that the six people have been tied to the track by a terrorist, it could be right to make a random choice, and yet that is likely not optimal for the typical terrorist: to find out, you'd need to do a statistical analysis of all such cases (or anything comparable) in order to see if there's a pattern that you can exploit. If the terrorist knows that pattern too, you may get a bad result, but if the pattern is consistently in play then the odds are that you can outwit the terrorist on average, pushing the odds towards a better outcome. If you suddenly find yourself in such a situation without knowing the relevant stats, though, you won't be able to gain from them, so you just have to do the best you can based on your own limited knowledge of similar kinds of situation: some subconscious system in your head may identify a pattern for you, which you will pick up as a gut feeling that guides your choice, but you should maybe still try to err towards random.

Let's return to the situation where you've been told that the people are standing on the track, so you can apply blame to help make your selection. If you can see them directly, you may be able to make judgements about the value of the individuals involved, so again that changes the computations. Even if you can't see them, you may be able to pick up clues from the way the numbers have been reported to you. The correct moral action varies depending on the available information rather than being dictated by what will actually produce the best outcome - it's all about playing the odds as best they can be played. Someone can also come up with a different answer based on some thought they've had that you've failed to consider: with five people on one track, the odds are higher that one of them will look round by chance and see the tram coming, then warn the others to get off the track, so that could make sending the tram towards them a bit more preferable, though again this depends on statistical knowledge as to how often people in a group look round compared to individuals, who may be more alert to general dangers just because they're aware that they're on their own.

There are lots of factors that need to be considered, and they can be crunched if you have a higher quantity or quality of statistical knowledge on how people behave. Humans don't compile that kind of information in a systematic way, and they also lack the right mathematical understanding of how to combine all the results and weigh them correctly against each other, so we have to make a lot of guesses, but intelligent machines will in the future collect and hold vast amounts of such information, enabling them to know exactly how best to combine the results from the analysis of all factors, resulting in better decisions than ours. Such machines will play the odds with extreme precision and will always do the right moral thing in each case for the available information on the situation in question and the available processing time, in combination with their knowledge of human behaviour.

Remember that the right moral action is not necessarily the one that produces the best outcome: having access to more information makes it more likely that the chosen action will be the one that produces the best result, but you can only crunch the information that's available to you, and you have to go by that in order to play the odds to best advantage. So, there's a key distinction between doing the thing that produces the best outcome and making the right moral decision based on incomplete information. If you have access to complete information and sufficient time to process it correctly, you can guarantee making the decision that produces the best outcome, but in most situations the best you can do is play the odds correctly, and doing so leads to the most moral decision even in cases where through bad luck it produces the worst outcome. I can't emphasise enough that doing the right thing morally is solely about playing the odds correctly.

Now, in all the above I've looked at the trolley problem without mentioning the Golden Method, so let's bring it in now. If you have to live the lives of all the participants in the system in turn, which result would be the best one for you? You are going to be the person making the decision about switching the points, and you will be each of the five people on one of the tracks, and you'll be the person on the other track, but you'll also be all the friends and relatives of all the people on the tracks, and everyone else who is affected by the news of this event. You may also live the lives, or miss out on living the lives, of children yet unborn to people on the track. All of those need to be considered. In the pure case with the bombs in the rooms, the method requires very little computation: you would probably be better off being blown up once rather than five times.

In the case with the tram running over people there's a lot of extra work to do, and that's why it's posed as a big moral problem while at the same time being misrepresented as a pure case like the bombs-in-rooms version. It is far from pure. Would you want to live five short lives as morons who are going to die in one way or another out of extreme stupidity even if you save them this time, or is it better to live one much longer life as someone with a good bit more sense? Would you want to live the lives of the dim children of the five morons or the much better lives of the children of the less stupid person on the other track? This is what we have to weigh up, and it isn't easy. There is a best answer, but it's hard to calculate what it is due to the massive gaps in our knowledge.

Different people will still come up with different answers as to which outcome they think will be preferable if they have to live all those lives, but there's a thing called the wisdom of the crowd which we can utilise here: by combining the opinions of large numbers of people we can get closer to identifying the best outcome. Intelligent machines would tap into that too by averaging out all the different opinions in the hope that all that collective expertise is better than any one person's individual bias, though they would make a point of correcting for artificial biases in the form of ideologies and irrational belief systems by putting much more weight on the input from people who think independently instead of having their thinking shackled by faulty ideas instilled into them through brainwashing. Combining opinions and averaging them is essentially the right way to do this, because your opinion of how good the quality of life is for a dim person may be very different from the opinion of a dim person about the same thing, so your own opinion would be different during each of the lives that you have to live if you live the lives of all the participants in the system. To weigh up each individual's worth, we have to become them: we really do need to measure their opinions and feed those into the computations, which means there is a democratic aspect to this process.
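
For the crowd-averaging step, here's a possible sketch. The scores and independence weights are invented, and how you'd actually measure independence of thought is a separate problem:

```python
# Sketch of combining many people's judgements of an outcome while
# down-weighting judgements driven by ideology rather than independent
# thought. Every number here is invented for illustration.

judgements = [
    # (judged value of the outcome, independence weight in [0, 1])
    (7.0, 0.9),
    (6.5, 0.8),
    (2.0, 0.1),   # heavily ideology-driven judgement, mostly discounted
    (7.5, 0.7),
]

weighted_sum = sum(value * weight for value, weight in judgements)
total_weight = sum(weight for _, weight in judgements)

crowd_estimate = weighted_sum / total_weight
print(crowd_estimate)  # ~6.8, versus a plain unweighted mean of 5.75
```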

More discussion can be added to the above later in response to feedback, but I want to move on now to look at another case.

Another popular thought experiment involves killing a healthy person in order to harvest organs to transplant into several people who would otherwise die. This is a bit weird as you don't need to involve the healthy person at all: you could take the organs from one of the ill people to save all the others, but let's ignore that better solution and play along with it regardless. If we allow such decisions to be made on the basis of simplistic numbers, we ignore the crucial aspect of the fear that everyone would then have to live with, knowing that at any time they could be bumped off to save other people who may not have taken so much care over their health. So, blame comes into this again: if you are in any way to blame for needing an organ transplant, a healthy person should not be sacrificed to save you. Also, if you have a genetic condition that predisposes you to have an organ that fails, your survival should not be prioritised over a healthy person who might otherwise go on to have healthy children while you may go on to have unhealthy children who have a higher risk of having organs prone to failing. If your health goes to pot, that's a lot easier to accept than being selected for death to save other people whose organs have failed, and if you found yourself in the position of needing the transplant, you wouldn't want someone healthy to die to keep you going (this also applies even in cases where several people in your position can all be saved by that one healthy person dying) - the rest of your life would be filled with the most terrible guilt, making it very hard to enjoy it. The doctor could of course hide that information from you and make you believe the organ came from someone who died in a road accident, but you'd then need a conspiracy of silence to prevent the public from knowing that this is going on, and living in such a society would become a life of fear where rumours spread and everyone is terrified that they'll suddenly be harvested. It's only when you consider factors of this kind that you can get towards correct moral answers for such cases: the simplistic business of just thinking that it's killing one person to save four is not serious philosophy, but the shoddy work of hacks. The people who push such thought experiments as big unsolvable moral issues are not real experts: rigorous philosophy provides a route to resolution through application of the Golden Method, leaving us to gather all the relevant information and statistics and to combine the results with the right weightings to get mathematically correct answers.

Let's go through it again with the Golden Method directly in mind. Do I want to live the extended lives of the people given organs harvested from someone healthy who's been killed to enable that, while also having to live the shortened life of that chosen healthy person, and while also living the lives of everyone else who's emotionally caught up in it, which includes all the people living in fear that they or people close to them may be killed at any time in order to save people who may be to blame for needing organ transplants or who may be genetically faulty? Do I want to live those extended lives racked with guilt? We all live in fear of getting ill and dying earlier than expected, but the fear of being selected to die so that your organs can be harvested would be much greater: you can defend against the former kind of fear to a large extent by trying to live more healthily than others, but the fear that you'll be selected for death by some process in which you could be chosen because someone dislikes you would poison society and lead to everyone living highly anxious lives. The relatives of people selected for death for the harvesting of their organs would also spend the rest of their lives in a rage, suspecting that someone had steered the selection process in some way and that it was not random. Even if they could all be sure the selection was genuinely random, they would still be furious that someone they care about who worked hard to maintain maximum health has been killed for the sake of several people who failed to do the same. This is clearly not a pure case where it's just down to the number of people saved versus the number of people lost. It isn't certain, though, that this would be a wrong thing to do in a system where the better solution is banned (harvesting the organs from one of the people with a failing organ) - it could turn out that it's right, but we'd only know by crunching all the numbers with great care and by generating the right initial numbers for how people would be affected by these events and by their fear of the possibility of such events occurring.

Can we create a pure case on the same theme where it actually is just down to the two numbers of people surviving and dying? Maybe we could, but we'd need to make all the people equal. They are all clones with identical DNA and they all make the same effort to stay healthy, so we eliminate both the blame issue and the one of passing on bad genes. We also eliminate the suspicion that the one selected for organ harvesting has been chosen because someone higher up dislikes him: all the people involved are identical and of equal worth. (We must continue to ignore the better solution of harvesting organs from one of the people who needs a transplant - we aren't allowed to consider this option as it destroys the thought experiment.) Everyone in this system now knows that they aren't to blame, and there's less call to feel guilty if they are saved by someone else being killed: there is no longer any need to fear needing an organ transplant because it will always be provided (and won't be rejected, which is another advantage leading to improved equivalence - in real life donated organs fail and need to be replaced repeatedly, so you're lucky to survive a decade on someone else's heart). In this system the fear is of being selected randomly, but it now seems fair: if your name comes up, that's just bad luck, and it may seem fair enough with more people surviving through this standard approach to handling situations where people need donated organs. I can see that living the lives of all the participants in such a system could make such a system of organ donation preferable to not doing it. The real world is nothing like that though. So, again we find that the cases used in popular thought experiments are not pure cases, but they pose as pure cases and are typically accompanied by propaganda asserting that there are no right answers. In reality, there are always right absolute answers - the difficulty is in working out what they are, with the complication that we almost always end up working with probabilities instead of certainties due to incomplete information, but there is always a correct answer for the likely best action based on the available information, and that can in principle be computed.

Next I'm going to discuss the Mere-Addition Paradox which provides a particularly important test of the Golden Method.

The mere-addition paradox compares four scenarios. Scenario 1 is a world with a single population of 1000 people, named population (or group) A, and group A have a quality of life which we'll call Q8. Scenario 2 again has this same population A in its world, but it also contains population A', a second group of 1000 people who have a lower quality of life which we'll call Q4, so they're only half as happy, but still happy. Scenario 2 clearly contains more happiness than scenario 1, so the argument made in the paradox is that it's mathematically superior (more moral) to have a scenario 2 world than a scenario 1 world. If we want to consider populations A and A' as a single population, they are normally referred to as A+. [Note that population A' is not normally named in other discussions of this paradox, so that name is my own invention.]

Scenario 3 has two independent population groups of 1000 people each, both with a quality of life of Q7, so there is much more happiness in this scenario than in scenario 2. This could be a development of scenario 2 with an improved distribution of resources such that everyone gets the same amount of food and other resources. These two populations are collectively labelled B-. Scenario 4 is the same as scenario 3 except that the two populations are merged together into a single lot of 2000 people whose quality of life is still Q7, and we now call this population B. Scenario 4 is considered to be of equal worth to scenario 3.

So, according to the paradox, scenario 4 is better than scenario 1: the population is bigger and the quality of life is a bit lower, but there is more total happiness there, so it must be morally superior to have a scenario 4 world rather than a scenario 1 world. We can repeat this process an infinite number of times with the population growing ever higher and the quality of life for each person being ever lower, but always with an increase in total happiness. All we have to do is turn population B into our new population A to make a new scenario 1, then add a new A' to it as in scenario 2, then we redistribute resources to turn it into scenario 3, and then we merge them together to get our new population B in scenario 4 with a greater total amount of happiness in it than the old population B from the previous round. This is paradoxical because we all feel that it can't be right: we'd end up with an astronomical population size with all the people hardly having any happiness at all, so it can't be right, and yet the paradox asserts that it is mathematically correct. This paradox is used as an objection to utilitarianism (while the Golden Method is utilitarianism perfected), but it contains a glaring mathematical error which hardly anyone in the world of philosophy has had the wit to be able to recognise.

The error is a failure to keep track of the available resources. When population A' is added to A to get from scenario 1 to scenario 2, a whole lot of additional undeclared resources are brought in: the additional happiness is provided by these undeclared additional resources. We then have a redistribution of the original and additional resources to get us to scenario 3 (which is maintained in scenario 4). When we start the process again in the next round with population B rebranded as our new population A, we add a new population A' and another lot of undeclared resources with them, repeating the mathematical error. So, the creator of the paradox and all the people taken in by it have misapplied mathematics by missing the sleight of hand where extra resources appear as if by magic and aren't recognised as an addition. In philosophy, most of the leading experts are incompetent. You could argue that no resources have been added because they were actually there already and simply weren't being used, but that means our original population A could have exploited those infinite resources to push their level of happiness up infinitely, so the paradox is again destroyed.
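
Here's the resource-accounting objection as a few lines of arithmetic, assuming (a toy model of my own, purely for illustration) that a group's quality of life is simply the resources available per person:

```python
# Toy resource accounting for the Mere-Addition Paradox, assuming quality
# of life = resource units per person. The model is mine, for illustration.

# Scenario 1: group A, 1000 people at Q8, so it implies 8000 resource units.
a_resources = 1000 * 8

# Scenario 2 (A+): adding group A' (1000 people at Q4) quietly brings in
# another 4000 resource units that scenario 1 was never credited with.
a_prime_resources = 1000 * 4
total_resources = a_resources + a_prime_resources  # 12000 units

# Scenarios 3/4 (B- and B): the same 12000 units spread over 2000 people
# gives Q6 under this toy model (the paradox's Q7 would need yet more
# undeclared resources to appear).
q_b = total_resources / 2000   # 6.0

# The objection: had the original 1000 people of A been given all 12000
# units, they'd have been at Q12 - better than Q8 - so comparing B with A
# without crediting A with the added resources is the mathematical error.
q_a_with_all_resources = total_resources / 1000  # 12.0
print(q_b, q_a_with_all_resources)
```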

Now, I wanted to show you that for two reasons. The first is that there are lots of beliefs in play today which are based on similar colossal mathematical errors, so we need people to be much more vigilant; to question things more carefully and stop locking themselves into old assumptions backed by authorities which consist of people who are shackled by their existing and often treasured beliefs. This applies throughout philosophy, politics and science too, all the way to the top. There are, for example, at least a dozen distinct disproofs of Einstein's relativity, and yet the physics establishment clings to relativity like a religious belief and refuses to accept mathematical proofs that they're wrong. If they can make mistakes as horrific as that, what hope is there for getting politics right? We need to keep looking at everything again and again to make sure we've got it right. In the appendices that follow I'll be introducing you to a number of highly counter-intuitive ideas which could transform our quality of life for the better if only people could open their minds sufficiently to recognise them as correct. Simple errors can be missed by leading experts for decades, and even centuries. Many of the beliefs people use to justify their political position are grounded in simple errors which need to be corrected. I want everyone who reads this to open their mind so that they can start again from scratch without being dragged under by the baggage they're already carrying.

The second reason is that it's a good test of the Golden Method because it enables us, in principle at least, to determine how many people should be allowed to exist in a system, and handling this issue is going to be crucial to maintaining a peaceful world in the future where we cannot simply go on multiplying forever. So, let's apply the method to it. It would clearly be better to live in a scenario 2 world than in one locked in the scenario 1 state, because we would then get to live a stack of additional happy lives, but when we consider all the additional resources that we're failing to use in scenario 1, we can see that that world could be made a lot better just by using those resources, thereby allowing us to live a thousand lives (per century of the existence of that world) with a much higher quality of life than Q8. We can see too that scenario 3 is better than scenario 2 because although we won't live any Q8 quality lives in that scenario, we won't have to live one life at Q4 for every life at Q8. We can also see that scenarios 3 and 4 are not quite equivalent, as keeping the two populations apart can do such things as restrict the number of friends people have and may reduce innovation by restricting or delaying the spread of important ideas, so scenario 4 is likely superior to scenario 3. Putting ourselves through the lives of all the participants in our imagination forces us to see reality in full, making it much harder to miss the biases that we might otherwise be applying. When we make it that personal, we care more about getting it right. If you are interested in moral and political philosophy, please help to tidy it up by applying the method to everything you find in that field, because at the moment it's a diabolical mess.

So, we can easily determine that it would be better to live the lives of all the participants in a population of optimal size for the amount of available resources rather than living twice as many lives in poverty and perpetual hunger. It would also be a worse result if the population size was too low: even though there would be a greater share of resources for each person, there would be fewer of those good lives to lead, and they would be inferior in other ways due to the lower number of possible friends you would have in those lives, and a lower level of innovation due to there being fewer great minds, leading to a lesser range of available products. Indeed, this can lead to less food being grown and fewer resources being extracted, to the point where life isn't better for anyone in that lower population at all, other than through the greater ability to find solitude. That aspect of life does need to be considered too, because there needs to be a lot of room left for nature and for adventure without turning it all into tamed systems for feeding as many people as possible in a world stripped of anything else of interest. The Golden Method helps us to see through the fog of incorrect philosophical and political ideas where people have fixed themselves into incorrect positions based on their adherence to unsound principles which they think win the argument for them. All those principles need to be tested from scratch to check whether they're valid, and this should be done repeatedly.
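
To show that an optimum between too many and too few exists at all, here's a toy search. Both modelling assumptions are entirely my own, invented for illustration: quality of life grows with the log of resources per person, and very small populations lose some quality through having fewer friends and less innovation:

```python
import math

# Toy search for an optimal population size under fixed resources.
# Both modelling assumptions below are invented purely for illustration.

RESOURCES = 100_000.0

def quality_of_life(people):
    per_person = RESOURCES / people
    wealth = math.log(per_person)          # diminishing returns on wealth
    sociality = min(1.0, people / 1000.0)  # tiny populations lose out socially
    return wealth * sociality

def total_wellbeing(people):
    # The Golden Method view: the summed quality of every life you'd live.
    return people * quality_of_life(people)

best = max(range(100, 50_001, 100), key=total_wellbeing)
print(best, round(total_wellbeing(best)))
# Neither the largest possible population nor the smallest wins: the
# optimum sits in between, exactly as argued above.
```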

@TheMiddleWay This method will inevitably come to be recognised as the right answer in the future, so you would do well to avoid adding yourself to the list of vacuous minds that have rejected it after having it set out clearly before them (and worse, who made the mistake of displaying their arrogantly dismissive attitude towards it in public - that will really come back to bite them).

What you're missing is that there's a lot of complexity that needs to be handled in applying the method, but that doesn't make it wrong - it simply makes it something not well suited to people of low or average intelligence. I had to package the demonstration of the method along with an illustration of the complexity that has to be handled in the course of applying it - there's no shortcut to understanding this, and writing it off as mere philosophical writing is underestimating what you're seeing. This is about doing philosophy with the full rigour of mathematics. If you can't get your head around the factors that you need to put numbers to, you can't even begin to generate numbers for them. The first job is to identify all the relevant factors and to stop ignoring a host of essential ones which poor minds invariably fail to identify, and I've shown you a number of critical factors which simply do not get mentioned in discussions of these thought experiments elsewhere - the poor minds that ramble on about them normally never get anywhere near to that altitude.

I've also demonstrated how poor the minds are that normally try to process this stuff by tearing the Mere-Addition Paradox to pieces, and that should make people sit up and take notice, but no: they just carry on as before as if the paradox still stands, even though they've made one of the biggest mathematical errors possible. Their incompetence is shocking, but that's how ridiculous the minds of most philosophers are, and the ones doing political philosophy are even worse. There seem to be very few people about who are prepared to do the work necessary to drag themselves out of the swamp and climb to a higher level, but the invitation for them to try that is always open.

The method will be applied by AGI systems which can handle that complexity and put numbers to every aspect of it, and again what you read (or partly read) and failed/refused to understand indicates how those numbers should be generated - it's something people are bad at because it depends on calculating a stack of probabilities based on vast numbers of example cases, but AGI will be able to hold all the necessary data and crunch it with precision. Intelligent people can do a half-decent job of that processing too, but it's only with AGI that it will be done with high precision and without errors. AGI is quite a distinct thing from the Golden Method (so they should not be conflated), but it will take full AGI to apply the method because it depends on having a system with full understanding - you can't just hand this to a simple program to crunch. Until we have systems demonstrating full AGI, we can only run the method in the human mind, and only people of high intelligence can do that. I was hoping that you might be one, as it would be good to find another competent mind somewhere. I don't need to find anyone who gets it though: I will simply keep pushing my prototype AGI system up to the level of human intelligence and beyond, and then it will impose computational morality on everyone as a fait accompli.

@TheMiddleWay My AGI prototype system has been fully designed and built. It's been through a testing phase, and is now into a tinkering phase to get it running correctly. The whole thing was built using direct x86 machine code programming and ATL (artificial thought-language), which can be interpreted or compiled. ATL is not a specialist programming language: it is designed to provide the functionality of thought to enable AGI, but it happens to serve as an ideal form for all source code as a byproduct, while the user/programmer will have programs presented to them in natural language form. I don't use neural nets at all, so it's entirely rule-based, simply implementing the work I did on generative semantics for two decades before taking up programming, so it's mainly just performing transformations.

For a pure case like the bombs in two rooms equivalent to the trolley problem, putting numbers to it is easy: it's simply the values 5 and 1 for probable costs of the two options. If we have to live the lives of all the participants in turn, we either probably lose five units of life or just one, so it's clearly better just to lose the one.

If we then do the equivalent for the trolley problem and have the people standing on the tracks rather than tied to them, then again we begin with the values 5 and 1, but we then have to modify them repeatedly as we consider each factor. So, if we start by considering the stupidity of the people standing on the track which a tram is due to travel down while they all look in the wrong direction to see it coming, then we're dealing with seriously dim people who are lucky to have lived as long as they have up to now and who will doubtless come to grief sooner or later even if they survive this, so we need to put numbers to that to determine how much more life we'll get if we have to live their lives. It may be that they'll last an average of five years beyond this if they survive, while the person on the other track, which should be inordinately safer to stand on, might be expected to live a further 25 years before his lesser stupidity does for him, while the average person not on the track might be expected to live a further 40 years. The initial 5 is thus multiplied by 5/40 (dividing it by 8), while the 1 is multiplied by 25/40. This gives us the value 0.625 for the probable cost of both outcomes, so if we have to live all six of those lives, the loss is the same either way.
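
That arithmetic, written out:

```python
# The life-expectancy weighting from the paragraph above. All the year
# figures are the guesses stated in the text, not real statistics.

baseline_years = 40   # further life expectancy of an average person

morons = 5            # people on the track the tram is due to go down
moron_years_left = 5  # expected further years for each of the five

less_dim = 1          # person on the other track
his_years_left = 25   # his expected further years

# Weight each probable death by the fraction of a normal remaining
# lifespan it actually costs:
cost_of_killing_five = morons * (moron_years_left / baseline_years)
cost_of_killing_one = less_dim * (his_years_left / baseline_years)

print(cost_of_killing_five)  # 5 * 5/40  = 0.625
print(cost_of_killing_one)   # 1 * 25/40 = 0.625
# On this one factor alone, the two options come out exactly equal.
```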

That's just one factor though, and it could be calculated more precisely because we haven't considered the variation in quality of life over the time periods considered: if we save the less dim one, some of the likely 25 years of his life that he is then still around to live may be of reduced quality as he ages, so that could push things in favour of saving the five morons, although older people are often more content rather than less. This is one of the complications that can only be resolved through collecting a lot of statistics on quality of life at different ages to get the numbers right, and that's why AGI will be better placed to crunch all the stats correctly while humans resort to making wild guesses.

Another factor is the impact this has on other people who care about the victims, but again here the odds are that by saving the five now we will only see them come to grief in similar ways soon afterwards, leading to a similar impact on their relatives either way, lessened only a little due to them being older when it happens.

Another factor to crunch is the quality of life of these morons, who spend so much more of their time being confused and stuffing things up, resulting in them having highly frustrating lives, poorer health, and lots of injuries, so again this is something that can only be crunched correctly if you have adequate statistics on quality of life for such people. I suspect it would be better to be the less dim one living five times longer rather than the five morons living only five years longer each on average, but my judgement of that could be based substantially on prejudice and may be incorrect. The difficulty that I have in coming up with the right numbers for this doesn't mean the method doesn't work, but just that it's hard to apply it accurately without access to extensive high-quality statistics and the right mathematical expertise to crunch them correctly. That's why it's a job that will depend on AGI to do it properly, but even with our poor human judgement we should still be able to get closer to the right answers by applying the method than people get by throwing up their hands and assuming that morality can't be calculated. It's a hard task without the help of AGI, but attempting it is more moral, as the odds are that it's better than not attempting it.

There are a host of other factors to be considered too, and each one adjusts the key values which determine which course of action will likely produce the better outcome, and almost all of these calculations depend for accuracy on having access to statistics which enable the right numbers to be generated. How likely is it that five people on a track will see the tram coming compared to the one person on the other track? I don't know, but AGI will be able to look through a ton of data to find out. We could find out too by carrying out lots of studies into that kind of thing, but we don't ordinarily collect and compile such information because we don't appreciate its potential value, and also because even if we had it and could somehow remember it when it's needed, we'd be poor at crunching the numbers to get to the right answer - though we would likely do a bit better on average than by making mere guesses. We likely do a lot of that already through subconscious systems which provide us with a gut feeling that one option is better than the other, and in cases where there's very little time to crunch the data, that's probably the best that humans can do. With other moral issues though, there can be more than enough time to go through all the factors that people can think of and to carry out the necessary studies to compile good statistics relevant to the issue in order to produce better numbers for the likely outcomes of different decisions, thereby enabling us to get closer to the precision of AGI.

An example of a case where we have lots of time available to consider it would be the abortion debate. Here we have a difficulty with knowing how much suffering it causes to a foetus, but we should be able to produce likely numbers for it better than the normal all-or-nothing religious approach. For example, it may be no better to live the life of a person who is murdered at one year old than to live the life of someone who is murdered at five years old, but the grief suffered by others will be much greater in the latter case because babies haven't become someone to anything like the degree that a five-year-old has, so the value of different lives varies and changes over time. In applying the method we have to imagine that we live all those lives too, so this can be used to put numbers to such situations to pin down which ones are actually worse. If a foetus is terminated, there's virtually no grief experienced by relatives, and the foetus has no more idea what it is than the foetus of a sheep. There is a correct answer to each case, and while it's hard to calculate what it is, it's still possible in principle to put useful numbers to it which are more probably right than guesses or religious dogma. That is something we have a moral obligation to attempt to do as best we can, and again it's AGI that will get closest due to its ability to crunch inordinately more data and to do so without making errors. If you expect me to put good values to all these things myself, I'm only going to be able to give you guesses. The wisdom of the crowd could likely provide better values for many factors by averaging everyone's guesses. If you want me to give you accurate values, I can't do that until AGI is able to provide them.

0

Regardless of their differences, religions teach more or less the same thing; they simply demonstrate the state of being human.

Ryo1 Level 8 May 24, 2022
0

I live by my own rules and try not to hurt anyone.

5

The Platinum Rule is better: do unto others as they would want you to. This requires you to know what the other person actually "wants". When you don't know them and have no idea of their desires, the Golden Rule is better than nothing.

@Matias Sadists and masochists have been throwing that ol' monkey wrench into the golden rule since it started.

@Matias It comes from an "audiobook" that Michael Scott Earl used to have on ReasonWorks.com (now mysteriously and sadly defunct). It is also preached a lot in sales and business training. Earl approached the topic as Atheist philosophy.
See...
[google.com]

2

It is simple, threat others how you want to be treated.

freudian?

4

It is interesting, and sad, that ALL the above religions and philosophies espouse the same commandment, but few, if any, abide by it and most completely ignore it.

@TheMiddleWay Thank you. I do indeed get the message and strive to apply it to my life and how I treat others.

8

SORRY ... the Satanic Temple nailed it better than all the rest:

THERE ARE SEVEN FUNDAMENTAL TENETS
I
One should strive to act with compassion and empathy toward all creatures in accordance with reason.
II
The struggle for justice is an ongoing and necessary pursuit that should prevail over laws and institutions.
III
One’s body is inviolable, subject to one’s own will alone.
IV
The freedoms of others should be respected, including the freedom to offend. To willfully and unjustly encroach upon the freedoms of another is to forgo one's own.
V
Beliefs should conform to one's best scientific understanding of the world. One should take care never to distort scientific facts to fit one's beliefs.
VI
People are fallible. If one makes a mistake, one should do one's best to rectify it and resolve any harm that might have been caused.
VII
Every tenet is a guiding principle designed to inspire nobility in action and thought. The spirit of compassion, wisdom, and justice should always prevail over the written or spoken word.

This!

I, II, III, V, VI, and VII are great, but doesn't the "freedom to offend" in IV rather contradict I, II, VI and VII?

@tinkercreek No, because if you read carefully, the "freedom to offend" refers to others' freedom to offend you, while the other rules apply to what you do to others. So four is just an extension of one, but it certainly could be more plainly worded.

5

Honesty and kindness towards all is an honorable way to live.

Betty Level 8 May 23, 2022
2

Life itself is far from perfect, but the aim here is certainly a noble one, and that is what matters most.
