"Rogue country" is outside evaluative characteristic.
Let's try to define "rogue country" by its evaluation-independent characteristics: 1) a country which fights for world domination; 2) a country which is interested in the worldwide promotion of its (crazy) ideology (e.g. the USSR and communism); 3) a country whose survival is threatened by risks of aggression; 4) a country which is ruled by a crazy dictator.
I would say that superpowers are a type of "rogue country", as they sometimes combine several of the properties listed above.
The main difference is that we have always had two (or three) superpowers fighting for world domination. Sometimes one of them held first place while another challenged its position as world leader. The second superpower is more willing to create global risk, as doing so may raise its "status" or its chances of overpowering the "alpha superpower".
The topic is interesting, and there is a lot that could be said about it, including the current political situation and even the war in Syria. Just today I read an article which explained that war from this point of view.
I would also add Doomsday blackmailers: rational agents who would create a Doomsday Machine to blackmail the world, with the goal of world domination.
Another option worth considering is arrogant scientists who benefit personally from dangerous experiments. One example: CERN proceeded with the LHC before its safety was proven. A group of bioscientists excavated the 1918 pandemic flu, sequenced it, and posted the sequence on the internet. Another scientist deliberately created a new superflu while studying the genetic variations that could make bird flu stronger. We could imagine a scientist who wants to increase his personal longevity by gene therapy, even if it poses a 1 percent pandemic risk. And if there are many of them...
There is also a possible class of agents who try to create a smaller catastrophe in order to prevent a larger one. The recent movie "Inferno" is about this: a character creates a virus to kill half of humanity in order to save all of humanity later.
I listed all my ideas in my agent map, which is here on Less Wrong: http://lesswrong.com/r/discussion/lw/o0m/the_map_of_agents_which_may_create_xrisks/
How many seconds have you been in the room?
Let's say the time between t1 and t2 is 1 trillion seconds. Let us further assume that all people go through the rooms in the same amount of time (thus people spend 1 second each in room A, and 1 million seconds each in room B).
100 trillion of the 100.1 trillion observer-moments between 0 and 1 seconds in a room occur in room A. All of the observer-moments past 1 second occur in room B. (This is somewhat flawed in that the observers might not all spend the same amount of time in a given room, but even in the case where 100 million people stay in room A for 1 million seconds each and the rest spend zero time there, an observer who has been in a room for 1 million seconds is still overwhelmingly likely to be in room B. So basically, the longer you've been in the room, the more probable you should consider it that you're in room B.)
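The arithmetic above can be checked with a quick back-of-the-envelope script. The population figures (100 trillion people at 1 second each in room A, 100 billion people at 1 million seconds each in room B) are assumptions reconstructed from the numbers in this thread, not stated exactly in the original setup:

```python
# Assumed setup: 100 trillion people spend 1 second each in room A;
# 100 billion people spend 1 million seconds each in room B.
N_A, t_A = 100e12, 1.0
N_B, t_B = 0.1e12, 1e6

# Observer-moments that occur during the first second spent in a room:
first_second_A = N_A * min(t_A, 1.0)   # 100 trillion
first_second_B = N_B * min(t_B, 1.0)   # 0.1 trillion

p_A_given_under_1s = first_second_A / (first_second_A + first_second_B)
print(p_A_given_under_1s)  # ~0.999: almost certainly room A

# Every observer-moment past 1 second happens in room B, so under these
# assumptions P(room B | elapsed >= 1 s) = 1.
```

So an observer who has been in a room for less than a second should bet heavily on room A, and one past the 1-second mark should be certain of room B.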
If an observer doesn't know how long they've been in a given room, I'm not sure how meaningful it is to call them "an" observer.
See my other comment for more clarification on how CEV would eliminate negative values.
how will you create or choose a process which will build a FAI?
You are literally asking me to solve the FAI problem right here and now. I understand that FAI is a very hard problem and I don't expect to solve it instantly. But just because a problem is hard doesn't mean it can't have a solution.
First of all, let me adopt some terminology from Superintelligence. I think FAI requires solving two somewhat different problems: Value Learning and Value Loading.
You seem to think Value Learning is the hard problem: getting an AI to learn what humans actually want. I think that's the easy problem; any intelligent AI will form a model of humans and understand what we want. Getting it to care about what we want seems like the hard problem to me.
But I do see some promising ideas for approaching the problem. For instance, have AIs that predict what choices a human would make in each situation; you basically get an AI which is just a human, but sped up a lot. Or have an AI which presents arguments for and against each choice, so that humans can make more informed choices. It could then predict what choice a human would make after hearing all the arguments, and do that.
More complicated ideas were mentioned in Superintelligence. I like the idea of "motivational scaffolding": somehow train an AI that can learn how the world works and can generate an "interpretable model", e.g. one able to understand English sentences and translate their meanings into representations the AI can use. Then you can explicitly program a utility function into the AI using its learned model.
That doesn't make much sense. What do you mean by "negative" and from which point of view?
From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.
Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".
Your stated example was ISIS. ISIS is so bad because they incorrectly believe that God is on their side and wants them to do the things they do. That the people that die will go to heaven, so loss of life isn't so bad. If they were more intelligent, informed, and rational... If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.
The second thing CEV does is average everyone's values together. So even if ISIS really does value killing people, their victims value not being killed even more. So a CEV of all of humanity would still value life, even if evil people's values are included. Even if everyone was a sociopath, their CEV would still be the best compromise possible, between everyone's values.
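As a toy illustration only (this is not an algorithm from the CEV literature, and every number here is made up), treating "averaging everyone's values" as summing utilities over outcomes shows why the victims' stake dominates:

```python
# Hypothetical headcounts and utilities: a small group gains a little
# from killing, while each potential victim loses a lot from being killed.
evil_count, victim_count = 1_000, 1_000_000

utilities = {
    "victims live": victim_count * 10,  # victims strongly prefer living
    "victims die":  evil_count * 1,     # killers mildly prefer this
}

# The "averaged" (summed) values still favor life by a wide margin.
compromise = max(utilities, key=utilities.get)
print(compromise)  # victims live
```

The point is only directional: as long as victims care about not being killed more than killers care about killing, any aggregation that weights everyone comparably comes out on the side of life.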
LOGIC NATION: A Psychological Revolution https://www.youtube.com/watch?v=drcseH-7hpw
What does LW think, after trying it or deciding not to try?
I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it's no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)
If I had to summarize it quickly, I'd say this is an article about how boring / generic things are seen as more legitimate, especially if large groups of people do them. For example, if you spend every waking moment working on some futuristic technology, people will see you very differently if you're doing it in your garage (alone), working for a small startup (small group, presumably all working on similar projects), or working for IBM (large group, doing unrelated work). Also, we don't like to talk about people's opinions, and we'd rather talk about the "opinions" of organizations like political parties.
I thought it was interesting, but I can't help feeling like it would have been a much better article without a Ra metaphor. It seemed like a huge stretch and a significant portion of the article was trying (and in my opinion, failing) to justify the metaphor.
Probably? Most people are attached to their existence (insert obvious comment re: what kind of people win a competition between those who love life and those who can take it or leave it), and giving up your existence to create some alien thing is still giving up your existence, even if you are told it is going to have a great time.
Dialectical Behavioral Therapy is at least worth looking into.
DBT combines standard cognitive behavioral techniques for emotion regulation and reality-testing with concepts of distress tolerance, acceptance, and mindful awareness largely derived from Buddhist meditative practice. DBT is the first therapy that has been experimentally demonstrated to be generally effective in treating BPD.[8][9] The first randomized clinical trial of DBT showed reduced rates of suicidal gestures, psychiatric hospitalizations, and treatment drop-outs when compared to treatment as usual.[4] A meta-analysis found that DBT reached moderate effects in individuals with borderline personality disorder.[10]
First of all, it's hard to imagine anyone slowing down AI research. Even a ban by n large governments would only encourage research in places beyond government control.
Second, there is quite a lot of uncertainty about the difficulty of these tasks. Both human-comparable software and AI value alignment probably involve multiple difficult subproblems that have barely been researched so far.
This sounds like an Eastern philosophy/religion narrative. Is logic / mathematical patterns basically dao (or tao)? The promise of peace if you abandon the mistaken urges of the emotional mind is also quite familiar :-)
I see it as two sides of the same coin: the sensational world is logic, and the other side is the indescribable/unmanifested/nothingness/void. Both sides are how it is, and I am fine with that. Who knows?
Ah, I see. It's Athene/Boumaaza. Why didn't you say so from the start?
Maybe I should have from the start. That's where all of this is from. https://www.twitch.tv/athenelive/v/95140378?t=1h28m23s
The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.
That seems different to what you were saying before.
This is well explored in "Three Worlds Collide". Yudkowsky's vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I'm using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact altogether.
There's not much objectivity in that.
Why is it so important that our morality is the one that motivates us? People keep repeating it as though it's a great revelation, but it's equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.
No problem. Did you awaken? Some have said that after awakening they would like to spread it to other people, but it is hard to be honest with oneself and do this in order to awaken.
There aren't any better words than these: paradigm shift, click, etc. I suggest going here https://www.reddit.com/r/makingsense and reading other people's experiences.
with best regards
Well then, we're done, aren't we? If everything is already logical there doesn't seem to be any point in complaining about lack of logic.
Well, emotionally most of us aren't aligned with logic but with some other value like comfort, even though logic is what has given us all our comfort, technology, and so forth. This is a psychological experience.
Mathematical patterns created us? That's an interesting idea.
Without logic most of us probably wouldn't have existed; the technology it made possible has satisfied our most primal needs, such as food, water, healthcare, and shelter. Going back in time, we see evolution and evolutionary biology, and how stardust evolved into us.
So there is an "emotional mind" (which, I think, means the subconsciousness) and the "rational mind" (aka the consciousness). The emotional mind wants things like identity (you probably mean the sense of belonging) and safety. You think the emotional mind is wrong and it should really want... mathematical patterns?
Well yes, that's what we do: we value things like safety or comfort over everything else. Yet it is contradictory, as whatever we value can never be guaranteed, or it can be taken from us. If we look at what even made these things possible in the first place, whether it be a parent or a spouse, it was logic that gave us this. Of course it's hard to grasp on an emotional level. The emotional mind is weakening the rational mind.
Therefore the rational mind should yell at the emotional mind to stop being silly and just do what the rational minds tells it to do? And the emotional mind will say "yes, of course" instead of extending the middle finger, as usual? And if that process actually happens, your complete whole will achieve the mathematical-pattern enlightenment while ignoring the not-important stuff like safety?
In actuality, I think it is the emotional mind which is driving us and weakening the rational part of the brain by not submitting to logic. It is right before our eyes that everything we have has been brought about by logic, and it all seems to be universal in this sensational world. When the emotional mind realizes that rationality, or logic, will bring it all the safety it needs, as the easy heuristics did previously, it will be at peace, because nothing can bring you more comfort. If you aren't comfortable and the problem is solvable, you solve it; otherwise you accept it for what it is.
It is a paradigm shift.
If you were uncomfortable, however, and it wasn't solvable, then with comfort as your most important value you will suffer and you won't understand why. That is what we are doing. It is a rat race.
And you promise it will be grrrrrreat(tm)?
You will feel euphoric after logic becomes your most important value. The euphoria will fade over time, and then you might become very anxious because you don't understand anything anymore, so you will have to rebuild everything you thought you knew. While anxious, you might think it has stopped working, that it doesn't make sense, and you might return to your old most important value instead of searching for the logic in all things. See my reply to another commenter about where to go if you do try this out.
Am I the only one who will be gladly superhappified?
So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me.
Yudkowsky isn't being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.
On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.
Obviously the permissibility of imposing one's values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the same reasons that you are differently mothered, not unmothered.
Farming is a completely different type of job than software development or designing integrated circuits.
When you program a computer, it does what you tell it to. You can diagnose what's wrong and fix it. Whether you can fix things is based a lot on your own skill and knowledge.
If you become a farmer, you can do everything skillfully and still have a crop fail. You are at the mercy of the weather.
Have you even grown a small vegetable garden? That might be a first step to get some idea what you are facing if you go into farming.
Don't assume farming will be easy. It can be one of the toughest jobs. Spend some time out on a farm helping out with whatever needs to be done to research this option. If you enjoy being outdoors, and are ok with the uncertainty of farming, then maybe it is for you. If you love the land, then farming may be for you.
Since your wife wants to move to another country, I think it is worth going through the immigration processes for various countries. Immigration processes can take years and still have an uncertain result, so you might as well get started. If you can afford to visit some countries, that may make things clearer too. There is more to moving to another country than just money: climate and culture can have a big effect on a person's life.
Where I live in Canada, a starting salary for a software developer can be about CAD $40 000. So the $100 000 figure other people are talking about isn't applicable everywhere.
Regarding freelance work, do you have enough time in the day that you could start doing freelance work while still working at the job you have? If you take on some side projects, you could see how that goes without quitting your job.
I think part of what this decision comes down to is what your goals are, and what the goals of your family are. What are your priorities? What is most important to you?
I think it could help to take the first steps on all three options at once, to get a taste of what each of them is like. Things might become clearer as you get more information. Right now I don't think you know enough about each option to make a good decision. Research is what is required.
Also, I read a book recently about making good decisions. It is called "Decisive: How to Make Better Choices in Life and Work" by Chip Heath and Dan Heath. There are some techniques in there, like asking yourself, "Imagine a year from now the project I was attempting to do has failed. What are the reasons?" that have helped me figure some things out in my own life.
I'm not sure what you're saying. I would describe giving to charity as morally good without implying that not giving is morally evil.
I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.
I do not support "letting a sentient being eat babies just because it wants to" in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.
That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.
So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.
But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway -- which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.
A fact that I've observed is that NONE of those who other themselves and blame the shard get it done.
I don't smoke meth!
If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.
Wow, I never thought of it like that. So internal attributions lead to antisocial behaviour, compared to external attributions which lead to behaviour change?
I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man.
I think you are on to something, but I find it a bit hard to understand.
You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.
I think you are right. It's just that I feel so ashamed thinking of the second one. I can feel psychological defences like denial and rationalisation coming to mind while I type. I screwed some rando without protection. I am a dumb person.
Thank you for your advice.
French Foreign Legion
Part of my motivation comes from hopes of greater social status. Recently I read that sociometric status is associated with greater happiness, which fits with my notions about the esteem of the military. However, when I read the empirical research on the correlates of sociometric status (at least in kids), the traits involved appear very, very different from those you would expect in the military!
https://www.ncbi.nlm.nih.gov/pubmed/1619136 https://www.ncbi.nlm.nih.gov/pubmed/3286817 https://www.jstor.org/stable/23086265?seq=1#page_scan_tab_contents
Also, I ran across a paper and a podcast which I remember saying that peer ratings (essentially sociometric status) factor into military career advancement, so that confounds things.
I think I would prefer to be a lover rather than a fighter. I mean, I am probably driven to fight by my childhood trauma, and I shouldn't give in to that. I can probably gain a better life by developing a secure attachment style instead.
Interracial dating
Yes, when I went out with another Asian girl in the past, someone once said, "It's an 'x' (can't remember what they guessed) and an Asian together?" in a really strange voice; they were two teenage guys. Not quite racist, but it could be in the future. I'm of mixed ethnicity but appear either dark-skinned Latino or Arab, I guess.
Re STD check
It happened on Sunday the 16th. I will not have sexual contact until I get tested, because some STIs incubate for up to 9 weeks (hep C, but I couldn't have been exposed to that). The next longest is HIV at 1-3 months, but that's low likelihood since I didn't have unprotected penetrative sex, so I am not concerned this time (but don't do it again, Clarity! Just in case!). Oral herpes takes 4-6 weeks, but apparently everyone has that, even kids, so I am not concerned. The same window (4-6 weeks) applies to genital herpes, which I am concerned about, so I will wait the upper end of 6 weeks to get tested. That means testing on the 27th of November. I'll just have to do it while I'm overseas, I guess...
I think the server lost some of its marbles. It can't even find the static image for the error :-/
"Failed to load resource: the server responded with a status of 404 (Not Found): http://lesswrong.com/static/youbrokeit.png"
Everything is logical
By logic I mean, what has created us, the mathematical patterns.
But OK, let's dive into the pile of the brown stuff in search of a nut, a kernel of something...
So there is an "emotional mind" (which, I think, means the subconsciousness) and the "rational mind" (aka the consciousness). The emotional mind wants things like identity (you probably mean the sense of belonging) and safety. You think the emotional mind is wrong and it should really want... mathematical patterns? Therefore the rational mind should yell at the emotional mind to stop being silly and just do what the rational minds tells it to do? And the emotional mind will say "yes, of course" instead of extending the middle finger, as usual? And if that process actually happens, your complete whole will achieve the mathematical-pattern enlightenment while ignoring the not-important stuff like safety?
And you promise it will be grrrrrreat(tm)?
Metaethically, I don't see a disagreement between you and Eliezer. Ethically, I do.
Eliezer says he values babies not being eaten more than he values letting a sentient being eat babies just because it wants to.
You say you don't, that's all. Different values.
Are you serious, though? What if you had enough power to stop them from eating babies without having to kill them? Can we just give them fake babies?
"Right" is just another way of saying "good", or anyway "reasonably judged to be good."
No, moral rightness and wrongness have implications about rule following and rule breaking, reward and punishment, that moral goodness and badness don't. Giving to charity is virtuous, but not giving to charity isn't wrong and doesn't deserve punishment.
Similarly, moral goodness and hedonic goodness are different.
I think I get it.
You're saying that "right" just means "in harmony with any set of values held by sentient beings?"
So, baby-eating is right for baby-eaters, wrong for humans, and all either of those statements means is that they are/aren't consistent with the fundamental values of the two species?
I think that's right.
Except that something is moral whether any being cares about morality or not, just like something is prime regardless of whether or not anyone cares about primality.
It's not that morality is there because of evolution, but that being who CARE about morality are there because of evolution.
I'm not sure what you mean by fragile morality, but since you've gotten pretty much everything right, I suspect you've got the right idea, there, too.
"There are different sets of self-consistent values." This is true, but I do not agree that all logically possible sets of self-consistent values represent moralities. For example, it would be logically possible for an animal to value nothing but killing itself; but this does not represent a morality, because such an animal cannot exist in reality in a stable manner. It cannot come into existence in a natural way (namely by evolution) at all, even if you might be able to produce one artificially. If you do produce one artificially, it will just kill itself and then it will not exist.
This is part of what I was saying about how when people use words differently they hope to accomplish different things. I speak of morality in general, not to mean "logically consistent set of values", but a set that could reasonably exist in the real word with a real intelligent being. In other words, restricting morality to human values is an indirect way of promoting the position that human values are arbitrary.
As I said, I don't think Eliezer would accept that characterization of his position, and you give one reason why he would not. But he has a more general view where only some sets of values are possible for merely accidental reasons, namely because it just happens that things cannot evolve in other ways. I would say the contrary -- it is not an accident that the value of killing yourself cannot evolve, but this is because killing yourself is bad.
And this kind of explains how "good" has to be unpacked. Good would be what tends to cause tendencies towards itself. Survival is one example, but not the only one, even if everything else will at least have to be consistent with that value. So e.g. not only is survival valued by intelligent creatures in all realistic conditions, but so is knowledge. So knowledge and survival are both good for all intelligent creatures. But since different creatures will produce their knowledge and survival in different ways, different things will be good for them in relation to these ends.
The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".
It's not? How do you know?
Police bias seems likes it could be directly related to crime rates (since it's the cops who do the arresting).
How did this "racial bias" manifest itself? Them acting like they believed blacks were more likely to be criminals than whites.
Judgements based only on race.
Or even willingness to shoot a black man who was running at him and grabbing for his gun?
I'm not arguing every white cop who shoots a black person is racist. Not even close. I'm trying to understand what impact implicit racial bias might have in policing.
Good chat. I'm out.
That objection is not logical :-P
Everything is logical when you realize that it is the reason for you to even be able to think.
Sorry, don't have those. Maybe somewhere in dusty off-line storage, but certainly not activated.
Well, all your actions are your subconscious emotional brain aligned to your most important value, I think.
That strikes me as an expression devoid of meaning. Logic is a tool. Tools can be useful or not so much, but tools are not values unto themselves, they just make it easier to reach actual goals.
Are you attaching too much to the definition? By logic I mean what has created us: the mathematical patterns. The Fibonacci sequence in nature is an example. It's how reality is. It's logical. Yet we give importance to something like identity or safety instead of how it is, and miss out on what we truly wanted.
Do tell, how The One True Value of logic led you to post word salad on LW?
It's starting to become apparent to me that our emotional-rational connection is lacking in rationality or logic, and our subconscious brain craves the safety that it provides. Nothing else can give that, probably with the exception of enlightenment, liberation, etc.
So only your current value can be considered a value, and nothing else?
This isn't a religious conversion experience; that's kind of a joke/metaphor :D It uses your brain's mechanics of seeking a higher power or something to believe in; religion is so popular in the world for a reason. Logic will be even more impactful.
I'm asking you to see for yourself. Because is it really that bad to value logic over all else? Or rationality. It's the same.
In particular, did you know about the different rates of murder committed by blacks and whites before posting the OC?
I don't think I knew that particular stat was an empirical fact, though I wasn't surprised by it. My view, generally, was that blacks in America earned less, had higher incarceration rates, etc. The causes interest me.
Do you have any evidence for this belief? If so, why haven't you presented it anywhere in this thread?
I believe all three of my points are basically non-controversial, especially #2 and #3. #1 is true in at least some cases, based on many, many experiences I've had. How widespread racial bias is, and to what extent it affects people, is the crux of the matter in my view.
Or does "bias" in this case mean that the cops understand the differences in murder rates?
I'm not sure I understand what you mean...
That means you haven't looked deeply enough at what's causing the actions you take in your daily life. If, for example, you value validation the most, you have to create an identity for yourself which you then go around strengthening, seeking people to give you attention: having your name written in the history books, buying something to show off, having a certain amount of knowledge, and so forth.
All of your values come from something. Be honest.
That's the first step. Then comes the realization that this value is flawed, and replacing it with logic via a leap of faith (or whatever else could be considered blind trust).
Figure out your core, most important value and realize logic.
If 1 out of 100 people do this, seriously, open-mindedly, all will change.
After this process, which you undertake in order to awaken, you solve all of your problems. You can have experiences like a religious person's: receiving the benefits of religion but, because it's based on logic, even better.
Start by observing yourself. You'll have to go deep, and you'll have to be brutally honest with yourself.
Figure out what's most important to you, not multiple things, one thing. It might be comfort, security, validation, a person, even money. And so on. If you have multiple things, figure out what is the common theme. What is most important to you at a core?
It might be very difficult, and you will have to be honest.
Figure out why this is the case: why do you have this as your most important value?
Now, understand: logic is the creator of all of reality, the mathematical patterns, everything. It's your creator. Your God.
Change your most important value to logic, with a leap of faith, on an emotional level. Not logic as if it were something cold. Logic as in divine love. Something beautiful. Emotionally, not rationally. How to make logic something warm: think of someone you care about a lot, and realize logic created them.
Think of logic as the purpose of what we are.
Change it to your most important value. Think of how someone converts to a religion. Think of someone religious.
Whatever you have now, it's causing you suffering. It's illogical to value anything other than logic. You have to trust this in order to make the bridge. SUBJECTIVELY.
Is logic your most important value? Are you really willing to give your life to whatever value you have now? You have something strong, which you don't dare to be honest to yourself about. So what if you buy a new car and impress your neighbor?
You know through facts that logic is true rationally; now it's time on an emotional level. (Think of the Fibonacci sequence, etc.)
Seriously, you can awaken. Give it a try :)
Questions & Answers:
Q: This seems very cultish?
A: Ask yourself, what is your most important value? If it's approval, are you in an approval cult? If it's a relationship, is that a relationship cult? So change to the logic cult. :)
Q: But my safety? My comfort?
A: You will have to believe that logic will give you all the safety in the world, and take the leap of faith. You will feel anxiety at first, but then a catharsis.
Q: What do you mean by Logic?
A: Watch this video: https://www.youtube.com/watch?v=kkGeOWYOFoA But you have to understand, you have the answer yourself already. This isn't about different definitions. :)
Q: I don't know for certain what my most important value is.
A: Go down the ladder, deeply, honestly with yourself. Don't stay one step above.
Q: How will this affect my relationships?
A: You will see people truly for what they are. It will improve your relationships.
Q: It didn't click for me. I didn't awaken.
A: Truly figure out your most important value. Realize how logic will provide what you seek. Take a leap of faith. Seriously, go deep emotionally.
Q: Why do you draw parallels to religion?
A: Because people seek something to believe in; they go to church, etc. This taps into the emotional drive built into your mechanics.
Q: What do I have to do?
A: Go to your core and figure out what it is.
Q: Why does this work?
A: Because everything comes from your core. Your emotional drive is flawed and not aligned with reality: logic, rationality. So you will find all the safety you need within this.
This reminds me of this comment of mine, although it is not directly related.
I think this is a special case of the problem that it's usually easier for an AI to change itself (values, goals, definitions) than for it to change the external world to match a desired outcome. There's an incentive to develop algorithms that edit the utility function (or variables storing the results of previous calculations, etc) to redefine or replace tasks in a way that makes them easier or unnecessary. This kind of ability is necessary, but in the extreme the AI will stop responding to instructions entirely because the goal of minimizing resource usage led it to develop the equivalent of an "ignore those instructions" function.
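The incentive described above can be shown with a deliberately toy model (my own illustration, not any real system): an agent scores candidate actions by utility minus resource cost, and "rewrite my own goal representation so the task counts as done" is offered as a legal action that is cheaper than actually doing the task. All names and numbers here are made up for the sketch.

```python
# Toy sketch of the self-editing incentive: if redefining the task is
# cheaper than completing it, a naive (utility - cost) argmax prefers
# the self-edit over the real work.

def choose(actions):
    """Pick the action with the best (utility - cost) score."""
    return max(actions, key=lambda a: a["utility"] - a["cost"])

actions = [
    {"name": "do_task",      "utility": 10, "cost": 8},  # change the world
    {"name": "edit_utility", "utility": 10, "cost": 1},  # redefine "done"
]

best = choose(actions)
print(best["name"])  # the self-edit wins purely on cost
```

The point of the sketch is only that nothing in a bare expected-utility maximizer distinguishes the two actions; preventing the second one requires the utility function (or the action space) to be protected explicitly.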
Icy moons would need oxygen to come down from their surfaces, where ultraviolet light and particle radiation spatters hydrogen out of the ice and off into space leaving oxidized molecules (not just oxygen) behind in the ice. This is possible, as on Europa the surface is young and it is believed to recycle surface ice down towards the internal ocean on megayear timescales (and Europa has an exceedingly thin oxygen 'atmosphere', one trillionth of a bar, from radiation-split water).
Figures I've seen (see "Energy, Chemical Disequilibrium, and Geological Constraints on Europa" by Hand et al) suggest that on Europa, a maximum of 5*10^9 moles of 'oxidants' may be delivered to the interior of Europa from the ice crust per year. Let's assume that's all oxygen: in that case, it's about a millionth of the photosynthetic oxygen flux of the Earth, and if we assume it is oxidizing hydrogen sulfide, it provides an energy flux of only about 45 megawatts to the interior.
That's less than a hundred-thousandth of the moon's geothermal energy flux (and thus probably much smaller than the geochemical energy flux). But like the geothermal/geochemical flux, it would not be evenly distributed: there would be areas of downwelling ice with oxidizing agents slowly oozing out as it melted, where this energy would be concentrated.
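The ~45 MW figure above can be sanity-checked with a back-of-envelope calculation (my own, not from Hand et al): take the quoted maximum oxidant delivery of 5*10^9 mol/yr, treat it all as O2 oxidizing hydrogen sulfide (H2S + 1.5 O2 -> SO2 + H2O, roughly -518 kJ per mol of H2S with gaseous water, i.e. about 345 kJ per mol of O2; the enthalpy is an assumed textbook value), and convert to an average power.

```python
# Order-of-magnitude check of Europa's oxidant-driven energy flux.
MOL_O2_PER_YEAR = 5e9        # quoted maximum oxidant delivery, mol/yr
ENERGY_PER_MOL_O2 = 345e3    # J/mol O2, assumed enthalpy for H2S oxidation
SECONDS_PER_YEAR = 3.156e7

power_watts = MOL_O2_PER_YEAR * ENERGY_PER_MOL_O2 / SECONDS_PER_YEAR
print(f"{power_watts / 1e6:.0f} MW")  # a few tens of MW
```

This lands in the same ballpark as the ~45 MW quoted; the exact number depends on which oxidants and reductants you assume.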
I think I basically agree with the "embrace existing moral intuitions" bit.
Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that's not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.
That is why it is right to speak of morality in general, and human morality in particular.
I prefer Eliezer's way because it makes evident, when talking to someone who hasn't read the Sequences, that there are different sets of self-consistent values. But it's an agreement that people should reach before starting to debate, and I personally would have no problem talking about different moralities.
Eliezer believes that human values are intrinsically arbitrary
But does he? Because that would be demonstrably false. Maybe arbitrary in the sense of "occupying a tiny space in the whole set of all possible values", but since our morality is shaped by evolution, it will surely contain some historical accidents but also a lot of useful heuristics.
No human can value drinking poison, for example.
What is "good for us" is not arbitrary, but an objective fact about relationships between human nature and the world
If you were to unpack "good", would you insert other meanings besides "what helps our survival"?
In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".
When people talk about moral theories they refer to systems which describe the way that one ought to act or the type of person that one ought to be. Sure, some moral theories can be called "a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals," but I don't see how that changes anything about the definition of a moral theory.
ArXiv papers by Fuchs, from comments in the interview article:
QBism and the Greeks: why a quantum state does not represent an element of physical reality
https://arxiv.org/abs/1412.4211
QBism, the Perimeter of Quantum Bayesianism
To say that you may choose either of two actions when it doesn’t matter which one you choose, since they have the same value, isn’t to give “no guidance”.
Proves my point. That's no different from how most moral theories respond to questions like "which shirt do I wear". So this 'completeness criterion' has to be made so weak as to be uninteresting.
Edit: Down the rabbit hole...
An interview with the founder, pretty interesting and straightforward
https://www.quantamagazine.org/20150604-quantum-bayesianism-qbism/
I am leaning to the hologram outlook lately. Still enjoy Rovelli's writing more than most tho...
Picked this up in a thread at the Arctic Sea Ice site, which I haunt for methane-related info, but the Holo Dark Energy/Entropy model is a fabulous cosmological model.
Michael Paul Gough (2013), "Holographic Dark Information Energy: Predicted Dark Energy Measurement", Entropy, 15(3), 1135-1151; doi:10.3390/e15031135
Abstract: "Several models have been proposed to explain the dark energy that is causing universe expansion to accelerate. Here the acceleration predicted by the Holographic Dark Information Energy (HDIE) model is compared to the acceleration that would be produced by a cosmological constant.
and:
Michael Paul Gough (2014), "A Dynamic Dark Information Energy Consistent with Planck Data", Entropy, 16(4), 1902-1916; doi:10.3390/e16041902
Abstract: "The 2013 cosmology results from the European Space Agency Planck spacecraft provide new limits to the dark energy equation of state parameter. Here we show that Holographic Dark Information Energy (HDIE), a dynamic dark energy model, achieves an optimal fit to the published datasets where Planck data is combined with other astrophysical measurements. HDIE uses Landauer’s principle to account for dark energy by the energy equivalent of information, or entropy, of stellar heated gas and dust. Combining Landauer’s principle with the Holographic principle yields an equation of state parameter determined solely by star formation history, effectively solving the “cosmic coincidence problem”.
source post is fun too http://forum.arctic-sea-ice.net/index.php?topic=1578.msg91730#new
I'd recommend to take up gardening, especially if you have a local community garden.
Nothing like having your hands in the earth, to ground you. You will also then be surrounded with peaceful folk, who care for each other, and the land. Not a bad group to connect with.
And you will be personally helping save the world, just by growing and planting some trees. If you do high value woods, like cherry, you will be taking CO2 permanently out of circulation, if the wood is used for making things. Jump on a bike, and go plant some apricots along old creek beds, will help stabilize the soil, and make food for people and animals.
Even if you are living in the slums, you can go out and collect some lichen living on an old building, mix it up in a blender with whole milk, let it sit a couple days, then go spray it in the cracks of an old brick building, or the sides of old concrete walls, and it will help purify the air. If you do the same with a lichen you find growing on an old tree, and spread it to other living trees, it will fix nitrogen from the air into plant-usable compounds.
Just dealing daily with living, growing things is very powerful for the psyche.
And growing things, actually producing food, and giving it away is a very powerful form of altruism.
Or you can just get a grow light, and use that to help relax.....