Comment author: WalterL 19 October 2016 09:21:03PM 6 points [-]

My life places me in a position to observe an uncommon number of people repenting and trying to change. As you might expect, humans being what we are, few accomplish their goal.

A fact that I've observed is that NONE of those who other themselves and blame the shard get it done. If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.

So, when you say "I have an addiction", I'm a bit concerned. A LW truism is that we don't have brains, we are brains. We aren't ghosts manning machines, we are machines.

I think it's some old "devil made me do it" stuff. The "other me" isn't real, so energy spent fighting him is wasted. Effort spent changing my behavior might bear fruit.

I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man. You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.

Comment author: WalterL 20 October 2016 02:45:03AM 3 points [-]

Sorry, I didn't mean that to be what you took from it.

I used to be fat. (I still am, but not nearly to the same extent.) Like, Jabba fat. My parents got doctors to say that I had an eating disorder, and maybe I did.

Othering my appetite never helped me. Like "I have an eating disorder" focused my energy on something (my disorder) that didn't have a mind. It couldn't get tired, or bored...it didn't exist. It's like "fighting" cancer.

But that doesn't mean that what worked was thinking "I'm a glutton".

When you say that "I am a dumb person", it isn't any closer to a thought you can act on. Kicking yourself when you are down feels good (or, at least, it did for me), it feels like "paying" for the behavior, but that's just thoughts. It doesn't actually change stuff.

I was shooting for more "I am a person who had unprotected sex with sketchy folks at place X". That feels 'actionable', if you will, to me. Like, if the problem is a sex addiction, I dunno what the solution is. If the problem is being a dumb person, I dunno what the solution is. But if the problem is going to a place and doing stuff, there are a bunch of solutions.

1: Carry protection, everywhere. Put it in something that you carry everywhere (wallet, little thingy on your car keys, cell phone case, whatever). If you ever screw someone sketchy, make sure you take it out and use it. If they aren't willing, maybe that's a spur to reconsider?

2: Enlist the help of the dudes who run the place. Tell them if they see you there, you will give them ten thousand dollars, or however much money would sting. Ask them, as friends, to kick you out. Tell them you have leprosy. Whatever words you have to say to make sure you aren't welcome back there.

3: If this place is pay to play, then ration your funds. Each morning put exactly as much cash as you'll need that day in your wallet, and don't carry a credit card.

I don't know if any of these could work for you, but something similar might. A behavior that you don't want to repeat can always be made more inconvenient. That's what helped me out with eating too much. I hope that you can do a similar thing to get yourself a different habit.

Comment author: James_Miller 20 October 2016 04:00:23PM 2 points [-]

Megyn Kelly walked by me once. If she had handed me a knife and asked me to remove my own heart and give it to her, part of my brain would have felt obligated to comply.

Comment author: siIver 20 October 2016 01:41:10AM *  2 points [-]

This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

Comment author: Manfred 20 October 2016 01:04:15AM *  2 points [-]

Depends on information. If people retain memories, so that each person-moment follows from a previous one, then knowing only that I suddenly find myself in a room means I'm probably in room A. If people are memory-wiped at some interval, then this increases the probability I should assign to being in room B: the probability of being in a specific room, given that your state of information is that you suddenly find yourself in a room, is proportional to the number of times "I have suddenly found myself in a room" is somebody's state of information.

The above is in fact true. So here's a fun puzzler for you: why is the following false?

"If you tell me the exact time, then my room is more likely B, because there are 1000 times more people in room B at that time. Since this holds for any time you could tell me, it is always true that my room is probably B, so I'm probably in room B."

Hint: Assuming that room B residents "live" 1,000,000 times longer than room A residents, how does their probability of being in room B look throughout their life, assuming they retain their memories?

Comment author: Gram_Stone 20 October 2016 12:22:21AM 2 points [-]

Here's a stab: If I understand you correctly, then every observer's experience is indistinguishable from every other's, so my credence in the proposition "I'm in room A" is 0.999 and my decision policy is "Bet that I'm in room A." If 100 trillion + 100 billion people choose room B, then 100 trillion will lose and 100 billion will win. If 100 trillion + 100 billion people choose room A, then 100 billion will lose and 100 trillion will win.
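The betting arithmetic here can be sketched in a few lines of Python. The population figures (100 trillion through room A, 100 billion through room B) are the ones assumed in this thread:

```python
# Assumed from the thread: 100 trillion people pass through room A,
# 100 billion through room B; every observer's experience is identical.
in_A, in_B = 100 * 10**12, 100 * 10**9
total = in_A + in_B

# Policy "bet that I'm in room A": all A residents win, all B residents lose.
frac_win_bet_A = in_A / total   # ~0.999
# Policy "bet that I'm in room B": only the B residents win.
frac_win_bet_B = in_B / total   # ~0.001
```

So the "bet A" policy makes roughly 999 out of every 1000 people right, matching the 0.999 credence above.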

Comment author: gbear605 19 October 2016 10:35:19PM 2 points [-]

Not OP, but each single person could be in room A for 1/1,000,000 the time that they're in room B. The time doesn't run slower, but they're there less time, producing the same effect.

Comment author: Houshalter 20 October 2016 08:48:36PM 1 point [-]

Most AI researchers have not done any research into the topic of AI risk, so their opinions are irrelevant. That's like pointing to the opinions of sailors on global warming, because global warming is about oceans and sailors should be experts on that kind of thing.

I think AI researchers are slowly warming up to AI risk. A few years ago it was a niche thing that no one had ever heard of. Now it's gotten some media attention and there is a popular book about it. Slate Star Codex has compiled a list of notable AI researchers that take AI risk seriously.

Personally, my favorite name on there is Schmidhuber, who is very well known and I think has been ahead of his time in many areas of AI, with a particular focus on general intelligence and on more general methods like reinforcement learning and recurrent nets, instead of the standard machine learning stuff. His opinions on AI risk are nuanced, though; I think he expects AIs to leave Earth and go into space, but he does accept most of the premises of AI risk.

Bostrom did a survey back in 2014 that found AI researchers think there is at least a 30% probability that AI will be "bad" or "extremely bad" for humanity. I imagine that opinion has changed since then as AI risk has become more well known. And it will only increase with time.

Lastly, this is not an outlier or 'extremist' view on this website. This is the majority opinion here and has been discussed to death in the past, and I think it's as settled as it can be expected to be. If you have any new points to make, please feel free to share them. Otherwise you aren't adding anything at all. There is literally no argument in your comment, just an appeal to authority.

Comment author: philosophytorres 20 October 2016 08:06:02PM 1 point [-]

Totally agree that some x-risks are non-agential, such as (a) risks from nature, and (b) risks produced by coordination problems, resulting in e.g. climate change and biodiversity loss. As for superpowers, I would classify them as (7). Thoughts? Any further suggestions? :-)

Comment author: hairyfigment 20 October 2016 05:24:51PM 1 point [-]

I hate this essay immediately, though a lot of that may be your false summary. (The link tries to make clear it is not about "high-status halos" in general.) Defending priests who raped children was not, I think, "primarily an upper-class phenomenon." Those who did the same for Roman Polanski were often rich, but for very different reasons.

Comment author: username2 20 October 2016 02:02:55PM *  0 points [-]

What you've expressed is the outlier, extremist view. Most AI researchers are of the opinion, if they have expressed a thought at all, that a long series of things must happen in conjunction for a Skynet-like failure to occur. It is hardly obvious at all that AI should be a highly regulated research field, like, say, nuclear weapons research.

I highly suggest expanding your reading beyond the Yudkowsky, Bostrom et al. clique.

Comment author: username2 20 October 2016 12:26:33AM *  1 point [-]

Part of my motivation is motivated by hopes of greater social status.

That would be a bad reason to join the FFL. Only a small clique of people would assign higher social status to a legionnaire, current or former. Most, thanks to Hollywood, will think you a psychopath or maybe a fugitive on the run. At best your so-called friends and family will be left wondering "why France?" -- because in their minds the only justifiable reason to serve is patriotism, and how can you be a patriot for a nation that is not your own? You don't even get a French passport at the end unless you are wounded in battle.

So if it is social status seeking by people you already know, then the FFL is not for you. But if you think it's more than that, read on.

I would read some works by Dave Grossman, specifically "On Killing" and also "On Combat". Maybe start with this classic essay by him, excerpted from "On Combat":

http://www.killology.com/sheep-wolves-and-sheepdogs

Ask yourself: are you a sheepdog? Not do you want to be a sheepdog, but are you? Is that how you define yourself? If so, the path of a warrior might have value for you. But don't take this question lightly. Talk to those whose opinion you trust and who know you well. Only tell them as many details as you need to, but get their opinion of you. You might recognize things you never realized in how others see you. Maybe reach out to some warrior forums and get some opinions from those who did serve.

If you do join the légion étrangère, it is really in your interests to go all-in. Make sure you excel at everything you do, top of the class. Then volunteer for the 2e REP parachute regiment, which is the special ops branch. Volunteer for as much advanced training as you can get. You'll then serve the remaining 3-4 years of your tour on assignments in various hot spots. Get to know everyone, but particularly focus on those that are the most professional. This will help your career whether you stay with the legion or not.

When you're out you'll have access to the alumni network of people that live not above but outside the rule of law. Those who understand the true nature and basis of governance and serve when it is necessary as its blunt instrument. Your reputation is what gates access to this network.

One thing to note: "soldier of fortune" is a misnomer. You will not get rich being a merc. The pay is decent, and tax-free, but we're talking maybe $100k/yr. You can do way better in tech or finance, and without putting your life on the line. Only do this if it is a serious calling.

it's an 'x' (can't remember what they guessed) and an asian together?' it's a really strange voice - they were two teenage guys. Not quite racist, but it could be in the future. I'm mixed ethnicity but appear either dark-skinned Latino or Arab, I guess.

Nah man that's totally racist, and f-- them. I would have been in their faces causing trouble if that was at me or my girl. That goes far beyond "having a good comeback" territory.

Comment author: RomeoStevens 19 October 2016 10:56:13PM *  1 point [-]

Yeah, to rephrase: do we update based on subjective or objective measure of time?

There are two groups of brains, x1 and x2. x1 exists for a million years but only experiences 1000 years of subjective time. x2 exists for 1000 years but experiences a million years of subjective time.

If you don't know which of the groups you are in, you'll update differently depending on which rule you are following. If updating on objective time you'll update towards x1; if updating on subjective time you'll update towards x2. What meta-rule might we propose that would generate a difference between x1 and x2? I can't imagine one.

Comment author: turchin 19 October 2016 09:52:17PM 1 point [-]

I don't understand how it could happen: do you mean that time in room B is a million times slower?

Comment author: turchin 20 October 2016 08:39:31PM *  0 points [-]

I would also add Doomsday blackmailers: rational agents who would create a Doomsday Machine to blackmail the world with the goal of world domination.

Another option worth considering is arrogant scientists who benefit personally from dangerous experiments. For example, CERN proceeded with the LHC before its safety was proven. A group of bioscientists excavated the 1918 pandemic flu, sequenced it, and posted it on the internet. And another scientist deliberately created a new superflu while studying the genetic variations that could make bird flu stronger. We could imagine a scientist who would try to increase his personal longevity by gene therapy, even if it posed a 1 percent pandemic risk. And if there are many of them...

There is also a possible class of agents who try to create a smaller catastrophe in order to prevent a larger one. The recent movie "Inferno" is about this: a character creates a virus to kill half of humanity in order to save all of humanity later.

I listed all my ideas in my agent map, which is here on Less Wrong http://lesswrong.com/r/discussion/lw/o0m/the_map_of_agents_which_may_create_xrisks/

Comment author: faul_sname 20 October 2016 08:36:10PM 0 points [-]

How many seconds have you been in the room?

Let's say the time between t1 and t2 is 1 trillion seconds. Let us further assume that all people go through the rooms in the same amount of time (thus people spend 1 second each in room A, and 1 million seconds each in room B).

100 trillion of the 100.1 trillion observer-moments between 0 and 1 seconds into a stay occur in room A. All of the observer-moments past 1 second occur in room B. (This is somewhat flawed in that the observers might not all spend the same amount of time in a given room, but even in the case where 100 million people stay in room A for 1 million seconds each and the rest spend no time there, an observer who's been in a room for 1 million seconds is still overwhelmingly likely to be in room B.) So basically, the longer you've been in the room, the more probable you should consider it that you're in room B.
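A minimal sketch of this observer-moment bookkeeping, assuming the 100 trillion / 100 billion split and the 1-second / 1-million-second stay lengths used in this thread:

```python
# Assumed figures from the thread: 100 trillion people stay 1 second each
# in room A; 100 billion people stay 1,000,000 seconds each in room B.
N_A, STAY_A = 100 * 10**12, 1
N_B, STAY_B = 100 * 10**9, 10**6

# Arrivals spread evenly over 1 trillion seconds -> occupants at any instant.
T = 10**12
occ_A = (N_A / T) * STAY_A   # 100 people in room A at a time
occ_B = (N_B / T) * STAY_B   # 100,000 in room B: 1000x more, as stated

# Observer-moments in the first second of a stay: all of A's person-seconds,
# but only the first second of each B resident's much longer stay.
first_sec = N_A * 1 + N_B * 1
p_A_given_under_1s = (N_A * 1) / first_sec   # ~0.999

# Past the first second, every observer-moment belongs to room B.
p_B_given_over_1s = 1.0
```

This shows both halves of the argument at once: at any instant room B holds 1000x more people, yet an observer who knows they've been in a room for under a second should still bet on A.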

If an observer doesn't know how long they've been in a given room, I'm not sure how meaningful it is to call them "an" observer.

Comment author: Houshalter 20 October 2016 08:31:48PM 0 points [-]

I agree. But it's worthwhile to try to get AI researchers on our side, and get them researching things relevant to FAI. Perhaps lesswrong could have some influence on this group. If nothing else it's interesting to keep an eye on how AI is progressing.

Comment author: Houshalter 20 October 2016 08:29:43PM 0 points [-]

See my other comment for more clarification on how CEV would eliminate negative values.

Comment author: Houshalter 20 October 2016 08:27:59PM 0 points [-]

how will you create or choose a process which will build a FAI?

You are literally asking me to solve the FAI problem right here and now. I understand that FAI is a very hard problem and I don't expect to solve it instantly. But just because a problem is hard doesn't mean it can't have a solution.

First of all, let me adopt some terminology from Superintelligence. I think FAI requires solving two somewhat different problems: Value Learning and Value Loading.

You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want. I think that's the easy problem, and any intelligent AI will form a model of humans and understand what we want. Getting it to care about what we want seems like the hard problem to me.

But I do see some promising ideas for approaching the problem. For instance, have AIs that predict what choices a human would make in each situation. You basically get an AI which is just a human, but sped up a lot. Or have an AI which presents arguments for and against each choice, so that humans can make more informed choices. It could then predict what choice a human would make after hearing all the arguments, and do that.

More complicated ideas were mentioned in Superintelligence. I like the idea of "motivational scaffolding": somehow train an AI that can learn how the world works and can generate an "interpretable model", e.g. one able to understand English sentences and translate their meanings into representations the AI can use. Then you can explicitly program a utility function into the AI using its learned model.

That doesn't make much sense. What do you mean by "negative" and from which point of view?

From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I'm showing you that CEV would eliminate these things.

Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of "improve".

Your stated example was ISIS. ISIS is so bad because they incorrectly believe that God is on their side and wants them to do the things they do. That the people that die will go to heaven, so loss of life isn't so bad. If they were more intelligent, informed, and rational... If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.

The second thing CEV does is average everyone's values together. So even if ISIS really does value killing people, their victims value not being killed even more. So a CEV of all of humanity would still value life, even if evil people's values are included. Even if everyone was a sociopath, their CEV would still be the best compromise possible, between everyone's values.

Comment author: NancyLebovitz 20 October 2016 08:18:45PM 0 points [-]

I'm improving the subject line-- I agree that I need to do better with the summary.

Comment author: philosophytorres 20 October 2016 08:08:23PM 0 points [-]

What do you mean? How is mitigating climate change related to blackmail?

Comment author: ingive 20 October 2016 08:08:04PM *  0 points [-]

LOGIC NATION: A Psychological Revolution https://www.youtube.com/watch?v=drcseH-7hpw

What does LW think, either after trying it or after deciding not to try?

Comment author: philosophytorres 20 October 2016 08:07:56PM 0 points [-]

I actually think most historical groups wanted to vanquish the enemy, but not destroy either themselves or the environment to the point at which it's no longer livable. This is one of the interesting things that shifts to the foreground when thinking about agents in the context of existential risks. As for people fighting to the death, often this was done for the sake of group survival, where the group is the relevant unit here. (Thoughts?)

Comment author: philosophytorres 20 October 2016 08:04:53PM 0 points [-]

(2) is quite different in that it isn't motivated by supernatural eschatologies. Thus, the ideological and psychological profiles of ecoterrorists are quite different from those of apocalyptic terrorists, who are bound together by certain common worldview-related threads.

Comment author: philosophytorres 20 October 2016 08:02:50PM 0 points [-]

I think my language could have been more precise: it's not merely genocidal but humanicidal or omnicidal agents that we're talking about in the context of x-risks. Also, the Khmer Rouge wasn't suicidal to my knowledge. Am I less right?

Comment author: philosophytorres 20 October 2016 07:58:05PM 0 points [-]

As for your first comment, imagine that everyone "wakes up" in a room with only the information provided and no prior memories. After 5 minutes, they're put back to sleep -- but before this occurs they're asked about which room they're in. (Does that make sense?)

Comment author: korin43 20 October 2016 06:52:06PM *  0 points [-]

If I had to summarize it quickly, I'd say this is an article about how boring / generic things are seen as more legitimate, especially if large groups of people do them. For example, if you spend every waking moment working on some futuristic technology, people will see you very differently if you're doing it in your garage (alone), working for a small startup (small group, presumably all working on similar projects), or working for IBM (large group, doing unrelated work). Also, we don't like to talk about people's opinions, and we'd rather talk about the "opinions" of organizations like political parties.

I thought it was interesting, but I can't help feeling like it would have been a much better article without a Ra metaphor. It seemed like a huge stretch and a significant portion of the article was trying (and in my opinion, failing) to justify the metaphor.

Comment author: WalterL 20 October 2016 05:28:58PM 0 points [-]

Probably? Most people are attached to their existence (insert obvious comment re: what kind of people win a competition between those who love life and those who can take it or leave it), and giving up your existence to create some alien thing is still giving up your existence, even if you are told it is going to have a great time.

Comment author: NancyLebovitz 20 October 2016 05:14:18PM *  0 points [-]

Dialectical Behavioral Therapy is at least worth looking into.

DBT combines standard cognitive behavioral techniques for emotion regulation and reality-testing with concepts of distress tolerance, acceptance, and mindful awareness largely derived from Buddhist meditative practice. DBT is the first therapy that has been experimentally demonstrated to be generally effective in treating BPD.[8][9] The first randomized clinical trial of DBT showed reduced rates of suicidal gestures, psychiatric hospitalizations, and treatment drop-outs when compared to treatment as usual.[4] A meta-analysis found that DBT reached moderate effects in individuals with borderline personality disorder.[10]

Comment author: scarcegreengrass 20 October 2016 04:16:37PM 0 points [-]

First of all, it's hard to imagine anyone slowing down AI research. Even a ban by n large governments would only encourage research in places beyond government control.

Second, there is quite a lot of uncertainty about the difficulty of these tasks. Both human-comparable software and AI value alignment probably involve multiple difficult subproblems that have barely been researched so far.

Comment author: ingive 20 October 2016 03:34:02PM 0 points [-]

This sounds like an Eastern philosophy/religion narrative. Is logic / mathematical patterns basically dao (or tao)? The promise of peace if you abandon the mistaken urges of the emotional mind is also quite familiar :-)

I see it as two sides of the same coin: the sensational world is logic, and the other side is the indescribable/unmanifested/nothingness/void. Both sides are how it is, and I am fine with that. Who knows?

Ah, I see. It's Athene/Boumaaza. Why didn't you say so from the start?

Maybe I should have from the start. That's where all of this is from. https://www.twitch.tv/athenelive/v/95140378?t=1h28m23s

Comment author: philosophytorres 20 October 2016 03:23:44PM 0 points [-]

Yes to both possibilities. But gbear605 is closer to what I was thinking.

Comment author: Lumifer 20 October 2016 02:27:13PM *  0 points [-]

This sounds like an Eastern philosophy/religion narrative. Is logic / mathematical patterns basically dao (or tao)? The promise of peace if you abandon the mistaken urges of the emotional mind is also quite familiar :-)

Ah, I see. It's Athene/Boumaaza. Why didn't you say so from the start?

Comment author: ingive 20 October 2016 02:22:54PM 0 points [-]

Well, then we get anxious and avoid talking to our crush, wondering where our rational brain went, when it was never there in the first place (emotional drive). Maybe. :D

Comment author: entirelyuseless 20 October 2016 01:29:15PM 0 points [-]

This would be a long discussion, but there's some truth in that, and some falsehood.

Comment author: TheAncientGeek 20 October 2016 01:20:04PM *  0 points [-]

The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.

That seems different to what you were saying before.

This is well explored in "Three Worlds Collide". Yudkowsky's vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I'm using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.

There's not much objectivity in that.

Why is it so important that our morality is the one that motivates us? People keep repeating it as though it's a great revelation, but it's equally true that babyeater morality motivates babyeaters, so the situation comes out looking symmetrical and therefore relativistic.

Comment author: ingive 20 October 2016 11:10:29AM 0 points [-]

No problem, did you awaken? Some have said that after awakening they would like to spread it to other people, but it is hard to be honest with oneself and do this to awaken.

There aren't any better words than these: paradigm shift, click, etc. I suggest going here https://www.reddit.com/r/makingsense and reading other people's experiences.

with best regards

Comment author: ingive 20 October 2016 11:02:45AM *  0 points [-]

Well then, we're done, aren't we? If everything is already logical there doesn't seem to be any point in complaining about lack of logic.

Well, emotionally most of us aren't aligned with logic but with some other value like comfort, even though logic is what has given us all the comfort, technology and so forth. This is a psychological experience.

Mathematical patterns created us? That's an interesting idea.

Without logic most of us probably wouldn't have existed; its technology has satisfied our most primal needs such as food, water, healthcare, and shelter. Going back in time, you see evolution and evolutionary biology: how stardust evolved into us.

So there is an "emotional mind" (which, I think, means the subconsciousness) and the "rational mind" (aka the consciousness). The emotional mind wants things like identity (you probably mean the sense of belonging) and safety. You think the emotional mind is wrong and it should really want... mathematical patterns?

Well yes, that's what we do: we value things like safety or comfort over everything else. Yet it is contradictory, as whatever we value can never be guaranteed, or it can be taken from us. Looking at what even made these things possible in the first place, whether it be a parent or a spouse, it was logic that gave us this. Of course it's hard to see on an emotional level. The emotional mind is weakening the rational mind.

Therefore the rational mind should yell at the emotional mind to stop being silly and just do what the rational minds tells it to do? And the emotional mind will say "yes, of course" instead of extending the middle finger, as usual? And if that process actually happens, your complete whole will achieve the mathematical-pattern enlightenment while ignoring the not-important stuff like safety?

In all actuality, I think it is the emotional mind which is driving us and weakening the rational part of the brain by not submitting to logic. It is right before our eyes that everything we have has been brought about by logic, and it all seems to be universal in this sensational world. When the emotional mind realizes that rationality or logic will bring it all the safety it needs, as the easy heuristics did previously, it will be at peace. Because nothing can bring you more comfort: if you aren't comfortable and it is solvable, you solve it; otherwise you accept it for what it is.

It is a paradigm shift.

If you were uncomfortable, however, and it wasn't solvable, then with comfort as your most important value you will suffer and you won't understand why. Which is what we are doing. It is a rat race.

And you promise it will be grrrrrreat(tm)?

You will feel euphoric after logic becomes your most important value. It will fade away over time, and then you might become very anxious because you don't understand anything, so you will have to rebuild everything you thought you knew before. At the point of being anxious you might think it has stopped working, that it doesn't make sense, and you might return to your old most important value then, instead of searching for the logic in all things. See my reply to another commenter about where to go if you do try this out.

Comment author: MrMind 20 October 2016 07:56:29AM 0 points [-]

Am I the only one who will be gladly superhappified?

Comment author: MrMind 20 October 2016 07:53:42AM 0 points [-]

So my car is a car because it motor-vates me, but your car is no car at all, because it motor-vates you around, but not me.

The difference is not between two cars, yours and mine, but between a passenger ship and a cargo ship, built for two different purposes and two different classes of users.

Yudkowsky isn't being rigorous; he is instead appealing to an imaginary rule, one that is not seen in any other case.

On this we surely agree, I just find the new rule better than the old one. But this is the least important part of the whole discussion.

obviously the permissibility of imposing one's values on others depends on whether they are immoral, amoral, differently moral, etc. Differently moral is still a possibility, for the reasons that you are differently mothered, not unmothered.

This is well explored in "Three Worlds Collide". Yudkowsky's vision of morality is such that it assigns different moralities to different aliens, and the same morality to the same species (I'm using your convention). When different worlds collide, it is moral for us to stop the babyeaters from eating babies, and it is moral for the superhappies to happify us. I think Eliezer is correct in showing that the only solution is avoiding contact at all.

Comment author: Strangeattractor 20 October 2016 04:56:30AM 0 points [-]

Farming is a completely different type of job than software development or designing integrated circuits.

When you program a computer, it does what you tell it to. You can diagnose what's wrong and fix it. Whether you can fix things is based a lot on your own skill and knowledge.

If you become a farmer, you can do everything skillfully and still have a crop fail. You are at the mercy of the weather.

Have you ever grown a small vegetable garden? That might be a first step toward getting some idea of what you are facing if you go into farming.

Don't assume farming will be easy. It can be one of the toughest jobs. Spend some time out on a farm helping out with whatever needs to be done to research this option. If you enjoy being outdoors, and are ok with the uncertainty of farming, then maybe it is for you. If you love the land, then farming may be for you.

Since your wife wants to move to another country, I think it is worth going through the immigration processes for various countries. Immigration processes can take years, and still have an uncertain result, so you might as well get started. If you can afford to visit some countries, that may make things clearer too. There is more to moving to another country than just money. Climate and culture can have a big effect on a person's life.

Where I live in Canada, a starting salary for a software developer can be about CAD $40 000. So the $100 000 figure other people are talking about isn't applicable everywhere.

Regarding freelance work, do you have enough time in the day that you could start doing freelance work while still working at the job you have? If you take on some side projects, you could see how that goes without quitting your job.

I think part of what this decision comes down to is what your goals are, and what the goals of your family are. What are your priorities? What is most important to you?

I think it could help to take the first steps on all three options at once, to get a taste of what each of them is like. Things might become clearer as you get more information. Right now I don't think you know enough about each option to make a good decision. Research is what is required.

Also, I read a book recently about making good decisions. It is called "Decisive: How to Make Better Choices in Life and Work" by Chip Heath and Dan Heath. There are some techniques in there, like asking yourself, "Imagine a year from now the project I was attempting to do has failed. What are the reasons?" that have helped me figure some things out in my own life.

Comment author: TheAncientGeek 20 October 2016 04:49:37AM *  0 points [-]

Good would be what tends to cause tendencies towards itself. Survival is one example.

Any virulently self-reproducing meme would be another.

Comment author: entirelyuseless 20 October 2016 04:39:02AM 0 points [-]

I agree.

Comment author: TheAncientGeek 20 October 2016 04:31:55AM 0 points [-]

I'm saying it's a bad idea to collapse together the ideas of moral obligation, moral advisability, and pleasure.

Comment author: entirelyuseless 20 October 2016 02:51:06AM 0 points [-]

I'm not sure what you're saying. I would describe giving to charity as morally good without implying that not giving is morally evil.

I agree that moral goodness is different from hedonic goodness (which I assume means pleasure), but I would describe that by saying that pleasure is good in a certain way, but may or may not be good all things considered, while moral goodness means what is good all things considered.

Comment author: entirelyuseless 20 October 2016 02:14:11AM 0 points [-]

I do not support "letting a sentient being eat babies just because it wants to" in general. So for example if there is a human who wants to eat babies, I would prevent that. But that is because it is bad for humans to eat babies. In the case of the babyeaters, it is by stipulation good for them.

That stipulation itself, by the way, is not really a reasonable one. Some species do sometimes eat babies, and it is possible that such a species could develop reason. But it is likely that the very process of developing reason would impede the eating of babies, and eating babies would become unusual, much as cannibalism is unusual in human societies. And just as cannibalism is wrong for humans, eating babies would become wrong for that species. But Eliezer makes the stipulation because, as I said, he believes that human values are intrinsically arbitrary, from an absolute standpoint.

So there is a metaethical disagreement. You could put it this way: I think that reality is fundamentally good, and therefore actually existing species will have fundamentally good values. Eliezer thinks that reality is fundamentally indifferent, and therefore actually existing species will have fundamentally indifferent values.

But given the stipulation, yes I am serious. And no I would not accept those solutions, unless those solutions were acceptable to them anyway -- which would prove my point that eating babies was not actually good for them, and not actually a true part of their values.

Comment author: WhySpace 19 October 2016 09:30:48PM 0 points [-]

Discussion went down (and is back up) for me too.

I also see the error message when clicking the "new article" or "new link" buttons. It's been that way for a while. Does anyone else get the same thing?

Comment author: Clarity 19 October 2016 09:30:36PM 0 points [-]

A fact that I've observed is that NONE of those who other themselves and blame the shard get it done.

I don't smoke meth!

If someone says "I've got a terrible temper", he will still hit. If he says "I hit my girlfriend", he might stop. If someone says "I have shitty executive function", he will still be late. If he says "I broke my promise", he might change.

Wow, I never thought of it like that. So internal attributions lead to antisocial behaviour, compared to external attributions which lead to behaviour change?

I'm reading a lot into phrasing, so if this isn't you, my bad. Just...my advice... be sure to own your stuff man.

I think you are on to something, but I find it a bit hard to understand.

You either "have an addiction", or "screwed some randos without protection", and my experience suggests that thinking of it as the second one will help you more.

I think you are right. It's just that I feel so shamed thinking of the second one. I can feel psychological defences like denial and rationalisations coming to my mind while I type. I screwed some random without protection. I am a dumb person.

Comment author: Clarity 19 October 2016 09:27:03PM 0 points [-]

Thank you for your advice.

French Foreign Legion

Part of my motivation comes from hopes of greater social status. Recently I read that sociometric status is associated with greater happiness, which fits with my notions about the esteem of the military. However, when I read the empirical research on the correlates of sociometric status (at least in kids), the traits involved appear very different from what would be expected in the military!

https://www.ncbi.nlm.nih.gov/pubmed/1619136 https://www.ncbi.nlm.nih.gov/pubmed/3286817 https://www.jstor.org/stable/23086265?seq=1#page_scan_tab_contents

Also, I ran across a paper and a podcast saying that peer ratings (essentially sociometric status) factor into military career advancement, so that confounds things.

I think I'd prefer to be a lover rather than a fighter. I mean, I am probably driven to fight by my childhood trauma, and I shouldn't give in to that. I can probably gain a better life by developing a secure attachment style instead.

Interracial dating

Yes, when I went out with an Asian girl in the past, someone once said "An 'x' (can't remember what they guessed) and an Asian together?" in a really strange voice (they were two teenage guys). Not quite racist, but it could be in the future. I'm of mixed ethnicity but appear either dark-skinned Latino or Arab, I guess.

Re STD check

It happened on Sunday the 16th. I will not have sexual contact until I get tested, but some STIs incubate for up to 9 weeks (hep C, though I couldn't have been exposed to that). The next longest is HIV at 1-3 months, but that's low likelihood since I didn't have unprotected penetrative sex, so I am not concerned this time (but don't do it again, Clarity! Just in case!). Oral herpes incubates in 4-6 weeks, but apparently everyone has that, even kids, so I am not concerned about it. The same goes for genital herpes (4-6 weeks), which I am concerned about, so I will wait the full upper bound of 6 weeks to get tested. That means testing on the 27th of November. I'll just have to do it when I'm overseas, I guess...

Comment author: WalterL 19 October 2016 09:11:26PM 0 points [-]

Seems like it is back now.

Comment author: ChristianKl 19 October 2016 09:07:04PM 0 points [-]

Also, is there a place where Yudkowsky has talked about Climate Change?

I'm not aware of any specific one. I think he doesn't consider Climate Change to be important because he thinks FOOM will happen sooner or later.
