
Comment author: pepe_prime 13 September 2017 01:20:21PM 10 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: RowanE 13 September 2017 07:00:52PM 21 points [-]

I have taken the survey.

Comment author: whpearson 29 August 2017 05:48:34PM 0 points [-]

Self-improvement wasn't her terminal value; that was only derived from her utilitarianism. She liked to improve herself and see new vistas because it allowed her to be more efficient in carrying out her goals.

I could have had her spend some time exploring her hedonistic side before looking at what she was becoming (orgasmium) and not liking it from her previous perspective. But the ASI decided that this would scar her mentally and that the two jumps, as dreams, were the best way to get her out of the situation (or I didn't want to have to try to write highly optimised bliss, one of the two).

Comment author: RowanE 02 September 2017 11:13:21PM 0 points [-]

That's the reason she liked those things in the past, but "achieving her goals" is redundant, and she should have known years in advance about that, so it's clear that she's grown so attached to self-improvement that she sees it as an end in itself. Why else would anyone ever, upon deciding to look inside themselves instead of at expected utility, replace thoughts of paragliding in Jupiter with thoughts of piano lessons?

Hedonism isn't bad, orgasmium is bad because it reduces the complexity of fun to maximising a single number.

I don't want to be upgraded into a "capable agent" and then cast back into the wilderness from whence I came, I'd settle for a one-room apartment with food and internet before that, which as a NEET I can tell you is a long way down from Reedspacer's Lower Bound.

Comment author: MattG2 29 August 2017 12:23:06AM 0 points [-]

Is it possible to make something a terminal value? If so, how?

Comment author: RowanE 29 August 2017 11:18:35AM 0 points [-]

By believing it's important enough that when you come up with a system of values, you label it a terminal one. You might find that you come up with those just by analysing the values you already have and identifying some as terminal goals, but "She had long been a believer in self-perfection and self-improvement" sounds like something one decides to care about.

Comment author: whpearson 28 August 2017 01:11:03PM 0 points [-]

A short story - titled "The end of meaning"

It is propaganda for my work on improving autonomy. Not sure it is actually useful in that regard, but it was fun to write and other people here might get a kick out of it.

Tamara blinked her eyes open. The fact that she could blink, had eyes, and was not in eternal torment filled her with elation. They'd done it! Against all the odds, the singularity had gone well. They'd defeated death, suffering, pain and torment with a single stroke. It was the start of a new age for mankind, one ruled not by a cruel nature but by a benevolent AI.

Tamara was a bit giddy about the possibilities. She could go paragliding in Jupiter's clouds, see supernovae explode and finally finish reading Infinite Jest. But what should she do first? Being a good rationalist, Tamara decided to look at the expected utility of each action. No possible action she could take would reduce the suffering of anyone or increase their happiness, because by definition the AI would be maximising those anyway with its superintelligence and human-aligned utility maximisation. She must look inside herself for which actions to take.

She had long been a believer in self-perfection and self-improvement. There were many different ways that she might self-improve: would she improve her piano playing, become an astronomy expert, or plumb the depths of understanding her brain so that she could choose to safely improve her inner algorithms? Try as she might, she couldn't make a decision between these options. Any of these changes to herself looked as valuable as any other. None of them would improve her lot in life. She should let the AI decide what she should experience to maximise her eudaimonia.

blip

Tamara struggled awake. That was some nightmare she had had about the singularity. Luckily it hadn't occurred yet; she could still fix it and make the most meaningful contribution to the human race's history by stopping death, suffering and pain.

As she went about her day's business solving decision theory problems, she was niggled by a possibility. What if the singularity had already happened and she was just in a simulation? It would make sense that the greatest feeling for people would be to solve the world's greatest problems. If the AI was trying to maximise Tamara's utility, ver might put her in a situation where she could be the most agenty and useful, which would be just before the singularity. There would have to be enough pain and suffering within the world to motivate Tamara to fix it, and enough in her life to make it feel consistent. If so, none of her actions here were meaningful; she was not actually saving humanity.

She should probably continue to try and save humanity, because of indexical uncertainty.

Although if she had this thought, her life would be plagued by doubts about whether it was meaningful or not, so she was probably not in a simulation, as her utility was not being maximised. Probably...

Another thought gripped her: what if she couldn't solve the meaningfulness problem from her nightmare? She would be trapped in a loop.

blip

A nightmare within a nightmare; that was the first time this had happened to Tamara in a very long time. Luckily she had solved the meaningfulness problem long ago, else the thoughts and worries would have plagued her. We just need to keep humans as capable agents and work on intelligence augmentation. It might seem like a longer shot than a singleton AI, requiring people to work together to build a better world, but humans would have a meaningful existence. They would be able to solve their own problems, make their own decisions about what to do based upon their goals, and also help other people; they would still be agents of their own destiny.

Comment author: RowanE 28 August 2017 10:00:20PM 1 point [-]

Serves her right for making self-improvement a foremost terminal value even when she knows that's going to be rendered irrelevant; meanwhile, the loop I'm stuck in is the first six hours spent in my catgirl volcano lair.

Comment author: RowanE 23 December 2016 01:15:53PM 5 points [-]

Seems heavy on sneering at people worried about AI, light on rational argument. It's almost like a RationalWiki article.

Comment author: Dagon 21 December 2016 12:16:08AM 6 points [-]

I think you missed out:

  • Trolls. Those who want to torment or infuriate people who take a topic seriously.
Comment author: RowanE 21 December 2016 11:02:53AM 2 points [-]

I'll add a datapoint to that and say an anonymous site like that would tempt me enough to actively go and troll, even though I'm not usually inclined towards trolling.

Although I picture it getting so immediately overwhelmed by trolls that the fun would disappear; "pissing in an ocean of piss" as 4chan calls it.

Comment author: Gunnar_Zarncke 09 December 2016 11:16:45PM 1 point [-]

Maybe you want to tell us about your result:

I'm a

My rationality score (the value where the median currently is 55) is


Comment author: RowanE 11 December 2016 03:58:56PM 0 points [-]

So, uh, are people honestly reporting that they got a "rationalist" result from this, or are they just thinking "well, I'm a rationalist, so..."?

Comment author: Bound_up 04 December 2016 08:42:08PM 0 points [-]

Putting aside the piece itself, I'm curious...what do you think a believer would say about faith if an atheist claimed to not believe in God because of faith?

Comment author: RowanE 06 December 2016 10:23:14AM 0 points [-]

"Oh, that's nice."

They wouldn't exactly be accepting the belief as equally valid; religious people already accept that people of other religions have a different faith than they do, and on at least some level they usually have to disagree with "other religions are just as valid as my own" to even call themselves believers of a particular religion. But it gets you to the point of agreeing to disagree.

Comment author: RowanE 03 December 2016 07:29:27PM 1 point [-]

Since my comment was vague enough to be misunderstood, I'll try to clarify what I thought the first time.

The dialogue reads as a comedy skit where the joke is "theists r dum". The atheist states beliefs that are a parody of certain attitudes of religious believers, and then the theist goes along with an obvious setup they should see coming a mile away. It doesn't seem any more plausible than the classic "rabbit season, duck season" exchange in Looney Tunes, so it's not valuable.

Comment author: gwern 02 September 2016 08:09:59PM 2 points [-]

Meta: is it time to switch these to bimonthly or less frequent? The past few months have seen very few quotes submitted, and none of particularly great quality.


Comment author: RowanE 01 December 2016 01:30:56PM 0 points [-]

I think an overall decrease in activity on Less Wrong is to blame - "the death of Less Wrong" has been proclaimed for a while now. In which case, decreasing the frequency of the quotes thread seems like it would add to the downward spiral if it did anything at all.
