Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Raemon 08 October 2017 09:53:32PM 0 points [-]

Is the stylish style actually available somewhere?

In response to comment by Raemon on Feedback on LW 2.0
Comment author: lahwran 08 October 2017 10:03:56PM 1 point [-]

the large text is a link

In response to Feedback on LW 2.0
Comment author: lahwran 03 October 2017 01:51:03AM 1 point [-]

Something I'm noticing: almost all the feedback in this thread is easy stuff. UI changes etc. are all pretty easy. The problem I expect is that the dev team won't get to the easy stuff, because the hard problem of making the page load fast will take their attention.

Comment author: Lumifer 14 July 2017 02:39:04PM 1 point [-]

"That this is possible to do is potentially useful information"

Was there any doubt about this? Did anyone ever say "No, you can't do that"?

Comment author: cousin_it 13 June 2017 05:23:02PM *  4 points [-]

Very impressive, I'm happy that Paul ended up there! There's still a lot of neural network black magic though. Stuff like this:

We use standard settings for the hyperparameters: an entropy bonus of β = 0.01, learning rate of 0.0007 decayed linearly to reach zero after 80 million timesteps (although runs were actually trained for only 50 million timesteps), n = 5 steps per update, N = 16 parallel workers, discount rate γ = 0.99, and policy gradient using Adam with α = 0.99 and ε = 10⁻⁵.
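As a quick sanity check on the schedule quoted above, here is a sketch of the standard linear-decay formula; how the paper's code actually implements the decay is an assumption on my part:

```python
def lr_at(step, lr0=7e-4, decay_steps=80_000_000):
    """Learning rate under linear decay to zero at decay_steps."""
    return max(0.0, lr0 * (1 - step / decay_steps))

# At 50 million steps (where training actually stopped), 3/8 of the
# initial rate remains: 0.0007 * 0.375 = 0.0002625.
print(lr_at(0))
print(lr_at(50_000_000))
print(lr_at(80_000_000))
```

So the runs never actually reached the zero-learning-rate endpoint of the schedule.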

For the reward predictor, we use 84x84 images as inputs (the same as the inputs to the policy), and stack 4 frames for a total 84x84x4 input tensor. This input is fed through 4 convolutional layers of size 7x7, 5x5, 3x3, and 3x3 with strides 3, 2, 1, 1, each having 16 filters, with leaky ReLU nonlinearities (α = 0.01). This is followed by a fully connected layer of size 64 and then a scalar output. All convolutional layers use batch norm and dropout with α = 0.5 to prevent predictor overfitting.
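The architecture in the quoted passage can at least be shape-checked from first principles. A minimal sketch (my own, not from the paper; it assumes unpadded "valid" convolutions, since the quote doesn't specify padding):

```python
def conv_out(size, kernel, stride, padding=0):
    """Spatial output size of one convolution layer."""
    return (size + 2 * padding - kernel) // stride + 1

layers = [(7, 3), (5, 2), (3, 1), (3, 1)]  # (kernel, stride); 16 filters each
size = 84  # from the 84x84x4 stacked-frame input
for kernel, stride in layers:
    size = conv_out(size, kernel, stride)
    print(f"{kernel}x{kernel} stride {stride} -> {size}x{size}x16")

print("flatten:", size * size * 16, "-> FC 64 -> scalar reward")
```

With those assumptions the feature maps shrink 84 → 26 → 11 → 9 → 7, so the fully connected layer sees 7x7x16 = 784 features.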

I know I sound retrograde, but how much of that is necessary, and how much can be figured out from first principles?

Comment author: lahwran 14 June 2017 08:07:54AM 2 points [-]

seems like neural network anxiety misses the point of this paper: the AI algorithms that actually work can in fact be steered in directions that have a shot at making them safe

Comment author: lahwran 22 April 2017 03:24:50PM 0 points [-]

this seems a bit oversold, but basically represents what I think is actually possible

Comment author: lahwran 09 April 2017 01:11:25PM 1 point [-]

I will say the same thing here that I did there: if (and only if) you attempt to kill me, I'll attempt to kill you back, with enough torture to make you fear that outcome. Your morality should be selected for being the best for you, logically. I commit to making sure anything that involves attacking me is very bad for you.

Comment author: username2 09 April 2017 05:03:53AM 2 points [-]

Um, obvious solution: redefine your morality. There is no objective morality. If you think the net utility of the world is negative, that really says more about you than about the world.

And if you are totally sincere in this belief, then honestly: seek professional mental help.

Comment author: lahwran 09 April 2017 01:07:45PM 1 point [-]

for what it's worth, I don't think professional mental health care is any good most of the time, and it's only worth it if you're actually psychotic. for things that don't totally destroy your agency and just mostly dampen it, I think doing things on your own is better.

Comment author: Good_Burning_Plastic 05 April 2017 11:05:12PM 0 points [-]

I think people who say[1] that guess culture only exists some places are meaningfully confused.

Or maybe they just don't fall prey to the fallacy of gray, and realize it sometimes makes sense to call something black even though it doesn't literally scatter exactly no light at all (otherwise there'd be no point in having the word, since it wouldn't apply to anything).

Comment author: lahwran 06 April 2017 05:45:59AM 0 points [-]

I understand that. I wrote the post you're replying to with that in mind. I think the thing people call guess culture actually applies almost everywhere: anything but high-trust interaction between very close friends is secretly just using different words while following the same guessing patterns. I'm not making a wordplay claim here; I actually think there is a large error in the theory, and that the update is to apply guess culture almost everywhere.

Comment author: RedMan 01 April 2017 11:24:06AM *  0 points [-]

I read this as assuming that all copies deterministically demonstrate absolute allegiance to the collective self. I question that assertion, but have no clear way of proving the argument one way or another. If 're-merging' is possible, mergeable copies intending to merge should probably be treated as a unitary entity rather than individuals for the sake of this discussion.

Ultimately, I read your position as stating that suicide is a human right; that secure deletion of an individual is not acceptable to prevent ultimate harm to that individual; but that it is acceptable to prevent harm caused by that individual to others.

This is far from a settled issue, and it has an analogy in the question 'should you terminate an uncomplicated pregnancy with terminal birth defects?' Anencephaly is a good example of this situation. The argument presented in the OP is consistent with a 'yes', and I read your line of argument as consistent with a clear 'no'.

Thanks again for the food for thought.

Comment author: lahwran 05 April 2017 07:28:55PM 0 points [-]

I acausally cooperate with agents who I evaluate to be similar to me. That includes most humans, but it includes myself REALLY HARD, and doesn't include an unborn baby. (because babies are just templates, and the thing that makes them like me is being in the world for a year or so.)
