
Comment author: latanius 28 December 2013 05:48:56PM 5 points

... I did my fair share too: Santa vs. thin threads spun across the doorway where the presents were supposed to appear... the first "stand back, I'm going to try Science" moment I can remember.

Actually, it was a really nice experience, not only of Science but also of how compartmentalization feels from the inside. I definitely remember believing both that it was my parents and that it was some kind of mystical thingy; the only new thing that year was that these two aren't supposed to coexist in the same world. Not surprisingly, it's the very same feeling I had after being exposed to a semester of Catholic middle school. I didn't have a name for it back then, though...

Comment author: latanius 04 November 2013 02:14:21AM 13 points

Martial arts training camp. Average sleep was around 4 hours per night, and with guard shifts running around the clock, it sometimes ended up being 2. So towards the end of the week I was quite... sleepy. And this seems to have an interesting effect on visual pattern recognition.

One day, another guy and I were standing guard around 4 in the morning; the sun was just about to come up. Making circles around the countryside weekend house we were staying in, I noticed that some people had appeared with a truck and started to pick grapes in the nearby field. I promptly went and reported it to the other guy, so I was pretty sure of this observation, until I went back, and...

the truck and the people somehow turned into grapes and new people appeared to pick them.

Later that week I actually made up a rule saying "the guy standing in front of the house is, regardless of how much he seems to move around, a tree", since I had actually gone over and checked earlier. Science over unreliable visual cortices...

Comment author: sixes_and_sevens 22 October 2013 12:52:59PM 8 points

Having just got a Kindle Paperwhite, I'm surprised by (a) how many neat tricks there are for getting reading material onto the device, and (b) how under-utilised and hacky this seems to be. So far I've implemented a pretty kludgey process for getting arbitrary documents / articles / blog posts onto it, but I'm pretty sure there's a lot of untapped scope for the intelligent assembly and presentation of reading material.

So, fellow infovores, what neat tips and tricks have you found for e-readers? What unlikely material do you consume on them?

Comment author: latanius 26 October 2013 06:22:10AM 2 points

k2pdfopt. It slices up PDFs so that you can read them without zooming on a much narrower screen, and since its output PDFs are essentially images, it handles everything up to (and including) very math-heavy papers, regardless of how many columns they have. It works with scanned stuff too.

(And even though the output files are a bit bigger than the originals, I didn't encounter any problems even with 600-page books... the result was about 50 MB tops.)
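(If you want to try it: the invocation is something like "k2pdfopt -dev kpw -col 2 paper.pdf", which writes the converted file next to the original (paper_k2opt.pdf or similar). The flags may differ by version, so check the tool's built-in help, but -dev picks the target device and -col sets the maximum number of columns to expect.)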

In response to Education control?
Comment author: latanius 18 May 2013 04:53:08PM 5 points

Possibly relevant: Sudbury schools, with the curriculum of "do whatever you want, as long as you're in school, surrounded by interesting stuff". Also, http://www.psychologytoday.com/blog/freedom-learn. It really seems that we are doing quite badly by default...

As it turns out, for example, kids are quite good at learning stuff from each other (including things like reading... "I can't always get the big kids to read me stories, so I'd better go and learn this <<reading>> thing from them"...)

Now, find a way to prevent that from happening. Sorting kids by age and separating the groups? Perfect.

Comment author: gothgirl420666 02 May 2013 02:56:30AM 15 points

I was wondering to what extent you guys agree with the following theory:

All humans have at least two important algorithms left over from the tribal days: one that instantly evaluates the tribal status of those we come across, and another that constantly holds a tribal status value for ourselves (let's call it self-esteem). The human brain actually operates very differently at different self-esteem levels. Low-status individuals don't need to access the parts of the brain that contain the "be a tribal leader" code, so this part of the brain is closed off to everyone except those with high self-esteem. Meanwhile, those with low self-esteem are running off of an algorithm for low-status people that mostly says "do what you're told". This is part of the reason why we can sense who is high status so easily: those who are high status are plainly executing the "do this if you're high-status" algorithms, and those who are low status aren't.

This is also the reason why socially awkward people report experiencing rare "good nights" where they feel completely confident and in control (their self-esteem was temporarily elevated, giving them access to the high-status algorithms), and why in awkward situations they feel like their "personality disappears" and they literally cannot think of anything to say (their self-esteem is temporarily lowered and they are running off of a "shut up and do what you're told" low-status algorithm). This suggests that to succeed socially, one must trick one's brain into believing that one is high-status, and then one will suddenly find oneself taking advantage of charisma one didn't know one had.

Translated out of LessWrong-speak, this equates to "A boost or drop in confidence can make you think very differently. Take advantage of confidence spirals in order to achieve social success."

Comment author: latanius 02 May 2013 03:48:06AM 7 points

Your "running different code" approach is nice... especially paired up with the notion of "how the algorithm feels from the inside", seems to explain lots of things. You can read books about what that code does, but the best you can get is some low quality software emulation... meanwhile, if you're running it, you don't even pay attention to that stuff as this is what you are.

Comment author: Adele_L 02 May 2013 02:48:57AM 0 points

Yeah, this only makes sense for preference utilitarianism; I should have mentioned that.

It is strange, to be sure. I wonder what the aggregated preferences of humanity would look like. I wouldn't be too surprised if it ended up being really similar to the aggregated preferences of current humans. Also, adding some sort of EV to this would probably make any issue here go away. But in any case, how to choose the starting set of utility functions in a moral way seems to be an open problem. Once things were running, it might work pretty well, especially once death is solved.

Why not just plan for whatever the current set of utility functions is? In the context of an FAI, it probably wouldn't want the aggregate utility function to change anyway. But again, deciding which functions to aggregate seems to be unsolved.

Comment author: latanius 02 May 2013 03:39:53AM 0 points

Aren't utility functions kind of... invariant under (positive) scaling and addition of a constant?

That is, you can say "I would like A more than B" but not "having A makes me happier than you would be having it". Nor "I'm neither happy nor unhappy, so me not existing wouldn't change anything". It's just not defined.
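A minimal sketch of the point, with made-up two-outcome utilities (only the ordering is real; all the numbers are arbitrary):

```python
# A positive affine transform u'(x) = a*u(x) + b (with a > 0) encodes the
# same preferences: it ranks outcomes identically.
def u(x):
    return {"A": 1.0, "B": 0.0}[x]  # this agent prefers A to B

def u_scaled(x):
    return 100.0 * u(x) + 7.0  # same preferences, different numbers

def v(x):
    return {"A": 0.0, "B": 1.0}[x]  # a second agent, opposite preference

outcomes = ["A", "B"]
assert max(outcomes, key=u) == max(outcomes, key=u_scaled)  # both say "A"

# But interpersonal sums are NOT invariant: rescaling one agent's
# representation (which changes nothing about that agent's own choices)
# changes the "maximize total utility" verdict.
print({x: u(x) + v(x) for x in outcomes})         # {'A': 1.0, 'B': 1.0} -- a tie
print({x: u_scaled(x) + v(x) for x in outcomes})  # {'A': 107.0, 'B': 8.0} -- "A" wins
```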

Actually, the only place different people's utility functions can be added up is in a single person's mind; that is, "I value seeing X and Y both feeling well twice as much as just X being in such a state". So "destroying beings with less than average utility" would appeal to those who tend to average utilities instead of summing them. And, of course, it also depends on what they think of those utility functions.

(that is, do we count the utility function of the person before or after giving them antidepressants?)

Of course, the additional problem is that no one sums up utility functions the same way, but there seems to be just enough correlation between individual results that we can start debates over the "right way of summing utility functions".

Comment author: ThereIsNoJustice 02 May 2013 12:59:49AM 1 point

Does anyone know the terms for the positions for and against in the following scenario?

Let's assume you have a one in a million chance of winning the lottery. Despite the poor chance, you pay five dollars to enter, and you win a large sum of money. Was playing the lottery the right choice?
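(For concreteness, the expected-value arithmetic: a $5 ticket with a one-in-a-million chance of winning has positive expected value only if the prize exceeds $5,000,000; with a $1,000,000 prize, the expected value is $1,000,000/1,000,000 − $5 = −$4 per ticket.)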

Comment author: latanius 02 May 2013 03:10:02AM 1 point

You won. Aren't rationalists supposed to be doing that?

As far as you know, your probability estimate for "I will win the lottery" was wrong. It is another question how much that should update the probability of "I would win the lottery if I played next week", but whatever made you buy that ticket (even though the "rational" estimates voted against it... "trying random things", whatever it was) should be applied more in the future.

Of course, the result is quite likely to be "learning lots of nonsense from a measurement error", but you should definitely update having seen it, and a decision that, once you update on its result, you'd make even more often in the future is definitely a right one.

If I won the lottery, I would definitely spend $5 on another ticket. And eventually you might realize that it's just Omega having fun. (actually, isn't one-boxing the same question?)

Comment author: Oscar_Cunningham 16 April 2013 11:22:35AM 2 points

> I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.

I don't quite see what you mean here. Do you know that each post has its own comments RSS feed?

Comment author: latanius 19 April 2013 01:19:53AM 0 points

... this is the thing I've been looking for! (I think I had some strange cached thought from who knows where that posts do not have comment feeds, so I didn't even check... thanks for the update!)

Comment author: [deleted] 16 April 2013 02:19:15AM 7 points

I have a super dumb question.

So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".
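(A sketch of the classic derivation, for reference: let a = b. Then a² = ab, so a² − b² = ab − b², i.e. (a + b)(a − b) = b(a − b). Dividing both sides by (a − b), which is zero, gives a + b = b, hence 2b = b and 2 = 1.)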

So there must be some other reason for the rule, 'don't divide by zero.' What is it?

In response to comment by [deleted] on Open Thread, April 15-30, 2013
Comment author: latanius 16 April 2013 04:42:16AM 2 points

Didn't they do the same with set theory? You can derive a contradiction from the existence of "the set of sets that don't contain themselves"... therefore, build a system where you just can't do that.

(of course, coming from the axioms, it's more like "it was never allowed", as in Kindly's comment, but the "new and updated" axioms were invented specifically so that it couldn't happen.)
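(Concretely: if R = {x : x ∉ x}, then R ∈ R iff R ∉ R, a contradiction. ZF's separation axiom only lets you carve subsets out of sets you already have, as {x ∈ A : φ(x)}, so R simply can't be constructed in the first place.)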

Comment author: latanius 16 April 2013 01:37:21AM 2 points

Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc.? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.

... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)
