
Comment author: Yvain 22 July 2014 04:21:27AM *  19 points

"Hard mode" sounds too metal. The proper response to "X is hard mode" is "Bring it on!"

Therefore I object to "politics is hard mode" for the same reason I object to "driving a car with your eyes closed is hard mode". Both statements are true, but phrased to produce maximum damage.

There's also a way that "politics is hard mode" is worse than playing a video game on hard mode, or driving a car on hard mode. If you play the video game and fail, you know it, and you can switch back to an easier setting. If you drive a car in "hard mode" and crash into a tree, you know you should keep your eyes open the next time.

If you discuss politics in "hard mode", you can go your entire life being totally mind-killed (yes! I said it!) and just think everyone else is wrong, doing more and more damage each time you open your mouth and destroying every community you come in contact with.

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"? There may be a tiny handful of people wise enough to try it - and ironically, those are probably the same handful who have a tiny chance of navigating the minefield. Everyone else is just going to say "No, I'm high-enough level, YOU'RE the one who needs to bow out!"

Both "hard mode" and "mind-killer" are intended to convey a sense of danger, but the first conveys a fun, exciting danger that cool people should engage with as much as possible in order to prove their worth, and the latter conveys an extreme danger that can ruin everything and which not only clouds your faculties but clouds the faculty to realize that your faculties are clouded. As such, I think "mind-killer" is the better phrase.

EDIT: More succinctly: both phrases mean the same thing, but with different connotations. "Hard mode" sounds like we should accord more status to politics; "mind-killer" sounds like we should accord less. I feel like incentivizing more politics is a bad idea, and will justify this if anyone disagrees.

Comment author: khafra 22 July 2014 11:54:37AM 0 points

Can you imagine a human being saying "I'm sorry, I'm too low-level to participate in this discussion"?

Yes, this is what I thought of when I read this:

In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.”

It's not that politics isn't important to get right, it's just that talking about it has negative expected value. Nearly every political argument between two people leaves at least one person further entrenched in error.

Maybe "politics is like that scene in a thriller where the two guys are fighting to reach a single gun; but in this case the handle and trigger are actually poisoned."

Comment author: wedrifid 10 July 2014 02:24:35AM 3 points

Experience by itself teaches nothing... Without theory, experience has no meaning. Without theory, one has no questions to ask. Hence, without theory, there is no learning.

This is false. It is false in theory and it is false in practice. Learning can occur without theory. I spent years researching and developing systems to do just that. And on the practical side (actual human psychology), learning frequently, even predominantly, occurs without theory. Abstract theoretical reasoning is a special case of 'learning', and one that is comparatively recent and under-developed in the observed universe.

Comment author: khafra 10 July 2014 03:31:33PM 2 points

Learning can occur without theory. I spent years researching and developing systems to do just that.

If you're talking about unsupervised classification algorithms, don't they kinda make their theory as they learn? At least, in the "model," or "lossy compression" sense of "theory." Finding features that cluster well in a data set is forming a theory about that data set.
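
For concreteness, a minimal sketch (my own toy example, assuming numpy and scikit-learn; the data is synthetic) of how a clustering algorithm ends up holding a "theory" in the lossy-compression sense:

```python
# Illustrative sketch: an unsupervised learner whose "theory" is the model it builds.
# Assumes numpy and scikit-learn are available; the data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two hidden clusters the algorithm knows nothing about in advance.
data = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(100, 2)),
    rng.normal(loc=5.0, scale=1.0, size=(100, 2)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# The learned centroids are a lossy compression of the data set:
# 400 raw numbers summarized by 4 centroid coordinates plus a label per point.
print("centroids (the 'theory'):", model.cluster_centers_)

# And the "theory" generalizes: it makes predictions about points it never saw.
print("prediction for a new point:", model.predict([[4.5, 5.2]]))
```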

Comment author: khafra 08 July 2014 02:23:24PM 1 point

Has anybody written up a primer on "what if utility is lexically ordered, or otherwise not quite measurable in real numbers"? Especially in regard to dust specks?
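
For concreteness, here is a toy sketch (purely my own illustration, not from any primer) of one way "lexically ordered utility" could be cashed out: utilities are pairs compared lexicographically, so no quantity of the secondary good ever outweighs any change in the primary one.

```python
# Toy illustration of lexically ordered utility: a utility is a pair
# (primary, secondary), and outcomes are compared lexicographically.
# The labels below follow the usual torture-vs-dust-specks thought experiment.
from functools import total_ordering

@total_ordering
class LexUtility:
    def __init__(self, primary, secondary):
        self.primary = primary      # e.g. -1 per person tortured
        self.secondary = secondary  # e.g. -1 per dust speck

    def __eq__(self, other):
        return (self.primary, self.secondary) == (other.primary, other.secondary)

    def __lt__(self, other):
        # Python already compares tuples lexicographically.
        return (self.primary, self.secondary) < (other.primary, other.secondary)

torture = LexUtility(primary=-1, secondary=0)
many_specks = LexUtility(primary=0, secondary=-(3 ** 33))  # stand-in for 3^^^3, far too large to write out

# Under this ordering, no finite number of dust specks is ever worse than torture:
print(torture < many_specks)  # True
```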

Comment author: Manfred 26 June 2014 06:34:01AM *  20 points

Show me someone who makes predictions of the future by "just looking at the data," and I'll show you someone who's using a theory but not admitting it.

Comment author: khafra 26 June 2014 11:54:22AM 8 points

Yeah, in the AGW case it sounds like the question's more like "to what extent is your belief the result of climate models, and to what extent is it the result of a linear regression model?"

Comment author: brazil84 25 May 2014 10:44:48PM 1 point

There's a pretty good chance that in 10 years, you will have a good deal more information about the efficacy of cryonics; the best options; and/or your personal financial situation. So there's something to be said for buying a 20-year level policy and reevaluating things halfway through.

Yes, you are taking the risk that during the next 10 years you will become uninsurable and also fail to earn and save enough money to pay out of pocket. But I would guess this is a pretty small risk compared to the other risks you face, for example the risk of dying in such a way that cryonics is unable to preserve you.

FWIW I carry a good deal of term insurance; no whole life insurance; and am not signed up with Alcor or anyone else. I am very much taking a "wait and see" approach, which I realize is less conservative than signing up now and doing it with whole life insurance.

Comment author: khafra 27 May 2014 05:35:31PM 1 point

Your chances of dying before middle age are relatively small. Your chances of dying in a way that renders your brain preservable, before middle age, are astronomically small. Thus, although whole life costs around 2^3 times as much as term, whole life provides something like 2^8 times the benefit.
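
A back-of-the-envelope version of that arithmetic (the 2^3 cost ratio is taken from the claim above; the probabilities are placeholder guesses, purely to show the shape of the calculation):

```python
# Back-of-the-envelope comparison of term vs. whole life for cryonics funding.
# The 8x (2^3) cost ratio comes from the comment above; the probabilities are
# invented placeholders, just to show how the benefit ratio falls out.
p_preservable_death_during_term = 0.001  # preservable death before middle age (guess)
p_preservable_death_lifetime    = 0.25   # preservable death at some point ever (guess)

cost_term = 1.0           # normalize term premiums to 1
cost_whole_life = 2 ** 3  # whole life costs roughly 8x as much (from the comment)

value_term = p_preservable_death_during_term   # chance the policy actually funds a preservation
value_whole_life = p_preservable_death_lifetime

print("benefit ratio:", value_whole_life / value_term)  # ~250x, i.e. roughly 2^8
print("benefit per dollar, whole life vs term:",
      (value_whole_life / cost_whole_life) / (value_term / cost_term))  # ~30x
```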

Comment author: So8res 16 April 2014 04:57:13PM *  7 points

Incorrect -- your implementation itself also affects the environment via more than your chosen output channels. (Your brain can be scanned, etc.) If you define waste heat, neural patterns, and so on as "output channels" then sure, we can say you only interact via I/O (although the line between I and O is fuzzy enough and your control over the O is small enough that I'd personally object to the distinction).

However, AIXI is not an agent that communicates with the environment only via I/O in this way: if you insist on using the I/O model then I point out that AIXI neglects crucial I/O channels (such as its source code).

until I see the actual math

In fact, Botworld is a tool that directly lets us see where AIXI falls short. (To see the 'actual math', simply construct the game described below with an AIXItl running in the left robot.)

Consider a two-cell Botworld game containing two robots, each in a different cell. The left robot is running an AIXI, and the left square is your home square. There are three timesteps. The right square contains a robot which acts as follows:

1. If there are no other robots in the square, Pass.
2. If another robot just entered the square, Pass.
3. If another robot has been in the square for a single turn, Pass.
4. If another robot has been in the square for two turns, inspect its code. If it is exactly the smallest Turing machine which never takes any action, move Left.
5. In all other cases, Pass.

Imagine, further, that your robot (on the left) holds no items, and that the robot on the right holds a very valuable item. (Therefore, you want the right robot to be in your home square at the end of the game.) The only way to get that large reward is to move right and then rewrite yourself into the smallest Turing machine which never takes any action.

Now, consider the AIXI running on the left robot. It quickly discovers that the Turing machine which receives the highest reward acts as follows:

1. Move right
2. Rewrite self into smallest Turing machine which does nothing ever.

The AIXI then, according to the AIXI specification, does the output of the Turing machine it has found. But the AIXI's code is as follows:

1. Look for good Turing machines.
2. When you've found one, do its output.

Thus, what the AIXI will do is this: it will move right, then it will do nothing for the rest of time. But while the AIXI is simulating the Turing machine that rewrites itself into a stupid machine, the AIXI itself has not eliminated the AIXI code. The AIXI's code is simulating the Turing machine and doing what it would have done, but the code itself is not the "do nothing ever" code that the second robot was looking for -- so the AIXI fails to get the reward.

The AIXI's problem is that it assumes that if it acts like the best Turing machine it found then it will do as well as that Turing machine. This assumption is true when the AIXI only interacts with the environment over I/O channels, but is not true in the real world (where, e.g., we can inspect the AIXI's code).
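
To make that distinction concrete, here is a toy sketch in Python. It is not the actual Botworld code (Botworld itself is a Haskell project), and the class and variable names are invented for illustration; it only models the one mechanism the game turns on: the right robot inspects the left robot's source, not its behavior.

```python
# Toy model of the two-cell game above -- not real Botworld, just a sketch.
DO_NOTHING = "smallest Turing machine which never takes any action"  # stand-in string

def right_robot_inspects(left_robot_source):
    """Step 4 of the list above: move Left only on an exact source match."""
    return "move_left" if left_robot_source == DO_NOTHING else "pass"

class SelfRewriter:
    """Moves right, then genuinely rewrites itself into the do-nothing machine."""
    def __init__(self):
        self.source = "self-rewriter"
    def act(self):
        self.source = DO_NOTHING  # the rewrite really replaces its own code

class AixiLike:
    """Finds the best Turing machine and reproduces its output, but its own
    source stays 'search and simulate' the whole time."""
    def __init__(self):
        self.source = "argmax-over-Turing-machines simulator"
    def act(self):
        pass  # emits the simulated machine's output (nothing); source unchanged

for robot in (SelfRewriter(), AixiLike()):
    robot.act()  # the turn on which a rewrite could happen
    print(type(robot).__name__, "->", right_robot_inspects(robot.source))
# SelfRewriter -> move_left  (the right robot walks into the home square; reward)
# AixiLike     -> pass       (same outward behavior, but the inspection fails)
```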

Comment author: khafra 21 April 2014 05:18:20PM 0 points

If you define waste heat, neural patterns, and so on as "output channels" then sure, we can say you only interact via I/O (although the line between I and O is fuzzy enough and your control over the O is small enough that I'd personally object to the distinction).

Also, even with perfect control of your own cognition, you would be restricted to a small subset of possible output strings. Outputting bits on multiple channels, each of which is dependent on the others, constrains you considerably; although I'm not sure whether the effect is lesser or greater than having output as a side effect of computation.
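
A small counting sketch of that constraint (the parity dependency is an invented example, not anything specific to brains or AIXI): when one channel is a deterministic side effect of another, the space of reachable joint outputs collapses from 2^(2n) to 2^n.

```python
# If a second output channel is a deterministic side effect of the first
# (here: its running parity), the reachable joint outputs shrink drastically.
from itertools import product

n = 4  # bits per channel, kept tiny so we can enumerate everything

independent = {(a, b) for a in product("01", repeat=n) for b in product("01", repeat=n)}

def side_effect(bits):
    # Invented dependency: channel B leaks the running parity of channel A.
    parity, out = 0, []
    for bit in bits:
        parity ^= int(bit)
        out.append(str(parity))
    return tuple(out)

dependent = {(a, side_effect(a)) for a in product("01", repeat=n)}

print(len(independent), "joint outputs if the channels were free")   # 256
print(len(dependent), "joint outputs when B is a side effect of A")  # 16
```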

As I mentioned in a different context, it reminds me of UDT, or of the 2048 game: Every choice controls multiple actions.

Comment author: raisin 21 April 2014 03:54:08PM 0 points

Any guides on how to do that?

Comment author: khafra 21 April 2014 04:15:36PM 4 points

Rejection Therapy is focused in that direction.

Comment author: khafra 16 April 2014 01:09:55PM 2 points

Yet another possible failure mode for naive anthropic reasoning.

Comment author: khafra 09 April 2014 02:14:29PM 6 points

Since one big problem with neural nets is their lack of analyzability, this geometric approach to deep neural networks seems likely to be useful.

In response to Polling Thread
Comment author: Gunnar_Zarncke 07 April 2014 03:12:51PM 0 points

Do you practice some form of vegetarianism or another dietary or consumption restriction, and if so, which?

What are your reasons for following this practice? Please answer only if you do follow a specific restriction. Do not use this poll to state your reasons for not following such restrictions. If you are interested in that, please post a separate poll.

This practice improves my personal health

This practice benefits the overall population health

This practice improves human working/living conditions

This practice avoids harm to animals (and if applicable plants)

This practice allows us to feed more humans and reduce hunger and starvation

This practice helps preserve the environment/use it sustainably

This practice has economic benefits

This practice is encouraged in my social circles

This practice has other benefits (please elaborate in the comments)


Comment author: khafra 08 April 2014 11:18:33AM 1 point

My "other" vote is alternate-day fasting, which I've been doing all year. Not sure if that's what you're looking for, but I feel like it's a dietary restriction, and benefits my health.
