
Comment author: arisen 02 July 2017 11:10:39PM *  0 points [-]

Expecting short Inferential Distances: wouldn't that be a case of rational thought producing beliefs which are themselves evidence? :P Manifested in over-explaining to the point of cognitive dissonance? How about applying Occam's Razor and going the shorter distance: improve the clarity of the source by means of symbolism through a reflective correction (as if to compensate for the distortion in the other lens). To me it means to steel-man the opponent's argument to the point where it becomes non-falsifiable. See, the fact that science works by falsification and pseudoscience by verification puts them in different paradigms that will only be reconciled by verification alone. Meaning also, science will have value because it can predict, so who cares about its inner workings of reason! This makes sense to me, because right now we seem to rank our intelligence superior to that of a virus, which is a problem of underestimating your enemy :). We are Neurotribes; autistic kids, for example, think in pictures. A different type of intelligence may be emerging, maybe one without beliefs :)

"It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments." Alfred North Whitehead

Comment author: entirelyuseless 02 July 2017 12:32:24AM 0 points [-]

I don't see why the problem is not solved. The probability of being at X depends directly on how I am deciding whether to turn. So I cannot possibly use that probability to decide whether to turn; I need to decide on how I will turn first, and then I can calculate the probability of being at X. This results in the original solution.

This also shows that Eliezer was mistaken in claiming that any algorithm involving randomness can be improved by making it deterministic.

Comment author: justinpombrio 01 July 2017 10:14:30PM *  0 points [-]

And then you can correct for the double-counting. When would you like to count your chickens? It's safe to count them at X or Y.

If you count them at X, then how much payoff do you expect at the end? Relative to when you'll be counting your payoff, the relative likelihood that you are at X is 1. And the expected payoff if you are at X is p^2 + 4p(1-p). This gives a total expected payoff of P(X) * E(X) = 1 * (p^2 + 4p(1-p)) = p^2 + 4p(1-p).

If you count them at Y, then how much payoff do you expect at the end? Relative to when you'll be counting your payoff, the relative likelihood that you are at Y is p. And the expected payoff if you are at Y is p + 4(1-p). This gives a total expected payoff of P(Y) * E(Y) = p * (p + 4(1-p)) = p^2 + 4p(1-p).
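The two counts above agree, and the formula can be checked numerically. A minimal sketch, assuming the standard absent-minded driver payoffs (exit at X gives 0, exit at Y gives 4, continuing past Y gives 1, with p the probability of continuing at each intersection):

```python
def expected_payoff(p):
    # E = p^2 * 1 + 4 * p * (1 - p), as derived in the comment above.
    return p**2 + 4 * p * (1 - p)

# Scan p on a fine grid to find the best continuation probability.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))  # optimum is near p = 2/3, payoff 4/3
```

This reproduces the classic planning-optimal answer for the problem: continue with probability 2/3 for an expected payoff of 4/3.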

I'm annoyed that English requires a tense on all verbs. "You are" above should be tenseless.

EDIT: formatting

Comment author: Lander7 01 July 2017 02:05:45PM 0 points [-]

An AI psychologist would stand the best chance of learning human interactions and rationality. An AI in that position could quickly learn how we work and how we understand things. The programmers would also have a great view of how the AI responds.

As for the Dojo idea, we may now have that with "luminosity.com", possibly the world's first online mental Dojo. A local Dojo would lack the visibility needed to ensure the correct training was being applied. If it were just local, it would be more like theology than rational learning.

I try to explore ideas and concepts that put people outside the realm of normality on my site thinkonyourown.com. I've found that challenging people's limits is a fantastic way of exercising the mental muscles.

Comment author: Vaniver 01 July 2017 01:15:49AM 0 points [-]

3 has been empirically disproven at this point, I believe?

Comment author: Viliam 30 June 2017 09:59:43AM 0 points [-]

I am not going to argue about the exact number here, just saying that it is a small number

I didn't mean to imply any specific correlation.

Comment author: wnoise 29 June 2017 06:59:45PM 0 points [-]

sociopaths by the clinical definition make up about 1-4% of the population.

smart sociopaths make up maybe 0.1% of the population

Are you asserting that "smart" is top decile to 2.5%, or that sociopathy is correlated to intelligence?

I'd consider a sigma away from the mean to be smart, so 0.3-1.3%.
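One way to sanity-check how these figures combine, assuming "smart" means at least one sigma above the mean and that sociopathy and intelligence are uncorrelated (both assumptions mine, for illustration):

```python
from math import erf, sqrt

# Fraction of a normal distribution at least one sigma above the mean
# (about 15.9%), via the standard normal CDF.
smart_fraction = 0.5 * (1 - erf(1 / sqrt(2)))

# Combine with the 1-4% prevalence range quoted above.
for prevalence in (0.01, 0.04):
    print(f"{prevalence:.0%} prevalence -> {prevalence * smart_fraction:.2%} smart sociopaths")
```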

Comment author: Decius 29 June 2017 04:04:08AM 0 points [-]

There will always be tasks at which better (Meta-)*Cognition is superior to any available amount of computing power and search tuning.

It becomes irrelevant if either humans aren't better than easily created AIs at that level of meta, or AIs go enough levels up for that to be a failure mode.

Comment author: Lumifer 26 June 2017 05:16:07PM 1 point [-]


Comment author: gwern 23 June 2017 03:22:21PM 3 points [-]

Computer chess: 'AIs will never master tasks like chess because they lack a soul / the creative spark / understanding of analogies' (laymen, Hofstadter etc); 'AIs don't need any of that to master tasks like chess, just computing power and well-tuned search' (most AI researchers); 'but a human-computer combination will always be the best at task X because the human is more flexible and better at meta-cognition!' (Kasparov, Tyler Cowen).

Comment author: themusicgod1 22 June 2017 04:36:45PM 0 points [-]

Similarly, an even more defensible position might be the Buddhist one: that happiness is transitory, mostly a construction of the mind, and virtually always attached to suffering, but that suffering is real and worth minimizing.

Comment author: evand 20 June 2017 05:32:49AM 0 points [-]

I hope you have renter's insurance, knowledge of a couple evacuation routes, and backups for any important data and papers and such.

Comment author: ThoughtSpeed 19 June 2017 02:11:54AM 1 point [-]

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade)
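For anyone unfamiliar with the 3^^^3 above, it is Knuth's up-arrow notation. A minimal sketch of the recursion (only tiny inputs terminate; 3^^^3 itself is astronomically beyond computation):

```python
def knuth(a, n, b):
    """Compute a (up-arrow^n) b in Knuth up-arrow notation.

    n = 1 is ordinary exponentiation; each extra arrow iterates
    the previous operation, so 3^^3 = 3^(3^3) = 3^27.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

print(knuth(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
```

3^^^3 is then 3^^(3^^3), a power tower of 3s over seven trillion levels tall, which is why "someone in 3^^^3 people" arguments behave so strangely.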

Comment author: Jiro 16 June 2017 08:52:50AM *  0 points [-]

Or, in less binary terms, why do you assign things the probabilities that you do?

I'm assuming that you assign it a high probability.

I personally am assigning it a high probability only for the sake of argument.

Since I am doing it for the sake of argument, I don't have, and need not have, any reason for doing so (other than its usefulness in argument).

In response to comment by Jiro on Nonperson Predicates
Comment author: John_Mlynarski 16 June 2017 03:37:30AM 0 points [-]

Eliezer suggested that, in order to avoid acting unethically, we should refrain from casually dismissing the possibility that other entities are sentient. I responded that I think that's a very good idea and we should actually implement it. Implementing that idea means questioning assumptions that entities aren't sentient. One tool for questioning assumptions is asking "What do you think you know, and why do you think you know it?" Or, in less binary terms, why do you assign things the probabilities that you do?

Now do you see the relevance of asking you why you believe what you do as strongly as you do, however strongly that is?

I'm not trying to "win the debate", whatever that would entail.

Tell you what though, let me offer you a trade: If you answer my question, then I will do my best to answer a question of yours in return. Sound fair?

Comment author: Jiro 15 June 2017 10:40:35PM *  0 points [-]

Having confidence in the belief is irrelevant. Assuming that you agree with it is relevant, because

1) Arguments should be based on premises that the other guy accepts. You probably accept the premise that video game characters aren't conscious.

2) It is easy to filibuster an argument by questioning things that you don't actually disagree with. Because the belief that video game characters aren't conscious is so widespread, this is probably such a filibuster. I wish to avoid those.

Comment author: cousin_it 15 June 2017 08:20:19AM 1 point [-]

Welcome! You can ask your question in the open thread as well.

Comment author: VAuroch 15 June 2017 06:07:45AM 0 points [-]

I don't think I've ever used a text that didn't. "We have" is "we have as a theorem/premise". In most cases this is an unimportant distinction to make, so you could be forgiven for not noticing, if no one ever mentioned why they were using a weird syntactic construction like that rather than plain English.

And yes, rereading the argument that does seem to be where it falls down. Though tbh, you should probably have checked your own assumptions before assuming that the question was wrong as stated.

Comment author: VAuroch 15 June 2017 05:54:37AM 0 points [-]


Comment author: research_prime_space 14 June 2017 09:40:43PM 3 points [-]

Hi! I'm 18 years old, female, and a college student (don't want to release personal information beyond that!). I'm majoring in math, and I hopefully want to use those skills for AI research :D

I found you guys from EA, and I started reading the sequences last week, but I really do have a burning question I want to post to the Discussion board so I made an account.
