Comment author: Unknowns 30 June 2015 03:47:36AM *  0 points [-]

This is like saying "if my brain determines my decision, then I am not making the decision at all."

Comment author: Kindly 30 June 2015 05:08:29AM 0 points [-]

Not quite. I outlined the things that have to be going on for me to be making a decision.

Comment author: Kindly 29 June 2015 07:21:54PM 0 points [-]

In the classic problem, Omega cannot influence my decision; it can only figure out what it is before I do. It is as though I am solving a math problem, and Omega solves it first; the only confusing bit is that the problem in question is self-referential.

If there is a gene that determines what my decision is, then I am not making the decision at all. Any true attempt to figure out what to do is going to depend on my understanding of logic, my familiarity with common mistakes in similar problems, my experience with all the arguments made about Newcomb's problem, and so on; if, despite all that, the box I choose has been determined since my birth, then none of these things (none of the things that make up me!) are a factor at all. Either my reasoning process is overridden in one specific case, or it is irreparably flawed to begin with.

Comment author: Bound_up 12 June 2015 06:03:06PM *  1 point [-]

Did I use Bayes' formula correctly here?

Prior: 1/20

12/20 chance that test A returns correctly +

16/20 chance that test B returns correctly +

12.5/20 chance that test C returns correctly +

Odds of correct diagnosis?

I got 1/2

Comment author: Kindly 12 June 2015 11:53:52PM *  3 points [-]

Let's assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12/20, and if - is correct, then Pr[A returns +] = 8/20).

The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.

If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10/29, just over 1/3.

You may have obtained 1/2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we'd have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1/3, not of 1/2).
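The odds calculation above can be checked mechanically. A minimal sketch (the likelihood ratios are taken from the thread's assumed accuracies: 12/20 vs 8/20, 16/20 vs 4/20, 12.5/20 vs 7.5/20):

```python
from fractions import Fraction

# Likelihood ratios Pr[+ | disease] : Pr[+ | healthy] for each test
ratios = [Fraction(12, 8), Fraction(16, 4), Fraction(25, 15)]  # 3:2, 4:1, 5:3

posterior_odds = Fraction(1, 19)  # prior Pr[disease] = 1/20, i.e. odds 1:19
for r in ratios:
    posterior_odds *= r  # assuming independence, multiply likelihood ratios

posterior = posterior_odds / (1 + posterior_odds)  # convert odds to probability
print(posterior)  # 10/29, just over 1/3
```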

Comment author: Ishaan 20 May 2015 03:51:38AM *  3 points [-]

Realistically I'd probably wrap up my affairs and prepare my loved ones, but broadly I think the comparative advantage is in performing high-risk services. The first thought that came to mind is volunteering for useful dangerous experiments that need live human subjects, but there's probably a lot of bureaucratic barriers there.

I wonder if there are any legally feasible high risk and helpful services that also pay really well...

Comment author: Kindly 20 May 2015 04:11:47AM 2 points [-]

If you're looking for high-risk activities that pay well, why are you limiting yourself to legal options?

Comment author: Toggle 11 May 2015 02:35:54AM 9 points [-]

It looks like AI is overtaking Arimaa, which is notable because Arimaa was created specifically as a challenge to AI. Congratulations to the programmer, David Wu.

Comment author: Kindly 12 May 2015 03:21:36PM 1 point [-]

On the subject of Arimaa, I've noted a general feeling of "This game is hard for computers to play -- and that makes it a much better game!"

Progress of AI research aside, why should I care if I choose a game in which the top computer beats the top human, or one in which the top human beats the top computer? (Presumably both the top human and the top computer can beat me, in either case.)

Is it that in go, you can aspire (unrealistically, perhaps) to be the top player in the world, while in chess, the highest you can ever go is a top human that will still be defeated by computers?

Or is it that chess, which computers are good at, feels like a solved problem, while go still feels mysterious and exciting? Not that we've solved either game in the sense of having solved tic-tac-toe or checkers. And I don't think we should care too much about having solved checkers either, for the purposes of actually playing the game.

Comment author: John_Maxwell_IV 11 May 2015 07:42:42AM *  3 points [-]

I recently found this blog post by Ben Kuhn where he briefly summarizes ~5 classic LW posts in the space of one blog post. A couple points:

  • I don't think that much of the content of the original posts is lost in Ben's summary, and it's a lot faster to read. Do others agree? Do we think producing a condensed summary of the LW archives at some point might be valuable? (It's possible that, for instance, the longer treatment of these concepts in the original posts pushes them deeper into your brain, or that since people are so used to skimming, conceptually dense content can actually be bad.)

  • Might it be worthwhile to link to this from the LW about page or FAQ for newcomers?


Comment author: Kindly 11 May 2015 11:37:06PM 5 points [-]

This ought to be verified by someone to whom the ideas are genuinely unfamiliar.

In response to comment by Kindly on Logical Pinpointing
Comment author: Houshalter 07 May 2015 10:08:59AM 0 points [-]

"For every x, x+n is reached from x by applying S to x some number of times" because we don't have a way to say that formally.

But that's what I'm trying to say. To say "n number of times", you start with n and subtract 1 each time until it equals zero. So for addition, 2+3 is equal to 3+2, is equal to 4+1, is equal to 5+0. For subtraction, you do the opposite and subtract one from the left number too each time.

If no number of subtract-1's causes it to equal 0, then it can't be a number.

Comment author: Kindly 07 May 2015 02:53:17PM 0 points [-]

I know that's what you're trying to say, because I would like to be able to say that, too. But here are the problems we run into.

  1. Try writing down "For all x, some number of subtract 1's cause it to equal 0". We can write "∀x. ∃y. F(x,y) = 0", but in place of F(x,y) we want "y iterations of subtract 1's from x". This is not something we can write down in first-order logic.

  2. We could write down sub(x,y,0) (in your notation) in place of F(x,y)=0 on the grounds that it ought to mean the same thing as "y iterations of subtract 1's from x cause it to equal 0". Unfortunately, it doesn't actually mean that because even in the model where pi is a number, the resulting axiom "∀x. ∃y. sub(x,y,0)" is true. If x=pi, we just set y=pi as well.

The best you can do is to add an infinitely long axiom "x=0 or x = S(0) or x = S(S(0)) or x = S(S(S(0))) or ..."
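Informally, that infinite disjunction says a number is standard exactly when some finite number of subtract-1 steps reaches 0. That "some finite number of steps" quantifier is easy to express as a computation (a hypothetical Python sketch; ordinary Python integers are of course all standard), even though it cannot be written as a single first-order formula:

```python
def reaches_zero(x, max_steps=10**6):
    """Return True if repeatedly subtracting 1 from x hits 0 within max_steps."""
    for _ in range(max_steps):
        if x == 0:
            return True
        x -= 1
    return x == 0

print(reaches_zero(5))  # True: 5 -> 4 -> 3 -> 2 -> 1 -> 0
```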

In response to comment by Kindly on Logical Pinpointing
Comment author: Houshalter 05 May 2015 10:29:06AM *  0 points [-]

repeating S n times isn't something we know how to do in first-order logic

Why not? Repeating S n times is just addition, and addition is defined in the first-order Peano axioms. I just took these from my textbook:

∀y.plus(0,y,y)

∀x.∀y.∀z.(plus(x,y,z) ⇒ plus(s(x),y,s(z)))

∀x.∀y.∀z.∀w.(plus(x,y,z) ∧ ¬same(z,w) ⇒ ¬plus(x,y,w))

I've also seen addition defined recursively somehow, so each step it subtracted 1 from the second number and added 1 to the first number, until the second number was equal to zero. Something like this:

∀x.∀y.∀z.∀w.(plus(x,y,z) ⇒ plus(s(x),w,z) ∧ same(s(w),y))

From this you could define subtraction in a similar way, and then state that all numbers subtracted from themselves, must equal 0. This would rule out nonstandard numbers.
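For concrete standard numbers, the first two axioms do pin down addition by recursion on the first argument. A small Python sketch of that recursion (the function names are illustrative, not from the thread, and this only models the standard interpretation):

```python
def s(x):
    """Successor function."""
    return x + 1

def plus(x, y):
    # plus(0, y) = y;  plus(s(x), y) = s(plus(x, y))
    if x == 0:
        return y
    return s(plus(x - 1, y))

print(plus(2, 3))  # 5
```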

Comment author: Kindly 05 May 2015 02:40:28PM 0 points [-]

Repeating S n times is not addition: addition is the thing defined by those axioms, no more, and no less. You can prove the statements:

∀x. plus(x, 1, S(x))

∀x. plus(x, 2, S(S(x)))

∀x. plus(x, 3, S(S(S(x))))

and so on, but you can't write "∀x. plus(x, n, S(S(...n...S(x))))" because that doesn't make any sense. Neither can you prove "For every x, x+n is reached from x by applying S to x some number of times" because we don't have a way to say that formally.

From outside the Peano Axioms, where we have our own notion of "number", we can say that "Adding N to x is the same as taking the successor of x N times", where N is a real-honest-to-god-natural-number. But even from the outside of the Peano Axioms, we cannot convince the Peano Axioms that there is no number called "pi". If pi happens to exist in our model, then all the values ..., pi-2, pi-1, pi, pi+1, pi+2, ... exist, and together they can be used to satisfy any theorem about the natural numbers you concoct. (For instance, sub(pi, pi, 0) is a true statement about subtraction, so the statement "∀x. sub(x, x, 0)" can be proven but does not rule out pi.)

Comment author: DonaldMcIntyre 05 May 2015 05:38:56AM 0 points [-]
  • Consequence of previous events: when things pass from state to state as a consequence of a causal chain of actions that were not initiated or continued by a living decision maker purposely provoking them.

  • Consequence of decision making: when a living being acted on a chain of physical events and modified them according to its will and therefore the pattern of the sequence is not consistent with random mechanical events.

If your decisions aren't a consequence of previous events, then they are just meaningless randomness.

I agree with the idea that living things make decisions based on the observation of reality and must not initiate actions out of nowhere.

And randomness is very distinct from the old concept of free will. Randomness is not your will. You have no control over it. Rather it controls you.

When I mention free will in my OP I am not referring to the ideological concept, but just my personal opinion that decision making in our brains must obey some randomness in order to be free of regular certainty in physics.

I don't think that randomness is in our brains; I think there must be randomness in the mechanics of physics.

Comment author: Kindly 05 May 2015 05:51:36AM 2 points [-]

What makes you think that decision making in our brains is free of "regular certainty in physics"? Deterministic systems such as weather patterns can be unpredictable enough.

To be fair, if there's some butterfly-effect nonsense going on where the exact position of a single neuron ends up determining your decision, that's not too different from randomness in the mechanics of physics. But I hope that when I make important decisions, the outcome is stable enough that it wouldn't be influenced by either of those.

Comment author: PetjaY 03 May 2015 09:51:32AM 0 points [-]

"It will no longer be correct to say that something is (a color or similar property). One must say it "seems" a color, as well as to whom. Not "Snow is white", rather, "Snow seems white to me"."

I'd say this is not needed; when people say "Snow is white" we know that it really means "Snow seems white to me", so saying it as "Snow seems white to me" adds length without adding information.

My first fixes to English would be to unite spoken and written English, with the same letters always meaning the same sounds, and to get rid of "the" in places where it does not add information (where the sentence would mean the same without it).

Comment author: Kindly 03 May 2015 04:49:15PM 1 point [-]

I'd say this is not needed; when people say "Snow is white" we know that it really means "Snow seems white to me", so saying it as "Snow seems white to me" adds length without adding information.

Ah, but imagine we're all-powerful reformists that can change absolutely anything! In that case, we can add a really simple verb that means "seems-to-me" (let's say "smee" for short) and then ask people to say "Snow smee white".

Of course, this doesn't make sense unless we provide alternatives. For instance, "er" for "I have heard that", as in "Snow er white, though I haven't seen it myself" or "The dress er gold, but smee blue."
