All of mavant's Comments + Replies

mavant (5)

The fact that it's the same phrasing used in the literature is really concerning, because it means the interpretation the literature gives is wrong: Many subjects may in fact be generating a mental model (based on deductive reasoning, no less!) which is entirely compatible with the problem-as-stated and yet which produces a different answer than the one the researchers expected.

One could certainly write '(Ace is present OR King is present) XOR (Queen is present OR Ace is present)' which trivially reduces to '(King is present OR Queen is present) AND (Ace i... (read more)
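The comment is truncated above, but the quoted XOR expression can be checked mechanically. A quick truth-table sketch (mine, in the same Haskell register as the code further down the thread) confirms that it reduces exactly to (King XOR Queen) AND NOT Ace:

xor' :: Bool -> Bool -> Bool
xor' = (/=)

-- True iff (Ace || King) XOR (Queen || Ace) agrees with
-- (King XOR Queen) && not Ace on all eight assignments.
reductionHolds :: Bool
reductionHolds = and
  [ ((a || k) `xor'` (q || a)) == ((k `xor'` q) && not a)
  | a <- [False, True], k <- [False, True], q <- [False, True] ]
-- reductionHolds evaluates to True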

mavant (0)

Third obvious possibility: B maximises u ~ Σᵢ pᵢvᵢ, subject to the constraints E(Σᵢ pᵢvᵢ | B) ≥ E(Σᵢ pᵢvᵢ | A) and E(u | B) ≥ E(u | A), where ~ is some simple combining operation like addition or multiplication, or "the product of A and B divided by the sum of A and B".

I think these possibilities all share the problem that the constraint makes it essentially impossible to choose any action other than what A would have chosen. If A chose the action that maximized u, then B cannot choose any other action while satisfying the constraint E(u|B) ≥ E(u|A) unless there... (read more)
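To make the worry concrete, here's a minimal sketch (mine; the action list and expectation functions are illustrative stand-ins, with addition as the combining operation ~) of maximising under the constraint E(u|B) ≥ E(u|A). The feasible set collapses to exactly A's u-maximising actions, so B only gains slack where A's maximisation left ties:

import Data.List (maximumBy)
import Data.Ord (comparing)

-- B maximises a combined objective over actions that do at least as
-- well on expected u as A's choice. Assumes a nonempty action list.
pickB :: [a] -> (a -> Double) -> (a -> Double) -> a
pickB actions eU eV =
  let uA      = maximum (map eU actions)          -- E(u|A), A's u-maximising value
      allowed = filter (\x -> eU x >= uA) actions -- constraint E(u|B) >= E(u|A)
  in maximumBy (comparing (\x -> eU x + eV x)) allowed
     -- 'allowed' contains only u-maximisers, so pickB differs from A's
     -- choice only when several actions tie on expected u.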

Stuart_Armstrong (0)
I see I've miscommunicated the central idea. Let U be the proposition "the agent will remain a u-maximiser forever". Agent A acts as if P(U) = 1 (see the entry on value learning). In reality, P(U) is probably very low. So A is a u-maximiser, but a u-maximiser that acts on false beliefs. Agent B is allowed to have a better estimate of P(U), and can therefore find actions that increase u beyond what A would do.

Example: u values rubies deposited in the bank. A will just collect rubies until it can't carry them any more, then go deposit them in the bank. B, knowing that u will change to something else before A has finished collecting rubies, rushes to the bank ahead of that deadline. So E(u|B) > E(u|A). And, of course, if B can strictly increase E(u), that gives it some slack to select other actions that can increase Σᵢ pᵢvᵢ.
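To put toy numbers on this (everything below is illustrative, not from the original post): rubies accumulate at one per step, u counts only rubies banked before the change at step 10, A plans as if the change never comes, and B knows it does.

-- Toy model of the ruby example; all numbers are illustrative.
-- Rubies accumulate at one per step; u counts rubies banked before
-- the utility function changes at step tChange.
rubiesBanked :: Int -> Int -> Int
rubiesBanked tChange tDeposit
  | tDeposit <= tChange = tDeposit  -- deposited in time: all rubies count
  | otherwise           = 0         -- too late: u has already changed

uOfA, uOfB :: Int
uOfA = rubiesBanked 10 15  -- A, acting as if P(U)=1, fills up and banks at step 15: u = 0
uOfB = rubiesBanked 10 9   -- B, knowing the deadline, banks at step 9: u = 9
-- So E(u|B) = 9 > E(u|A) = 0, which is the point of the example.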
mavant (2)

The Ace is in both statements and both statements cannot be true as per the requirement.

No.

deal :: IO CardHand
deal = do
  x <- randomBoolean  -- assumed helper: a uniformly random Bool
  if x
    then generateHandsContainingEitherOrBothOf (King, Ace)
    else generateHandsContainingEitherOrBothOf (Queen, Ace)
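For completeness, a runnable version of the sketch might look like this; the Card type and handWithEitherOrBoth are my own stand-ins for the pseudocode names above:

import System.Random (randomIO, randomRIO)

data Card = King | Queen | Ace deriving (Show, Eq)
type CardHand = [Card]

-- Stand-in for generateHandsContainingEitherOrBothOf: produce a hand
-- containing the first card, the second card, or both.
handWithEitherOrBoth :: (Card, Card) -> IO CardHand
handWithEitherOrBoth (c1, c2) = do
  choice <- randomRIO (0 :: Int, 2)
  pure $ case choice of
    0 -> [c1]
    1 -> [c2]
    _ -> [c1, c2]

deal :: IO CardHand
deal = do
  x <- randomIO :: IO Bool   -- stand-in for randomBoolean
  if x
    then handWithEitherOrBoth (King, Ace)
    else handWithEitherOrBoth (Queen, Ace)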

Asking a trick question and then insisting on a particular reading does not constitute evidence of a logical fallacy being committed by the answerer.

ScottL (0)
It's not a trick question. It's pretty much the same as the example used in the literature, and then I have a few other examples that are straight from the literature. The literature on mental models is mainly on deductive reasoning; that is why the question is in the format it is.

I have rephrased it to try to make it clearer that it is not about which algorithm is correct. Can you please let me know if you think this helps? Also, did you have the same problem with the second problem?

The thing is that the problem requires a particular reading, because a different reading makes it a totally different problem. Under your reading the question really is: the dealt hand will contain cards from only one of the following sets of cards:

* K, A, K and A
* Q, A, Q and A

Obviously, that's a totally different problem. If you have any suggestions on how to improve the question, let me know.
mavant (-4)

If Despotism failed only for want of a capable benevolent despot, what chance has Democracy, which requires a whole population of capable voters?

Lumifer (0)
Democracy requires capable voters in the same way capitalism requires altruistic merchants. In other words, not at all.
Jiro (2)
It requires a population that's capable cumulatively; it doesn't require that each member of the population be capable. It's like arguing a command economy versus a free economy and saying that if the dictator in the command economy doesn't know how to run an economy, how can each consumer in a free economy know how to run the economy? They don't, individually, but as a group, the economy they produce is better than the one with the dictatorship.
mavant (0)

I don't really understand how this could occur in a TDT-agent. The agent's algorithm is causally dependent on '(max $5 $10), but considering the counterfactual severs that dependence. Observing a money-optimizer (let's call it B) choosing $5 over $10 would presumably cause the agent (call it A) to update its model of B to no longer depend on '(max $5 $10). Am I missing something here?

Vladimir_Nesov (0)
Correctly getting to the comparison of $5 and $10 is the whole point of the exercise. An agent is trying to evaluate the consequences of its action, A, which is defined by the agent's algorithm and is not known explicitly in advance. To do that, it could in some sense consider hypotheticals where its action assumes its possible values. One such hypothetical could involve a claim that A=$5. The error in question is about looking at the claim that A=$5 and making incorrect conclusions (which would result in an action that doesn't depend on comparing $5 and $10).
mavant (4)

Don't know if this has been suggested before, but:

Possibility: Harry's "Father's rock" is the Resurrection Stone. Giving this one low probability, since it has thus far demonstrated no other magical properties, and just seems like a way to get Harry to grind his Transfiguration and mana stats.

Possibility: Harry's "Father's rock" is the Philosopher's Stone. Giving this one even lower probability.

Possibility: The Philosopher's Stone is actually the Resurrection stone, or a similar magical construct. Middling probability; Dumbledore refers... (read more)

Baughn (2)
Note that the Philosopher's Stone in MoR is actually supposed to transmute base metals into silver, not gold. I can't help but think that this difference is suggestive; if it was purely the result of a happy death spiral, gold would make more sense.
hairyfigment (1)
Dumbledore (who has used Transfiguration in combat and lived) gave Harry his father's rock the day after Quirrell publicly accused Harry of always thinking purely of killing and novel ways to do it. I don't know if D wanted to encourage H in this, or to provide an alternative to some more dangerous action. (Maybe D has considered the possibility that Q has some dark reason for wanting Harry to learn the Killing Curse?) But I feel very sure that he was thinking of the use H did in fact make of it, and we don't need to imagine another purpose. If anything, Dumbledore would want to keep the Philosopher's Stone more directly under his control, eg hidden under the lampshade in his office.
mavant (-1)

Can't be Harry's blood; at age eleven he's certainly got less than 3 litres (if he weighs ~80 pounds), possibly little more than two (can't recall if HJPEV is as skinny as Canon!HP). If you cut off a limb, he might have as much as one litre "spill" out, but the rest would just sort of... dribble in spurts.
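For reference, the back-of-the-envelope behind that estimate (my own arithmetic, assuming the commonly cited ~70 ml of blood per kg of body weight for children): 80 lb ≈ 36 kg, and 36 kg × 70 ml/kg ≈ 2.5 litres, consistent with "little more than two".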

Velorien (0)
Is there any stipulation that the blood must be freshly gathered, and not kept preserved as for transfusions?
mavant (3)

It's a shame you retracted this, because I wanted to +1 it.

arborealhominid (5)
I don't actually remember why I retracted it. I tried to un-retract it afterwards, but I don't think that's possible.
mavant (0)

That ritual required quite a few more components... But then, it didn't WORK, so perhaps Burgess and his order meant to perform the one Quirrell meant.

This is my headcanon, now.

mavant (3)

At least one of the definitions is applicable to any arbitrary proposition. Either (1) it can be counterfeited, implying that there's no test you can perform to determine the true state of things, or (2) it can be tested to determine the true state of things.

BT_Uytya (0)
(non-native speaker here) I was under the impression that "to counterfeit" means only "to create imperfect copies in order to defraud someone", but it seems that it also means "to deceive". Thank you!
mavant (9)

Today I had the health exam for the life insurance policy associated with my cryonic suspension contract.

Then I grabbed my best friend and girlfriend and repeatedly showed them clips from the Futurama episode where Fry's dog waits for years after Fry gets frozen, and Fry misses his dog in the future, and the dog misses Fry in the past, etc. They are now both awaiting insurance policy quotes for their own suspension contracts.

mavant (6)

No, but Sequences-related. I finished them a couple weeks ago, and it just seemed like the only choice that still made sense.

mavant (26)

I signed up for life insurance to pay for cryonics. I'm told it'll be about six weeks from today until I'm fully covered (and CI coverage should start the same day).

Paul Crowley (1)
Hurrah!
Eliezer Yudkowsky (0)
HPMOR-related? (Curious.)
mavant (1)

For those who use public transit, Anki on the phone is life-changing. I'd advise keeping a small notepad with you in case you think of something to look up, check, add, or edit later; those are all inconvenient on the phone, especially if you're on the subway and can't get online at all.

aelephant (0)
I use Notes on iPhone to record things to look up, check, add or edit later.
luminosity (2)
Agreed. I've known about Anki for a long time, but lacked the push to get me finally using it until I read The Motivation Hacker. Now I have Anki set up on my phone, along with Beeminder. It feels really good of a morning to be able to cycle through my Anki learning for the day and tick that goal off in Beeminder.

Bonus: combined with better use of Evernote, I finally feel like I'm really getting the use out of having a smartphone that was my reason for switching to one a year ago. It amuses me that The Motivation Hacker was the push towards setting up the systems that would allow me to actually remember the important facts from the books I read, such as The Motivation Hacker itself.
mavant (0)

Any suggestions besides Rudi Hoffman for finding insurance policies? I requested a quote from him on Monday, but haven't yet heard back.

Stuart_Armstrong (0)
I live in the UK, and only know one financial advisor here: http://www.ioxfordshire.co.uk/profile/201058/Wantage/415-Independent-Financial-Planners/
mavant (1)

I recently finished the Sequences, and I'm convinced about cryopreservation (well, convinced that it's a good idea; not 100% convinced it will work...) but I'm not sure what to do next.

Is there any known reason to sign up for Alcor vs Cryonics Institute (or some other org that I'm not familiar with)? I'm young (22) and healthy, if that matters.

Stuart_Armstrong (0)
I think Alcor is generally seen as better and more expensive. But it's all a bet on the relative stability of the companies; if one of them goes bust, I'm pretty sure you can redirect your insurance policy beneficiary, so the important thing is to get that set up early...