
Comment author: korin43 31 October 2017 10:03:35PM 0 points [-]

Woo!

Also if anyone else gets a "schema validation error" when changing this setting, remove the "Website" from your profile: https://github.com/Discordius/Lesswrong2/issues/225

Comment author: Multipartite 27 October 2017 02:05:06AM *  0 points [-]

Running through this to check that my wetware handles it consistently.

Paying -100 if asked:

When the coin is flipped, one's probability branch splits: 0.5 of oneself in the 'simulation' branch, 0.5 in the 'real' branch. For the 0.5 in the real branch, upon waking there is a subjective 50% probability of being on either of the two possible days, both of which one will be woken on. So, 0.5 of the time one wakes in the simulation, 0.25 waking on real day 1, 0.25 waking on real day 2.

0.5 x (260) + 0.25 x (-100) + 0.25 x (-100) = 80. However, this is the expected cash-balance change over the course of a single choice, and doesn't take into account that Omega is waking you multiple times for the worse choice.

An equation relating the choice made to the expected gain/loss at the end of the experiment doesn't ask 'What is my expected loss according to which day in reality I might be waking up in?', but rather only 'What is my expected loss according to which branch of the coin toss I'm in?': 0.5 x (260) + 0.5 x (-100 - 100) = 30.

Another way of putting it: 0.5 x (260) + 0.25 x (-100 + (-100)) + 0.25 x (-100 + (-100)) = 30. (Given that making one choice in a 0.25 branch guarantees the same choice is made on the other waking, separated by a memory-partition: either you've already made the choice and don't remember it, or you're going to make the choice and won't remember this one, for whichever choice the expected gain/loss is being calculated for. The '-100' is the immediate choice that you will remember (or won't remember); the '(-100)' is the partition-separated choice that you don't remember (or will remember).)

--Trying to see what this looks like for an indefinite number n of reality wakings: 0.5 x (260) + n x (1/n) x (1/2) x (-100 x n) = 130 - (50 x n), which is of the form that might be expected.
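A minimal sketch of the two calculations, assuming the payoffs used above (+260 in the simulation branch, -100 per real-branch waking, n real wakings; the function names are just illustrative):

```python
def per_waking_value(pay=-100):
    # Naive "which waking am I in?" expectation for n = 2 real wakings:
    # 0.5 x 260 + 0.25 x (-100) + 0.25 x (-100) = 80
    return 0.5 * 260 + 0.25 * pay + 0.25 * pay

def per_experiment_value(n=2, pay=-100):
    # "Which branch of the coin toss am I in?" expectation:
    # 0.5 x 260 + 0.5 x (-100 x n) = 130 - 50n
    return 0.5 * 260 + 0.5 * (pay * n)

print(per_waking_value())        # 80.0
print(per_experiment_value(2))   # 30.0
print(per_experiment_value(5))   # -120.0, i.e. 130 - 50 x 5
```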

(Edit: As with reddit, frustrating that line breaks behave differently in the commenting field and the posted comment.)

Comment author: trickster 22 October 2017 05:08:18PM *  0 points [-]

No, you don't need to update your assumption. If the clever arguer chooses to argue about which box contains the diamond, rather than bet his own money on it, that is a sure sign that he has absolutely no idea about it, so his speeches can't contain any useful information, only total bullshit. It is like updating your beliefs about a future coin flip: the coin just doesn't contain information about the future, and is therefore useless for prediction. The same goes for the clever arguer.

Let me put it another way. The arguer is clever. He isn't sure which box contains the diamond, i.e. he believes it's 50/50; otherwise he would simply have bought the box he thinks contains the diamond. He has more information about the boxes than you do. So how can you be more certain than the arguer about which box contains the diamond, when you have less information than he does?

Also, I wonder: if somebody hired two clever arguers, one of them to persuade one person that the diamond is in the left box, and the other to argue to a second person that the diamond is in the right box, and the clever arguers are so good that their victims are almost sure of it... isn't that almost like creating a new diamond out of thin air?

Comment author: adjuant 22 October 2017 10:32:58AM *  1 point [-]

I think that the core of religion—that is to say, Christianity—consists of all the things that human beings ought to do.

Our purpose, both in the particular and universal sense, and our ultimate destination.

Comment author: Habryka 19 October 2017 04:22:00AM 2 points [-]

You can now also deactivate Intercom on your profile. I really wish Intercom wouldn't do the horrible thing with the tab-title.

In response to comment by Ray on Applause Lights
Comment author: Kevin92 18 October 2017 08:35:07PM 1 point [-]

I voted for Justin Trudeau but DAMN! Listen to his speeches! They're terrible!

Comment author: Fulmenius 18 October 2017 07:58:28PM 1 point [-]

I'm sorry for being slow, but this text contains the phrase: "But humanity uses gamete selection," said the Lady Sensory. "We aren't evolving any slower. If anything, choosing among millions of sperm and hundreds of eggs gives us much stronger selection pressures." Maybe I don't understand something, but in my view this phrase is biologically incorrect. The phenotype of a spermatozoon is usually determined by the father's diploid DNA (if we don't consider things such as meiotic drive genes etc.), so any competition between one man's spermatozoa is a competition between the same genes, which can't create any selection pressure. Also, even if such pressure existed, it could only lead to the evolution of spermatozoon structure and could not help adaptation to the external environment. I also apologize for my English.

In response to comment by gwern on Magical Categories
Comment author: gwern 17 October 2017 08:17:35PM 2 points [-]

I've compiled and expanded all the examples at https://www.gwern.net/Tanks

In response to comment by Squark on Dying Outside
Comment author: DragonGod 08 October 2017 03:24:43PM 0 points [-]

Amen to that comrade.

Comment author: DragonGod 07 October 2017 11:06:46AM 1 point [-]

Be careful what you wish for. It seems your wish was granted in the form of Eugine.

In response to LessWrong podcasts
Comment author: metaman 05 October 2017 08:45:17PM 1 point [-]

Castify does not appear to have survived? Are the Sequences still out there someplace in audio format?

Comment author: Vaniver 05 October 2017 05:55:43PM 2 points [-]

Our current plan is to send an email with a vote link to everyone over the threshold; we're going to decide when to have the vote later in the open beta period.

Comment author: DragonGod 04 October 2017 09:14:04AM 0 points [-]

"if the correct theory were more Kolmogorov-complex than SM+GR, then we would still be forced as rationalists to trust SM+GR over the correct theory, because there wouldn't be enough Bayesian evidence to discriminate the complex-but-correct theory from the countless complex-but-wrong theories."

I reject Solomonoff induction as the correct technical formulation of Occam's razor, and as an adequate foundation for Bayesian epistemology.
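As a hedged toy illustration of the quoted argument (not anything from the post itself): under a Solomonoff-style prior that discounts each extra bit of description length by a factor of two, a theory that is some number of bits more complex than SM+GR needs roughly that many bits of likelihood advantage before the posterior favors it.

```python
def posterior_odds(extra_complexity_bits, evidence_bits):
    """Posterior odds for the more complex theory over SM+GR, assuming a
    2**-K complexity prior and evidence measured as a log2 likelihood
    ratio (in bits) favoring the more complex theory."""
    prior_log_odds = -extra_complexity_bits  # complexity penalty in bits
    return 2.0 ** (prior_log_odds + evidence_bits)

# With a 20-bit complexity penalty, 5 bits of evidence leave SM+GR heavily
# favored; 25 bits are needed before the complex theory wins.
print(posterior_odds(20, 5))    # ~3.1e-05
print(posterior_odds(20, 25))   # 32.0
```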

Comment author: DragonGod 04 October 2017 09:07:59AM 0 points [-]

Counterexample: I changed my epistemology from Aristotelian to Aristotle + Bayes + frequentism.

Comment author: DragonGod 04 October 2017 09:06:54AM 1 point [-]

"Politics is the mindkiller" is an argument for why people should avoid getting into political discussion on Lesswrong; it is not an argument against political involvement in general. Rationalists completely retreating from Politics would likely lower the sanity waterline as far as politics is concerned. Rationalists should get more involved in politics (but outside Lesswrong) of course.

Comment author: TheAncientGeek 03 October 2017 11:37:19AM *  0 points [-]

Well, we don't know if they work magically, because we don't know that they work at all. They are just unavoidable.

It's not that philosophers weirdly and unreasonably prefer intuition to empirical facts and mathematical/logical reasoning; it is that they have reasoned that they can't do without them: that (the whole history of) empiricism and maths as foundations themselves rest on no further foundation except their intuitive appeal. That is the essence of the Inconvenient Ineradicability of Intuition. An unfounded foundation is what philosophers mean by "intuition". Philosophers talk about intuition a lot because that is where arguments and trains of thought ground out... it is a way of cutting to the chase. Most arguers and arguments are able to work out the consequences of basic intuitions correctly, so disagreements are likely to arise from differences in the basic intuitions themselves.

Philosophers therefore appeal to intuitions because they can't see how to avoid them... whatever a line of thought grounds out in is, definitionally, an intuition. It is not a case of using intuitions when there are better alternatives, epistemologically speaking. And the critics of their use of intuitions tend to be people who haven't seen the problem of unfounded foundations because they have never thought deeply enough, not people who have solved the problem of finding sub-foundations for your foundational assumptions.

Scientists are typically taught that the basic principles of maths, logic and empiricism are their foundations, and take that uncritically, without digging deeper. Empiricism is presented as a black box that produces the goods... somehow. Their subculture encourages use of basic principles to move forward, not a turn backwards to critically reflect on the validity of basic principles. That does not mean the foundational principles are not "there". Considering the foundational principles of science is a major part of philosophy of science, and philosophy of science is a philosophy-like enterprise, not a science-like enterprise, in the sense that it consists of problems that have been open for a long time and which do not have straightforward empirical solutions.

Does the use of empiricism shortcut the need for intuitions, in the sense of unfounded foundations?

For one thing, epistemology in general needs foundational assumptions as much as anything else. Which is to say that epistemology needs epistemology as much as anything else: to judge the validity of one system of epistemology, you need another one. There is no way of judging an epistemology starting from zero, from a complete blank. Since epistemology is inescapable, and since every epistemology has its basic assumptions, there are basic assumptions involved in empiricism.

Empiricism specifically has the problem of needing an ontological foundation. Philosophy illustrates this point with sceptical scenarios in which you are being systematically deceived by an evil genie. Scientific thinkers have closely parallel scenarios in which you cannot be sure whether you are in the Matrix or some other virtual reality. Either way, these hypotheses illustrate the point that the empiricists are running on an assumption that if you can see something, it is there.

Comment author: Elo 03 October 2017 07:48:49AM 0 points [-]

I think it matters insofar as it assists your present trajectory. Otherwise it might as well be an unfeeling entity.

Comment author: AFinerGrain 03 October 2017 01:54:37AM 0 points [-]

I always wonder how I should treat my future self if I reject the continuity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a degree? Invest in stock for him? Give him another child?
