In response to Why CFAR's Mission?
Comment author: ChristianKl 02 January 2016 12:30:42PM *  12 points [-]

Why is CFAR's main venue for teaching those skills a 4-day workshop?

Why not weekly classes of 2 to 3 hours?
Why not a focus on written material as the original sequences had?
Why not a focus on creating videos that teach rationality skills?
Why not focus on creating software that trains the skills?

Comment author: Xachariah 06 January 2016 08:13:12AM *  2 points [-]

This is my main question. I've never seen anything to imply that multi-day workshops are an effective method of learning. Going further, I'm not sure how Less Wrong supports Spaced Repetition and Distributed Practice on one hand while also supporting an organization whose primary outreach seems to be crash courses. It's as if Less Wrong is exhibiting a forum-wide cognitive dissonance that nobody notices.

That leaves a few options:

  • I'm wrong (though I consider this highly unlikely).
  • CFAR never bothered to look it up, or uses self-selection to convince itself it's effective.
  • CFAR is trying to optimize for something other than spreading rationality, but isn't actually saying what.
Comment author: Kindly 02 May 2015 07:36:43PM 4 points [-]

Yyyyes and no. Our utility functions are nonlinear, especially with respect to infinitesimal risk, but this is not inherently bad. There's no reason for our utility to be everywhere linear with wealth: in fact, it would be very strange for someone to equally value "Having $1 million" and "Having $2 million with 50% probability, and having no money at all (and starving on the street) otherwise".

Insurance does take advantage of this, and it's weird in that both the insurance salesman and the buyers of insurance end up better off in expected utility, but it's not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.

The Allais paradox points out that people are not only averse to risk, but also inconsistent about how they are averse to it. The utility function U(X cents) = X is not risk-averse, and it picks gambles 1B and 2B (in Wikipedia's notation). The utility function U(X cents) = log X is extremely risk-averse, and it picks gambles 1A and 2A. Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.

There's a Dutch book for the Allais paradox in this post; read on from "money pump".
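The expected-utility arithmetic behind those picks can be checked directly. A minimal sketch (payoffs in cents to match the U(X cents) notation; flooring at 1 cent to keep the logarithm defined is my own assumption):

```python
import math

# Allais gambles in Wikipedia's notation, payoffs in cents.
# Each gamble is a list of (probability, payoff) pairs.
gamble_1A = [(1.00, 100_000_000)]                                  # $1M for sure
gamble_1B = [(0.89, 100_000_000), (0.10, 500_000_000), (0.01, 0)]
gamble_2A = [(0.11, 100_000_000), (0.89, 0)]
gamble_2B = [(0.10, 500_000_000), (0.90, 0)]

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

def linear(x):
    return x                        # risk-neutral: U(X cents) = X

def log_u(x):
    return math.log(max(x, 1))      # risk-averse: U(X cents) = log X, floored at 1 cent

# Linear utility prefers 1B and 2B; log utility prefers 1A and 2A.
# No single utility function prefers both 1A and 2B.
```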

Comment author: Xachariah 02 May 2015 07:53:09PM *  2 points [-]

I didn't mean to imply nonlinear functions are bad. It's just how humans are.

Picking gambles 1A and 2B, on the other hand, cannot be described by any utility function.

Prospect Theory describes this, and there's even a post about it here on lesswrong. My understanding is that humans have both a non-linear utility function and a non-linear probability-weighting function. This seems like a useful safeguard against imperfect risk estimation.

[Insurance is] not a Dutch Book in the usual sense: it doesn't guarantee either side a profit.

If you set up your books correctly, then it is guaranteed. A Dutch book doesn't need to work with only one participant; in fact, many Dutch books only work on populations rather than individuals, in the same way insurance only guarantees a profit when properly spread across groups.
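The expected-utility point here can be illustrated with a toy policy. Every number below is made up for the example, not taken from the thread:

```python
import math

# Toy numbers (purely illustrative): a 1% chance of losing 90,000
# out of a 100,000 bankroll, insurable for a 1,200 premium.
p_loss, loss, wealth = 0.01, 90_000, 100_000
premium = 1_200  # above the expected loss of p_loss * loss = 900

def eu_log(outcomes):
    """Expected log utility over (probability, final wealth) pairs."""
    return sum(p * math.log(w) for p, w in outcomes)

uninsured = [(p_loss, wealth - loss), (1 - p_loss, wealth)]
insured = [(1.0, wealth - premium)]

# The risk-averse buyer gains expected utility by insuring, even though
# the insurer's expected profit per policy is premium - p_loss * loss = 300.
```

Across a large pool of such policies the insurer's per-policy margin is what lets the books come out ahead on the population, even though no single policy is a sure thing.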

Comment author: sixes_and_sevens 02 May 2015 12:54:49AM 4 points [-]

If someone reports inconsistent preferences in the Allais paradox, they're violating the axiom of independence and are vulnerable to a Dutch Book. How would you actually do that? What combination of bets should they accept that would yield a guaranteed loss for them?

Comment author: Xachariah 02 May 2015 06:33:49PM *  2 points [-]

The point of the Allais paradox is less about how humans violate the axiom of independence and more about how our utility functions are nonlinear, especially with respect to infinitesimal risk.

There is an existing Dutch Book for eliminating infinitesimal risk, and it's called insurance.

Comment author: adamzerner 02 January 2015 03:22:12AM *  9 points [-]

I have a developing opinion that I'm not quite sure how to word.

It seems that schools all over the world are teaching the same lessons, but each is trying to reinvent the wheel. I sense that it'd be more efficient if a bunch of effort and resources went into each lesson, and that lesson were made available to everyone.

Elon Musk gave a good analogy (paraphrasing):

"Consider The Dark Knight. It's amazing! They put a ton of resources into it. Got the best actors, directors, special effects etc. Now imagine if you took the same script and asked the local middle school to reproduce it. It'd suck! That's education."

I sense that there is some sort of economic logic/terminology that applies here and that better articulates what I'm trying to say.

My attempt at explaining it a bit more formally: consider a lesson on mitosis. Say you have 100 classrooms you need to teach this lesson to, and say you have 100 employees. I think it'd be more efficient for those 100 employees to work at creating one optimal lesson, and then to provide that lesson (via a website or something) to students. Given that the lesson can (largely) be delivered via software, it's non-rivalrous (my consumption doesn't take anything away from yours), and thus can be distributed to everyone at no marginal cost.

Anyway, I hope I did a good enough job explaining such that someone can recognize what I'm trying to say. I'd be really happy if anyone was able to help me further my understanding.

Comment author: Xachariah 03 January 2015 01:49:25PM *  0 points [-]

You may be interested in the term 'inverted classroom', if you're not already aware of it.

The basic idea is the normal school system you grew up with, except that students watch video lectures as homework, then do all their work in class with an expert there to help. That way, the time when the student is stuck in one place and forced to focus is the time when they're actually doing the hard stuff.

There are so many reasons why it's better than traditional education. I just hope inverted classrooms catch on sooner rather than later.

(Edit: I know this isn't your exact proposal, but it uses many of the features you mention, and it can be grafted onto the existing public school system immediately with a single change of curriculum and the creation of some videos. It's the low-hanging fruit of education.)

Comment author: somnicule 28 October 2014 09:19:12AM 1 point [-]

I have been accepted to App Academy and have been considering it as a faster route to high-paying work, but as a younger, international candidate I'd have to pay for flights as well as a US$5000 deposit. It's something I'd even be willing to borrow money for, given my waning motivation for university, but without an income or much current earning potential I don't know if I could get a loan for it. And I couldn't earn enough in time to attend the round I've been accepted to.

Comment author: Xachariah 29 October 2014 06:26:29AM 1 point [-]

Anecdotally, someone close to me did one of those, and it was a quick way to burn thousands of dollars.

I tried to dissuade them, but in the end they came back with less knowledge of the subject than I had, and all I'd done was follow some YouTube tutorials and look at Stack Overflow to create a couple of learning apps for Android.

Comment author: DanielLC 13 August 2014 11:51:45PM 8 points [-]

In principle, you can construct a utility function that represents a deontologist who abhors murder: you give a large negative value to the deontologist committing murder. But it's kludgy. If a consequentialist talks about murder being bad, they mean that it's bad if anybody does it.

It is technically true that all of these ethical systems are equivalent, but saying which ethical system you use nonetheless carries a lot of meaning.

Instead, recognize that some ethical systems are better for some tasks.

If you choose your ethical system based on how it fulfils a task, you are already a consequentialist. Deontology and virtue ethics don't care about getting things done.

Comment author: Xachariah 15 August 2014 03:33:58AM 1 point [-]

All ethical frameworks are equal in the same way that all coordinate systems are equal.

But I'll be damned if it isn't easier to graph circles with polar coordinates than it is with Cartesian coordinates.
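The coordinate analogy is concrete. A sketch of the same unit circle in both systems (function names are mine):

```python
import math

# A unit circle in polar form is just r(theta) = 1; in Cartesian form it
# needs two branches, y = +sqrt(1 - x**2) and y = -sqrt(1 - x**2).
def circle_polar(theta):
    r = 1.0  # constant radius: the whole polar description of the circle
    return (r * math.cos(theta), r * math.sin(theta))

def circle_cartesian(x, branch=1):
    # branch = +1 for the upper semicircle, -1 for the lower one;
    # undefined outside |x| <= 1.
    return (x, branch * math.sqrt(1 - x * x))

# Both describe the same curve: every point satisfies x**2 + y**2 == 1.
```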

Comment author: [deleted] 12 August 2014 12:59:09PM 0 points [-]

If I post a response to someone, and someone replies to me, and their reply gets a single silent downvote before I read it, I find myself reflexively upvoting them just so they won't think I was the one who cast that downvote. It seems plausible to me that if you have a single downvote and no responses, the most likely explanation is that the person you replied to downvoted you, and I don't want people to think that.

Except then I seem to have hopelessly biased my opinion of the post before even reading it, because I'd feel bad if I revoked the upvote, let alone actually downvoted them, and I feel like I can't get back to the status quo of them just having a zero-point or positive post.

It also doesn't seem like it would have the same effect if someone replied to me and was heavily downvoted, but I don't actually recall that happening.

If I try to assess this more rationally, I get the suggestion: "You're worrying far too much about what other people MIGHT be thinking, based on flimsy evidence."

Thoughts?

In response to comment by [deleted] on Open thread, 11-17 August 2014
Comment author: Xachariah 13 August 2014 06:56:49AM 2 points [-]

You don't need to upvote them necessarily. Just flip a coin.

If you downvote them too, then it just looks like they made a bad post.

Comment author: [deleted] 27 July 2014 07:27:12AM *  1 point [-]

From chapter 100:

the blood must come from a live unicorn and the unicorn must die in the drinking

Quirrell doesn't have a very large window in which to drink the blood.

More to the point, wouldn't particulate matter, other fluids, and other bits of the unicorn pollute the blood as a result of the transfiguration? I could see the blood itself fixing that issue, but in the case of another fluid in a similar situation, I could see the drinker getting sick (if not to the degree that the animal did).

I may not be understanding how transfiguration sickness works exactly.

EDIT: formatting

Comment author: Xachariah 27 July 2014 10:27:12PM 4 points [-]

Transfiguration sickness isn't because things turn into poison. Your body goes into a transfigured state, minor changes occur, and when you come back from that state things are different. It'd be tiny things: huge problems would cause you to die instantly, but little transcription errors would kill you in the timeframe described.

E.g., your veins wouldn't match up right, the DNA in your cells would be just a little bit off and you'd get spontaneous cancer throughout your entire body, some small percentage of neurotransmitters and hormones would be transformed into slightly different ones, etc. None of that would be contagious or even harmful to somebody consuming it. But to the animal itself it'd be devastating.

Also remember that once the transfiguration reverts and you're back to yourself, you're in a stable state. The only issue is that you're not back together perfectly. Quirrell would only get sick if he drank the blood while it was transfigured and then it changed form while inside of him.

Comment author: jaime2000 26 July 2014 02:19:09PM *  16 points [-]

This chapter confirms earlier speculations that horcruxes work by making backup copies of brain states (with the caveat that actually using the horcrux will merge its memories and personality with those of its host body, resulting in a hybrid entity). The theory that Harry James Potter-Evans-Verres is an instance of Tom Riddle (or, rather, a hybrid of Tom Riddle and the original Harry Potter) now seems very, very probable. It explains why Harry is as smart as Tom's other instances (the original Riddle/Monroe/Voldemort and the Quirrell/Riddle hybrid), and why the Remembrall glowed like the sun (Harry forgot Riddle's memories because he was too young to remember them).

I learned of the horcrux sspell ssince long ago.

Parseltongue has a word for "horcrux"?

Death iss not truly gainssaid. Real sself is losst, as you ssay. Not to my pressent tasste. Admit I conssidered it, long ago.

Voldemort used horcruxes, obviously (that's what the five hidden items in the elemental pattern are), but between the missing memories and the hybridization, Quirrellmort doesn't consider them to be worth the trouble. Keep in mind that there is a nice theory about not being able to lie in Parseltongue.

Not like certain people living in certain countries, who were, it was said, as human as anyone else; who were said to be sapient beings, worth more than any mere unicorn. But who nonetheless wouldn't be allowed to live in Muggle Britain. On that score, at least, no Muggle had the right to look a wizard in the eye. Magical Britain might discriminate against Muggleborns, but at least it allowed them inside so they could be spat upon in person.

I suppose open borders and unrestricted immigration are in keeping with Harry's character as a utilitarian who tries to assign equal value to each and every human life.

Also, won't Quirrell die of transfiguration sickness if he drinks the blood of transfigured Rarity?

Comment author: Xachariah 27 July 2014 12:56:58AM *  9 points [-]

Death iss not truly gainssaid. Real sself is losst, as you ssay. Not to my pressent tasste. Admit I conssidered it, long ago.

It's still not a lie.

He considered it long ago, and then he did it. He doesn't want to try it again because he already has some and/or they wouldn't fix his current situation. Literally truthful but appropriately misleading.

Comment author: Xachariah 19 July 2014 07:57:52AM *  15 points [-]

I thought the article was quite good.

Yes, it pokes fun at lesswrong. That's to be expected. But it's well written and clearly conveys all the concepts in an easy-to-understand manner. The author understands lesswrong and our goals and ideas on a technical level, even if he doesn't agree with them. I was particularly impressed by how the author explained why TDT solves Newcomb's problem. I could give that explanation to my grandma and she'd understand it.

I don't generally believe that "any publicity is good publicity." However, this publicity is good publicity. Most people who read the article will forget it and only remember lesswrong as that kinda weird place that's really technical about decision stuff (which is frankly accurate). The people who do want to learn more are exactly the people lesswrong wants to attract.

I'm not sure what people's expectations are for free publicity, but this is, IMO, the best-case scenario.
