In response to comment by Tedav on Proportional Giving
Comment author: Dias 05 March 2014 03:00:12AM 2 points [-]

If we didn't have a culturally accepted obligation for charity, we wouldn't give as much money to inefficient charities and religious institutions, and might be more willing to consent to a higher progressive tax.

And if people didn't naturally want to have sex, we might be more willing to consent to government-assigned reproduction!

In response to comment by Dias on Proportional Giving
Comment author: Tedav 05 March 2014 08:35:32PM 0 points [-]

Yes, that is true as well.

My point was that our cultural instinct is to give, but in practice this is done inefficiently [charities are wasteful, people give not to optimize utility but to charities they happen to like, and a flat percentage is probably worse than a progressive tax], so it would probably be better for society if we didn't expect charity from people - this seemingly beneficial cultural obligation can be argued to be harmful.

In response to Proportional Giving
Comment author: scrafty 03 March 2014 04:34:02AM 6 points [-]

A compromise that I find appealing and might implement for myself is giving a fixed percentage over a fixed amount, with that fixed percentage being relatively high (well above ten percent). You could also have multiple "donation brackets" with an increased marginal donation rate as your income increases.
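To make the idea concrete, here is a minimal sketch in Python of how such "donation brackets" with marginal rates could work. The thresholds and rates below are invented purely for illustration; the comment doesn't propose any specific numbers.

```python
# Hypothetical bracket boundaries and marginal donation rates (illustrative only).
BRACKETS = [
    (20_000, 0.00),        # nothing on the first $20k of income
    (50_000, 0.05),        # 5% on income between $20k and $50k
    (100_000, 0.15),       # 15% on income between $50k and $100k
    (float("inf"), 0.30),  # 30% on everything above $100k
]

def suggested_donation(income):
    """Compute a donation using marginal 'donation brackets', tax-style."""
    donation = 0.0
    lower = 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        donation += (min(income, upper) - lower) * rate
        lower = upper
    return donation

print(suggested_donation(30_000))   # -> 500.0
print(suggested_donation(120_000))  # -> 15000.0
```

Like a progressive income tax, the average donation rate rises with income while each marginal rate applies only to the income within its bracket.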

Comment author: Tedav 04 March 2014 03:31:10AM -2 points [-]

I like this approach.

It makes sense, and it mostly dodges the problem that other "simple" formulae for charity have - namely that most simple systems tend to be essentially voluntary regressive taxation.

This is why the 10% rule has always bugged me - it is a culturally accepted voluntary regressive tax, and as such it exacerbates social inequality.

[Also, one of my friends likes to joke that our culture holds that you give 10% of your income to charity, but capital gains are exempt...]

I'm always on the lookout for things that seem innocuous or even beneficial but actually serve to enforce the social structure and prevent upward mobility, like our strange insistence on prescriptive rules of language and on the necessity of "sounding intelligent".

Languages are evolved social constructs, and "correct grammar" is determined by native speakers. However, we impose additional rules that stray from the natural form of the language, and develop a notion that certain ways of speaking/writing are proper and that other ways are ignorant. Learning to speak in a way that sounds intelligent requires additional investment of time and effort, and those who cannot afford to do so (can't afford to spend as much time reading, or come from an area with worse schools) will grow up speaking a completely intelligible version of the language, but one that is generally recognized as a marker of ignorance, and thus limits their possibilities for advancement.

Ok, I really got off topic there, but my point was that our cultural construct that people should give a fixed percentage of their income to charity might very well not be a force for good, but rather a force opposing good.

It is a regressive taxation system, but one that is culturally supported. Further, so many people feel that everyone is already voluntarily giving to charity (especially through religious organizations) that actual taxation seems like an unnecessary imposition.

If we didn't have a culturally accepted obligation for charity, we wouldn't give as much money to inefficient charities and religious institutions, and might be more willing to consent to a higher progressive tax.

In response to comment by Tedav on The Rationality Wars
Comment author: Desrtopa 04 March 2014 02:26:36AM 0 points [-]

If people are presented with an opportunity to believe that others are like them, with no penalty for being wrong, one could expect them to err on the side of predicting behavior consistent with their own.

I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on the accuracy of their predictions about others, both groups would make more accurate predictions.

Possibly. But if you're prepared to bet that the bias would vanish in that context, that's a bet I'd take.

Comment author: Tedav 04 March 2014 03:14:17AM 0 points [-]

I'm not prepared to make that bet.

I don't suspect the bias would vanish, but rather be diminished.

In response to comment by Tedav on The Rationality Wars
Comment author: buybuydandavis 28 February 2014 08:31:00PM 0 points [-]

asking people who they voted for < asking who they predicted would win < asking who they would bet on to win, where '<' indicates predictive accuracy.

Because the first is signaling about yourself and perhaps trying to sway others, the second is probably just swaying others, and the third is trying to make money.

It's a testament to a demented culture that people are lying about how they vote.

Comment author: Tedav 28 February 2014 09:18:07PM 0 points [-]

asking people who they voted for < asking who they predicted would win < asking who they would bet on to win, where '<' indicates predictive accuracy.

This is exactly what I was saying.

In response to comment by Tedav on The Rationality Wars
Comment author: Slider 28 February 2014 06:07:19PM 1 point [-]

Voting isn't a form of predicting the winner; it's not about being on the side of the winner.

Comment author: Tedav 28 February 2014 07:32:36PM 1 point [-]

I didn't mean to imply I thought it was, though I see how that wasn't clear.

I didn't intend that last bracketed part to be an example, but rather a related phenomenon - it is interesting to me how asking a random sample of people who they voted for is a worse predictor than asking a random sample of people who they would predict got the most votes, and that this accuracy further improves when people are asked to stake money on their predictions.

I simply was pointing out that certain biases might be significantly more visible when there is no real incentive to be right.

In response to The Rationality Wars
Comment author: buybuydandavis 27 February 2014 11:02:10PM 0 points [-]

In the absence of other data, you should treat your own preferences as evidence for the preferences of others.

But in this case, unless you were raised by wolves, you do have more data. Their objection seems like weak tea here, though it has validity generally.

I often find myself disagreeing with the studies which conclude people have failures of rationality. Often they fail to take cost functions into account, or knowledge, or priors, or contexts.

Comment author: Tedav 28 February 2014 05:18:47PM 0 points [-]

For instance, one supplemental explanation for the False Consensus Effect (just because it is one effect doesn't mean it has only one cause) that I have heard is that in most cases it is a "free" way of obtaining comfort.

If people are presented with an opportunity to believe that others are like them, with no penalty for being wrong, one could expect them to err on the side of predicting behavior consistent with their own.

I obviously haven't done this experiment, but I suspect that if the subjects asked to wear the sign were offered a cash incentive based on the accuracy of their predictions about others, both groups would make more accurate predictions.

[See also - political predictions are more accurate when the masses are asked to make monetary bets on the winner of the election, rather than simply indicate who they would vote for]

Comment author: ThisSpaceAvailable 09 February 2014 07:37:22AM 1 point [-]

I would assume you can chain proxies, but that would make the latency issues even worse.

Comment author: Tedav 28 February 2014 05:12:19PM 1 point [-]

It sounds like you might be looking for something like The Onion Router (Tor).
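For concreteness, a hedged sketch of routing HTTP requests through a local Tor client from Python. This assumes Tor is already running with its default SOCKS port (9050) and that the requests library is installed with SOCKS support (pip install requests[socks]):

```python
import requests

# Tor's default local SOCKS5 port; the "socks5h" scheme resolves hostnames
# through the proxy, so DNS lookups also go over Tor.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# Requests made through this session travel over a Tor circuit, which chains
# several relays - hence the extra latency mentioned above.
response = session.get("https://check.torproject.org/")
print(response.status_code)
```

The multi-relay circuit is exactly the "chained proxies" trade-off: more hops means better anonymity at the cost of latency.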

Comment author: Eliezer_Yudkowsky 20 July 2009 01:52:43AM 4 points [-]

"I believe X to be like me" => "whatever I decide, X will decide also" seems tenuous without some proof of likeness that is beyond any guarantee possible in humans...

Maybe you could aspire to such determinism in a proven-correct software system running on proven-robust hardware.

Well, yeah, this is primarily a theory for AIs dealing with other AIs.

You could possibly talk about human applications if you knew that the N of you had the same training as rationalists, or if you assigned probabilities to the others having such training.

Comment author: Tedav 28 February 2014 04:44:00PM 0 points [-]

For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a model more sophisticated than Y itself?

If so, why would supposedly symmetrical models retain this symmetry?

Comment author: polymathwannabe 25 February 2014 03:09:27AM 0 points [-]

Slightly off-topic, but the actual complement of red is cyan, and the complement of green is magenta.
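A minimal sketch in Python of the additive (RGB) convention behind this claim, where the complement is obtained by inverting each channel; the colour names are just the standard labels for the resulting RGB triples:

```python
def rgb_complement(color):
    """Return the additive (RGB) complement by inverting each 8-bit channel."""
    r, g, b = color
    return (255 - r, 255 - g, 255 - b)

print(rgb_complement((255, 0, 0)))  # red   -> (0, 255, 255), i.e. cyan
print(rgb_complement((0, 255, 0)))  # green -> (255, 0, 255), i.e. magenta
```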

Comment author: Tedav 25 February 2014 03:48:34AM *  0 points [-]

I actually acknowledge that deeper in the thread [in the response to PECOS-9], noting that this is the publicly understood complement, despite being wrong: society teaches that the primary colors are Red, Yellow, Blue and not Magenta, Yellow, Cyan.

Comment author: Velorien 24 February 2014 09:48:14PM 1 point [-]

Regarding the sequence of events, here's how it goes:

Trelawney, who had been sitting behind him on the two-person broomstick that had just blazed through Hogwarts burning directly through all the walls and floors in their way, hastily pulled herself off and then sat down hard on the floor, a pace away from the red-glowing edges of a newly made gap in the wall. The woman was still breathing in gasps, bending over herself as though she were on the verge of vomiting out something larger than she was.

[Quirrell analyses the emotions he'd felt coming off Harry]

Unseen by anyone, the Defense Professor's lips curved up in a thin smile. Despite its little ups and downs, on the whole this had been a surprisingly good day -

"HE IS HERE. THE ONE WHO WILL TEAR APART THE VERY STARS IN HEAVEN. HE IS HERE. HE IS THE END OF THE WORLD."

(quoted from hpmor.com rather than the .pdf this time for greater accuracy)

I really don't see how you can get any sequence of events out of that other than "Trelawney is about to make prophecy -> Quirrell analyses Harry's emotions and is happy with what he finds -> Trelawney makes prophecy". Quirrell doesn't even get a full stop at the end of his thought before the quote marks open for Trelawney to speak.

Comment author: Tedav 25 February 2014 12:31:25AM 0 points [-]

Fair enough.

I must admit this makes my theory less likely. I still don't see your reading as the unambiguously correct interpretation, but I will freely cede that it looks plausible that it is an interrupt, not an elaboration. This may, in part, stem from the fact that I am a big proponent of using "-" in my writing, and my usage is somewhat nonstandard.

Even if that is right, I don't think it rules out my guess about Quirrell's plan, but again, I'm significantly less confident now.
