
Comment author: pangloss 22 March 2012 04:11:16AM 2 points [-]

Does it matter that you've misstated the problem of induction?

Comment author: pangloss 22 March 2012 04:15:20AM 0 points [-]

In terms of whether to take your complaints about philosophy seriously, I mean.

Comment author: Yvain 03 May 2009 11:40:59AM *  5 points [-]

The intelligent design issue is complex, but he's said outright that he doesn't believe in it. I think his position is something like "Most people who believe evolution are not smart enough to understand it, and would be better off believing intelligent design since it makes more sense on a naive level. Most believers in evolution who are not biologists are making the 'science as belief-attire' type mistake." It's been a while since I read about that particular flame war, so I might be mistaken, but I do remember he specifically said that with extremely high probability ID was wrong.

Hypnotism has been shown to work in studies by the AMA, BMJ, and every other group of medical experts who have investigated the question, and he's a vegetarian, not a vegan - and so am I, so you're going to have trouble convincing me that's a strike against him. Though if you want to write a post about it, I'd be interested in hearing your arguments against.

Comment author: pangloss 05 May 2009 06:34:47AM -1 points [-]

I wish this were separated into two comments, since I wanted to downvote the first paragraph and upvote the second.

Comment author: rolf_nelson 04 May 2009 06:59:18AM 7 points [-]

"So I was very surprised to find Adams was a believer in and evangelist of something that sounded a lot like pseudoscience."

Yep. The Dilbert Future isn't online so you can't see the nonsense directly, but to get a feeling for what Adams was like before he started backpedaling recently:

(http://www.reall.org/newsletter/v05/n12/scott-adams-responds.html)

An unflattering but (to memory) accurate description of The Dilbert Future is here:

(http://www.insolitology.com/rloddities/dilbert.htm)

Comment author: pangloss 05 May 2009 06:30:52AM 1 point [-]

Glad someone mentioned that there is good reason Scott Adams is not considered a paradigm rationalist.

Comment author: pangloss 05 May 2009 06:25:54AM 3 points [-]

For anyone interested in wearing Frodo's ring around their neck: http://www.myprecious.us/

In response to comment by gjm on Final Words
Comment author: Eliezer_Yudkowsky 28 April 2009 05:48:36PM 15 points [-]

I voted this down and the parent up because, while it's a fine apology, you should not actually get more karma for admitting a mistake than the person who corrected you gets.

Comment author: pangloss 28 April 2009 06:16:01PM 1 point [-]

I guess this raises a different question: I've been attempting to use my up and down votes as a straight expression of how I regard the post or comment. While I can't guarantee that I never inadvertently slip into corrective voting (attempting to bring a post or comment's karma in line with where I think it should be, either absolutely or relative to another post), it seems as though corrective voting is your conscious approach.

What are the advantages/disadvantages of the two approaches?

In response to comment by gjm on Final Words
Comment author: Eliezer_Yudkowsky 28 April 2009 05:48:36PM 15 points [-]

I voted this down and the parent up because, while it's a fine apology, you should not actually get more karma for admitting a mistake than the person who corrected you gets.

Comment author: pangloss 28 April 2009 06:07:58PM *  7 points [-]

I voted this down, and the immediate parent up, because recognizing one's errors and acknowledging them is worthy of Karma, even if the error was pointed out by someone else.

In response to comment by gjm on Final Words
Comment author: MBlume 28 April 2009 09:00:21AM 9 points [-]

I was assuming that karma was actually being transferred, zero-sum.

In response to comment by MBlume on Final Words
Comment author: pangloss 28 April 2009 09:05:48AM 1 point [-]

That puts people with a great deal of Karma in a much better position with respect to Karma gambling. You could take us normal folk all-in pretty easily.

Comment author: roland 28 April 2009 03:56:36AM *  1 point [-]

Regarding the wine experts: if I understood correctly, their recognition of the same wine later is not impaired by their verbal description of its taste. But I wonder how accurate their description is; are there even the right words to describe the taste of a wine? I suspect that they have just built up some standard associations of which word to attribute to which taste and then regurgitate them. If you have trained this a lot, you can probably do it on autopilot without really having to think, and so your taste memory is not impaired. That would be my ad-hoc explanation. What do you think?

PS: the "doing without thinking" part would be in contrast to non-experts who would have to deliberately reason and look for the correct words to describe the taste.

Comment author: pangloss 28 April 2009 08:50:19AM 1 point [-]

I mean, I don't know if "woody" or "dry" are the right words, in terms of whether they invoke the "correct" metaphors. But, the point is that if you have vocabulary that works, it can allow you to verbalize without undermining your underlying ability to recognize the wine.

I think the training with the vocabulary actually augments verbally mediated recall rather than turning off the verbal center, but I'm not sure of the mechanism by which it works.

Comment author: Psy-Kosh 28 April 2009 04:27:14AM 0 points [-]

Well, a couple of things. You can in part interpret that as an underlying preference to do so, but you seem to have akrasia stopping you from actually choosing what you know you want.

Or perhaps you actually would prefer not to go on coasters, and consider the "after the fact" enjoyment to be like the case of taking some addictive drug: you might like it after the fact, which is exactly why you wouldn't want to take it in the first place.

As far as changing preferences goes, you may think of your true preferences as encoded by the underlying algorithm your brain is effectively implementing: the thing that controls how the preferences more visible to you change in response to new information, arguments, and so on.

Those underlying underlying preferences are the things that you wouldn't want to change. You wouldn't want to take a pill that makes you into the type of person that enjoys committing genocide or whatever, right? But you can predict in advance that if such a pill existed and you took it, then after it rewrote your preferences, you would retroactively prefer genociding. But since you (I assume) don't want genocides to happen, you wouldn't want to become the type of person that would want them to happen and would try to make them happen.

(skipping one or two minor caveats in this comment, but you get the idea, right?)

But also, humans tend to be slightly (minor understatement here) irrational. I mean, isn't the whole project of LW and OB and so on based on the notion of "the way we are is not the way we wish to be; let us become more rational"? So if something isn't matching the way people normally behave, well... the problem may be "the way people normally behave". I believe the usual phrasing is "this is a normative, rather than a descriptive, theory".

Or did I misunderstand?

Comment author: pangloss 28 April 2009 08:44:16AM 0 points [-]

For the most part I think that starts to address it. At the same time, on your last point, there is an important difference between "this is how fully idealized rational agents of a certain sort behave" and "this is how you, a non-fully idealized, partially rational agent should behave, to improve your rationality".

Someone in perfect physical condition (not just for humans, but for idealized physical beings) has a different optimal workout plan from me, and we should plan differently for various physical activities, even if this person is the ideal towards which I am aiming.

So if we idealize our Bayesian models too much, we open up the question: "How does this idealized agent's behavior relate to how I should behave?" It might be that, were we to design rational agents, it would make sense to use these idealized reasoners as models; but if the goal is personal improvement, we need some way to explain what one might call the Kantian inference from "I am an imperfectly rational being" to "I ought to behave the way such-and-such a perfectly rational being would".
