pangloss

A Candid Optimist

In terms of whether to take your complaints about philosophy seriously, I mean.

Does it matter that you've misstated the problem of induction?

I wish this were separated into two comments, since I wanted to downvote the first paragraph and upvote the second.

Glad someone mentioned that there is good reason Scott Adams is not considered a paradigm rationalist.

For anyone interested in wearing Frodo's ring around their neck: http://www.myprecious.us/

I guess this raises a different question: I've been attempting to use my upvotes and downvotes as a straightforward expression of how I regard a post or comment. While I can't guarantee that I never inadvertently slip into corrective voting (where I try to bring a post or comment's karma in line with where I think it should be, either in an absolute sense or relative to another post), it sounds as though corrective voting is your conscious approach.

What are the advantages and disadvantages of the two approaches?

I voted this down, and the immediate parent up, because recognizing one's errors and acknowledging them is worthy of Karma, even if the error was pointed out to you by another.

That puts people with a great deal of Karma in a much better position with respect to Karma gambling. You could take us normal folk all-in pretty easily.

I mean, I don't know whether "woody" or "dry" are the right words, in terms of whether they evoke the "correct" metaphors. But the point is that if you have a vocabulary that works, it can allow you to verbalize without undermining your underlying ability to recognize the wine.

I think training with the vocabulary actually augments verbally mediated recall, rather than turning off the verbal center, but I'm not sure of the mechanism by which it works.

For the most part, I think that starts to address it. At the same time, on your last point, there is an important difference between "this is how fully idealized rational agents of a certain sort behave" and "this is how you, a partially rational agent who falls short of that ideal, should behave in order to improve your rationality".

Someone in perfect physical condition (not just for a human, but for an idealized physical being) has a different optimal workout plan from mine, and we should plan differently for various physical activities, even if this person is the ideal toward which I am aiming.

So if we idealize our Bayesian models too much, we open up the question: "How does this idealized agent's behavior relate to how I should behave?" It might be that, were we to design rational agents, it would make sense to use these idealized reasoners as models; but if the goal is personal improvement, we need some way to explain what one might call the Kantian inference from "I am an imperfectly rational being" to "I ought to behave the way such-and-such a perfectly rational being would".
