All of HungryTurtle's Comments + Replies

Ok, so then I would say that the soccer player in being empathetic to my objectives would be strengthening his or her emotional/ social capacity, which would benefit his or her health/ productivity, and thus benefit his or her soccer playing.

0TheOtherDave
I'm not sure what you mean by "being empathetic to [your] objectives," but if it involves spending time doing things, then one question becomes whether spending a given time doing those things produces more or less improvement in their soccer playing. I would certainly agree that if spending their available time doing the thing you suggest (which, incidentally, I have completely lost track of what it is, if indeed you ever specified) produces more of an improvement in the skills they value than doing anything else they can think of, then they ought to do the thing you suggest.

Could you explain your last paragraph a little more?

1Nebu
I'm saying "support" has two meanings in this context: "to agree with" and "to wish for other people to read". When I used "support" to mean "I wish for other people to read this comment", you interpreted that as "I agree with this comment", and this is where I believe the misunderstanding occurred.

Fair enough. Could you tell me what exactly it means to be a good rationalist?

Ok, so these skill sets contribute significantly to the productivity and health of a person. Then would you disagree with the following:

  1. Social and emotional skills significantly contribute to health and productivity.
  2. Any job, skill, hobby, or task that is human-driven can benefit from an increase in the acting agent's health and productivity.
  3. Therefore, social and emotional skills are relevant (to some degree) to all other human-driven skill sets.
0TheOtherDave
Sure, agreed.

The feeling that I am jumping on Nebu and the idea that I am advocating a straw Vulcan is you using loaded words to make an extreme judgment about my meaning and my motives. First of all, I am not trying to say a rational person has to be emotionless. The fact that emotions are important doesn't mean that anyone invoking some emotional response is unconditionally right. Supporting something "not because you agree with it" but because you felt some personal attachment is the most common of psychological reflexes. I am not telling Nebu that he ... (read more)

Additionally, saying that the East should look to the West for enlightenment doesn't mean there is no enlightenment to be found in the East. It just says that by far the more important enlightenment is more common in the West than the East.

Actually, saying that the East should look to the West for enlightenment says nothing about where enlightenment is more or less common, or anything about a degree of enlightenment. This is the assumption you are bringing to the statement. All this statement implies is that there are things that the East could learn from th... (read more)

0Nebu
It's tough for Nebu_2012 to remember what Nebu_2009 was thinking, exactly, when he wrote that comment 3 years ago. That said, it seems like Nebu_2009 did feel/believe/think/whatever-word-you-want-to-use that the comment had "real merit": He claimed that the comment was "interesting" and "entertaining". Your use of "support" may be (probably unintentionally) misleading, as it looks like Nebu_2009 was explicitly saying that he did not agree with the statement, and when someone talks about "supporting" a comment, I usually infer them to mean agreeing with the comment. Nebu_2009 seems to be supporting it only insofar as he thinks other people may also find the comment interesting and entertaining, and thus upvoted it to increase its visibility (I'm assuming; I can't recall exactly what Nebu_2009 was thinking). But it looks like you may have fallen into that very equivocation trap when you ask, slightly later on in the thread, "If you support something how is it something you do not agree with, and why are you supporting something you do not agree with?"
4orthonormal
Relevant to why you're being downvoted: Straw Vulcan, Feeling Rational. Jumping on someone for using "felt", when the sentence would work as well with "thought", is rude as well.

Ok, then the next question is: would you agree that, for a human, skills related to emotional and social connection maximize the productivity and health of a person?

0TheOtherDave
No. Though I would agree that for a human, skills related to emotional and social connection contribute significantly to their productivity and health, and can sometimes be the optimal place to invest effort in order to maximize productivity and health.
0TimS
I wouldn't agree to that statement without a lot more context about a particular person's situation.

Isn't saying that Yvain's final statement is

exactly backwards

also failing to make a distinction between a vaguely hostile comment and an extreme claim? To say it is exactly backwards is to imply that there is nothing wrong with Steve Jobs's statement. I agree with you that some of Yvain's fallacies are distorted--most notably the assumption that those who liked the comment were venting a subconscious lash at "hippies"--but that does not change the fact that Steve Jobs's statement contains huge logical issues.

First, Yvain is right that it i... (read more)

1ChrisHallquist
So to clarify, what's "exactly backwards" is saying that to a good rationalist, "the statements '' and '' sound exactly alike." Whereas I think an important part of being a good rationalist is being able to distinguish between the two. I'm not saying Yvain's entire post is backwards.

But I actually can't agree with your argument that "enlightenment" is a fallacy of equivocation. It IS the Enlightenment values of Bacon and Newton that brought us the enlightenment of vaccination and electricity---that's not a coincidence.

I think there is some confusion in Yvain's definition of the third type of enlightenment, and that is why you are missing the point. Yvain describes the third type of enlightenment as

"enlightenment", meaning achieving a state of nirvana free from worldly desire.

It would be better to think abo... (read more)

I'm trying to find a LW essay; I can't remember what it is called, but it is about maximizing your effort in areas of highest return. For example, if you are a baseball player, you might be around 80% in terms of pitching and 20% in terms of base running. To go from 80% up in pitching becomes exponentially harder, whereas learning the basic skill set to jump from dismal to average base running is not.

Basically, rather than continuing to grasp at perfection in one skill set, it is more efficient to maximize basic levels in a variety of skill sets related to the target field. Do you know the essay I am talking about?
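The diminishing-returns trade-off described above can be sketched as a toy calculation. The gain curve, skill names, and numbers here are all illustrative assumptions, not something taken from any particular essay:

```python
def marginal_gain(level):
    """Illustrative diminishing-returns curve: each unit of effort
    improves a skill less the higher that skill already is."""
    return 1.0 / (1.0 + level)

def allocate_effort(skills, effort_units):
    """Greedy allocation: spend each unit of effort on whichever
    skill currently offers the largest marginal gain."""
    skills = dict(skills)
    for _ in range(effort_units):
        best = max(skills, key=lambda s: marginal_gain(skills[s]))
        skills[best] += marginal_gain(skills[best])
    return skills

# A player at 80% pitching and 20% base running: under this curve,
# early effort all flows to the weak skill, matching the point that
# raising a dismal skill to average is cheaper than perfecting a
# strong one.
result = allocate_effort({"pitching": 80.0, "base_running": 20.0}, 50)
```

Under these assumed numbers, all fifty units of effort go to base running, since its marginal gain stays above pitching's throughout.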

0TheOtherDave
Doesn't sound familiar. Regardless, I agree that if I value an N% improvement in skill A and skill B equivalently (either in and of themselves, or because they both contribute equally to some third thing I value), and an N% improvement in A takes much more effort than an N% improvement in B, then I do better to devote my resources to improving B. Of course, it doesn't follow from that that for any skill A, I do better to devote my resources to improving A.

That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently.

So does rationality determine what a person or group values, or is it merely a tool to be used towards subjective values?

Sure. But that scenario implies that wanting to kill ourselves is the goal we're striving for, and I consider that unlikely enough to not be worth worrying about much.

My scenario does not assume that al... (read more)

0TheOtherDave
Dunno about "merely", but yeah, the thing LW refers to by "rationality" is a tool that can be used to promote any values. I don't think it assumes that, actually. You mentioned "a world of majorly rational people [..] killing us all faster." I don't see how a world of people who are better at achieving what they value results in all of us being killed faster, unless people value killing all of us. If what I value is killing you and surviving myself, and you value the same, but we end up taking steps that result in both of us dying, it would appear we have failed to take steps that optimize for our goals. Perhaps if we were better at optimizing for our goals, we would have taken different steps. Sure. I mean that whether humanity is digitized has almost nothing to do with the perceived end goal.

I don't think we have a way of slowing technological progress that a) affects all actors (it wouldn't be a better world if only those nations not obeying international law were making technological progress), and b) has no negative ideological effects.

By "negative ideological effects" do you mean the legitimization of some body of religious knowledge? As stated in my post to Dave, if your objective is to re-condition society to have a rational majority, I can see how religious knowledge (which is often narratively rather than logically sequen... (read more)

Honestly, I would moderate society with more positive religious elements. In my opinion modern society has preserved many dysfunctional elements of religion while abandoning the functional benefits. I can see that a community of rationalists would have a problem with this perspective, seeing that religion almost always results in an undereducated majority being enchanted by their psychological reflexes; but personally, I don’t see the existence of an irrational mass as unconditionally detrimental.

It is interesting to speculate about the potential of a... (read more)

0TheOtherDave
For a sufficiently broad understanding of "practical" and "the masses" (and understanding "rationalizing" the way I think you mean it, which I would describe as educating), no. Way too many people on the planet for any of the educational techniques I know about to affect more than the smallest fraction of them without investing a huge amount of effort. It's worth asking what the benefits are of better educating even a small fraction of "the masses", though. That depends, of course, on what the society values. If I value oppressing people, making me more efficient just lets me oppress people more efficiently. If I value war, making me more efficient means I conduct war more efficiently. My best guess is that collectively we value things that war turns out to be an inefficient way of achieving. I'm not confident the same is true about oppression. Sure. But that scenario implies that wanting to kill ourselves is the goal we're striving for, and I consider that unlikely enough to not be worth worrying about much. Similar, yes. A system designed to optimize the environment for the stuff humans value will, if it's a better optimizer than humans are, get better results than humans do. Almost entirely orthogonal.
2Vaniver
Pretty much the first, but with a perspective worth mentioning. Expressing human values in terms that humans can understand is pretty easy, but still difficult enough to keep philosophy departments writing paper after paper and preachers writing sermon after sermon. Expressing human values in terms that computers can understand- well, that's tough. Really tough. And if you get it wrong, and the computers become the primary organizers and arbiters of power- well, now we've lost the future.

What would you say if I said caring about my goals in addition to their own goals would make them a better soccer player?

0TheOtherDave
I would say "Interesting, if true. Do you have any evidence that would tend to indicate that it's true?"
0TimS
Who are you talking about? Your example was a team filled with low effort soccer players. Specifically, whose goals are you considering beside your own?

Thanks for the link. I'll respond back when I get a chance to read it.

Could you show me where he argues this?

3Desrtopa
I'm afraid I don't remember in which post he discusses the idea that scientists should worry about the ethics of their work, and I'm having a difficult time finding it. If you want to find that specific post, it might be better to create an open request in a more prominent place and see if anyone else remembers which one it was. Although it would take a much longer time, I think it might be a good idea for you to read all the sequences. Eliezer wrote them to bring people up to speed with his position on the development of AI and rationality after all, so that if we are going to continue to have disagreements, at least they can be more meaningful and substantive disagreements, with all of us on the same page. It sounds very much to me like you're pattern matching Eliezer's writing and responding to what you expect him to think, but if his position were such a short hop of inferential distance for most readers, he wouldn't have needed to go to all the work of creating the sequences in the first place.
120Desrtopa

Eliezer hasn't argued for the unquestioned rightness of rapid, continual technological innovation. On the contrary, he's argued that scientists should bear some responsibility for the potentially dangerous fruits of their work, rather than handwaving it away with the presumption that the developments can't do any harm, or if they can, it's not their responsibility.

In fact, the primary purpose of the SIAI is to try and get a particular technological development right, because they are convinced that getting it wrong could fuck up everything worse than anything has ever been fucked up.

0thomblake
Definitely barking up the wrong tree there. Chaos-worshippers (er, Dynamists) like me are under-represented here for such a technology-loving community - note that the whole basis of FAI is that rapidly self-improving technology by default results in a Bad End. Contrast EY's notion of AGI with Ben Goertzel's.
1TheOtherDave
Yup, implementation of technological innovation has costs as well as benefits. What kind of moderation do you have in mind?

The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.

Here is my reasoning for choosing this title. If you don't mind could you read it and tell me where you think I am mistaken.

I realize that saying 'rationally irrational' appears to be a contradiction. However, the idea is talking about the use of rational methodology at two different levels of analysis. Rationality at the level of goal prioritization potentially results in th... (read more)

4TheOtherDave
I think you're welcome to have whatever goals you like, and so are the soccer players. But don't be surprised if the soccer players, acknowledging that your goal does not in fact seem to be at all relevant to anything they care about, subsequently allocate their resources to things they care about more and treat you as a distraction rather than as a contributor to their soccer-playing community.

In your article, you seemed to be saying that you specifically think that one shouldn't have a single "final decision" function at the top of the meta stack. That's not going to be an easily accepted argument around here, for the reasons I stated above.

Yeah, this is exactly what I am arguing.

For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one's "utility function" or "terminal values".

Could you explain the technical... (read more)

0DSimon
In regards to why it's possible, I'll just echo what TheOtherDave said. The reason it's helpful to try for a single top-level utility function is because otherwise, whenever there's a conflict among the many, many things we value, we'd have no good way to consistently resolve it. If one aspect of your mind wants excitement, and another wants security, what should you do when you have to choose between the two? Is quitting your job a good idea or not? Is going rock climbing instead of staying at home reading this weekend a good idea or not? Different parts of your mind will have different opinions on these subjects. Without a final arbiter to weigh their suggestions and consider how important excitement and security are relative to each other, how do you decide in a non-arbitrary way? So I guess it comes down to: how important is it to you that your values are self-consistent? More discussion (and a lot of controversy on whether the whole notion actually is a good idea) here.
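The single-arbiter idea in the comment above can be illustrated with a toy model. The two value "modules", the outcomes, the scores, and the weights below are all invented purely for illustration:

```python
# Toy model: two value "modules" each score possible outcomes, and a
# single top-level utility function arbitrates between them with
# fixed weights.  All names, scores, and weights are hypothetical.
VALUES = {
    "excitement": {"quit_job": 0.9, "keep_job": 0.2},
    "security":   {"quit_job": 0.1, "keep_job": 0.8},
}
WEIGHTS = {"excitement": 0.4, "security": 0.6}

def utility(outcome):
    """The final arbiter: one weighted sum over every value module."""
    return sum(WEIGHTS[v] * scores[outcome] for v, scores in VALUES.items())

def decide(outcomes):
    """Pick whichever outcome the combined utility function ranks highest."""
    return max(outcomes, key=utility)

choice = decide(["quit_job", "keep_job"])
```

With security weighted more heavily, the arbiter resolves the conflict between the modules in favor of keeping the job; changing the weights changes the decision without changing the modules themselves.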
2TheOtherDave
I'm no technical expert, but: if I want X, and I also want Y, and I also want Z, and I also want W, and I also want A1 through A22, it seems pretty clear to me that I can express those wants as "I want X and Z and W and A1 through A22." Talking about whether I have one goal or 26 goals therefore seems like a distraction.

So you didn't just go through and down vote a ton of my posts all at once?

6wedrifid
No, I couldn't have done that. I had already downvoted the overwhelming majority of your comments at the time when I encountered them. We've already had a conversation about whether or not the downvotes you had received were justified - if you recall I said 'yes'. I'm not allowed to vote down twice so no karma-assassination for me!

I understand what you are saying; you are saying that for the speaker of the statement it is not irrational, because the false statement might meet their motives. Or in other words, that rationality is completely dependent on the motives of the actor. Is this the rationality that your group idealizes? That as long as what I say or do works towards my personal motives it is rational? So if I want to convince the world that God is real, it is rational to make up whatever lies I see fit to delegitimize other belief systems?

So religious zealots are rational ... (read more)

2TheOtherDave
In the sense I think you mean it, yes. Two equally rational actors with different motives will perform different acts. Yes. If that's the most effective way to convince the world that God is real, and you value the world being convinced that God is real, yes. Not necessarily, in that religious zealots don't necessarily have such goals. But yes, if a religious zealot who in fact values things that are in fact best achieved through lies and craziness chooses to engage in those lies and craziness, that's a rational act in the sense we mean it here. Sure, that's most likely true. You may be right about thomblake's motives, though I find it unlikely. That said, deciding how likely I consider it is my responsibility. You are not obligated to provide evidence for it.
2thomblake
Yes. See the twelfth virtue: . No, I would generally not think someone was "being irrational" without specific reference to their motivations. If I must concern myself with the fulfillment of someone else's utility function, it would usually take the form of "You should not X in order to Z because Y will more efficiently Z." ETA: I would more likely think that their statement was a joke, and failing that think that it's false and try to correct it. In case anyone's curious, "the moon is made of green cheese" was a paradigm of a ridiculous, unprovable statement before humans went to the moon; and "green cheese" in this context means "new cheese", not the color green. No, I'd rather be working on my dissertation, but I have a moral obligation to correct mistakes and falsehoods posted on this site. Correct. As noted on another branch of this comment tree, this interpretation characterizes "instrumental rationality", though a similar case could be made for "epistemic rationality". That is not what I was arguing. If I understand you correctly however, you mean to say that what I'm arguing applies equally well to that case. The important part of that statement is "X is rational", where X is a human. Inasmuch as that predicate indicates that the subject behaves rationally most of the time, I would deny that it should be applied to any human. Humans are exceptionally bad at rationality. That said, if a person X decided that course of action Y was the most efficient way to fulfill their utility function, then Y is rational by definition. (Of course, this applies equally well to non-persons with utility functions). Even if Y = "lies and craziness" or "religious belief" or "pin an aubergine to your lapel". That's a difficult empirical question, and outside my domain of expertise. You might want to consult an expert on lying, though I'd first question whether the subgoal of convincing the world that God is real, really advances your overall goals.
2Random832
I think the idea that you are grasping for (and which I don't necessarily agree with) is that calling someone disingenuous is a dark side tool.
0wedrifid
Didn't I just do that? I phrased it in terms of what I can control (my own votes) and what influence that has on you (karma). That gives no presumption or expectation that you must heed my wishes. That is a big leap! I don't think I'm doing that. Mind you, given the power that hazing has in making significant and lasting change in people I would make use of it as a tool of influence if I could work out how!
-4thomblake
I'm confused. Aren't personal insecurities the sort of thing you claimed was 'irrational' to comment on? Have you reversed your position, or do you not care about being rational, or is this a special case?
0wedrifid
For what it is worth, I will (continue to) downvote comments that take the form and role that the great-grandparent takes. Take that into consideration to whatever extent karma considerations bother you.
4thomblake
I find it impolite - it increases the length of your comment and number of characters on the screen and does not provide any information. That said, I am not terribly bothered by it, so 'whatever floats your gerbil'.

Or it is just polite

(Leaving aside the problems with declaring a course of action "irrational" without reference to a goal...)

If you make a claim about the character of another person or the state of reality do you or do you not need some evidence to support it?

I can be wrong about your motivations, and you can be wrong about your motivations.

Isn't being rational about being less wrong, so if some declarative statements can be wrong wouldn't it be rational to avoid making them?

0thomblake
I can make claims about anything without supporting it, whether or not it's about someone's character. The moon is made of green cheese. George Washington was more akratic than my mother. See, there, I did it twice. It can often be rational to do so. For example, if someone trustworthy offers me a million dollars for making the claim "two plus two equals five", I will assert "two plus two equals five" and accept my million dollars. I'm confused that you do not understand this.

(I don't suppose you'd be enlightened if I said "Yes, that's incorrect")

Tell me honestly, do you really think that it is rational to make a declarative statement about something you know nothing about?

Do you consider it irrational to say the sky is blue when you are in a room with no window?

No, because there is reason and evidence to support the statement that the sky is blue. The most obvious of which is that it has been blue your entire life.

No offense, but your example is a gross misrepresentation of the situation. I am not saying tha... (read more)

0Random832
I think at this point I have to ask when you consider it to be rational to make a declarative statement, and what is "nothing" vs "enough". And in particular, why you must have direct knowledge of the subjective experience to say they are being insincere. If someone is here, on a site filled with reasonably intelligent people who understand logic, and demonstrates elsewhere that they are reasonably intelligent and understand logic, and in one particular argument makes obviously logically inconsistent statements, I don't need their state of mind to say they're being disingenuous. I don't know how well that maps to this situation or what has been claimed about it.
0wedrifid
This does not apply to this situation.

If by "not intentionally driven" you mean things like instincts and intuitions, I agree strongly.

Yes, exactly.

if you could tweak your brain to make certain sorts of situations trigger certain automatic reactions that otherwise wouldn't, or vice versa, what (if anything) would you pick?

I think both intentional and unintentional action are required at different times. I have tried to devise a method of regulation, but as of now, the best I have come up with is moderating against extremes on either end. So if it seems like I have been overly... (read more)

2DSimon
Right, this is a good idea! You might want to consider an approach that goes by deciding what situations best require intuition, and which ones require intentional thought, rather than aiming only to keep their balance even (though the latter does approximate the former to the degree that these situations pop up with equal frequency). Overall, what I've been getting at is this: Value systems in general have this property that you have to look at a bunch of different possible outcomes and decide which ones are the best, which ones you want to aim for. For technical reasons, it is always possible (and also usually helpful) to describe this as a single function or algorithm, typically around here called one's "utility function" or "terminal values". This is true even though the human brain actually physically implements a person's values as multiple modules operating at the same time rather than a single central dispatch. In your article, you seemed to be saying that you specifically think that one shouldn't have a single "final decision" function at the top of the meta stack. That's not going to be an easily accepted argument around here, for the reasons I stated above.

So Mr. Thomblake,

If someone were to make a statement about what another person was sincere about, without even knowing that person, without ever having met that person, or without having spent more than a week interacting with that person, would you say their statement was irrational?

0thomblake
By the way, you do not need to indicate who a comment is to in a reply - it is clearly listed at the top of any comment you post as a reply, and is automatically sent to the user's inbox.
0thomblake
No, I don't characterize actions as flatly "irrational", and statements are not a special case.

Yes it is irrational to say something is a lie if you have no way of knowing it is a lie or not. Is this incorrect?

2Random832
(I don't suppose you'd be enlightened if I said "Yes, that's incorrect") Do you consider it irrational to say the sky is blue when you are in a room with no window?

I am taking your subsequent rhetoric as confirmation that you do in fact agree "are you actually claiming" is a type of applause lights terminology.

I infer further, from what you've said elsewhere, that it's a type of repression that works by making some users less able to make comments/posts than others, and some comments less visible to readers than others, and some posts less visible to readers than others. Is that correct?

Yes.

Assuming it is, I infer you consider it a bad thing for that reason. Is that correct?

No, not exactly. As I to... (read more)

4TheOtherDave
No, not especially. I think it serves the purpose of allowing the filtering of posts and comments that other LessWrong users consider valuable. Sometimes they consider stuff valuable because it's well-written and interesting, yes. Sometimes because it's funny. Sometimes because they agree with it. Sometimes because it's engagingly contrarian. Sometimes for other reasons. I would certainly agree with this. I'm not sure what you intend to capture by the contrast between "well-written" and "rhetoric," though. That's not just false, it's downright bizarre. I would agree, though, that sometimes terminology is introduced to discussions in ways that people find valueless, and they vote accordingly. This is sometimes true, and sometimes false, depending (again) on whether the use is considered valuable or valueless. Downvoting a comment/post because it does those things in a valueless way (and has no compensating value) is perfectly valid. Downvoting a comment/post because it does those things in a valuable way is not valid. No, not especially. I would agree that that's a fine thing, but I'd be really astonished if that were the reason for downvoting in any significant number of cases.

I do that all the time. There seems to be nothing in the meaning of the word that means it cannot be applied to another.

Let me rephrase: it is irrational to make a declarative statement about the inner workings of another person's mind, seeing as there is no way for one person to fully understand the mental state of another.

That isn't true. It is simply a different form of communication. Description is different from argumentative persuasion. It is not (necessarily) irrational to do the former.

You talk to me about semantic gymnastics? No, it is no... (read more)

2wedrifid
You are blatantly ignoring the direct reference to the relevant evidence that I provided in the grandparent. I repeat that reference now - read your inbox, scroll back until you find the dozen or so messages saying 'this is just a contradiction!' or equivalent. I repeat with extra emphasis that your denial of any evidence is completely incredible. Any benefit of a doubt that you are communicating in good faith is rapidly eroding.
0thomblake
No. (Leaving aside the problems with declaring a course of action "irrational" without reference to a goal...) There is no fact that I am 100% certain of. Any knowledge about the world is held at some probability between 0 and 1, exclusive. We make declarative statements of facts despite the necessary uncertainty. Statements about the inner workings of another person's mind are in no way special with that respect; I can make declarative statements about your mind, and I can make declarative statements about my mind, and in neither case am I going to be completely certain. I can be wrong about your motivations, and you can be wrong about your motivations.
-2David_Gerard
This may make things clearer.
0Random832
Is it irrational to call a statement a lie? As I had understood the word, "disingenuous" is a fancy way to say "lying".

You can still make comments disagreeing with other comments, which to me seems like a much better way of voicing your ideas than a silent downvote.

I think so too.

I believe that the karma cap on making posts (20 karma needed for a top level post) is partly to make sure members understand the vocabulary and concepts used on LessWrong before they start making posts,

I understand the purpose of it. I just think there are some problems with it.

what does see grandparent mean?

2ArisKatsaris
The parent post of the parent post, in this case meaning that comment, which you'll note has an asterisk next to the date, because it has been edited.
2DSimon
When you reply to a comment, the comment you are replying to is called the parent, and the comment that it replies to is called the grandparent.

I do think the way negative karma works is a type of repression. Honestly I don't see how you could think otherwise.

And your use of "actually claiming"?

Perhaps I was not clear enough. What I meant was that you saying "are you actually claiming" is applause light. Do you disagree?

4TheOtherDave
OK, thanks for clarifying that. I infer further, from what you've said elsewhere, that it's a type of repression that works by making some users less able to make comments/posts than others, and some comments less visible to readers than others, and some posts less visible to readers than others. Is that correct? Assuming it is, I infer you consider it a bad thing for that reason. Is that correct? Assuming it is, I infer you would consider it a better thing if all comments/posts were equally visible to readers, no matter how many readers considered those comments/posts valueless or valuable. Is that correct?
0TheOtherDave
I don't agree that it was an applause light specifically, but the distinction is relatively subtle and I'm uninterested in defending it, so we can agree it was an applause light for the sake of argument if that helps you make some broader point. More generally, I agree that it was a rhetorical tactic in a similar class as applause lights.

Ok, then I probably made a mistake when I clicked on my new message from you. Sorry about that.

0TheOtherDave
An asterisk appears to the right of the date when a post has been edited. (See grandparent for an example)

No, it doesn't. It's a blatant contradiction, which is by definition false.

Rational Irrationality is talking about rationality within two different levels of analysis. As a result of being rational at the level of goal prioritization, the individual abandons rational methodology at the level of goal achievement.

L1 - Goal Prioritization
L2 - Goal Achievement

If I am at a party I have desired outcomes for my interactions and experiences that produce goals. In prioritizing my goals I am not abandoning these goals, but placing them in the context of having des... (read more)

Why did you change your post here?

0TheOtherDave
? If you're referring to this comment, I see no evidence that I changed it, nor do I recall changing it, so I suspect your premise is false. If you're referring to some other comment, I don't know. EDIT: Edited for demonstration purposes.

Ok, but your parent comment exists within a context. It was responding to Random832, who was responding to TheOtherDave's comment about democracy. I was not solely responding to you, but to your comment within the context of TheOtherDave's.

In a game of soccer, you could want to improve teamwork, you could want to win the game, you could want to improve your skills, you could want to make a good impression. All of these are potential goals of a game of soccer. There is a group of objectives that would most accurately achieve each of these possible goals. I am suggesting that, for each goal, achieving it to the utmost level requires an objective with relatively high resource demands.

Is that better?

3TimS
An observer who thinks you are being stupid for not committing all possible effort to achieving your goal in the game (for example, impressing others) needs a justification for why achieving this goal is that important. In the absence of background like "this is the only chance for the scout from the professional team to see you play, sign you, and cause you to escape the otherwise un-escapable poverty and starvation," the observer seems like an idiot. I hope you don't think pointing out the apparent idiocy of the observer is an insightful lesson. In short, show some examples of people here (or anywhere) making the mistake (or mistakes) you identify, or stop acting like you are so much wiser than us.

And your use of "actually claiming"?

2TheOtherDave
Was not quoting anyone's use of it. Incidentally, I'm taking your subsequent rhetoric as confirmation that you did in fact intend the claim that your ideas are being repressed, since you don't seem likely to explicitly answer that question anytime soon.
6Swimmer963 (Miranda Dixon-Luinenburg)
Agreed that making a statement and not giving any supporting evidence doesn't qualify as "rational." I actually haven't found the quality of your argument to be low, most of the time, but I'll try to dredge up some examples of what I think wedrifid is talking about. The standard mindset on LessWrong is that words are useful because they are specific and thus transmit the same concept between two people. Some words are more abstract than others (for example, 'beauty' can never be defined as specifically as 'apple'), but the idea that we should embrace more possible definitions of a word goes deeply against LessWrong philosophy. It makes language less clear; a speaker will have to specify that "no, I'm talking about paradox2, not paradox1." In which case you might as well have 2 different words for the 2 different concepts in the first place. I think most people on LW would count this as a negative epistemic contribution. This kind of comparison is very no-no on LessWrong, unless you very thoroughly explain all the similarities and justify why you think it's a good comparison. See Politics is the Mind-Killer. Comes across as belligerent. I don't think there are really that many places where you had 'bad' arguments. The main thing is that you're presenting a viewpoint very different from the established one here, and you're using non-LW vocabulary (or vocabulary that is used here, but you're using it differently as per your field of study), and when someone disagrees you start arguing about definitions, and so people pattern-match to 'bad argument.'
4wedrifid
I do that all the time. There seems to be nothing in the meaning of the word that means it cannot be applied to another. That isn't true. It is simply a different form of communication. Description is different from argumentative persuasion. It is not (necessarily) irrational to do the former. In the context the statement serves as an explanation for the downvotes. It is assumed that you or any readers familiar with the context will be able to remember the details. In fact this is one of those circumstances where "disingenuous" applies. There are multiple pages of conversation discussing your contradictions already and so pretending that there is not supporting evidence available is not credible. NO! "Burden of proof" is for courts and social battles, not thinking. This isn't debate club either! Yes, with respect to libel, the aforementioned 'burden of proof' becomes relevant. Of course this isn't libel, or a court. Consider that I would not have explained to you why (I perceive) your comments were downvoted if you didn't bring them up and make implications about the irrationality of the voters and community. If you go around saying "You downvoted me therefore you suck!" then it drastically increases the chances that you will receive a reply "No, the downvotes are right because your comments sucked!"

It is a false overstatement. I agree with your point.

I feel that your use of "actually claiming" and "repression" here falls under the category of applause lights mentioned by thomblake.

The fact that my essay becomes significantly harder to find because 11-27 people (it had some positives) disliked it, what would you call that?

0thomblake
I don't think you've grokked that expression.
2TheOtherDave
My use of "repression" was quoting your use of it, which I consider appropriate, since I was referencing your claim.

When you say "use above" I assume you are referring to TheOtherDave, because my questioning of the democratic principles of LessWrong karma was because it was described, in response to my comment, as democratic.

0thomblake
No, I was referring to your use of the word in this comment, whose parent (my comment) did not use the word "democratic" at all.

If humanity is as integral to our reality as you describe, then I am confused why our beliefs about how reality works don't totally control how reality actually works.

Wouldn't you say oxygen is integral to the current reality of earth? That does not mean that the current reality of earth is shaped by the will of oxygen. Saying that humanity is integral to the constitution of our reality is different from saying humanity consciously defines the constitution of its reality. Right?

No, your example is fine, but I would say it is the most elementary use of this idea. When faced with a serious threat to health it is relatively easy and obvious to realign goal-orientation. It is harder to make such realignments prior to facing serious damage or threats. In your example, a more sophisticated application of this idea would theoretically remove the possibility of twisting an ankle during training, excluding any extreme circumstances.

I imagine this might raise a lot of questions so let me explain a little more.

Training is not serious. The ... (read more)

I still don't think what I said is false; it is a rhetorical choice. Saying it is rational irrationality still makes sense, it just hits some buzzwords for this group and is less appealing than choosing some other label.

3thomblake
No, it doesn't. It's a blatant contradiction, which is by definition false. Also: Yes, someone could consider it irrational, and that person would be wrong.

Let’s say I am playing soccer. I have decided that any goal-orientation within my soccer game is ultimately not worth the expenditure of resources beyond X amount. Because of this I have tuned out my rational calculating of how to best achieve a social, personal, or game-related victory. To anyone who has not appraised soccer related goal-orientations in this way, my actions would appear irrational within the game. Do you see how this could be considered irrational?

I definitely understand how this idea can also be understood as still rational, it is becaus... (read more)

2Nectanebo
Good, now that you've realised that, perhaps you might want to abandon that name. The idea of using your time and various other resources carefully and efficiently is a good virtue of rationality. Framing it as being irrational is inaccurate and kinda incendiary.
5thomblake
That is a good assessment. Saying something false constitutes exceptionally bad rhetoric here.
2TimS
Can you be more concrete with your soccer example. I don't understand what you mean.