All of BenAlbahari's Comments + Replies

Hi Zvi,

A couple of months ago I wrote a covid-19 risk calculator that's gotten some press and has even been translated into Spanish. Here's the link:

https://www.solenya.org/coronavirus

I've updated the calculations to leverage your table for age & preconditions, which was better than what I had. You can check the code for the calculator by clicking on the link near the top of the page. I've also put a link in that code to your article here.
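A calculator like this typically multiplies a baseline risk by per-factor multipliers from a table. The sketch below is purely illustrative: the age bands, condition names, and multiplier values are made up for this example and are not the actual numbers from the calculator or the table mentioned above.

```python
# Illustrative only: these bands, conditions, and multipliers are invented
# for the example, NOT the real values used by the calculator.
AGE_MULTIPLIERS = {
    "0-39": 0.3,
    "40-59": 1.0,
    "60-79": 4.0,
    "80+": 12.0,
}

PRECONDITION_MULTIPLIERS = {
    "diabetes": 2.0,
    "heart_disease": 3.0,
}

def relative_risk(age_band: str, preconditions: list[str]) -> float:
    """Multiply a baseline of 1.0 by each applicable factor from the table."""
    risk = AGE_MULTIPLIERS[age_band]
    for p in preconditions:
        # Unknown conditions contribute a neutral multiplier of 1.0.
        risk *= PRECONDITION_MULTIPLIERS.get(p, 1.0)
    return risk
```

Keeping the factors as a flat lookup table is what makes an interface like this ultra-simple: the user only picks an age band and checks boxes.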

Note that I'm trying to keep the interface ultra-simple. I get a stream of suggestions (e.g. can ... (read more)

Also, there are ways of using uranium-238

And thorium.

Sigh.

A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along ... (read more)

people who identify as rationalists they seem to moralize slightly less than average

Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.

2BillyOblivion
I suspect that what aspergers types like--if that post is correct and they do like it--is more the rules part of morality than the being judgmental[1] part of it. Strict rules for interacting with other folks make social interactions less error-prone when you literally don't--can't--get those social cues others do.

I've been judged to be at best borderline aspergery (absent any real testing, who knows) and manifest many of the more subtle symptoms, and my take(s) on morality are (1) that it is much like driving regulations--no one gives a flying f' which side of the road you drive on as long as everybody does (no need to get judgemental about it unless someone is deliberately doing it wrong)--and (2) that the human animal (at least the neurotypical human animal) has behavior patterns that are a result of both evolution and society. Following these behavior patterns will keep you from some fun and lots of pain, and will generally get you into the fat part of the bell curve. Break the wrong ones and you will wind up in the ugly part of the curve. Figure out how to break the right ones the right way and you get into the cool part of the bell curve where interesting shit happens.

Oh, and sometimes when you break these rules you hurt other people. When you hurt them by accident that's bad; when you hurt them on purpose and they don't deserve it, that's even worse. If they do deserve it, it's probably because they broke one of the rules.

People do shit for all sorts of reasons, and in contemporary society there are all sorts of people in power advocating all sorts of mildly to wildly stupid shit. Can't really blame someone all that much if they spent 12 years in schools that pushed the sort of "education" that you get from compromising between fundamentalist Christians, New Age Fruit Cakes, Universal Church Members, and your typical politicians. Oh, and people with masters degrees in Education, much less Doctorates. Seriously, you're better off with the f'ing
7wedrifid
That's true, and usually I say 'a lot more' rather than 'slightly less'. However in this instance Eliezer seemed to be referring to a rather limited subset of 'moralizing'. He more or less excluded being obnoxiously judgemental but phrasing your objections with consequentialist language. So the worst of nerd-moralizing was cut out.
5Normal_Anomaly
It looked to me more like he was discussing the consequences.

don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers

OK, so you're saying that to change someone's mind, identify mental behaviors that are "world view building blocks", and then to instill these behaviors in others:

...come up with exercises which, if people go through them, causes them to experience the 5-second events

Such as:

...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...

Or:

...

... (read more)

The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.

Um, yes. This is supposed to increase your general ability to teach a human to do anything, good or bad. In much the same way, having lots of electricity increases your general ability to do anything that requires electricity, good or bad. This does not make electrical generation a Dark Art.

I have an image of Eliezer queued up in a coffee shop, guiltily eyeing up the assortment of immodestly priced sugary treats. The reptilian parts of his brain have commandeered the more recently evolved parts of his brain into fervently computing the hedonic calculus of an action that other, more foolish types, might misclassify as a sordid instance of discretionary spending. Caught staring into the glaze of a particularly sinful muffin, he now faces a crucial choice. A cognitive bias, thought to have been eradicated from his brain before the SIAI was found... (read more)

In accordance with the general fact that "calories in - calories out" is complete bullshit, I've had to learn that sweet things are not their caloric content, they are pharmaceutical weight-gain pills with effects far in excess of their stated caloric content. So no, I wouldn't be able to eat a triple chocolate muffin, or chocolate cake, or a donut, etcetera. But yes, when I still believed the bullshit and thought the cost was just the stated caloric content, I sometimes didn't resist.

5RHollerith
Very skillful exploitation of the humor potential of this thread of conversation! Bravo!

While I'm inclined to agree with the conclusion, this post is perhaps a little guilty of generalizing from one example - the paragraphs building up the case for the conclusion are all "I..." yet when we get to the conclusion it's suddenly "We humans...". Maybe some people can't handle the truth. Or maybe we can handle the truth under certain conditions that so far have applied to you.

P.S. I compiled a bunch of quotes from experts/influential people for the questions Can we handle the truth? and Is self-deception a fault?.

The chief role of metaethics is to provide far-mode superstimulus for those inclined to rationalize social signals literally.

Ethics and aesthetics have strong parallels here. Consider this quote from Oscar Wilde:

For we who are working in art cannot accept any theory of beauty in exchange for beauty itself, and, so far from desiring to isolate it in a formula appealing to the intellect, we, on the contrary, seek to materialise it in a form that gives joy to the soul through the senses. We want to create it, not to define it. The definition should follow the work: the work should not adapt itself to the definition.

Whereby any theory of art...

merely serves as after-the-fact justification of the sentiments that were already there.


Or Ben Franklin, contemplating his vegetarianism:

But I had formerly been a great Lover of Fish, & when this came hot out of the Frying Pan, it smelt admirably well. I balanc'd some time between Principle & inclination: till I recollected, that when the Fish were opened, I saw smaller Fish taken out of their Stomachs:--Then, thought I, if you eat one another, I don't see why we mayn't eat you. So I din'd upon Cod very heartily and continu'd to eat with other People, returning only now & then occasionally to a vegetable Diet. So convenient a thing it is to be a reasonable Creature, since it enables one to find or make a Reason for every thing one has a mind to do

I've gone through massive reversals in my metaethics twice now, and guess what? At no time did I spontaneously acquire the urge to rape people. At no time did I stop caring about the impoverished. At no time did I want to steal from the elderly. At no time did people stop having reasons to praise or condemn certain desires and actions of mine, and at no time did I stop having reasons to praise or condemn the desires and actions of others.

Metaethics: what's it good for...

4Garren
Well, metaethics isn't supposed to be good for telling us which things are wrong and which things are right. Nor is it supposed to be about providing us with motivation to do good. The chief role of metaethics is to answer questions about what it means for things to be morally right or wrong or what we're doing when we make moral judgments. In some ways, this is a relatively meek endeavor, which is why it's not completely outrageous for someone to claim it's 'solvable' now.
8nshepperd
Not much, except for knowing whether you need to build an FAI, or whether it's safe to broadcast our location to aliens. I suppose correct metaethics could also prevent people from making evolutionary-fitness-maximiser-like comments on the internet as well, which would be a bonus.
0Miller
That's my sentiment. And apparently it's also a diseased discipline.

I believe the primary form of entertainment for the last million years has had plenty of color.

0JoshuaZ
How is that relevant? If the entertainment at any point (especially when there's lots of entertainment) impacts what dreams are like, then what entertainment our ancestors had won't be relevant.

I don't think social influence alone is a good explanation for the delusion in the video. Or more precisely, I don't think the delusion in the video can be explained as just a riff on the Asch conformity experiment.

0wedrifid
I agree (for the right definition of 'social influence', of course). That 'small step away' really is a step away.

I'm merely less skeptical that the woman in the video is a stooge after hearing what Nancy had to say. But yes, the anchoring techniques he uses in the video might be nothing but deliberate misdirection.

Interesting. This makes me less skeptical of Derren Brown's color illusion video (summary: a celebrity mentalist uses NLP techniques to convince a woman yellow is red, red is black etc.).

1wedrifid
It's only a small step away from what complete amateurs can do in a room in a university. Human judgement is careful not to get caught up with the actual real world when there is social influence at stake!
0[anonymous]
Derren Brown's explanations for his effects are not to be relied on. Remember, he is a magician. Misdirection is one of the pillars of conjuring, and a plausible lie is a powerful misdirector.

Perhaps the post could be improved if it laid out the types of errors our intuitions can make (e.g. memory errors, language errors, etc.). Each type of error could then be analyzed in terms of how seriously it impacts prevalent theories of cognition (or common assumptions in mainstream philosophy). As it stands, the post seems like a rather random (though interesting!) sampling of cognitive errors that serve to support the somewhat unremarkable conclusion that yes, our seemingly infallible intuitions have glitches.

I dunno Nancy. I mean you start off innocently clicking on a link to a math blog. Next minute you're following these hyperlinks and soon you find yourself getting sucked into a quantum healing website. I'm still trying to get a refund on these crystals I ended up buying. Let's face it. These seemingly harmless websites with unrigorous intellectual standards are really gateway drugs to hard-core irrationality. So I have a new feature request: every time someone clicks on an external link from Less Wrong, a piece of Javascript pops up with the message: "... (read more)

Speaking of the yellow banana, people do a lot of filling in with color.

One of Dennett's points is that the notion that our mind "fills in" is misleading. In the case of vision, our brain doesn't "paint in" missing visual data, such as the area in our field of vision not captured by our fovea. Our brains simply lack epistemic hunger for such information; they don't need it to perform their tasks.

I've noticed that this account potentially explains how color works in my dreams. My dreams aren't even black and white - the visual aspects ar... (read more)

Daniel Dennett's Consciousness Explained is a very relevant piece of work here. Early in his book, he seeks to establish that our intuitions about our own perceptions are faulty, and provides several scientific examples to build his case. The Wikipedia entry on his multiple drafts theory gives a reasonable summary.

You've articulated some of the problems of a blogroll well. Perhaps the blogroll idea could be evolved into a concept that better fits the needs of this community, while retaining its core value and simplicity:

1) Alongside each link could be its controversy level, based on the votes for and against the link. By making the controversy explicit, the link can no longer be seen as a straight-up endorsement.

2) Alongside each link could be its ranking based on, say, only the top 50 users. This would let people explicitly see what the majority vs. the "elite ratio... (read more)
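The controversy level suggested in (1) could be computed many ways; here is one minimal sketch. The formula is my own illustration, not something the site actually uses.

```python
def controversy(upvotes: int, downvotes: int) -> float:
    """Scale from 0.0 (unanimous) to 1.0 (perfectly split).

    One of several reasonable formulas: twice the minority share of
    the vote. This is an illustrative choice, not a site spec.
    """
    total = upvotes + downvotes
    if total == 0:
        return 0.0  # no votes yet: treat as uncontroversial
    return 2 * min(upvotes, downvotes) / total
```

For example, a 50/50 split scores 1.0, a unanimous 100/0 link scores 0.0, and a 9-up/3-down link scores 0.5.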

9JGWeissman
I wouldn't do this. The top 50 users by karma score are more likely to be members who make a lot of comments than "elite rationalists". The controversy meter and using recent votes are good ideas (I wouldn't split it, use only the recent votes).

A website has a specific goal that it's trying to uniquely achieve, and a general goal that places it within a community of like-minded websites. Less Wrong's specific goal is to refine the art of human rationality, and its general goal is to raise the sanity waterline. If other websites are successfully raising the sanity waterline, it behooves Less Wrong to link to them.

I don't like this idea. The choice of websites to put on the sidebar is likely to be contentious. What exactly qualifies a website to be endorsed by LW? How should a website be judged c

... (read more)

What I dislike most about the idea is that it gives some sort of official collective endorsement to external websites. One thing I like about LW is that except for the institutions that historically gave rise to it (OB and SIAI), it has no official doctrine and official endorsements. There are issues of broad consensus, but they are never officially presented as such. Thus, even if I have some disagreements with the majority on these issues, I can always voice my arguments without the unpleasant feeling that I'm invading the forum as an outsider trying to ... (read more)

Blogroll / Side Bar Section for Links to Rationality Related Websites. I love Overcoming Bias, but it seems a bit biased that Overcoming Bias is the only other website linked from here.

Reply to this comment with a comment for each website nomination?

Hmm... maybe with this feature new links could be added by users (presuming a minimum karma criterion), and other users could then vote each link up or down, so that the ordering of the list is organic.

2steven0461
PredictionBook
0JGWeissman
TakeOnIt
-1JoshuaZ
You Are Not So Smart.
4Dreaded_Anomaly
Measure of Doubt.
0Eugine_Nier
Unenumerated.
6Vladimir_M
I don't like this idea. The choice of websites to put on the sidebar is likely to be contentious. What exactly qualifies a website to be endorsed by LW? How should a website be judged considering the various PR implications of endorsing it? Also, who exactly stands behind the endorsement, considering that LW is a group blog? What's more, LW members already have the option to put website links in their profiles, and the websites authored or endorsed by prominent LW contributors are thus already given significant promotion.

Emotional awareness is a skill that can be cultivated, and it increases one's agreeableness. Watch a disagreeable person in action and it's pretty obvious that they're not really picking up how other people are reacting to their behavior. Note that it's much easier to see disagreeable behavior in others than in oneself. The challenge in becoming more agreeable lies partly in seeing yourself as others see you.

if you really want to know how valid a particular idea you've read is, there are quantitative ways to get closer to answering that question.

The ultimate in quantitative analysis is to have a system predict what your opinion should be on any arbitrary issue. The TakeOnIt website does this by applying a collaborative filtering algorithm on a database of expert opinions. To use it you first enter opinions on issues that you understand and feel confident about. The algorithm can then calculate which experts you have the highest correlation in opinion with. ... (read more)
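That correlation step might look something like the minimal sketch below. The opinion coding (-1 = disagree, +1 = agree) and the averaging formula are my assumptions for illustration, not TakeOnIt's actual algorithm; the expert names and issues are hypothetical.

```python
def agreement(user: dict[str, float], expert: dict[str, float]) -> float:
    """Mean product of stances over issues both have rated:
    +1.0 means the expert always agrees with the user, -1.0 always disagrees."""
    shared = user.keys() & expert.keys()
    if not shared:
        return 0.0  # no overlap: no information about this expert
    return sum(user[q] * expert[q] for q in shared) / len(shared)

# Hypothetical data: stances coded -1 (disagree) .. +1 (agree).
user = {"god_exists": -1, "cryonics_works": +1}
experts = {
    "Expert A": {"god_exists": -1, "cryonics_works": +1},
    "Expert B": {"god_exists": +1, "cryonics_works": -1},
}
ranked = sorted(experts, key=lambda e: agreement(user, experts[e]), reverse=True)
```

Here Expert A (agreement 1.0) ranks above Expert B (agreement -1.0), so the system would lean on Expert A's stances for issues the user hasn't rated.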

1[anonymous]
I remember TakeOnIt and I like the principle. The downside is that the opinions and relationships have to be put in by hand, which means that it'll take time and work to fill it up with enough experts to really model the whole body of expert opinion. But it's a great site.

I guess the moral is "Don't trust anyone but a mathematician"?

Safety in numbers? ;)

Perhaps it's useful to distinguish between the frontier of science vs. established science. One should expect the frontier to be rather shaky and full of disagreements, before the winning theories have had time to be thoroughly tested and become part of our scientific bedrock. There was a time after all when it was rational for a layperson to remain rather neutral with respect to Einstein's views on space and time. The heuristic of "is this science established / uncontroversial amongst experts?" is perhaps so boring we forget it, but it's one of the most useful ones we have.

If you have social status, it is worth sparing some change in getting used to not only being wrong, but being socially recognized as wrong by your peers...

Emperor Sigismund, when corrected on his Latin, famously replied:

I am king of the Romans and above grammar.

I know that most men — not only those considered clever, but even those who are very clever and capable of understanding most difficult scientific, mathematical, or philosophic, problems — can seldom discern even the simplest and most obvious truth if it be such as obliges them to admit the falsity of conclusions they have formed, perhaps with much difficulty — conclusions of which they are proud, which they have taught to others, and on which they have built their lives.

— Leo Tolstoy, 1896 (excerpt from "What Is Art?")

4diegocaleiro
Not only to recognize my mistakes, but to actually speak out loud about them frequently has given me great strength in doing it on questions that really matter. If you have social status, it is worth sparing some change in getting used to not only being wrong, but being socially recognized as wrong by your peers...

Illusory superiority seems to be the cognitive bias to overcome here.

It seems to me that people here are very aware of the very real ways in which they are in fact very superior to the general public, even the elite public, and also of the very real ways in which they are, in most cases, very inferior to the elite public (and frequently somewhat inferior to the general public). The problem is that they tend to morally endorse both their strengths AND their weaknesses, seeing, as Robin does, both as 'high'. I see the high/low motivation intuition as actually being somewhat rarer in the general world than Robin and most peo... (read more)

Voted up.

if you have to choose between fitting in with your group etc and believing the truth, you should shun the truth.

I think many people develop a rough map of other people's beliefs, to the extent that they avoid saying things that would compromise fitting in the group they're in. Speaking of which:

irrationalists free-ride on the real-world achievements of rationalists

Trying to get to level 4, are we? (Clearly I'm not ;)) Conversely, you could argue that "irrationalists" are better at getting things done due to group leverage, and rationalists free-ride off those achievements.

Perhaps it would be a good idea to remember, and keep remembering, and make it clear in your writing, that "women" are not a monolithic block and don't all want the same thing.

A woman who doesn't want a generalization applied to them? :)

For me, understanding "what's really going on" in typical social interactions made them even less interesting than when I didn't.

Merely "tuning in" to a social interaction isn't enough. Subtextual conversations are often tedious if they're not about you. You have to inject your ego into the conversation for things to get interesting.

So if I'm with a bunch of people from my class ... and none of us have any major conflict of interest...

If you were a character in a sitcom I was writing, I'd have your dream girl walk in just as you were saying that.

2Roko
BenAlbahari, that is mean -- but funny!

It seems this post bundled together the CPU vs. GPU theory regarding the AS vs. NT mindset, with a set of techniques on how to improve social skills. The techniques however - and in a sense this is a credit to the poster - are useful to anyone who wants to improve their social skills, regardless of whether the cause of their lack of skill is:

1) High IQ
2) Introversion
3) Social Inexperience
4) AS
5)

A combination of several of these factors might be the cause of social awkwardness. It's possible to place too much importance on looking for a root cause. The i... (read more)

0Roko
Note that Introversion is a principal component of human personality, so it must correlate with AS and social inexperience. This is simply because "introversion", by definition, is that component of human personality surrounding desire and ability to interact socially.
9SilasBarta
I think the distinction can be helpfully represented in terms of my levels of understanding. NTs understand social interaction at Level 1, the level at which one is capable of outputting the right (winning) results, even if that is due to an inscrutable black-box model contained inside oneself (which is the case here).

But there are higher levels to reach than that, and it is not an academic distinction. To advance to Level 2, you must not only produce the right results, but also be able to "plug in" your understanding of the social interaction domain to various other domains, and make inferences between them. And most NTs cannot do this: they get the results "for free", even (and perhaps especially) if they cannot derive these results as implications of other domains (or vice versa).

Roko's point, in turn, can be rephrased as saying that HFASes try to build up a Level 2 understanding directly, checking for cross-domain consistency before they adopt any rules; and that this is because their hardware doesn't feed them the correct black-box output, as happens in NTs. Further, inferences that HFASes make come from applying a more general-purpose "reasoning engine" to social interaction; to NTs, the inferences just look dumb, even if they can't explain why.

Someone with sufficiently advanced understanding will be able to connect the NT black-box model to useful models for everything else, explaining the basis for NT conventions. This can grow from an NT mind or a HFAS one, but they will take different paths. In any case, there's a higher level of understanding to reach, even if a specific threshold suffices for some purpose.

Thanks for the feedback.

there's a lot of chaff.

Do you mean chaff as in "stuff that I personally don't care about" or chaff as in "stuff that anyone would agree is bad"?

there doesn't seem to be enough activity yet.

Yes, the site is still in the bootstrapping phase. Having said that, the site needs to have a better way of displaying recent activity.

3[anonymous]
Stuff that I think is bad, and that I would say "reasonable" people agree is bad -- celebrities as experts, Deepak Chopra, mentalists, and so on. But I don't necessarily think that's a problem for the site. If people really get their information from those sources, then I want to know that.

Franklin's quote is more about cryonics being good if it were feasible than about whether it is feasible. Ben, do you think it should be moved to this question?

Good call.

to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.

No no no! It's vital that the opinions of influential people - even if they're completely wrong - are included on TakeOnIt. John Stuart Mill makes my point perfectly:

...the peculiar evil of silencing the expression of an opinion is... If an opinion is right, [people] are deprived of the opportunity of exchanging error for truth: if wrong, they lose what is almost as great a benefit, the clearer perception and livelier impr

... (read more)
4[anonymous]
I've been playing with the site and from my perspective there are two problems. One is that there's a lot of chaff. The other is that there doesn't seem to be enough activity yet. If there were a lot of activity, I wouldn't necessarily mind that there are "experts" I don't respect; it would still be extremely useful as a microcosm of the world's beliefs. I do want to know which people the public considers to be "experts." That's a useful service in itself.

Censorship? Not in a political sense, of course. But there are privately owned institutions which have an interest in permitting a diversity of views. Universities, for instance. This is a site whose usefulness depends on it having no governing ideology. Blocking "unreliable" sources isn't really censorship, but it makes the site less good at what it purports to do.
0JoshuaZ
I'm almost inclined to say that calling Conservapedia a Christian Encyclopedia is an insult to Christianity more than it deserves (theism is very likely incorrect, but Conservapedia's attitude towards the universe is much more separated from reality than that of most Christians).

Also, I don't think that what John Stuart Mill is talking about is the same thing. First, note that I'm not saying one should censor Chopra, merely that he's not worth including for this sort of thing. That's not "silencing" by any reasonable definition. And there are other experts there whom I disagree with whom I wouldn't put in that category. Thus for example, in both the cryonics and Singularity questions there are included people whom I disagree with whom I don't think are at all helpful. Or again consider Benjamin Franklin, whose opinion on cryonics I'm sympathetic with but who just didn't have any knowledge that would justify considering his opinion worthy of weight.

TakeOnIt records the opinions of BOTH experts and influencers - not just experts. Perhaps I confused you by not being clear about this in my original comment. In any case, TakeOnIt groups opinions by the expertise of those who hold the opinions. This accentuates - not blurs - the distinction between those who have relevant expertise and those who don't (but who are nonetheless influential). It also puts those who have expertise relevant to the question topic at the top of the page. You seem to be saying readers will easily mistake an expert for an influencer. I'm open to suggestions if you think it could be made clearer than it is.

3JoshuaZ
I don't think they are doing as good a job as you think separating experts from non-experts. For example, they describe Conservapedia as an "encyclopedia" with no other modifier. Similarly they describe Deepak Chopra as an "expert on alternative medicine." If they want to make a clear distinction I'd suggest having different color schemes (at minimum). Overall, to even include some of these people together is simply to give weight to views which should have effectively close to zero weight.

Voted up because I think AS is a great example of psychological diversity. I'm curious however as to the origin of your belief that AS people are more attracted to decompartmentalization than neurotypicals are.

1SilasBarta
I don't have evidence for that proposition, but I wanted to (shamelessly) point out that attraction to decompartmentalization can be phrased as a willingness to go from Level 1 to Level 2 in my hierarchy. That is, to go from understanding domains independently, to checking for global consistency and multi-directional implication across them.
8Roko
See the following for weak evidence that AS types tend to be more "utilitarian": http://politicalscience.stanford.edu/politicaltheoryworkshop/0809papers/YoungSaxe_review_Appiah.pdf

"Furthermore, when function in the RTPJ is disrupted using a technique called transcranial magnetic stimulation (TMS), moral judgments reflect a reduced influence of mental states and a greater influence of outcomes: unintentional harms are judged as more forbidden, and failed attempts to harm are judged as more permissible (Young, Camprodon, Hauser, Pascual-Leone, & Saxe, submitted). This pattern mirrors that observed in individuals with Asperger's Syndrome and five-year-old children, as described above."

My evidence for AS types being less compartmentalizing is informal, from meeting and talking to people. It is well known that AS <==> more logical, and it seems that logical utilitarian people are more likely not to morally compartmentalize (a compartmentalizer might, e.g., think that the destruction of the world is OK but be horrified by the death of a particular person).

This is the bunk-detection strategy on TakeOnIt:

  1. Collect top experts on either side of an issue, and examine their opinions.
  2. If '1' does not make the answer clear, break the issue down into several sub-issues, and do '1' for each sub-issue.

Examples that you alluded to in your post (I threw in cryonics because that's a contrarian issue often brought up on LW):

Global Warming
Cryonics
Climate Engineering
9-11 Conspiracy Theory
Singularity

In addition, TakeOnIt will actually predict what you should believe using collaborative filtering. The way it works is th... (read more)
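The prediction step is truncated above, but the general idea of collaborative filtering here — weight each expert's stance on a new question by how well their past opinions correlated with yours — can be sketched speculatively. The weighting scheme below is my assumption for illustration, not TakeOnIt's actual algorithm; stances are coded -1 (disagree) to +1 (agree).

```python
def predict(correlations: dict[str, float], stances: dict[str, float]) -> float:
    """Correlation-weighted average stance on a new question.

    `correlations` maps expert -> past agreement with the user (-1..+1);
    `stances` maps expert -> that expert's stance on the new question.
    Negatively correlated experts count as evidence for the OPPOSITE of
    what they say, which is what makes the scheme collaborative filtering
    rather than a plain average.
    """
    total = sum(abs(correlations[e]) for e in stances if e in correlations)
    if total == 0:
        return 0.0  # no informative experts: no prediction
    return sum(correlations[e] * stances[e]
               for e in stances if e in correlations) / total
```

So if Expert A (correlation +1.0) agrees with a claim and Expert B (correlation -0.5) disagrees with it, both push the predicted stance toward agreement.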

1simplicio
From a comment to Bryan Caplan's contra opinion in the cryonics article:
8JoshuaZ
I'm unimpressed by this method. First, the procedure as given does more to reinforce pre-existing beliefs and point one to people who will reinforce those beliefs than anything else.

Second, the sourcing used as experts is bad or outright misleading. For example, consider global warming. Wikipedia is listed as an expert source. But Wikipedia has no expertise and is itself an attempt at a neutral summary of experts. Even worse, Conservapedia is used on both the global warming and 9-11 pages. Considering that Conservapedia is Young Earth Creationist and thinks that the idea that Leif Erickson came to the New World is a liberal conspiracy, I don't think any rational individual will consider them to be a reliable source (and the vast majority of American right-wingers I've ever talked to about this cringe when Conservapedia gets mentioned, so this isn't even my own politics coming into play).

On cryonics we have Benjamin Franklin listed as pro. Now, that's roughly accurate. But it is also clear that he was centuries too early to have anything resembling relevant expertise. Looking at many of the fringe subjects, a large number of the so-called experts who are living today have no intrinsic justification for their expertise (actors are not experts on scientific issues, for example). TakeOnIt seems devoted if anything to blurring the nature of expert knowledge to the point where it becomes almost meaningless. The Bayesian Conspiracy would not approve.

Actions speak louder than words. A thousand "I love you"s don't equal one "I do". Perhaps our most important beliefs are expressed by what we do, not what we say. Daniel Dennett's Intentional Stance theory uses an action-oriented definition of belief:

Here is how it works: first you decide to treat the object whose behavior is to be predicted as a rational agent; then you figure out what beliefs that agent ought to have, given its place in the world and its purpose. Then you figure out what desires it ought to have, on the same consi

... (read more)

Status is relative to a group, and each group values different skills and traits. We gravitate towards groups where we have value.

Yes. But if the topic of something you're not good at comes up, what are you going to do? Various strategies:

a) Downplay the importance of the thing that you're not good at.
b) Change the subject.
c) Make a joke about totally sucking at that thing (while keeping the literal subject the same, it changes the implicit subject to the social ability of making other people laugh).
d) Mention a close relative, friend, or partner who's really good at that thing (increasing status by affiliation).

I think I may even do e) which is to show enthusiastic appreciation for ... (read more)

5AstroCJ
Are you serious? You missed f) Make an honest attempt at grasping the subject matter. I'm not sure if this is what you intended e) to cover, but if I meet a topic I'm completely unfamiliar with, my first instinct isn't to destroy the conversation.

Being dismissive of things you're not good at is beneficial to your status.

0Jack
But calling attention to things you're not good at is bad for your status.
8AdeleneDawner
If status was always about one particular skill or trait (for example, the ability to beat people up), this strategy wouldn't work.

Based on your idea and the discussion that followed, I've added the feature to flag a quote as fictional.

On the question pages, fictional quotes are put in their own group:
Is information-theoretic death the most real interpretation of death?

On the expert pages, fictional quotes are flagged per quote:
H. P. Lovecraft's Opinions

Fictional quotes are discounted from the prediction analysis.

Unless you count things like "on top of stalagmites" as sitting methods.

From Blackadder:

Aunt: 'Chair'? You have chairs in your house?
Edmund: Oh, yes.
Aunt: [slaps him twice] Wicked child!!! Chairs are an invention of Satan! In our house, Nathaniel sits on a spike!
Edmund: ...and yourself...?
Aunt: I sit on Nathaniel -- two spikes would be an extravagance.

Being understimulated is intolerable to me.

How can something intolerable be understimulating? Sure, I'm equivocating on the type of stimulation you're referring to here, but in the spirit of luminosity, shouldn't we be interested in exploring the places in our minds that we're afraid to go? I'm not recommending you step into a sensory deprivation chamber (or have your brain emulated without hooking up the inputs and outputs), but experimenting with meditation seems like a potentially luminous activity, even if you did it with the modest goal of simply g... (read more)

4Alicorn
It's not like I never tried it because I was like, "Oh, that sounds understimulating". I've tried meditating, and it was understimulating and I was in a bad mood for a considerable period afterwards trying to get the crick out of my back and haul my brain back into a more suitable level of interaction with the world.
7sketerpot
When I'm in a room where you're ostensibly supposed to be listening to someone talk -- a lecture, a sermon, etc. -- I can't properly stop listening. So if the speaker is really boring, I will try to zone out, but usually with very little success. It combines the worst parts of being with other people with the worst parts of being alone, for an experience that is both understimulating and agonizing. I once had to sit through a two-hour Southern Baptist church service. There was one guy who delivered a long, rambling, over-excited monologue about "casting out demons and devils", sounding exactly like a random street lunatic, and then another guy who spoke in a more sedate tone for about an hour on how evolution is false, society is being corrupted, and how "we did not come from monkeys!!". And then there was one man whose job appeared to be simply to sit in a chair next to whoever was speaking and periodically agree with whatever was being said. Whenever there was a pause, this guy would jump in with a "YES!" or an "AMEN!". I think the funniest thing that happened was when somebody mentioned Jesus and then stopped to inhale, and this guy blurted out "THAT'S HIM, THAT'S HIM!!". At first it was morbidly fascinating, in a mentally painful sort of way. But as time wore on, it just became excruciatingly boring, as they covered the same ground again and again, as if their target audience was suffering from profound mental retardation. I tried to think about something else to escape from the dull horror of my surroundings, but the preacher's delusional ravings just kept impinging on my train of thought, inescapable. So, yes, it's entirely possible to be intolerably understimulated. (I enjoy meditation, though. It's quiet enough that I don't get bored, if that makes any sense.)

Construal Level Theory (the one used to explain near-far mode) can also be used to explain self-control. One of the creators of the theory explains in a paper here, and another paper is here.

The authors propose that self-control involves making decisions and behaving in a manner consistent with high-level versus low-level construals of a situation. Activation of high-level construals (which capture global, superordinate, primary features of an event) should lead to greater self-control than activation of lo

... (read more)