All of asparisi's Comments + Replies

One could judge the strength of these with a few empirical tests: such as for (2), comparing industries where it is clear that the skills learned in college (or in a particular major) are particularly relevant vs. industries where it is not as clear, and comparing the number of college grads w/ the relevant skill-signals vs. college grads w/o the relevant skill-signals vs. non-college grads; and for (3), looking to industries where signals of pre-existing ability in that industry do not conform to being in college and comparing their rate of hiring grads v... (read more)

I don't get paid on the basis of Omega's prediction given my action. I get paid on the basis of my action given Omega's prediction. I at least need to know the base-rate probability with which I actually one-box (or two-box), although with only two minutes, I would probably need to know the base rate at which Omega predicts that I will one-box. Actually, just getting the probability for each of P(Ix|Ox) and P(Ix|O~x) would be great.

I also don't have a mechanism to determine if 1033 is prime that is readily available to me without getting hit by a trolley (... (read more)
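For concreteness, here's a rough sketch (my own illustration, not part of the original comment) of how those two conditional probabilities feed into the expected payoffs, using the standard Newcomb dollar amounts and made-up accuracy figures:

```python
# Sketch of the expected-value comparison the comment gestures at.
# Payoffs are the standard Newcomb amounts; the 0.90 / 0.10 accuracy
# figures below are illustrative assumptions, not from the comment.

def expected_payoffs(p_one_given_one, p_one_given_two):
    """p_one_given_one = P(Omega predicted one-box | I one-box),
    p_one_given_two = P(Omega predicted one-box | I two-box)."""
    ev_one_box = p_one_given_one * 1_000_000
    ev_two_box = p_one_given_two * 1_000_000 + 1_000
    return ev_one_box, ev_two_box

# With a 90%-accurate predictor, one-boxing dominates in expectation:
one, two = expected_payoffs(0.90, 0.10)
```

With those (assumed) numbers, one-boxing expects about $900,000 against about $101,000 for two-boxing; the whole question is what the conditional accuracies actually are.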

Accounting for qualia and starting from qualia are two entirely different things. Saying "X must have qualia" is unhelpful if we cannot determine whether or not a given thing has qualia.

Qualia can perhaps best be described, briefly, as "subjective experience." So what do we mean by 'subjective' and 'experience'?

If by 'subjective' we mean 'unique to the individual position' and by 'experience' we mean 'alters its internal state on the basis of some perception' then qualia aren't that mysterious: a video camera can be described as having ... (read more)

-2Juno_Watt
We can tell that we have qualia, and our own consciousness is the natural starting point. "Qualia" can be defined by giving examples: the way anchovies taste, the way tomatoes look, etc. You are making heavy weather of the indefinability of some aspects of consciousness, but the flipside of that is that we all experience our own consciousness. It is not a mystery to us. So we can substitute "inner ostension" for abstract definition. OTOH, we don't have examples of non-biological consciousness.
  1. You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia.

I cover this a bit when I talk about awareness, but I find qualia to often be used in such a way as to obscure what consciousness is rather than explicate it. (If I tell you that consciousness requires qualia, but can't tell you how to distinguish things which have qualia from things which do not, along with good reason to believe that this way of distinguishing is legitimate, then rocks could have qualia.)

  1. The "necessarily biological" could be
... (read more)
-2Juno_Watt
If we want to understand how consciousness works in humans, we have to account for qualia as part of it. Having an understanding of human consciousness is the best practical basis for deciding whether other entities have consciousness. OTOH, starting by trying to decide which entities have consciousness is unlikely to lead anywhere. The biological claim can be ruled out if it is incoherent, but not merely for being unproven, since the functional/computational alternative is also unproven.

I find it helps to break down the category of 'consciousness.' What is it that one is saying when one says that "Consciousness is essentially biological"? Here it's important to be careful: there are philosophers who gerrymander categories. We can start by pointing to human beings, as we take human beings to be conscious, but obviously we aren't pointing at every human attribute. (For instance, having 23 base pairs of chromosomes isn't a characteristic we are pointing at.) We have to be careful that when we point at an attribute, that we are actu... (read more)

-1Juno_Watt
1. You've chosen one of the easier aspects of consciousness: self-awareness rather than, e.g., qualia. 2. The "necessarily biological" could be a posteriori nomic necessity, not a priori conceptual necessity, which is the only kind you knock down in your comment.

On those criteria, I would say Plato. Because Plato came up with a whole mess of ideas that were... well, compelling but obviously mistaken. Much of Western Philosophy can be put in terms of people wrestling with Plato and trying to show just why he is wrong. (Much of the rest is wrestling with Aristotle and trying to show why HE is wrong... but then, one can put Aristotle into the camp of "people trying to show why Plato is wrong.")

There's a certain sort of person who is most easily aroused from inertia when someone else says something so blatantly, utterly false that they want to pull their hair out. Plato helped motivate these people a lot.

The New Organon, particularly Aphorisms 31-46, shows not only an early attempt to diagnose human biases (what Bacon referred to as "The Idols of the Mind") but also some of the reasons why he rejected Aristotelian thought, common at the time, in favor of experimental practice.

Maybe there are better ways to expand than through spacetime, better ways to make yourself into this sort of maximizing agent, and we are just completely unaware of them because we are comparatively dull next to the sort of AGI that has a brain the size of a planet? Some way to beat out entropy, perhaps. That'd make it inconsistent to see any sort of sky with UFAI or FAI in it.

I can somewhat imagine what these sorts of ways would be, but I have no idea if those things are likely or even feasible, since I am not a world-devouring AGI and can only do wild ma... (read more)

I get that feeling whenever I hit a milestone in something: if I run a couple miles further than I had previously, if I understand something that was opaque before, if I am able to do something that I couldn't before, I get this "woo hoo!" feeling that I associate with levelling up.

3Suryc11
Same here. This feeling is especially prevalent for me in weightlifting--my strength/dexterity/stamina attribute is increasing! Too many RPGs played as a kid.

Even if they are sapient, it might not have the same psychological effect.

The effect of killing a large, snarling, distinctly-not-human-thing on one's mental faculties and the effect of killing a human being are going to be very different, even if one recognizes that thing to be sapient.

If they are, Harry would assign moral weight to the act after the fact: but the natural sympathy that is described as eroding in the above quote doesn't seem as likely to be affected given a human being's psychology.

asparisi-10

since I don't know what "philosophy" really is (and I'm not even sure it really is a thing).

I find it's best to treat philosophy as simply a field of study, albeit one that is odd in that most of the questions asked within the field are loosely tied together at best. (There could be a connection between normative bioethics and ontological questions regarding the nature of nothingness, I suppose, but you wouldn't expect a strong connection from the outset.) To do otherwise invites counter-example too easily, and I don't think there is much (if anything) to gain in asking what philosophy really is.

Technical note: some of these are Torts, not Crimes. (Singing Happy Birthday, Watching a Movie, or making an Off-Color Joke are not crimes, barring special circumstances, but they may well be Torts.)

Is there anyone going to the April CFAR Workshop who could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask that separately.)

kenzi190

Hey; we (CFAR) are actually going to be running a shuttle from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.

I tend to think this is the wrong question.

Here's roughly what happens: there are various signals (light, air waves, particulates in the air) that humans have the capacity to detect and translate into neural states which can then be acted on. This is useful because the generation, presence, and redirection of these signals is affected by other objects in the world. So a human can not only detect objects that generate these signals, it can also detect how other objects around it are affected by these signals, granting information that the human brain can th... (read more)

0kremlin
'Breaking it down into other questions' is exactly what needed to be done. I agree. And once it is broken down, the question is dissolved.

I just hope that the newly-dubbed Machine Intelligence Research Institute doesn't put too much focus on advertising for donations.

That would create a MIRI-ad of issues.

Sorry, if I don't let the pun out it has to live inside my head.

Yes, instead of putting too much focus on advertising, they should put the correct amount of focus on it. The argument also applies to the number of pens.

-6Rukifellth

I find it unlikely that scientific secrecy is never the right answer, just as I find it unlikely that scientific secrecy is always the right answer.

Qualitatively, I'd say it has something to do with the ratio of expected harm of immediate discovery vs. the current investment and research in the field. If the expected risks are low, by all means publish so that any risks that are there will be found. If the risks are high, consider the amount of investment/research in the field. If the investment is high, it is probably better to reveal your research (or pa... (read more)

0Troshen
This is a good discussion of the trade-offs that should be considered when deciding to reveal or keep secret new, dangerous technologies.
1V_V
Sure, you can find exceptional scenarios where secrecy is appropriate. For instance, if you were a scientist working on the Manhattan Project, you certainly wouldn't have wanted to let the Nazis know what you were doing, and with good reason. But barring such exceptional circumstances, scientific secrecy is generally inappropriate. You need some pretty strong arguments to justify it.

How likely is it that some potentially harmful breakthrough happens in a research field where there is little interest? Is that actually true? And anyway, what is the probability that a new theory of mind is potentially harmful?

That statement seems contrived; I suppose that by "can map onto the state of the world" you mean "is logically consistent". Of course, I didn't make that logically inconsistent claim. My claim is that "X probably won't work, and if you think that X does work in your particular case, then unless you have some pretty strong arguments, you are most likely mistaken".

I usually turn to the Principle of Explosion to explain why one should have core axioms in their ethics, (specifically non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.

I try to understand that thinking this way is odd... (read more)
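The Principle of Explosion itself is a one-liner in a proof assistant. A minimal Lean sketch (my addition, just to make the point concrete): from P and not-P, any proposition Q whatsoever follows.

```lean
-- Ex falso quodlibet: a contradiction proves an arbitrary proposition Q.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2
```

This is exactly why a hidden contradiction in one's ethical axioms is so corrosive: once it's in, every action can be "justified."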

"[10065] No route to host Error"

I figure the easiest way to delay a human on the other end of a computer is to simulate an error as best I can. For a GAI, this time is probably invaluable.

3handoflixue
By default, I'd type "AI DESTROYED" in response to ANY input, including "Admin has joined #AIBOX", "Admin> Hey Gatekeeper, we're having some technical difficulties, the AI will be here in a few minutes", etc. It also makes me conclude "clearly hostile" once I catch on, which seems to be a BIG tactical error since then nothing you say going forward will convince me that you're actually friendly - buying yourself time is only useful if I can be hacked (in which case why not just open with a one-sentence hack?) or if you can genuinely convince me that you're friendly.

That depends on the level of explanation the teacher requires and the level of the material. I'd say that at least until you get into calculus, you can work off of memorizing answers. I'd even go so far as to say that most students do, and succeed to greater or lesser degrees, based on my tutoring experiences. I am not sure to what degree you can "force" understanding: you can provide answers that require understanding, but it helps to guide that process.

I went to a lot of schools, so I can contrast here.

I had more than one teacher that taught me... (read more)

0DanArmak
Yes, but this only really works if, when the student is presented with an example they didn't memorize, they can still solve it using their understanding. And to make sure they do understand, after they've practiced on the simple cases they can memorize, you routinely set problems that require understanding. You can't start with understanding because when solving a few simple cases (like 5x5), memorization really is effective, and students may choose to memorize even if you don't explicitly tell them to.

I think the worry is that they are only concerned about getting the answer that gets them the good grade, rather than understanding why the answer they get is the right answer.

So you end up learning the symbols "5x5=25," but you don't know what that means. You may not even have an idea that corresponds to multiplication. You just know that when you see "5x5=" you write "25." If I ask you what multiplication is, you can't tell me: you don't actually know. You are disconnected from what the process you are learning is supposed to be tracking, because all you have learned is to put in symbols where you see other symbols.

0DanArmak
But surely in math, of all subjects, it's easily possible to construct problems that cannot be solved without thinking and understanding, that do not reduce to mere memory and recognition of a known question "5x5". Then students who want the "right" answer will be forced to understand.

This isn't absolute, of course. When learning elementary multiplication, pretty much all you can ask about is multiplying, and there are only a few dozen pairs in the 10x10 multiplication table, and students generally just remember them, they don't calculate. But by the time you're up to arbitrary size multiplication or long division, you need to apply an algorithm; that's a step of understanding, because you can see that the algorithm also produces the results you memorized earlier. And so on.

When students are at the 5x5 level, I don't think there is an answer to "why is 25 the right answer?" that they could understand - it just is the right answer, a brute fact about life, just like the sky is blue and sun comes up every day. But that doesn't continue forever. In my personal experience schools go way too far in the other direction, and keep asking for rote memorization when it's already possible to ask for understanding.

I think you are discounting effects such as confirmation bias, which lead us to notice what we expect and can easily label while leading us to ignore information that contradicts our beliefs. If 99 out of 100 women don't nag and 95 out of 100 men don't nag, given a stereotype that women nag, I would expect people think of the one woman they know that nags, rather than the 5 men they know that do the same.

Frankly, without data to support the claim that:

There is a lot of truth in stereotypes

I would find the claim highly suspect, given even a rudimentary understanding of our psychological framework.
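To put numbers on the point above (using the comment's own hypothetical rates, with equal group sizes assumed): most naggers would actually be men, yet confirmation bias keeps the stereotype pointed at women.

```python
# Using the hypothetical rates from the comment: 1 in 100 women nag,
# 5 in 100 men nag. With equal group sizes, a randomly chosen nagger
# is a woman only about 1 time in 6.
women, men = 100, 100
nagging_women = women * (1 - 0.99)   # 1 woman
nagging_men = men * (1 - 0.95)       # 5 men
p_nagger_is_woman = nagging_women / (nagging_women + nagging_men)
```

So even under these made-up numbers, a "women nag" stereotype would be tracking the minority of cases.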

2Alsadius
It's a system seriously prone to false positives, of course. But I think the odds of a true stereotype getting established are sufficiently higher than the odds of a false one getting established that it still counts as positive evidence.

I seriously doubt that most people who make up jokes or stereotypes truly have enough data on hand to reasonably support even a generalization of this nature.

1Alsadius
Stereotypes are largely consensus-based, which gives them a larger data pool than any individual would have. If a comedian starts making jokes about the foibles of a large group, and most people haven't experienced those same foibles, they're not going to find it funny.

Now, smaller groups can get a lot nastier treatment, both because there's less evidence to contradict a stereotype, and because they can turn into the token butt of jokes (Newfies being the stereotypical example where I'm from - nobody actually believes the jokes, but everybody makes them just because they're the group you make dumb-people jokes about). But "women" is far too common a group to get much in the way of false stereotypes, for example.

At this point, I should also point out the dangers of stereotypes that are true only because culture forces them to be. For example, saying that women needed protection in the 19th century was basically true, but it was largely true because we didn't let women protect themselves. Feedback loops are a real danger.
asparisi120

Groupthink is as powerful as ever. Why is that? I'll tell you. It's because the world is run by extraverts.

The problem with extraverts... is a lack of imagination.

pretty much everything that is organized is organized by extraverts, which in turn is their justification for ruling the world.

This seems to be largely an article about how we Greens are so much better than those Blues rather than offering much that is useful.

0Swimmy
Yeah, my first thought was, because we're animals that evolved to be quite social and have developed cognitive biases in light of that fact! Bet you can find powerful "groupthink" in introverts as well. . . like, say, an insistence on thinking in terms of introverts vs. extroverts.

I don't have the answer but would be extremely interested in knowing it.

(Sorry this comment isn't more helpful. I am trying to get better at publicly acknowledging when I don't know an answer to a useful question in the hopes that this will reduce the sting of it.)

A potential practical worry for this argument: it is unlikely that any such technology will grant just enough for one dose for each person and no more, ever. Most resources are better collected, refined, processed, and utilized when you have groups. Moreover, existential risks tend to increase as the population decreases: a species with only 10 members is more likely to die out than a species with 10 million, ceteris paribus. The pill might extend your life, but if you have an accident, you probably need other people around.

There might be some ideal number... (read more)

Where is the incentive for them to consider the public interest, save for insofar as it is the same as the company interest?

It sounds like you think there is a problem: that executives being ruthless is not necessarily beneficial for society as a whole. But I don't think that's the root problem. Even if you got rid of all of the ruthless executives and replaced them with competitive-yet-conscientious executives, the pressures that create and nurture ruthless executives would still be in place. There are ruthless executives because the environment favors them in many circumstances.

1Desrtopa
I'm not arguing that the core of the problem is that business executives are too ruthless. But I do suspect that to the extent that the current system rewards ruthlessness, it may be purely or almost purely due to ways in which it deviates from a system that offers no perverse incentives from a societal perspective.
asparisi-10

Your title asks a different question than your post: "useful" vs. being a "social virtue."

Consider two companies: A and B. Each has the option to pursue some plan X, or its alternative Y. X is more ruthless than Y (X may involve laying off a large portion of their workforce, a misinformation campaign, or using aggressive and unethical sales tactics) but X also stands to be more profitable than Y.

If the decision of which plan to pursue falls to a ruthless individual in company A, company A will likely pursue X. If the decision falls to a... (read more)

1Desrtopa
I chose a somewhat misleading title deliberately, although I can understand if people take issue with that. As I acknowledged in the post itself, it's clear that ruthlessness can be useful from the perspective of individual companies. From the perspective of a person judging their value to society, it's not so clear that ruthless business executives are useful. Their competitive advantage may lie purely in allowing them to make decisions that are in the company's interest, but not the public interest.
1DaFranker
I'm guessing you mean X stands to be more profitable than Y?

You say this is why you are not worried about the singularity, because organizations are supra-human intelligences that seek to self-modify and become smarter.

So is your claim that you are not worried about unfriendly organizations? Because on the face of it, there is good reason to worry about organizations with values that are unfriendly toward human values.

Now, I don't think organizations are as dangerous as a UFAI would be, because most organizations cannot modify their own intelligence very well. For now they are stuck with (mostly) humans for hardwar... (read more)

-1timtyler
Today's organisations are surely better candidates for self-improvement of intelligence than today's machines are. Of course both typically depend somewhat on the surrounding infrastructure, but organisations like the US government are fairly self-sufficient - or could easily become so - whereas machines are still completely dependent on others for extended cumulative improvements. Basically, organisations are what we have today. Future intelligent machines are likely to arise out of today's organisations. So, these things are strongly linked together.
5jsteinhardt
He says he's not worried about the singularity because he is more worried about unfriendly organizations, as that is a nearer-term issue.

Definitely. These are the sorts of things that would need to be evaluated if my very rough sketch were to be turned into an actual theory of values.

Well, effectiveness and desire are two different things.

That aside, you could be posting for desires that are non-status related and still desire status. Human beings are certainly capable of wanting more than one thing at a time. So even if this post was motivated by some non-status related desire, that fact would not, in and of itself, be evidence that you don't desire status.

I'm not actually suggesting you update for you: you have a great deal more access to the information present inside your head than I do. I don't even have an evidence-based argument... (read more)

0handoflixue
In a vacuum, and assuming a perfectly spherical point, I think we agree :)

Eh... but people like rock stars even though most people are NOT rock stars. People like people with really good looks even though most people don't have good looks. And most people do have some sort of halo effect on wealthy people they actually meet, if not "the 1%" as a class.

I am not sure that a person who has no desire for status will write a post about how they have no desire for status that much more often than someone who does desire status. Particularly if this "desire" can be stronger or weaker. So it could be:

A- The person re... (read more)

0handoflixue
Y'know, that reply (of mine) misses the point: Read this thread, and pay attention to the voting. Most every comment of mine is a 0 or a 1 karma post. There's lots of 5-10 karma posts in this thread. If I value status to the point that I'm willing to lie, why am I so bad at it? :)
0handoflixue
Can we agree that "being different" can be both dangerous or beneficial? I've been raised with the idea that my brand of different, in general, is dangerous for me (threatened with expulsion from school, numerous threats of violence until I learned to shut up about some topics), so my prior is that any abnormal behaviour of mine is more likely dangerous than beneficial. As to the rest, I think we'll just have to disagree - you're making a good point from an external standpoint, but nothing that would really prompt ME to update (for one thing, I like to think I'm smart and socially skilled enough to pull vastly more than a +10 on a post if I just wanted status :))

I upvoted it because the minimum we'd get without running a study would be anecdotal evidence.

I'm not sure that there is a close link between "status" and "behaving." Most of the kids I knew who I would call "status-seeking" were not particularly well behaved: often the opposite. Most of the things you are talking about seem to fall into "good behavior" rather than "status."

Additionally... well, we'd probably need to track a whole lot of factors to figure out which ones, based on your environment, would be selected for. And currently, I have no theory as to which timeframes would be the most important to look at, which would make such a search more difficult.

2NancyLebovitz
There may be important differences between avoiding low status and seeking high status.
2Sabiola
Good behaviour on your part would get your parents higher status with their peers, bad behaviour (for certain values of 'bad') would get you higher status with your peers.

I wouldn't say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So:

If (C. elegans) & ~(Uploading) goes up, then (Human) & ~(Uploading) goes WAY up.

Of course, this commits us to the converse. And since the converse is what happened we would say that it does raise the Human&Uploadable probabilities. Maybe not by MUCH. You rightly point out the dissimilarities that would make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.
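As a toy Bayesian version of this (all numbers are mine and purely illustrative): if a successful worm upload is somewhat more expected under "brains are uploadable" than under its negation, the observation nudges the posterior up only modestly.

```python
# Toy Bayes update: how much does a successful C. elegans upload raise
# P(human brains are uploadable)? All likelihoods here are illustrative
# assumptions, chosen to reflect "some bearing, but not much."
prior = 0.5
p_worm_given_uploadable = 0.9   # worm result fairly expected if true
p_worm_given_not = 0.6          # still quite possible even if false

posterior = (p_worm_given_uploadable * prior) / (
    p_worm_given_uploadable * prior + p_worm_given_not * (1 - prior)
)
# prior 0.5 -> posterior 0.6: a small but real update.
```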

asparisi140

Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...

In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.

This looks like a pretty simple situation to run a cost/benefit on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community.

Benefits: May help public image. (Sub-benefits: Make LW more friendly to new persons, advance SIAI-related PR); May reduce brain... (read more)

Hm. I know that the biological term may not be quite right here (although the brain is biological, scaling this idea up may be problematic) but I have wondered if certain psychological traits are not epigenetic: that is, it isn't that you are some strange mutant if you express terminal value X strongly and someone else expresses it weakly. Rather, that our brain structures lead to a certain common set of shared values but that different environmental conditions lead to those values being expressed in a stronger or weaker sense.

So, for instance, if "st... (read more)

2handoflixue
As a kid, my parents gave me a TON of trouble for exhibiting routine low-status behaviour (chewing on my shirt, refusing to wear a shirt, wearing stained shirts, showering once or twice a week, getting all sorts of dirty, eating food with the wrong fork...), and my mom specifically taught me a fair amount of etiquette (correct fork, how to set a table for a 3-course meal, so on) So, I'm anecdotally evidence against your theory :)
asparisi120

The fact that I won't be able to care about it once I am dead doesn't mean that I don't value it now. And I can value future-states from present-states, even if those future-states do not include my person. I don't want future sapient life to be wiped out, and that is a statement about my current preferences, not my 'after death' preferences. (Which, as noted, do not exist.)

The difference is whether or not you care about sapience as instrumental or terminal values.

If I only instrumentally value other sapient beings existing, then of course, I don't care whether or not they exist after I die. (They will cease to add to my utility function, through no fault of their own.)

But if I value the existence of sapient beings as a terminal value, then why would it matter if I am dead or alive?

So, if I only value sapience because, say, other sapient beings existing makes life easier than it would be if I was the only one, then of course ... (read more)

1TrE
(Playing devil's advocate) Once you're dead, there's no way you can feel good about sapient life existing. So if I toss a coin 1 second after your death and push the red button causing a nuclear apocalypse iff it comes up heads, you won't be able to feel sorrow in that case. You can certainly be sad before you die about me throwing the coin (if you know I'll do that), but once you're dead, there's just no way you could be happy or sad about anything.

I think I have a different introspection here.

When I have a feeling such as 'doing-whats-right' there is a positive emotional response associated with it. Immediately I attach semantic content to that emotion: I identify it as being produced by the 'doing-whats-right' emotion. How do I do this? I suspect that my brain has done the work to figure out that emotional response X is associated with behavior Y, and just does the work quickly.

But this is malleable. Over time, the emotional response associated with an act can change and this does not necessarily in... (read more)

I am not sure that all humans have the empathy toward humanity on the whole that is assumed by Adams here.

students might learn about the debate between Realism and Nominalism, and then be expected to write a paper about which one they think is correct (or neither). Sure, we could just tell them the entire debate was confused...

This would require a larger proportion of philosophy professors to admit that the debate is confused.

Working in philosophy, I see some move toward this, but it is slow and scattered. The problem is probably partially historical: philosophy PhDs trained in older methods train their students, who become philosophy PhDs trained in their professor's methods+anything that they could weasel into the system which they thought important. (which may not always be good modifications, of course)

It probably doesn't help that your average philosophy grad student starts off by TAing a bunch of courses with a professor who sets up the lecture and the material and the gr... (read more)

Well, you can get up to 99 points for being 99 percent confident and getting the right answer, or minus several hundred (I have yet to fail at a 99 so I don't know how many) for failing at that same interval.

Wrong answers are, for the same confidence interval, more effective at bringing down your score than right answers are at bringing it up, so in some sense as long as you are staying positive you're doing well.

But if you want to compare further, you'd have to take into account how many questions you've answered, as your lifetime total will be different ... (read more)
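Those numbers (+99 at the 99% level, "minus several hundred" for a miss) are consistent with a logarithmic scoring rule of the form 100·log2(2p). The game's exact formula isn't stated in the thread, so the sketch below is an assumption:

```python
import math

# A logarithmic scoring rule consistent with the scores described:
# score = 100 * log2(2p), where p is the probability you assigned to
# what actually happened. (The game's exact formula is an assumption.)
def score(confidence, correct):
    p = confidence if correct else 1 - confidence
    return 100 * math.log2(2 * p)

right = round(score(0.99, True))    # +99
wrong = round(score(0.99, False))   # -564: misses cost far more
```

Under this rule a wrong answer at 99% costs roughly -564 points, which matches the "minus several hundred" asymmetry described above.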

High score seems to be good in terms of "My confident beliefs tend to be right."

Having your bars on the graph line up with the diagonal line would be an "ideal" graph (neither over- nor under-confident).

0JoshuaFox
What is a high score? I realize that there is no absolute scale, but I have no idea if 10 is good or 1000 is bad.

To clarify, wrong in any of my answers at the 99% level. I have been wrong at other levels (including, surprisingly, hovering within around 1% of 90% at the 90% level).

Well, in 11 out of 145 answers (7.5%) I so far have answered 99%, and I have yet to be wrong in any of my answers.

If I continue at this rate, in approximately 1,174 more answers, I'll be able to tell you if I am well calibrated (less, if I fail at more than one answer in the intervening time).
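The 1,174 figure checks out if the reasoning is: keep answering until there are 100 answers at the 99% level, at which point about one miss is expected from a well-calibrated answerer. A quick reconstruction, assuming that's the logic:

```python
import math

# Reconstructing "approximately 1,174 more answers": at the current rate
# of 11-in-145 answers made at the 99% level, reaching 100 such answers
# (where ~1 miss is expected if well calibrated) takes about 1,174 more.
rate = 11 / 145                           # fraction of answers at 99%
needed = 100 - 11                         # 89 more 99%-level answers
more_answers = math.ceil(needed / rate)   # 1174
```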


Question 1: This depends on the technical details of what has been lost. If it is merely an access problem: if there are good reasons to believe that current/future technologies of this resurrection society will be able to restore my faculties post-resurrection, I would be willing to go for as low as .5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so, memories are just gone, but I could repair my ability to remember in the future) probably .9. If the difficulties would literally be permanent, 1.... (read more)

asparisi230

I had an interesting experience with this, and I am wondering if others on the male side had the same.

I tried to imagine myself in these situations. When a situation did not seem to have any personal impact from the first person or at best a very mild discomfort, I tried to rearrange the scenario with social penalties that I would find distressing. (Social penalties do differ based on gender roles)

I found this provoked a fear response. If I give it voice, it sounds like "This isn't relevant/I won't be in this scenario/You would just.../Why are you doi... (read more)

Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.

asparisi140

Suggestions (for general audience outside of LW/Rationalist circles)

I like the name "Confidence Game"- it reminds people of a con game while informing them of the point of the game.

See if you can focus on a positive-point scale. Try to make it so that winning nets you a lot of points but "losing" costs only a couple. (Same effect on scores, either way.) This won't seem as odd if you set it up as one long scale rather than two shorter ones: so 99-90-80-60-50-60-80-90-99.

Setting it to a timer will make it ADDICTIVE. Set it up ... (read more)

2asparisi
Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.