
Comment author: Thomas 27 March 2017 04:19:26PM 0 points [-]

I must say that this solution of yours is quite impressive. Quite impressive indeed.

Comment author: dogiv 27 March 2017 04:16:39PM 0 points [-]

I agree with that... personally I have tried several times to start a private journal, and every time I basically end up failing to write down any important thoughts because I am inhibited by the mental image of how someone else might interpret what I write--even though in fact no one will read it. Subconsciously it seems much more "defensible" to write nothing at all, and therefore effectively leave my thoughts unexamined, than to commit to having thought something that might be socially unacceptable.

Comment author: Thomas 27 March 2017 04:12:16PM *  0 points [-]

I promise you scores and images of solutions, whatever they will be. Calculations are under way right now and they should be available soon.

Comment author: gathaung 27 March 2017 04:04:15PM *  0 points [-]

AFAIK (and as Wikipedia tells it), this is not how IQ works. Measuring intelligence gives us an "ordinal scale", i.e. a ranking of test subjects. Honest reporting would be "you are in the top such-and-so percent". For example, rating someone as "one-in-a-billion" is not even wrong; it is meaningless, since we have not administered one billion IQ tests over the course of human history, and have no idea what one-in-a-billion performance on an IQ test would look like.

Because IQ is designed by people who would try to parse HTML with a regex (I cannot think of a worse insult here), it is normalized to a normal distribution. That is, one applies the inverse error function, with an SD of 15 points, to the percentile data. Hence, IQ is Gaussian by definition. To compare, use e.g. Python as a handy pocket calculator:

from math import *

iqtopercentile = lambda x: erfc((x-100)/15)/2

iqtopercentile(165)

4.442300208692339e-10

So we see that a claim of any human being having an IQ of 165+ is statistically meaningless. Even extrapolating to all of human history, an IQ of 180+ is meaningless:

iqtopercentile(180)

2.3057198811629745e-14

Yep, by the current definition you would need to test 10^14 humans to find one that manages an IQ of 180. If you test 10^12 humans and one god-like superintelligence, the superintelligence gets an IQ of maybe 175 -- which is why you should not apply the inverse error function to an ordinal scale: ordinal scales cannot capture bimodal distributions. Trying to do so invites eldritch horrors onto our plane who will parse HTML with a regex.
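For anyone who wants to rerun the numbers, here is a self-contained version of the calculation above (a minimal sketch). Note that it applies erfc to (x-100)/15 exactly as the snippet above does; a textbook Gaussian tail for mean 100 and SD 15 would divide by 15*sqrt(2) instead, which gives much larger (less extreme) tail probabilities, so treat the exact figures as illustrative.

from math import erfc, sqrt

# Conversion as used in the snippet above: no 1/sqrt(2) factor
iq_to_percentile_as_above = lambda x: erfc((x - 100) / 15) / 2

# Textbook upper tail of a Normal(mean=100, sd=15), with the 1/sqrt(2) factor
iq_to_tail_normal = lambda x: erfc((x - 100) / (15 * sqrt(2))) / 2

for iq in (165, 180):
    print(iq, iq_to_percentile_as_above(iq), iq_to_tail_normal(iq))

# The middle column reproduces the figures above: ~4.4e-10 for 165, ~2.3e-14 for 180.
# With the sqrt(2) factor the tails are roughly 7e-6 and 5e-8 respectively.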

Comment author: MaryCh 27 March 2017 03:45:41PM 0 points [-]

If rational thinking depends, to whatever extent, on self-knowledge, why not write poetry to raise self-awareness?

Comment author: Dagon 27 March 2017 03:41:29PM 0 points [-]

Is that investment game packaged anywhere (preferably with a better UI) so I can run it multiple times, and (afterward) see the actual probabilities of the items? I think I got somewhat unlucky and my explore/exploit mix didn't pay off as I'd hoped, but I'd have fun exploring an optimum method.

Comment author: Dagon 27 March 2017 03:37:30PM 0 points [-]

Hadn't thought about it, but you're right, it is an odd categorization. Web browsing is already pretty deeply computer-assisted no matter what, and the ability to run arbitrary scripts seems closer to being a bot, not further from it.

It's not a binary choice, but more of a continuum from "very little automation, lots of human button-pushing" to "mostly automated, with very high-level human direction (decision to write the code, decision to run it on that site, etc.)". It's funny to assert that a simpler (or deliberately crippled) browser is further to the right when it's actually further to the left.

So, have I removed what little humor there is in the fact that the message will only ever be parsed by humans but it accuses the reader of being a bot?

Comment author: Dagon 27 March 2017 03:29:48PM 0 points [-]

I do care about his reasoning, and disagree with it (most notably the "any torture -> infinite torture" part, with no counterbalancing "any pleasure -> ?" term in the calculation).

But I'm with lahwran on the conclusion: destroying the last copy of someone is especially heinous, and nowhere near justified by your reasoning. I'll join his precommitment to punish you if you commit crimes in pursuit of these wrong beliefs (note: plain old retroactive punishment, nothing acausal here).

Comment author: Viliam 27 March 2017 03:18:57PM *  0 points [-]

The reference. (Edited to "smart robots" afterwards.)

I think manga-style illustrations would actually be cool, and Eliezer would most likely approve. (We probably don't have the capacities to make a movie. Yet.)

BTW, you might enjoy reading this.

In response to Act into Uncertainty
Comment author: Viliam 27 March 2017 03:13:15PM *  0 points [-]

Mathematically, refusing to make a prediction may be equivalent to going with some prior distribution of possible values.

Socially, it's different. For example, people have different "prior distributions", so talking about yours explicitly exposes a lot of information about you, while refusing to make a prediction exposes little. (You might get into unnecessary conflicts over the parts where the probability is small anyway, so it wouldn't make a practical difference.)

I suspect that refusing to make a prediction, even for yourself, is just an internalization of this rule. You know that doing something would make other people laugh at you, so it feels silly to do it even if no one is watching.

Comment author: Error 27 March 2017 03:06:05PM 0 points [-]

Being reasonable: strong robots and dead walking people

I don't get the reference, but my first thought: super robots vs. zombies sounds like an awesome anime.

Comment author: jmh 27 March 2017 02:54:47PM 0 points [-]

The answer seems fairly simple to me. You're not in any position to decide the risks others assume. If you're concerned about the potential torture, the only mind you can really do anything about is yours -- you don't run around killing everyone else, just yourself.

Comment author: Lumifer 27 March 2017 02:53:17PM *  1 point [-]

The parallel still holds. If you pick a random person with the lesion, he will either develop cancer or not. The probability that the random person with the lesion develops cancer is 90%. Is that not so?

Comment author: MaryCh 27 March 2017 02:36:48PM *  1 point [-]

My kid has recently decided he's into meteorology. First, he just walked around with scribblings of "+5, -11" and hummed a weather forecast theme. This got boring for everybody else, so we explained to him that 'weather' happens in some places, and people watch it for the explicit purpose of deciding some matters, not for sheer cuteness. (I don't think he believed me on that one.)

So now we make 'forecasts' - several for Ukraine (first the video, then the sound, for 25 cities or for north-south-center-west-east), Earth ('Australiaaa... thirty-six degrees... kangaroo can live'), and space in general (with fictional planets, although he ordered a Solar system, too). The upcoming one is going to be for mammoths (I'm thinking Eurasia + North America).

This lets us work on reading, writing, reciting (short messages; he doesn't like learning poetry by heart), painting and building things from cardboard, finding places on the globe. Although my husband groans about having to edit the end product (without delay).

Comment author: Viliam 27 March 2017 02:33:30PM *  1 point [-]

Explaining to actual kids is fun, and a good rationalist exercise. I recently told this to my two-year-old daughter when she asked what I was writing about while I was preparing a blog article on my computer. (She liked the explanation a lot. She insisted that I repeat it to her for the rest of the evening.)

Daddy has a lot of books, but these five he likes most: The first book says that people should eat tomatoes, and cucumber, and carrot, and cabbage, and peas, and beans. The second book says that people should exercise, like do squats, or hang from a bar. The third book says people should talk nice, not yell at each other, and say 'please' when they want something. The fourth book is about a lady who taught dogs and dolphins. And the fifth book says people should not do stupid stuff.

But yeah, for more complex topics, 5 years seems like a more appropriate age. I wonder how well people are actually calibrated about this; whether actual 5-year-olds would understand most of the ELI5 posts. Maybe someone could do an experiment with real kids -- tell them the stories, and then report how they repeated the lesson in their own words.

I am looking forward to the "Being reasonable: smart robots and dead walking people" book. :D

Comment author: ImmortalRationalist 27 March 2017 02:07:35PM 0 points [-]

This, and find better ways to optimize power efficiency.

Comment author: ImmortalRationalist 27 March 2017 02:00:10PM 0 points [-]

How do you even define free will? It seems like a poorly defined concept in general, and is more or less meaningless. The notion of free will that people talk about seems to be little more than a glorified form of determinism and randomness.

Comment author: Viliam 27 March 2017 02:00:05PM *  0 points [-]

trigger warning: discussing rape, in near mode

Not sure how much I can generalize from the few data points of women who trusted me personally enough to tell me about their bad experiences, but within that set, it was neither the archetypal "stranger hiding in a bush", nor the archetypal "lecherous uncle". I remember these three patterns:

a) Girl's mother has a new boyfriend. In mother's absence, the boyfriend starts making sexually suggestive remarks to the girl. Girl complains to her mother. Mother confronts the boyfriend, he dismisses it with laughing, telling the mother that her daughter is simply jealous of him, wanting to keep the mother only for herself. Mother gets angry, scolds the daughter for "lying", and categorically refuses to listen to her arguments anymore. Girl stops reporting to her mother, and their relationship goes from already quite bad to completely ignoring each other. The boyfriend keeps pushing further. (Luckily, in the cases I heard, the predictable bad end didn't actually happen, because something totally unrelated disrupted the setting.)

b) A girl is at a party with her friends. There is also a guy, stranger to her, but friendly with her friends. The party either happens at the guy's place, or at a large place with many rooms. Girl remains talking with the guy, while other people gradually leave. When they are left alone in the room, the guy suddenly becomes physical and rapes her. (In one case, when the girl afterwards starts inconspicuously asking her friends what exactly their relationship with the guy is, she is surprised to hear almost all of them telling her "actually, I don't like that guy, he seems like an asshole, but he is a friend of my friends, so I just try to ignore him when he comes to a party" or "I noticed him, but didn't pay any attention".)

c) A girl's boyfriend constantly refuses to take "no" for an answer; starting with relatively small things, gradually increasing the requests, until one day he rapes her. The girl keeps dating him, until later something else ends their relationship.

Again, not sure how typical these stories are, but... assuming they are relatively frequent, then the dichotomy between a stranger and a non-stranger doesn't properly fit the territory.

Technically, all three cases are "non-strangers". I believe that in many surveys, "mom's boyfriend" would even be classified as family; the friend-of-friends is a part of the social circle; and the boyfriend is obviously not a stranger, if they already spent some weeks or months dating.

Yet, in the first two cases, the rapist was a stranger to the girl, which makes him quite a non-central example of a "non-stranger". In the first case, she was unable to avoid him; he didn't really "pass her filters". In the second case, that was the kind of error in judgement that I warn against -- believing that the other person was already filtered by someone else, when actually the other people similarly believed that someone else did the filtering, or were just being polite towards a person that didn't pass their personal filter but didn't seem bad enough to initiate a conflict. The third case, yeah, that was a direct failure at filtering.

So, I object connotationally to the suggestion that it is useless to use "spending a lot of time with someone, without seeing something bad" as a heuristic against abuse, because most abusers pass the filters of neurotypical people anyway. First, we don't know how many abusers didn't pass the filters; maybe without them, there would be much more abuse. Second, in these three examples, the filter (a) couldn't be used by the victim, (b) was used improperly, and (c) wasn't used at all. I am not saying the filters are flawless, just that not using them at all is a fallacy of grey.

For a rationalist community living together, I suspect the first two scenarios could be relevant. A parent, for whom living with other rationalists has high value, might turn a blind eye to the red flags reported by their children or a spouse, and rationalize them away. A person no one actually personally vouches for could be invited, simply because they participated in a LW meetup, said hello to many people, and friended them on Facebook.

I think it would be reasonable for the wannabe neighbors to spend some time together before buying the new house. For example, spend a vacation together, preferably at a place where you are expected to cook for yourselves. And maybe, collect some feedback on personal feelings towards each other, in a way that would prevent transitive reporting of "I feel X, but I guess most people are going to say Y, therefore I am saying Y too". Be honest; not having an opinion either way is a valid option. Generally, have an intermediate step between "met each other at a meetup" and "living in a baugruppe".

Comment author: Elo 27 March 2017 01:58:40PM 0 points [-]

Cars are net positive.

Edit: ignoring global warming because it's really hard to quantify. Just comparing deaths to global productivity increase because of cars. Cars are a net positive.

Comment author: dglukhov 27 March 2017 01:44:35PM *  0 points [-]

Cars also directly involve people in motor vehicle accidents, one of the leading causes of death in the developed world. Cars, and motor vehicles in general, also contribute to an increasingly alarming concentration of emissions into the atmosphere, with adverse effects to follow, most notably global warming. My point still stands.

A technology is only inherently good if it solves more problems than it causes, with each problem weighted by its impact on the world.

Comment author: chaosmage 27 March 2017 01:07:52PM 0 points [-]

Cool. Is there a recording? I cannot read notation well enough to imagine what it sounds like.

Do you make other compositions, and how does this compare to your other work?

Comment author: sone3d 27 March 2017 12:44:15PM *  4 points [-]

ELI5 Posts Series

I would like to start an ELI5 Posts Series: separate posts tagged ELI5 (Explain Like I'm Five) about relevant posts in the LW community.

Examples: ELI5: Belief in Belief. ELI5: Leaky Generalizations. ELI5: Guessing the Teacher's Password. ...

In the comments, users try their best to explain the key concepts and insights of each post with ELI5 examples. I think this could be very useful for incoming aspiring rationalists.

Let me know what you think.

Comment author: madhatter 27 March 2017 12:22:01PM 1 point [-]

Let me also add that while a sadist can parallelize torture, it's also possible to parallelize euphoria, so maybe that mitigates things to some extent.

Comment author: Viliam 27 March 2017 09:15:06AM 0 points [-]

Also, the edit warring is two way...someone wholesale deleted the Rs B section.

Problem is, this is probably not good news for LW. Tomorrow, the RB section will most likely be back, possibly with a warning on the talk page that the evil cultists from LW are trying to hide their scandals.

Comment author: ChristianKl 27 March 2017 08:31:18AM 0 points [-]

Learning different mental models is quite useful.

On the other hand I'm not sure that it makes sense to think that there's one list with "the major mental models". Many fields have their own mental models.

Comment author: ChristianKl 27 March 2017 08:27:00AM 1 point [-]

It's worth noting that David Gerard was a LW contributor with a significant amount of karma: http://lesswrong.com/user/David_Gerard/

Comment author: ChristianKl 27 March 2017 07:07:19AM 0 points [-]

No, they are disappearing because the mods happen to enforce the ban and delete the posts of the banned people.

Comment author: entirelyuseless 27 March 2017 05:42:47AM 1 point [-]

50% of the people have the lesion. That is a frequency. But if you pick a random person, that person either has the lesion or not. The probability, and not the frequency (which is not meaningful in the case of such an individual), that the random person has the lesion is 50%, because that is our expectation that the person has the lesion.

Comment author: lahwran 27 March 2017 04:58:29AM 2 points [-]

of course not, you're not destroying the primary copy of me. But that's changing the case you're making; you specifically said that killing now is preferable. I would not be ok with that.

Comment author: lahwran 27 March 2017 04:55:54AM *  0 points [-]

I DON'T CARE about your hell reasoning. I AM ALREADY FIGHTING for my future, don't you dare decide you know so much better that you won't accept the risk that I might have some measure that suffers. If you want good things for yourself, update your moral theory to get it out of my face. Again: if you try to kill me, I will try to kill you back, with as much extra pain as I think is necessary to make you-now fear the outcome.

Maybe some people would rather kill themselves than risk this outcome. That's up to them. But don't you force it on me, or goddamn else.

Comment author: math67 27 March 2017 02:01:03AM 0 points [-]

So now whole threads are disappearing because they point out the disappearing of comments. Seriously, what are the mods afraid of?

Comment author: Lumifer 27 March 2017 12:33:10AM 0 points [-]

SPAMMITY SPAM SPAM

Comment author: johnsonmx 27 March 2017 12:27:26AM 0 points [-]

We don't live in a universe that's nice or just all the time, so perhaps there are nightmare scenarios in our future. Not all traps have an escape. However, I think this one does, for two reasons.

(1) all the reasons that RobinHanson mentioned;

(2) we seem really confused about how consciousness works, which suggests there are large 'unknown unknowns' in play. It seems very likely that if we extrapolate our confused models of consciousness into extreme scenarios such as this, we'll get even more confused results.

Comment author: RobinHanson 26 March 2017 11:18:00PM 3 points [-]

If it is the possibility of large amounts of torture that bothers you, instead of large ratios of torture experience relative to other better experience, then any growing future should bother you, and you should just want to end civilization. But if it is ratios that concern you, then since torture isn't usually profitable, most em experience won't be torture. Even if some bad folks being rich means they could afford a lot of torture, that would still be a small fraction of total experience.

Comment author: Regex 26 March 2017 11:15:39PM *  0 points [-]

It seems to me that the sadistic simulator would fill up their suffering simulator to capacity. But is it worse for two unique people to be simulated and suffering compared to the same person simulated and suffering twice? If we say copies' suffering is less bad than that of unique minds, then if they didn't have enough unique human minds, they could just apply birth/genetics and grow some more.

This is more of a simulating-minds-at-all problem than a unique-minds-left-to-simulate problem.

Comment author: RomeoStevens 26 March 2017 10:48:49PM 0 points [-]

3.1.4 seems totally ungrounded from analysis and fairly random in its speculation.

Comment author: gjm 26 March 2017 09:51:01PM 1 point [-]

Did you see the little investment game at the end? I would be very surprised if you could replicate that in SurveyMonkey or Google Forms.

Comment author: Lumifer 26 March 2017 09:48:25PM 0 points [-]

Yeah, I guessed as much, but I find it funny that browsers unwilling to run scripts are auto-classified as bots :-)

Comment author: Lumifer 26 March 2017 09:47:23PM 0 points [-]

Let's look at the context.

You said

90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer

That, you are saying, are frequencies and not probabilities. OK, let's continue:

Let's suppose that 50% of the people have the lesion and 50% do not, just to make the situation specific.

The probability of having the lesion given a random person ... will be 50%, and the probability of not having the lesion will be 50%.

So why is having a lesion (as a function of being a human in this particular population) a probability, while having cancer (as a function of having the lesion) is a frequency?
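For concreteness, the figures in this exchange combine as follows under the law of total probability (a minimal sketch of the arithmetic only; the disagreement above is about whether to call these quantities frequencies or probabilities, not about the numbers):

p_lesion = 0.5                 # 50% of the population has the lesion
p_cancer_given_lesion = 0.90   # 90% of people with the lesion get cancer
p_cancer_given_no_lesion = 0.01

# Chance that a randomly drawn person develops cancer
p_cancer = p_lesion * p_cancer_given_lesion + (1 - p_lesion) * p_cancer_given_no_lesion
print(p_cancer)  # 0.455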

Comment author: RedMan 26 March 2017 07:46:57PM 0 points [-]

Correct, this is very much an 'I'll pray for you' line of reasoning. To use a religious example, it is better to martyr a true believer (who will escape hell) than to permit a heretic to live, as the heretic may turn others away from truth, and thus curse them to hell. So if you're only partially sure that someone is a heretic, it is safer for the community to burn them. Anyone who accepts this line of argument would rather be burnt than allowed to fall into heresy.

Unfortunately, mind uploading gives us an actual, honest road to hell, so the argument cannot be dispelled with the statement that the risk of experiencing hell is unquantifiable or potentially zero. As I argue here, it is non-zero and potentially high, so using moral arguments that humans have used previously, it is possible to justify secure deletion in the context of 'saving souls'. This does not require a blender; a 'crisis uploading center' may do the job just as well.

Comment author: RedMan 26 March 2017 07:35:02PM *  0 points [-]

I discover evidence that some sadistic jerk has stolen copies of both our minds, uploaded them to a torture simulation, and placed the torture simulation on a satellite orbiting the sun with no external communication inputs and a command to run for as long as possible at maximum speed. Rescue via spaceship is challenging and would involve tremendous resources that we do not have available to us.

I have a laser I can use to destroy the satellite, but a limited window in which to do it (would have to wait for orbits to realign to shoot again).

Would you be upset if I took the shot without consulting you?

Comment author: entirelyuseless 26 March 2017 07:12:56PM 1 point [-]

Presumably someone who accepted the argument would be happy with this deal.

Comment author: lahwran 26 March 2017 07:08:51PM 2 points [-]

morality is about acausal contracts between counterfactual agents, and I do not want my future defended in this way. I don't care what you think of my suffering; if you try to kill me to prevent my suffering, I'll try to kill you back.

Comment author: entirelyuseless 26 March 2017 06:36:41PM 0 points [-]

"90% of the people" etc is a statement about frequencies, not probabilities.

Comment author: ChristianKl 26 March 2017 06:16:57PM 0 points [-]

The Wikipedia summary of the Fair Housing Act says: "The Fair Housing Act is a federal act in the United States intended to protect the buyer or renter of a dwelling from seller or landlord discrimination. Its primary prohibition makes it unlawful to refuse to sell, rent to, or negotiate with any person because of that person's inclusion in a protected class."

Not being a rationalist doesn't seem like a protected class.

Comment author: Vaniver 26 March 2017 06:04:19PM 0 points [-]

It looks like it's finishing soon, though.

Comment author: ChristianKl 26 March 2017 06:00:25PM *  1 point [-]

You could also subtract properties or multiply or divide them.

More trusted adults might increase the chances that the child isn't isolated and talks about his experiences with someone, which makes him less likely to become a victim.

The WHO for example says that among the risk factors for abuse there are:

being isolated in the community or lacking a support network

a breakdown of support in child rearing from the extended family.

If your true concern is children not getting abused, it makes sense to look at the actual risk factors that the literature supports.

Children in this project might actually be less at risk because there's a support network. The textbook says "have a strong support network", not "keep the support network small to reduce the number of trusted adults".

Comment author: MaryCh 26 March 2017 05:47:30PM *  0 points [-]

A New Year Song about the Hirsch Index, on the joy of reciprocal citing (funnier if you know Russian, but should be clear enough even if you don't. Unfortunately, no subtitles. Ends with 'Happy New Year! Happy New Hirsch! I'm off to write an article.')

Comment author: username2 26 March 2017 05:38:43PM 0 points [-]

Summation of probabilities.

Comment author: Dagon 26 March 2017 04:26:24PM 0 points [-]

It depends on what you're trying to discern with the question. If you want to know what people would do with a windfall in addition to their current allocation, ask that. But then you probably need to ask some intrusive questions about what they previously had.

You could state "X is all you have in the world, and you make enough to live on but not to save anything additional. Which of the following would you choose to save for a vacation in 6 months?" or "for retirement in 20 years" or "for emergencies".

Comment author: Oscar_Cunningham 26 March 2017 04:26:16PM 0 points [-]

Any luck? I'd be interested in seeing some of the computer solutions even if their scores didn't beat mine.

By the way, I can now improve my score to 14sqrt(3)-24 = 0.249... My covering shape is a 1/4 by 1/7 right-angled triangle. This clearly tiles the square perfectly, and you can also fit 24 of them into the equilateral triangle. To see this, first divide the equilateral triangle exactly into 24 right-angled triangles of sides 1/4 and 1/(4sqrt(3)), and then note that 1/7 < 1/(4sqrt(3)). There's no point in drawing a picture since you can barely see the gaps.
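A quick numerical check of the two claims above, i.e. the value of the score and the inequality that lets the 1/4 by 1/7 triangle fit inside the 1/4 by 1/(4sqrt(3)) one (just arithmetic, not a proof of the tiling itself):

from math import sqrt

print(14 * sqrt(3) - 24)          # 0.2487..., the claimed score
print(1 / 7, 1 / (4 * sqrt(3)))   # 0.1428... and 0.1443...
print(1 / 7 < 1 / (4 * sqrt(3)))  # True, so the covering triangle fits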

Comment author: ChristianKl 26 March 2017 04:08:12PM 0 points [-]

There's no reason to run your own survey software. SurveyMonkey or even Google forms do the job.

Comment author: SquirrelInHell 26 March 2017 04:01:09PM *  0 points [-]

OK, I've re-read the original papers carefully to check this.

Your criticism of Patch 1 is entirely based on the wording of "For example, in an appendix we show how by assuming that the difference is no greater than a factor of one million we can derive the key tripartite disjunction".

The wording is indeed misleading and wrong, but to be fair - only in this one sentence. In all other places the authors are consistent in saying that you need the factor to be no greater than N/1,000,000, or an "astronomically large number", with the understanding that an "astronomically large number" divided by 10^6 is still an "astronomically large number".

So overall the criticism is sorta uninteresting - I think you are attacking a particularly strawmanned reading of the whole "patch" paper.

As for Patch 2, didn't read yet.

Comment author: RedMan 26 March 2017 11:39:18AM 0 points [-]

As stated in a separate comment, the human mind runs at 20W, so that's probably a reasonable design goal for the power consumption of an emulation. Keeping a few copies of minds around for torture will eventually be a cheap luxury, comparable to leaving a lightbulb on.

Comment author: gjm 26 March 2017 11:37:28AM 0 points [-]

Fair enough! I hope what I've written above will be helpful to anyone else who finds it initially puzzling the same way as I did.

Comment author: RedMan 26 March 2017 11:34:27AM *  0 points [-]

For the sake of argument, some numbers to match the assumptions you named. Let's base these assumptions on some numbers available to Americans today, rounded to even numbers in the direction least favorable to my argument.

Percentage of population that are psychopaths: 1% (two orders of magnitude more non-psychopaths than psychopaths exist today).
Probability of being a victim of violent crime: varies a lot based on demographics, but 10 per 1000 per year is reasonable... so 1%.
Power consumption of a human mind: 20W (based on the human brain; we will not hit this immediately, but it is a design goal, and may even be exceeded in efficiency as we get better).
Power consumed by a typical American household: 900kWh per month (100 years in brain-seconds).
Number of humans available for uploading: 10 billion.

Over a hundred thousand years, that's a lot of terrible people, a lot of spare capacity for evil, and a high probability of everyone eventually experiencing a violent crime, like upload-torment. Changes to those numbers unfavorable to this scenario require incredible optimism about social developments, and pessimism about technical developments.

I feel like just about anyone, even without a Stanford-prison-experiment-like environment, can muster up the will to leave a lightbulb on for a while out of spite.

Arguably, once 'captured', the aggregate total time spent experiencing torture for a given future copy of you may vastly exceed the time spent on anything else.

Anyone who argues in favor of 'merciful' euthanasia for people on the way to horrific medical problems would likely argue in favor of secure deletion to avoid an eternity in hell.

Comment author: ChristianKl 26 March 2017 09:55:50AM 0 points [-]

As far as I understand the Accelerator project is supposed to go in this direction: https://www.facebook.com/groups/664817953593844/

Comment author: ChristianKl 26 March 2017 09:52:25AM 1 point [-]

But there are good reasons why neurotypical people require a long time before they start trusting someone; so "they are a member of the same community" should never be used as a replacement for "I have a lot of personal experience from gradually deepening interaction with this specific person".

Given that most abuse happens from people who aren't strangers and successfully passed the filters of neurotypical people that are required to build trust, I don't think trusting a person because you spent a lot of time with them is generally a good heuristic.

Comment author: TheAncientGeek 26 March 2017 09:17:26AM 0 points [-]

We know far more about the world than we do about our senses.

Well, so long as we can be sure we know anything without doing epistemology....

Comment author: ChristianKl 26 March 2017 09:16:19AM 0 points [-]

There is a middle ground of having a secure, private environment for your family with walls and clear separation. Such as most conventional living situations.

Most conventional living situations lead to an abuse rate of 25%. I don't think you have provided good evidence that the conventional layout is better.

Comment author: TheAncientGeek 26 March 2017 09:00:57AM 0 points [-]

Yep.

Comment author: ChristianKl 26 March 2017 09:00:11AM *  0 points [-]

Even if 4% of adults are sexually attracted to children, that doesn't mean they are going to abuse children. There are guys in this community who are sexually attracted to women but have never had sex, and who also wouldn't rape a woman just to have sex.

If it's clear to a rationalist that abusing a child will mean that he gets expelled from the community in which he lives and might face legal consequences, then I think most of the people in this community wouldn't act on a System I desire to engage in sexual abuse, because their System II is strong enough to think through the situation.

Practically, that means it's important to have an environment where open communication happens, so that the expectation exists that a child will communicate about situations with which they are uncomfortable. I think a lot of abuse happens in environments where that open communication is lacking and a child stays silent about abuse.

Comment author: TheAncientGeek 26 March 2017 08:58:56AM *  0 points [-]

Yikes. The current version of the WP article is a lot less balanced than the RW one!

Also, the edit warring is two way...someone wholesale deleted the Rs B section.

Comment author: ChristianKl 26 March 2017 08:55:36AM 2 points [-]

As the other anonymous said, this doesn't follow at all. A group living situation creates a larger field of "trusted adults" per child.

Do you have sources that suggest that having a larger circle of trusted adults per child increases the likelihood of getting abused?

Comment author: ChristianKl 26 March 2017 08:48:51AM 0 points [-]

A group living situation creates a larger field of "trusted adults" per child. Unless all the adults are mindful of these risks, a situation arises where any adult may at any time be put in charge of watching any child or children. This is frankly the textbook definition of what not to do.

Could you point out a textbook that says this isn't what should be done?

Comment author: -necate- 26 March 2017 08:35:07AM 0 points [-]

Of course I can no longer change the framing while the survey is running, but for future surveys, how would you frame it? Do you think it is worth specifying the time frame, other wealth and so on? I am not sure most participants in an online survey would carefully read the instructions and actually try to visualise the specified situation.

What I have seen often is the framing "on top of what you already own you have been given..." which implies that you should make your decision based on your current situation. Do you think this would be superior?

Comment author: TheAncientGeek 26 March 2017 08:32:47AM 0 points [-]

AI researchers, a group of people fairly disjoint from LessWrongians, may have a rigorous and stable definition of UF, but that is not relevant. The point is that writings on MIRI and LessWrong use, and in fact depend on, shifting and ambiguous definitions.

Comment author: -necate- 26 March 2017 08:30:06AM 0 points [-]

Unfortunately I cannot fix the UI problems, because I lack the programming skills and asked a friend to set up the survey for me. He is pretty busy now and I don't want to bother him again, since I already requested many changes before the survey went live. The winner will receive the amount he ended up with, not only the gains.

Comment author: -necate- 26 March 2017 08:25:59AM 0 points [-]

I use a captcha, so if you block scripts this happens.

Comment author: ChristianKl 26 March 2017 07:47:16AM *  1 point [-]

Highlights from the answer:

A laundry machine from the 1980s did maybe 400 or 600 rpm when spinning, while nowadays 1600 rpm is usual. The duration of a washing cycle has probably tripled (1 h vs. 3 h), and there are spinning cycles in between now, whereas there used to be only a final one.

They cite Miele engineering their machines for 5000 washing cycles or 20 years, while e.g. Whirlpool, BSH and Electrolux use 2009 washing cycles and 10 years.

machines from the 70s and 80s needed about 4+ times as much energy (and also much more detergent powder and water) to achieve the same cleaning results as a machine from 2004

They find that 75% of laundry machines break at their first owner, and for those machines the average age went down from 12.5 to 11.6 years from 2004 to 2012/13. The fraction of laundry machines that breaks within the first 5 years has increased a lot (6 -> 15% of those that break, i.e. 4.5 -> 11.25% of all laundry machines).

there are huge differences between the cheaper and the expensive categories: e.g. after 5 1/2 years' equivalent, 25 % of the 350 - 550 EUR machines are broken, in contrast to 8 % of the > 700 EUR machines.


It seems that it's still possible to buy quality washing machines if you don't choose a budget model.

Comment author: stevearc 26 March 2017 04:39:13AM 0 points [-]

I have wanted exactly this ever since I moved to the Bay Area. Definitely interested if this idea starts getting closer to reality!

Comment author: Dagon 26 March 2017 12:04:45AM 3 points [-]

I always hate these kinds of things which say "you're given X, what do you want to do" and don't specify what timeframe, goals, or other resources/investments I already have.

My answers are very different if this is a marginal investment on top of my current allocations vs a sole investment and I have nothing else. Likewise very different if I expect to need the results in 6 months or 20 years, and whether I can change/rebalance in the meantime.

Comment author: Dagon 25 March 2017 11:57:43PM 1 point [-]

Umm, stop waving your hands and start putting some estimates down. Especially when you say things like

Over a long enough timeline, the probability of a copy of any given uploaded mind falling into the power of a sadistic jerk approaches unity.

You show an inability to actually figure out the relative frequencies that would make this true or false. There's lots of ways this could be false - most notably there may be dozens of orders of magnitude more uploaded minds than sadistic jerks, and any nonzero cost of running a mind means the SJs simply can't afford to torture most of them.

Once an uploaded mind has fallen under the power of a sadistic jerk, there is no guarantee that it will ever be 'free', and the quantity of experienced sufferring could be arbitrarily large, due in part to the embarrassingly parallel nature of torture enabled by running multiple copies of a captive mind.

More unstated assumptions (with which I think I disagree). How are you aggregating suffering (or value generally) for minds? Do you think that identical tortures for two copies of a mind is different than torture of one? Why? Do you think that any amount of future potential torture can remove the value of current pleasure? Why?

Even if you try to just quantify "value * experienced-seconds" and simply multiply, it's going to be hard to think anyone is better off NOT being uploaded.

Feel free to make choices for yourself, and even to advocate others to securely erase their information-patterns before it's too late. But without a lot more clear probability estimates and aggregation methodology, I think I'll take my chances and seek to continue living.

Comment author: SquirrelInHell 25 March 2017 11:20:28PM *  0 points [-]

I got from it that for the Simulation Argument to work, it is important what constants we assume in each clause, in relation to each other. So checking each disjunctive claim separately allows one to do a sorta sleight-of-hand, in which one can borrow some unseen "strength" from the other claims - and there actually isn't enough margin to be so lax. Is this correct?

Comment author: Lumifer 25 March 2017 11:14:41PM 1 point [-]

Getting cancer largely depends on having the lesion or not. But the probability of getting cancer depends, not on the thing, but on the probability of having the lesion.

Let me quote your own post where you set up the problem:

90% of the people with the lesion get cancer, and 1% of the people without the lesion get cancer.

This is the probability of getting cancer which depends on the "thing", that is, the lesion. It does NOT depend on the probability of having a lesion.

Comment author: gwillen 25 March 2017 11:13:03PM 0 points [-]

The obvious next step seems to be a fork of this extension that doesn't restrict itself to legal sources. That would make it a hell of a lot more useful.

Comment author: Lumifer 25 March 2017 11:10:42PM 0 points [-]

LOL. I poke my nose in there and what does it tell me?

You could not be verified. Begone, bot!

Heh.

Comment author: DustinWehr 25 March 2017 09:49:25PM 2 points [-]

Seeing that there was some interest in Bostrom's simulation argument before (http://lesswrong.com/lw/hgx/paper_on_the_simulation_argument_and_selective/), I wanted to post a link to a paper I wrote on the subject, together with the following text, but I was only able to post into my (private?) Drafts section. I'm sorry I don't know better about where the appropriate place is for this kind of thing (if it's welcome here at all). The paper: http://www.cs.toronto.edu/~wehr/rd/simulation_args_crit_extended_with_proofs.pdf

This is a very technical paper, which requires some (or a lot) of familiarity with Bostrom/Kulczycki's "patched" Simulation Argument (www.simulation-argument.com/patch.pdf). I'm choosing to publish it here after experiencing Analysis's depressing version of peer review (they rejected a shorter, more-professional version of the paper based on one very positive review, and one negative review that was almost certainly written by Kulczycki or Bostrom themself).

The positive review (of the earlier shorter, more-professional version of the paper) does a better job of summarizing the contribution than I did, so with the permission of the reviewer I'm including an excerpt here:

Bostrom (2003) argued that at least one of the following three claims is true: (1) the fraction of civilizations that reach a 'post-human' stage is approximately zero; (2) the fraction of post-human civilizations interested in running 'significant numbers' of simulations of their own ancestors is approximately zero; (3) the fraction of observers with human-type experiences that are simulated is approximately one.

The informal argument for this three-part disjunction is that, given what we know about the physical limits of computation, a post-human civilization would be so technologically advanced that it could run 'hugely many' simulations of observers very easily, should it choose to do so, so that the falsity of (1) and (2) implies the truth of (3). However, this informal argument falls short of a formal proof.

Bostrom himself saw that his attempt at a formal proof in the (2003) paper was sloppy, and he attempted to put it right in Bostrom and Kulczycki (2011). The take-home message of Sections 1 and 2 of the manuscript under review is that these (2011) reformulations of the argument are still rather sloppy. For example, the author points out (p. 6) that the main text of B&K inaccurately describes the mathematical argument in the appendix: the appendix uses an assumption much more favourable to B&K's desired conclusion than the assumption stated in the main text. Moreover, B&K's use of vague terms such as 'significant number' and 'astronomically large factor' creates a misleading impression. The author shows, amusingly, that the 'significant number' must be almost 1 million times greater than the 'astronomically large factor' for their argument to work (p. 9).

In Section 3, the author provides a new formulation of the simulation argument that is easily the most rigorous I have seen. This formulation deserves to be the reference point for future discussions of the argument's epistemological consequences.

Comment author: SquirrelInHell 25 March 2017 09:42:33PM 1 point [-]

If you take the economic perspective (such as I understand R. Hanson's version to be), the only simulations we will ever run at scale are those that generate profits.

Torture is a money-sink with no economic value other than blackmail.

So torture in simulations will necessarily be marginalized (esp. so if humanity becomes better at pre-commitment to not respond to blackmail).

Comment author: TheAncientGeek 25 March 2017 07:39:47PM 0 points [-]

Citing it is going to be difficult,

To which the glib answer is "that's because it isn't true".

" I'm forced to site philosophers who are opposed to it because they seem to be the only ones willing to actually define it in a concise way. I'll just reference this essay by Dennett to start with.

Dennett gives a concise definition because he has the same simplistic take on the subject as you. What he is not doing is showing that there is actually a group of people who describe themselves as postmodernists and hold those views. The use of the term "postmodernist" is a bad sign: it's a term that works like "infidel" and so on, a label for an outgroup, and an ingroup's views on an outgroup are rarely bedrock reality.

On the other hand, refraining from condemning others when you have skeletons in your own closet is easy.

I'm not sure I understand what you're referring to here.

When we, the ingroup, can't define something, it's OK; when they, the outgroup, can't define something, it shows how bad they are.

For instance, when you tell outsiders that you have wonderful answers to problems X, Y and Z, but you concede to people inside the tent that you actually don't.

That's called lying.

People are quite psychologically capable of having compartmentalised beliefs; that sort of thing is pretty ubiquitous, which is why I was able to find an example from the rationalist community itself. Relativism without contextualisation probably doesn't make much sense, but who is proposing it?

There's no such thing as postmodernism

You know exactly what I mean when I use that term, otherwise there would be no discussion.

As you surely know, I mean there is no group of people who both call themselves postmodernists and hold the views you are attributing to postmodernists.

It seems that you can't even name it without someone saying that's not what it's called, it actually doesn't have a definition, every philosopher who is labeled a postmodernist called it something else, etc.

It's kind of diffuse. But you can talk about scepticism, relativism, etc, if those are the issues.

If I can't define it, there's no point in discussing it. But it doesn't change the fact that the way the mainstream left has absorbed the philosophy has been in the "there is no objective truth" / "all cultures/beliefs/creeds are equal" sense.

There's some terrible epistemology on the left, and on the right, and even in rationalism.

My position is more about doing rationality right than not doing it at all. If you critically apply rationality to itself, you end up with something a lot less self-confident and exclusionary than Bay Area rationalism.

I'd like to hear more about this. By "Bay Area rationalism", I assume you are talking about a specific list of beliefs like the likelihood of intelligence explosion? Or are you talking about the Bayesian methodology in general?

I mean Yudkowsky's approach, which flies under the flag of Bayesianism but doesn't make much use of formal Bayesianism.
