All of topynate's Comments + Replies

The first image is a dead hotlink. It's in the Internet Archive and I've uploaded it to Imgur.

It was considerably easier before the Dunblane massacre (1996).

That very much depends on what you choose to regard as the 'true nature' of the AI. In other words, we're flirting with the reification fallacy by regarding the AI as a whole as 'living on the blockchain', or even being 'driven' by the blockchain. It's important to fix in mind what makes the blockchain important to such an AI and to its autonomy. This, I believe, is always the financial aspect. The on-blockchain process is autonomous precisely because it can directly control resources; it loses autonomy in so far as its control of resources no longer fulfils it... (read more)

That was pretty good, thanks.

topynate100

There's an asymptotic approximation in the OEIS: a(n) ~ n!2^(n(n-1)/2)/(M*p^n), with M and p constants. So log(a(n)) = O(n^2), as opposed to log(2^n) = O(n), log(n!) = O(n log(n)), log(n^n) = O(n log(n)).
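Taking logs of that asymptotic form makes the growth rate explicit (M and p here are just the unspecified constants from the OEIS entry):

```latex
\log a(n) \sim \log n! + \frac{n(n-1)}{2}\log 2 - \log M - n\log p
           = \frac{\log 2}{2}\,n^{2} + O(n \log n)
```

so the 2^(n(n-1)/2) factor alone drives the Θ(n²) behaviour, while log 2^n, log n!, and log n^n all grow at most like n log n.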

I want a training session in Unrestrained Pessimism.

As someone who moved to Israel at the age of 25 with very minimal Hebrew (almost certainly worse than yours), went to an ulpan for five months and then served in the IDF for 18 months while somehow avoiding the 3 month language course I certainly should have been placed in based on my middle-of-ulpan level of fluency:

Ulpan (not army ulpan, real ulpan) is actually pretty good at doing what it's supposed to. I had a great time - it depends on the ulpan but I haven't heard of a single one that would be psychologically damaging. Perhaps your experience with a ... (read more)

Then perhaps my assessment was mistaken! But in any case, I wasn't referring to the broad idea of cryonics patients ending up in deathcubes, but to their becoming open-access in an exploitative society - cf. the Egan short.

My attempt at a reply turned into an essay, which I've posted here.

topynate120

It is likely that you would not wish for your brain-state to be available to all-and-sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

1jow
This argument has made me start seriously reconsidering my generally positive view of cryonics. Does anyone have a convincing refutation? The best I can come up with is that if resuscitation is likely to happen soon, we can predict the values of the society we'll wake up in, especially if recovery becomes possible before more potentially "value disrupting" technologies like uploading and AI are developed. But I don't find this too convincing.
0Ishaan
This answer raises the question of how narrow the scope of the contest is: Do you want to specifically hear arguments from scientific evidence about how cryonics is not going to preserve your consciousness? Or, do you want arguments not to do cryonics in general? Because that can also be accomplished via arguments as to the possible cons of having your consciousness preserved, arguments towards opportunity costs of attempting it (effective altruism), etc. It's a much broader question. (Edit - nevermind, answered in the OP upon more careful reading)

The article is crap but referring to the sample size without considering the baseline success rate is misleading. If, say, the task were to be creating a billion dollar company, and the treated group had even one success, then that would be quite serious evidence for an effect, just because of how rare success is.

0Kawoomba
As is usus, consider such data to be within expected parameters unless mentioned.

I can't find it by search, but haven't you stated that you've written hundreds of KLOC?

2BT_Uytya
Yep, he has.
0Eliezer Yudkowsky
Sounds about right. It wasn't good code, I was young and working alone. Though it's more like the code was strategically stupid than locally poorly written.
topynate210

The front page is, in my opinion, pretty terrible. The centre is filled with static content, the promoted posts are barely deserving of the title, and any dynamic content loads several seconds after the rest of the page, even though the titles of posts could be cached and loaded far more quickly.

[anonymous]220

My personal solution is to treat the URL of http://lesswrong.com/r/all/recentposts as my Less Wrong home page, since it appears to load all articles from Main and Discussion equally for convenient viewing in newest to oldest order without cruft. I can't claim original credit for this URL (which doesn't appear to be prominently linked anywhere that I see), since I'm fairly sure someone else showed this feature to me, but it was long enough ago that I don't remember who.

If I were to be charitable, I could say the front page appears oriented to people w... (read more)

This is analogous to zero determinant strategies in the iterated prisoner's dilemma, posted on LW last year. In the IPD, there are certain ranges of payoffs for which one player can enforce a linear relationship between his payoff and that of his opponent. That relationship may be extortionate, i.e. such that the second player gains most by always cooperating, but less than her opponent.
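The flavour of the result can be checked numerically. Below is a minimal sketch (not from the linked post): it assumes the standard Prisoner's Dilemma payoffs T, R, P, S = 5, 3, 1, 0 and a memory-one extortionate strategy with extortion factor χ = 3 built from the Press-Dyson zero-determinant form, then verifies that the enforced relationship s_X − P = 3(s_Y − P) holds whatever the opponent plays.

```python
import numpy as np

# Memory-one strategies give the probability of cooperating after each of the
# four previous-round outcomes (CC, CD, DC, DD), always from the player's own view.
T, R, P, S = 5, 3, 1, 0                        # assumed standard PD payoffs
extort = np.array([11/13, 1/2, 7/26, 0.0])     # extortionate ZD strategy, chi = 3

def long_run_payoffs(p, q, steps=20000):
    """Long-run average payoffs for X (playing p) vs Y (playing q), via the
    stationary distribution of the Markov chain over joint outcomes."""
    q_sw = q[[0, 2, 1, 3]]                     # Y sees CD and DC swapped relative to X
    M = np.zeros((4, 4))
    for i in range(4):
        px, py = p[i], q_sw[i]
        M[i] = [px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
    v = np.full(4, 0.25)
    for _ in range(steps):                     # power iteration to the stationary vector
        v = v @ M
    return v @ np.array([R, S, T, P]), v @ np.array([R, T, S, P])

for name, q in [("always cooperate",  np.array([1.0, 1.0, 1.0, 1.0])),
                ("always defect",     np.array([0.0, 0.0, 0.0, 0.0])),
                ("random memory-one", np.array([0.8, 0.3, 0.6, 0.1]))]:
    sx, sy = long_run_payoffs(extort, q)
    # The ZD strategy pins long-run scores to the line  sx - P = 3 * (sy - P).
    print(f"{name:17s} sx={sx:.3f} sy={sy:.3f}  sx-P={sx - P:.3f}  3*(sy-P)={3 * (sy - P):.3f}")
```

Against an opponent who always cooperates, for instance, the two sides of the relation both come out at about 2.73, so the extorter scores roughly three times as far above the punishment payoff as the cooperator does.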

2Eliezer Yudkowsky
Zero determinant strategies are not new. I am asking if the solution is new. Edited post to clarify.

Yet another article on the terribleness of schools as they exist today. It strikes me that Methods of Rationality is in large part a fantasy of good education. So is the Harry Potter/Sherlock Holmes crossover I just started reading. Alicorn's Radiance is a fair fit to the pattern as well, in that it depicts rapid development of a young character by incredible new experiences. So what solutions are coming out of the rational community? What concrete criteria would we like to see satisfied? Can education be 'solved' in a way that will sell outside this community?

0ChristianKl
I think that's a bad question. I don't think that every school should follow the same criteria. It's perfectly okay if different schools teach different things. http://www.kipp.org/ is an educational project financed by Bill Gates which tries to use a lot of testing. On the other hand you have unschooling and environments like Sudbury Valley School. I don't think that every child has to learn the same way. Both ways are viable. When it comes to the more narrow rationality community, I think there's more thought about building solutions that educate adults than about educating children. If however something like Anki helps adults learn, there's no real reason why the same idea can't help children as well. Similar things go for the Credence game and PredictionBook. If those tools can help adults to become more calibrated, they probably can also help kids, even if some modifications might be needed. Without having the money to start a completely new school, I think it's good to focus on building tools that build a particular skill.
9bramflakes
The characters in those fics are also vastly more intelligent and conscientious than average. True, current school environments are stifling for gifted kids, but then they are also a very small minority. Self-directed learning is counterproductive for the not-so-bright, and attempts to reform schools to encourage "creativity" and away from the nasty test-based system tend to just be smoke-screens for any number of political and ideological goals. Like the drunk man and the lamppost, statistics and science are used for support rather than illumination, and the kids are the ones who suffer. There are massive structural problems wracking the educational system but I wouldn't take the provincial perspectives of HPMoR or related fiction as good advice for the changes with the biggest marginal benefit.

I doubt he can Transfigure antimatter. If he can, the containment will be very hard to get right, and he would absolutely have to get it right. How do you even stop it blowing up your wand, if you have to contact the material you're Transfiguring?

Maybe Tasers! They'd work against some shields, are quite tricky to make, and if you want lots of them they're easier to buy. Other things: encrypted radios, Kevlar armour (to avoid Finite Incantatem). Most things that can be bought for 5K could have been bought in Britain in the early 90s, apart from that sort of paramilitary gear. Guns are unlikely because the twins would have heard of them.

0TobyBartels
I don't think that Tasers were really a thing until 1994, which saw the first version that didn't use gunpowder as a propellant (and hence was not legally a firearm). Harry could still have heard of them by good luck, of course.
0DanArmak
Transfigure the containment device first. Then find a way to transfigure the antimatter inside it. To solve the wand contact problem, transfigure the empty container into a full container. Then his wand is in contact with the container, but the container doesn't actually change. Granted, it's extremely dangerous, especially when practicing.
CAE_Jones120

Guns are unlikely because the twins would have heard of them.

Consensus on /r/HPMOR was that Harry would have specified a type of gun and its ammo, since if he just said "guns" the twins would probably have brought muskets.

6ikrase
The twins just scanned the list, and they probably wouldn't have recognized, say, '9mm automatic pistol', even if they know what guns are.

If he can make a model rocket, he can make a uranium gun design. It's one slightly sub-critical mass of uranium with a suitable hole for the second piece, which is shaped like a bullet and fired at it using a single unsynchronised electronic trigger down a barrel long enough to get up a decent speed. Edit: And then he or a friendly 7th year casts a charm of flawless function on it.

5ikrase
That was a lot more than a model rocket... probably weighed at least 5kg. Also, the fact that either he OR Quirrell could make a working large rocket engine without knowing the exact composition of propellant, precise geometry of nozzle, etc. indicates that Transfiguration can work at a really high level of abstraction. He probably would have no trouble at all transfiguring a nuclear weapon with a mechanical timer trigger.

Was I alone in expecting something on recursive self improvement?

3aelephant
I was thinking "rapid sequence intubation". I've noticed that in published works, the 1st instance of a term is usually spelled out / clarified. So in the title, you could use "repetitive strain injury (RSI)" & then use RSI for every instance after that.
-1NancyLebovitz
I thought the title might be misread as being about the geometry of workspace ergonomics.
0TsviBT
Nay.
0Kaj_Sotala
You weren't.

Perhaps gewunnen, meaning conquered, and not gewunen. I don't think you can use present subjunctive after béo anyway. Here béo is almost surely the 3rd person singular subjunctive of béon, the verb that we know as to be. If gewunnen, then we can interpret it as being the past participle, which makes a lot more sense (and fits the provided translation). The past participle of gewunian is gewunod, which clearly isn't the word used here.

Edit: translator's automatic conjugation is broken, sorry for copy-paste.

3Thrasymachus77
Good catch, I wasn't even thinking if there were a different, related verb that might be used there, nor of the particular grammar. That's just where that form gewunen showed up in the translator. If the verb is winnan or gewinnan, the past participle would be gewunnen. In either case, the sense is conquering to obtain, or alternatively resisting, struggling against, enduring or suffering. And there are less ambiguous words to use if the sense was that Death would be defeated and eliminated, i.e. destroyed, or even mastered or overcome. In other words, it still looks ambiguous enough to me that it could mean that "...three shall be their devices by which Death shall be tolerated."
topynate180

Aha! The prophecy we just heard in chapter 96 is Old English. However, by the 1200s, when, according to canon, the Peverell brothers were born, we're well into Middle English (which Harry might well understand on first hearing). I was beginning to wonder if there was not some old wizard or witch listening, for whom that prophecy was intended.

There's still the problem of why brothers with an Anglo-Norman surname would have Old English as a mother tongue... well, that could happen rather easily with a Norman father and English mother, I suppose.

And the coinc... (read more)

0Aureateflux
The name isn't really an issue for a number of reasons. It could have been changed by the family itself to take advantage of political and social conditions, and storytellers also would have reason to update the name to appeal to their audiences. In fact, considering the centuries-long game of telephone that would be at play, it's more surprising that the modern name is as close as it is to the name that appears in the prophecy itself. This makes it fairly likely that the whole story had been lost and was rediscovered relatively recently and then gallicized.

If you cock up and define a terminal value that refers to a mutable epistemic state, all bets are off. Like Asimov's robots on Solaria, who act in accordance with the First Law, but have 'human' redefined not to include non-Solarians. Oops. Trouble is that in order to evaluate how you're doing, there has to be some coupling between values and knowledge, so you must prove the correctness of that coupling. But what is correct? Usually not too hard to define for the toy models we're used to working with, damned hard as a general problem.

I have a comment waiting in moderation on the isteve post Konkvistador mentioned, the gist of which is that the American ban on the use of genetic data by health insurers will cause increasing adverse selection as these services get better and cheaper, and that regulatory restrictions on consumer access to that data should be seen in that light. [Edit: it was actually on the follow-up.]

A pertinent question is what problem a government or business (not including a general AI startup) may wish to solve with a general AI that is not more easily solved by developing a narrow AI. 'Easy' here factors in the risk of failure, which will at least be perceived as very high for a general AI project. Governments and businesses may fund basic research into general AI as part of a strategy to exploit high-risk high-reward opportunities, but are unlikely to do it in-house.

One could also try and figure out some prerequisites for a general AI, and see wh... (read more)

1NancyLebovitz
How much of the world do you need to understand to make reliably good investments? Do you want your investment computer to be smart enough to say "there's a rather non-obvious huge bubble in the derivatives based on real estate"? Smart enough to convince you when you don't want to believe it?

Consider those charities that expect their mission to take years rather than months. These charities will rationally want to spread their spending out over time. Particularly for charities with large endowments, they will attempt to use the interest on their money rather than depleting the principal, although if they expect to receive more donations over time they can be more liberal.

This means that a single donation slightly increases the rate at which such a charity does good, rather than enabling it to do things which it could not otherwise do. So the s... (read more)

I don't think you should write the post. Reason: negative externalities.

It looks like wezm has followed your suggestion, with extra hackishness - he added a new global variable.

Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.

topynate170

I just gave myself a deadline to write a patch for that problem.

Edit: Done!

topynate210

Task: Write a patch for the Less Wrong codebase that hides deleted/banned posts from search engines.

Deadline: Sunday, 30 January.

6topynate
Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.
3lukeprog
Yes please!

The thrust of your argument is that an agent that uses causal decision theory will defect in a one-shot Prisoner's Dilemma.

You specify CDT when you say that

No matter what Agent_02 does, actually implementing Action_X would bear no additional value

because this implies Agent_01 looks at the causal effects of do(Action_X) and decides what to do based solely on them. Prisoner's Dilemma because Action_X corresponds to Cooperate, and not(Action_X) to Defect, with an implied Action_Y that Agent_02 could perform that is of positive utility to Agent_01 (hence,... (read more)

And how do you propose to stop them? Put a negative term in their reward functions?

This is a TDT-flavoured problem, I think. The process that our TDT-using FAI uses to decide what to do with an alien civilization it discovers is correlated with the process that a hypothetical TDT-using alien-Friendly AI would use on discovering our civilization. The outcome in both cases ought to be something a lot better than subjecting us/them to a fate worse than death.

topynate250

If that's the case, then when a page is hidden the metadata should be updated to remove it from the search indexes. If you search 'pandora site:lesswrong.com' on Google, all the pages are still there, and can be followed back to LW. That is to say, the spammers are still benefiting from every piece of spam they've ever posted here.

0wedrifid
I just did the search and noticed that both HP:MoR and RationalWiki://lesswrong make it onto the first page. Neither of them include the word 'pandora'. That's impressive!
2wedrifid
Emphasising parent. If spammers don't get any benefit from including this site in their bots then they are less likely to take the effort to include it - and the effort of handling captchas and configuring to local conditions.

All of those phenomena are caused by human action! Once you know humans exist, the existence of macroeconomics is causally screened off from any other agentic processes. All of those phenomena, collectively, aren't any more evidence for the existence of an intelligent cause of the universe than the existence of humans: the existence of such a cause and the existence of macroeconomics are conditionally independent events, given the existence of humans.

-1Will_Newsome
Right, I was responding to Dreaded_Anomaly's argument that interesting things tend not to be caused by agenty things, which was intended as a counterargument to my observation that interesting things tend to be caused by agenty things. The exchange was unrelated to the argument about the relatively (ab)normal interestingness of this universe. I think that is probably the reason for the downvotes on my comment, since without that misinterpretation it seems overwhelmingly correct. Edit: Actually, I misinterpreted the point of Dreaded_Anomaly's argument, see above.
topynate-10

If you don't mind my asking, how did it come to be that you were raised to believe that convincing arguments against theism existed without discovering what they are? That sounds like a distorted reflection of a notion I had in my own childhood, when I thought that there existed a theological explanation for differences between the Bible and science but that I couldn't learn them yet; but to my recollection I was never actually told that, I just worked it out from the other things I knew.

0Will_Newsome
I knew some convincing arguments against theism, but I suppose what I explicitly did not know of were counterarguments to the theistic counterarguments against those atheistic convincing arguments, because I was quick to dismiss the theistic counterarguments in the first place.

It's roughly as many words as are spoken worldwide in 2.5 seconds, assuming 7450 words per person per day. It's very probably less than the number of English words spoken in a minute. It's also about the number of words you can expect to speak in 550 years. That means there might be people alive who've spoken that many words, given the variance of word-production counts.

So, a near inconceivable quantity for one person, but a minute fraction of total human communication.
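As a rough sanity check of those equivalences (the 7450 words per person per day figure is from the comment above; the roughly seven billion speakers is my own assumption):

```python
# Back-of-envelope check of the equivalences above (rounded, order-of-magnitude only).
WORDS_PER_PERSON_PER_DAY = 7450          # figure assumed in the comment
WORLD_POPULATION = 7e9                   # assumption: ~7 billion speakers
SECONDS_PER_DAY = 86_400

worldwide_per_second = WORLD_POPULATION * WORDS_PER_PERSON_PER_DAY / SECONDS_PER_DAY
in_2_5_seconds = 2.5 * worldwide_per_second                        # words spoken worldwide in 2.5 s
one_person_550_years = WORDS_PER_PERSON_PER_DAY * 365.25 * 550     # words one person speaks in 550 years

print(f"worldwide, 2.5 s:      {in_2_5_seconds:.2e} words")
print(f"one person, 550 years: {one_person_550_years:.2e} words")
```

Both come out near 1.5 billion words, so the two equivalences are at least mutually consistent.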

//Not an economist//

The minimum wage creates a class of people who it isn't worth hiring (their productivity is less than their cost of employment). If you have a device which raises the productivity of these guys, they can enter the workforce at minimum wage.

Additionally, there may be zero marginal product workers - workers whose cost of employment equals the marginal increase in productivity that results from hiring them. This could happen in a contracting job market if the fear of losing employment causes other workers to increase their productivity eno... (read more)

I thought we were talking about how to use necessary requirements without risking a suit, not how to conceal racial preferences by using cleverly chosen proxy requirements. But it looks like you can't use job application degree requirements without showing a business need either.

8Vladimir_M
topynate: The relevant landmark case in U.S. law is the 1971 Supreme Court decision in Griggs v. Duke Power Co. The court ruled that not just testing of prospective employees, but also academic degree requirements that have disparate impact across protected groups are illegal unless they are "demonstrably a reasonable measure of job performance." Now of course, "a reasonable measure of job performance" is a vague criterion, which depends on controversial facts as well as subjective opinion. To take only the most notable example, these people would probably say that IQ tests are a reasonable measure of performance for a great variety of jobs, but the present legal precedent disagrees. This situation has given rise to endless reams of case law and a legal minefield that takes experts to navigate. At the end, as might be expected, what sorts of tests and academic requirements are permitted to different institutions in practice depends on arbitrary custom and the public perception of their status. The de facto rules are only partly codified formally. Thus, to take again the most notable example, the army and the universities are allowed to use what are IQ tests in all but name, which is an absolute taboo for almost any other institution.
2gwern
I wasn't. I was talking about how the obvious loopholes are already closed or have been heavily restricted (even at the cost of false positives), and hence how Quirrel's comments are naive and uninformed. Yes, that doesn't surprise me in the least.

You can put degree requirements on the job advertisement, which should act as a filter on applications, something that can't be caught by the 80% rule.

(Of course, universities tend to use racial criteria for admission in the US, something which, ironically, can be an incentive for companies to discriminate based on race even amongst applicants with CS degrees.)

2gwern
The 80% rule is only part of it. Again, racist requirements is an obvious loophole you should expect to have been addressed; you can only get away with a little covert discrimination if any. From http://en.wikipedia.org/wiki/Disparate_impact#Unintentional_discrimination : If you add unnecessary requirements as a stealth filter, how do you show the requirements are job-related?

it says nothing about the properties that really define qualia, like the "redness" that we've been talking about in another thread

So we can set up state machines that behave like people talking about qualia the way you do, and which do so because they have the same internal causal structure as people. Yet that causal structure doesn't have anything to do with the referent of 'redness'. It looks like your obvious premise that redness isn't reducible implies epiphenomenalism. Which is absurd, obviously.

Edit: Wow, you (nearly) bite the bullet in this co... (read more)

0Mitchell_Porter
No, it just means that redness plays a causal role in us, which would be played by something else in a simulation of us. There's nothing paradoxical about the idea of an unconscious simulation of consciousness. It might be an ominous or a disconcerting idea, but there's no contradiction. See what I just said to William Sawin about fundamental versus derived causality. These are derived causal relations; really, they are regularities which follow indirectly from large numbers of genuine causal relations. My eccentricity lies in proposing a model where mental states can be fundamental causes and not just derived causes, because the conscious mind is a single fundamental entity - a complex one, that in current language we might call an entangled quantum system in an algebraically very distinctive state, but still a single entity, in a way that a pile of unentangled atoms would not be. Being a single entity means that it can enter directly into whatever fundamental causal relations are responsible for physical dynamics. Being that entity, from the inside, means having the sensations, thoughts, and desires that you do have; described mathematically, that will mean that you are an entity in a particular complicated, formally specified state; and physically, the immediate interactions of that entity would be with neighboring parts of the brain. These interactions cause the qualia, and they convey the "will". That may sound strange, but even if you believe in a mind that is material but non-fundamental, it still has to work like that or else it is causally irrelevant. So when you judge the idea, remember to check whether you're rejecting it for weirdness that your own beliefs already implicitly carry.

Have you heard of the Kullback-Leibler divergence? One way of thinking about it is that it quantifies the amount you learn about one random variable when you learn something about another random variable. I.e., if your variables are X and Y, then D(p(X|Y=y),p(X)) is the information gain about X when you learn Y=y. It isn't a metric, as it isn't symmetric: D(p(X|Y=y),p(X)) != D(p(X),p(X|Y=y)). Nevertheless, with two people with different probability distributions on some underlying space, it's a good way of representing how much more one knows than the othe... (read more)
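A tiny numeric sketch of that asymmetry, with toy distributions chosen arbitrarily for illustration:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2(p / q)))

# Prior over X, and the posterior after observing some Y = y (made-up numbers).
prior     = np.array([0.25, 0.25, 0.25, 0.25])
posterior = np.array([0.70, 0.10, 0.10, 0.10])

print(kl(posterior, prior))   # information gained about X from learning Y = y
print(kl(prior, posterior))   # a different number: D is not symmetric
```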

0InquilineKea
Ah okay, thanks for the reply. Yes, I've heard about the KL divergence, although I haven't really worked with it before. "I'm less familiar with what tools are available to formalize differences in what's valued than I am with tools to formalize differences in what's known." Oh, good points. LessWrong is more concerned with what's known than what's valued. Although what's valued does matter, since what's valued is of relevance when we want to operationalize utility.

How about: Allirea's been shielding Bella both from being seen directly and from Alice's power. Addy found them - Bella was in the vicinity - made a deal with them, and then they came back together. Allirea may still be around and using her power, or she may have left. Possibly Addy's taking of Siobhan's power enabled her to take Allirea into account, somehow, which made it easier for her to find them.

By the way Alicorn, I've been thoroughly enjoying your two stories. Your portrayal of Allirea's power is one of my favourite parts.

3Giriath
I agree! She's a very quirky woman with a very interesting power that I very much enjoy, even if she does eat people and see herself as part of a 'master' race.

It's really not that subtle a trick. If it sounds unnatural it may be more a consequence of a lack of practice in persuasive writing generally (in which case, bravo for practising, icebrand!) than of special brain chemistry that irreparably cripples and nerdifies you if you try anything socially 'fancy'.

3icebrand
I hadn't thought of it specifically in terms of persuasive writing. But that's essentially what I want to do; persuade cryonics advocates to take more action, and persuade fence-sitters to become advocates. Perhaps reading some formal persuasive writing literature would be instructive to getting a more natural feel. But as you say it is likely to be more a matter of practice. My normal style is more explanatory than persuasive.
0[anonymous]
I guess my trouble is I don't have much practice with this particular kind of writing where I'm being selective about relating just the details (and context) that will get the result I want. I'm normally very good at explaining exactly what's on my mind, i.e. communicating when the result I'm shooting for is solely conveying my point, and perhaps winning the argument. In this case the desired result is to define the argument "properly" to begin with. There is certainly a part of my mind that keeps whispering "you'll never make it in Slytherin..." whenever I try stuff like this. I'm trying to ignore it and see what happens. If it's really just a practice issue it should clear up eventually.
topynate120

Actions which increase utility but do not maximise it aren't "pointless". If you have two charities to choose from, £100 to spend, and you get a constant 2 utilons/£ for charity A and 1 utilon/£ for charity B, you still get a utilon for each pound you donate to B, even if to get 200 utilons you should donate £100 to A. It's just the wrong word to apply to the action, even assuming that someone who says he's donated a small amount is also saying that he's donated a small proportion of his charitable budget (which it turns out wasn't true in this case).

topynate100

The idea is that the optimal method of donation is to donate as much as possible to one charity. Splitting your donations between charities is less effective, but still benefits each. They actually have a whole page about how valuable small donations are, so I doubt they'd hold a grudge against you for making one.

-1David_Gerard
Yes, I'm sure the charity has such a page. I am intimately familiar with how splitting donations into fine slivers is very much in the interests of all charities except the very largest; I was speaking of putative benefit to the donors.

If Archimedes and the American happen to extrapolate to the same volition, why should that be because the American has values that are a progression from those of Archimedes? It's logically possible that both are about the same distance from their shared extrapolated volition, but they share one because they are both human. Archimedes could even have values that are closer than the American's.

1Nornagest
The extrapolated volition of a cultural group might, given certain assumptions, resolve to a single set of values. If that were the case, you could express changes in expressed volition in that group over time as either progress toward or regression from that EV, and for various reasons I'd expect them to usually favor the "progress" option. I suspect that's what cousin_it is getting at. I'm not convinced that we gain anything by expressing that in simple terms of progress, though. The expressed volition of modern Western society is probably closer to the CEV of humanity than the cultural norms of 500 BC were, but cultural progression along the metric described above hasn't been monotonic and isn't driven by reference to CEV; it's the result of a stupendous hodgepodge of philosophical speculation, scale effects, technological changes, and random Brownian motion. That might resolve to something resembling capital-P progress, especially since the Enlightenment, but it's not predictively equivalent to what people usually mean by the term. And it certainly can't be expected to apply over all cultural traditions. The check on CEV described in the OP, however, should.
topynate130

I do not consider myself a rationalist, because I doubt my own rationality.

This site isn't called Always Right, you know.

topynate100

That quote completely ignores the risk of worsening the situation each 'solution' might carry. The venture-capital method only works because of limited liability.

Assuming a roughly 50-50 split, the inverse square-root rule is right. Now my issue is why you incorporate that factor in scenario 2, but not scenario 3. I honestly thought I was just rephrasing the problem, but you seem to see it differently? I should clarify that this isn't you unconditionally receiving a speck if you're willing to, but only if half the remainder are also so willing.

The point of voting, for me, is not an attempt to induce scope insensitivity by personalizing the decision, but to incorporate the preferences of the vast majority (3^^^^3 out... (read more)
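For what it's worth, the inverse square-root rule can be checked directly: with a 50-50 split, the chance that one extra vote is decisive is the chance of an exact tie among the other voters, C(n, n/2)/2^n ≈ √(2/(πn)). A small sketch (the even-n simplification is mine):

```python
import math

def p_decisive(n_others):
    """Chance of an exact 50-50 tie among n_others independent voters --
    the case in which one additional vote decides the outcome."""
    n = n_others if n_others % 2 == 0 else n_others - 1   # need an even number to tie
    return math.comb(n, n // 2) / 2**n

for n in (10, 1_000, 100_000):
    approx = math.sqrt(2 / (math.pi * n))                 # the inverse square-root rule
    print(f"n={n:>7}  exact={p_decisive(n):.3e}  sqrt(2/(pi*n))={approx:.3e}")
```

The agreement is already close at n = 10 and excellent by n = 1,000; for anything like 3^^^3 voters the probability of being decisive is effectively the 1/√n term alone.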

0benelliott
There are actually two differences between 2 and 3. The first is that in 2 my chance of affecting the torture is negligible, whereas in 3 it is quite high. The second difference is that in 2 I have the power to save huge numbers of others from dust specks, and it is this difference which is important to me, since when I have that power it dwarfs the other factors so much as to be the only deciding factor in my decision. In your 'rephrasing' of it you conveniently ignore the fact that I can still do this, so I assumed I no longer could, which made the two scenarios very different. I think also, as a general principle, any argument of the type you are formulating which does not pay attention to the specific utilities of torture and dust-specks, instead just playing around with who makes the decision, can also be used to justify killing 3^^^^3 people to save one person from being killed in a slightly more painful manner.

Compare two scenarios: in the first, the vote is on whether every one of the 3^^^3 people is dust-specked or not. In the second, only those who vote in favour are dust-specked, and then only if there's a majority. But these are kind of the same scenario: what's at stake in the second scenario is at least half of 3^^^3 dust-specks, which is about the same as 3^^^3 dust-specks. So the question "would you vote in favour of 3^^^3 people, including yourself, being dust-specked?" is the same as "would you be willing to pay one dust-speck in your eye to save a person from 50 years of torture, conditional on about 3^^^3 other people also being willing?"

-1benelliott
Let me try and get this straight, you are presenting me with a number of moral dilemmas and asking me what I would do in them. 1) Me and 3^^^^3 - 1 other people all vote on whether we get dust specks in the eye or some other person gets tortured. I vote for torture. It is astonishingly unlikely that my vote will decide, but if it doesn't then it doesn't matter what I vote, so the decision is just the same as if it was all up to me. 2) Me and 3^^^^3 - 1 other people all vote on whether everyone who voted for this option gets a dust speck in the eye or some other person gets tortured. This is a different dilemma, since I have to weigh up three things instead of two, the chance that my vote will save about 3^^^^3 people from being dust-specked if I vote for torture, the chance that my vote will save one person from being tortured if I vote for dust specks and the (much higher) chance that my vote will save me and only me from being dust-specked if I vote for torture. I remember reading somewhere that the chance of my vote being decisive in such a situation is roughly proportional to the square root of the number of people (please correct me if this is wrong). Assuming this is the case then I still vote for torture, since the term for saving everyone else from dust specks still dwarfs the other two. 3) I have to choose whether I will receive a dust speck or whether someone else will be tortured, but my decision doesn't matter unless at least half of 3^^^^3 - 1 other people would be willing to choose the dust speck. Once again the dilemma has changed, this time I have lost my ability to save other people from dust specks and the probability of me successfully saving someone from torture has massively increased. I can safely ignore the case where the majority of others choose torture, since my decision doesn't matter then. Given that the others choose dust specks, I am not so selfish as to save myself from a dust speck rather than someone else from torture. You try