Comment author: RichardKennaway 12 October 2015 06:41:46AM 1 point [-]

> It is not obvious at all to me that the closeness (emotionally or physically) to someone changes the weight of their suffering.

Where they are does not change their suffering, but perhaps it changes the weight of your obligation to do something about it?

Comment author: Regex 13 October 2015 11:31:44PM 0 points [-]

In social situations, perhaps. But that's only because you can't physically act, or because it is more efficient economically and logistically for everyone to manage their own sphere of influence. If you have two buttons in front of you and must press one, this changes nothing.

Comment author: cousin_it 01 April 2009 10:42:16AM *  4 points [-]

Maybe relevant to this post: the googolplex dust specks issue seems to be settled by nonlinearity/proximity.

Other people's suffering is non-additive because we value different people differently. The pain of a relative matters more to me than the pain of a stranger. A googolplex people can't all be important to me because I don't have enough neural circuitry for that. (Monkeysphere is about 150 people.) This means each subsequent person-with-dust-speck means less to me than the previous one, because they're further from me. The infinite sum may converge to a finite value that I feel is smaller than 50 years of torture.
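The convergence claim can be made concrete with a minimal sketch, using entirely invented numbers for the disutilities and the proximity discount:

```python
# Illustrative sketch of the convergence argument above, with invented numbers:
# if the k-th additional dust-speck sufferer is discounted geometrically by
# proximity, the total disutility stays bounded no matter how many specks.

SPECK = 1.0        # assumed disutility of one speck to a fully weighted person
DECAY = 0.999      # assumed per-person proximity discount factor
TORTURE = 10_000.0 # assumed disutility of 50 years of torture (invented figure)

# Geometric series: sum over k >= 0 of SPECK * DECAY**k = SPECK / (1 - DECAY),
# an upper bound even for a googolplex of specks.
bound = SPECK / (1 - DECAY)
print(round(bound, 6))  # -> 1000.0, finite and below the torture figure
```

With these weights the sum never exceeds 1000 units, so any fixed torture disutility above that bound wins the comparison; the argument hinges entirely on the discount factor being strictly below 1.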

It seems that to shut up and multiply, an altruist/rationalist needs to accept a non-obvious axiom that each person's joy or suffering carries equal weight regardless of proximity to the altruist. I for one refuse to accept this axiom because it's immoral to me; think about it.

Comment author: Regex 11 October 2015 06:17:30PM 0 points [-]

> It seems that to shut up and multiply, an altruist/rationalist needs to accept a non-obvious axiom that each person's joy or suffering carries equal weight regardless of proximity to the altruist. I for one refuse to accept this axiom because it's immoral to me; think about it.

I have the exact opposite intuition. It is not at all obvious to me that closeness (emotional or physical) to someone changes the weight of their suffering. If someone is going to get their fingers slammed in a door, it matters not whether I know them personally or am a thousand light years distant.

Admittedly, I may have a slightly more visceral reaction if someone I know gets in a car wreck than when looking at the statistics, but I disagree that this makes it Right for me to prevent the car wreck of someone close, only to thereby cause another and, in addition, lead someone to stub their toe.

Comment author: Dorikka 30 March 2011 05:10:38PM 4 points [-]

I would have made this into a longer post, but it works much better appended to this one:

It's clear that you can't just make willpower appear with a snap of your fingers, so I consider fuzzies to be utilons for many human utility functions. However, utilitarians have it even better -- if they get fuzzies by giving fuzzies to someone else, they get to count all of the fuzzies generated as utilons. I urge people focused on being effective utilitarians to keep this in mind if they feel like they're running low on fuzzies.

Comment author: Regex 11 October 2015 06:05:04PM 0 points [-]

I think you meant they should count all the utilons generated as fuzzies?

Comment author: Regex 11 October 2015 03:49:10AM 1 point [-]

Generate a fantasy world with certain rules of magic. The goal is to figure out precisely what those rules are, all the while working toward some end goal. Perhaps this could be run by a handful of game masters who know exactly what the rules are supposed to be, or the magics could be input into a computer program so no one knows for sure. Players would promise to keep the rules secret once figured out. This would encourage proper hypothesis testing and thoughtful use of evidence, especially if resources are limited. I suspect this wouldn't be just a one-off, but a repeatable exercise if one had multiple worlds or the ability to arbitrarily generate the system. Perhaps one could engage in duels using the uncovered magics, and could encourage creativity by applying them in different ways. I'd imagine one could use systems much like those in role-playing games, but qualitatively based, perhaps?

There was a card game based around hypotheses in a class I took once which I've improved upon somewhat here: https://lordregexrationalist.wordpress.com/2015/09/29/rationalist-belief-card-game/

Comment author: Psy-Kosh 13 March 2009 03:59:25AM *  10 points [-]

While developing a rationality metric is obviously crucial, I have this nagging suspicion that what it may take is simply a bunch of committed wanna-be rationalists to just get together and, well, experiment, teach, argue, etc. with each other in person regularly, try to foster explicit social rules that support rather than inhibit rationality, and so on.

From there, at least use a fuzzy "this seems to work / not work" type metric, even if it's rather subjective and imprecise, as a STARTING POINT, until one gets a better sense of exactly what to look for and can measure it more precisely and explicitly.

But, my main point is my suspicion that "do it, even if you're not entirely sure yet what you're doing; just do it anyway and try to figure it out on the fly" may actually be what it takes to get started. If nothing else, it'll produce a nice case study in failure that at least one can look at and say "okay, let's actually try to work out what we did wrong here".

EDIT: hrm... maybe I ought to reconsider my position. Will leave this up, at least for now, but with the added note that now I'm starting to suspect myself of basically just trying to "solve the problem without having to, well, actually solve the problem".

Comment author: Regex 11 October 2015 02:32:35AM *  0 points [-]

I've been predicted! This almost exactly describes what I've been up to recently... (Will make a post for it later. Still far too rough to show off. Anyone encountering this comment in 2016 or later should see a link in my profile. Otherwise, message me.)

Edit: Still very rough, and I ended up going in a slightly different direction than I'd hoped. Strange looking at how much my thoughts of it changed in a mere two months. Here it is

In response to Sensual Experience
Comment author: Regex 10 October 2015 05:12:28PM 0 points [-]

> <MRAmes> I want a sensory modality for regular expressions.

I approve.

In response to comment by Regex on A Priori
Comment author: hairyfigment 24 September 2015 02:35:50AM 2 points [-]

Near as I can tell, you're describing the same conjunction rule from your previous comment!

This conjunction rule says that a claim like 'The laws of physics always hold,' has less probability than, 'The laws of physics hold up until September 25, 2015 (whether or not they continue to hold after).'

Solomonoff Induction is an attempt to find a rule that says, 'OK, but the first claim accounts for nearly all of the probability assigned to the second claim.'
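A toy numerical version of that idea (my numbers, not Solomonoff's actual measure): weight each hypothesis by two to the minus its description length, and the short "always" program carries nearly all of the weaker claim's prior mass:

```python
# Hypothetical complexity prior: mass 2**(-description_length) per hypothesis.
# 'The laws always hold' is a short program; each 'laws hold until 2015-09-25,
# then change in manner X' variant needs extra bits for the date and the new
# behaviour, so each gets far less mass. The lengths below are invented.

always = 2 ** -10                           # short: laws simply continue
variants = [2 ** -40 for _ in range(1000)]  # longer change-the-laws programs

holds_until_2015 = always + sum(variants)   # total mass of the weaker claim
share = always / holds_until_2015
print(share > 0.999)  # -> True: the simple hypothesis dominates
```

The weaker claim is strictly more probable, but almost all of its probability is contributed by the simple hypothesis it contains.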

In response to comment by hairyfigment on A Priori
Comment author: Regex 24 September 2015 05:08:04AM 0 points [-]

Hrm, yeah. I think I need more tools and experience to be able to think about this properly.

In response to comment by Regex on A Priori
Comment author: gjm 23 September 2015 04:14:58PM 1 point [-]

As Richard Kennaway has said, this only deals with cases where one hypothesis is a conjunction including another (e.g., "There is a god" and "There is a god called Bill"), but most cases in which we actually want to apply OR aren't like that; they're more like "geocentric astronomy with circular orbits plus epicycles" and "heliocentric astronomy with elliptical orbits".

In response to comment by gjm on A Priori
Comment author: Regex 23 September 2015 10:54:37PM 0 points [-]

Ah. Yeah that does clear things up a bit. What would a solution look like, then? To show the complexity of an idea impacts its probability... but unless you use the historic argument of 'it's looked that way in the past for stuff like this' I don't see any way of even approaching that.

What if we imagine the space of hypotheses? A simpler hypothesis would be a larger circle because there may be more specific rules that act in accordance with it. 'The strength of a hypothesis is not what it can explain, but what it fails to account for', so a complicated prediction should occupy a very tiny region and therefore have a tiny probability.

Or... is that just another version of Solomonoff Induction, and so the same thing?

In response to A Priori
Comment author: idlewire 17 July 2009 03:45:23PM 3 points [-]

Could you not argue Occam's Razor from the conjunction fallacy? The more components that are required to be true, the less likely it is that they are all simultaneously true. Propositions with fewer components are therefore more likely, or does that not follow?

In response to comment by idlewire on A Priori
Comment author: Regex 23 September 2015 07:30:18AM *  1 point [-]

I was wondering this myself. I roughly knew of Solomonoff Induction as related... but apparently that is equivalent! The next thing my memory turned up was "Minimum Description Length" principle, which as it turns out... is also a version of Occam's Razor. Funny how that works.

If we look at the original question again... "If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?" If I understand the conjunction fallacy correctly, it is strictly true that adding more propositions cannot increase the probability. That is to say, P(A & B) <= P(B) and P(A & B) <= P(A).

So the argument could be made that B might have probability one, in which case the conjunction would be an equally probable hypothesis with its addition. But if you start with A, then including any B with probability less than one will strictly lower the probability. Thus, as far as I can tell, Occam's Razor holds except where the additional propositions have probability one.

...But if they have probability one, wouldn't they have to be axiomatically identical to just having proposition A? Or would it perhaps have to be probability one given A? I honestly don't know enough here, but I think the basic idea stands?
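The inequality this whole argument rests on can be spot-checked numerically; here is a quick Monte Carlo sketch over random joint distributions (illustrative code, nothing here is from the original comment):

```python
import random

# For any joint distribution over A and B: P(A & B) <= P(A) and P(A & B) <= P(B).
random.seed(0)
violations = 0
for _ in range(1000):
    # Random joint distribution over the four outcomes (A,B), (A,~B), (~A,B), (~A,~B).
    raw = [random.random() for _ in range(4)]
    total = sum(raw)
    p_ab, p_a_nb, p_na_b, _ = (x / total for x in raw)
    p_a = p_ab + p_a_nb   # marginal P(A)
    p_b = p_ab + p_na_b   # marginal P(B)
    if p_ab > p_a + 1e-12 or p_ab > p_b + 1e-12:
        violations += 1
print(violations)  # -> 0
```

Note that equality P(A & B) = P(A) holds exactly when P(B | A) = 1, which matches the "probability one given A" reading rather than B needing to be unconditionally certain.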

Comment author: Regex 08 September 2015 03:42:55PM *  0 points [-]

If I missed something along the line, I'm really willing to learn.
