Comment author: dfranke 12 April 2011 10:12:01AM 1 point [-]

If computation doesn't exist because it's "a linguistic abstraction of things that exist within physics", then CPUs, apples, oranges, qualia, "physical media" and people don't exist; all of those things are also linguistic abstractions of things that exist within physics. Physics is made of things like quarks and leptons, not apples and qualia. I don't think this definition of existence is particularly useful in context.

Not quite reductionist enough, actually: physics is made of the rules relating configurations of spacetime, which exist independently of any formal model of them; the models are what give us concepts like "quark" and "lepton". But digging deeper into this linguistic rathole won't clarify my point any further, so I'll drop this line of argument.

As to your fruit analogy: two apples do in fact produce the same qualia as two oranges, with respect to number! Obviously color, smell, etc. are different, but in both cases I have the experience of seeing two objects. And if I'm trying to do sums by putting apples or oranges together, substituting one for the other will give the same result. In comparing my brain to a hypothetical simulation of my brain running on a microchip, I would claim a number of differences (weight, moisture content, smell...), but I hold that what makes me me would be present in either one.

If you started perceiving two apples identically to the way you perceive two oranges, without noticing their difference in weight, smell, etc., then you or at least others around you would conclude that you were quite ill. What is your justification for believing that being unable to distinguish between things that are "computationally identical" would leave you any healthier?

Comment author: AstroCJ 12 April 2011 01:25:50PM 1 point [-]

If I have in front of me four apples that appear to me to be identical, but a specific two of them are consistently referred to as oranges by sources I normally trust, then they are not computationally identical. If everyone perceived them as apples, I doubt I would be seen as ill.

Comment author: dfranke 12 April 2011 02:40:21AM *  0 points [-]

A sufficiently advanced simulation on any substrate would have this property - the simulated qualia would feed back on the simulated world.

Correct, but both are still just simulated. The qualia that actually occur are those associated with the simulator's substrate, not those associated with the simulated world, and in the context of the simulated world they would not make sense.

Comment author: AstroCJ 12 April 2011 08:37:18AM 0 points [-]

they would not make sense

Proof?

Comment author: Will_Sawin 12 April 2011 02:14:35AM *  -4 points [-]

"Qualia are not pure "outputs": they feed back on the rest of the world."

A sufficiently advanced simulation on any substrate would have this property - the simulated qualia would feed back on the simulated world.

Maybe the qualia of people who ACTUALLY have bodies are completely different from yours, a person who has no body.

Comment author: AstroCJ 12 April 2011 08:36:20AM 0 points [-]

DV for being unconstructive.

Comment author: Swimmer963 22 March 2011 07:26:52AM 5 points [-]

I deleted this post. I will write another post later about why I deleted it.

Comment author: AstroCJ 22 March 2011 03:04:54PM 2 points [-]

I hope you didn't take my initial comment as being aggressive or judgemental; it was a good post, well written and interesting. I hope, too, that there's no kind of fallout.

Comment author: komponisto 18 March 2011 05:45:08PM *  2 points [-]

if you're going to share incredibly personal details about "a friend"... we need to know if this information has been posted with her consent.

I think (or, anyway, hope) what you meant to write was "you need her consent before posting", rather than "we need to know whether you obtained her consent [so that we can socially penalize you if it turns out you didn't]."

Comment author: AstroCJ 18 March 2011 10:09:35PM 1 point [-]

Socially penalise, nothing. For something as personal as this, it's deeply unusual not to make it clear that you have permission; my concern is for the privacy of the person under discussion.

Comment author: AstroCJ 18 March 2011 05:33:19PM *  11 points [-]

I am alarmed and dismayed that no-one has raised the issue of privacy in this thread. Swimmer963, just from glancing through your comments, you're [rot13'd description of Swimmer963 deleted].

I didn't whizz through those to be creepy (actually I was impressed at how you seem to be consistently sensible), but if you're going to share incredibly personal details about "a friend" who was raped, we need to know if this information has been posted with her consent. The above is very easily enough to personally identify you.

On whether or not this will be important or not: [blanked].

EDIT: Deleted precis of Swimmer963's situation; it had served its purpose. EDIT: Deleted some personal information.

Comment author: AstroCJ 19 February 2011 12:20:09PM -9 points [-]

DV. I've pretty much lost patience with posts that attribute male gender to idealised agents. It destroyed my interest in the rest of the post.

Comment author: Snowyowl 05 February 2011 01:48:39AM 2 points [-]

No, then too.

Comment author: AstroCJ 05 February 2011 01:43:44PM 1 point [-]

Unless...?

Comment author: Dr_Manhattan 03 February 2011 04:07:33PM 2 points [-]

For me to argue further would be to argue the meaning of "failure" in this context, when I'm pretty sure I actually agree with you on all of the substance of our posts.

I really do not want to argue about semantics either, but our agreed interpretation makes Niel's statement equivalent to "our visual system is not optimal for non-ancestral environments", which is highly uninteresting. I think Dawkins's laryngeal nerve example is much more interesting in this sense, since it points out that body designs do not come from a sane Creator, at least in some instances (which is enough for his point).

Comment author: AstroCJ 05 February 2011 01:36:31PM 3 points [-]

Since we do not live in the ancestral environment now, I think the quotation could be just underlining how we should viscerally know our brain is going to output sub-optimal crud given certain inputs. Upvoted original.

Comment author: XiXiDu 05 February 2011 11:31:32AM -1 points [-]

If everyone were to take Landsburg's argument seriously, which would imply that all humans were rational, then everyone would donate solely to the SIAI. If everyone only donated to the SIAI, would something like Wikipedia even exist? I suppose the SIAI would have created Wikipedia if it was necessary. I'm just wondering how much important stuff out there was spawned by irrational contributions, and what the world would look like if such contributions had never been made. I'm also not sure how venture-capitalist growth funding differs from the idea of diversifying one's contributions to charity.

Note that I do not doubt the correctness of Landsburg's math. I'm just not sure it would have worked out given human shortcomings (even if everyone were maximally rational). If nobody were to diversify, with everyone contributing to what seems to be the most rational option given the current data, then being wrong would be a catastrophe. Even maximally rational humans can fail, after all. This likely wouldn't be a problem if everyone contributed to a goal that could be verified rather quickly, but something like the SIAI could eat up the resources of the planet and still turn out to be not even wrong in the end. Since everyone would have concentrated on that one goal (it being, no doubt, the most rational choice at the moment), might such a counterfactual world have been better off diversifying its contributions, or would the SIAI have turned into some kind of financial manager allocating those contributions and subsequently become a venture capitalist itself?

Comment author: AstroCJ 05 February 2011 01:30:24PM 2 points [-]

Downvoted.

For games where multiple agents interact, the optimal strategy will usually involve some degree of weighted randomness. Suppose there are noncommunicating rational agents A, B, C, each with (an unsplittable) $1, and charities 1 and 2, both of which fulfil a vital function, but 1 requires $2 to function and 2 requires $1. Both charities function only when exactly two agents donate to 1, which under a symmetric strategy happens with probability 3p^2(1-p); this is maximized at p = 2/3, so I would expect each agent to donate to 1 with p = 2/3.
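The p = 2/3 claim is easy to check numerically. Below is a minimal sketch (the function name, trial count, and 0.01 grid are my own choices, not from the comment): a Monte Carlo estimate of the chance both charities function when each of three agents independently donates to charity 1 with probability p, plus a grid search over the exact success probability 3p^2(1-p).

```python
import random

def both_charities_function(p, trials=200_000, seed=0):
    """Estimate the probability that both charities function when each of
    three agents independently donates its $1 to charity 1 with probability p.
    Charity 1 needs $2 and charity 2 needs $1, so success requires exactly
    two donations to charity 1, leaving one agent's dollar for charity 2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        to_one = sum(rng.random() < p for _ in range(3))
        if to_one == 2:  # charity 1 gets $2, charity 2 gets the remaining $1
            hits += 1
    return hits / trials

# Exact success probability is 3*p**2*(1-p); its derivative 3p(2-3p)
# vanishes at p = 2/3, so that is the optimal symmetric strategy.
best_p = max((p / 100 for p in range(101)), key=lambda p: 3 * p**2 * (1 - p))
print(best_p)  # 0.67 on a 0.01 grid, i.e. p = 2/3 to grid precision
```

At p = 2/3 the exact success probability is 3(2/3)^2(1/3) = 4/9, and the simulation lands within Monte Carlo noise of that value.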

A rational agent is aware that other rational agents exist, and will take account of their actions.
