Comment author: pangel 02 June 2016 02:40:38PM *  -1 points [-]

I have a question, but I try to be careful about the virtue of silence. So I'll try to ask my question as a link:

http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation

Also, these ideas are still weird enough to win against his level of status, as I think the comments here show:

https://news.ycombinator.com/item?id=11822302

Comment author: Lumifer 07 March 2016 08:32:55PM 5 points [-]

Give me everything I want...

In capitalist economies scarce resources are effectively auctioned off to the highest bidder. If you're noticeably poorer than people around you, you will likely be unable to get to these resources. A simple example: buying a house.

But that doesn't imply that e.g. a capitalist economy with basic income couldn't provide even more.

At one level, no, it doesn't. But at the same level it also doesn't imply that a capitalist economy with X (where X can be anything) couldn't provide even more as well.

At another level yes, it does, because there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.

Comment author: pangel 07 March 2016 11:07:40PM 1 point [-]

Could you expand on this?

...there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.

Comment author: Viliam 07 March 2016 08:22:00PM *  3 points [-]

Similarly to you, unless the rich people use their money to abuse me, I care more about my absolute than relative wealth. My struggles are not with comparing myself to other people, but with getting what I want. Give me everything I want, and I won't care if you give other people 10 times more.

To me it has profound implications about what kind of economic world we should strive for -- if most folks are like me, the current system is fine.

If you took the wealth existing today and distributed it more flatly, many people would have higher absolute wealth. So I don't see how caring about absolute wealth makes the current system fine.

We do have the data point that a capitalist economy provides higher average wealth than a communist one. But that doesn't imply that e.g. a capitalist economy with basic income couldn't provide even more. (Maybe the problem with communism was lack of competition and the micromanagement of everything by political nitwits, not the flatter distribution of wealth per se.)

Comment author: pangel 07 March 2016 11:04:55PM 0 points [-]

Sorry, "fine" was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.

Comment author: Lumifer 07 March 2016 06:20:37PM 4 points [-]

Have you ever been poor?

Comment author: pangel 07 March 2016 11:01:35PM 0 points [-]

No. Thanks for making me notice how relevant that could be.

I see that I haven't even thought through the basics of the problem. "power over" is felt whenever scarcity leads the wealthier to take precedence. Okay, so to try to generalise a little, I've never been really hit by the scarcity that exists because my desires are (for one reason or another) adjusted to my means.

I could be a lot wealthier yet have cravings I can't afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else's in proportion wouldn't help), I'd start caring about relative wealth really fast.

Comment author: ChristianKl 07 March 2016 02:26:44PM 4 points [-]

In particular, I sincerely do not care about my relative wealth.

How do you know?

Comment author: pangel 07 March 2016 06:16:17PM 0 points [-]

I see it as a question of preference, so I know by never having felt envy, etc. at someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.

As noted in the comment thread I linked, I could start caring if someone's relative wealth gave them power over me, but I haven't been in this situation so far (stuff like boarding priority for first-class tickets is a minor example I did experience, but that's never bothered me).

Comment author: pangel 07 March 2016 11:56:09AM *  3 points [-]

Responding to a point about the rise of absolute wealth since 1916, this article makes a point (not very well) about the importance of relative wealth.

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.

I've had a short discussion about this earlier, and find it very interesting.

In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications about what kind of economic world we should strive for -- if most folks are like me, the current system is fine. If they are like some people I have met, a flatter real wealth distribution, even at the price of a much, much lower mean, could be preferable.

I'm interested in any thoughts you all might have on the topic :)

Comment author: gwern 10 February 2016 09:25:05PM *  24 points [-]

Probably not. If you look at the comments on posts about the Prize, you can see how clearly people have already set up their fallback arguments once the soldier of 'possible bad vitrification when scaled up to human brain size' has been knocked down. For example, on HN: https://news.ycombinator.com/item?id=11070528

  • 'you may have preserved all the ultrastructure but despite the mechanism of crosslinking, I'm going to argue that all the real important information has been lost'
  • 'we already knew that glutaraldehyde does a good job of fixating, this isn't news, it's just a con job looking for some free money'
  • 'it irreversibly kills cells by fixing them in place so this is irrelevant'
  • 'regardless of how good the scans look, this is just a con job'
  • 'what's the big deal, we already know frogs can do this, but what does it have to do with humans; anyway, it's a quack science which we know will never work'

Even if a human brain is stored, successfully scanned, and emulated, the continued existence - nay, majority - of body-identity theorists ensures that there will always be many people who have a bulletproof argument against: 'yeah, maybe there's a perfect copy, but it'll never really be you, it's only a copy waking up'.

More broadly, we can see that there is probably never going to be any 'Sputnik moment' for cryonics, because the adoption curve of paid-up members or cryopreservations is almost eerily linear over the past 50 years and entirely independent of the evidence. Refutation of 'exploding lysosomes' didn't produce any uptick. Long-term viability of ALCOR has not produced any uptick. Discoveries always pointing towards memory being a durable feature of neuronal connections rather than, as so often postulated, an evanescent dynamic property of electrical patterns, have never produced an uptick. Continued pushbacks of 'death' have not produced upticks. No improvement in scanning technology has produced an uptick. Moore's law proceeding for decades has produced no uptick. Revival of rabbit kidney, demonstration of long-term memory continuity in revived C. elegans, improvements in plastination and vitrification - all have not or are not producing any uptick. Adoption is not about evidence.

Even more broadly, if you could convince anyone, how many do you expect to take action? To make such long-term plans on abstract bases for the sake of the future? We live in a world where most people cannot save for retirement and cannot stop becoming obese and diabetic despite knowing full well the highly negative consequences, and where people who have survived near-fatal heart attacks are generally unable to take their medicines and exercise consistently as their doctors keep begging them. And for what? Life sucks, but at least then you get to die. Even after a revival, I would predict that maybe 5% of the USA population (~16m people) would be meaningfully interested in cryonics, and of that only a fraction would go through with it, so 'millions' is an upper bound.

Comment author: pangel 11 February 2016 04:49:03PM 2 points [-]

...people have already set up their fallback arguments once the soldier of '...' has been knocked down.

Is this really good phrasing or did you manage to naturally think that way? If you do it automatically: I would like to do it too.

It often takes me a long time to recognize an argument war. Until that moment, I'm confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you're not having a discussion but are walking on a battlefield?

Comment author: TheAncientGeek 20 September 2015 07:59:12PM 1 point [-]

They imply that we should not try to build an FAI using current machine learning techniques

But people are using ML techniques. Should MIRI be campaigning to get this research stopped?

Comment author: pangel 24 September 2015 09:05:09AM 0 points [-]

I think practitioners of ML should be more wary of their tools. I'm not saying ML is a fast track to strong AI, just that we don't know if it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it were possible to detect danger at this point. So I think someone should find a way to make the field more careful.

I don't think that someone should be MIRI though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).

Comment author: TheOtherDave 21 August 2015 08:02:25PM *  -1 points [-]

To know what I'm referring to by a term is to know what properties something in the world would need to have to be a referent for that term.

The ability to recognize such things in the world is beside the point. When I say "my ancestors," I know what I mean, but in most cases it's impossible to pick that attribute out empirically -- I can't pick out most of my ancestors now, because they no longer exist to be picked out, and nobody could have picked them out back when they were alive, because the defining characteristic of the category is in terms of something that hadn't yet been born. (Unless you want to posit atypical time-travel, of course, but that's not my point.)

So, sure, if by "flying saucer" I refer to an alien spaceship, I don't necessarily have any way of knowing whether something I'm observing is a flying saucer or not, but I know what I mean when I claim that it is or isn't.

And if by "consciousness" I refer to anything sufficiently similar to what I experience when I consider my own mind, then I can't tell whether a rock is conscious, but I know what I mean when I claim it is or isn't.

Rereading pangel's comment, I note that I initially understood "we don't actually know what those concepts refer to" to mean we don't have the latter thing... that we don't know what we mean to express when we claim that the concept refers to something... but it can also be interpreted as saying we don't know what things in the world the concept correctly refers to (as with your example of being wrong about believing something is an alien spaceship).

I'll stand by my original statement in the original context I made it in, but sure, I also agree that just because we don't currently know what things in the world are or aren't conscious (or flying saucers, or accurate blueprints for anti-gravity devices, or ancestors of my great-great-grandchild, or whatever) doesn't mean we can't talk sensibly about the category. (Doesn't mean we can, either.)

And, yes, the fact that I don't know how subjective experience comes to be doesn't prevent me from recognizing subjective experience.

As for urgency... I dunno. I suspect we'll collectively go on inferring that things have a consciousness similar to our own with a confidence proportional to how similar their external behavior is to our own for quite a long time past the development of (human) brains in vats. But sure, I can easily imagine various legal prohibitions like you describe along the way.

Comment author: pangel 22 August 2015 04:32:19PM 0 points [-]

I meant it in the sense you understood first. I don't know what to make of the other interpretation. If a concept is well-defined, the question "Does X match the concept?" is clear. Of course it may be hard to answer.

But suppose you only have a vague understanding of ancestry. Actually, you've only recently coined the word "ancestor" to point at some blob of thought in your head. You think there's a useful idea there, but the best you can do for now is: "someone who relates to me in a way similar to how my dad and my grandmother relate to me". You go around telling people about this, and someone responds "yes, this is the brute fact from which the conundrums of ancestry start". Another tells you that you ought to stop using that word if you don't know what the referent is. Then they go on to say your definition is fine: it doesn't matter if you don't know how someone comes to be an ancestor, you can still talk about an ancestor and make sense. You have not gone through all the tribe's initiation rituals yet, so you don't know how you relate to grey wolves. Maybe they're your ancestors, maybe not. But the other says: "At least, you know what you mean when you claim they are or are not your ancestors."

Then your little sister drops by and says: "Is this rock one of your ancestors?" No, certainly not. "OK, didn't think so. Am I one of your ancestors?" You feel about it for a minute and say no. "Why? We're really close family. It's very similar to how dad or grandma relate to you." Well, you didn't include it in your original definition, but someone younger than you can definitely not be your ancestor. It's not that kind of "similar". A bit of time and a good number of family members later, you have a better definition. Your first definition was just two examples, something about "relating", and the word "similar" thrown in to mean "and everyone else who is also an ancestor". But similar in what way?

Now the word means "the smallest set such that your parents are in it, and any parent of an ancestor is an ancestor"..."union the elders of the tribe, dead or alive, and a couple of noble animal species." Maybe a few generations later you'll drop the second term of the definition and start talking about genes, whatever.
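The recursive definition above (the smallest set containing your parents and closed under "parent of an ancestor") is just the transitive closure of a parent relation. As a purely illustrative sketch (the family tree and names here are hypothetical, not from the comment):

```python
def ancestors(person, parents):
    """Smallest set containing person's parents and any parent of an ancestor.

    `parents` maps each person to a list of their parents.
    """
    result = set()
    frontier = list(parents.get(person, ()))
    while frontier:
        p = frontier.pop()
        if p not in result:
            result.add(p)
            # A parent of an ancestor is also an ancestor.
            frontier.extend(parents.get(p, ()))
    return result

# Toy family tree (hypothetical):
parents = {"you": ["dad", "mom"], "dad": ["grandma"]}
print(ancestors("you", parents))  # {'dad', 'mom', 'grandma'}
```

The point of the story survives in the code: once "similar to how dad and grandma relate to me" is replaced by an explicit closure rule, questions like "is my younger sister an ancestor?" answer themselves.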

My "fuzziest starting point" was really fuzzy, and not a good definition. It was one example, something about being able to "experience" stuff, and the word "similar" thrown in to mean "and everyone else who is conscious." I may (kind of) know what I mean when I say a rock is not conscious, since it doesn't experience anything, but what do I mean exactly when I say that a dog isn't conscious?

I don't think I know what I mean when I say that, but I think it can help to keep using the word.

P.S. The final answer could be as in the ancestor story, a definition which closely matches the initial intuition. It could also be something really weird where you realize you were just confused and stop using the word. I mean, the life force of vitalism was probably a brute fact for a long time.

Comment author: Lumifer 20 August 2015 08:00:37PM *  -1 points [-]

Do I have to provide a full specification of what would be "satisfactory" just to recognize an ethical problem?

Not "full", but some, yes. Otherwise anyone can squint at anything and say "I think there is an ethical problem here. I can't quite put my finger on it, but my gut feeling ("visceral level") is that there is" -- and there is no adequate response to that.

Comment author: pangel 20 August 2015 08:25:17PM *  2 points [-]

As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.

The fuzziest starting point for "consciousness" is "something similar to what I experience when I consider my own mind". But this doesn't help much. Someone can still claim "So rocks probably have consciousness!", and another can respond "Certainly not, but brains grown in labs likely do!". Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.

For some concepts, we disagree on definitions because we don't actually know what those concepts refer to (this doesn't include concepts like "art", etc.). I'm not sure of the best way to talk about whether an entity possesses such a concept. Are there existing articles/discussions about that?
