All of pangel's Comments + Replies

pangel60

An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.

For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values at their sorted position, expanding 2-nodes into 3-nodes when necessary, and bubbling the extra (middle) value up when a 3-node would have to expand. This keeps the tree balanced.

A 2-3-4 tree just generalises the above.

Now the intuition is that red means "I am part of a bigger node." Tha... (read more)
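A minimal sketch of that encoding (the class and function names are invented for illustration, not taken from the comment): collapsing a black node together with its adjacent red children recovers the 2-3-4 node they jointly represent.

```python
# Sketch of the "red means 'part of a bigger node'" intuition.
# RB and to_234_node are hypothetical names for illustration.

class RB:
    def __init__(self, value, color, left=None, right=None):
        self.value, self.color = value, color  # color: "R" (red) or "B" (black)
        self.left, self.right = left, right

def to_234_node(black):
    """Collapse a black node and its red children into one 2-3-4 node.

    Returns (values, subtrees): 1-3 sorted values and 2-4 subtrees,
    where each subtree is a black node (or None for an empty tree),
    i.e. the root of another 2-3-4 node.
    """
    values, subtrees = [], []

    def absorb(node):
        if node is not None and node.color == "R":
            # Red: belongs to the same logical 2-3-4 node as its parent.
            absorb(node.left)
            values.append(node.value)
            absorb(node.right)
        else:
            # Black or empty: the boundary of this 2-3-4 node.
            subtrees.append(node)

    absorb(black.left)
    values.append(black.value)
    absorb(black.right)
    return values, subtrees

# A 3-node holding {1, 2}: a black 2 whose left child is a red 1.
print(to_234_node(RB(2, "B", left=RB(1, "R"))))
# -> ([1, 2], [None, None, None]): 2 values, 3 (empty) subtrees
```

A black node with no red children yields a 2-node, one red child a 3-node, and two red children a 4-node, which is why red-black trees are exactly binary encodings of 2-3-4 trees.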

pangel70

Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.

-10Lumifer
pangel-10

I have a question, but I try to be careful about the virtue of silence. So I'll try to ask my question as a link:

http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation

Also, these ideas are still weird enough to win out even against his level of status, as I think the comments here show:

https://news.ycombinator.com/item?id=11822302

-9Lumifer
pangel10

Could you expand on this?

...there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.

3Lumifer
Consider incentives. Under capitalism one incentive is the possibility of becoming rich, but another, more basic one, is the desire not to starve. Under a command economy you won't usually starve (because you're a useful labour unit), at least in a situation where you can do something about it. You still might starve because of incompetence or a political decision. A large number of people do not enjoy their jobs and, given the opportunity, would... take early retirement, let's put it this way. That's a problem. Command economies solve it by command (recall that being unemployed was a criminal offense in the Soviet Union). Capitalist economies solve it by saying "OK, I'll wait till you get hungry". A livable basic income would make that incentive disappear. Yes, some people would be happy. The consequences for society, though, are debatable :-/
pangel00

Sorry, "fine" was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.

pangel00

No. Thanks for making me notice how relevant that could be.

I see that I haven't even thought through the basics of the problem. "power over" is felt whenever scarcity leads the wealthier to take precedence. Okay, so to try to generalise a little, I've never been really hit by the scarcity that exists because my desires are (for one reason or another) adjusted to my means.

I could be a lot wealthier yet have cravings I can't afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else's in proportion wouldn't help), I'd start caring about relative wealth really fast.

pangel00

I see it as a question of preference, so I know from never having felt envy (etc.) toward someone richer than me just for being richer. I only feel interested in my wealth relative to what I need or want to purchase.

As noted in the comment thread I linked, I could start caring if someone's relative wealth gave them power over me, but I haven't been in this situation so far (something like boarding priority for first-class tickets is a minor example I did experience, but that's never bothered me).

4Lumifer
Have you ever been poor?
pangel50

Responding to a point about the rise of absolute wealth since 1916, this article makes (not very well) a point about the importance of relative wealth.

Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.

I've had a short discussion about this earlier, and find it very interesting.

In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications a... (read more)

2knb
It's complicated. It seems clear to me that right now a huge number of people want to increase their absolute wealth more than their relative position. People who move from poor countries to rich countries often wind up in a lower percentile in the new country, but are better off in an absolute sense. Relatively few independently wealthy first-worlders move to poor countries to increase their relative wealth (although a handful of people do, admittedly.) This is complicated somewhat by the fact that recent migrants might still have enough connections to their old country that their higher relative position in the old country is more salient to them than their lower relative position in the new country.
5Viliam
Similarly to you, unless the rich people use their money to abuse me, I care more about my absolute than my relative wealth. My struggles are not with comparing myself to other people, but with getting what I want. Give me everything I want, and I won't care if you give other people 10 times more. If you took the wealth existing today and distributed it more flatly, many people would have higher absolute wealth. So I don't see how caring about absolute wealth makes the current system fine. We do have the data point that a capitalist economy provides higher average wealth than a communist one. But that doesn't imply that e.g. a capitalist economy with basic income couldn't provide even more. (Maybe the problem with communism was the lack of competition and the micromanagement of everything by political nitwits, not the flatter distribution of wealth per se.)
6ChristianKl
How do you know?
pangel30

...people have already set up their fallback arguments once the soldier of '...' has been knocked down.

Is this just really good phrasing, or did you naturally think of it that way? If you do it automatically, I would like to learn to do it too.

It often takes me a long time to recognize an argument war. Until that moment, I'm confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you're not having a discussion but are walking on a battlefield?

gwern220

Is this really good phrasing

Yes, I was referring to Eliezer's essay there. I liked my little flourish, so I'm glad someone noticed.

How do you detect you're not having a discussion but are walking on a battlefield?

In this case it's easy when you look over all the comments on HN and elsewhere. It's like when Yvain is simultaneously accused of being racist Neo-reactionary scum and a Marxist SJW beta-cuckold Jew scum - it's difficult to see how both sets of accusations could be right simultaneously, so clearly at least one set of accusers are unhi... (read more)

pangel00

I think practitioners of ML should be more wary of their tools. I'm not saying ML is a fast track to strong AI, just that we don't know if it is. Several ML people voiced reassurances recently, but I would have expected them to do that even if it was possible to detect danger at this point. So I think someone should find a way to make the field more careful.

I don't think that someone should be MIRI though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).

pangel00

I meant it in the sense you understood first. I don't know what to make of the other interpretation. If a concept is well-defined, the question "Does X match the concept?" is clear. Of course it may be hard to answer.

But suppose you only have a vague understanding of ancestry. Actually, you've only recently coined the word "ancestor" to point at some blob of thought in your head. You think there's a useful idea there, but the best you can do for now is: "someone who relates to me in a way similar to how my dad and my grandmother relat... (read more)

0TheOtherDave
Hm. So, I want to point out explicitly that in your example of ancestry, I intuitively know enough about this concept of mine to know my sister isn't my ancestor, but I don't know enough to know why not. (This isn't an objection; I just want to state it explicitly so we don't lose sight of it.) And, OK, I do grant the legitimacy of starting with an intuitive concept and talking around it in the hopes of extracting from my own mind a clearer explicit understanding of that concept. And I'm fine with the idea of labeling that concept from the beginning of the process, just so I can be clear about when I'm referring to it, and don't confuse myself. So, OK. I stand corrected here; there are contexts in which I'm OK with using a label even if I don't quite know what I mean by it. That said... I'm not quite so sanguine about labeling it with words that have a rich history in my language when I'm not entirely sure that the thing(s) the word has historically referred to is in fact the concept in my head. That is, if I've coined the word "ancestor" to refer to this fuzzy concept, and I say some things about "ancestry," and then someone comes along and says "this is the brute fact from which the conundrum of ancestry start" as in your example, my reaction ought to be startlement... why is this guy talking so confidently about a term I just coined? But of course, I didn't just coin the word "ancestor." It's a perfectly common English word. So... why have I chosen that pre-existing word as a label for my fuzzy concept? At the very least, it seems I'm risking importing by reference a host of connotations that exist for that word without carefully considering whether I actually intend to mean them. And I guess I'd ask you the same question about "conscious." Given that there's this concept you don't know much about explicitly, but feel you know things about implicitly, and about which you're trying to make your implicit knowledge explicit... how confident are you that this concept c
pangel20

As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.

The fuzziest starting point for "consciousness" is "something similar to what I experience when I consider my own mind". But this doesn't help much. Someone can still claim "So rocks probably have consciousness!", and another can respond "Certainly not, but brains grown in labs likely do!". Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.

F... (read more)

0TheOtherDave
If I don't know what I'm referring to when I say "consciousness," it seems reasonable to conclude that I ought not use the term.
pangel00

Straussian thinking seems like a deep well full of status moves!

  • Level 0 - Laugh at the conspiracy-like idea. Shows you are in the pack.
  • Level 1 - As Strauss does, explain it / present instances of it. Shows you are the guru.
  • Level 2 - Like Thiel, hint at it while playing the Straussian game. Shows you are an initiate.
  • Level 3 - Criticize it for failing too often (bad thinking attractor, ideas that are hard to check and deploy usual rationality tools on). Shows you see through the phyg's distortion field.
pangel40

You probably already agreed with "Ghosts in the Machine" before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear not to do what it's supposed to if "supposed" is taken to mean the programmer's intent.

These statements don't ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You're right, we understand (program + parameters learned from dataset) even less than (program). So while the outs... (read more)

1TheAncientGeek
But people are using ML techniques. Should MIRI be campaigning to get this research stopped?
pangel150

I have met people who explicitly say they prefer a lower gap between them and the better-offs over a better absolute level for themselves. IIRC they were more concerned about 'fairness' than about what the powerful might do to them. They also believed that most would agree with them (I believe the opposite).

0A1987dM
So, that which certain right-wingers here on LW were fighting against wasn't a straw man after all. :-/
1TheOtherDave
Yes, 'fairness' is often a concept that gets invoked in these sorts of discussions. For my own part, given world W1 where I have X1 and the best-off people have Y1, and world W2 where I have X2 and the best-off people have Y2, such that (X2 < X1) and (Y2-X2) << (Y1 - X1), within a range of worlds such that X1 and X2 are both not vastly different from what I have today, I expect that when transitioning from W2 to W1 I would experience myself as better off, and when transitioning from W1 to W2 I would experience myself as worse off. I expect that's true of most people. It's not necessarily the only important question here, though.
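A concrete instance of the W1/W2 comparison above, with numbers invented purely for illustration:

```python
# Hypothetical wealth figures for the W1/W2 comparison (not from the comment).
x1, y1 = 50_000, 500_000   # W1: what I have, what the best-off have
x2, y2 = 45_000, 60_000    # W2: I have less, but the gap is far smaller
assert x2 < x1                   # absolutely worse off in W2
assert (y2 - x2) < (y1 - x1)     # gap in W2 is much smaller than in W1
```

The claim is that most people would still experience the move from W2 to W1 as becoming better off, despite the much larger gap.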
pangel00

Gentzen’s Cut Elimination Theorem for Non-Logicians

Knowledge and Value, Tulane Studies in Philosophy, Vol. 21, 1972, pp. 115-126

3VincentYu
Here.
3gwern
I can't get it either, sorry.
2somervta
http://philpapers.org/rec/MILGCE The PDC appears to be offline, and although wayback machine has the Tulane page here, it doesn't seem like it has the pdf linked to by philpapers. Hopefully someone else can work with this.
pangel20

Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me, for instance: I'd be more generally prone to take initiative if I saw trust in my peers' eyes.

pangel110

There is an animated series for children, aimed at explaining the human body, which personifies bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:

Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life

Example: http://www.youtube.com/watch?v=LIyvrcHnriE&t=1m11s

pangel20

So MoR might be a meta-fantasy of the wizarding world, as The Sword of Good is a meta-fantasy of the muggle world. Or at least, MoR!Harry might make the same impression on a wizard reading one fic as Hirou does on a muggle reading the other.

Although my instinct is still that Harry fails at the end.

8DanArmak
As long as he's alive, he'll keep trying, so either he dies, or he succeeds, or something very unexpected happens. Otherwise there will be no closure to the story.
pangel50

Or Harry transfigured Hermione's body into a rock and then the rock into a brown diamond. Unless the story explicitly disallows double transfigurations and I missed it.

2gwern
My problem with that is that the rock should then still be a 'transfigured object' for the purposes of the spells and be detected when Dumbledore examines the untransfigured diamond.
7Ben Pace
I thought this, but the spell used by Harry to undo a transformation in Chapter 89 is 'Finite Incantatem', which sounds more like 'stop magic happening here' than 'undo a single transformation', especially considering its varied other uses. Assuming Dumbledore didn't make a basic error (he didn't), I feel as though my stone hypothesis from earlier has been falsified.
pangel00

Sounds right, but the present-day situation is the same: orbs may float to you if and only if you enter the Hall. So Dumbledore should know whether he is involved in the prophecy or not. Unless I missed something?

0gwern
The easiest way to stop the floating is to stop the floating entirely, in which case entering the hall wouldn't necessarily help. And we don't know whether Dumbledore has entered the hall, for that matter: he may not be willing to risk another break-in for anything short of Voldemort himself, Trelawney's second prophecy may seem benign, or he may fear that hearing the prophecy would narrow down his options or cause some other harm.
pangel00

... a great room of shelves filled with glowing orbs, one after another appearing over the years. (...) Those mentioned within a prophecy would have a glowing orb float to their hand, and then hear the prophet's true voice speaking.

I interpret it as: Anyone who enters this room sees a glowing orb float to their hand for every prophecy that mentions them. How do you interpret it?

0hairyfigment
I was probably just wrong, though gwern makes a good point. I think I was imagining people actively summoning an orb. While we're on the topic, though, note that self-fulfilling prophecies seem easier for the Unspeakables to detect (and decide to prevent) than do "temporal pressures".
2gwern
I took it as meaning that the orbs visited anyone mentioned in a prophecy before the Unspeakables sealed the Hall precisely to prevent people from learning of the prophecy they were in. If you had to know that a prophecy was made involving you and travel to the Hall of your own volition, that would be essentially useless by Merlin's lights ('because knowing is half the battle!'), since the only ones who would know this in advance would be the prophecy hearers and their allies, and what's the point of that? If you were setting up a system to screw with Destiny, you'd arrange for the system to tell all involved automatically! We know that the glowy orbs are trapped because no one in the story mentions or sees them happening: Snape does not mention globes coming to him, nor does McGonagall nor anyone else; Voldemort has to be informed by Snape; neither Harry nor Quirrell nor anyone else receives orbs from Trelawney's second cut-off prophecy (perhaps the real reason that Dumbledore doesn't want to take Harry to the Hall); and Dumbledore has to personally take Harry's parents to the Hall to hear their copies.
pangel50

"Those who are spoken of in a prophecy, may listen to that prophecy there. Do you see the implication, Harry?"

Shouldn't Minerva see another implication, that Dumbledore has no reason to wonder whether he is the dark lord of the prophecy?

9EternalStargazer
There is a more interesting implication in that section, actually. 'To some people than others' implies Harry. Ergo, it would be more dangerous for Harry to go there. Ergo, there are other things in the Hall of Prophecy which would affect Harry. Ergo, there are more prophecies about Harry there. We already know or can suspect one, the HE IS COMING one. (Which incidentally I suspect is 'TEAR APART THE VERY FABRIC OF SPACE AND TIME', and that [rot13] Gur Zna Haqre gur Ung, naq gur bevtvangbe bs n ohapu bs gurfr cynaf vf n Shgher Uneel jub unf tbggra nebhaq Gvzr Gheare erfgevpgvbaf. Ur shpxrq fbzrguvat hc naq vf gelvat gb svk vg, be ur qvqa'g naq ernyvmrq gung ur arrqrq gb qb gurfr guvatf gb rafher uvf bja gvzryvar.) Also from this, since Dumbledore will not take him there, we can assume that whatever it is that Harry might discover there would be detrimental to Dumbledore. On a meta note, it is also a reason (beyond the many others) for which Eliezer would have had to deny Harry a phoenix. Far too much freedom of movement for the plot to remain on any semblance of rails.
7hairyfigment
Surely that only applies if Albus tested it.
pangel50

Thank you for the link! Note that the .pdf version of the article (which is also referenced in dbaupp's link) has a record of the "hostile-wife" cases over a span of 8 years.

pangel50

Women don't like cryonics.

What made you believe this? Is there a pattern to the declared reasons?

You can look at cryonic signup rates by gender, and there's also the article that advancedatheist linked. I'll add that in my anecdotal experience, women seem more likely to dismiss the idea when I bring it up in casual conversation.

For myself, personally, I don't like cryonics because I think the research largely points to it being non-viable. Of the three other women I can remember speaking to recently, two had the same objection, and the last one's issue is that they live in Australia, so the difficulty of getting cryopreserved soon after death is ridiculously high (they view it as plausible but unlikely).

8advancedatheist
I guess you missed the controversy this article generated a couple years back: http://www.evidencebasedcryonics.org/is-that-what-love-is-the-hostile-wife-phenomenon-in-cryonics/ I've wondered, if we do make a transition to a society where extreme healthy life extension becomes feasible and a part of mainstream medicine, whether we'll see a pattern where women on average still choose to die more or less on schedule while men on average choose the longevity treatments. That could work out well for the straight alpha males and the alpha wannabes who value women for sex but not much else, because they would always have new crops of women coming to fruition for their sexual adventures while they forget the dying older ones; but the situation could distress the men who become emotionally involved with the women in their lives, value their companionship, and don't want to see these women age and die. I've noticed that the relatively few women who sign up for cryonics on their own initiative generally don't have, and apparently don't want, children, though I know of a couple of fertility-oriented mom types. One of these motherhood-averse women told me that well before she discovered cryonics and sought out male cryonicists as companions, she had a tubal ligation in her early 20's. (She had a scar in the right place.) But for the most part the cryonics movement remains a male-dominated social space, and I don't see that changing any time soon.
7dbaupp
I think it's an empirical belief; e.g., even Robin Hanson's wife is resistant to cryonics.
pangel40

The fictional college of the article selects incoming students only on price.

I have often wondered if anyone has tried to save their acceptance letters from colleges they couldn't afford to go to and show them to employers. Why doesn't this work?

pangel60

I had the exact same argument with my girlfriend (a bad idea) a while ago and asked for references to point her to on the IRC channel. I was given The Simple Truth and The Relativity of Wrong.

So I was about to write a very supportive response when I saw Mitchell Porter's comment. And this

(...) the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists.

aptly describes recent interactions I've had with my father¹. The accusation of narrowmindedness was present.

So, recurring conflicts with friends and family bec... (read more)

pangel70

I see your point. As an author I would think I'm misdirecting my readers by doing that though; "Voldemort has the same deformity as in canon? He's been playing with Horcruxes!" is the reasoning I would expect from them. Which is why I would, say, remove Quirrell's turban as soon as my plot had Voldemort not on the back of Quirrell's head.

pangel20

The soul-mangling is what causes Voldemort's snake-like appearance, IIRC, and MoR!McGonagall remembers a snake-like Voldemort from her battles. So either MoR!Voldemort has been doing some serious damage to his soul, or he decided to look freakish just for effect and stumbled by chance upon the exact same look which canon!Voldemort got from making Horcruxes.

0Desrtopa
I don't think this is ever made explicit. It's probably the reason J. K. Rowling had in mind, but I don't think there's anything in the text that rules out the possibility that he looked that way because he wanted to.
8gjm
Why isn't "EY is making him look like in canon" a sufficient explanation for the look being exactly the same? It would be a rotten explanation within the MoRverse, of course, but within the MoRverse there's no coincidence to need explaining.
pangel30

As an anecdote, I had a slight opposite tendency to go for what seemed like the worst answer, and I had to switch answers twice because of this.

pangel170

I understood the introductory question as "Frodo Baggins from the Lord of the Rings is buying pants. Which of these is he most likely to buy?", and correctly answered (c). I suggest rephrasing your question to ensure that it actually tests the reader's fictional bias. Also, Szalinski in Journal of Cognitive Minification is a nice one.

pangel50

Unless its utility function has a maximum, we are at risk. Observing Mandelbrot fractals is probably enhanced by having all the atoms of a galaxy playing the role of pixels.

Would you agree that unless the utility function of a random AI has a (rather low) maximum, and barring the discovery of infinite matter/energy sources, its immediate neighbourhood is likely to get repurposed?

I must say that at least I finally understand why you think botched FAIs are more risky than others.

But consider, as Ben Goertzel mentioned, that nobody is trying to build a random... (read more)

0Dmytry
Cruel physics, cruel physics. There is speed of light delay, that's thing, and I'm not maniacal about mandelbox (its a 3d fractal) anyway, I won't want to wipe out interesting stuff in the galaxy for minor gain in the resolution. And if i can circumvent speed of light, all bets are off WRT what kind of resources i would need (or if i would need any, maybe i get infinite computing power in finite space and time) How's about generating human brain (in crude emulation of developmental biology)? It's pretty darn random. My argument is that, the AI whose only goal is helping humans, if bugged, has the only goal that is messing with humans. The AI that just represents humans in a special way is not this scary, albeit still is, to some extent. Consider this seed AI: evolution. Comes up with mankind, that tries to talk with outside (god) without even knowing that outside exists, has endangered species list. Of course, if we are sufficiently resource bound, we are going to eat up all other forms of life, but we'd be resource bound because we are too stupid to find a way to go to space, and we clearly would rather not exerminate all other lifeforms. This example ought to entirely invalidate this notion that 'almost all' AIs in AI design space are going to eat you. We have 1 example: evolution going FOOM via evolving human brain, and it cares about wildlife somewhat, yes we do immense damage to environment, but we would not if we could avoid it , even at some expense. If you have 1 example probe into random AI space, and it's not all this bad, you seriously should not go around telling how you're extremely sure it is just blind luck et cetera.