Although I appreciate the parallel, and am skeptical of both, the mental paths that lead to those somewhat related ideas are seriously dissimilar.
I have a question, but I try to be careful about the virtue of silence. So I'll try to ask my question as a link:
http://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation
Also, these ideas are still weird enough to override even his level of status, as I think the comments here show:
Could you expand on this?
...there are reasons why a capitalist economy works and a command economy doesn't. These reasons are relevant to evaluating whether a basic income is a good idea.
Sorry, "fine" was way stronger than what I actually think. It just makes it better than the (possibly straw) alternative I mentioned.
No. Thanks for making me notice how relevant that could be.
I see that I haven't even thought through the basics of the problem. "Power over" is felt whenever scarcity leads the wealthier to take precedence. Okay, so to try to generalise a little: I've never really been hit by that kind of scarcity, because my desires are (for one reason or another) adjusted to my means.
I could be a lot wealthier yet have cravings I can't afford, or be poorer and still content. But if what I wanted kept hitting a wealth ceiling (a specific type, one due to scarcity, such that increasing my wealth and everyone else's in proportion wouldn't help), I'd start caring about relative wealth really fast.
I see it as a question of preference, so I know from never having felt envy, etc., toward someone richer than me just for being richer. I'm only interested in my wealth relative to what I need or want to purchase.
As noted in the comment thread I linked, I could start caring if someone's relative wealth gave them power over me, but I haven't been in that situation so far (things like boarding priority for first-class tickets are a minor example I have experienced, but they have never bothered me).
Responding to a point about the rise of absolute wealth since 1916, this article makes (not very well) a point about the importance of relative wealth.
Comparing folks of different economic strata across the ages ignores a simple fact: Wealth is relative to your peers, both in time and geography.
I've had a short discussion about this earlier, and find it very interesting.
In particular, I sincerely do not care about my relative wealth. I used to think that was universal, then found out I was wrong. But is it typical? To me it has profound implications a...
...people have already set up their fallback arguments once the soldier of '...' has been knocked down.
Is this really good phrasing, or did you manage to think that way naturally? If you do it automatically, I would like to learn to do it too.
It often takes me a long time to recognize an argument war. Until that moment, I'm confused as to how anyone could be unfazed by new information X w.r.t. some topic. How do you detect you're not having a discussion but are walking on a battlefield?
Is this really good phrasing
Yes, I was referring to Eliezer's essay there. I liked my little flourish, so I'm glad someone noticed.
How do you detect you're not having a discussion but are walking on a battlefield?
In this case it's easy when you look over all the comments on HN and elsewhere. It's like when Yvain is simultaneously accused of being racist Neo-reactionary scum and a Marxist SJW beta-cuckold Jew scum - it's difficult to see how both sets of accusations could be right simultaneously, so clearly at least one set of accusers are unhi...
I think practitioners of ML should be more wary of their tools. I'm not saying ML is a fast track to strong AI, just that we don't know if it is. Several ML people have voiced reassurances recently, but I would have expected them to do that even if it were possible to detect danger at this point. So I think someone should find a way to make the field more careful.
I don't think that someone should be MIRI though; status differences are too high, they are not insiders, etc. My best bet would be a prominent ML researcher starting to speak up and giving detailed, plausible hypotheticals in public (I mean near-future hypotheticals where some error creates a lot of trouble for everyone).
I meant it in the sense you understood first. I don't know what to make of the other interpretation. If a concept is well-defined, the question "Does X match the concept?" is clear. Of course it may be hard to answer.
But suppose you only have a vague understanding of ancestry. Actually, you've only recently coined the word "ancestor" to point at some blob of thought in your head. You think there's a useful idea there, but the best you can do for now is: "someone who relates to me in a way similar to how my dad and my grandmother relat...
As an instance of the limits of replacing words with their definitions to clarify debates, this looks like an important conversation.
The fuzziest starting point for "consciousness" is "something similar to what I experience when I consider my own mind". But this doesn't help much. Someone can still claim "So rocks probably have consciousness!", and another can respond "Certainly not, but brains grown in labs likely do!". Arguing from physical similarity, etc. just relies on the other person sharing your intuitions.
F...
Straussian thinking seems like a deep well full of status moves!
You probably already agreed with "Ghosts in the Machine" before reading it, since obviously a program executes exactly its code, even in the context of AI. Also obviously, the program can still appear not to do what it's supposed to, if "supposed" is taken to mean the programmer's intent.
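As a toy illustration of that gap (my own example, not from the essay), here is a program that does exactly what its code says while still diverging from what its author meant:

```python
# Toy illustration (not from "Ghosts in the Machine"): the program executes
# exactly its code, yet not the programmer's intent.

def average(scores):
    # Intended: the arithmetic mean. Actual: integer division silently
    # truncates, so average([1, 2]) is 1, not 1.5.
    return sum(scores) // len(scores)

print(average([1, 2]))  # -> 1: exactly as coded, not as "supposed"
```

Nothing ghostly happened; the machine faithfully ran the code, and the mismatch lives entirely in the programmer's head.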
These statements don't ignore machine learning; they imply that we should not try to build an FAI using current machine learning techniques. You're right, we understand (program + parameters learned from dataset) even less than (program). So while the outs...
Congratulations!
I have met people who explicitly say they prefer a smaller gap between themselves and the better-off over a higher absolute level for themselves. IIRC they were more concerned about 'fairness' than about what the powerful might do to them. They also believed that most people would agree with them (I believe the opposite).
Gentzen’s Cut Elimination Theorem for Non-Logicians
Knowledge and Value, Tulane Studies in Philosophy Volume 21, 1972, pp 115-126
Being in a situation somewhat similar to yours, I've been worrying that my lowered expectations about others' level of agency (along with elevated expectations as to what constitutes a "good" level of agency) have an influence on those I interact with: if I assume that people are somewhat influenced by what others expect of them, I must conclude that I should behave (as far as they can see) as if I believed them to be as capable of agency as myself, so that their actual level of agency will improve. This would work on me, for instance: I'd be more prone to take initiative if I saw trust in my peers' eyes.
There is an animated series for children aimed at explaining the human body which personifies bacteria, viruses, etc. Anyone interested in pursuing your idea may want to pick up techniques from the show:
Wikipedia article: http://en.wikipedia.org/wiki/Once_Upon_a_Time..._Life
So MoR might be a meta-fantasy of the wizarding world in the way The Sword of Good is a meta-fantasy of the muggle world. Or at least, MoR!Harry might make the same impression on a wizard reading one fic as Hirou does on a muggle reading the other.
Although my instinct is still that Harry fails at the end.
Or Harry transfigured Hermione's body into a rock and then the rock into a brown diamond. Unless the story explicitly disallows double transfigurations and I missed it.
I'll be there as well.
Sounds right, but the present-day situation is the same: orbs may float to you if and only if you enter the Hall. So Dumbledore should know whether he is involved in the prophecy or not. Unless I missed something?
... a great room of shelves filled with glowing orbs, one after another appearing over the years. (...) Those mentioned within a prophecy would have a glowing orb float to their hand, and then hear the prophet's true voice speaking.
I interpret it as: Anyone who enters this room sees a glowing orb float to their hand for every prophecy that mentions them. How do you interpret it?
"Those who are spoken of in a prophecy, may listen to that prophecy there. Do you see the implication, Harry?"
Shouldn't Minerva see another implication, that Dumbledore has no reason to wonder whether he is the dark lord of the prophecy?
Same here.
Thank you for the link! Note that the .pdf version of the article (which is also referenced in dbaupp's link) has a record of the "hostile-wife" cases over a span of 8 years.
Women don't like cryonics.
What made you believe this? Is there a pattern to the declared reasons?
You can look at cryonic signup rates by gender, and there's also the article that advancedatheist linked. I'll add that in my anecdotal experience, women seem more likely to dismiss the idea when I bring it up in casual conversation.
For myself, personally, I don't like cryonics because I think the research largely points to it being non-viable. Of the three other women I can remember speaking to recently, two had the same objection, and the last one's issue is that they live in Australia, so the difficulty of getting cryopreserved soon after death is ridiculously high (they view it as plausible but unlikely).
The fictional college in the article selects incoming students on price alone.
I have often wondered if anyone has tried to save their acceptance letters from colleges they couldn't afford to go to and show them to employers. Why doesn't this work?
I had the exact same argument with my girlfriend (a bad idea) a while ago and asked for references to point her to on the IRC channel. I was given The Simple Truth and The Relativity of Wrong.
So I was about to write a very supportive response when I saw Mitchell Porter's comment. And this
(...) the children of post-Christian agnostics grow up to be ideologically aggressive posthuman rationalists.
aptly describes recent interactions I've had with my father¹. The accusation of narrowmindedness was present.
So, recurring conflicts with friends and family bec...
I see your point. As an author I would think I'm misdirecting my readers by doing that though; "Voldemort has the same deformity as in canon? He's been playing with Horcruxes!" is the reasoning I would expect from them. Which is why I would, say, remove Quirrell's turban as soon as my plot had Voldemort not on the back of Quirrell's head.
The soul-mangling is what causes Voldemort's snake-like appearance, IIRC, and MoR!McGonagall remembers a snake-like Voldemort from her battles. So either MoR!Voldemort has been doing some serious damage to his soul, or he decided to look freakish just for effect and stumbled by chance upon the exact same look which canon!Voldemort got from making Horcruxes.
As an anecdote, I had a slight tendency in the opposite direction, going for what seemed like the worst answer, and I had to switch answers twice because of it.
I understood the introductory question as "Frodo Baggins from the Lord of the Rings is buying pants. Which of these is he most likely to buy?", and correctly answered (c). I suggest rephrasing your question to ensure that it actually tests the reader's fictional bias. Also, Szalinski in Journal of Cognitive Minification is a nice one.
Unless its utility function has a maximum, we are at risk. Observing Mandelbrot fractals is probably enhanced by having all the atoms of a galaxy playing the role of pixels.
Would you agree that unless the utility function of a random AI has a (rather low) maximum, and barring the discovery of infinite matter/energy sources, its immediate neighbourhood is likely to get repurposed?
I must say that at least I finally understand why you think botched FAIs are more risky than others.
But consider, as Ben Goertzel mentioned, that nobody is trying to build a random...
I am too.
An intuition is that red-black trees encode 2-3-4 trees (B-trees of order 4) as binary trees.
For a simpler case, 2-3 trees (i.e. B-trees of order 3) are either empty, a (2-)node with 1 value and 2 subtrees, or a (3-)node with 2 values and 3 subtrees. The idea is to insert new values in their sorted position, expand 2-nodes to 3-nodes if necessary, and bubble up the extra value when a 3-node would need to be expanded. This keeps the tree balanced.
A 2-3-4 tree just generalises the above.
Now the intuition is that red means "I am part of a bigger node." Tha...
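To make that "red means I am part of a bigger node" intuition concrete, here is a minimal Python sketch of my own (not part of the truncated comment above, and the names `RBNode` and `to_234` are hypothetical): it walks a red-black tree and collapses each black node together with its red children into the 2-, 3-, or 4-node it encodes, assuming the usual red-black invariants hold.

```python
# Sketch: recover the 2-3-4 tree encoded by a red-black tree by merging each
# black node with its red children into one multi-key node.

class RBNode:
    def __init__(self, key, color, left=None, right=None):
        self.key = key
        self.color = color          # 'R' or 'B'
        self.left = left
        self.right = right

def to_234(node):
    """Return the 2-3-4 node encoded by the subtree rooted at a black node.

    The result is a pair (keys, children): 1 to 3 sorted keys and
    len(keys) + 1 subtrees (None for an empty subtree).
    """
    if node is None:
        return None
    keys, children = [], []

    def absorb(n):
        # A red child belongs to the same 2-3-4 node as its black parent,
        # so its key is merged in and its (black) children become subtrees.
        if n is not None and n.color == 'R':
            absorb(n.left)
            keys.append(n.key)
            absorb(n.right)
        else:
            children.append(to_234(n))

    absorb(node.left)
    keys.append(node.key)
    absorb(node.right)
    return keys, children

# Example: a black 5 with red children 3 and 8 encodes the 4-node [3, 5, 8].
tree = RBNode(5, 'B', RBNode(3, 'R'), RBNode(8, 'R'))
print(to_234(tree))   # ([3, 5, 8], [None, None, None, None])
```

Under this view, the red-black balancing rules (no red node with a red parent, equal black height on every path) are just the 2-3-4 rules (nodes hold at most 3 keys, all leaves at the same depth) written in binary-tree form.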