I agree that something unusual is going on. Humans, unlike any other species I'm aware of, are voluntarily restricting our own population growth. But I don't know why you say that there's "no reason" to believe that this strange behavior might benefit us. Surely you can think of at least one reason? After all, all those other species that don't voluntarily limit their own reproduction eventually see their populations crash, or level off in the face of fierce competition over resources, when they meet or exceed their environment's carrying cap...
I downvoted common_law's post, because of some clear-cut math errors, which I pointed out. I'm downvoting your comment because it's not saying anything constructive.
There's nothing wrong with what common_law was trying to do here, which is to show that infinite sets shouldn't be part of our ontology. Experience can't be the sole arbiter of which model of reality is best; there is also parsimony. Whether infinite quantities are actually real is no less worthy of discussion than whether MWI is actually real, or merely a computational convenience. I on...
I agree with premise (1), that there is no reason to think of infinitesimal quantities as actually part of the universe. I don't agree with premise (2), that actual infinities imply actual infinitesimals. If you could convince me of (2), I would probably reject (1) rather than accept (3), since an argument for (2) would be a good argument against (1), given that our universe does seem to have actual infinities.
the points on a line are of infinitesimal dimension ... yet compose lines finite in extent.
No. Points have zero dimension. "Infinitesi...
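If it helps, the standard measure-theoretic way to cash this out, with no infinitesimals anywhere:

$$\lambda(\{x\}) = 0 \ \text{ for every point } x, \qquad \lambda([0,1]) = 1,$$

and additivity of length is only guaranteed over countably many disjoint pieces. An interval is an uncountable union of its points, so there is no step where you have to add up infinitely many zero-length (let alone "infinitesimal") pieces to get a finite length.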
Fascinating. But note that these are still very old people with declining cholesterol as they age. The study is more relevant to physicians deciding whether to prescribe statins to their elderly patients, and less relevant to young people deciding whether to keep cholesterol low throughout life with diet.
I'd need to read the whole study, but what I see so far doesn't even contradict the hypothesis I outlined. The abstract says that people who had low cholesterol at the last two examinations did worse than people who had low cholesterol at only the last ...
http://www.youtube.com/watch?v=xiNvQ-g1XGs&list=PLDBBB98ACA18EF67C&index=19
This (admittedly biased) youtuber has a pretty thorough criticism of the study. The bottom line is that cholesterol tends to drop off before death (6:26 in the video), not just because cholesterol-lowering medications are administered to those at highest risk of heart attack (as Kawoomba points out), but also because of other diseases. When you correct for this, or follow people throughout their lives, this reverse causation effect disappears, and you find exactly the asso...
It's a good theory and the priors for it being true are high, but the one study that should have been able to test it directly got the opposite result from what the theory would have predicted: patients with consistently low cholesterol over twenty years had a higher mortality rate than patients with sudden drops in cholesterol.
One study isn't enough to draw any conclusions, but it does prevent me from considering the issue completely solved despite the elegance of this explanation.
Agreed. The multiverse idea is older than, and independent of, quantum theory. Actually, a single infinitely large classical universe will do, since statistically, every possibility should play out. Nietzsche even had a version of immortality based on an infinitely old universe. Though it's not clear whether he ever meant it literally, he very well could have, because it was consistent with the scientific understanding of the time.
That said, I like the idea of sminux's post. I try to steer clear of quantum language myself, and think others should too, if all they mean by "quantum" is "random".
All the possible reasons for the conflict you listed suggest that the solution is to help feminists understand evolutionary psychology better, so they won't have a knee-jerk defensive reaction against it. This could come off as a little condescending, but more importantly, it misses the other side of the issue. In order to leave itself less open to criticism, evolutionary psychology could be more rigorous, just as other "soft" sciences like medicine and nutrition could be more rigorous. This would make it harder for critics to find things to o...
Actually I'm not sure if any of that is a problem. Spaun is quite literally "anthropomorphic" - modeled after a human brain. So it's not much of a stretch to say that it learns and understands the way a human does. I was just pointing out that the more progress we make on human-like AIs, without progress on brain scanning, the less likely a Hansonian singularity (dominated by ems of former humans) becomes. If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up. ...
I think we need to separate the concept of whole brain emulation, from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, where the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us, and slowly overtakes us.
CNRG_UWaterloo, regarding mind uploads:
...Being able to simulate a particular person's brain is incredibly far away. There aren't
From the article:
AIs learn slowly now mainly because they know so little.
This seems implausible, because humans learn almost everything we know over our lifetimes, from a starting instruction set smaller than existing pieces of software. If architecture really is overrated (meaning that existing architectures are already good enough), isn't it more likely that AIs learn so slowly now simply because the computers aren't powerful enough yet?
If Omega gets it right more than 99% of the time, then why would Alpha take 10-to-1 odds against Omega messing up?
Yeah, that was my impression. One of the things that's interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it's hard to tell when he's criticizing futurists for expecting certain new technologies, and when he's criticizing them for wanting those new technologies. When he says that he's going to take a cab driven by an immigrant, is he saying that robot cars won't arrive any time soon? Or that it wouldn't ma...
I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it's awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. ...
You might get a different perspective on the present when you reach your 50's, as I have. I used Amazon's book-previewing service to read parts of W. Patrick McCray's book, The Visioneers, and I realized that I could nearly have written that book myself because my life has intersected with the story he tells at several points. McCray focuses on Gerard K. O'Neill and Eric Drexler, and in my Amazon review I pointed out that after a generation, or nearly two in O'Neill's case, we can get the impression that their respective ideas don't work. No one has gott...
Oops, looks like I was wrong about what you meant (ignore the edit). But yes, if you give a stupid thing lots of power you should expect bad outcomes. A car directed with zero intelligence is not a car sitting still, but precisely what you said was dangerous: a car having its controls blindly fiddled with. But if you just run a stupid program on a computer, it will never acquire power in the first place. Most decisions are neutral, unless they just happen to be plugged into something that has already been optimized to have large physical effects (like ...
1) You'd need a way to even specify the set of "output" of any possible OP. This seems hard to me because many OPs do not have clear boundaries or enumerable output channels, like forest fires or natural selection or car factories.
How do you define an optimization process without defining its output? If you want to think of natural selection as a force that organizes matter into self-replicators, then compare the reproductive ability of an organism to the reproductive ability of a random clump of matter, to find out how much natural selection...
Good questions. I don't know the answers. But like you say, UDT especially is basically defined circularly - where the agent's decision is a function of itself. Making this coherent is still an unsolved problem. So I was wondering if we could get around some of the paradoxes by giving up on certainty.
Caveat: if someone is paralyzed because of damage to their brain, rather than to their peripheral nerves or muscles, then this is not true,
That's why I specified that you don't get penalized for disabilities that have nothing to do with the signals leaving your brain.
which creates an undesirable dependency of the measured optimization power on the location of the cause of the disability.
I disagree. I think that's kind of the point of defining "optimization power" as distinct from "power". A man in a prison cell isn't less ...
What I am saying is that I don't assume that I maximize expected utility. I take the five-and-ten problem as a proof that an agent cannot be certain that it will make the optimal choice, while it is choosing, because this leads to a contradiction. But this doesn't mean that I can't use the evidence that a choice would represent, while choosing. In this case, I can tell that U($10) > U($5) directly, so conditioning on A=$10 or A=$5 is redundant. The point is that it doesn't cause the algorithm to blow up, as long as I don't think my probability of maxim...
What I can't figure out is how to specify possible worldstates “in the absence of an OP”.
Can we just replace the optimizer's output with random noise? For example, if we have an AI running in a black box, that only acts on the rest of the universe through a 1-gigabit network connection, then we can assign a uniform probability distribution over every signal that could be transmitted over the connection over a given time (all 2^(10^9) possibilities per second), and the probability distribution of futures that yields is our distribution over worlds that ...
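For what it's worth, here's a toy sketch of how that baseline could be turned into a number (all the names here, like simulate_world and signal_bits, are hypothetical, and the whole thing assumes we had an oracle that can score the future produced by any given output signal): replace the optimizer's output with uniform random noise, estimate what fraction of those baseline futures score at least as well as the actual one, and take -log2 of that fraction as the bits of optimization exerted.

import math
import random

def optimization_power_bits(actual_utility, simulate_world, signal_bits=32, num_samples=10000):
    # Monte Carlo estimate: how rare is it for random noise on the output
    # channel to do at least as well as the optimizer actually did?
    at_least_as_good = 0
    for _ in range(num_samples):
        random_signal = random.getrandbits(signal_bits)  # "no optimizer" baseline
        if simulate_world(random_signal) >= actual_utility:
            at_least_as_good += 1
    fraction = max(at_least_as_good, 1) / num_samples  # avoid log(0) in this toy version
    return -math.log2(fraction)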
I get that. What I'm really wondering is how this extends to probabilistic reasoning. I can think of an obvious analog. If the algorithm assigns zero probability that it will choose $5, then when it explores the counterfactual hypothesis "I choose $5", it gets nonsense when it tries to condition on the hypothesis. That is, for all U,
P(U | A=$5) is undefined. But is there an analog for this problem under uncertainty, or was my sketch correct about how that would work out?
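To spell out why that conditioning breaks, this is just Bayes' rule applied to the zero-probability case:

$$P(U \mid A{=}\$5) \;=\; \frac{P(U \wedge A{=}\$5)}{P(A{=}\$5)} \;=\; \frac{0}{0},$$

which is undefined when P(A=$5) = 0, but perfectly well defined for any P(A=$5) = ε > 0.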
That's exactly the impression that I got. That it was awkward phrasing, because you just didn't know how to phrase it - but that it wasn't a coincidence that you defaulted to that particular awkward phrasing. It seems that, on some level, you were surprised to see people outside lesswrong discussing "lesswrong ideas." Even though, intellectually, you know that most of the good ideas on lesswrong didn't originate here. Don't be too hard on yourself. I probably have the opposite problem, where, as a meta-contrarian, I can't do anything but cr...
I don't understand how Uspensky's definition is different from Eliezer's. Is there some minimum number of people a proof has to convince? Does it have to convince everyone? If I'm the only person in the world, is writing a proof impossible, or trivial? It seems that both definitions are saying that a proof will be considered valid by those people who find it absolutely convincing. And those people who do not find it absolutely convincing will not consider it valid. More importantly, it seems that this is all those two definitions are saying, which is why neither of them is very helpful if we want something more concrete than the colloquial sense of proof.
I liked the fact that the author didn't use cognitive bias as an excuse to give up on talking about politics altogether (which seems to be LWian consensus), but instead made demonstrable claims about politics.
EDIT: in response to the previous version of Michaelos' post, I said:
It makes me uncomfortable when LWers say things like:
"Politics is the Mindkiller" appears to be acknowledged as early as the second sentence.
It smacks of, "Oh, look at the unenlightened people finally catching on." Lesswrong didn't invent cognitive science, a...
Can anyone point me toward work that's been done on the five-and-ten problem? Or does someone want to discuss it here? Specifically, I don't understand why it is a problem for probabilistic algorithms. I would reason:
There is a high probability that I prefer $10 to $5. Therefore I will decide to choose $5, with low probability.
And there's nowhere to go from there. If I try to use the fact that I chose $5 to prove that $5 was the better choice all along (because I'm rational), I get something like:
...The probability that I prefer $5 to $10 is low. B
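In case a concrete version helps the discussion, here is a toy sketch (made-up numbers, hypothetical function name) of the probabilistic reasoning I have in mind: the agent models itself as choosing the option it prefers except with some error rate eps, and then asks what conditioning on a hypothetical choice would do to its beliefs.

def posterior_prefers_5(prior_prefers_5, eps, hypothetical_choice):
    # P(I prefer $5 | I choose hypothetical_choice), by Bayes' rule,
    # under the self-model "I pick the option I prefer, except with probability eps".
    prior_prefers_10 = 1 - prior_prefers_5
    if hypothetical_choice == "$5":
        like_if_5, like_if_10 = 1 - eps, eps
    else:
        like_if_5, like_if_10 = eps, 1 - eps
    evidence = like_if_5 * prior_prefers_5 + like_if_10 * prior_prefers_10
    return like_if_5 * prior_prefers_5 / evidence

# With eps > 0, nothing blows up: conditioning on "I choose $5" raises
# P(I prefer $5) somewhat, but doesn't force it to 1.
print(posterior_prefers_5(prior_prefers_5=0.01, eps=0.05, hypothetical_choice="$5"))
# With eps = 0 and a prior of exactly 0 on preferring $5, the evidence term
# is 0 and the update is undefined - which is just the original problem again.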
Reading Less Wrong is consumptive, not productive. You need to have something to show for your work, ex. a novel draft, a fitter body, a cleaner house.
Isn't easy/hard the more useful distinction than consumptive/productive? After all, reading the news is productive in the sense of having something to show for it, because you will seem more informed in conversation. And working out can be a form of consumption, if you buy a gym membership.
Personally, I've always loved working out. So I don't have much to gain by trying to motivate myself to work out e...
little gray men emerging from airborn thingies is HUGE in itself.
Um, no. A short guy in a grey suit stepping off a helicopter is a little grey man emerging from an airborn thingy.
Or did you go through all previous sightings and come to that conclusion in every single case?
No. I don't see the point in digging through all the reports, when the reports I have heard about have been so underwhelming. I was skipping around, watching bits and pieces of the video you linked, until Manfred pointed this out:
...The geiger counter reading is reported as "10
Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either. Some UFOs may still be unexplained, but all that means is that we don't know enough about them to say what they are.
That said, I don't think you can rule out weather and human craft. Others have already explained why I find the "primary" evidence unconvincing.
This seems very speculative to me. I don't think we can use it as evidence for or against.
Let me put it this way. My guess ...
Yes, I was wrong. I was explaining why I got so focused on the blank-slate version of the prior.
Right. What I want to do is calculate the probability that a random conscious entity would find itself living in a world where someone satisfying the definition of Julius Caesar had existed. And then calculate the conditional probability given the evidence, which is everything I've ever observed about the world including the newly discovered account.
Obviously that's not what you do in real life, but the point remains that everything after the original prior (based on Kolmogorov complexity or something) is just conditioning. If we're going to talk about how and why we should formulate priors, rather than what Bayes' rule says, this is what we're interested in.
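(For concreteness, the usual way to cash out that original prior is a Solomonoff-style universal prior, under which a hypothesis H gets prior probability roughly

$$P(H) \propto 2^{-K(H)},$$

where K(H) is the Kolmogorov complexity of H, i.e. the length of the shortest program that specifies it. Everything after that is conditioning on observations.)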
I think you may be confused by an oversimplification of Occam's Razor: "Extraordinary claims require extraordinary evidence." That's not actually how you derive a prior - the very word "extraordinary" implies that you already have experience about what is ordinary and what isn't. If we really throw out all evidence that could tell us how likely aliens are, we end up with a probability which (by the usual method of generating priors), depends on the information-theoretic complexity of the statement "There are aliens on earth."...
Wait, what? Bayesians never assign 0 probability to anything, because it means the probability will always remain 0 regardless of future updates. And "prior probability", by definition, means that we throw out all previous evidence.
I endorse this idea, but have a minor nitpick:
In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that.
This certainly gets proposed a lot. But isn't it lesswrongian consensus that this is backwards? That the only way to build a FAI is to build an AI that will extrapolate and adopt the humane utility function on its own? (since human values are too complicated for mere humans to state explicitly).
How does trivialism differ from assuming the existence of a Tegmark IV universe?
Tegmark IV is the space of all computable mathematical structures. You can make true and false statements about this space, and there is nothing about it that implies a contradiction. You may think that any coherent empirical claim is true in Tegmark IV, in that anything we say about the world is true of some world. But being true in some world does not make it true in this world. If I say that the sky is green, I am implicitly referring to the sky that I experience, whi...
Isn't temporal inconsistency just selfishness? That is, before you know whether the coin came up heads or tails, you care about both possible futures. But after you find out that you're in the tails' universe you stop caring about the heads' universe, because you're selfish. UDT acts differently, because it is selfless, in that it keeps the same importance weights over all conceivable worlds.
It makes perfect sense to me that a rational agent would want to restrict the choices of its future self. I want to make sure that future-me doesn't run off and do his own thing, screwing current-me over.
This question is for anyone who says they saw a benefit from supplementation, not just Kevin.
What was your diet like at the time? Were you taking a daily multivitamin?
I also think that I am conscious, but you keep telling me I have the wrong definitions of words like this, so I don't know if we agree. I would say being conscious means that some part of my brain is collating data about my mental states, such that I could report accurately on my mental states in a coherent manner.
How do I know whether I am having a conscious subjective experience of a sensation or emotion?
Okay, I've tabooed my words. Now it's your turn. What do you mean by "feeling"?
You're right, we're starting to go around in circles. So we should wrap this up. I'll just address what seems to be the main point.
I find it obvious that there is a huge, important aspect of what it is to be in pain that [your definition] completely misses.
This is the crux of our disagreement, and is unlikely to change. But you still seem to misunderstand me slightly, so maybe we can still make progress.
...You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded
I thought you were denying "pains hurt"
Not at all. I'm denying that there is anything left over to know about pain (or hurting) after you understand what pain does. As my psych prof. pointed out, you often see weird circular definitions of pain in common usage, like "pain is an unpleasant sensation". Whereas psychologists use functional definitions, like "a stimulus is painful iff animals try to avoid it". I believe that the latter definition of pain is valid (if simplistic), and that the former is not.
...If you think y
My pains hurt. My food tastes. Voices and music sound like something.
Um, those are all tautologies, so I'm not sure how to respond. If we define "qualia" as "what it feels like to have a feeling", then, well - that's just a feeling, right? And "qualia" is just a redundant and pretentious word, whose only intelligible purpose is to make a mystery of something that is relatively well understood (e.g., the "hard problem of consciousness"). No?
Erm, sorry for the snark, but seriously: has talk of qualia, as distinct ...
I don't want to experience pain even in ways that promote my goals
Don't you mean that avoiding pain is one of your goals?
It would have been helpful to say why you reject it.
It just seems like the default position. Can you give me a reason to take the idea of qualia seriously in the first place?
would you maintain that personally experiencing pain for the first time would teach you nothing?
Yes.
Only in their rhetoric, which is at most weakly correlated with their actual policy decisions.
Yes, but in this case, the rhetoric matters. I believe this was Stuart's point. If we want to raise the "sanity waterline", then, all else being equal, saner political dialog is a good thing. Right?
I don't think a program has to be very sophisticated to feel pain. But it does have to exhibit some kind of learning. For example:
import random

def wanderer(locations, utility, X):
    current_time = 0
    while True:
        # Pick two locations at random and move to whichever currently looks better.
        l1, l2 = random.sample(locations, 2)
        if utility[l1] < utility[l2]:
            my_location = l2
        else:
            my_location = l1
        # If X "hurts" the program at this location and time, mark the location
        # as less desirable, so it tends to be avoided from then on.
        if X(my_location, current_time):
            utility[my_location] = utility[my_location] - 1
        current_time = current_time + 1
This program aimlessly wanders over a space of locations, but eventually tends to avoid locations where X has return...
Probably some of them do (I don't play video games). But they aren't even close to being people, so I don't really care.
I think that split-brain study shows the opposite of what you think it shows. If you observed yourself to be writhing around in agony, then you would conclude that you were experiencing the qualia of pain. Try to imagine what this would actually be like, and think carefully about what "trying to avoid similar circumstances in the future" actually means. You can't sit still, can't think about anything else. You plead with anyone around to help you - put a stop to whatever is causing this - insisting that they should sympathize with you. The m...
If an actor stays in character his entire life, making friends and holding down a job, in character - and if, whenever he seemed to zone out, you could interrupt him at any time to ask what he was thinking about, and he could give a detailed description of the day dream he was having, in character...
Well then I'd say the character is a lot less fictional than the actor. But even if there is an actor - an entirely different person putting on a show - the character is still a real person. This is no different from saying that a person is still a person, even if they're a brain emulation running on a computer. In this case, the actor is the substrate on which the character is running.
As far as I know, to feel is to detect, or perceive, and pain is positive punishment, in the jargon of operant conditioning. So to say "I feel pain" is to say that I detect a stimulus, and process the information in such a way that (all else equal) I will try to avoid similar circumstances in the future. Not being a psychologist, I don't know much more about pain. But (not being a psychologist) I don't need to know more about pain. And I reject the notion that we can, through introspection, know something more about what it "is like"...
When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about. So, sure: I don't feel pain in that sense. That's not going to stop me from complaining about having my hand chopped off!
If it's weekly, I would definitely be able to make it every other week.