All of aaronde's Comments + Replies

aaronde20

I would definitely make it every other week, if it's weekly.

aaronde70

I agree that something unusual is going on. Humans, unlike any other species I'm aware of, are voluntarily restricting our own population growth. But I don't know why you say that there's "no reason" to believe that this strange behavior might benefit us. Surely you can think of at least one reason? After all, all those other species that don't voluntarily limit their own reproduction eventually see their populations crash, or level off in the face of fierce competition over resources, when they meet or exceed their environment's carrying cap... (read more)

aaronde50

I downvoted common_law's post, because of some clear-cut math errors, which I pointed out. I'm downvoting your comment because it's not saying anything constructive.

There's nothing wrong with what common_law was trying to do here, which is to show that infinite sets shouldn't be part of our ontology. Experience can't be the sole arbiter of which model of reality is best; there is also parsimony. Whether infinite quantities are actually real is no less worthy of discussion than whether MWI is actually real, or merely a computational convenience. I on... (read more)

2prase
There are arguments which are wrong because they lack rigor, but in my opinion this isn't one of them. The main problem is asking a question about "actual existence" of abstract objects without a clear understanding of what such "actual existence" would represent. I can imagine a rigorous version of this post where "actual existence" was given a rigorous definition, but I doubt it would convince me of anything (as I remain unconvinced that e.g. modal logic is a useful epistemological tool although it can be formalised). Note that (one of) the apparent motivation(s) of all recent anti-infinity posts is rejection of the many-worlds interpretation of QM, i.e. it is unlikely that the author is aiming at constructing a neat rigorous theory.
aaronde60

I agree with premise (1), that there is no reason to think of infinitesimal quantities as actually part of the universe. I don't agree with premise (2), that actual infinities imply actual infinitesimals. If you could convince me of (2), I would probably reject (1) rather than accept (3), since an argument for (2) would be a good argument against (1), given that our universe does seem to have actual infinities.

the points on a line are of infinitesimal dimension ... yet compose lines finite in extent.

No. Points have zero dimension. "Infinitesi... (read more)

aaronde40

Fascinating. But note that these are still very old people with declining cholesterol as they age. The study is more relevant to physicians deciding whether to prescribe statins to their elderly patients, and less relevant to young people deciding whether to keep cholesterol low throughout life with diet.

I'd need to read the whole study, but what I see so far doesn't even contradict the hypothesis I outlined. The abstract says that people who had low cholesterol at the last two examinations did worse than people who had low cholesterol at only the last ... (read more)

aaronde80

http://www.youtube.com/watch?v=xiNvQ-g1XGs&list=PLDBBB98ACA18EF67C&index=19

This (admittedly biased) youtuber has a pretty thorough criticism of the study. The bottom line is that cholesterol tends to drop off before death (6:26 in the video), not just because cholesterol-lowering medications are administered to those at highest risk of heart attack (as Kawoomba points out), but also because of other diseases. When you correct for this, or follow people throughout their lives, this reverse causation effect disappears, and you find exactly the asso... (read more)

It's a good theory and the priors for it being true are high, but the one study that should have been able to test it directly got the opposite of the results the theory would have predicted: patients with consistently low cholesterol over twenty years had a higher mortality rate than patients with sudden drops in cholesterol.

One study isn't enough to draw any conclusions, but it does prevent me from considering the issue completely solved despite the elegance of this explanation.

aaronde00

Agreed. The multiverse idea is older than, and independent of, quantum theory. Actually, a single infinitely large classical universe will do, since statistically, every possibility should play out. Nietzsche even had a version of immortality based on an infinitely old universe. Though it's not clear whether he ever meant it literally, he very well could have, because it was consistent with the scientific understanding of the time.

That said, I like the idea of sminux's post. I try to steer clear of quantum language myself, and think others should too, if all they mean by "quantum" is "random".

aaronde50

All the possible reasons for the conflict you listed suggest that the solution is to help feminists understand evolutionary psychology better, so they won't have a knee-jerk defensive reaction against it. This could come off as a little condescending, but more importantly, it misses the other side of the issue. In order to leave itself less open to criticism, evolutionary psychology could be more rigorous, just as other "soft" sciences like medicine and nutrition could be more rigorous. This would make it harder for critics to find things to o... (read more)

7ChristianKl
Maybe it would also help if the evolutionary psychology folks understood feminism better, so they could communicate in a way that reduces conflict. As we are on LessWrong, it would make more sense to focus here on evolutionary psychology folks understanding feminism than the other way around.
2diegocaleiro
Yes, also disentangling social scientists' notion of what used to be called evolutionism in the social sciences back in the 1920s-1940s, and what was once sociobiology applied to humans, from the actual evolutionary psychology of our time.
aaronde40

Actually I'm not sure if any of that is a problem. Spaun is quite literally "anthropomorphic" - modeled after a human brain. So it's not much of a stretch to say that it learns and understands the way a human does. I was just pointing out that the more progress we make on human-like AIs, without progress on brain scanning, the less likely a Hansonian singularity (dominated by ems of former humans) becomes. If Spaun as it is now really does work "just like a human", then building a human-level AI is just a matter of speeding it up. ... (read more)

3Wei Dai
As I explained in this comment, Spaun can only perform tasks that are specifically and manually programmed into it. It is very, very far from working just like a human. It's definitely incapable of learning new skills or concepts, for example. What the original article said was: Well gosh, my desktop computer can also shift from task to task, just like the human brain, mining bitcoins one moment and decoding MPEGs the next. This is either PR or (perhaps unintentional) hype by the reporter, saying something that is literally true but gives the impression of much greater accomplishment. (Which isn't to say that Spaun might not continue with or inspire further more interesting developments, but a lot of people seem to be overly impressed with it in its current state.)
aaronde60

I think we need to separate the concept of whole brain emulation, from that of biology-inspired human-like AI. This actually looks pretty bad for Robin Hanson's singularity hypothesis, where the first emulations to perfectly emulate existing humans suddenly make the cost of labor drop dramatically. If this research pans out, then we could have a "soft takeoff", where AI slowly catches up to us, and slowly overtakes us.

CNRG_UWaterloo, regarding mind uploads:

Being able to simulate a particular person's brain is incredibly far away. There aren't

... (read more)
8CarlShulman
Robin's economic model for growth explosion with AI uses a continuum of tasks for automation. The idea is that as you automate more tasks, those tasks are done very efficiently, but the remaining ones become bottlenecks, making up more of GDP and limiting growth. Think Baumol's cost disease: as our manufacturing productivity has increased, economic growth winds up more limited by productivity improvements in service sectors like health care and education, computer programming, and science that have been resistant to automation. As you eliminate the last bottlenecks where human labor is important, you can speed the whole process of growth up to computer-scales, rather than being held back by the human weakest link. One can make an analogy to Amdahl's law: if you can parallelize 50% of a problem you can double your speed, at 90% you can get a 10x speedup, at 99% a 100x speedup, and as you approach 100% you can rush up to other limits. Similarly, smooth human progress in automating elements of AI development (which then proceed very quickly, bottlenecked by the human-limited elements) could produce stark speedup as automation approaches 100%. However, such developments do make a world dominated by whole brain emulations rather than neuromorphic AI less likely, even if they still allow an intelligence explosion.
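For reference, here is a quick check of those speedup numbers under Amdahl's law, treating the automated fraction as infinitely fast; this little sketch is mine, not CarlShulman's:

import math

def max_speedup(automated_fraction):
    # Amdahl's law in the limit where the automated part costs nothing:
    # total time is dominated by the remaining human-bottlenecked fraction.
    return 1.0 / (1.0 - automated_fraction)

print(max_speedup(0.50))  # 2.0
print(max_speedup(0.90))  # ~10x
print(max_speedup(0.99))  # ~100x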
3[anonymous]
This seems completely true. Part of the problem is that the media hype surrounding this stuff drops lines like this: Basically: to explain this stuff to normal readers, writers anthropomorphize the hell out of the project and you end up with words like 'intuition' and 'understanding' and 'learn' and 'remember' - which make the articles both sexier and way more misleading. The same thing happened with IBM's project and, to my understanding, the Blue Brain Project as well.
aaronde00

From the article:

AIs learn slowly now mainly because they know so little.

This seems implausible, because humans learn almost everything we know over our lifetimes, from a starting instruction set smaller than existing pieces of software. If architecture really is overrated (meaning that existing architectures are already good enough), isn't it more likely that AIs learn so slowly now simply because the computers aren't powerful enough yet?

aaronde00

If Omega gets it right more than 99% of the time, then why would Alpha take 10-to-1 odds against Omega messing up?

0Kindly
Would the problem be different if we changed the Alpha-Omega betting odds so that if Alpha wins 1 in 10000 times (which is still plausible given the data) then Alpha would make a profit?
0Douglas_Reay
It is a convenient way of representing reputation gain. Think of Alpha as being the AI society at large, or a reporter detailed to watch and report on Omega, on behalf of the AI society. So they're not actually doing an explicit bet. Rather it is Omega wanting to improve Omega's reputation at human-prediction within the AI society by doing the experiment. The more Omega does the experiment and makes correct predictions, the higher the others in AI society will rate Omega's expertise. Presumably it is valuable to Omega in some way to have a high reputation for doing this; maybe he wants to sell an algorithm or database, maybe he wants his advice about a human-related problem taken seriously without disclosing his actual source code and evidence - the reason isn't relevant. From Fred's perspective, when it comes to understanding and planning for Omega's actions, the dynamics of the situation are close enough that, to a first approximation, it is as though Omega had made a bet with some Alpha. What the odds of that bet are is a variable input to the problem, which I shall talk about in part 3, but note for now that this isn't an arbitrage situation where someone else can step in and improve Omega's reputation for him in order to chisel a corner off the odds. The odds for Omega, from the perspective of Omega's knowledge of the situation, will be in favour of improving Omega's net reputation, else he wouldn't be running the experiment.
aaronde80

Yeah, that was my impression. One of the things that's interesting about the article is that many of the technologies Taleb disparages already exist. He lists space colonies and flying motorcycles right alongside mundane tennis shoes and video chat. So it's hard to tell when he's criticizing futurists for expecting certain new technologies, and when he's criticizing them for wanting those new technologies. When he says that he's going to take a cab driven by an immigrant, is he saying that robot cars won't arrive any time soon? Or that it wouldn't ma... (read more)

aaronde190

I think the real difference between people like Taleb and the techno-optimists is that we think the present is cool. He brags about going to dinner in minimalist shoes, and eating food cooked over a fire, whereas I think it's awesome that I can heat things up instantly in a microwave oven, and do just about anything in meticulously engineered and perfectly fitted, yet cheaply mass-produced, running shoes without worrying about damaging my feet. I also like keyboards, and access to the accumulated knowledge of humanity from anywhere, and contact lenses. ... (read more)

You might get a different perspective on the present when you reach your 50's, as I have. I used Amazon's book-previewing service to read parts of W. Patrick McCray's book, The Visioneers, and I realized that I could nearly have written that book myself because my life has intersected with the story he tells at several points. McCray focuses on Gerard K. O'Neill and Eric Drexler, and in my Amazon review I pointed out that after a generation, or nearly two in O'Neill's case, we can get the impression that their respective ideas don't work. No one has gott... (read more)

aaronde20

Oops, looks like I was wrong about what you meant (ignore the edit). But yes, if you give a stupid thing lots of power you should expect bad outcomes. A car directed with zero intelligence is not a car sitting still, but precisely what you said was dangerous: a car having its controls blindly fiddled with. But if you just run a stupid program on a computer, it will never acquire power in the first place. Most decisions are neutral, unless they just happen to be plugged into something that has already been optimized to have large physical effects (like ... (read more)

aaronde10

1) You'd need a way to even specify the set of "output" of any possible OP. This seems hard to me because many OPs do not have clear boundaries or enumerable output channels, like forest fires or natural selection or car factories.

How do you define an optimization process without defining its output? If you want to think of natural selection as a force that organizes matter into self-replicators, then compare the reproductive ability of an organism to the reproductive ability of a random clump of matter, to find out how much natural selection... (read more)

2Alex_Altair
Forest fires are definitely OPs under my intuitive concept. They consistently select a subset of possible futures (burnt forests). They're probably something like chemical energy minimizers; if I were to measure their efficacy, it would be something like the number of carbon-based molecules turned into CO2. But the only reason we can come up with semi-formal measures like CO2 molecules or output on wires is because we're smart human-things. I want to figure out how to algorithmically measure it. Yes. But what does "could" mean? It doesn't mean that they all have equal probability. If literally all you know is that there are n outputs, then giving them 1/n weight is correct. But we usually know more, like the fact that it's an AI, and it's unclear how to update on this. Absolutely. Like how random outputs of a car cause it to jerk around and hit things, whereas a zero-capability car just sits there. Also, we're averaging over all possible outputs with equal weights. Even if most outputs are neutral or harmless, there are usually more damaging outputs than good ones. It's generally easier to harm than to help. The more powerful actuators the AI has, the more damage random outputs will do. Thanks for all your comments!
aaronde00

Good questions. I don't know the answers. But like you say, UDT especially is basically defined circularly - where the agent's decision is a function of itself. Making this coherent is still an unsolved problem. So I was wondering if we could get around some of the paradoxes by giving up on certainty.

aaronde10

Caveat: if someone is paralyzed because of damage to their brain, rather than to their peripheral nerves or muscles, then this is not true,

That's why I specified that you don't get penalized for disabilities that have nothing to do with the signals leaving your brain.

which creates an undesirable dependency of the measured optimization power on the location of the cause of the disability.

I disagree. I think that's kind of the point of defining "optimization power" as distinct from "power". A man in a prison cell isn't less ... (read more)

aaronde00

What I am saying is that I don't assume that I maximize expected utility. I take the five-and-ten problem as a proof that an agent cannot be certain that it will make the optimal choice, while it is choosing, because this leads to a contradiction. But this doesn't mean that I can't use the evidence that a choice would represent, while choosing. In this case, I can tell that U($10) > U($5) directly, so conditioning on A=$10 or A=$5 is redundant. The point is that it doesn't cause the algorithm to blow up, as long as I don't think my probability of maxim... (read more)

1Vaniver
Well, then why even update? (Or, more specifically, why assume that this is harmless normally, but an ace up your sleeve for a particular class of problems? You need to be able to reliably distinguish when this helps you and when this hurts you from the inside, which seems difficult.) I'm not sure that I understand this; I'm under the impression that many TDT applications require that they be able to simulate themselves (and other TDT reasoners) this way.
aaronde100

What I can't figure out is how to specify possible worldstates “in the absence of an OP”.

Can we just replace the optimizer's output with random noise? For example, if we have an AI running in a black box, that only acts on the rest of the universe through a 1-gigabit network connection, then we can assign a uniform probability distribution over every signal that could be transmitted over the connection over a given time (all 2^(10^9) possibilities per second), and the probability distribution of futures that yields is our distribution over worlds that ... (read more)
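A minimal sketch of that baseline in code (world_from_signal and utility here are hypothetical stand-ins of mine; in reality we obviously can't simulate the futures that follow from each possible signal):

import random

def baseline_expected_utility(n_bits, world_from_signal, utility, samples=1000):
    # Average utility over futures produced by uniformly random signals on the
    # optimizer's output channel -- the "no optimizer, just noise" distribution.
    total = 0.0
    for _ in range(samples):
        signal = random.getrandbits(n_bits)
        total += utility(world_from_signal(signal))
    return total / samples

def optimization_power(actual_signal, n_bits, world_from_signal, utility):
    # How much better the optimizer's actual output does than random noise.
    return utility(world_from_signal(actual_signal)) - baseline_expected_utility(
        n_bits, world_from_signal, utility)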

3Alex_Altair
We considered random output as a baseline. It doesn't seem correct, to me. 1) You'd need a way to even specify the set of "output" of any possible OP. This seems hard to me because many OPs do not have clear boundaries or enumerable output channels, like forest fires or natural selection or car factories. 2) This is equal to a flat prior over your OPs outputs. You need some kind of specification for what possibilities are equally likely, and a justification thereof. 3) Even if we consider an AGI with well-defined output channels, it seems to me that random outputs are potentially very very very destructive, and therefore not the "default" or "status quo" against which we should measure. I think the idea should be explored more, though.
2AlexMennen
Caveat: if someone is paralyzed because of damage to their brain, rather than to their peripheral nerves or muscles, then this is not true, which creates an undesirable dependency of the measured optimization power on the location of the cause of the disability. Despite this drawback, I like this formalization. No, that clearly makes no sense if EU[av] <= 0. If you want to divide by something to normalize the measured optimization power (so that multiplying the utility function by a constant doesn't change the optimization power), the standard deviation of the expected utilities of the counterfactual probability distributions over world states associated with each of the agent's options would be a better choice.
aaronde20

I get that. What I'm really wondering is how this extends to probabilistic reasoning. I can think of an obvious analog. If the algorithm assigns zero probability that it will choose $5, then when it explores the counterfactual hypothesis "I choose $5", it gets nonsense when it tries to condition on the hypothesis. That is, for all U,

  • P(utility=U | action=$5) = P(utility=U and action=$5) / P(action=$5) = 0/0

is undefined. But is there an analog for this problem under uncertainty, or was my sketch correct about how that would work out?
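A minimal sketch of that failure mode (the numbers below are illustrative assumptions, not from the discussion):

def condition_on_action(p_joint, p_action):
    # P(utility=U | action) = P(utility=U and action) / P(action)
    if p_action == 0.0:
        return None  # 0/0: the counterfactual can't be evaluated by conditioning
    return p_joint / p_action

# An agent certain of its own choice assigns P(action=$5) = 0, so every
# conditional of the form P(utility=U | action=$5) comes out undefined:
print(condition_on_action(0.0, 0.0))     # None

# If it instead keeps a small nonzero probability of choosing the $5,
# the same conditional is perfectly well defined:
print(condition_on_action(0.003, 0.01))  # ~0.3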

1Vaniver
A causal reasoner will compute P(utility=U | do{action=$5}), which doesn't run into this trouble. This is the approach I recommend. Probabilistic reasoning about actions that you will make is, to the best of my knowledge, not a seriously considered approach to making decisions outside of the context of mixed strategies in game theory, and even there it doesn't apply that strongly, as you can see mixed strategies as putting forth a certain (but parameterized) action whose outcome is subject to uncertainty. I don't think your sketch is correct for two reasons: 1. The assumption that your action is utility-maximizing requires that you choose the best action, and so using it to justify your choice of action leads to circularity. 2. Your argument hinges on P(U($10)>U($5)|A=$10) > P(U($5)>U($10)|A=$5), which seems like an odd statement to me. If you take the actions-maximize-utility assumption seriously, both of those are 1, and thus the first can't be higher than the second. If you view the actions as not at all informative about the preference probabilities, then you're just repeating your prior. If the action gives some information, there's no reason for the information to be symmetric - you can easily construct a 2x2 matrix example where the reverse inequality holds (that if we know they picked $5, they are more likely to prefer $5 to $10 than someone who picked $10 is to prefer $10 to $5, even though most people prefer $10 to $5).
aaronde60

That's exactly the impression that I got. That it was awkward phrasing, because you just didn't know how to phrase it - but that it wasn't a coincidence that you defaulted to that particular awkward phrasing. It seems that, on some level, you were surprised to see people outside lesswrong discussing "lesswrong ideas." Even though, intellectually, you know that most of the good ideas on lesswrong didn't originate here. Don't be too hard on yourself. I probably have the opposite problem, where, as a meta-contrarian, I can't do anything but cr... (read more)

2[anonymous]
Thank you for the advice, and I will try to follow that rule of thumb more in the future.
aaronde00

I don't understand how Uspensky's definition is different from Eliezer's. Is there some minimum number of people a proof has to convince? Does it have to convince everyone? If I'm the only person in the world, is writing a proof impossible, or trivial? It seems that both definitions are saying that a proof will be considered valid by those people who find it absolutely convincing. And those people who do not find it absolutely convincing will not consider it valid. More importantly, it seems that this is all those two definitions are saying, which is why neither of them is very helpful if we want something more concrete than the colloquial sense of proof.

2vi21maobk9vp
As I understand Eliezer's definition: "Your text is proof if it can convince me that the conclusion is true." As I understand Uspenskiy's definition: "Your text is proof if it can convince me that the conclusion is true and I am willing to reuse this text to convince other people." The difference is whether the mere text convinces me that I myself can also use it successfully. Of course this has to rely on social norms for convincing arguments in some way. Disclosure: I have heard the second definition from Uspenskiy first-person, and I have never seen Eliezer in person.
aaronde90

I liked the fact that the author didn't use cognitive bias as an excuse to give up on talking about politics altogether (which seems to be LWian consensus), but instead made demonstrable claims about politics.

EDIT: in response to the previous version of Michaelos' post, I said:

It makes me uncomfortable when LWers say things like:

"Politics is the Mindkiller" appears to be acknowledged as early as the second sentence.

It smacks of, "Oh, look at the unenlightened people finally catching on." Lesswrong didn't invent cognitive science, a... (read more)

4[anonymous]
Edited! If that's poor phrasing, I want to fix it. My intended goal was "I need to reference the topic of this article in some manner, so that people will know why to read it." and from your post that wasn't getting across. However, that is not the first critique I have gotten about phrasing, and in retrospect, I am concerned that I am more of a rationality pretender than an actual rationalist. I mean, I approve of rationality, and I try to follow the math (and can't when it starts getting hard, frequently because it would take too long and I am usually following Less Wrong intermittently while focusing on other things as well), but I have received multiple complaints that I feel like I can fairly sum up as "You're the rationalist equivalent of an annoying cheerleader yelling 'Go Team, Smash the Other Team'; that's not what rationality is about, please stop." I think it is safe to say that I really do have that as a problem (multiple different sources seem to indicate it to me). And I would prefer to fix it, but I'm not sure how to fix it. If you or anyone else have thoughts on how to change, I am open to suggestion.
aaronde00

Can anyone point me toward work that's been done on the five-and-ten problem? Or does someone want to discuss it here? Specifically, I don't understand why it is a problem for probabilistic algorithms. I would reason:

There is a high probability that I prefer $10 to $5. Therefore I will decide to choose $5, with low probability.

And there's nowhere to go from there. If I try to use the fact that I chose $5 to prove that $5 was the better choice all along (because I'm rational), I get something like:

The probability that I prefer $5 to $10 is low. B

... (read more)
5Vaniver
The problem is how classical logical statements work. The statement "If A then B" more properly translates as "~(A and ~B)". Thus, we get valid logical statements that look bizarre to humans: "If Paris is the capital of France, then Rome is the capital of Italy" seems untrue in a causal sense (if we changed the capital of France, we would not change the capital of Italy, and vice versa) but it is true in a logical sense, because A is true, B is true, true and ~true is false, and ~false is true. That example seems just silly, but the problem is the reverse example is disastrous. Notice that, because of the "and," if A is false then it doesn't matter what B is: false and X is false, ~false is true. If I choose the premise "Marseilles is the capital of France," then any B works. "If Marseilles is the capital of France, then I will receive infinite utility" is a true relationship under classical logic, but is clearly not a causal relationship: changing the capital will not grant me infinite utility, and as soon as the capital changes, the logical truth of the sentence will change. If you have a reasoner that makes decisions, they need to use causal logic, not classical logic, or they'll get tripped up by the word "implication."
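A tiny check of that point about material implication (illustrative only, not part of the original comment):

def implies(a, b):
    # Classical material implication: "if A then B" is just "not (A and not B)".
    return not (a and not b)

# With a false premise, the implication is true no matter what B is:
print(implies(False, True))   # True  ("Marseilles is the capital" => anything)
print(implies(False, False))  # True
# The only way it fails is a true premise with a false conclusion:
print(implies(True, False))   # False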
4[anonymous]
To me, it looks like the five-and-ten problem is that the quotation is not the referent. It seems to me that a program reasoning about its utility function in the way explained in the article is like a person saying " ' "Snow is white." is true.' is a true statement." The word true cannot coherently have the same meaning in both locations within the sentence.
aaronde40

Reading Less Wrong is consumptive, not productive. You need to have something to show for your work, ex. a novel draft, a fitter body, a cleaner house.

Isn't easy/hard the more useful distinction than consumptive/productive? After all, reading the news is productive in the sense of having something to show for it, because you will seem more informed in conversation. And working out can be a form of consumption, if you buy a gym membership.

Personally, I've always loved working out. So I don't have much to gain by trying to motivate myself to work out e... (read more)

0MileyCyrus
Yeah, that's a better way of putting it. Reading Less Wrong might be work for some people, but it isn't for me, and it probably isn't for CAE_Jones.
aaronde60

little gray men emerging from airborn thingies is HUGE in itself.

Um, no. A short guy in a grey suit stepping off a helicopter is a little grey man emerging from an airborn thingy.

Or did you go through all previous sightings and came to that conclusion in every one case?

No. I don't see the point in digging through all the reports, when the reports I have heard about have been so underwhelming. I was skipping around, watching bits and pieces of the video you linked, until Manfred pointed this out:

The geiger counter reading is reported as "10

... (read more)
aaronde40

Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either. Some UFOs may still be unexplained, but all that means is that we don't know enough about them to say what they are.

That said, I don't think you can rule out weather and human craft. Others have already explained why I find the "primary" evidence unconvincing.

This is very speculative to me. I don't think we can use it as evidence for or against.

Let me put it this way. My guess ... (read more)

1[anonymous]
"Even if you could rule out man-made and weather-related causes for some UFOs, that wouldn't imply that they were caused by an extra-terrestrial civilization either." I agree. But in the cases of grey beings emerging from UFOs we can at least conclude that grey beings can occupy UFOs, if we trust primary evidence. This would be a massive discovery in itself, so why don't we hear about it? We don't have to conclude they come from outer space - who knows, they maybe live underground. Lets not speculate on that as we have plenty of interesting observations to delve into already - little gray men emerging from airborn thingies is HUGE in itself. "So what is it that you think you know about these "Aliens"?" It's not that I know anything about aliens. It's that more earthly explanations are completely implausible in many cases. "That said, I don't think you can rule out weather and human craft." In which cases? Just all cases, a priory? Or did you go through all previous sightings and came to that conclusion in every one case? Maybe others did the study for you, so you could provide a reference?
aaronde40

Yes, I was wrong. I was explaining why I got so focused on the blank-slate version of the prior.

1[anonymous]
Oh, gotcha.
aaronde00

Right. What I want to do is calculate the probability that a random conscious entity would find itself living in a world where someone satisfying the definition of Julius Caesar had existed. And then calculate the conditional probability given the evidence, which is everything I've ever observed about the world including the newly discovered account.

Obviously that's not what you do in real life, but the point remains that everything after the original prior (based on Kolmogorov complexity or something) is just conditioning. If we're going to talk about how and why we should formulate priors, rather than what Bayes' rule says, this is what we're interested in.

0[anonymous]
But that's not what I'm talking about. I was specifically responding to your claim that: So far as I can tell, that's not part of the accepted definition. For example, Jaynes' work on prior probabilities explicitly invokes prior information: I don't mean to come off as a dick for nit-picking about definitions. But rigorous mathematical definitions are really important, especially if you are claiming to argue something is true by definition - and you were.
aaronde70

I think you may be confused by an oversimplification of Occam's Razor: "Extraordinary claims require extraordinary evidence." That's not actually how you derive a prior - the very word "extraordinary" implies that you already have experience about what is ordinary and what isn't. If we really throw out all evidence that could tell us how likely aliens are, we end up with a probability which (by the usual method of generating priors), depends on the information-theoretic complexity of the statement "There are aliens on earth."... (read more)

0[anonymous]
"I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them" Citation? There's plenty of evidence for non-man made, non-hoaxed, non-astronomical, non-weatherrealated unidentified flying objects according to studies made by the US and French military: http://en.wikipedia.org/wiki/Project_Blue_Book#Project_Blue_Book_Special_Report_No._14 most important highlights: http://lesswrong.com/lw/ffd/struck_with_a_belief_in_alien_presence/7t4i The black swan example was just a general pondering. "I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them. I think that any intelligent space-faring life would be busy building spheres around stars (if not outright disassembling the stars) as quickly as they spread out into the cosmos. So we'd notice them by the wake of solar systems going dark. At the very least, there's no reason to think that they would hide from us, which is what these scenarios tend to require" This is very speculative to me. I don't think we can use it as evidence for or against.
0[anonymous]
As mentioned elsewhere, this kind of reasoning: "I don't think that generic aliens should be considered especially improbable a priori - before the evidence is considered. I think that they are unlikely a posteriori - based on the fact that we don't see them. I think that any intelligent space-faring life would be busy building spheres around stars (if not outright disassembling the stars) as quickly as they spread out into the cosmos. So we'd notice them by the wake of solar systems going dark. At the very least, there's no reason to think that they would hide from us, which is what these scenarios tend to require (though I haven't watched the documentary)." is at best secondary evidence and thus shouldn't be weighted as high as primary evidence such as sightings or knowledge of time+space-correlating weather balloon flights.
aaronde30

Wait, what? Bayesians never assign 0 probability to anything, because it means the probability will always remain 0 regardless of future updates. And "prior probability", by definition, means that we throw out all previous evidence.

4[anonymous]
Yes. The name for this is Cromwell's rule. Not quite. The prior probability is the probability of the hypothesis and the background information, independent of the evidence we are updating on. This includes previous evidence. We usually write the "prior probability" as P(H), but it should really be written as P(H.B), where "H" is the hypothesis and "B" is the background information. For example, let's say I am asking you to update your belief that Julius Caesar existed given a recently discovered, apparently first-hand account of Caesar's crossing the Rubicon. Your prior probability should NOT exclude all previous evidence on whether Caesar actually existed - e.g. official Roman documents and coins with his face. Ideally, your prior probability should be your posterior probability from your most recent update.
aaronde40

I endorse this idea, but have a minor nitpick:

In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that.

This certainly gets proposed a lot. But isn't it lesswrongian consensus that this is backwards? That the only way to build a FAI is to build an AI that will extrapolate and adopt the humane utility function on its own? (since human values are too complicated for mere humans to state explicitly).

aaronde30

How does trivialism differ from assuming the existence of a Tegmark IV universe?

Tegmark IV is the space of all computable mathematical structures. You can make true and false statements about this space, and there is nothing about it that implies a contradiction. You may think that any coherent empirical claim is true in Tegmark IV, in that anything we say about the world is true of some world. But being true in some world does not make it true in this world. If I say that the sky is green, I am implicitly referring to the sky that I experience, whi... (read more)

-1[anonymous]
From one of Tegmark's pop sci papers: Trivialism induces a mathematical structure, and so is contained in the level IV multiverse. I think there's some meta-level confusion in the rest of the first part of your comment. It's not clear to me how this claim affects the argument. Asserting the negation of the converse of (c) doesn't imply anything about (c). The argument is not central to the dissertation. He reports it from a trivialist to establish the existence of at least one trivialist.
aaronde00

Isn't temporal inconsistency just selfishness? That is, before you know whether the coin came up heads or tails, you care about both possible futures. But after you find out that you're in the tails' universe you stop caring about the heads' universe, because you're selfish. UDT acts differently, because it is selfless, in that it keeps the same importance weights over all conceivable worlds.

It makes perfect sense to me that a rational agent would want to restrict the choices of its future self. I want to make sure that future-me doesn't run off and do his own thing, screwing current-me over.

aaronde00

This question is for anyone who says they saw a benefit from supplementation, not just Kevin.

What was your diet like at the time? Were you taking a daily multivitamin?

aaronde00

I also think that I am conscious, but you keep telling me I have the wrong definitions of words like this, so I don't know if we agree. I would say being conscious means that some part of my brain is collating data about my mental states, such that I could report accurately on my mental states in a coherent manner.

aaronde-20

How do I know whether I am having a conscious subjective experience of a sensation or emotion?

-2Peterdjones
You're conscious. Being conscious of things kind of goes with the territory.
aaronde00

Okay, I've tabooed my words. Now it's your turn. What do you mean by "feeling"?

0Peterdjones
The conscious subjective experience of a sensation or emotion.
aaronde00

You're right, we're starting to go around in circles. So we should wrap this up. I'll just address what seems to be the main point.

I find it obvious that there is a huge, important aspect of what it is to be in pain that [your definition] completely misses.

This is the crux of our disagreement, and is unlikely to change. But you still seem to misunderstand me slightly, so maybe we can still make progress.

You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded

... (read more)
0Peterdjones
I don't accept that all stimuli are feelings. A thermostat is stimulated by changes in temperature, but I don't think it feels the cold. It is about "feelings" as you define the word, which is not general usage. Which is itself consistent with the fact that your "explanations" of feeling invariably skirt the central issues. However, I am never going to be able to provide you with objective proof of subjective feelings. It is for you to get out of the loop of denying subjectivity because it is not objective enough. "Subjective experience" means "experience", and both mean the same thing as "qualia". Which is to say, it is incoherent to me that you could deny qualia and accept experience. I don't think introspection is sufficient for feeling, since I can introspect thought as well.
aaronde00

I thought you were denying "pains hurt"

Not at all. I'm denying that there is anything left over to know about pain (or hurting) after you understand what pain does. As my psych prof. pointed out, you often see weird circular definitions of pain in common usage, like "pain is an unpleasant sensation". Whereas psychologists use functional definitions, like "a stimulus is painful, iff animals try to avoid it". I believe that the latter definition of pain is valid (if simplistic), and that the former is not.

If you think y

... (read more)
0Peterdjones
I don't have to like either definition, and I don't. The second definition attempts to define pain from outside behaviour, and therefore misses the central point of a feeling - that it feels like something, subjectively, to the organism having it. Moreover, it is liable to over-extend the definition of pain. Single-celled organisms can show avoidant behaviour, but it is doubtful that they have feelings. Putting things on an objective basis is often and rightly seen as a Good Thing in science, but when what you are dealing with is subjective, a problem is brewing. I find it obvious that there is a huge, important aspect of what it is to be in pain that that definition completely misses. There is nothing there that deals at all, in any way, with any kind of subjective feeling or sensation whatsoever. You have decided that pain is a certain kind of behaviour displayed by entities other than yourself and seen from the outside, and you have coded that up. I inspect the code, and find nothing that relates in any way to how I introspect pain or any other feeling. But I suspect we will continue to go round in circles on this issue until I can persuade you to make the paradigm shift into thinking about subjective feelings from the POV of your own subjectivity. It's about both, because you can't prefer to personally have certain experiences if there is no such thing as subjective experience. Would you want to go on a holiday, or climb a mountain, and then have your memories of the experience wiped? You would still have done it.
aaronde20

My pains hurt. My food tastes. Voices and music sound like something.

Um, those are all tautologies, so I'm not sure how to respond. If we define "qualia" as "what it feels like to have a feeling", then, well - that's just a feeling, right? And "qualia" is just a redundant and pretentious word, whose only intelligible purpose is to make a mystery of something that is relatively well understood (e.g: the "hard problem of consciousness"). No?

Erm, sorry for the snark, but seriously: has talk of qualia, as distinct ... (read more)

0Peterdjones
a) I thought you were denying "pains hurt". b) "food tastes" isn't. c) The others can be rephrased as "injuries hurt" and "atmospheric compression waves sound like something". d) All words are individually redundant. e) If you think you can make the Hard Problem easy by tabooing "qualia", let's see you try. Well, you haven't. And there is something. Do you send disadvantaged kids to Disneyland, or just send them the brochure? Even if you don't personally care about experiencing things for yourself, it is difficult to see how you could ignore its importance in your "good outcomes".
aaronde00

I don't want to experience pain even in ways that promote my goals

Don't you mean that avoiding pain is one of your goals?

It would have been helpful to say why you reject it.

It just seems like the default position. Can you give me a reason to take the idea of qualia seriously in the first place?

would you maintain that personally experiencing pain for the first time would teach you nothing?

Yes.

0Peterdjones
Yes. Because pain hurts. Yes. My pains hurt. My food tastes. Voices and music sound like something. Do you go drink the wine or just read the label? Do you go on holiday or just read the brochure?
aaronde10

Only in their rhetoric, which is at most weakly correlated with their actual policy decisions.

Yes, but in this case, the rhetoric matters. I believe this was Stuart's point. If we want to raise the "sanity waterline", then, all else being equal, saner political dialog is a good thing. Right?

-1Shmi
oxymoron.
aaronde40

I don't think a program has to be very sophisticated to feel pain. But it does have to exhibit some kind of learning. For example:

import random

def wanderer(locations, utility, X):
    current_time = 0
    while True:
        # Pick two random locations and move to whichever has higher learned utility.
        l1, l2 = random.sample(locations, 2)
        if utility[l1] < utility[l2]:
            my_location = l2
        else:
            my_location = l1

        # If the aversive stimulus X fires here, lower this location's utility
        # so the wanderer learns to avoid it.
        if X(my_location, current_time):
            utility[my_location] = utility[my_location] - 1

        current_time = current_time + 1
This program aimlessly wanders over a space of locations, but eventually tends to avoid locations where X has return... (read more)
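As a rough illustration of that learned avoidance, here is a hypothetical setup of mine, with the loop bounded so it terminates (the original version runs forever):

import random

locations = list(range(10))
utility = {l: 0 for l in locations}

def X(location, time):
    # A "painful" stimulus that fires whenever the wanderer is at location 3.
    return location == 3

# Bounded run of the same update rule as wanderer().
for current_time in range(10000):
    l1, l2 = random.sample(locations, 2)
    my_location = l2 if utility[l1] < utility[l2] else l1
    if X(my_location, current_time):
        utility[my_location] -= 1

print(utility[3])  # -1: after a single painful visit, the wanderer avoids location 3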

aaronde-10

Probably some of them do (I don't play video games). But they aren't even close to being people, so I don't really care.

2Risto_Saarelma
Would you say a thermostat feels pain when it can't adjust the temperature towards its preferred setting? Otherwise you might have some strange ideas about the complexity of video game characters. There's a very long way to go in internal complexity from a video game character to, say, a bacterium.
aaronde00

I think that split-brain study shows the opposite of what you think it shows. If you observed yourself to be writhing around in agony, then you would conclude that you were experiencing the qualia of pain. Try to imagine what this would actually be like, and think carefully about what "trying to avoid similar circumstances in the future" actually means. You can't sit still, can't think about anything else. You plead with anyone around to help you - put a stop to whatever is causing this - insisting that they should sympathize with you. The m... (read more)

aaronde10

If an actor stays in character his entire life, making friends and holding down a job, in character - and if, whenever he seemed to zone out, you could interrupt him at any time to ask what he was thinking about, and he could give a detailed description of the day dream he was having, in character...

Well then I'd say the character is a lot less fictional than the actor. But even if there is an actor - an entirely different person putting on a show - the character is still a real person. This is no different from saying that a person is still a person, even if they're a brain emulation running on a computer. In this case, the actor is the substrate on which the character is running.

-2Eugine_Nier
So would you say video game characters "feel" pain?
aaronde00

As far as I know, to feel is to detect, or perceive, and pain is positive punishment, in the jargon of operant conditioning. So to say "I feel pain" is to say that I detect a stimulus, and process the information in such a way that (all else equal) I will try to avoid similar circumstances in the future. Not being a psychologist, I don't know much more about pain. But (not being a psychologist) I don't need to know more about pain. And I reject the notion that we can, through introspection, know something more about what it "is like"... (read more)

0Peterdjones
I don't think we have to argue whether it is the goal-frustration or the pain-quale that is the bad. They are both bad. I don't want to have my goals frustrated painlessly, and I don't want to experience pain even in ways that promote my goals, such as being cattle-prodded every time I slip into akrasia. It would have been helpful to say why you reject it. If you were in a Mary-style experiment, where you studied pain whilst being anaesthetised from birth, would you maintain that personally experiencing pain for the first time would teach you nothing?
09eB1
If someone offered me a pill that would merely reduce my qualia experience of pain, I would take it, even if it still triggered in me a process of information that would cause me to try to avoid similar circumstances in the future, and even if it were impossible to tell observationally that I had taken it, except by asking about my qualia of experiencing pain and other such philosophical topics. That is, if I am going to writhe in agony, I would prefer to have my mind do it for me without me having to experience the agony. If I'm going to never touch a hot stove because of one time when I burned myself, I'd prefer to do that without having the memory of the burn. This idea is not malformed, given what we know about the human brain's lack of introspection on its actions. In practice it seems that the only reason that it frustrates a person's goals to receive pain is because they have a goal, "I don't want to be in pain." There are certainly reasons that the pain is adaptive, but it certainly seems from the inside like the most objectionable part is the qualia. If the sophisticated intelligence HAS qualia but doesn't have avoidance of pain as a goal, that suggests your ethical system would be OK with subjecting it to endless punishment (a sentiment with which I may agree).
aaronde50

When people say that it's conceivable for something to act exactly as if it were in pain without actually feeling pain, they are using the word "feel" in a way that I don't understand or care about. So, sure: I don't feel pain in that sense. That's not going to stop me from complaining about having my hand chopped off!

0Peterdjones
OK. But you're using "feel" in a sense I don't understand.
0randallsquared
Taken literally, this suggests that you believe all actors really believe they are the character (at least, if they are acting exactly like the character). Since that seems unlikely, I'm not sure what you mean.