Hyperventilating leads to hallucinations instead of stimulation. I went to a Holotropic Breathwork session once. Some years before that, I went to a Sufi workshop in NYC where Hu was chanted to get the same result. I have to admit I cheated at both events -- I limited my breathing rate or depth so not much happened to me.
Listening to the reports from the other participants of the Holotropic Breathwork session made my motives very clear to me. I don't want any of that. I like the way my mind works. I might consider making purposeful and careful changes to h...
If you give up on the AIXI agent exploring the entire set of possible hypotheses and instead have it explore a small fixed list, the toy models can be very small. Here is a unit test for something more involved than AIXI that's feasible because of the small hypothesis list.
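A minimal sketch of what such a toy model can look like, assuming the hypotheses are deterministic predictors over a tiny alphabet; the hypothesis names and the particular test below are illustrative, not taken from the unit test referred to above:

```python
# Toy induction over a small, fixed hypothesis list (illustrative only).
# Each hypothesis is a deterministic predictor that maps the observation
# history to the next observation.

HYPOTHESES = {
    "always_zero": lambda history: 0,
    "always_one": lambda history: 1,
    "alternate": lambda history: len(history) % 2,
}

def surviving_hypotheses(observations, hypotheses=HYPOTHESES):
    """Return the hypotheses consistent with every observation so far."""
    alive = dict(hypotheses)
    for i, obs in enumerate(observations):
        alive = {name: h for name, h in alive.items()
                 if h(observations[:i]) == obs}
    return alive

def predict_next(observations):
    """Predict by majority vote among surviving hypotheses (ties -> 0)."""
    alive = surviving_hypotheses(observations)
    votes = [h(observations) for h in alive.values()]
    return int(sum(votes) > len(votes) / 2) if votes else 0

# A unit test is feasible because the hypothesis space is tiny:
assert surviving_hypotheses([0, 1, 0]).keys() == {"alternate"}
assert predict_next([0, 1, 0]) == 1
```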
Getting a programming job is not contingent on getting a degree. There's an easy test for competence at programming in a job interview: ask the candidate to write code on a whiteboard. I am aware of at least one Silicon Valley company that does that, and I have seen them hire people who never finished their BS in CS. (I'd rather ask candidates to write and debug code on a laptop, but the HR department won't permit it.)
Getting a degree doesn't hurt. It might push up your salary -- even if one company has enough sense to evaluate the competence of a pro...
I have experienced consequences of donating blood too often. The blood donation places check your hemoglobin, but I have had iron-deficiency symptoms when my hemoglobin was normal and my serum ferritin was low. The symptoms were twitchy legs when I was trying to sleep, and insomnia; the iron deficiency was confirmed with a ferritin test. The symptoms went away and my ferritin returned to normal when I took iron supplements and stopped donating blood, and I stopped the iron supplements after the normal ferritin test.
The blood donation pl...
Well, I suppose it's an improvement that you've identified what you're arguing against.
Unfortunately the statements you disagree with don't much resemble what I said. Specifically:
The argument you made was that copy-and-destroy is not bad because a world where that is done is not worse than our own.
I did not compare one world to another.
...Pointing out that your definition of something, like harm, is shared by few people is not argumentum ad populum, it's pointing out that you are trying to sound like you're talking about something people care about
Nothing I have said in this conversation presupposed ignorance, blissful or otherwise.
I give up, feel free to disagree with what you imagine I said.
Check out Argumentum ad Populum. With all the references to "most people", you seem to be committing that fallacy so often that I am unable to identify anything else in what you say.
This reasoning can be used to justify almost any form of "what you don't know won't hurt you". For instance, a world where people cheated on their spouse but it was never discovered would function, from the point of view of everyone, as well as or better than the similar world where they remained faithful.
Your example is too vague for me to want to talk about. Does this world have children that are conceived by sex, children that are expensive to raise, and property rights? Does it have sexually transmitted diseases? Does it have paternity ...
OTOH, some such choices are worse than others.
If you have an argument, please make it. Pointing off to a page with a laundry list of 37 things isn't an argument.
One way to find useful concepts is to use evolutionary arguments. Imagine a world in which it is useful and possible to commute back and forth to Mars by copy-and-destroy. Some people do it and endure arguments about whether they are still the "same" person when they get back; some people don't do it because of philosophical reservations about being the "same" person. Since w...
Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can't be defined within the logic, and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff induction would assign the hypothesis that they're telling the truth zero probability, whereas we would assign it some small but positive probability.
I'm not sure I understand you c...
Consider an arbitrary probability distribution P, and the smallest integer (or the lexicographically least object) x such that P(x) < 1/3^^^3 (in Knuth's up-arrow notation). Since x has a short description, a universal distribution shouldn't assign it such a low probability, but P does, so P can't be a universal distribution.
The description of x has to include the description of P, and that has to be computable if a universal distribution is going to assign positive probability to x.
If P has a short computable description, then yes, you can conclude ...
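To make the exchange concrete, here is a minimal sketch of the construction, assuming P is supplied as a computable function over the nonnegative integers; the particular P and the threshold standing in for 1/3^^^3 are placeholders:

```python
from fractions import Fraction

def least_improbable(P, threshold):
    """Return the least nonnegative integer x with P(x) < threshold.

    The code of this search plus the code of P is a short description of
    x, so a universal distribution gives x probability far above an
    astronomically tiny threshold (provided P itself is computable, as
    the reply above points out).
    """
    x = 0
    while P(x) >= threshold:
        x += 1
    return x

# Placeholder computable distribution: P(x) = 2^-(x+1), which sums to 1.
P = lambda x: Fraction(1, 2 ** (x + 1))

# With threshold 2^-100 standing in for 1/3^^^3, x = 100: a shortly
# describable object that P claims is enormously improbable.
assert least_improbable(P, Fraction(1, 2 ** 100)) == 100
```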
You're absolutely right that learning to lie really well and actually lying to one's family, the "genuinely wonderful people" they know, everyone in one's "social structure" and business, as well as one's husband and daughter MIGHT be the "compassionate thing to do". But why would you pick out exactly that option among all the possibilities?
Because it's a possibility that the post we're talking about apparently did not consider. The Litany of Gendlin was mentioned in the original post, and I think that when interpreted as ...
You seem to think that if you can imagine even one possible short-term benefit from lying or not-disclosing something, then that's sufficient justification to do so.
That's not what I said. I said several things, and it's not clear which one you're responding to; you should use quote-rebuttal format so people know what you're talking about. Best guess is that you're responding to this:
...[learning to lie really well] might be the compassionate thing to do, if you believe that the people you interact with would not benefit from hearing that you no lon
The Litany of Gendlin is specifically about what you should or should not believe, and your feelings about reality. It says nothing about telling people what you think is true — although "owning up to it" is confusingly an idiom that normally means admitting the truth to some authority figure, whereas in this case it is meant to indicate admitting the truth to yourself.
Just drink two tablespoons of extra-light olive oil early in the morning... don't eat anything else for at least an hour afterward... and in a few days it will no longer take willpower to eat less; you'll feel so full all the time, you'll have to remind yourself to eat.
...and then increase the dose to 4 tablespoons if that doesn't work, and then try some other stuff such as crazy-spicing your food if that doesn't work, according to page 62 and Chapter 6 of Roberts' "Shangri-La Diet" book. I hope you at least tried the higher dose before giving up.
How do you add two utilities together?
They are numbers. Add them.
So are the atmospheric pressure in my room and the price of silver. But you cannot add them together (unless you have a conversion factor from millibars to dollars per ounce).
Your analogy is invalid, and in general analogy is a poor substitute for a rational argument. In the thread you're replying to, I proposed a scheme for getting Alice's utility to be commensurate with Bob's so they can be added. It makes sense to argue that the scheme doesn't work, but it doesn't make sense to pretend it does not exist.
I would expect that peer pressure can make people stop doing evil things (either by force, or by changing their cost-benefit calculation of evil acts). Objective morality, or rather a definition of morality consistent within the group, can help organize efficient peer pressure.
So in a conversation between a person A who believes in objective morality and a person B who does not, a possible motive for A is to convince onlookers by any means possible that objective morality exists. Convincing B is not particularly important, since effective peer pressur...
A fallacy is a false statement
Not a pattern of an invalid argument?
With [the universal] prior, TSUF-like utility functions aren't going to dominate the set of utility functions consistent with the person's behavior
How do you know this? If that's true, it can only be true by being a mathematical theorem...
No, it's true in the same sense that the statement "I have hands" is true. That is, it's an informal empirical statement about the world. People can be vaguely understood as having purposeful behavior. When you put them in strange situations, this breaks down a bit and if you wish to understand them as hav...
Some agents, but not all of them, determine their actions entirely using a time-invariant scalar function U(s) over the state space.
If we're talking about ascribing utility functions to humans, then the state space is the universe, right? (That is, the same universe the astronomers talk about.) In that case, the state space contains clocks, so there's no problem with having a time-dependent utility function, since the time is already present in the domain of the utility function.
Thus, I don't see the semantic misunderstanding -- human behavior is cons...
This is the Texas Sharpshooter fallacy again. Labelling what a system does with 1 and what it does not with 0 tells you nothing about the system.
You say "again", but in the cited link it's called the "Texas Sharpshooter Utility Function". The word "fallacy" does not appear. If you're going to claim there's a fallacy here, you should support that statement. Where's the fallacy?
It makes no predictions. It does not constrain expectation in any way. It is woo.
The original claim was that human behavior does not conform t...
The Utility Theory folks showed that behavior of an agent can be captured by a numerical utility function iff the agent's preferences conform to certain axioms, and Allais and others have shown that human behavior emphatically does not.
A person's behavior can always be understood as optimizing a utility function; it's just that if they are irrational (as in the Allais paradox) the utility functions start to look ridiculously complex. If all else fails, a utility function can be used that has a strong dependency on time in whatever way is required to matc...
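As an illustration of that last resort, here is a minimal sketch of the time-dependent construction (essentially the "Texas Sharpshooter" function discussed above); the trace and action set are placeholders:

```python
# Any recorded behavior can be "rationalized" by a time-dependent utility
# function that scores the action actually taken at each step as 1 and
# everything else as 0.  The trace and action set here are placeholders.

observed_trace = ["tea", "coffee", "tea", "tea"]
actions = {"tea", "coffee"}

def sharpshooter_utility(t, action):
    """U(t, a): 1 for the action the agent actually took at time t, else 0."""
    return 1 if action == observed_trace[t] else 0

# The recorded agent trivially maximizes this utility at every time step,
# which is exactly why the construction predicts nothing new.
for t, taken in enumerate(observed_trace):
    assert taken == max(actions, key=lambda a: sharpshooter_utility(t, a))
```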
Before my rejection of faith, I was plagued by a feeling of impending doom.
I was a happy atheist until I learned about the Friendly AI problem and estimated the likely outcome. I am now plagued by a feeling of impending doom.
...If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.
That's not the situation I'm describing; if 0 is "you and all your friends and relatives getting tortured to death" and 1 is "getting everything you want," the utility monster is someone who puts "not getting one thing
There seems to be an assumption here that empathy leads to morality. Sometimes, at least, empathy leads to being jerked around by the stupid goals of others instead of pursuing your own stupid goals, and in this case it's not all that likely to lead to something fitting any plausible definition of "moral behavior". Chogyam Trungpa called this "idiot compassion".
Thus it's important to distinguish caring about humanity as a whole from caring about individual humans. I read some of the links in the OP and did not see this distinction mentioned.
I procrastinated when in academia, but did not feel particularly attracted to the job, so option 1 is not always true. Comparison with people not in academia makes it seem that option 3 is not true for me either.
More questions to perhaps add:
What is self-modification? (In particular, does having one AI build another bigger and more wonderful AI while leaving "itself" intact count as self-modification? The naive answer is "no", but I gather the informed answer is "yes", so you'll want to clarify this before using the term.)
What is wrong with the simplest decision theory? (That is, enumerate the possible actions and pick the one for which the expected utility of the outcome is best. I'm not sure what the standard name for that is.) ...
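For concreteness, a minimal sketch of that simplest decision theory, assuming a finite action set and known outcome probabilities; the actions, outcomes, and numbers are illustrative:

```python
# "Simplest decision theory": enumerate the actions, compute each action's
# expected utility under a known outcome distribution, and pick the best.
# The actions, outcomes, and numbers are illustrative placeholders.

outcome_probs = {
    "carry_umbrella": {"dry": 1.0},
    "leave_umbrella": {"dry": 0.7, "soaked": 0.3},
}
utility = {"dry": 1.0, "soaked": -2.0}

def expected_utility(action):
    return sum(p * utility[outcome]
               for outcome, p in outcome_probs[action].items())

best_action = max(outcome_probs, key=expected_utility)
assert best_action == "carry_umbrella"   # EU 1.0 beats EU 0.7 - 0.6 = 0.1
```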
A common tactic in human interaction is to care about everything more than the other person does, and explode (or become depressed) when they don't get their way. How should such real-life utility monsters be dealt with?
If everyone's inferred utility goes from 0 to 1, and the real-life utility monster cares more than the other people about one thing, the inferred utility will say he cares less than other people about something else. Let him play that game until the something else happens, then he loses, and that's a fine outcome.
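A minimal sketch of that normalization, assuming each person's inferred utility is defined over a shared finite set of outcomes; the people, outcomes, and numbers are placeholders:

```python
# Rescale each person's inferred utility to span [0, 1] over the shared
# outcome set, then sum.  A "utility monster" who claims an enormous stake
# in one outcome is thereby forced to have almost no say about anything
# else.  People, outcomes, and numbers are placeholders.

outcomes = ["quiet_evening", "picnic", "movie"]

raw_utilities = {
    "alice":   {"quiet_evening": 3.0, "picnic": 5.0, "movie": 2.0},
    "monster": {"quiet_evening": 1000.0, "picnic": 0.0, "movie": 1.0},
}

def normalize(u):
    lo, hi = min(u.values()), max(u.values())
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

normalized = {name: normalize(u) for name, u in raw_utilities.items()}

def group_score(outcome):
    return sum(u[outcome] for u in normalized.values())

# When the choice is between the two outcomes the monster claims to care
# less about, his normalized stake is tiny and Alice's preference decides.
assert group_score("picnic") > group_score("movie")
```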
...I doubt it can measur
Its understanding of you doesn't have to be more rigorous than your understanding of you.
It does if I want it to give me results any better than I can provide for myself.
No. For example, if it develops some diet drug that lets you safely enjoy eating and still stay skinny and beautiful, that might be a better result than you could provide for yourself, and it doesn't need any special understanding of you to make that happen. It just makes the drug, makes sure you know the consequences of taking it, and offers it to you. If you choose to take it, th...
In some sense, the problem of FAI is the problem of rigorously understanding humans, and evo psych suggests that will be a massively difficult problem.
I think that bar is unreasonably high. If you have a conflict between enjoying eating a lot vs being skinny and beautiful, and the FAI helps you do one or the other, then you aren't in a position to complain that it did the wrong thing. Its understanding of you doesn't have to be more rigorous than your understanding of you.
For example, maybe you could chill the body rapidly to organ-donation temperatures, garrote the neck,..
It's worse than I said, by the way. If the patient is donating kidneys and is brain dead, the cryonics people want the suspension to happen as soon as possible to minimize further brain damage. The organ donation people want the organ donation to happen when the surgical team and recipient are ready, so there will be conflict over the schedule.
In any case, the fraction of organ donors is small, and the fraction of cryonics cases is much smaller, and ...
I would think that knowing evo psych is enough to realize [having an FAI find out human preferences, and then do them] is a dodgy approach at best.
I don't see the connection, but I do care about the issue. Can you attempt to state an argument for that?
Human preferences are an imperfect abstraction. People talk about them all the time and reason usefully about them, so either an AI could do the same, or you found a counterexample to the Church-Turing thesis. "Human preferences" is a useful concept no matter where those preferences come from,...
The process of vitrifying the head makes the rest of the body unsuitable for organ donations. If the organs are extracted first, then the large resulting leaks in the circulatory system make perfusing the brain difficult. If the organs are extracted after the brain is properly perfused, they've been perfused too, and with the wrong substances for the purposes of organ donation.
If "humility" can be used to justify both activities and their opposites so easily, perhaps it's a useless concept and should be tabooed.
PMing or emailing official SIAI people should get you a link to safer avenues for discussing these kinds of basilisks.
Hmm, should I vote you up because what you're saying is true, or should I vote you down because you are attracting attention to the parent post, which is harmful to think about?
If an idea is guessable, then it seems irrational to think it is harmful to communicate it to somebody, since they could have guessed it themselves. Given that this is a website about rationality, IMO we should be able to talk about the chain of reasoning that leads to t...
Make sure that each CSA above the lowest level actually has "could", "should", and "would" labels on the nodes in its problem space, and make sure that those labels, their values, and the problem space itself can be reduced to the managing of the CSAs on the level below.
That statement would be much more useful if you gave a specific example. I don't see how labels on the nodes are supposed to influence the final result.
There's a general principle here that I wish I could state well. It's something like "general ideas...
Well, one story is that humans and brains are irrational, and then you don't need a utility function or any other specific description of how it works. Just figure out what's really there and model it.
The other story is that we're hoping to make a Friendly AI that might make rational decisions to help people get what they want in some sense. The only way I can see to do that is to model people as though they actually want something, which seems to imply having a utility function that says what they want more and what they want less. Yes, it's not true, ...
Okay, I watched End of Evangelion and a variety of the materials leading up to it. I want my time back. I don't recommend it.
So many people might be willing to go be a health worker in a poor country where aid workers are commonly (1 in 10,000) raped or killed, even though they would not be willing to be certainly attacked in exchange for 10,000 times the benefits to others.
I agree with your main point, but the thought experiment seems to be based on the false assumption that the risk of being raped or murdered is smaller than 1 in 10K if you stay at home. Wikipedia guesstimates that 1 in 6 women in the US are on the receiving end of attempted rape at some point, so someone...
The story isn't working for me. A boy or novice soldier, depending on how you define it, is inexplicably given the job of running a huge and difficult-to-use robot to fight a sequence of powerful, similarly huge aliens while trying not to do too much collateral damage to Tokyo in the process. In the original, I gather he was an unhappy boy. In this story, he's a relatively well-adjusted boy who hallucinates conversations with his Warhammer figurines. I don't see why I should care about this scenario or any similar scenarios, but maybe I'm missing s...
Your strength as a rationalist is your ability to be more confused by fiction than by reality.
Does that lead to the conclusion that Newcomb's problem is irrelevant? Mind-reading aliens are pretty clearly fiction. Anyone who says otherwise is much more likely to be schizophrenic than to have actual information about mind-reading aliens.
When dealing with trolls, whether on the Internet or in Real Life, no matter how absolutely damn sure you are of your point, you have no time to unravel their bullshit for what it is, and if you try it you will only bore your audience and exhaust their patience. Debates aren't battles of truth: there's publishing papers and articles for that. Debates are battles of status.
I agree. There's also the scenario where you're talking to a reasonable person for the purpose of figuring out the truth better than either of you could do alone. That's useful, and ...
Terror Management seems to explain the reactions to cryonics pretty well. I've only skimmed the OP enough to want to trot out the standard explanation, so I may have missed something, but so far as I can tell the Historical Death Meme and Terror Management make the same predictions.
It is in fact absolutely unacceptable, from a simple humanitarian perspective, that something as nebulous as the HDM -- however artistic, cultural, and deeply ingrained it may be -- should ever be substituted for an actual human life.
Accepting something is the first step to changing it, so you'll have to do better than that.
Please tell me you've at least read Methods Of Rationality and Shinji and Warhammer40k.
I read the presently existing part of MoR. I could read Shinji 40K. Why do you think it's worthwhile? Should I read or watch Neon Genesis Evangelion first?
I have a fear that becoming skilled at bullshitting others will increase my ability to bullshit myself. This is based on my informal observation that the people who bullshit me tend to be a bit confused even when manipulating me isn't their immediate goal.
However, I do find it very useful to be able to authoritatively call out someone who is using a well-known rhetorical technique, and for that reason reading "Art of Controversy" has been worthwhile. The obviously useful skill is to recognize each rhetorical technique and find a suitable retort in real time; the default retort is to name the rhetorical technique.
minds are behavior-executors and not utility-maximizers
I think it would be more accurate to say that minds are more accurately and simply modeled as behavior-executors than as utility-maximizers.
There are situations where the most accurate and simple model isn't the one you want to use. For example, if I want to cooperate with somebody, one approach is to model them as a utility-maximizer, and then to search for actions that improve everybody's utility. If I model them as a behavior-executor, then I'll be perceived as manipulative if I don't get ...
An alternative to CEV is CV, that is, leave out the extrapolation.
You have a bunch of non-extrapolated people now, and I don't see why we should think their extrapolated desires are morally superior to their present desires. Giving them their extrapolated desires instead of their current desires puts you into conflict with the non-extrapolated version of them, and I'm not sure what worthwhile thing you're going to get in exchange for that.
Nobody has lived 1000 years yet; maybe extrapolating human desires out to 1000 years gives something that a normal h...
Peter Wakker apparently thinks he found a way to have unbounded utilities and obey most of Savage's axioms. See Unbounded utility for Savage's "Foundations of Statistics," and other models. I'll say more if and when I understand that paper.
We can't use Solomonoff induction - because it is uncomputable.
Generating hypotheses is uncomputable. However, once you have a candidate hypothesis, if it explains the observations you can do a computation to verify that, and you can always measure its complexity. So you'll never know that you have the best hypothesis, but you can compare hypotheses for quality.
I'd really like to know if there's anything to be known about the nature of the suboptimal predictions you'll make if you use suboptimal hypotheses, since we're pretty much certain to be using suboptimal hypotheses.
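A minimal sketch of how two candidate hypotheses can be compared even though the best one can never be found, assuming each candidate comes as source text plus a predictor; everything here is an illustrative stand-in for real program complexity:

```python
# You can never know you have the best hypothesis, but given a candidate
# you can check whether it reproduces the observations and measure its
# description length, so candidates can still be compared.

def fits(predict, observations):
    """Does the candidate reproduce every observation so far?"""
    return all(predict(i) == obs for i, obs in enumerate(observations))

def complexity(source):
    """Crude stand-in for description length: size of the source text."""
    return len(source)

def better(candidate, incumbent, observations):
    """Keep whichever fitting candidate has the shorter description."""
    src, predict = candidate
    if not fits(predict, observations):
        return incumbent
    if incumbent is None or complexity(src) < complexity(incumbent[0]):
        return candidate
    return incumbent

observations = [0, 1, 0, 1]
h_short = ("return i % 2",          lambda i: i % 2)
h_long  = ("return [0,1,0,1,1][i]", lambda i: [0, 1, 0, 1, 1][i])
assert better(h_short, h_long, observations) == h_short  # both fit; shorter wins
```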
I agree with jsteinhardt, thanks for the reference.
I agree that the reward functions will vary in complexity. If you do the usual thing in Solomonoff induction, where the plausibility of a reward function decreases exponentially with its size, then so far as I can tell you can infer reward functions from behavior, if you can infer behavior.
We need to infer a utility function for somebody if we're going to help them get what they want, since a utility function is the only reasonable description I know of what an agent wants.
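A minimal sketch of that inference, assuming a small hand-picked set of candidate reward functions, the 2^-size prior mentioned above, and an epsilon-noisy model of optimal behavior; every name and number is a placeholder:

```python
import math

# Infer a reward function from observed behavior: weight each candidate by
# a 2^-size prior and by whether the observed choices maximize it.
# The candidate set, sizes, and behavior trace are illustrative.

candidates = {
    # name: (description size in bits, reward function over choices)
    "likes_sweet":   (10, lambda choice: {"cake": 1.0, "salad": 0.0}[choice]),
    "likes_healthy": (12, lambda choice: {"cake": 0.0, "salad": 1.0}[choice]),
}
observed_choices = ["cake", "cake", "cake"]
options = ["cake", "salad"]
EPSILON = 0.05   # assumed chance of a "mistake" on any single choice

def likelihood(reward, choices):
    """P(observed choices | reward function), with epsilon-noisy optimality."""
    best = max(options, key=reward)
    return math.prod(1 - EPSILON if c == best else EPSILON for c in choices)

posterior = {name: 2.0 ** -size * likelihood(reward, observed_choices)
             for name, (size, reward) in candidates.items()}
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}
assert posterior["likes_sweet"] > 0.9
```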
Surely we can talk about rational agents in other ways that are not so confusing?
Who is sure? If you're saying that, I hope you are. What do you propose?
Either way, just because something is mathematically proven to exist doesn't mean that we should have to use it.
I don't think anybody advocated what you're arguing against there.
The nearest thing I'm willing to argue for is that one of the following possibilities hold:
We use something that has been mathematically proven to exist, now.
We might be speaking nonsense, depending on whether the conc
Humans can be recognized inductively: Pick a time such as the present when it is not common to manipulate genomes. Define a human to be everyone genetically human at that time, plus all descendants who resulted from the naturally occurring process, along with some constraints on the life from conception to the present to rule out various kinds of manipulation.
Or maybe just say that the humans are the genetic humans at the start time, and that's all. Caring for the initial set of humans should lead to caring for their descendants because humans care about t...
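A schematic sketch of the inductive definition above, assuming a predicate that reports a person's parents only when the descent and the life from conception onward satisfy the no-manipulation constraints; everything here is a placeholder:

```python
# Schematic version of the inductive definition.  base_set is everyone
# genetically human at the chosen start time; natural_parents(person)
# returns the parents when the descent (and the life from conception
# onward) meets the no-manipulation constraints, else None.

def is_human(person, base_set, natural_parents):
    if person in base_set:                 # base case: human at start time
        return True
    parents = natural_parents(person)      # inductive case: natural descent
    return parents is not None and all(
        is_human(p, base_set, natural_parents) for p in parents)

# Toy usage with a placeholder family tree.
parents_of = {"carol": ("alice", "bob")}
assert is_human("carol", {"alice", "bob"}, parents_of.get)
assert not is_human("replicant", {"alice", "bob"}, parents_of.get)
```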