Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: TimFreeman 22 April 2015 05:52:47AM 1 point [-]

Humans can be recognized inductively: Pick a time such as the present when it is not common to manipulate genomes. Define a human to be everyone genetically human at that time, plus all descendants who resulted from the naturally occurring process, along with some constraints on the life from conception to the present to rule out various kinds of manipulation.

Or maybe just say that the humans are the genetic humans at the start time, and that's all. Caring for the initial set of humans should lead to caring for their descendants because humans care about their descendants, so if you're doing FAI you're done. If you want to recognize humans for some other purpose this may not be sufficient.

Predicting human behavior seems harder than recognizing humans, so it seems to me that you're presupposing the solution of a hard problem in order to solve an easy problem.

An entirely separate problem is that if you train to discover what humans would do in one situation and then stop training and then use the trained inference scheme in new situations, you're open to the objection that the new situations might be outside the domain covered by the original training.

Comment author: [deleted] 11 March 2015 01:43:59PM 1 point [-]

Thank you. I mean the looted ones too, I don't think everybody knows about them.

For example, here is a technique I have always thought should exist but simply cannot find anywhere; maybe it doesn't exist, or maybe I am just really missing something. I have always thought that with something like rapid breathing I should be able to stimulate my central nervous system the way, say, amphetamines do: to temporarily think quicker, move faster, and ignore fatigue. It would be handy in many situations. This probably exists, because if the CNS can be stimulated at all, then probably not only through chemicals, and I am probably just overlooking something. Maybe there is a tribal people somewhere who do this by jumping up and down and chanting, calling it a sacred rage induced by the war god or something.

Comment author: TimFreeman 11 March 2015 03:09:53PM 4 points [-]

Hyperventilating leads to hallucinations instead of stimulation. I went to a Holotropic Breathwork session once. Some years before that, I went to a Sufi workshop in NYC where Hu was chanted to get the same result. I have to admit I cheated at both events -- I limited my breathing rate or depth so not much happened to me.

Listening to the reports from the other participants of the Holotropic Breathwork session made my motives very clear to me. I don't want any of that. I like the way my mind works. I might consider making purposeful and careful changes to how my mind works, but I do not want random changes. I don't take psychoactive drugs for the same reason.

Comment author: shminux 12 March 2014 05:52:09PM *  1 point [-]

Are there toy models of, say, a very simple universe and an AIXItl-type reasoner in it? How complex does the universe have to be to support AIXI? Game-of-life-complex? Chess-complex? D&D-complex? How would one tell?

Comment author: TimFreeman 20 March 2014 03:26:04PM *  1 point [-]

If you give up on the AIXI agent exploring the entire set of possible hypotheses and instead have it explore a small fixed list, the toy models can be very small. Here is a unit test for something more involved than AIXI that's feasible because of the small hypothesis list.
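The idea of restricting the agent to a small fixed hypothesis list can be sketched in a few lines. The following is a hypothetical toy (not the unit test linked above): a Bayes-style agent whose entire model class is two deterministic environments. All names, actions, and rewards here are invented for illustration.

```python
# Toy agent over a fixed two-element hypothesis list, standing in for
# the full space of computable environments that AIXI would consider.

def make_env(good_action):
    """Deterministic environment: reward 1.0 for good_action, else 0.0."""
    return lambda action: 1.0 if action == good_action else 0.0

ACTIONS = ["a", "b"]
hypotheses = [make_env("a"), make_env("b")]  # the agent's entire model class
weights = [0.5, 0.5]                         # uniform prior over hypotheses

def act():
    # Choose the action with the highest posterior-expected reward.
    return max(ACTIONS, key=lambda a: sum(w * h(a) for w, h in zip(weights, hypotheses)))

def update(action, observed_reward):
    # The models are deterministic, so the likelihood is 0 or 1:
    # keep only hypotheses consistent with the observation, then renormalize.
    global weights
    weights = [w if h(action) == observed_reward else 0.0
               for w, h in zip(weights, hypotheses)]
    total = sum(weights)
    weights = [w / total for w in weights]

true_env = make_env("b")  # the actual environment the agent is embedded in
for _ in range(3):
    a = act()
    update(a, true_env(a))
```

With only two hypotheses, one observation of a zero reward is enough to rule out the wrong model, after which the agent always picks "b". The point is that the machinery stays the same as the hypothesis list grows; only the feasibility changes.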

Comment author: shminux 28 January 2014 05:52:54PM 0 points [-]

Is getting a job in programming really contingent on your getting the degree, or rather on you being capable of doing the job?

Comment author: TimFreeman 28 February 2014 06:58:17AM *  1 point [-]

Getting a programming job is not contingent on getting a degree. There's an easy test for competence at programming in a job interview: ask the candidate to write code on a whiteboard. I am aware of at least one Silicon Valley company that does that and have observed them to hire people who never finished their BS in CS. (I'd rather ask candidates to write code and debug on a laptop, but the HR department won't permit it.)

Getting a degree doesn't hurt. It might push up your salary -- even if one company has enough sense to evaluate the competence of a programmer directly, the other companies offering jobs to that programmer are probably looking at credentials, so it's rational for a company to base salaries on credentials even if it is willing to hire someone who doesn't have them. Last I checked, a BS in CS made sense financially, an MS made some sense too, and a PhD was not worth the time unless you want a career writing research papers. I got a PhD, apparently to postpone coming into contact with the real world. Do not do that.

If you can't demonstrate competent programming in a job interview (either due to stage fright or due to not being all that competent), getting a degree is very important. I interview a lot of people and see a lot of stage fright. I have had people I worked with and knew to be competent not get hired because of how they responded emotionally to the interview situation. What I'm calling "stage fright" is really cognitive impairment due to the emotional situation; it is usually less intense than the troubles of a thespian trying to perform on stage. Until you've done some interviews, you don't know how much the interview situation will impair you.

Does anyone know if ex-military people get stage fright at job interviews? You'd think that being trained to kill people would fix the stage fright when there's only one other person in the room and that person is reasonably polite, but I have not had the opportunity to observe both the interview of an ex-military person and their performance as a programmer in a realistic work environment.

Comment author: TimFreeman 28 February 2014 06:10:01AM *  5 points [-]

I have experienced consequences of donating blood too often. The blood donation places check your hemoglobin, but I have experienced iron deficiency symptoms when my hemoglobin was normal and my serum ferritin was low. The symptoms were twitchy legs when I was trying to sleep, and insomnia; iron deficiency was confirmed with a ferritin test. The symptoms went away and my ferritin returned to normal when I took iron supplements and stopped donating blood, and I stopped the iron supplements after the normal ferritin test.

The blood donation places will encourage you to donate every two months, and according to a research paper I found when I was having this problem, essentially everyone who does that for two years will have low serum ferritin.

I have no reason to disagree with the OP's recommendation of donating blood every year or two.

Comment author: Jiro 26 October 2013 06:14:03PM -2 points [-]

The argument you made was that copy-and-destroy is not bad because a world where that is done is not worse than our own. In turn, your belief that it is not worse than our own is, as far as I can tell, based on the belief that you can compare that world to our own by comparing whether it is good for the people who remain alive, and ignoring whether it is good for the people who are killed. This implies that the fact that the person is killed doesn't count towards making the world worse because being dead, he can't know that he has been harmed, and because the other people don't feel the loss they would feel that goes with a normal death. This amounts to blissful ignorance (although I suppose the dead person can be more accurately described as having 'uncaring ignorance', since dead people aren't very blissful).

Check out Argumentum ad Populum. With all the references to "most people", you seem to be committing that fallacy so often that I am unable to identify anything else in what you say.

Pointing out that your definition of something, like harm, is shared by few people is not argumentum ad populum, it's pointing out that you are trying to sound like you're talking about something people care about but you're really not.

Comment author: TimFreeman 27 October 2013 07:39:51PM -1 points [-]

Well, I suppose it's an improvement that you've identified what you're arguing against.

Unfortunately the statements you disagree with don't much resemble what I said. Specifically:

The argument you made was that copy-and-destroy is not bad because a world where that is done is not worse than our own.

I did not compare one world to another.

Pointing out that your definition of something, like harm, is shared by few people is not argumentum ad populum, it's pointing out that you are trying to sound like you're talking about something people care about but you're really not.

I did not define "harm".

The disconnect between what I said and what you heard is big enough that saying more doesn't seem likely to make things better.

The intent to make a website for the purpose of fostering rational conversation is good, and this one is the best I know, but it's still so cringe-inducing that I ignore it for months at a time. This dialogue was typical. There has to be a better way but I don't know what it is.

Comment author: Jiro 25 October 2013 02:51:45PM 1 point [-]

"Most of us think X is bad" is perhaps true for the person-copying scheme

I'm not saying that most people think this scheme is bad, I'm saying that most people don't have the definition of harm that you do. Your idea that all harm must be knowing is not one commonly shared.

And the example has nothing to do with paternity. Most people would think that a world where people are cheated on but it is not discovered is one where the other partner is being harmed, simply because cheating on someone harms them and harm does not have to be knowing in order to be harm. Or, as I summarized, most people don't think blissful ignorance is a good thing.

Comment author: TimFreeman 26 October 2013 05:52:47AM 0 points [-]

Nothing I have said in this conversation presupposed ignorance, blissful or otherwise.

I give up, feel free to disagree with what you imagine I said.

Check out Argumentum ad Populum. With all the references to "most people", you seem to be committing that fallacy so often that I am unable to identify anything else in what you say.

Comment author: Jiro 21 October 2013 06:15:17PM 0 points [-]

This reasoning can be used to justify almost any form of "what you don't know won't hurt you". For instance, a world where people cheated on their spouse but it was never discovered would function, from the point of view of everyone, as well as or better than the similar world where they remained faithful.

Most of us think the former world is bad and, if pressed, would explain it by saying that blissful ignorance is not a good thing. Even though "my spouse cheats on me but I don't know it" and "my spouse doesn't cheat on me" are indistinguishable, I have been harmed in the former situation.

Comment author: TimFreeman 25 October 2013 05:50:46AM 1 point [-]

This reasoning can be used to justify almost any form of "what you don't know won't hurt you". For instance, a world where people cheated on their spouse but it was never discovered would function, from the point of view of everyone, as well as or better than the similar world where they remained faithful.

Your example is too vague for me to want to talk about. Does this world have children that are conceived by sex, children that are expensive to raise, and property rights? Does it have sexually transmitted diseases? Does it have paternity tests? Does it have perfect contraception? You stipulated that affairs are never discovered, so liberal use of paternity tests implies no children from the affairs.

I'm also leery of the example because I'm not sure it's relevant. If you turn off the children, in some scenarios you turn off the evolution so my idea of looking at evolution to decide what concepts are useful doesn't work. If you leave the children in the story, then for some values of the other unknowns jealousy is part of the evolutionarily stable strategy, so your example maybe doesn't work.

Can you argue your point without relying so much on the example? "Most of us think X is bad" is perhaps true for the person-copying scheme and if that's the entire content of your argument then we can't address the question of whether most of us should think X is bad.

Comment author: [deleted] 27 September 2013 04:25:11PM -1 points [-]

Whether you call it life or death is a choice, just as any other decision to use one word or another to describe a situation is a choice.

OTOH, some such choices are worse than others.

In response to comment by [deleted] on Quantum Mechanics and Personal Identity
Comment author: TimFreeman 21 October 2013 03:44:03PM 1 point [-]

OTOH, some such choices are worse than others.

If you have an argument, please make it. Pointing off to a page with a laundry list of 37 things isn't an argument.

One way to find useful concepts is to use evolutionary arguments. Imagine a world in which it is useful and possible to commute back and forth to Mars by copy-and-destroy. Some people do it and endure arguments about whether they are still the "same" person when they get back; some people don't do it because of philosophical reservations about being the "same" person. Since we hypothesized that visiting Mars this way is useful, the ones without the philosophical reservations will be better off, in the sense that if visiting Mars is useful enough they will be able to out-compete the people who won't visit Mars that way.

So if you want to say that going places by copy-and-destroy is a bad thing for the person taking the trip, you should be able to describe the important way in which this hypothetical world where copy-and-destroy is useful is different from our own. I can't do that, and I would be very interested if you can.

Freezing followed by destructive upload seems moderately likely to be useful in the next few decades, so this hypothetical situation with commuting to Mars is not irrelevant.

Comment author: TimFreeman 23 June 2013 06:34:35PM 0 points [-]

Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can't be defined within the logic, and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff induction would assign the hypothesis that they're telling the truth zero probability, whereas we would assign it some small but positive probability.

I'm not sure I understand you correctly, but there are two immediate problems with this:

  • If the goal is to figure out how useful Solomonoff induction is, then "a generalized version of Solomonoff Induction based on some second-order logic" is not relevant. We don't need random generalizations of Solomonoff induction to work in order to decide whether Solomonoff induction works. I think this is repairable, see below.
  • Whether the alien has a device that does such-and-such is not a property of the world, so Solomonoff induction does not assign a probability to it. At any given time, all you have observed is the behavior of the device for some finite past, and perhaps what the inside of the device looks like, if you get to see. Any finite amount of past observations will be assigned positive probability by the universal prior so there is never a moment when you encounter a contradiction.

If I understand your issue right, you can explore the same issue using stock Solomonoff induction: What happens if an alien shows up with a device that produces some uncomputable result? The prior probability of the present situation will become progressively smaller as you make more observations and asymptotically approach zero. If we assume quantum mechanics really is nondeterministic, that will be the normal case anyway, so nothing special is happening here.
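That shrinking-but-never-zero behavior can be illustrated with a toy mixture standing in for the universal prior. Everything below is invented for illustration: two hypotheses (one deterministic, one a fair coin) watch a random bit stream that the deterministic model cannot match.

```python
# Mixture over a tiny fixed hypothesis list, standing in for the
# universal prior. Each hypothesis maps an observed prefix to the
# probability it assigns that prefix.

import random

def det_zeros(prefix):
    # Deterministic hypothesis: the sequence is all zeros.
    return 1.0 if all(b == 0 for b in prefix) else 0.0

def bernoulli_half(prefix):
    # Stochastic hypothesis: independent fair coin flips.
    return 0.5 ** len(prefix)

hypotheses = [det_zeros, bernoulli_half]
prior = [0.5, 0.5]

random.seed(0)
prefix = []
probs = []
for _ in range(20):
    prefix.append(random.randint(0, 1))  # output the models can't pin down
    mixture = sum(w * h(prefix) for w, h in zip(prior, hypotheses))
    probs.append(mixture)
```

The mixture probability of the observed prefix is monotonically nonincreasing and heads toward zero, yet stays strictly positive at every step, so the observer never reaches an outright contradiction -- which is the point made above about the nondeterministic case being the normal one.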
