Comment author: MixedNuts 26 January 2012 02:25:23PM 20 points

Child sexual consent hits the same issues as child acting or any other thing that parents can allow.

(Warning: judging moral claims with System 1 is unreliable.) Thinking that as a kid I could have been allowed to have sex, could have had people annoying me with undesired propositions (even after they knew my age), and could have had people trying to manipulate me into sex makes me at most kind of uneasy. Thinking that my parents could have had any kind of say over it gave me a panic attack.

Comment author: ScottMessick 26 January 2012 10:48:10PM 10 points

Wow, when I read "should not be treated differently from those issues", I assumed the intended meaning was likely "child acting, indoctrination, etc., should be considered abuse and not tolerated by society"--a position I would tentatively support (tentatively, for lack of expertise).

Incidentally, I found many of the other claims to be at least plausible and discussion-worthy, if not probably true (and certainly not things that people should be afraid to say).

Comment author: mwengler 19 January 2012 03:55:49PM 4 points

To be fair, Eliezer gets good press from Professor Robin Hanson. This is one of the main bulwarks of my opinion of Eliezer and SIAI. (Other bulwarks include having had the distinct pleasure of meeting lukeprog at a few meetups and meeting Anna at the first meetup I ever attended. Whatever else is going on at SIAI, there is a significant amount of firepower in the rooms.)

Comment author: ScottMessick 21 January 2012 11:47:14PM 5 points

Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?

Comment author: wedrifid 20 January 2012 03:34:01AM 4 points

The point of that was that dissolving free will is an exercise (a rather easy one once you know what you're doing), and it probably shouldn't be short-circuited.

My point was that I didn't approve of making that point in that manner in that place.

I refrained from nuking the page myself, but I don't have to like it. I support Brilee's observation that going around doing that sort of thing is bad PR for Eliezer Yudkowsky, which has non-trivial relevance to SingInst's arrogance problem.

Comment author: ScottMessick 21 January 2012 11:28:04PM 2 points

One issue is that the same writing sends different signals to different people. I remember thinking about free will early in life (my parents thought they'd tease me with the age-old philosophical question) and, a little later in life, deciding that I had basically solved it--that people were simply thinking about it the wrong way. People around me often didn't accept my solution, but I was never convinced that they even understood it (not due to stupidity, but to a failure to adjust their perspective in the right way), so my confidence remained high.

Later I noticed that my solution is a standard kind of "compatibilist" position, one that philosophers give the same attention as many other positions and sub-positions: fiercely yet politely discussed, without the slightest suggestion that it is a solution, or even that it is more valid than other positions except as the one a particular author happens to prefer.

Later I noticed that my solution was also independently reached and exposited by Eliezer Yudkowsky (on Overcoming Bias before LW was created, if I remember correctly). The solution was clearly presented as exactly that--a solution--and one which is easy to find with the right shift in perspective: an answer to a wrong question. I immediately and significantly updated upward the likelihood that the same author would make further intellectual contributions useful to my taste, and found the honesty thoroughly refreshing.

Comment author: ScottMessick 02 January 2012 05:03:34AM 1 point

What is "intuition" but any set of heuristic approaches to generating conjectures, proofs, etc., and judging their correctness, which isn't a naive search algorithm through formulas/proofs in some formal logical language? At a low level, all mathematics, including even the judgment of whether a given proof is correct (or "rigorous"), is done by intuition (at least, when it is done by humans). I think in everyday usage we reserve "intuition" for relatively high level heuristics, guesses, hunches, and so on, which we can't easily break down in terms of simpler thought processes, and this is the sort of "intuition" that Terence Tao is discussing in those quotes. But we should recognize that even regarding the very basics of what it means to accept a proof is correct, we are using the same kinds of thought processes, scaled down.

Few mathematicians want to bother with actual formal logical proofs, whether producing them or reading them.

(And there's an even subtler issue: logicians don't have any one really convincing formal foundation to offer, and Gödel's second incompleteness theorem makes it hard to know which ones are even consistent--if ZFC turned out to be inconsistent, would that mean that most of our math is wrong? Probably not, but since people often cite ZFC as the formal logical basis for their work, what grounds do we have for this prediction?)

Comment author: ScottMessick 18 December 2011 03:51:45AM *  0 points

Imagine you have an oracle that can determine if an arbitrary statement is provable in Peano arithmetic. Then you can try using it as a halting oracle: for an arbitrary Turing machine T, ask "can PA prove that there's an integer N such that T runs for N steps and then halts?". If the oracle says yes, you know that the statement is true for the standard integers, because they are a model of PA; therefore N is a standard integer, therefore T halts. And if the oracle says no, you know that there's no such standard integer N, because otherwise the oracle would've found a long and boring proof involving the encoding of N as SSS...S0; therefore T doesn't halt. So your oracle can indeed serve as a halting oracle.

I don't think this works. We can't expect PA to decide whether or not any given Turing machine halts. For example, there is a machine which enumerates the theorems proven by PA and halts if it ever encounters a proof of 0=1. By Gödel's second incompleteness theorem, PA will not prove that this machine does not halt. (I'm assuming PA is consistent.) This argument works for any stronger consistent theory as well, such as ZFC or even much stronger ones. Note: I basically stole this argument from Scott Aaronson.
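For concreteness, here's a minimal sketch of that machine in Python. The helpers `enumerate_pa_proofs` and `conclusion` are hypothetical stand-ins (neither is a real library function); the point is only the structure of the loop.

```python
def enumerate_pa_proofs():
    """Hypothetical: yield every PA proof, e.g. in order of length."""
    raise NotImplementedError

def conclusion(proof):
    """Hypothetical: return the sentence that `proof` proves."""
    raise NotImplementedError

def halt_if_pa_inconsistent():
    """Halts if and only if PA proves 0=1, i.e. iff PA is inconsistent."""
    for proof in enumerate_pa_proofs():
        if conclusion(proof) == "0 = 1":
            return  # halt: found a proof of a contradiction
    # If PA is consistent, no such proof ever turns up and the loop
    # runs forever.
```

Assuming PA is consistent, "this machine never halts" is exactly Con(PA), which is the statement the second incompleteness theorem says PA cannot prove.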

Note that this is different from the question of whether or not the halting problem is reducible to the set of theorems of PA (i.e., whether the oracle you've specified is enough to compute whether a given TM halts). It's just that this particular approach does not give such an algorithm.

ETA: I was in error; see replies. In the OP, PA doesn't need to prove that a non-halting machine doesn't halt; it only needs to fail to prove that it halts (and it certainly fails to, if we believe PA is sound).
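To make the corrected picture concrete, here's a minimal sketch of the quoted reduction, with `pa_provable` standing in for the hypothetical provability oracle (no computable one exists) and `encode_halting_statement` an assumed encoder of the halting claim as a PA sentence:

```python
def pa_provable(sentence):
    """Hypothetical oracle: is `sentence` provable in Peano arithmetic?"""
    raise NotImplementedError

def encode_halting_statement(machine):
    """Hypothetical encoder: the PA sentence 'there is an N such that
    `machine` runs for N steps and then halts'."""
    raise NotImplementedError

def halts(machine):
    """A halting oracle built from the PA-provability oracle."""
    if pa_provable(encode_halting_statement(machine)):
        # The standard integers are a model of PA, so the witness N is a
        # standard integer: the machine really halts.
        return True
    # If the machine halted after some standard number of steps n, PA
    # would prove it by the long, boring proof that encodes n as
    # SSS...S0 and simulates the run, so "no proof" means "no halting".
    return False
```

Note the asymmetry: the "yes" branch leans on the standard integers being a model of PA, while the "no" branch leans on PA proving every true statement of the form "T halts within n steps".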

Comment author: ScottMessick 09 December 2011 12:25:55AM 2 points

Scott Aaronson (a well-known complexity theorist) has written a survey article about exactly this question.

Comment author: MinibearRex 27 August 2011 12:40:56AM 20 points

Once Alzheimer's starts, the damage it does to the victim's brain appears to be permanent. The information appears to be gone. Once your memories start to get wiped away, I don't think there is any real way to get the information back, so when we do invent a cure for Alzheimer's, all it is likely to do is stop the deterioration of your brain, not bring your lost memories back. If we could start growing new neurons, you could probably get your brain functioning back to normal, but I doubt you could get much else.

Both my grandparents have Alzheimer's, and the disease is at a rather late stage. They're gone. They don't remember me; they don't remember my mother; they barely even recognize each other, and they've been married over 50 years. Even if I got my hands on a cure for Alzheimer's disease tomorrow, they would "wake up" in 80+ year old bodies, with no memories of their children and grandchildren.

Given my own experience and knowledge of the disease, if I knew someone who had been diagnosed and was starting to show signs of the disease, I would recommend that they get themselves frozen as quickly as possible.

Comment author: ScottMessick 28 August 2011 06:38:17PM 1 point

Upvoted for accuracy. My maternal grandmother is the same way, and the politics in my mother's family over how to deal with her empty shell are unpleasant enough on their own, to say nothing of the fact that she died so slowly that hardly anyone acknowledged it as it was happening.

Comment author: PhilGoetz 22 August 2011 07:58:50PM *  3 points

I think the larger question about rationality is: when is it good for us, and when is it bad for us?

I suffer more from too much rationality than too little. I have a hard time making decisions. I spend too much time thinking about things that other people handle competently without much thought. Rationality to the degree you desire may not be an evolutionarily stable strategy--your rationality may provide a net benefit to society, and a net cost to you.

On the level of society, we don't know whether a society of rational personal-utility maximizers could out-compete a society of members biased in ways that privilege the society over the individual. Defining "rational" as "maximizing personal utility" is a more radical step than most people realize.

On the even higher level of FAI, we run into the question of whether rationality is a good thing for God to have. Rationality only makes sense if you have values to maximize. If God had many values, it would probably make the universe a more homogeneous and less interesting place.

Comment author: ScottMessick 23 August 2011 01:49:35AM 0 points

I think you mean, "When is it irrational to study rationality explicitly?"

Comment author: ScottMessick 23 August 2011 01:45:46AM 1 point

For me there is always the lurking suspicion that my biggest reason for reading LessWrong is that it's a whole lot of fun.

Comment author: Jack 21 August 2011 02:49:17PM 2 points

A species that evolved intelligence but for which the social brain hypothesis is false might be very different--but I don't know whether it is plausible that such a species would develop a civilization.

Comment author: ScottMessick 21 August 2011 04:34:01PM 5 points

Do you mean what Eliezer calls the Machiavellian intelligence hypothesis? (That is, human intelligence evolved via runaway sexual selection--people who were politically competent were more reproductively successful, and as people got better and better at the game of politics, so too did the game get harder and harder, hence the feedback loop.)

Perhaps a species could evolve intelligence without such a mechanism, if something about its environment were dramatically more complex in some peculiar way compared with ours, so that intelligence was worthwhile for purely non-social purposes. The species' ancestors may have been large predators at the top of the food chain--typically solitary and territorial, hunting over a large region of land--in an ecosystem strangely different from ours in some way I'm not specifying (but you'd need a pretty clever idea about it to make this whole thing work).

These aliens wouldn't be inherently social the way humans are, but they wouldn't be antisocial either--they would have noticed that cooperation lets them solve even more difficult problems and get even more from their environment. (Still in a pre-technological stage. Remember, something about this environment provides a nearly smooth, practically unbounded (in difficulty) array of puzzles/problems to solve, with increasing rewards.) Eventually, they might build a civilization just to organize more efficiently to obtain these benefits, which would also allow them to advance technologically. (I'm probably drawing too sharp a distinction between technological advancement and interaction with their strangely complex environment.)

They might lack a trait that is central to human nature: affection through familiarity. When we spend a lot of time with a person/thing/activity/idea, we grow fond of it. They might not have this, or they might not have it in such generality (e.g., they might still have it for, say, mates, if they reproduce sexually). They might also be a lot less biased than we are by social considerations, for the obvious reason, but perhaps they'd have less raw cognitive horsepower (their environment being no substitute for the human pastimes of politics and sex).

Recklessly speculative, obviously, but I gather that's all we can hope to offer to Solvent.
