All of ScottMessick's Comments + Replies

I was disappointed to see my new favorite "pure" game Arimaa missing from Bostrom's list. Arimaa was designed to be intuitive for humans but difficult for computers, making it a good test case. Indeed, I find it to be very fun, and computers do not seem to be able to play it very well. In particular, computers are nowhere close to beating top humans despite the fact that there has arguably been even more effort to make good computer players than good human players.

Arimaa's branching factor dwarfs that of Go (which in turn beats every other com... (read more)

0Houshalter
Reportedly this just happened recently: http://games.slashdot.org/story/15/04/19/2332209/computer-beats-humans-at-arimaa Go is super close to being beaten, and AIs do very well against all but the best humans.

The Pentium FDIV bug was actually discovered by someone writing code to compute prime numbers.
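For the curious, the canonical quick check (the division that made the flaw famous; on an affected Pentium the result was wrong from the fourth significant digit on):

```python
# Canonical FDIV test case. Correct value: ~1.333820449...
# A flawed Pentium's FPU returned ~1.333739068... instead.
print(4195835.0 / 3145727.0)
```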

Suggestions for Slytherin: Sun Tzu's Art of War and some Nietzsche, maybe The Will to Power?

Suggestion for Ravenclaw: An Enquiry Concerning Human Understanding, David Hume.

2TimS
I don't think Nietzsche fits very easily into any House shelf, but Slytherin least of all.

The post seems to confuse the law of non-contradiction with the principle of explosion. To understand this point, it helps to know about minimal logic, which is like intuitionistic logic but even weaker, as it treats ⊥ (false) the same way as any other primitive predicate. Minimal logic rejects the principle of explosion as well as the law of the excluded middle (LEM, which the main post called TND).

The law of non-contradiction (LNC) is just ¬(P ∧ ¬P). (In the main post this is called ECQ, which I believe is erroneous; ECQ should refer to the principle of explos... (read more)
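For reference, the three principles side by side (standard statements; P and Q arbitrary propositions):

```latex
\text{LNC (non-contradiction):}\quad \neg(P \land \neg P)\\
\text{ECQ (explosion):}\quad (P \land \neg P) \vdash Q\\
\text{LEM/TND (excluded middle):}\quad P \lor \neg P
```

Minimal logic proves LNC (reading ¬A as A → ⊥, the usual derivation goes through) but validates neither ECQ nor LEM.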

1MrMind
I think it's good to make all these distinctions in the comments, so that the main post is not cluttered, but at the same time anyone who wants all the details can get them just by reading further.

I'm not going to say they haven't been exposed to it, but I think quite few mathematicians have ever developed a basic appreciation and working understanding of the distinction between syntactic and semantic proofs.

Model theory is, very rarely, successfully applied to solve a well-known problem outside logic, but you would have to sample many random mathematicians before you could find one who could tell you exactly how, even if you restricted yourself to only asking mathematical logicians.

I'd like to add that in the overwhelming majority of academic research in m... (read more)

The explanation "number of partners" question is problematic right now. It reads "0 for single, 1 for monogamous relationship, >1 for polyamorous relationship" which makes it sound like you must be monogamous if you happen to have 1 partner. I am polyamorous, have one partner and am looking for more.

In fact, I started wondering if it really meant "ideal number of partners", in which case I'd be tempted to put the name of a large cardinal.

I continue to be surprised (I believe I commented on this last year) that under "Academic fields" pure mathematics is not listed on its own. It is also not clear to me that pure mathematics is a hard science. Relatedly, are non-computer-science engineering folk expected to write in answers?

I second this: please include pure mathematics. I imagine there are a fair few of us, and there's no agreed upon way to categorize it. I remember being annoyed about this last year. (I'm pretty sure I marked "hard sciences".)

I wonder how it would be if you asked "When should we say a statement is true?" instead of "What is truth?", and whether your classmates would think them the same (or at least closely related) questions.

5[anonymous]
"Philosophy landmines." I hope that phrase serves you well.
4tgb
Agreed. It seems to me that we as a culture expect "What is truth?" to have a mysterious answer, and we expect the asker to be looking for the best-sounding answer.

I think this hypothesis is worth bearing in mind. However, it doesn't explain advancedatheist's observation that wealthy cryonicists are eager to put a lot of money in revival trusts (whose odds of success are dubious, even if cryonics works) rather than donate to improve cryonics research or the financial viability of cryonics organizations.

0V_V
Maybe it's something like the Egyptian pharaohs putting gold and valuables in their pyramids.

I was mainly worried that she would suffer information-theoretic death (or substantial degradation) before she could be cryopreserved.

What about the brain damage her tumor is causing?

This seems important and I'm a little surprised no one's asked. How will her brain damage impact her chances of revival? (From the blog linked in the reddit post, it sounds like she is already experiencing symptoms.) Obviously she is quite mentally competent right now, but what about when she is declared legally dead? I am far from an expert and simply would like to hear some authoritative commentary on this. I am interested in donating but only if there's a reasonable chance brain damage won't make it superfluous.

4tarwatirno
It is in her brainstem, which, while it makes the tumor very difficult to treat, probably increases her chances of being revived intact.
6Eudoxia
Jim Glennie (A-1367) had a glioblastoma multiforme, and cryoprotective perfusion achieved the best glycerol concentration at the time (6.02M glycerol, 1992). A-2091 (name withheld) also had a glioblastoma, and reportedly "target cryoprotectant concentration was reached in the brain". Thomas Donaldson (A-1097) had an astrocytoma (I guess astrocytes are a kind of glial cell, but I doubt the comparison can be extended further) and his cryopreservation was very good [p.16]. Disclaimer: I am not medically trained. EDIT: I'm not sure if you're referring to brain damage affecting cryoprotection or brain damage affecting her mental state and making her opt out.

This is a really good exposition of the two envelopes problem. I recall reading a lot about that when I first heard it, and didn't feel that anything I read satisfactorily resolved it, which this does. I particularly liked the more precise recasting of the problem at the beginning.

(It sounds like some credit is also due to VincentYu.)

I haven't read the article, but I want to point out that prisons are enormously costly. So there is potentially still much to gain even if the new system is only equally effective at deterrence and rehabilitation.

The fact that prisons are inhumane is another issue, of course.

I had long ago (but after being heavily influenced by Overcoming Bias) thought that signaling could be seen simply as a corollary to Bayes' theorem. That is, when one says something, one knows that its effect on a listener will depend on the listener's rational updating on the fact that one said it. If one wants the listener to behave as if X is true, one should say something that the listener would only expect in case X is true.

Thinking in this way, one quickly arrives at conclusions like "oh, so hard-to-fake signals are stronger" and "if... (read more)
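To make that concrete, here is a toy calculation (the numbers are mine, purely illustrative): the strength of a signal is its likelihood ratio, so a signal that is rarely produced when X is false moves the listener much further.

```python
def posterior(prior, p_signal_given_x, p_signal_given_not_x):
    """Listener's updated belief in X after observing the signal (Bayes' theorem)."""
    joint_x = prior * p_signal_given_x
    joint_not_x = (1 - prior) * p_signal_given_not_x
    return joint_x / (joint_x + joint_not_x)

prior = 0.5  # listener starts undecided about X

# Easy-to-fake signal: almost as likely to be sent when X is false.
print(posterior(prior, 0.9, 0.6))   # ~0.60 -- weak update

# Hard-to-fake signal: rarely sent when X is false.
print(posterior(prior, 0.9, 0.05))  # ~0.95 -- strong update
```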

6patrickmclaren
Your math department example reminds me of a few experiences. From time to time, I'd be present when a small group of 3-4 professors were quietly discussing roadblocks in their research. Problems would be introduced, mentioning a number of unexpectedly connected fields, Symplectic This-Thats, and the Cohomology of Riff-Raffs. Eventually, as the speaker relaxed and their anxiety settled, it would turn out that they were having trouble with an inequality and had lost a constant along the way. So the group would get to work, perhaps they would be able to fix the issue, and then the next speaker in the circle would start to announce his problem. What was surprising to me was that they were not strangers. Most had been friends for over a decade. I wonder if the others were even still listening to the name-dropping. The context it provided wasn't at all helpful for finding a typo, that's for sure. I suppose it may be nice for "Keeping up with the Joneses", so to speak.
5kalos
This article made me think the same thing. Signaling is essentially gaming Bayes' theorem: providing what one believes others will count as evidence of appropriate strength, to get them to update to a desired conclusion.

I'm really glad you pointed out that SI's strategy is not predicated on hard take-off. I don't recall if this has been discussed elsewhere, but that's something that always bothered me, since I think hard take-off is relatively unlikely. (Admittedly, soft take-off still considerably diminishes the expected impact I assign to SI and to donating to it.)

0Bruno_Coelho
For some time I thought EY supported hard takeoff -- the bunch-of-guys-in-a-garage argument -- but if Luke now says it's not so, then OK.

But this elegant simplicity was, like so many other things, ruined by the Machiguenga Indians of eastern Peru.

Wait, is this a joke, or have the Machiguenga really provided counterexamples to lots of social science hypotheses?

3A1987dM
I took the “like so many other things” to only apply to “was ruined”, not to “was ruined by the Machiguenga”...
1ShardPhoenix
I think he means that many elegant, simple hypotheses have obscure counterexamples, not that the Machiguenga Indians are typically one of those counterexamples.
5KPier
He also says: I'm guessing both are a joke.

These phrases are mainly used in near mode, or when trying to induce near mode. The phenomenon described in the quote is a feature (or bug) of far mode.

I have direct experience of someone highly intelligent, a prestigious academic type, dismissing SI out of hand because of its name. I would support changing the name.

Almost all the suggestions so far attempt to reflect the idea of safety or friendliness in the name. I think this might be a mistake, because for people who haven't thought about it much, this invokes images of Hollywood. Instead, I propose having the name imply that SI does some kind of advanced, technical research involving AI and is prestigious, perhaps affiliated with a university (t... (read more)

0roll
Hmm, what do you think would have happened with that someone if the name had been more attractive and that person had spent more time looking into SI? Do you think that person wouldn't ultimately have dismissed it? Many of the premises here seem more far-fetched than the singularity. I know that from our perspective it'd be great to have feedback from such people, but it wastes their time, and it is unclear if that is globally beneficial.
2[anonymous]
This name might actually sound scary to people worried about AI risks.

Summary: Expanding on what maia wrote, I find it plausible that many people could produce good technical arguments against cryonics but don't simply because they're not writing about cryonics at all.

I was defending maia's point that there are many people who are uninterested in cryonics and don't think it will work. This class probably includes lots of people who have relevant expertise as well. So while there are a lot of people who develop strong anti-cryonics sentiments (and say so), I suspect they're only a minority of the people who don't think cr... (read more)

I think you may be missing a silent majority of people who passively judge cryonics as unlikely to work, and do not develop strong feelings or opinions about it besides that, because they have no reason to. I think this category, together with "too expensive to think about right now", forms the bulk of intelligent friends with whom I've discussed cryonics.

3Paul Crowley
I don't think you're addressing the subject of this thread, which is "does there exist a strong technical argument against cryonics that a lot of people already know".

Wow, when I read "should not be treated differently from those issues", I assumed the intention was likely to be "child acting, indoctrination, etc., should be considered abuse and not tolerated by society", a position I would tentatively support (tentatively due to lack of expertise).

Incidentally, I found many of the other claims to be at least plausible and discussion-worthy, if not probably true (and certainly not things that people should be afraid to say).

Yes, and isn't it interesting to note that Robin Hanson sought his own higher degrees for the express purpose of giving his smart contrarian ideas (and way of thinking) more credibility?

One issue is that the same writing sends different signals to different people. I remember thinking about free will early in life (my parents thought they'd tease me with the age-old philosophical question) and, a little later in life, thinking that I had basically solved it--that people were simply thinking about it the wrong way. People around me often didn't accept my solution, but I was never convinced that they even understood it (not due to stupidity, but failure to adjust their perspective in the right way), so my confidence remained high.

Later I ... (read more)

What is "intuition" but any set of heuristic approaches to generating conjectures, proofs, etc., and judging their correctness, which isn't a naive search algorithm through formulas/proofs in some formal logical language? At a low level, all mathematics, including even the judgment of whether a given proof is correct (or "rigorous"), is done by intuition (at least, when it is done by humans). I think in everyday usage we reserve "intuition" for relatively high level heuristics, guesses, hunches, and so on, which we can't eas... (read more)

Imagine you have an oracle that can determine if an arbitrary statement is provable in Peano arithmetic. Then you can try using it as a halting oracle: for an arbitrary Turing machine T, ask "can PA prove that there's an integer N such that T makes N steps and then halts?". If the oracle says yes, you know that the statement is true for standard integers because they're one of the models of PA, therefore N is a standard integer, therefore T halts. And if the oracle says no, you know that there's no such standard integer N because otherwise the o

... (read more)
2cousin_it
The post argues that asking "does this machine halt?" is always equivalent to asking the oracle "does PA prove that this machine halts?" A counterexample should be a machine for which the answers to these two questions are different. Your machine is a "no" on both questions (it doesn't halt and PA doesn't prove that it halts), so it doesn't seem to be a counterexample.
1dbaupp
The oracle isn't working in PA, it's just deciding statements that are in PA.
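To spell out the quoted reduction in code (a sketch only; `pa_provable` is a hypothetical oracle for PA-provability, and all names here are mine, purely illustrative):

```python
def halts(T, pa_provable):
    """Decide halting for Turing machine T, given an oracle for provability in PA.

    pa_provable(s) is assumed to return True iff sentence s is a theorem of PA.
    """
    s = f"there exists an integer N such that {T} makes N steps and then halts"
    if pa_provable(s):
        # PA's theorems hold in the standard model, so a standard N exists: T halts.
        return True
    # If T halted after some concrete number of steps n, PA could check that finite
    # computation step by step and prove the sentence. Since PA doesn't prove it,
    # no standard N exists: T never halts.
    return False
```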

Scott Aaronson (a well-known complexity theorist) has written a survey article about exactly this question.

Upvoted for accuracy. My maternal grandmother is the same way, and just the resulting politics in my mother's family over how to deal with her empty shell are unpleasant, let alone the fact that she died so slowly that hardly anyone acknowledged it as it was happening.

I think you mean, "When is it irrational to study rationality explicitly?"

For me there is always the lurking suspicion that my biggest reason for reading LessWrong is that it's a whole lot of fun.

Do you mean what Eliezer calls the Machiavellian intelligence hypothesis? (That is, human intelligence evolved via runaway sexual selection--people who were politically competent were more reproductively successful, and as people got better and better at the game of politics, so too did the game get harder and harder, hence the feedback loop.)

Perhaps a species could evolve intelligence without such a mechanism, if something about their environment is just dramatically more complex in a peculiar way compared with ours, so that intelligence was worthwhile j... (read more)

Ah, now I feel extremely silly. The irony did not occur to me; it was simply a long comment that I agreed with completely, and I wasn't satisfied merely upvoting it because it didn't have any (other) upvotes yet at the time. Plus, doubly ironically, I was on a moral crusade to defend the karma system...

Imagine a thousand professional philosophers would join lesswrong, or worse, a thousand creationists.

This test seems rather unfair--it's pretty much a given that people who join LessWrong are likely to be already sympathetic to LessWrong's way of thinking. Besides, the only way to avoid a situation where thousands of dissidents joining could wreck the system is to have centralized power, i.e., more traditional moderation, which I think we were hoping to avoid for exactly the types of reasons that are being brought up here (politics, etc.).

The ava

... (read more)

Why all the karma bashing? Yes, absolutely, people will upvote or downvote for political reasons and be heavily influenced by the name behind the post/comment. All the time. But as far as I can tell, politics is a problem with any evaluation system whatsoever, and karma does remarkably well. In my experience, post and comment scores are strongly correlated with how useful I find them, how much they contribute to my experience of the discussion. And the list of top contributors is full of people who have written posts that I have saved forever, that ... (read more)

I have seen too many discussions of Friendly AI, here and elsewhere (e.g. in comments at Michael Anissimov's blog), detached from any concrete idea of how to do it....

At present, it is discussed in conjunction with a whole cornucopia of science fiction notions such as: immortality, conquering the galaxy, omnipresent wish-fulfilling super-AIs, good and bad Jupiter-brains, mind uploads in heaven and hell, and so on. Similarly, we have all these thought-experiments: guessing games with omniscient aliens, decision problems in a branching multiverse, "tor

... (read more)

I don't know if there's a big reason behind it, but because .com is so entrenched as the "default" TLD, I think it's probably best to be LessWrong.com rather than LessWrong.net or any other choice, simply because "LessWrong.com" is more likely to be correctly remembered by people who hear of it briefly, or correctly guessed by people who heard "Less Wrong" and randomly take a stab at their browser's navigation bar.

I admit this point may be relatively trivial, since it's the first Google hit for "less wrong" and that's probably how a lot of people who've only heard of it look for it.

Agreed, would like to see this again in the form of a top-level post. This cuts at the heart of one of the most important sets of lies we are told by society and expected to repeat.

2[anonymous]
+1. Weaponized morality is clearly a fact of history, both on a personal and on a societal level, yet it seems the best people elsewhere can do is say boo religion and boo ideology, without discussing how the ethical frameworks that support them are themselves memetically selected and spread, or even generated precisely for that purpose. Even here on LW this is an unpleasant, unspeakable truth, despite the fact that basically all of us have the building blocks for this interpretation right in front of our noses (namely self-serving rationalizations, bits of signaling theory, and accepting the concept of memes and memeplexes)! Please make a top-level post of this, HughRistik, even if you just copy-paste it!

Personally, I'd need to see good reasons to expect the charity I'm donating to is going to have a significant positive impact before I consider donating, relative to other charities I might be able to find on my own. Inefficiency, corruption, and poor choice of target are major concerns. (One example of the latter issue might be donating to help the US poor when it's possible to just as efficiently help people who are far worse off somewhere else.) Also, the mechanism by which to help may be poorly thought out. (Do the poor really need education, as opposed to ... (read more)

So Alcor runs at a loss and doesn't actually freeze that many people because it can't afford to?

This seems extremely misleading. Unless I'm very much mistaken, Alcor cryopreserves every one of its members upon their legal death to the absolute best of its ability, as indeed they are contractually obligated to do. They even now have an arrangement with Suspended Animation so that an SA team can provide SST (standby, stabilization, and transport) in cases where Alcor cannot easily get there in time. (SA is a for-profit company founded to provide exactl... (read more)

But I think those are examples of neurons operating normally, not abnormally. Even in the case of mind-influencing drugs, mostly the drugs just affect the brain on its own terms by altering various neurotransmitter levels. On the other hand, a low-level emulation glitch could distort the very rules by which information is processed in the brain.

0asr
Note that I am distinguishing "design shortcomings" from "bugs" here. I don't quite see how you'd get "the overall rules" wrong. I figure standard software engineering is all that's required to make sure that the low-level pieces are put together properly. Possibly this is just a failure of imagination on my part, but I can't think of an example of a defect that is more pervasive than "we got the neuron/axon model wrong." And if you're emulating at the neuron level or below, I'd figure that an emulation shortcoming would look exactly like altering neural behavior.

They aren't showing up in comments on the older posts though (see above links). Perhaps the folks looking at the code now can explain why.

Yes, same symptoms. With the letters and the blockquotes.

EDIT: Also, it's not consistent for me even on this page. I can see the 'c' (letter after 'b') in "blockquotes" in your post that I replied to, and in a few other comments, including mine, but not in the original post.

0lukeprog
Yeah. The letter 'c' and 'x' show up in comments but not in post bodies.

Disclaimer: my formal background here consists only of an undergraduate intro to neuroscience course taken to fulfill a distribution requirement.

I'm wondering if this is actually a serious problem. Assuming we are trying to perform a very low-level emulation (say, electro-chemical interactions in and amongst neurons, or lower), I'd guess that one of two things would happen.

0) The emulation isn't good enough, meaning every interaction between neurons has a small but significant error in it. The errors would compound very, very quickly, and the emulated m... (read more)
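As a back-of-the-envelope illustration of case 0 (all numbers made up; this is not a neuroscience model): even a tiny relative error applied at every step of a long causal chain compounds multiplicatively and quickly swamps the signal.

```python
# Toy model: a quantity passed along a chain of emulated interactions, each
# multiplying in a small relative error eps (numbers invented for illustration).
eps = 1e-6      # assumed relative error per interaction
rate = 1_000    # assumed interactions per second along one causal chain

for seconds in (1, 60, 3_600, 86_400):
    factor = (1 + eps) ** (rate * seconds)
    print(f"after {seconds:>6} s: cumulative error factor ~ {factor:.3g}")

# After 1 s the factor is ~1.001, but after a day it is ~3e+37 --
# the errors "compound very, very quickly."
```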

0asr
I wonder if a lossy emulation might feel like/act like a human with a slightly altered brain chemistry. We have lots of examples of what it's like to have your neurons operating abnormally, due to emotion, tiredness, alcohol, other chemicals, etc etc. I'm not sure "uncanny valley" is the best term to capture that.

While beautifully written, it does sound like an idealist's dream. Or at least you have said very little to suggest otherwise.

More downvotes would send you to negative karma if there is such a place, and that's a harsh punishment for someone so eloquent. In sparing you a downvote, I encourage you to figure out what went wrong with this post and learn from it.

I downvoted the OP. A major turn-off for me was the amount of rhetorical flourish. While well-written posts should include some embellishment for clarity and engagement, when there's this much of i... (read more)

3Peter Wildeford
Maybe, but I think that's just because the post was also low on specifics. If Arandur brought the flourish and the specifics, I think it would be great, and would balance out the other stuff that can appear boring, dry, and overly technical. Though it could just be a difference in preferences.

Robin Hanson had an old idea about this which I liked: http://hanson.gmu.edu/equatalk.html

It's not going to be a silver bullet, but I think it would work well in contexts where the set of people in the conversation, and how long it should last, are well defined. Situations where an ad hoc committee is expected to meet and produce a solution to a problem, but there is no clear leader, for example. (Or there is a clear leader, but, lacking expertise herself, she chooses to make use of this mechanism.)

It'd be nice to see a study on whether "Equ... (read more)

Fascinating question. I share your curiosity, and I'm not at all convinced by any attempted explanations so far. Further, I note that the trend makes a prediction: an economic crunch will be followed by a swell of corresponding magnitude. So who wants to go invest in the US stock market now?