
Comment author: Kal 26 November 2012 01:16:16PM 0 points [-]

I misunderstood the Q. In my opinion, yes. There is no other way to get a sound (i.e., based on sheer deductive coherence) grasp of the subject. So, attack the issue for yourself:

The first one has a very minor error btw - prob a typo. Good exercise to find it. A friend of mine just pointed it out to me.

Comment author: army1987 26 November 2012 01:09:39PM 2 points [-]

I've always found it funny how modern society is basically formally libertarian about sex and not nearly anything else.

Some part of my brain tells me that can't be right... and yet I can't think of any actual counterexample.

Comment author: Kal 26 November 2012 01:07:58PM 1 point [-]

Thanks, Multiheaded.

I wonder what an FAI would know about human motivations, dynamics, and joys that we don't, such that it would choose differently from the scenario above.

Based on my understanding thus far, this would be a consummation devoutly to be wished. Separating man from man, except where they voluntarily choose to interact. Of course, I likely misunderstand.

Comment author: listic 26 November 2012 12:51:09PM *  2 points [-]

Now that the whole thing is released, you can favorite and follow it again.

Comment author: MileyCyrus 26 November 2012 12:34:53PM 2 points [-]

There is a blog that I would care never to read again, even in moderation. I added the blog to my localhost list so now I can't visit the blog anymore. But my lizard-brain has found a workaround: if I google the blog I can read Google's cache. Is there a way to block Google's cache of the blog without blocking the rest of Google's functions?
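A minimal sketch of one way to extend the same hosts-file trick, assuming a Unix-like /etc/hosts and that Google's cached copies are served from webcache.googleusercontent.com (blocking that host blocks every cached page, not just this blog's, so it's a blunt instrument):

```python
# Append blocking entries to the hosts file (requires root/admin).
# "example-blog.com" is a placeholder for the blog in question.
HOSTS_FILE = "/etc/hosts"
ENTRIES = [
    "127.0.0.1 example-blog.com",
    "127.0.0.1 webcache.googleusercontent.com",
]

with open(HOSTS_FILE, "a") as f:
    f.write("\n".join(ENTRIES) + "\n")
```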

Comment author: negamuhia 26 November 2012 12:28:02PM 3 points [-]

I know it's days later... but I'm interested.

Comment author: MugaSofer 26 November 2012 12:19:05PM -2 points [-]

This is just so utterly over the top I'm mystified that it was taken as anything but ritual insulting for the purpose of bonding/hazing in an informal group.

Hahaha no. That wasn't a "hazing ritual". Not even slightly.

Comment author: MugaSofer 26 November 2012 12:18:08PM -2 points [-]

Well, there goes whatever respect I had for this "gwern" fellow; at least regarding topics involving gender. Good to know these things, I guess. Upvoted.

Comment author: Luke_A_Somers 26 November 2012 11:26:50AM 4 points [-]

We know we're winning when they begin making rationality friends music videos.

Comment author: loup-vaillant 26 November 2012 11:23:37AM 4 points [-]

I have a related question which may be of import for neuros. I heard that a significant part of our nervous system lies in the gut, and is sometimes called "the second brain" in jest. As another example, I'm an amateur musician, and I'm a bit worried that semi-automatic processing of finger motion may lie in nervous ganglia well outside my main brain, closer to my fingers (which may be necessary to play impossible pieces while smiling - no, I'm not that good). It would be a chore to learn basic movements again.

My question is: do we have any evidence about whether important information that cannot be recovered from stem cells may lie outside our skulls? (Which is not the same as saying the brain holds most such information.) Stated a bit differently, do we have reasons to think that 1.000 fidelity for neuros is impossible, even in principle?

In response to comment by [deleted] on What do you think of my reading list?
Comment author: Emile 26 November 2012 11:19:41AM 2 points [-]

Right now I only use it as a convenient way to find other LessWrongers and the books they read.

Comment author: VincentYu 26 November 2012 11:19:35AM 1 point [-]
In response to comment by [deleted] on What do you think of my reading list?
Comment author: RomeoStevens 26 November 2012 11:08:02AM 2 points [-]

I'm wondering as well. The intuitive feature would be showing me which books are most popular among this group.

Comment author: JoshuaFox 26 November 2012 11:05:05AM 0 points [-]

What's a good result, both in terms of the number and the graph? What are other people's results? Not that I want to be too competitive, but I have no idea if I am doing very well or very badly.

Comment author: Stuart_Armstrong 26 November 2012 10:30:07AM 0 points [-]

Do they obey known deterministic laws?

Comment author: taelor 26 November 2012 10:24:08AM 1 point [-]

If you're interested in experiencing what an actual D&D session is like without having to actually play in one, there are a number of "actual play" podcasts on the internet that are essentially recordings of people's sessions.

Comment author: Academian 26 November 2012 09:38:18AM *  1 point [-]

Yes, Alexei (aka Bent Spoon Games) and I talked about the name recently; to promote its use in university courses teaching Bayesian statistics, we're sticking with Credence Game. Confidence means something slightly different in statistics, and the game is meant to teach not just calibration, but also the act of measuring belief strength itself. The name update on BSG, and in the app itself as downloaded from there, will happen soon enough.

Comment author: ChrisHallquist 26 November 2012 09:25:14AM -1 points [-]
Comment author: asparisi 26 November 2012 09:24:27AM *  1 point [-]

Question 1: This depends on the technical details of what has been lost. If it is merely an access problem (if there are good reasons to believe that current/future technologies of this resurrection society will be able to restore my faculties post-resurrection), I would be willing to go for as low as .5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so, memories are just gone, but I could repair my ability to remember in the future), probably 0.9. If the difficulties would literally be permanent, 1.0, but that seems unlikely.

Question 2: Outside of asking me or my friends/family (assume none are alive or know the answer) the best they could do is construct a model based on records of my life, including any surviving digital records. It wouldn't be perfect, but any port in a storm...

Question 3: Hm. Well, if it was possible to revive someone who already was in the equivalent state before cryonics, it would probably be ethical provided that it didn't make them WORSE. Assuming it did... draw lots. It isn't pretty, but unless you privilege certain individuals, you end up in a stalemate. (This is assuming it is a legitimate requirement: all other options have been effectively utilized to their maximum benefit, and .50 is the best we're gonna get without a human trial.) A model of the expected damage, the anticipated recovery period, and what sorts of changes will likely need to be made over time could make some subjects more viable for this than others, in which case it would be in everyone's interest if the most viable subjects for good improvements were the ones thrown into the lots. (Quality of life concerns might factor in too: if Person A is 80% likely to come out a .7 and 20% likely to come out a .5, and Person B is 20% likely to come out a .7 and 80% likely to come out a .5, then ceteris paribus you go for A and hope you were right. It is unlikely that all cases will be equal.)
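To make that last comparison concrete, the expected-fidelity arithmetic for the two hypothetical candidates:

```python
# Expected post-trial fidelity, using the illustrative numbers above.
e_a = 0.8 * 0.7 + 0.2 * 0.5  # Person A: 0.66
e_b = 0.2 * 0.7 + 0.8 * 0.5  # Person B: 0.54
print(e_a, e_b)  # ceteris paribus, A is the better candidate for the lots
```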

Comment author: Konkvistador 26 November 2012 08:11:01AM *  1 point [-]

Suppose you are an anti-natalist; what does efficient charity look like then? What is the most cost-effective way to reduce the number of births? I imagine giving out cheap birth control in places undergoing a demographic transition is pretty ok?

Straight number of births isn't the right metric; you need number of births times misery per birth, minus the opportunity cost of one less person.
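A minimal sketch of that metric (the function name and sign convention are my assumptions; a positive value means averting the births looks good under these premises):

```python
def value_of_averted_births(births_averted: float,
                            misery_per_birth: float,
                            opportunity_cost_per_person: float) -> float:
    # births averted x (misery per birth - opportunity cost of one less person)
    return births_averted * (misery_per_birth - opportunity_cost_per_person)

# Hypothetical units: 100 births averted, 5 misery units per birth,
# 3 units of foregone value per missing person.
print(value_of_averted_births(100, 5, 3))  # 200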

Comment author: [deleted] 26 November 2012 07:26:25AM *  0 points [-]

I might. I don't have an account right now, though; I'll have to sign up. What do you use the group for? I don't see anything in the discussions or bookshelf.

Comment author: gokfar 26 November 2012 07:14:40AM *  6 points [-]

I think you should join us in the LessWrong group on Goodreads (18 members and growing).

Comment author: Konkvistador 26 November 2012 06:45:31AM *  5 points [-]

I've always found it funny how modern society is basically formally libertarian about sex and not nearly anything else. And how deontological Libertarians basically treat everything with the same ethical heuristics modern society uses for sex.

"Anything between consenting adults." and "The state has no buisness in my bedroom." don't seem like things that would only make sense for sex and the bedroom and practically nowhere else. This observation moved me towards thinking they make less sense for sex and the bedroom and more sense for other things than my society thought.

Now obviously our society isn't really libertarian about sexuality. We seem to regulate to death, with social and legal norms, nearly every aspect of interhuman interaction that is related to sex but isn't sex. This contributes to the desirability of a bare-bones approach to sex logistics, the one-night stand, if one is doing cost-benefit analysis.

Comment author: David_Gerard 26 November 2012 06:31:19AM 4 points [-]

Super Rationality Adventure Pals, the Saturday morning cartoon! In 1080p, from a BitTorrent near you.

In response to comment by Yvain on My true rejection
Comment author: ikrase 26 November 2012 06:21:07AM 0 points [-]

If somebody was going to build an IBM profit AI (of the sort of godlike AI that people here talk about), it would almost certainly end up doubling as the IBM CEO Charity Foundation AI.

In response to comment by Kyre on Musk, Mars and x-risk
Comment author: Kawoomba 26 November 2012 06:14:36AM 4 points [-]

Grey goo?

Comment author: Nisan 26 November 2012 05:55:21AM 6 points [-]

Yes. Also, "Hear me, rat-people."

Comment author: razor11 26 November 2012 05:53:01AM 0 points [-]

https://www.nytimes.com/2012/11/25/opinion/sunday/neuroscience-under-attack.html

What are your thoughts on this article? How can a layman discern between good and bad neuroscience in books?

Comment author: asparisi 26 November 2012 05:50:26AM 1 point [-]

Another thought: once you have a large bank of questions, consider "theme questions" as something people can buy with coins. Yes, that becomes a matter of showing off rather than the main point, but people LIKE to show off.

Comment author: asparisi 26 November 2012 05:47:15AM *  11 points [-]

Suggestions (for general audience outside of LW/Rationalist circles)

I like the name "Confidence Game" - it reminds people of a con game while informing them of the point of the game.

See if you can focus on a positive-point scale. Try to make it so that winning nets you a lot of points while "losing" nets only a couple. (Same effect on scores, either way.) This won't seem as odd if you set it up as one long scale rather than two shorter ones: so 99-90-80-60-50-60-80-90-99.
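A minimal sketch of that one-long-scale idea, assuming a logarithmic scoring rule (the raw formula below is illustrative, not necessarily the game's actual one):

```python
import math

def raw_score(p: float) -> float:
    """Log score for putting probability p on the answer that turned out
    correct: positive above 50%, steeply negative below it."""
    return 100.0 * (1.0 + math.log2(p))

def display_score(p: float, worst: float = 0.01) -> float:
    """Affine shift so every outcome displays as positive: winning still
    nets far more than 'losing', which now just nets a little."""
    return raw_score(p) - raw_score(worst)

for p in (0.99, 0.9, 0.8, 0.6, 0.5):
    print(f"bet {p:.0%}: right {display_score(p):6.1f}, wrong {display_score(1 - p):6.1f}")
```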

Setting it to a timer will make it ADDICTIVE. Set it up in quick rounds. Make it like a quiz show. No question limit, or a bonus if you hit the limit for being "Quick on your feet." Make it hard but not impossible to do.

Set up a leaderboard where you can post to FB, show friends, and possibly compare your score to virtual "opponents" (which are really just scoring metrics). Possibly make those metrics con-man themed, in keeping with the game's name.

Graphics will help a lot. Consider running with the con-game theme.

Label people: maybe something like "Underconfident" "Unsure" "Confident" "AMAZING" "Confident" "Overconfident" "Cocksure" (Test labels to see what works well!) rather than using graphs. Graphs and percentages? Turn-off. Drop the % sign and just show two numbers with a label. Make this separate from points but related. (High points=greater chance of falling toward the center, but in theory not necessarily the same.) Yes, I know the point is to get people to think in percentages, but if you want to do that you have to get them there without actually showing them math, which many find off-putting.

Set up a coin system that earns you benefits for putting coins into the game: extended rounds, "confidence streak" bonuses, hints, or skips might be good rewards here. Test and see what works. Allow people to pay for coins, but also reward coins for play, or another mini-game related to play, or both. (Investment = more play)

Comment author: ikrase 26 November 2012 05:35:08AM *  3 points [-]

I might be committing a rationalist sin here, but some of his attitudes seem to be driven by unquestioned racism. His interpretation of the Vaisyas is blatantly incorrect.

Formalism strikes me as insufficiently utilitarian, and also as something which will massively benefit people like Moldbug even though there are better self-interested ideologies.

Comment author: shokwave 26 November 2012 05:24:48AM 1 point [-]

OIC.

Comment author: Kyre 26 November 2012 05:24:39AM 2 points [-]

Is there any catastrophic risk that a Mars colony mitigates that isn't also mitigated by a self-sufficient, self-powered (e.g. geothermal) deep underground colony with enforced long quarantine periods?

Comment author: Kyre 26 November 2012 05:09:31AM 2 points [-]

Good point. Mars would only be better off if the colonies over-engineered their radiation protection. Otherwise anything that gets through Earth's natural protection would probably get through Martian settlements designed to give the same level of protection. It might be relatively cheap to over-engineer (e.g. digging in an extra meter), but it might not.

Comment author: [deleted] 26 November 2012 05:05:44AM *  0 points [-]

Probably.

Comment author: evand 26 November 2012 04:51:29AM 2 points [-]

The difference between 60% credence and 80% credence seems much smaller to me than the difference between 90% and 99%. Is there a reason there's no option between 90% and 99%? In your testing, have you found any well-calibrated users who answer 99% a non-trivial fraction of the time?
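One way to make that intuition precise is distance in log-odds (my choice of metric; the parent comment doesn't specify one):

```python
import math

def logit(p: float) -> float:
    # log-odds of probability p
    return math.log(p / (1 - p))

print(logit(0.8) - logit(0.6))   # ~0.98: the 60% -> 80% step
print(logit(0.99) - logit(0.9))  # ~2.40: the 90% -> 99% step is over twice as big
```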

Comment author: AlexMennen 26 November 2012 04:15:44AM 4 points [-]

From looking at David Talbott's Wikipedia page and the Thunderbolts of the Gods web page, The Electric Universe looks like pseudoscience.

Comment author: [deleted] 26 November 2012 04:14:09AM 0 points [-]
Comment author: [deleted] 26 November 2012 03:31:02AM *  0 points [-]

I was especially unsure about that one. I had my suspicions but was sufficiently intrigued to add it anyway. Just deleted it.

Comment author: [deleted] 26 November 2012 03:21:03AM *  2 points [-]

Upvoted, and you're right, of course. In fact, I created much of this list by looking at Goodreads, and the LW textbook thread, and Eliezer's bookshelf, and the SI reading list, etc., and cherry-picking what I was interested in. I was soliciting commentary on those specific books more than just general recommendations per se.

I assumed the odds were that most of LessWrong wouldn't have read most of them, or just wouldn't want to bother, which would be understandable. Honestly, I wasn't expecting too much from posting this. I figured if I could improve on or drop one book from that list it would be worth it.

Edited to add: I looked at your favorites list and will probably make a few additions.

Comment author: [deleted] 26 November 2012 03:00:31AM *  2 points [-]

Also, this would probably have been better placed in an open thread

Okay, noted, and thanks.

you shouldn't automatically assume that drawing is independent of rationality

Thanks for the link; and I'll edit that part out.

Comment author: gwern 26 November 2012 02:28:28AM 6 points [-]

If you just want recommendations, you can look at past book recommendation threads like the textbook one, or at people's lists on places like Goodreads (e.g. me).

Comment author: pleeppleep 26 November 2012 02:25:57AM *  5 points [-]

I haven't read most of these books, so I can't critique them. The only one I can really say anything about is Godel, Escher, Bach. Read it.

Also, this would probably have been better placed in an open thread, and you shouldn't automatically assume that drawing is independent of rationality; Drawing LessWrong.

Comment author: ahartell 26 November 2012 02:22:44AM *  7 points [-]

I would recommend against The Holographic Universe. A relative read it, and apparently it talks a lot about very woo-ish subjects. Whenever I've disputed its claims, I've found it to be very poorly sourced.

Comment author: pleeppleep 26 November 2012 02:15:33AM 3 points [-]

I feel that "rationality friends" should be a standard way of addressing Lesswrongers.

Comment author: lukeprog 26 November 2012 01:57:02AM *  9 points [-]

Confusingly, this game has at least three titles:

Comment author: fubarobfusco 26 November 2012 01:48:57AM 3 points [-]

Rather than considering it in terms of fatality rate, consider it in terms of curtailing humanity's possible expansion into the universe. The Industrial Revolution was possible because of abundant coal, and the 20th century's expansion of technology was possible because of petroleum. The easy-access coal and oil are used up; the resources being used today would not be accessible to a preindustrial or newly industrial civilization. So if our civilization falls and humanity reverts to preindustrial conditions, it stays there.

Comment author: Eliezer_Yudkowsky 26 November 2012 12:44:36AM 6 points [-]

Correct. SIs that only terminally care about a single green button on Earth instrumentally care about optimizing the rest of the universe to prevent anything from threatening that button.

Comment author: Despard 26 November 2012 12:15:14AM 0 points [-]

Not sure for this trip; I'm mostly going West from Detroit, and I'll be back in the States (to NYC) next year but probably not heading to DC. All plans can change however!

Comment author: Kindly 26 November 2012 12:05:56AM 5 points [-]

The OP of the OP in the OP is OP.

Comment author: gwern 25 November 2012 11:45:39PM 1 point [-]

An HTC would come with serious overhead costs too; the cooling is just the flip side of the electricity. An HTC isn't in Iceland, and the obvious interpretation of an HTC as a very small pocket universe means that you have serious cooling issues as well (a year's worth of heat production to eject at each opening).

Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.

I'm not sure how much of an advantage that would be: there are pretty good approximations for some (most/all?) problems like linear programming (remember Grötschel's report citing a 43 million times speedup of a benchmark linear programming problem since 1988) and such stuff tends to asymptote. How much of an advantage is running for a year rather than the otherwise available days/weeks? Is it large enough to pay for a year of premium HTC computing power?

Comment author: aaronde 25 November 2012 11:44:51PM 0 points [-]

Good questions. I don't know the answers. But like you say, UDT especially is basically defined circularly, with the agent's decision being a function of itself. Making this coherent is still an unsolved problem. So I was wondering if we could get around some of the paradoxes by giving up on certainty.

Comment author: Jabberslythe 25 November 2012 11:30:30PM 1 point [-]

Modafinil is a highly regarded money-for-time exchange.

Audiobooks are a really effective money-for-time exchange (if you aren't pirating them).

Comment author: ialdabaoth 25 November 2012 11:26:48PM 5 points [-]

That's an explicit assumption of the hypothetical - "The technology will not progress in refinement without practice, and practice requires actually restoring cryogenically frozen human brains." Suppose that the process requires a lot of recalibration between species, and tends to fail more for brains with more convolutions and synaptic density.

Comment author: RomeoStevens 25 November 2012 11:02:57PM 2 points [-]

Yes, that was assumed.

Comment author: evand 25 November 2012 10:02:57PM 1 point [-]

I can see a global computer catastrophe rising to the level of civilization-ending, with a 90-99% fatality rate, if I squint hard enough. I could see the fatality rate being even higher if it happens farther in the future. I'm having trouble seeing it as an existential risk, one that literally kills enough people that there is no viable population remaining anywhere. Even in the case of computer catastrophe as malicious event, I'm having trouble envisioning an existential risk that doesn't also include one of the other options.

Are there papers that make the case for computer catastrophe as X-risk?

Comment author: AttenuatePejoratives 25 November 2012 09:42:38PM 1 point [-]
  1. If you keep humans around for laughs and they pull off some wacky scheme to destroy you against the odds, it's your own damn fault.

  2. The correct answer to things like "This statement is a lie" and "Are you going to answer 'no' to this question?" is "I don't care."

Comment author: Benja 25 November 2012 09:42:19PM *  -1 points [-]

That doesn't have the form of a proper argument. It's like arguing that, because Viagra was invented as a treatment for hypertension, it isn't useful for anything else.

No, it's like if someone says that the reason Viagra helps with erectile dysfunction is "completely different" from the reason it helps with hypertension, and you claim that no, the reason is in fact "exactly the same", and then a third person says "No. That's nonsense." and then you explain lucidly how it is in fact the same reason and everybody laughs at that other person...

Oh wait, your reply wasn't to explain why the reason is the same, it was to explain how everybody else is missing the important fact that Viagra helps with erectile dysfunction.

[ETA: Wait, I see how the first paragraph of my earlier post could sound like I was missing the point that surreal numbers can be used like that; edited to clarify.] [ETA2: But I'd still like to hear that lucid explanation and get the attendant egg on my face, if there is one. There isn't one, though.]

Comment author: drethelin 25 November 2012 09:36:27PM 3 points [-]

The rule book is there to resolve conflict, mainly in terms of combat. If you're familiar with the kids' game of cops and robbers, it's to make sure there are no arguments about "Bang! I shot you!" "No, I shot you first!". The majority of the mechanics are of this nature, and the rest of the book is less rules than a description of a fantasy world for players to build off of and improvise within.

In general it's fairly boisterous, and the communal nature of the game means there aren't a lot of gaps. You can do your thinking during the times other players are talking about their decisions or when the monsters are acting or when the DM is explaining, so if you're playing with people who are experienced there aren't a lot of long pauses. Watching from the sidelines is pretty unexciting because most people, while they put some effort into acting, aren't that great, so if you lack the emotional connection with the characters and situations and achievements it's just not that good.

Re: shy outcasts. A lot of shy outcasts really enjoy the opportunity to act like NOT shy outcasts. D&D is normally played in a safer environment where social experimentation is not just encouraged but pretty much required. Pretty much no one CHOOSES to be a shy outcast so much as they're forced to inhabit that corner of existence by everyone else. Being the center of attention of a bunch of people who you respect and who respect you is a lot more pleasant than being the center of attention of people who are primed to mock and belittle you.

Comment author: V_V 25 November 2012 09:31:28PM 2 points [-]

Even the so-called embarrassingly parallel problems, those whose theoretical performance scales almost linearly with the number of CPUs, in practice scale sublinearly in the amount of work done per dollar: massive parallelization comes with all kinds of overheads, from synchronization to cache contention to network communication costs to distributed storage issues. More trivially, large data centers have significant heat dissipation issues: they all need active cooling, and many are also housed in high-tech buildings specifically designed to address this issue. Many companies even place data centers in northern countries to take advantage of the colder climate, instead of putting them in, say, China, India or Brazil, where labor costs much less.

Problems that are not embarrassingly parallel are limited by Amdahl's law: as you increase the number of CPUs, the performance quickly reaches an asymptote where the sequential parts of the algorithm dominate.
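For reference, a quick sketch of the formula behind that asymptote (the numbers are illustrative, not from the comment):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup on n CPUs when a fraction p of the work
    parallelizes perfectly and the rest stays sequential."""
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, 8))      # ~5.9x on 8 CPUs
print(amdahl_speedup(0.95, 10**6))  # ~20x: the 5% serial part caps it
```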

I can't help but think that there being no obvious candidates means the candidates wouldn't be fantastically useful.

Take P-complete problems, for instance. These are problems which are efficient (polynomial time) on a sequential computer, but are conjectured to be inherently difficult to parallelize (the NC != P conjecture). This class contains problems of practical interest, notably linear programming and various problems for model checking. Being able to run these tasks overnight instead of in one year would be a significant advantage.

Comment author: timtyler 25 November 2012 09:20:29PM *  0 points [-]

That doesn't have the form of a proper argument. It's like arguing that, because Viagra was invented as a treatment for hypertension, it isn't useful for anything else.

Surreal numbers solve the problem of adding values in cases where 0 < A < B and any number of copies of A summed together is still less than B. Such scenarios don't require determinism or adversaries - those are irrelevant.
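To state that property precisely (this formalization of "any number of A < B" is my reading of the parent):

```latex
% A is infinitesimal relative to B: no finite number of copies of A adds up to B.
\forall n \in \mathbb{N}:\quad \underbrace{A + A + \cdots + A}_{n\ \text{copies}} < B
% Among the surreals this is satisfied by, e.g., A = \epsilon and B = 1.
```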

Comment author: David_Gerard 25 November 2012 09:03:40PM 5 points [-]

I would say that if you don't want to be thought of as the sort of person who propagates odious bullshit, the very first thing to do would be not to propagate odious bullshit, not to complain that the person who called you out on propagating odious bullshit didn't touch third base. But perhaps that's just me.

Comment author: JoshuaZ 25 November 2012 08:48:19PM *  1 point [-]

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not?

If you look above, you'll note that the statement you've quoted was in response to your claim that "[what] people want is a living conscious artificial mind", and my sentence after the one you are quoting is also about AI. So if it helps, replace "it" with "functional general AI" and reread the above. (Although frankly, I'm confused by how you interpreted the question, given that the rest of your paragraph deals with AI.)

But I think it is actually worth touching on your question: Do people care if they are philosophical zombies? I suspect that by and large the answer is "no". While many people care about whether they have free will in any meaningful sense, the question of qualia simply isn't something that's widely discussed at all. Moreover, whether a given individual thinks that they have qualia in any useful sense almost certainly doesn't impact how they think they should be treated.

The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And... and this is important: it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.

If a problem is large, exploring false leads is going to be inevitable. This is true even for small problems. Moreover, I'm not sure what you mean by "strong AI proponents" in this context. Very few people actively work on research directly aimed at building strong AI, and the research that does go in that direction often turns out to be useful in weaker cases like machine learning. That's how, for example, we now have practical systems with neural nets that are quite helpful.

Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?

So insisting that thinking has to occur in a specific substrate is not magical thinking, but self-improvement is? Bootstrapping doesn't involve physical processes arising out of nothing. The essential idea in most variants is self-modification producing a more and more powerful AI. There are precedents for this sort of thing. Human civilization, for example, has essentially modified itself, albeit at a slow rate, over time.

"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."

Yes it is. In every lecture I have heard in which the history of the philosophy of mind is recounted, the behaviorism of the '50s and early '60s and the main arguments for and against it as an explanation of consciousness are given.

I suspect this is a definitional issue. What do you think behaviorism says that is an attempt to explain consciousness, rather than just an argument that consciousness doesn't need an explanation?

Premise 1 "If it is raining, Mr. Smith will use his umbrella." Premise 2 "It is raining" Conclusion "therefore Mr. Smith will use his umbrella."

That is a behaviorist explanation for consciousness. It is logically valid but still fails because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior then behavior cannot constitute intentionality.

Ok. I think I'm beginning to see the problem to some extent, and I wonder how much this is due to trying to talk about behaviorism in a non-behaviorist framework. The behaviorist isn't making any claim about "intent" at all. Behaviorism just tries to talk about behavior. Similarly, "decides" isn't a statement that goes into their model. Moreover, the fact that some days Smith does one thing in response to rain and some days does another isn't a criticism of behaviorism: in order to argue that it is, one needs to claim that some sort of free-willed decision is going on, rather than subtle differences in the day or recent experiences. The objection then isn't to behaviorism, but rather rests on asserting a strong notion of free will.

I thought you would get the reference to Ned Block's counterargument to behaviorism. It shows how an unconscious machine could pass the Turing test.

It may help to be aware of the illusion of transparency. Oblique references are one of the easiest things to miscommunicate about. But yes, I'm familiar with Block's look-up table argument. It isn't clear how it is relevant here: yes, the argument raises issues with many purely descriptive notions of consciousness, especially functionalism. But it isn't an argument that consciousness needs to involve free will and qualia and who knows what else. If anything, it is a decent argument that the whole notion of consciousness is fatally confused.

Is "Blockhead" (the name affectionately given to this robot) conscious?

No it is not.

So everything here is essentially just smuggling in the conclusion you want, in other words. It might help to ask whether you can give a definition of consciousness.

I'm pretty sure that Steven Moffat must have been aware of it and created the Teselecta.

Massive illusion of transparency here - you're presuming that Moffat is thinking about the same things that you are. The idea of miniature people running a person has been around for a long time. Prior examples include a series of Sunday strips of Calvin and Hobbes, as well as a truly awful Eddie Murphy movie.

Comment author: Zack_M_Davis 25 November 2012 08:30:15PM 0 points [-]

there are people who understand spoken words better

Someone who is familiar with the relevant cognitive science is encouraged to correct me if it turns out that my current contrarian opinion is merely the result of my ignorance, but I'm inclined to just call that a cognitive disability. To be sure, if you happen to be so lucky as to have a domain expert nearby who is willing to spend time with you to clear up your misconceptions, then that's a wonderful resource and you should take advantage of it. But human labor is expensive and text is cheap; people who understand something deeply enough to teach it well have better things to do with their lives than give the same lecture dozens of times. What happens when you want to know something that no one is willing to teach you (at an affordable price)? To be so incompetent at reading as to actually be dependent on a flesh-and-blood human to talk you through every little step every time you want to understand something complicated is a crippling disability, much, much worse than not being able to walk. I weep for those who are cursed to live with such a hellishly debilitating condition, and look forward to some future day when our civilization's medical technology has advanced enough to cure this awful disease.

Comment author: Benja 25 November 2012 08:12:50PM *  3 points [-]

Except that surreal numbers were invented for and are useful for combinatorial game theory, which is confined to adversarial and deterministic interactions.

[ETA: Ok, this was unclear: I'm saying that this is how they are useful in the context of analyzing Go, and this is the only context where they are useful in this way; I'm agreeing with the grandparent that trying to use surreal numbers as probabilities or utilities is not even remotely related, not saying that they couldn't possibly be used like that.]

In particular, the reason why surreal numbers are useful when deciding what move to make in a game of go is the exact same reason why they are useful when making other kinds of decisions.

Well--

I believe I understand the issues involved well enough that my correct answer to this is not to ask what you could possibly mean by that, but simply to say:

No. That's nonsense.

Comment author: shokwave 25 November 2012 07:27:37PM 1 point [-]

OP also means "overpowered", which is a nice coincidence.

Comment author: Jayson_Virissimo 25 November 2012 07:12:01PM 1 point [-]

"Jay" is my name in low latency situations.

Comment author: vi21maobk9vp 25 November 2012 07:01:36PM 1 point [-]

People are different.

As far as I can see, there are people with various optimal bite sizes.

For something I do want to consume in its entirety, I prefer long-form writing; there are people who prefer smaller-sized pieces, or smaller-sized pieces with a rare chance to interrupt and ask a question.

I learn better from text; there are people who understand spoken words better. Spoken words have intonations and emotional connotations (and often there are relevant gestures at the same time); text reading speed can be changed without any loss.

So, I wouldn't discount the option that another form of presentation could be hypothetically interesting to some 10% of the population. It would be just one separate thing for them to consider, of course.

Comment author: vi21maobk9vp 25 November 2012 06:51:34PM 1 point [-]

As I understand Eliezer's definition: "Your text is a proof if it can convince me that the conclusion is true."

As I understand Uspenskiy's definition: "Your text is a proof if it can convince me that the conclusion is true and I am willing to reuse this text to convince other people."

The difference is whether the mere text convinces me that I myself can also use it successfully. Of course this has to rely on social norms for convincing arguments in some way.

Disclosure: I have heard the second definition from Uspenskiy in person, and I have never seen Eliezer in person.

Comment author: gwern 25 November 2012 06:38:15PM 5 points [-]

By the same logic,

Supernova, GRB: probably? Unlike impactors, a supernova or GRB would affect both Earth and Mars. However, if the major impact on Earth is deaths by radiation of exposed people and destruction of agriculture by destruction of the ozone layer, then Mars should be much more resilient, since settlements have to be more radiation-hardened anyway, and the agriculture would be under glass or underground.

Is not a good addition. The Mars-hardened facilities will be hardened only for Mars conditions (unless it's extremely easy to harden against any level of radiation?) in order to cut colonization costs from 'mindbogglingly expensive and equivalent to decades of world GDP' to something more reasonable like 'a decade of world GDP'. So given a supernova, they will have to upgrade their facilities anyway, and they are worse positioned than anyone on Earth: no ozone layer, no atmosphere in general, a small resource & industrial base, etc. Any defense against a supernova on Mars could be better done on Earth.

Comment author: gwern 25 November 2012 06:33:26PM 1 point [-]

My impression, from idly watching sometimes at a science fiction club, is that it's fairly boisterous, and few watch from the sidelines (certainly I didn't understand what was happening, although if I had known the rules maybe I would've had a better chance).

Comment author: pcm 25 November 2012 06:19:21PM 1 point [-]

Question 1: About 0.95.

Question 2: Ask people who knew me? Infer a model of my mind from that and my writings? I don't consider it more ethical to use uncertainty as a reason to postpone it until some unforeseeable technology is developed.

Question 3: I'm reluctant to enter such a lottery because I don't trust someone who believes those assumptions. I expect the scanning part of the process to improve (without depending on human trials) to the point where enough information is preserved to make a >0.99 fidelity upload theoretically possible. I would accept a trial which took that information and experimented with a simulation of 0.5 fidelity in an attempt to improve the simulation software, assuming the raw information would later be used to produce a better upload.

Comment author: DataPacRat 25 November 2012 06:09:58PM 6 points [-]

... presumably at some point after lab-mice, lab-rats, lab-dogs, and lab-chimps have all been able to be revived fully successfully, as far as can be determined?

Comment author: Stabilizer 25 November 2012 05:11:20PM *  8 points [-]

Something very weird happened to me today after reading this paragraph in the article yesterday:

Another particularly well-documented case of the persistence of mistaken beliefs despite extensive corrective efforts involves the decades-long deceptive advertising for Listerine mouthwash in the U.S. Advertisements for Listerine had falsely claimed for more than 50 years that the product helped prevent or reduce the severity of colds and sore throats.

I had not known earlier that Listerine had claimed to alleviate colds and sore throats. This morning, as I was using my Listerine mouthwash, I felt as though the Listerine was helping my sore throat. Not deliberatively, of course, but instantaneously. And my mind also instantaneously constructed a picture where the mouthwash was killing germs in my throat. This happened after I learned about the claim from a source whose only reason for mentioning it was that it was false. From a source about the dangers of misinformation.

Misinformation is more insidious than I suspected.

Comment author: Alicorn 25 November 2012 03:54:52PM *  3 points [-]

Edit: What's in the rule book? If you forget the rule book at home, can you get along or do you have to go back for it?

If there are no books at the table, it depends on whether your fellow players are willing to trust you to remember rules neutrally, and whether the DM is willing to adjudicate where no one can remember. There's also the online version of most of the core rules, although not all the exotic extra classes and stuff.

Comment author: JoshuaZ 25 November 2012 03:22:51PM 4 points [-]

If a Mars colony mitigates catastrophic risk (existential / extinction risk?) from climate change, then climate change is not an existential risk to human civilization on earth

This does not follow. One possible (although very unlikely) result of climate change is a much more severe outcome: a Venus-like situation (although not with as high a temperature and not as much nasty stuff in the atmosphere). If that happens, Mars will be much easier to survive on than Earth, since with a lot of energy from nuclear power, extremely cold environments are much more hospitable than extremely hot environments. Current models make such a strong runaway result unlikely, but it is a possibility.

Comment author: Multiheaded 25 November 2012 03:21:53PM *  0 points [-]

philanthropy status divas, professional beggars and related hangers-on

those who believe in thinking, but only for fashionable thoughts

Okay, so would you kindly point to some awful, worthless posts/comments by those awful, worthless people? And explain what makes them so awful and worthless? So that the right-thinking users can learn to avoid them?

Or, if you don't have anything specific in mind, would you at least cease insulting the community?

Comment author: Kindly 25 November 2012 02:40:50PM 1 point [-]

There is no proper answer to several of my criticisms: she is simply flat out wrong or sloppy.

In that case, perhaps she agrees with your criticisms, but doesn't want to admit to being wrong.

Comment author: Tenoke 25 November 2012 02:37:09PM -1 points [-]

It depends on the technology and the actual risks, but yes, it makes more sense to start with the best-preserved: after everything has been cleared up you will have fewer completely messed-up people, and the technology will most probably improve faster if you start by using it on better-preserved people, since there are fewer factors to worry about.

Comment author: Viliam_Bur 25 November 2012 02:35:36PM 1 point [-]

The Mars colony could be useful to test the tools necessary to overcome the hostile climate, and it could make their development (possibly mass development) a higher priority.

So in case the Earth climate starts to change very rapidly, we would have a choice to use already developed and tested equipment, built in existing factories, instead of trying to invent it amidst global chaos.
