All of Polymeron's Comments + Replies

That's why the rule says challengeable inductive inference. If in the context of the discussion this is not obvious, then maybe yes; but in almost every other instance it's fine to make these shortcuts, so long as you're understood.

1[anonymous]
Or if it is not relevant. I certainly don't know how a piano works on the inside, but I don't need others to give me a complete description of the inner workings of a piano to understand that it makes sounds when someone plays it.

No, an argument from authority can be a useful heuristic in certain cases, but at least you'd want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.

Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).

Polymeron110

I've had forms of this said to me; it basically means "I'm losing the debate because you personally are smart, not because I'm wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic..."

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

wedrifid120

It's Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.

Dark side or not, it is quite often valid. People who do not trust their ability to filter bullshit from knowledge should not defer to whatever powerful debater attempts to influence them.

It is no error to assign a low value to p(the conclusion expressed is valid | I find the argument convincing).

Wouldn't this only be correct if similar hardware ran the software the same way? Human thinking is highly associative and variable, and since language is shared amongst many humans, it doesn't, as such, have a fixed formal representation.

I agree on the basic point, but then my deeper point was that somewhere down the line you'll find the intelligence(s) that created a high-fidelity converter for an arbitrary amount of information from one format to another. Searle is free to claim that the system does not understand Chinese, but its very function could only have been imparted by parties who collectively speak Chinese very well, making the room at the very least a medium of communication utilizing this understanding.

And this is before we mention the entirely plausible claim that the room-person ... (read more)

Wouldn't such a GLUT by necessity require someone possessing immensely fine understanding of Chinese and English both, though? You could then say that the person+GLUT system as a whole understands Chinese, as it combines both the person's symbol-manipulation capabilities and the actual understanding represented by the GLUT.

You might still not possess understanding of Chinese, but that does not mean a meaningful conversation has not taken place.

0AndHisHorse
Not necessarily. Theoretically, one could have very specific knowledge of Chinese, possibly acquired from very limited but deep experience. Imagine one person who has spoken Chinese only at the harbor, and has complete and total mastery of the maritime vocabulary of Chinese but would lack all but the simplest verbs relevant to the conversations happening just a mile further inland. Conceivably, a series of experts in a very localized domain could separately contribute their understanding, perhaps governed by a person who understands (in English) every conceivable key to the GLUT, but does not understand the values which must be placed in it.

Then, imagine someone whose entire knowledge of Chinese is the translation of the phrase: "Does my reply make sense in the context of this conversation?" This person takes an arbitrary amount of time, randomly combining phonemes and carrying out every conceivable conversation with an unlimited supply of Chinese speakers. (This is substantially more realistic if there are many people working in a field with fewer potential combinations than language). Through perhaps the least efficient trial and error possible, they learn to carry on a conversation by rote, keeping only those conversational threads which, through pure chance, make sense throughout the entire dialogue.

In neither of these human experts do we find a real understanding of Chinese. It could be said that the understandings of the domain experts combine to form one great understanding, but the inefficient trial-and-error GLUT manufacturers certainly do not have any understanding, merely memory.
2TheOtherDave
I have no idea whether a GLUT-based Chinese Room would require someone possessing immensely fine understanding of Chinese and English both. As far as I can tell, a GLUT-based Chinese Room is impossible, and asking what is or isn't required to bring about an impossible situation seems a silly question. Conversely, if it turns out that a GLUT-based Chinese Room is not impossible, I don't trust my intuitions about what is or isn't required to construct one. I have no problem with saying a Chinese-speaking-person+GLUT system as a whole understands Chinese, in much the same sense that I have no problem saying that a Chinese-speaking-person+tuna-fish-sandwich system as a whole understands Chinese. I'm not sure how interesting that is. I'm perfectly content to posit an artificial system capable of understanding Chinese and having a meaningful conversation. I'm unable to conceive specifically of a GLUT that can do so.

Interestingly, my first reaction to this post was that a great deal of it reminds me of myself, especially near that age. I wonder if this is the result of ingrained bias? If I'm not mistaken, when you give people a horoscope or other personality description, about 90% of them will agree that it appears to refer to them, compared to the 8.33% we'd expect it to actually apply to. Then there's selection bias inherent to people writing on LW (wannabe philosophers and formal logic enthusiasts posting here? A shocker!). And yet...

I'm interested to know, did you... (read more)

0Baruta07
Actually I had multiple reasons for posting this. Firstly it's to make myself known to the community. As an ulterior motive I have trouble with being open with others and connecting (although I suspect that this is a common problem) and I want to get over my fear of such.

I suspect that with memory on the order of 10^70 bytes, that might involve additional complications; but you're correct, normally this cancels out the complexity problem.

I didn't consider using 3 bits for pawns! Thanks for that :) I did account for such variables as castling rights and whose turn it is.

This is more or less what computers do today to win chess matches, but the space of possibilities explodes too fast; even the strongest computers can't really keep track of more than I think 13 or 14 moves ahead, even given a long time to think.

Merely storing all the positions that are unwinnable - regardless of why they are so - would require more matter than we have in the solar system. Not to mention the efficiency of running a DB search on that...

1wedrifid
The storage space problem is insurmountable. However, searching that kind of database would be extremely efficient (if the designer isn't a moron). The search speed would have a lower bound of very close to (diameter of the sphere that can contain the database / c). Nothing more is required for search purposes than physically getting a signal to the relevant bit, and back, with only minor deviations from a straight line each way. And that is without even the most obvious optimisations. If your chess opponent is willing to fly with you in a relativistic rocket and you only care about time elapsed from your own reference frame rather than the reference frame of the computer (or most anything else of note) you can even get down below that diameter / light speed limit, depending on your available fuel and the degree of acceleration you can survive.
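As a rough numerical illustration of that light-speed lower bound (the one-astronomical-unit database diameter below is an arbitrary assumption, purely for illustration):

```python
# Rough lower bound on lookup latency for a physically enormous database:
# a query signal must reach the relevant bit and come back, so the worst-case
# round trip is roughly one diameter of the sphere holding the data.

C = 299_792_458.0  # speed of light, m/s

def min_lookup_seconds(diameter_m: float) -> float:
    """Light-travel lower bound on a single lookup, per the argument above."""
    return diameter_m / C

# Illustrative, assumed figure: a database sphere one astronomical unit across.
AU_M = 1.496e11
print(f"lower bound: {min_lookup_seconds(AU_M):.0f} s")  # ~500 s, about 8 minutes
```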
3CCC
Actually, with proper design, that can be made very quick and easy. You don't need to store the positions; you just need to store the states (win:black, win:white, draw - two bits per state). The trick is, you store each win/loss state in a memory address equal to the 34-byte (or however long) binary number that describes the position in question. Checking a given state is then simply a memory retrieval from a known address.
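A minimal sketch of that direct-addressing idea in Python; the 16-bit toy position code and the particular 2-bit state values are illustrative assumptions, not a real chess encoding:

```python
# Toy sketch of a direct-addressed game-state table: each position's
# win/draw/loss state is stored at the slot given by its own encoding.
# States are packed 2 bits each: 0 = unknown, 1 = win:white, 2 = win:black, 3 = draw.

POSITION_BITS = 16                              # a real chess code would need far more
TABLE = bytearray((1 << POSITION_BITS) // 4)    # four 2-bit states per byte

def set_state(position_code: int, state: int) -> None:
    """Store a 2-bit state at the slot derived directly from the position code."""
    byte_index, slot = divmod(position_code, 4)
    shift = slot * 2
    TABLE[byte_index] = (TABLE[byte_index] & ~(0b11 << shift) & 0xFF) | ((state & 0b11) << shift)

def get_state(position_code: int) -> int:
    """Checking a position is a single indexed read -- no search involved."""
    byte_index, slot = divmod(position_code, 4)
    return (TABLE[byte_index] >> (slot * 2)) & 0b11

# Usage with a made-up position code:
set_state(0x2A7F, 3)           # record position 0x2A7F as a draw
assert get_state(0x2A7F) == 3
```

The point of the design is that a lookup is a single indexed read; the obstacle is purely that a table covering real chess positions would not fit in any physically available storage.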

The two are not in conflict.

À la Levinthal's paradox, I can say that throwing a marble down a conical hollow at different angles and with different amounts of force can produce literally trillions of possible trajectories; à la Anfinsen's dogma, that should not stop me from predicting that it will end up at the bottom of the cone; but I'd need to know the shape of the cone (or, more specifically, the location of its point) to determine exactly where that is - so being able to make the prediction once I know this is of no assistance for predicting the end position with a different, unknown ... (read more)

Polymeron131

When I was studying under Amotz Zahavi (originator of the handicap principle theory, which is what you're actually discussing), he used to make the exact same points. In fact, he used to say that "no communication is reliable unless it has a cost".

Having this outlook on life in the past 5 years made a lot of things seem very different - small questions like why some people don't use seatbelts and brag about it, or why men on dates leave big tips; but also bigger questions like advertising, how hierarchical relationships really work, etc.

Also expl... (read more)

These questions seem decidedly UNfair to me.

No, they don't depend on the agent's decision-making algorithm; just on another agent's specific decision-making algorithm skewing results against an agent with an identical algorithm and letting all others reap the benefits of an otherwise non-advantageous situation.

So, a couple of things:

  1. While I have not mathematically formulated this, I suspect that absolutely any decision theory can have a similar scenario constructed for it, using another agent / simulation with that specific decision theory as the basis f

... (read more)

For a while now, I've been meaning to check out the code for this and heavily revise it to include things like data storage space, physical manufacturing capabilities, non-immediately-lethal discovery by humans (so you detected my base in another dimension? Why should I care, again?), and additional modes of winning. All of which I will get around to soon enough.

But, I'll tell you this. Now when I revise it, I am going to add a game mode where your score is in direct proportion to the amount of office equipment in the universe, with the smallest allowed being a functional paperclip. I am dead serious about this.

I have likewise adjusted down my confidence that this would be as easy or as inevitable as I previously anticipated. Thus I would no longer say I am "vastly confident" in it, either.

Still good to have this buffer between making an AI and total global catastrophe, though!

0TheOtherDave
Sure... a process with an N% chance of global catastrophic failure is definitely better than a process with N+delta% chance.

The way I see it, there's no evidence that these problems require additional experimentation to resolve, rather than the discovery of some obscure piece of experimentation that has already taken place and whose relevance may not be immediately obvious.

Sure, it is probable that more experimentation is needed; but it is by no means certain.

My point was that the AI is likely to start performing social experiments well before it is capable of even that conversation you depicted. It wouldn't know how much it doesn't know about humans.

0TheOtherDave
(nods) Likely. And I agree that humans might be able to detect attempts at deception in a system at that stage of its development. I'm not vastly confident of it, though.

I don't see how that would be relevant to the issue at hand, and thus, why they "need to assume [this] possibility". Whether they assume the people they talk to can be more intelligent than them or not, so long as they engage them on an even intellectual ground (e.g. trading civil letters of argumentation), is simply irrelevant.

What I was expressing skepticism about was that a system with even approximately human-level intelligence necessarily supports a stack trace that supports the kind of analysis you envision performing in the first place, without reference to intentional countermeasures.

Ah, that does clarify it. I agree, analyzing the AI's thought process would likely be difficult, maybe impossible! I guess I was being a bit hyperbolic in my earlier "crack it open" remarks (though depending on how seriously you take it, such analysis might still take place, hard... (read more)

6TheOtherDave
Yup, agreed that it might. And agreed that it might succeed, if it does take place. Agreed on all counts.

Re: what the AI knows... I'm not sure how to move forward here. Perhaps what's necessary is a step backwards. If I've understood you correctly, you consider "having a conversation" to encompass exchanges such as:

A: "What day is it?"
B: "Na ni noo na"

If that's true, then sure, I agree that the minimal set of information about humans required to do that is zero; hell, I can do that with the rain. And I agree that a system that's capable of doing that (e.g., the rain) is sufficiently unlikely to be capable of effective deception that the hypothesis isn't even worthy of consideration. I also suggest that we stop using the phrase "having a conversation" at all, because it does not convey anything meaningful.

Having said that... for my own part, I initially understood you to be talking about a system capable of exchanges like:

A: "What day is it?"
B: "Day seventeen."
A: "Why do you say that?"
B: "Because I've learned that 'a day' refers to a particular cycle of activity in the lab, and I have observed seventeen such cycles."

A system capable of doing that, I maintain, already knows enough about humans that I expect it to be capable of deception. (The specific questions and answers don't matter to my point, I can choose others if you prefer.)

Actually, I don't know that this means it has to perform physical experiments in order to develop nanotechnology. It is quite conceivable that all the necessary information is already out there, but we haven't been able to connect all the dots just yet.

At some point the AI hits a wall in the knowledge it can gain without physical experiments, but there's no good way to know how far ahead that wall is.

2Bugmaster
Wouldn't this mean that creating fully functional self-replicating nanotechnology is just a matter of performing some thorough interdisciplinary studies (or meta-studies or whatever they are called) ? My impression was that there are currently several well-understood -- yet unresolved -- problems that prevent nanofactories from becoming a reality, though I could be wrong.

I think the weakest link here is human response to the AI revealing it can be deceptive. There is absolutely no guarantee that people would act correctly under these circumstances. Human negligence for a long enough time would eventually give the AI a consistent ability to manipulate humans.

I also agree that simulating relationships makes sense as it can happen in "AI time" without having to wait for human response.

The other reservations seem less of an issue to me...

That game theory knowledge coupled with the most basic knowledge about humans is... (read more)

1TheOtherDave
Yes, I agree that there's no guarantee that humans would behave as you describe. Indeed, I don't find it likely. But, sure, they might.

===

I agree that a stack trace can exist outside the AI's zone of control. What I was expressing skepticism about was that a system with even approximately human-level intelligence necessarily supports a stack trace that supports the kind of analysis you envision performing in the first place, without reference to intentional countermeasures. By way of analogy: I can perform a structural integrity analysis on a bar of metal to determine whether it can support a given weight, but performing an equivalent analysis on a complicated structure comprising millions of bars of metal connected in a variety of arrangements via a variety of connectors using the same techniques is not necessarily possible. But, sure, it might be.

======

Well, one place to start is with an understanding of the difference between "the minimal set of information about humans required to have a conversation with one at all" (my phrase) and "the most basic knowledge about humans" (your phrase). What do you imagine the latter to encompass, and how do you imagine the AI obtained this knowledge?

It's not. Apparently I somehow replied to the wrong post... It's actually aimed at sufferer's comment you were replying to.

I don't suppose there's a convenient way to move it? I don't think retracting and re-posting would clean it up sufficiently, in fact that seems messier.

0TheOtherDave
Ah! That makes sense. I know of no way to move it... sorry.

Presumably, you build a tool-AI (or three) that will help you solve the Friendliness problem.

This may not be entirely safe either, but given the parameters of the question, it beats the alternative by a mile.

That is indeed relevant, in that it describes some perverse incentives and weird behaviors of nonprofits, with an interesting example. But knowing this context without having to click the link would have been useful. It is customary to explain what a link is about rather than just drop it.

(Or at least it should be)

I really don't see why the drive can't be to issue the predictions most likely to be correct as of the moment of the question (and only the last question it was asked), calculating outcomes under the assumption that the Oracle immediately spits out blank paper as its answer.

Yes, in a certain subset of cases this can result in inaccurate predictions. If you want to have fun with it, have it also calculate the future including its involvement, but rather than reply what it is, just add "This prediction may be inaccurate due to your possible reaction to t... (read more)

after all, if "even a chance" is good enough, then all the other criticisms melt away

Not to the degree that SI could be increasing the existential risk, a point Holden also makes. "Even a chance" swings both ways.

1TheOtherDave
I am completely lost by how this is a response to anything I said.

That subset of humanity holds considerably less power, influence and visibility than its counterpart; resources that could be directed to AI research and for the most part aren't. Or in three words: Other people matter. Assuming otherwise would be a huge mistake.

I took Wei_Dai's remarks to mean that Luke's response is public, and so can reach the broader public sooner or later; and when examined in a broader context, that it gives off the wrong signal. My response was that this was largely irrelevant, not because other people don't matter, but because of other factors outweighing this.

It's a fine line though, isn't it? Saying "huh, looks like we have much to learn, here's what we're already doing about it" is honest and constructive, but sends a signal of weakness and defensiveness to people not bent on a zealous quest for truth and self-improvement. Saying "meh, that guy doesn't know what he's talking about" would send the stronger social signal, but would not be constructive to the community actually improving as a result of the criticism.

Personally I prefer plunging ahead with the first approach. Both in the abstr... (read more)

0Vaniver
I do not see why this should be a motivating factor for SI; to my knowledge, they advertise primarily to people who would endorse a zealous quest for truth and self-improvement.

I see no reason for it to do that before simple input-output experiments, but let's suppose I grant you this approach. The AI simulates an entire community of mini-AI and is now a master of game theory.

It still doesn't know the first thing about humans. Even if it now understands the concept that hiding information gives an advantage for achieving goals - this is too abstract. It wouldn't know what sort of information it should hide from us. It wouldn't know to what degree we analyze interactions rationally, and to what degree our behavior is random. It wo... (read more)

2TheOtherDave
It is not clear to me that talking to a human is simpler than interacting with a copy of itself. I agree that if talking to a human is simpler, it would probably do that first. I agree that what it would learn by this process is general game theory, and not specific facts about humans.

It is not clear to me that sufficient game-theoretical knowledge, coupled with the minimal set of information about humans required to have a conversation with one at all, is insufficient to effectively deceive a human.

It is not clear to me that, even if it does "stumble," humans will respond as you describe.

It is not clear to me that a system capable of having a meaningful conversation with a human will necessarily have a stack trace that is subject to the kind of analysis you imply here. It is not even clear to me that the capacity for such a stack trace is likely, depending on what architectures turn out to work best for implementing AI.

But, sure, I could be wrong about all of that. And if I'm wrong, and you're right, then a system like you describe will be reliably incapable of fooling a human observer.

I'm afraid not.

Actually, as someone with a background in biology, I can tell you that this is not a problem you want to approach atoms-up. It's been tried, and our computational capabilities fell woefully short of succeeding.

I should explain what "woefully short" means, so that the answer won't be "but can't the AI apply more computational power than us?". Yes, presumably it can. But the scales are immense. To explain it, I will need an analogy.

Not that long ago, I had the notion that chess could be fully solved; that is, that you could si... (read more)

2Kawoomba
Indeed, using a very straightforward Huffman encoding (1 bit for an empty cell, 3 bits for pawns) you can get it down to 24 bytes for the board alone. It was an interesting puzzle. Looking up "prior art" on the subject, you also need 2 bytes for things like "may castle" and other more obscure rules. There are further optimizations you can do, but they are mostly for the average case, not the worst case.
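A rough sketch of the bit-accounting behind such an encoding; the specific code lengths below (1 bit for an empty square, 3 for a pawn, 6 for any other piece) are assumptions for illustration, so the worst-case total is not claimed to match the 24-byte figure exactly:

```python
# Worst-case size of a Huffman-style board encoding, under assumed code lengths.
# Assumed prefix-code lengths (illustrative only): empty square = 1 bit,
# pawn = 3 bits (prefix + colour), any other piece = 6 bits (prefix + colour + type).

CODE_BITS = {"empty": 1, "pawn": 3, "other": 6}

def worst_case_bits() -> int:
    # The fullest legal board: 32 occupied squares (16 pawns, 16 other pieces)
    # and 32 empty squares.
    empty, pawns, others = 32, 16, 16
    return (empty * CODE_BITS["empty"]
            + pawns * CODE_BITS["pawn"]
            + others * CODE_BITS["other"])

bits = worst_case_bits()
print(bits, "bits =", -(-bits // 8), "bytes for the board alone")
# With these assumed code lengths: 32 + 48 + 96 = 176 bits = 22 bytes,
# before the extra couple of bytes for castling rights, turn, en passant, etc.
```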
2Richard_Kennaway
Is that because we don't have enough brute force, or because we don't know what calculation to apply it to? I would be unsurprised to learn that calculating the folding state having global minimum energy was NP-complete; but for that reason I would be surprised to learn that nature solves that problem, rather than finding a local minimum. I don't have a background in biology, but my impression from Wikipedia is that the tension between Anfinsen's dogma and Levinthal's paradox is yet unresolved.
0Strange7
I would think it would be possible to cut the space of possible chess positions down quite a bit by only retaining those which can result from moves the AI would make, and legal moves an opponent could make in response. That is, when it becomes clear that a position is unwinnable, backtrack, and don't keep full notes on why it's unwinnable.
5Bugmaster
Yes, I understand what "exponential complexity" means :-) It sounds, then, like you're on the side of kalla724 and myself (and against my Devil's Advocate persona): the AI would not be able to develop nanotechnology (or any other world-shattering technology) without performing physical experiments out in meatspace. It could do so in theory, but in practice, the computational requirements are too high. But this puts severe constraints on the speed with which the AI's intelligence explosion could occur. Once it hits the limits of existing technology, it will have to take a long slog through empirical science, at human-grade speeds.

It is very possible that the necessary information already exists, imperfect and incomplete though it may be, and that enough processing of it would yield the correct answer. We can't know otherwise, because we don't spend thousands of years analyzing our current level of information before beginning experimentation; but in the shift between AI-time and human-time, the AI can agonize over that problem with a good deal more cleverness and ingenuity than we've been able to apply to it so far.

That isn't to say that this is likely; but it doesn't seem far-fetched to me. If you gave an AI the nuclear physics information we had in 1950, would it be able to spit out schematics for an H-bomb, without further experimentation? Maybe. Who knows?

0Strange7
At the very least it would ask for some textbooks on electrical engineering and demolitions, first. The detonation process is remarkably tricky.

I would not consider a child AI that tries a bungling lie on me to see what I do "so safe". I would immediately shut it down and debug it, at best, or write a paper on why the approach I used should never ever be used to build an AI.

And it WILL attempt a bungling lie at first. It can't learn the need to be subtle without witnessing the repercussions of not being subtle. Nor would it have a reason to consider doing social experiments in chat rooms when it doesn't understand chat rooms and has an engineer willing to talk to it right there. That is, assum... (read more)

An experimenting AI that tries to achieve goals and has interactions with humans whose effects it can observe, will want to be able to better predict their behavior in response to its actions, and therefore will try to assemble some theory of mind. At some point that would lead to it using deception as a tool to achieve its goals.

However, following such a path to a theory of mind means the AI would be exposed as unreliable LONG before it's even subtle, not to mention possessing superhuman manipulation abilities. There is simply no reason for an AI to first... (read more)

0TheOtherDave
Mm. This is true only if the AI's social interactions are all with some human. If, instead, the AI spawns copies of itself to interact with (perhaps simply because it wants interaction, and it can get more interaction that way than waiting for a human to get off its butt) it might derive a number of social mechanisms in isolation without human observation.

While the example given is not the main point of the article, I'd still like to share a bit of actual data. Especially since I'm kind of annoyed at having spouted this rule as gospel without having a source, before.

A study done at IBM shows a defect fixed during the coding stage costs about $25 to fix (basically in engineer hours used to find and fix it).

This cost quadruples to $100 during the build phase; presumably because this can bottleneck a lot of other people trying to submit their code, if you happen to break the build.

The cost quadruples again for... (read more)

Now, I have seen some interesting papers that construct expanded probability theories which include 0 and 1 as logical falsehood and truth, respectively. But even those do not include a special value for contradictions.

Except, contradictions really are the only way you can get to logical truth or falsehood; anything other than that necessarily relies on inductive reasoning at some point. So any probability theory employing those must use contradictions as a means for arriving at these values in the first place.

I do think that there's not much room for contrad... (read more)

Or, you can treat "heapness" as a boolean and still completely clobber this paradox, just by being specific about what it actually means for us to call something a heap.

Polymeron500

I'd like to mention that I had an entire family branch hacked off in the Holocaust, in fact have a great uncle still walking around with a number tattooed on his forearm, and have heard dozens of eye witness accounts of horrors I could scarce imagine. And I'm still not okay with Holocaust Denial laws, which do exist where I live.

In part, this is just my aversion to abandoning the Schelling point you mention; but lately, this is becoming more of an actual concern: My country is starting to legislate some more prohibitions on free speech, all of them targeti... (read more)

I don't understand why you think the graphs are not measuring a quantifiable metric, nor why it would not be falsifiable. Especially if the ratios are as dramatic as often depicted, I can think of a lot of things that would falsify it.

I also don't find it difficult to say what they measure: The cost of fixing a bug depending on which stage it was introduced in (one graph) or which stage it was fixed in (other graph). Both things seem pretty straightforward to me, even if "stages" of development can sometimes be a little fuzzy.

I agree with your po... (read more)

0vi21maobk9vp
There are things that could falsify it dramatically, most probably. Apparently, those things are not actually the case. I specifically said "falsifiable and wrong" - in the parts where this correlation is falsifiable, it is not wrong for the majority of projects. About the dramatic ratio: you cannot falsify a single data point. It simply happened like this - or so the story goes. There are so many things that will be different in another experiment that could change (although not reverse) the ratio without disproving the general strong correlation... Actually, we do not even know what the axis labels are. I guess they are fungible enough. Saying that the cost of fixing is something straightforward seems too optimistic. Estimating the true cost of an entire project is not always simple when you have more than one project at once and some people are involved with both. What do you call the cost of fixing a bug? Any metric that contains "cost" in its name gets requested by some manager from time to time somewhere in the world. How it is calculated is another question. Actually, this is the question that actually matters.

You are attributing to me things I did not say.

I don't think "truths" discovered under false assumptions are likely to be, in fact, true. I am not worried about them acquiring dangerous truths; rather, I am worried about people acquiring (and possibly acting on) false beliefs. I remind you that false beliefs may persist as cached thoughts even once the assumption is no longer believed in.

Nor do I want my political opponents to not search for truth; but I would prefer that they (and I) try to contend with each others' fundamental differences before focusing on how to fully realize their (or my) current position.

3cousin_it
I don't understand your comment. Do you think statements like "the most efficient way to destroy our country is to do X" don't qualify as truths because they are "discovered under false assumptions"? It seems to me that such statements can be true and very useful to know even if you don't want to destroy the country, hence my original proposal. Maybe you're using a nonstandard meaning of "truth" and "assumption"?

A costly, but simple way would be to gather groups of SW engineers and have them work on projects where you intentionally introduce defects at various stages, and measure the costs of fixing them. To be statistically meaningful, this probably means thousands of engineer hours just to that effect.

A cheap (but not simple) way would be to go around as many companies as possible and carry out the relevant measurements on actual products. This entails a lot of variables, however - engineer groups tend to work in many different ways. This might cause the data to be l... (read more)

1moshez
It's not that costly if you do it with university students: Get two groups of 4 university students. One group is told "test early and often". One group is told "test after the code is integrated". For every bug they fix, measure the effort it takes to fix it (by having them "sign a clock" for every task they do). Then, do an analysis of when each bug was introduced (this seems easy after the bug is fixed, which is easy if they use something like Trac and SVN). All it takes is a month-long project that a group of 4 software engineering students can do. It seems like any university with a software engineering department could do it as the project for a single course. Seems to me it's under $50K to fund?

It's possible that I misconstrued the meaning of your words; not being a native English speaker myself, this happens on occasion. I was going off of the word "vibrant", which I understand to mean among other things "vital" and "energetic". The opposite of that is to make something sickly and weak.

But regardless of any misunderstanding, I would like to see some reference to the main point I was making: Do you want people to think on how best to do the opposite of what you are striving for (making the country less vibrant and diverse, whatever that means), or do you prefer to determine which of you is pursuing a non-productive avenue of investigation?

5Emile
I think you may indeed be missing some connotations: in policy debate on immigration and multiculturalism, what one side might call "a vibrant and diverse neighborhood", the other might call "a slum filled with hostile foreigners with no inclination to integrate" (see this blog post, for example). So someone who says that "you shouldn't make your country more vibrant and diverse" isn't expressing hostility to vitality and energy, he's objecting to the loaded words and underlying assumptions.
0Eugine_Nier
I was more objecting to your use of the word "diverse". And frankly these days "vibrant" has almost no meaning beyond being an applause light.

This strikes me as particularly galling because I have in fact repeated this claim to someone new to the field. I think I prefaced it with "studies have conclusively shown...". Of course, it was unreasonable of me to think that what is being touted by so many as well-researched was not, in fact, so.

Mind, it seems to me that defects do follow both patterns: Introducing defects earlier and/or fixing them later should come at a higher dollar cost, that just makes sense. However, it could be the same type of "makes sense" that made Aristotl... (read more)

1rwallace
We know that late detection is sometimes much more expensive, simply because depending on the domain, some bugs can do harm (letting bad data into the database, making your customers' credit card numbers accessible to the Russian Mafia, delivering a satellite to the bottom of the Atlantic instead of into orbit) much more expensive than the cost of fixing the code itself. So it's clear that on average, cost does increase with time of detection. But are those high-profile disasters part of a smooth graph, or is it a step function where the cost of fixing the code typically doesn't increase very much, but once bugs slip past final QA all the way into production, there is suddenly the opportunity for expensive harm to be done? In my experience, the truth is closer to the latter than the former, so that instead of constantly pushing for everything to be done as early as possible, we would be better off focusing our efforts on e.g. better automatic verification to make sure potentially costly bugs are caught no later than final QA. But obviously there is no easy way to measure this, particularly since the profile varies greatly across domains.
0vi21maobk9vp
The real problem with these graphs is not that they were cited wrong. After all, it does look like both are taken from different data sets, however they were collected, and support the same conclusion. The true problem is that it is hard to say what they measure at all. If this problem didn't exist, and these graphs measured something that can be measured, I'd bet that these graphs not being refuted would actually mean that they are both showing a true sign of correlation. The reason is quite simple: every possible metric gets collected for a stupid presentation from time to time. If the correlation were falsifiable and wrong, we would likely see falsifications on the TheDailyWTF forum as anecdotes.
7Morendil
Mostly the ones that are easy to collect: a classic case of "looking under the lamppost where there is light rather than where you actually lost your keys". Now we're starting to think. Could we (I don't have a prefabricated answer to this one) think of a cheap and easy to run experiment that would help us see more clearly what's going on?
2cousin_it
Even if my opponents try hard to discover "dangerous truths" that would help their side asymmetrically, I still expect them to mostly find truths that help everyone, because most truths are this way. Also it's kind of unusual that you want your political opponents to stop looking for correct beliefs because they may accidentally get too many. Most people seem to think the other way around: they feel their political opponents are brainwashed by leaders and would change their values if they had more curiosity and intellectual honesty.

Downvoted for conflating "not wanting to make your country more vibrant and diverse", and "wanting to destroy the country".

One that's already related to LW - commonsenseatheism.com; however that reinforces the thought that any LW regular who also frequents other places could discuss or link to it there.

2Eugine_Nier
Interesting, apparently as of a week ago it's shutting down.

I've up-voted several lists containing statements with which I disagree (some vehemently so), but which were thought provoking or otherwise helpful. So, even if this is just anecdotal evidence, the process you described seems to be happening.

An interesting thought, but as a practical idea it's a bad one.

A lot of the problems with how people debate is that the underlying assumptions are different, but this goes unnoticed. So two people can argue on whether it's right or wrong to fight in Iraq when their actual disagreement is on whether Arabs count as people, and could actually argue for hours before realizing this disagreement exists (Note: This is not a hypothetical example). Failing to target the fundamental assumption differences leads to much of the miscommunication we so often see.

By hav... (read more)

7cousin_it
Hmm. It seems to me that the people who would discuss a topic like "how to make our country more vibrant and diverse" are likely already convinced about some basic assumptions. If they don't get to have the discussion, they will stay just as convinced, but less informed.

I came to this thread by way of someone discussing a specific comment in an outside forum.

2Eugine_Nier
Just out of curiosity, which outside forum?

I'm finding it difficult to think of an admission criterion to the conspiracy that would not ultimately result in even larger damage than discussing matters openly in the first place.

To clarify: It's only a matter of time before the conspiracy leaks, and when it does, the public would take its secrecy as further damning evidence.

Perhaps the one thing you could do is keep the two completely separate on paper (and both public). Guilt by association would still be easy to invoke once the overlapping of forum participants is discovered, but that is much weaker than actually keeping a secret society discussing such issues.

I'm fairly convinced (65%) that Lalonde appearified the Sassacre book in such a way that it killed Jaspers, which is why she had to leave so abruptly.

0[anonymous]
I'll go 90%, now that Hussie has cleverly alluded to it by having Roxy's name for Jaspers (Frigglish) be the same as a character in Complacency of the Learned that gets crushed by a large book.

I somehow never thought to combine Homestuck wild mass guessing with prediction markets. And didn't really expect this on LW, for some reason. Holy cow.

Hm, let's try my two favorite pet theories...

  • In a truly magnificent Moebius double reacharound, the troll universe will turn out to have been created by the kids' session (either pre- or post-scratch): 40% (used to be higher, but now we have some asymmetries between the sessions, like The Tumor, so.)

  • In an even more bizarre mindscrew that echoes paradox cloning, the various kids and guardians will turn

... (read more)