Open thread, July 29-August 4, 2013
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Of course, for "every Monday", the last one should have been dated July 22-28. *cough*
Comments (381)
Open comment thread, Monday July 29th
If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.
Regarding the obvious recursion, please note that jokes are generally only funny the first time. :)
In some cases, true iff you point it out in advance.
Also the n-th time for n >> 1.
Terrible pun of the day: Bias-ian.
Random idea for the Löbian obstacle that turned out not to work, but I decided to post anyway on the off chance someone can salvage it:
Inspired by the human brain's bicameral system: Split the system into two, A and B. A has ((B proves C) -> C), B has ((A proves C) -> C). A, trusting B, can build B' as strong as B; B, trusting A, can build A' as strong as A.
Obvious flaw: A has ((B proves ((A proves C) -> C)) -> ((A proves C) -> C)), so A has ((A proves C) -> C), and vice versa.
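A minimal sketch of that flaw in modal-logic notation, writing □_A φ for "A proves φ" (this just restates the derivation above, nothing new):

```latex
\begin{align*}
&\text{A's schema:} && \Box_B \varphi \to \varphi \\
&\text{B's schema:} && \Box_A \varphi \to \varphi \\
&\text{Instantiate A's schema at } \varphi := (\Box_A C \to C)\text{:} && \Box_B(\Box_A C \to C) \to (\Box_A C \to C) \\
&\text{B proves its own schema, and A can verify this:} && \Box_B(\Box_A C \to C) \\
&\text{Hence A derives its own soundness schema:} && \Box_A C \to C
\end{align*}
```

So A ends up asserting ((A proves C) -> C) for every C, which is exactly the Löbian situation the split was supposed to avoid.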
So according to this article a large factor in rising tuition costs in American universities is attributable to increases in administration and overhead costs. For example,
Certainly some of these increases are attributable to the need for more staff supporting new technological infrastructure such as network/computer administration but those needs don't explain the magnitude of the increases seen.
The author also highlights examples of excess and waste in administrative spending such as large pay hikes for top administrators in the face of budget cuts and the creation of pointless committees. How much these incidents contribute to the cost of tuition is somewhat questionable as the evidence is essentially a large list of anecdotes.
Anyway, this was surprising to me because I would naively predict that, if we were talking about almost any other product, we would begin to see less bureaucratically bloated competitors offering it for cheaper and driving the price down. What's unique about universities that stops this from happening?
Possible explanations (based on an extremely basic understanding of economics, please correct),
The author notes that the boards of trustees tend to be ill-prepared for making the kinds of decisions that might lead to a trimming of the fat. However, for this to be the reason (or at least a large part of the reason), boards would have to be almost universally incompetent; otherwise the few universities that did take such action would have a market advantage over those that don't.
Maybe, for whatever reason, it's difficult for universities to grow past a certain point. If the market is already saturated with demand and universities are unable to expand to accommodate it, then they have no incentive to lower tuition. However, you would still expect lots of new universities to pop up as a result of this (which may or may not be the case, as I couldn't find good statistics on this).
The situation we find ourselves in appears to fit well with the signaling model of education. That is, college isn't about learning, it's about signaling your worth to potential employers via an expensive piece of paper. If this were the case it would be hard for a new or non-prestigious institution to break into the market or increase their market share even if the actual education was of high quality and inexpensive relative to competitors. In fact, under this model, more expensive schools may be preferred simply because they signal a higher level of prestige.
Maybe I have been fooled by a misleading article that overblows the level of waste and inefficiency in American universities and that it would actually be quite difficult to run a modern educational institution without a comparable level of bureaucratic expenditure. There are parts of the article that do strike me as hyperbolic, but I've yet to come across a coherent argument that contends the current tuition levels are necessary and several that posit the opposite.
Also worth considering is the idea that increased administration is needed to deal with new regulations and/or norms. For example, many schools have added positions dealing with diversity, sexual assault, and disability accommodations.
I think combining your 2 and 3 with the observation that demand is not particularly sensitive to price (citation needed) provides a strong argument for why administrators would not be incentivized to cut costs.
Part of the reason the market can tolerate an increase in price is the same as the reason health care does likewise. The consumer is paying with someone else's money in many or most cases, and no one is looking closely at an itemized receipt/menu.
There are some new universities that arise and grow, especially technical colleges, things like U of Phoenix, etc., but almost by definition they will be low status (signaling), and there are accrediting hurdles and other regulations that help existing universities function as a cartel.
We do see competition.
ETA: Two additional points:
A lot of the spending/waste is on prestige projects like new buildings, rather than on administrators.
If you're wondering why nobody is challenging the top schools, I have three responses:
1) It would require too high an initial investment. 2) It would require attracting top students, which is more difficult given scholarships and lack of reputation. 3) This college is trying to do so.
An interesting story -- about science, what gets published, and what the incentives for scientists are. But really it is about whether you ought to believe published research.
The summary has three parts (I am quoting from the story).
Part 1 : We were inspired by the fast growing literature on embodiment that demonstrates surprising links between body and mind (Markman & Brendl, 2005; Proffitt, 2006) to investigate embodiment of political extremism. Participants from the political left, right and center (N = 1,979) completed a perceptual judgment task in which words were presented in different shades of gray. Participants had to click along a gradient representing grays from near black to near white to select a shade that matched the shade of the word. We calculated accuracy: How close to the actual shade did participants get? The results were stunning. Moderates perceived the shades of gray more accurately than extremists on the left and right (p = .01). Our conclusion: political extremists perceive the world in black-and-white, figuratively and literally. Our design and follow-up analyses ruled out obvious alternative explanations such as time spent on task and a tendency to select extreme responses.
Part 2 : Before writing and submitting, we paused. ... We conducted a direct replication while we prepared the manuscript. We ran 1,300 participants, giving us .995 power to detect an effect of the original effect size at alpha = .05.
Part 3 : The effect vanished (p = .59).
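(For anyone curious how a power figure like .995 is derived from a sample size, here's a rough sketch of the kind of calculation involved; the two-group design and effect size below are illustrative placeholders, not the study's actual parameters.)

```python
# Rough sketch of a statistical power calculation, NOT the study's actual analysis.
# The design (two-group t-test) and effect size are hypothetical placeholders.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(
    effect_size=0.20,  # placeholder standardized effect size (Cohen's d)
    nobs1=650,         # placeholder: ~1,300 participants split into two groups
    alpha=0.05,        # significance level quoted in the story
)
print(f"Power to detect the assumed effect: {power:.3f}")
```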
I believe I've encountered a problem with either Solomonoff induction or my understanding of Solomonoff induction. I can't post about it in Discussion, as I have less than 20 karma, and the stupid questions thread is very full (I'm not even sure if it would belong there).
I've read about SI repeatedly over the last year or so, and I think I have a fairly good understanding of it. Good enough to at least follow along with informal reasoning about it, at least. Recently I was reading Rathmanner and Hutter's paper, and Legg's paper, due to renewed interest in AIXI as the theoretical "best intelligence," and the Arcade Learning Environment used to test the computable Monte Carlo AIXI approximation. Then this problem came to me.
Solomonoff Induction uses the size of the description of the smallest Turing machine to output a given bitstring. I saw this as a problem. Say AIXI was reasoning about a fair coin. It would guess before each flip whether it would come up heads or tails. Because Turing machines are deterministic, AIXI cannot make hypotheses involving randomness. To model the fair coin, AIXI would come up with increasingly convoluted Turing machines, attempting to compress a bitstring that approaches Kolmogorov randomness as its length approaches infinity. Meanwhile, AIXI would be punished and rewarded randomly. This is not a satisfactory conclusion for a theoretical "best intelligence." So is the italicized statement a valid issue? An AI that can't delay reasoning about a problem by at least labeling it "sufficiently random, solve later" doesn't seem like a good AI, particularly in the real world where chance plays a significant part.
Naturally, Eliezer has already thought of this, and wrote about it in Occam's Razor:
Does this warrant further discussion, if at least to validate or refute this claim? I don't think Eliezer's proposal for a version of SI that assigns probabilities to strings is strong enough, it doesn't describe what form the hypotheses would take. Would hypotheses in this new description be universal nondeterministic Turing machines, with the aforementioned probability distribution summed over the nondeterministic outputs?
Hypotheses in this description are probabilistic Turing machines. These can be cashed out to programs in a probabilistic programming language.
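As a toy illustration (my own sketch, not drawn from any of the papers mentioned above): the probabilistic hypothesis for a fair coin is a one-line sampler, and it assigns probability 2^-N to every length-N observation, instead of an ever-growing family of deterministic machines each hardcoding one particular bitstring.

```python
import random

# A probabilistic hypothesis for the fair coin: one short program that samples
# its predictions, rather than increasingly convoluted deterministic programs
# that each hardcode a particular observed bitstring.
def sample_observation(n_bits: int) -> list[int]:
    """Sample one possible observation sequence under the fair-coin hypothesis."""
    return [random.randint(0, 1) for _ in range(n_bits)]

def probability_assigned(observed: list[int]) -> float:
    """Probability this hypothesis assigns to a specific observed bitstring."""
    return 0.5 ** len(observed)

observed = [1, 0, 0, 1, 1, 0, 1, 0]
print(probability_assigned(observed))  # 2**-8 = 0.00390625, whatever the pattern
```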
I think it's going too far to call this a "problem with Solomonoff induction." Solomonoff induction makes no claims; it's just a tool that you can use or not. Solomonoff induction as a mathematical construct should be cleanly separated from the claim that AIXI is the "best intelligence," which is wrong for several reasons.
Can probabilistic Turing machines be considered a generalization of deterministic Turing machines, so that DTMs can be described in terms of PTMs?
Editing in reply to your edit: I thought Solomonoff Induction was made for a purpose. Quoting from Legg's paper:
I'm just pointing out what I see as a limitation in the domain of problems classical Solomonoff Induction can successfully model.
Yes.
Yes.
I don't think anyone claims that this limitation doesn't exist (and anyone who claims this is wrong). But if your concern is with actual coins in the real world, I suppose the hope is that AIXI would eventually learn enough about physics to just correctly predict the outcome of coin flips.
The steelman is to replace coin flips with radioactive decay and then go through with the argument.
Might be worth having those more often too; the last one was very popular, and had lots of questions that open threads don't typically attract.
Just a naïve thought, but maybe it would come up with MWI fairly quickly because of this. (I can imagine this being a beisutsukai challenge – show a student radioactive decay, and see how long it takes them to come up with MWI.) A probabilistic one is probably better for the other reasons brought up, though.
To come up with MWI, it would have to conceive of different potentialities and then a probabilistic selection. I don't know, I'm not seeing how deterministic Turing machines could model that.
Suppose in QM you have a wavefunction which recognizably evolves into a superposition of wavefunctions. I'll write that psi0, the initial wavefunction, becomes m.psi' + n.psi'', where m and n are coefficients, and psi' and psi'' are basis wavefunctions.
Something slightly analogous to the MWI interpretation of this, could be seen in a Turing machine which started with one copy of a bitstring, PSI0, and which replaced it with M copies of the bitstring PSI' and N copies of the bitstring PSI''. That would be a deterministic computation which replaces one world, the single copy of PSI0, with many worlds, the multiple copies of PSI' and PSI''.
So it's straightforward enough for a deterministic state machine to invent rules corresponding to a proliferation of worlds. In fact, in the abstract theory of computation, this is one of the standard ways to model nondeterministic computation - have a deterministic computation which deterministically produces all the possible paths that could be produced by the nondeterministic process.
However, the way that QM works, and thus the way that a MWI theory would have to work, is rather more complicated, because the coefficients are complex numbers, the probabilities (which one might suppose correspond to the number of copies of each world) are squares of the absolute values of those complex numbers, and probability waves can recombine and destructively interfere, so you would need worlds / bitstrings to be destroyed as well as created.
In particular, it seems that you couldn't reproduce QM with a setup in which the only fact about each world / bitstring was the number of current copies - you need the "phase information" (angle in the complex plane) of the complex numbers, in order to know what the interference effects are. So your Turing machine's representation of the state of the multiverse would be something like:
(complex coefficient associated with the PSI' worlds) (list of M copies of the PSI' bitstring); (complex coefficient associated with the PSI'' worlds) (list of N copies of the PSI'' bitstring) ; ...
and the "lists of copies of worlds" would all be dynamically irrelevant, since the dynamics comes solely from recombining the complex numbers at the head of each list of copies. At each timestep, the complex numbers would be recomputed, and then the appropriate number of world-copies would be entered into each list.
But although it's dynamically irrelevant, that list of copies of identical worlds is still performing a function, namely, it's there to ensure that there actually are M out of every (M+N) observers experiencing worlds of type PSI', and N out of every (M+N) observers experiencing worlds of type PSI''. If the multiverse representation was just
(complex coefficient of PSI' world) (one copy of PSI' world) ; (complex coefficient of PSI'' world) (one copy of PSI'' world) ; ...
then all those complex numbers could still evolve according to the Schrodinger equation, but you would only have one observer seeing a PSI' world, and one observer seeing a PSI'' world, and this is inconsistent with observation, where we see that some quantum events are more probable than others.
This is the well-known problem of recovering the Born probabilities, or justifying the Born probability rule - mentioned in several places in the QM Sequence - but expressed in the unusual context of bit-strings on a Turing tape.
(Incidentally, I have skipped over the further problem that QM uses continuous rather than discrete quantities, because that's not a problem of principle - you can just represent the complex numbers the way we do on real computers, to some finite degree of binary precision.)
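A crude data-structure sketch of the representation described above (my own illustration; it collapses the explicit list of identical copies into a count and ignores the dynamics entirely):

```python
from dataclasses import dataclass

# One branch on the tape: a complex amplitude (carrying the phase information
# needed for interference), the bitstring describing that world, and the number
# of identical copies -- the part that is dynamically irrelevant but is meant
# to make the observer counts come out right.
@dataclass
class Branch:
    amplitude: complex
    world: str
    copies: int

multiverse = [
    Branch(amplitude=0.8 + 0.0j, world="PSI'", copies=64),
    Branch(amplitude=0.0 + 0.6j, world="PSI''", copies=36),
]

# The Born rule demands copies proportional to |amplitude|**2; nothing in the
# deterministic update of the amplitudes forces that correspondence, which is
# the problem of recovering the Born probabilities mentioned above.
for b in multiverse:
    print(b.world, abs(b.amplitude) ** 2, b.copies)
```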
... keep in mind that deterministic Turing machines can trivially simulate nondeterministic Turing machines.
The problem here seems to be one of notation. You are using nondeterministic Turing machine in the formal sense of the term, where Mitchell seems to be using nondeterministic closer to "has a source of random bits."
Trivially? I was under the impression that it involved up to a polynomial slowdown, while probabilistic Turing machines can simulate deterministic Turing machines by merely having only a single probability of 1 for each component of its transition function.
Algorithmically trivially, I didn't see anyone concerned about running times.
Well, wouldn't that be because it's all theorizing about computational complexity?
I see the point. Pseudorandom number generators would be what you mean by simulation of nondeterminism in a DTM? Would a deterministic UTM with an RNG be sufficient for AIXI to hypothesize randomness? I still don't see how SI would be able to hypothesize Turing machines that produce bitstrings that are probabilistically similar to the bitstring it is "supposed" to replicate.
Do you see how a nondeterministic Turing machine could model that?
If so ... ... ...
Someone want to start one day after tomorrow? Run monthly or something? Let's see what happens.
Eliezer's proposal was a different notation, not an actual change in the strength of Solomonoff Induction. The usual form of SI with deterministic hypotheses is already equivalent to one with probabilistic hypotheses. Because a single hypothesis with prior probability P that assigns uniform probability to each of 2^N different bitstrings, makes the same predictions as an ensemble of 2^N deterministic hypotheses each of which has prior probability P*2^-N and predicts one of the bitstrings with certainty; and a Bayesian update in the former case is equivalent to just discarding falsified hypotheses in the latter. Given any computable probability distribution, you can with O(1) bits of overhead convert it into a program that samples from that distribution when given a uniform random string as input, and then convert that into an ensemble of deterministic programs with different hardcoded values of the random string. (The other direction of the equivalence is obvious: a computable deterministic hypothesis is just a special case of a computable probability distribution.)
Yes, if you put a Solomonoff Inductor in an environment that contains a fair coin, it would come up with increasingly convoluted Turing machines. This is a problem only if you care about the value of an intermediate variable (posterior probability assigned to individual programs), rather than the variable that SI was actually designed to optimize, namely accurate predictions of sensory inputs. This manifests in AIXI's limitation to using a sense-determined utility function. (Granted, a sense-determined utility function really isn't a good formalization of my preferences, so you couldn't build an FAI that way.)
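A toy sketch of the equivalence described above (my own illustration, with N = 3 observation bits and an arbitrary prior P for the probabilistic hypothesis): a Bayesian update of the single uniform hypothesis gives the same predictions as simply discarding the falsified members of the corresponding deterministic ensemble.

```python
from itertools import product

N = 3     # illustrative number of observation bits
P = 0.5   # illustrative prior probability of the probabilistic hypothesis

# The single probabilistic hypothesis assigns probability 2**-N to each string.
strings = ["".join(bits) for bits in product("01", repeat=N)]

# The equivalent ensemble: 2**N deterministic hypotheses, each with prior
# P * 2**-N, each predicting exactly one hardcoded bitstring with certainty.
ensemble = {s: P * 2 ** -N for s in strings}

# Observe that the first bit is '1'; updating the ensemble is just discarding
# the falsified members (hardcoded strings starting with '0').
surviving = {s: p for s, p in ensemble.items() if s.startswith("1")}

# The surviving mass equals what the probabilistic hypothesis predicts: P * 1/2.
print(sum(surviving.values()), P * 0.5)
```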
It seems to completely answer your question. That is, one can think about probabilities and formulate and test probabilistic hypotheses, without needing to generate any random numbers.
Qiaochu has already answered your question about SI, but to also attack your question about AIXI:
Careful about what you're assuming. You're implicitly assuming that the AI doesn't know that what is being flipped is a random coin. If the AI had that knowledge, it could just replace all those convoluted descriptions with just a simple one: "Generate a pseudorandom number". This would be just as effective as any other predictor, and indeed it would be very short and easy to run.
Now, what if the AI doesn't know this? Then you are feeding it random numbers and expecting it to find order in them. In other words, you're asking the hardest problem of all. It makes sense that it would expend a huge amount of computational power trying to find some order in random numbers. Put yourself in the computer's place. How on Earth would you ever be able to know if the string of 0's and 1's you are being presented with is really just random or the result of some incredibly complicated computer program? No one's telling you!
Finally, if the coin is actually a real physical coin, the computer will keep trying more and more complicated hypotheses until it has modelled your fingers, the fluid dynamics of the air, and the structure of the ground. Once it has done so, it will indeed be able to predict the outcome of the coin flip with accuracy.
Note that the optimality of AIXI is subject to several important gotchas. It is a general problem solver, and can do better than any other general problem solver, but there's no guarantee that it will do better than specific problem solvers on certain problems. This is because a specifically-designed problem solver carries problem-specific information with it - information that AIXI may not have access to.
Even a very small amount of information (say, a few tens of bits) about a problem can greatly reduce the search space. Just 14 bits of information (two ASCII characters) can reduce the search space by a factor of 2^14 = 16384.
I have a question about the Simulation Argument.
Suppose that it's some point in the future, and we're able to run conscious simulations of our ancestors. We're considering whether or not to run such a simulation.
We are also curious about whether we are in a simulation ourselves, and we know that knowledge that civilizations like ours run ancestor simulations would be evidence for the proposition that we ourselves are in a simulation.
Could the choice at this point whether or not to run a simulation be used as a form of acausal control over the probability that we ourselves are living in a simulation?
Taboo "acausal control."
Hmm, okay, to put it another way -- if we avoid running ancestor simulations for the purpose of maximizing the probability that we are not in a simulation, is it valid to, based on this fact, increase our credence in not being in a simulation?
I think so. If we decided not to run a simulation, any would-be-simulators analogous to us would also choose not to run a simulation, so you've eliminated a bunch of worlds where simulations are possible.
Only if those simulators are extremely similar to us. It may only take a very minor difference to decide to run simulations.
That is true, but irrelevant. Making the decision eliminates possible worlds in which we are simulations. Therefore we end up with fewer simulation-worlds out of our total list of potential future worlds, and thus our probability estimate must increase.
Or, to put it in Bayesian terms: P(we're not in a simulation | we chose not to run simulations)/P(we're not in a simulation) is greater than 1.
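(A toy numerical version of that update, with entirely made-up numbers, just to show the direction of the effect:)

```python
# Entirely made-up numbers, purely to illustrate the direction of the update.
priors = {
    "basement world, we choose not to simulate": 0.10,
    "basement world, we choose to simulate":     0.10,
    "simulated by a civilization much like us":  0.80,
}

# Likelihood of observing ourselves deciding not to run simulations in each world.
# (Simulators "much like us" would mostly make the same choice, per the argument,
# though minor differences are possible -- hence the small nonzero value.)
likelihood = {
    "basement world, we choose not to simulate": 1.0,
    "basement world, we choose to simulate":     0.0,
    "simulated by a civilization much like us":  0.1,
}

unnormalized = {w: priors[w] * likelihood[w] for w in priors}
z = sum(unnormalized.values())
posterior = {w: p / z for w, p in unnormalized.items()}
print(posterior)  # P(basement, i.e. not simulated) rises from 0.2 to about 0.56
```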
Sure, but by how much? If the ratio is something like 2 or even 5 or 10 this isn't going to matter much.
That's not the question.
That's the question, and the answer is "yes."
Unless you round sufficiently small increases down to zero, which is what people generally do. If somebody asked me that, and I estimated that the difference in probability was .00000000001, then I would answer "no".
That is granted. However, I'm also fairly sure (p=.75) that the probability isn't that small, because by deciding not to simulate a civilization yourself, you have greatly decreased the probability of being in an infinite descending chain. There remain singleton chance simulations and dynamic equilibria of nested simulations, but those are both intuitively less dense in clones of your universe - so you've ruled out a significant fraction of possible simulation-worlds by deciding not to run simulations yourself.
The most you can say is that all reflectively consistent ancestors would behave the same way you do. Wasn't there a Greg Egan's story about it?
English tip: the possessive ending " 's " carries an implicit "the". Thus "Greg Egan's story" means "the story of Greg Egan", not just "story of Greg Egan". (This is unlike the corresponding construction in, for example, German.) Instead of the above, you wanted to write:
(This particular mistake occurs often among non-native-speakers, and indeed is a dead giveaway of one's status as such, so it's worth saying something about.)
(Except in constructs like “girls' school” or “a ten minutes' walk”.)
You're right about "girls' school", but "a ten minutes' walk" is wrong (should be "a ten-minute walk" or "ten minutes' walk").
Thanks. I myself am a non-native speaker.
[Note to self: I should re-read the relevant chapter in my English grammar when I get back home. Meanwhile, I'll look at the overview here.]
(Semantically, “ten minutes' walk” still means ‘a ten-minute walk’ rather than ‘the ten-minute walk’, but your point in reply to shminux was about syntax not semantics anyway.)
The "proof of synonymy" looks like this:
ten minutes' walk = (the walk) of (ten minutes) = a (walk of ten minutes) = a ten-minute walk
...the second "equality" being where semantics is invoked.
Thanks. This sounds plausible (if irrelevant), but I could not find an authoritative reference confirming it. Any links?
(My comment was generated by the spontaneous reaction and reflection of a native speaker rather than memory of any deliberately learned rule.) Wikipedia has this to say:
One should indeed think of " 's " in this context as the equivalent for nouns of what "my" is for the pronoun "I".
A Student's Introduction to English Grammar, p. 90:
p. 109:
See also the Wikipedia determiner and genitive case articles.
Thanks! Now, if only someone linked that Egan story :)
This story is not by Egan, but it might be what you're looking for.
Ah, yes, thanks. I wondered why I couldn't find it :) Hmm, I thought it was longer...
Hm. So how do you express the concept of an undetermined relative of some patient? The text you quoted would say that [one patient's relative] means the relative of one patient -- how do I express a relative of one patient?
Didn't you just?
Well, of course there are ways to rephrase most anything. I am, however, interested in whether there's a way to express the "a relative of one patient" notion through the possessive 's.
A related question is whether a native speaker would be sure that one patient's relative necessarily means the relative, or whether he would find it ambiguous between the relative and a relative.
In a specialized context (such as among people who work at a hospital), "patient's relative" could conceivably become a set phrase, in which case sentences such as "there are some patient's relatives waiting outside" would become possible (contrast * "there are some Greg Egan's stories on the shelf").
This is presumably what happened with "girls' school". Very rarely, it can even happen with proper nouns, as in the mathematical term Green's function. But this is not part of the syntax of the possessive; it is the result of the whole possessive phrase being treated as a unit. (When you hear "the Green's function for this operator" for the first time, you immediately know that "Green's function" is a jargon phrase, because of the irregular syntax.)
Haven't read it, but perhaps you mean this one? It sounds very interesting!
No. It is unreasonable to think that all simulations are ancestral anyway. Even if no one runs ancestral simulations, people will still run simulations of other possible worlds for a variety of reasons, and we will likely be in one of those. And anyway, as soon as you can make a complete ancestral simulation (without knowing of any way to do so without giving consciousness/qualia/whatever to the simulated), you can be >99% sure that you live in a simulation, no matter whether you run anything yourself or not.
I strongly recommend not using stupid. It's less distracting to just point out mistakes without using insults.
changed to unreasonable if that helps
That is less insulting, and therefore an improvement. A version that's not even a little insulting might look something like "Not all simulations are ancestral." That approach expresses disagreement with the original claim, but doesn't connote anything about the person who made it.
However, your version completely skips what I am actually saying - that I think that whole line of thinking is bad.
There's a difference between “it is unreasonable to think X” and “not X”. (Let X equal “the sixteenth decimal digit of the fine structure constant is 3”, for example.)
(I'd use “There's no obvious good reason to think that all simulations are ancestral.”)
"Unreasonable" is an improvement, but I'd take it further to "mistaken" or "highly implausible".
Actually, I agree with you about the likelihood of numerous sorts of simulations that highly outnumber ancestor simulations.
Point taken regarding ancestor simulations, but I don't think that resolves the question. What we choose to do is still evidence about what others will choose to do whether or not the choice is about simulating ancestors or just other possible worlds.
In Bostrom's formulation there is also the possibility that civilizations capable of ancestor simulations will overwhelmingly choose not to. It's not obvious to me that this is one of the horns of the trilemma to reject.
I can think of at least two reasons why it might be a convergent behavior not to run ancestor simulations:
1) Civilizations capable of running ancestor simulations might overwhelmingly have morals that dissuade them from subjecting sentient beings to such low standards of living as their ancestors had.
2) Such civilizations may wish to exert acausal control over whether they are in a simulation. This is the motivation for my question.
Again, you are making Bostrom's mistake of focusing on ancestral simulations. This is likely why this option seems plausible to you, as it did to him - it looks much more plausible that people will decide not to run any ancestral simulations because of their morals than it is that people will decide not to run any simulations whatsoever.
This is theoretically possible, but realistically there is little reason to expect all posthuman civilizations to have such morals with regard to arbitrary creatures. We certainly don't seem to be the type of civilization which would sacrifice the utility gained by running simulations for some questionable moral reasons - or at least not with a probability that is close to 1. Additionally, the mindspace for all posthuman agents is huge - you need a large amount of evidence to conclude that it is likely for all posthuman civilizations to be so moral.
Similarly, mind space is huge and it seems really unlikely by default that most posthuman societies will never run a simulation just on that basis. Furthermore, it is enough if only 1 in every billion posthuman civilizations runs simulations for it to be more likely that we are in a simulation than not, provided that the average simulator civilization runs more than a billion simulations in its history.
Furthermore, in order for most posthuman civilizations to not run any simulations, there needs to be some sort of 100% efficient way to prevent rogue agents from developing simulations. This also could be possible but still seems mostly unlikely. Even if somehow all posthuman societies always decide to never run a single simulation (for which there is no evidence), it is unlikely that all those civilizations also have a world-wide simulation-prevention mechanism in place from the very moment when simulations are technologically possible in that world.
Again, this seems irrelevant. I talked about ancestor simulations because that's how it's worded in the Simulation Argument, but as I said in the post above, as far as I can tell the logic doesn't depend on it. Just replace 'simulations of ancestors' with 'simulations of worlds containing sentient beings'.
As for the rest of your post, those are fine arguments for why the second horn of the trilemma should be rejected. I don't find them absolutely convincing, so I still assign non-negligible credence to option 2 (and thus still find the acausal control question interesting), but I don't have strong counterarguments either, so if you do assign negligible credence to option 2, perhaps we'll have to agree to disagree on this point.
I do and based on the wording of your comment you have no real reason not to either.
Did you miss this part?
Nope. They weren't meant to be absolutely convincing - option 2) is possible just not probable.
Perhaps. I will have to think about it some more.
Do inaccurate ancestral simulations count for anything in this argument? Admittedly, I'm extrapolating from humans as I know them, but the combination of incomplete research, simulations modified for convenience and/or tolerability and/or to improve the story, and interest in what-if scenarios implies that even if you're an ancestor in a simulation run by an ancestor-simulation-creating civilization, you won't be that much like the actual ancestor.
Just for the fun of it, the Borgias on tv.
It completely doesn't matter whether you are a simulation of an accurate ancestor, an inaccurate ancestor, or HJPEV. As I am trying to point out, there is nothing special about ancestral simulations and no real reason to focus only on them.
Does anyone know of a good textbook on public relations (PR), or a good resource/summary of the state of the field? I think it would be interesting to know about this, especially with regards to school clubs, meetups, and online rationality advocacy.
Any LW readers living in India? I recently moved here (specifically, New Delhi) from the United States and I'm interested in the possibility of a local meet-up.
The usual suggestion for cases like this is to unilaterally announce a meetup in a public place, and bring a book in case no one shows up. Best case: awesome people doing awesome things. Worst case: you spend a couple hours reading.
Warning: politics, etc., etc.
What do conservative political traditions squabble over?
My upbringing and social circles are moderately left-wing. There's a well-observed failure mode in these circles, not entirely dissimilar to what's discussed in Why Our Kind Can't Cooperate, where participants sabotage cooperation by going out of their way to find things to disagree about, presumably for moral posturing and virtue-signalling reasons.
In recent years I have become fairly sceptical of intrinsic differences between political groups, which leads me to my opening question: what do conservative political traditions squabble over? I find it hard to imagine what form this sort of self-sabotaging moral posturing might take. Can anyone who grew up on the other side of the fence offer any insight?
Not speaking based on what I grew up with, but this seems slightly more common on the American left than the American right. That said, examples of similar squabbles on the right include arguments over religion, such as whether voting for Mitt Romney was ok given that he was a Mormon (see e.g. here, with similar attacks on Glenn Beck). Recently, certain parts of the Tea Party called for a boycott of Fox News for being too pro-Obama. Similarly, some of the Protestants on the right are still not ok with Catholics, although they aren't a very large group and seem to be getting smaller. There's also a running trend in the fight between the more interventionist end of the right and the more isolationist end. See e.g. here. Another example: when Rick Perry tried to make HPV vaccination mandatory in Texas, there was blowback from the right as well as from the general libertarians.
But it seems that overall, these sorts of fights occur at a smaller scale than they do on the left. They don't involve as much splintering of organizations. And like many of the similar issues on the left, few people who aren't personally involved are paying much attention to them and even when one does, the differences often look small to outsiders even as the arguments get very heated.
At least in American politics, this seems to me to be cyclical: conservatives were very tightly united during the 80's and 90's, and are presently fairly divided. (Their present divisions are partially papered over by the two other factors that lead to increased party-bloc voting- the end of racism as an effective issue that ran across party lines, and a general increase in party-line/ideological voting that also shows up among Democrats. Non-substantive votes like the historic near-failure of Boehner's run for House Majority Leader, and the Party's internal discussions, show divisions better.)
There have been some substantive examples as well. The TARP vote was considerably more divisive for Republicans than for Democrats. Both parties were about equally divided on the recent Amash Amendment vote (to defund the NSA).
I don't think racism as an effective issue is over. Atwater's southern strategy seems alive and well to me. This was first executed (successfully?) by Reagan and the pattern seems to hold. Here's Atwater's quote on the matter:
This is not relevant to what I said, for several reasons. First, guessing at your beliefs, you almost certainly believe that only one party today is racist; therefore, racism is not an effective issue that runs across party lines. (Note that until the 60's-70's, the South was split between Democrats and Republicans; there were effectively four political groups in the US: racist Democrats, racist Republicans, non-racist Democrats, non-racist Republicans. This screwed with party-based analysis of voting patterns.) The second is that, so far as I know, Congress no longer holds any straight-up-or-down votes on racism ala the Voting Rights Act; racism itself is not an issue, as nobody would vote for it.
(entirely based on recent USA politics) My instinct is to say conservatives do less jockeying for status and have more substantive disagreements with each other (not without vitriol, of course). I think this is true, but likely not as much as it seems to me.
One main conservative divide is over how much to use the state to influence the country towards traditional institutions versus staying with a libertarian framework. Social conservatives vs fiscal conservatives. Generally the first group still wants to work within the democratic process, and sees left groups as wanting to appeal to judges to find novel interpretations of existing laws (i.e., conservatives amending the state constitution to define marriage vs liberals finding existing non-discrimination amendments to apply more broadly than they were likely intended).
Social conservatives will want ordered, controlled immigration vs the open, almost unregulated immigration of fiscal conservatives (probably justice vs pragmatism), though both will affirm legal immigrants and both will likely want to reduce direct incentives for immigrants (i.e., welfare).
A mirror of this in foreign policy is libertarian isolationism vs hawkish/neo-con interventionism, the latter falling out of favor lately, as anger fades and war weariness sets in (or more charitably, people learn lessons and modify their theories).
There are other divisions that I don't think fall along the same lines. Another broad category is how radically to enact change. There is a bit of fundamental tension in a "conservative" philosophy in that at some point after losing a battle there is almost an obligation to conserve the victories of your opponents while fighting their next expansion. (By analogy, picture two nations fighting over borders where A wants to annex B, but B has an ideological goal to keep the borders set in place by each most recent treaty. Hence, I suspect, the rise of internet Reactionaries who want to do more than draw new lines in the sand.)
For example, all conservatives are going to be in favor of free markets, but some may differ on the needed level of intervention by regulators or quasi-governmental groups like the Fed, where those in favor of less are viewed as more conservative but may be called "out of the mainstream" or such. There are some who self-identify as conservatives and argue for expanded state-business cooperation/interference, such as GW Bush proposing TARP.
Another division, perhaps more petty, is over how much to compromise and work with liberals/Democrats vs standing on, and losing with, principles. Some argue that if Republicans articulate a conservative vision and do not sell out, people will embrace that; some argue that people probably won't, but then we should let them get what they want by electing Democrats, and not have policies that [conservatives view as] inevitable failures be painted with a bipartisan brush, so that they serve as an object lesson; others argue that politics is messy, and we have to compromise to get the best policies that we can while working together with the other side. Optimism vs pessimism vs pragmatism.
Despite being overly long, I don't know if this answers your question or says anything non-obvious, as you seem to be asking for more petty disputes. I think that those tend to be a magnification of a difference along some of the axes mentioned above into not just a quantitative difference but an unbridgeable qualitative one. But there are fundamental disagreements such that one can't say "I'm more conservative than you because I want more x than you" and expect it to hold sway and earn status points across the ideology. Well, maybe lower taxes.
We used to nutshell it as Trads vs Libertarians in college. Here are the relevant strawmen each group has of the other. (Hey, you asked what the fights look like!)
Trads see libertarians as: Just as prone to utopian thinking as those wretched liberals, or else shamelessly callous. Either they really do believe that people will just be naturally good without laws or institutions (what piffle!) or they just don't care about the casualties and trust that they themselves will rise to the top of their brutal, anarchic meritocracy. Not to mention that some of them could be more accurately described as libertines and just want an excuse for license.
Libertarians see trads as: Hidebound sticks-in-the-mud. They'd rather have people following arbitrary rules than thinking critically. They despise modernity, but don't actually have a positive vision of what they want instead (they're prone to ruefully shaking their heads and saying "Everything went downhill after the 1950s, or the American Revolution, or the Fall of Man"). By proposing ridiculous schemes (a surprising number have monarchist sympathies!) and washing their hands of governance in a show of 'epistemological modesty' and 'subsidiarity' they wriggle out of putting principles into practice.
The left-to-right political axis is a very poor tool for looking at political goal/values/theories/opinions/etc.
First, to even talk about it you need to specify at least the locality. "Left" (or, say, "liberal") in the US means something different from what "left" (or "liberal") means in Europe. I'd wager it means something different yet in China, Russia, India...
Second, one dimension is clearly inadequate for political analysis. For example consider a very important (IMHO) concept in politics: statism. Is the American left statist? Well, kinda. They are statist economically but not culturally. Is the American right statist? Well, kinda. They are statist morally but not economically. I'm, of course, speaking in crude generalizations here.
“Left” and “liberal” in the US and “left” in Europe mean more-or-less similar things, whereas “liberal” in Europe often means something else entirely. (I once made a longer comment about that somewhere, I'll link to it when I find it. EDIT: here it is.)
Obama is considered left in the US.
From a German perspective he's a lot further right than Angela Merkel, who is Germany's right-wing chancellor.
Angela Merkel wouldn't put the government employee who exposed torture into prison while not charging anyone who tortured with crimes.
I meant in a relative sense, not in an absolute one: AFAIK, Obama is more “left” than his competition (other mainstream American politicians), and Merkel is less “left” than her competition (other mainstream German politicians), where “left” in both cases refers to the south-westwards direction (direction, not region) on the Political Compass. AFAIK “liberal” in the US also generally refers to that direction, whereas ISTM that in Europe it often refers to the eastward direction.
Yes, in a relative sense I think left and right mean the same things.
Liberal in Europe refers to southwards on the compass. UK liberals wanted the UK to get rid of nuclear weapons because they considered them too expensive.
In Europe we also tend to speak about neoliberalism. That basically means the Washington consensus policies and all the policies for which corporate money pays. That means things like free trade agreements like NAFTA, putting children into school a year earlier so that they are available to join the workforce sooner, taking political power away from states and cities, PPP, reducing taxes and the social safety net.
Yes, I guess that one was the meaning I was familiar with. (The Italian Liberal Party is in a centre-right coalition.)
That depends on the issue in question.
At the most basic level, the definitions are that the right wing wants to keep things as they are and the left wing wants to change them. There is one way to do the first, and innumerable to do the second. This probably accounts for a large part of the effect you observe.
(There are, of course, many exceptions to the given definition; for example, conservatives wanting to eliminate government programs that are currently part of the status quo. But in this case, they are likely to frame this as a return to a previous state when those programs didn't exist, which is still a well-defined Schelling point. Right-wingers that do not fit this categorization, such as extreme libertarians calling for a minimal state that has never existed, are known to squabble among themselves as much as left-wingers.)
This is not actually accurate. On virtually any issue you can think of, the right-wing consensus supports changes in government policy. This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office.
"This is true to an extent such that some have argued that Republicans oppose everything about the liberal executive branch and civil service, simply because Obama is in office." The arguments could be rhetorical, hence not demonstrative of the extent of the truth of such proposition. Weak evidence without discussing how those arguments are put forth.
Are you claiming that Republicans are only claiming to oppose Obama, and secretly support him on many issues despite their habit of verbal attacks, filibustering policies they claim to support as a means of threatening Obama on unrelated issues, and swearing to avoid compromise? I would need very strong evidence to believe this.
I don't know how you get that from what I said. I would claim the following three things, at least, that are relevant:
Republicans are not an especially united group; some will filibuster the same policies that others support, like Rand Paul vs John McCain on the NSA programs.
Republicans, or pluralities of them, do not oppose all of the President's policies, such as much of the foreign policy and the bank bailouts.
The opposition to the President's policies drives opposition to him being in office, and not vice versa.
Also, Republican and right wing are not synonyms.
Looking back, I misread your first post- I thought you were claiming that the Republicans' arguments were rhetorical. My response would've been, a) your response didn't really address my argument, since the section you disagreed with and b) you have no reason to assume bad faith.
Well, yes, I wasn't claiming that every conservative holds the exact same opinion on everything; this is not true in politics in general, and is more-or-less assumed.
The bank bailouts were conducted under President Bush, not Obama, and in any case poll poorly with all Americans, including Republicans. Americans as a whole oppose Obama's foreign policy, which has a 16% approval rating among Republicans.
This is disproven by the fact that strong pluralities of Republicans supported almost identical policies under a different president.
In general, people base their identities around political parties or organizations like the Tea Party, not general political affiliation. Therefore, the relevant groups are political parties, not 'left-wing' vs 'right-wing'. Party membership is also a lot easier to measure. Therefore, people in general talk about the parties, rather than specific points on the left-right axis. (e.g. note that the above poll broke data down by Republicans vs. Democrats, not left-wing vs. right-wing)
The poll in question fails to deal with the question of whether they think it is too interventionist, not interventionist enough, or something else.
"This is disproven by the fact that strong pluralities of Republicans supported almost identical policies under a different president."
Well, look, I think you are casting people as acting in bad faith but it is a lot more complicated than that, for example, different nuances in how the policies are crafted, promoted, or enforced; learning from what are viewed as mistakes; or different sentiments among the population at large. It's hard to say because you haven't given any examples.
I'm also not sure if you mean congressional Republicans or individual voters or activists or whathaveyou.
But I'm not really interested in defending Republicans any further than this here.
At least in the US since the 60's, another way to divide conservatives has been in the party's three big issues: economic classical liberalism, social conservatism, and foreign-policy neo-conservatism. The moderate, short-term goals of these groups are sometimes in alignment, but their desired end-states look very different:
Neo-conservatives want a big military and an aggressive foreign policy, whereas classical liberals hate war and want to shrink the military, along with the rest of the government; and religious conservatives (generally - the prevalence of the other groups has led to abnormalities in the most famous preachers) hate war and love peace.
Religious conservatives are generally fine with the welfare state and regulations, and support restrictive social laws; whereas classical liberals hate all of the above.
Classical liberals want to shrink (or drown) the government, which both of the other groups oppose for various reasons: some to most religious conservatives like environmentalism and the idea of a safety net, and neoconservatives love the military.
There's also a distinction between traditional politicians who support negotiation, moderation, and compromise, and the Tea Party-backed groups who don't.
Has anyone else's inbox icon been behaving erratically (i.e., turning red even when there were no new messages or comment replies)?
Not mine.
You might be confused because pressing the "back" button to a time when the message was unread will make the symbol turn red.
I've also had this effect by opening a bunch of tabs, with my inbox being the last one.
Kate Stone, TED talk, paper with electronics
This seems like an interesting half truth since you can't change the environment without acting on objects. However, it's possible that the environment is a richer tool of influence than acting directly, and also possible that people are less apt to resent the environment for not doing what they want, therefore less likely to try to force it.
In the past, people like Eliezer Yudkowsky and, I think, Luke Muehlhauser have argued that MIRI has a medium probability of success. What is this probability estimate based on and how is success defined? I've read standard MIRI literature (like "Evidence and Import" and "Five Theses"), but I may have missed something.
Do you have a permalink to any of those instances? It would be helpful to know what they defined medium as.
see 1, 2, 3, 4, and 5.
I've noticed a few times how surprisingly easy it is to be in the upper echelon of some narrow area with a relatively small amount of expenditure (for an upper middle class American professional). This is easy to see in various entertainment hobbies- an American professional adult who puts, say, 10% of his salary into Legos will have a massive collection by the standards of most people who own Legos. Similarly, putting 10% of a professional's salary into buying gadgets means that you would be buying a new one or two every month.
I recently came across an article on political donations and saw the same effect- to be in the top .01% of American political donors, it only takes about $11k an election cycle (more in presidential years, less in legislative only years). Again, at 10% of income, that only takes an income of ~$55k a year (since the cycles occur every two years), which is comparable to the median American salary (and lower than the starting salaries for most of my friends who graduated with STEM bachelor's degrees).
It's not clear to me what percentage of people do this. It's the sort of thing that you could only do for a few narrow niches, since buying a ton of Legos impedes your ability to buy a bunch of gadgets, and it seems like most people go for broad niches instead of narrow niches. If you spend 10% of your income on clothes, say, then if most people spend 10% of their income on clothes you need to be in the top 1% of income-earners to be in the top 1% of clothes-buyers.
I know a handful of people in the LW sphere give a startlingly high percentage of their income to MIRI and are near the top of MIRI supporters. They probably also end up in the top percentile of charitable givers, but I don't have numbers on hand for that.
I'm curious if this is a worthwhile pattern to emulate. I currently do this for art collection in a narrow subfield, and noticed the benefits of being at the top percentage of expenditure mostly by accident, but don't have a good sense of how those benefits compare to marginal value comparisons between different potential hobbies. (Actually, now that I think about this, this might just be a special case of the general "specialization pays off" heuristic, where it may be better to have one extreme hobby than dabble in twenty things, but this may not be obvious when moving from twenty hobbies to nineteen hobbies.)
Some random points that came to my mind. The Pareto principle: 80% of the effect comes from 20% of the expenditure. So if we take the figure 10,000h to mastery, 2,000h will already lead to ridiculous effects, compared to the average Joe. The tighter the niche you choose is, the less competition there will be, so sheer probability dictates that you are more likely to be in a higher percentile of the distribution.
Overall, it seems to be better to be extremely invested in one niche and take a low interest in a couple of others, for social purposes at least, than to dabble moderately in a lot of them. What are the 'benefits' you allude to?
Finally, people spending a little bit on a lot of hobbies may be a symptom of an S-shaped response curve to money spent. The first few dollars increase pleasure a lot. Then you are just throwing money at it without obvious return, so you forgo the opportunity cost and get your high elsewhere. But should you for any reason get over this hypothetical plateau, you reach again an interval of high return, maybe even higher than in the beginning, and spend your money there.
Mostly access to exceptional people / opportunities, and admiration / social status. For example, become a major donor to a wildlife rescue center, and you get invited to play with the tigers. I would be surprised if major MIRI donors that live in the Bay area don't get invited to dinner parties / similar social events with MIRI people.
For the status question, I think it's better to be high status in a narrow niche than medium status in many niches. It's not clear to me how the costs compare, though.
Activity in many niches could credibly signal high status in some circles by making available many insights with short inferential distance to the general public (outside any of your niches), allowing one to seem very experienced/intelligent.
Moreover, the benefits to being medium status in several hobby groups and the associated large number of otherwise unrelated social connections may be greater than readily apparent. https://en.wikipedia.org/wiki/Social_network#Structural_holes
Agreed. It seems like there are several general-purpose hobby groups that seem to be particularly adept at serving this role, of which churches are the most obvious example.
Does anyone else have problems with the appearance of LessWrong? My account is somehow at the bottom of the site and the text of some posts overflows the white background. I noticed the problem about 2 days ago. I didn't change my browser (Safari) or anything else. I think.
Ugh. I am generally in the unsympathetic-to-PUA thinking camp, so I offer the following not to bring up a controversial subject again, but because I think publicly acknowledging when one encounters inconvenient evidence for one's priors is a healthy habit to be in...
Recently I added the following (truthful) text to my OkCupid! profile:
Having noted that I am a)unavailable and b)getting lots of competing offers, a high status combination, the result is... in three days, the number of women rating my profile highly has gone from 61 to 113.
+1 for acknowledging the inconvenient (without regard to subject matter).
+1 for a (+1 for acknowledging the inconvenient) on a subject you dislike discussion of.
OTOH I wouldn't at all be shocked to find out that profiles rated highly and profiles most often responded to are significantly different sets. Signalling preferences vs revealed preference yada yada.
“People will be more likely to (say they) like you once you're in a relationship with someone else” isn't something only people in the sympathetic-to-PUA thinking camp usually say.
Funny, I read your post and my initial reaction was that this evidence cuts against PUA. (Now I'm not sure whether it supports PUA or not, but I lean towards support).
PUA would predict that this phrase
is unattractive.
I dunno, in the context it sounds clearly tongue-in-cheek -- though you usually can't countersignal to people who don't know you (see also).
The irony is that the phrase was sort of serious, but in the context of a profile much of which is a lengthy exercise in countersignalling to people who don't know me, I can probably count on most people making the same assumption you did.
More specifically: “I devote myself to worshiping the ground she walks on” is the kind of sentence you mainly say for its connotations, not its denotations. In isolation, the connotation would be ‘she's so much more awesome than me’, which is low status, but in context it's ‘she's so much more awesome than you’, which is high status.
Good point.
Note also that the same action may be interpreted as a sexual advance if the recipient is available (or at least there's no common knowledge to the contrary) and as a sincere compliment for its own sake otherwise; therefore, if someone is willing to do the former but not the latter for whatever reason (e.g. irrational fear of creep- or slut-shaming due to ethanol deficiency)...
What is the function of the karma awards page?
There's been some discussion about incentivizing people to do useful things for the community by putting up karma bounties, thus removing some of the uncertainty inherent in upvotes. The most comprehensive thread I could find is here; two years old, but LW development grinds slow.
That's my best guess, anyway.
Ok, thanks! Seems like an interesting plan, I hope it can get implemented.
As I understand applying Bayes to science, the aim is to direct research into areas that make sense. However, sometimes valuable discoveries are made by accident.
Is there any way to tell whether your research is over-focused? To improve the odds of noticing valuable anomalies?
Knowing a diverse network of people working on valuable projects seems like it could help. I can only think of one example; are there more?
After a short discussion on IRC regarding basilisks, I declared that if anyone has any basilisks they consider dangerous or potentially real in any way, they should please private message me about them. I am extending this invitation here as well. Furthermore, I will be completely at fault for any harm caused to me by learning about them. Please don't let my potential harm discourage you.
Some basilisks are potentially contagious.
Please give me examples.
I think the most obvious semi-basilisk example is certain strains of religion. Insofar as they make you believe you might go to hell, and that all your friends are going to hell, these religions make you feel bad and also make you want to spread them to everyone you know. Feeling bad is not the same as death or mental breakdown or other theoretical actual-basilisk consequences, but in essence these are meme complexes that contain elements demanding you spread the whole complex. If someone is in possession of such a concept but has defeated it, or is in some way immune, it may still be correct for them not to tell you, for fear that you are not immune and will spread it to others once it has worked its will on you.
Ever seen one of those "If you don't forward this email to five friends, your (relation) will DIE!!1!!!one!" emails?
Can you tell us what you're trying to achieve with this?
Interested in the responses, since I actually think I can learn some useful things if anyone shares something good. Also, I assign significantly less than a 1% chance that anyone will actually tell me anything 'dangerous'; for example, I think Roko's is about as dangerous as pie. I don't plan to release memetic hazards on unsuspecting citizens, if that's your fear.
It's more that soliciting information hazards seems like really odd behaviour. Even if no-one sends you an Interactive Suicide Rock, you might still receive some horrible or annoying stuff you don't want to be carrying around in your head.
I'm really interested to find out what, if anything, people send you, but I'm not sure I want to know exactly what they are.
Other people expressed a similar view and since I don't mind, I can at least help with satisfying people's curiosity in a way that would cause minimal harm. However, I have found nothing worth talking about after some fairly extensive google searches so I am currently trying to think if there is anyone knowledgeable that I can e-mail (already have a few people on the list) or if there are any good search terms that I haven't tried yet.
It's probably worth clarifying what you consider a basilisk, as that might reduce any unpleasant-yet-irrelevant submissions.
The Motif of Harmful Sensation is a common fictional trope, but of real-life examples there are pretty much 0. (Excepting e.g. a subject with given mental susceptibilities such as depression or OCD.)
And even more obviously, epilepsy. Yet, I don't understand why you would except them.
'You see, X does not exist, since I choose to ignore all the cases in which X does exist; I hope you'll agree that this argument is watertight once you grant my premises.'
I think David has a point here.
The cases you two have mentioned of sensory hazards all affect people who have identifiable susceptibilities that those people usually know about in advance and that affect relatively small minorities.
Somebody might have a high confidence that they are non-depressed, non-OCD, non-epileptic, etc. Are there examples of sensory hazards that apply to people who do not have a recognized medical problem?
But this is a different question. You have quietly redefined the question "are there harmful sensations to people?" - to which the answer is overwhelmingly, resoundingly, yes, there absolutely are - to 'are there harmful sensations to a newly redefined subset of people which we will immediately update if anyone produces further examples, so actually what I meant all along was "are there harmful sensations which we don't yet know about?"'
Or to put it more simply: 'Can you provide an example of a harmful sensation we don't yet know about?' Well... If I could produce a harmful sensation, you and David would simply say something like 'ah, well, I guess we now have a recognized medical problem, because look, we [commit suicide / collapse in convulsions / cease functioning / become obsessed with useless actions] if you expose us to X! That's a pretty serious psychiatric problem! But, are there examples of sensory hazards that apply to people who do not have a recognized medical problem?'
To which I can only shake my head no.
I hear you and I'm not trying to play the definition game or wriggle out of this. The way I conceptualized the question -- which I think the original poster had in mind and what I think is relevant to hazard risk assessment -- is more like one of these:
A) "What fraction of the public is seriously vulnerable to sensory hazards",
B) "Given that one knows one's medical history and demographics, what is the probability that there are sensory hazards one is vulnerable to but not already well aware of."
My hunch is that the answers are "less than 20%" and "close to zero." The example of epilepsy didn't shift my beliefs about either; epilepsy is rare and is rarely adult-onset for the non-elderly.
So you're asking, what new medical sensory hazards may be developed in the future.
Well, the example of photosensitive epilepsy, where no trigger is mentioned which could have existed before the 19th century or so, suggests you should be very wary of thinking the risk of new sensory hazards is close to zero. Flash grenades are another visual example of a historically novel sensation which badly damages ordinary people. Infrasound is another plausible candidate for future deliberate or accidental weaponization. And so on...
There, see, you're doing it again! Why would you exclude the elderly? Keep in mind that you yourself should aspire to become elderly one day (after all, consider the most likely alternative...).
Gwern, this thread is about the Basilisk. Conflating that with epilepsy is knowing equivocation. Don't be dense, thanks.
No denser than thou, David:
Who was it who brought up the Motif of Harmful Sensation, which is not limited to Roko's basilisk? Who was it who brought up 'given mental susceptibilities' in order to define away the examples of depression or OCD? Thou, David, thou.
I think that most of the general examples have been mentioned: religion among others, which has the rather mildly harmful "fear of hell" and its own propagation.
I think that any majorly harmful hazard which the general population was susceptible to would cause them all to shortly win Darwin Awards and remove themselves from the gene pool.
As such we only have minority groups which are vulnerable to specific stimuli.
How harmful does it have to be? Noise can be hard on people, and sufficiently loud noise causes permanent damage.
There's something interesting in here about what counts as a sensation for purposes of this discussion-- probably "a sensation which most people wouldn't expect to be harmful".
You are using basilisk in a manner that I don't understand. I assume you're not asking if anyone has a lizard that will literally turn you into stone, so what does basilisk mean in this context?
Memetic/Information Hazards - the term comes from here. Basically anything that makes you significantly worse off after you know it than before. Giving someone wrong instructions for how to build a bomb wouldn't count for example as I can just never build a bomb or just use other instructions etc.
Warning: Could be dangerous to look into it
They really should be called Medusas -- since it's you looking at them, not them looking at you.
Yup, Medusa is what some blogposts use to describe them.
I think they both need to make eye contact.
Do you of anyone claiming to be in possession of such a fact?
I know some basilisks, yes, although there is nothing I regard as actually dangerous. However, sharing things like this publicly is considered bad etiquette on LessWrong.
I tried to rot13 my previous discussion and was only mocked. The attitude towards basilisks seems to be one of glib reassurance.
Not just glib reassurance. There is also the outright mockery of those who advocate taking (the known pseudo-examples of) them seriously.
I can't imagine that anyone is advocating taking them seriously.
Can you send me yours? Please PM me here or on IRC. I already know the most famous one here.
If it's not dangerous, how does it constitute a hazard?
I know one.
Also I think you're missing the word "know"
Eliezer is in possession of a fact that he considers to be highly dangerous to anyone who knows it, and who does not have sufficient understanding of exotic decision theory to avoid being vulnerable to it. This is the original basilisk that drew LessWrong's attention to the idea. Whether he is right is disputed (but the disputation cannot take place here).
In HPMOR, he has fictionally presented another basilisk: Harry cannot tell some other wizards, including Dumbledore, about the true Patronus spell, because that knowledge would render them incapable of casting the Patronus at all, leaving them vulnerable to having their minds eaten by Dementors.
Please let us know if you receive anything interesting.
Could you post how many you receive and your realistic estimation on whether any are actually dangerous? Without specifics of course. (If you take these things seriously, I suppose you should have a dead-man's switch.)
Though for the record, I think the LW policy of not allowing discussion of basilisks is ridiculous; a big banner at the top of a post saying, for example, 'Warning: Information Hazard for those who have suffered anxiety at the thought of AI acting acausally' should be fine. I strongly disagree with the outright banning of discussion of specific basilisks/medusas, especially since LW is one of the only places where one could have a meaningful conversation about them.
You magnificent, magnanimous son of a bitch.
Well that escalated quickly.
I think a level of gaiety and excitement is appropriate given the subject.
We almost need a list for this. This makes half a dozen people I've seen making the same declaration.
Without endorsing the reasoning at all, I note that those with information-suppressing inclinations put only a little weight on harm caused to you, and even less on your preferences. If they believe that the basilisk is worthy of the name, they will expect that giving it to you will result in you spreading it to others, thereby causing all sorts of unspeakable misery and so forth. It'd be like infecting a bat with Ebola.
If I believe that automation causing mass unemployment is around the corner (10-20 years), what do I do or invest in now to prepare for it?
Acquire as much capital as you can, presumably. If labor's share of economic growth is falling, then capital's share must be rising. The topic has come up before, but I'm not sure anyone had more concrete advice than index funds; it's tempting to try to invest in software or specific tech companies, except then you're basically being a VC, and it's very hard to pick the winners.
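(Purely illustrative arithmetic in Python, with an assumed salary, savings rate, and return; not investment advice, just a sketch of why the "accumulate capital" framing matters before wages dry up.)

```python
# Illustrative only: years of saving until returns on capital alone cover spending.
# Every input is an assumption, not a recommendation.
wage     = 60_000   # annual after-tax labor income (assumed)
spending = 40_000   # annual spending; the rest is saved (assumed)
r        = 0.05     # assumed real annual return on a broad index fund

capital, years = 0.0, 0
while capital * r < spending:
    capital = capital * (1 + r) + (wage - spending)
    years += 1

print(years, round(capital))   # ~23 years, ~$830k under these assumptions
```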
Or land.
You can train yourself in one of the industries you expect to thrive. This could be the high-tech route of being the one programming and developing the machines, or it could be a job that never goes away, like plumbing, carpentry, or welding. All of these can earn six figures; it's a matter of what type of work you like doing.
Starting to write introductions to LW for friends; here's my fast-track.
Please comment with thoughts here (or there).
I got a 'page not found' error when I clicked on that link because of the period at the end.
Fixed.
This is a call for Less Wrong users who do not wish to personally identify as rationalists, or do not perfectly relate to the community at a cultural level:
What do you use Less Wrong for? And, what are some reasons for why you do not identify as a rationalist? Are there some functions that you wish the community would provide which it otherwise does not?
I think of "rationalist" as "one who applies rationality to real life". By that definition, I've identified as rationalist since age 2 at the latest (I said identified, not "been any good at it").
LW culture is hard to grasp. Politics is a minefield, there's apparently a terrible feminism problem, and there seem to be two not-so-distinct factions: people who want more instrumental rationality, and people who get annoyed by this and only want to discuss philosophy. You have to read lots of things not optimized for keeping readers from falling asleep (I'm not talking about the Sequences; I actually stay awake through those) in order to have the necessary background to participate in many discussions, and I'm quite terrified of missteps (I make them quite often).
However, I know that what I'm reading will be thoroughly vetted for truthfulness most of the time, and in spite of the utter failure to demonstrate rationality superpowers, applying science and reasoning to reality for good results is encouraged and seemingly the main thrust of the whole site. It's obviously far from optimal, otherwise we'd have tons of success stories rather than something trying very hard not to be a techno-cult, but those aren't really enough of a detraction given the absence of a better alternative.
That, and solving CAPTCHAs is quite inconvenient, so I'm kinda selective about where I register; I registered here instead of Reddit, and that means this is the only place I'm going to be able to talk about HPMoR. :P
(Also, I like emoticons an awful lot considering that I can't see them. I haven't encountered any emoticons on LW. In any other comment, I would have been much more wary of using one. :) )
Being 'part of a community' and having a term that defines one's identity are two different conditions. In the former, one's participation in a community is merely another aspect to one's personality or character, which can be all-expansive.
In the latter, one is tied to others who share the identifier. Even if 'rationalist' just means one who subscribes to the importance of instrumental and epistemic rationality in daily life, accepting and embracing that or any identifier can have negatives. The former condition, representing a choice rather than a fact of identity, is absent those negatives while retaining the positive aspects of communal connection.
Exempli gratia:
One is trying to appeal to some high status figure. This high-status figure encounters a 'rationalist', and perceives them as low-status. If One has identified themselves as also being a rationalist, then the high-status person's perception of the 'rationalist' may taint their perception of One.
If One has instead identified themselves as being part of a certain community, to which this 'rationalist' may also claim affiliation, One can claim that while they find the community worthwhile for many pursuits, not all who flock to the community are representative of its worth.
If someone thinks this a losing strategy, please speak up, as it's generally applicable. Notable exceptions to its applicability include claiming oneself as identifiable by their association with a friend group or extended family, as in, "I am James Potter, Marauder," rather than, "I am James Potter, member of the Marauders"; and, "I am a Potter," rather than the simple, "My name is James Potter."
A few months ago, I decided to try a "gather impossible problems, hold off on proposing solutions until we've thoroughly understood them, then solve them" 'campaign'. The problems I came up with focused on blindness, so I started the discussion here rather than LW. I was surprised when I looked it up today and found that it only lasted for four days--I had been sure it had managed to drag on a little longer than that.
I recall someone tried something similar on LW, though considerably less focused and more willing to take things they couldn't be expected to solve without many more resources. I also recall that little if anything came of it.
Something tells me we're doing it wrong.
Does anyone know why GiveWell is registered with the IRS under a different name (Clear Fund)? I am including a link to their recommendation for the AMF on a wedding registry and have already gotten a question about why their name differs.
I had noticed that when I got a receipt for a donation I made to them, but I assumed “Clear Fund” was their former name and they hadn't bothered to legally change it or something and didn't worry too much about that.
(link) Effective Altruism: Professionals donate expertise. Toyota sends some industrial engineers to improve NYC's Food Bank charity.
HT Hacker News
I typed up the below message before discovering that the term I was looking for is "data dredging" or "hypothesis fishing." Still decided to post below so others know.
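(For anyone else who hadn't met the term: a minimal, purely illustrative simulation in Python of why "data dredging" manufactures findings; test enough hypotheses on noise and some will look significant by chance.)

```python
# Data dredging in miniature: 100 "hypotheses" tested on pure noise.
# With a p < 0.05 threshold we expect ~5 spurious "discoveries".
import random

random.seed(0)

def fake_p_value():
    # Under the null hypothesis, p-values are uniformly distributed on [0, 1].
    return random.random()

p_values = [fake_p_value() for _ in range(100)]
discoveries = sum(p < 0.05 for p in p_values)
print(discoveries, "out of 100 null hypotheses came out 'significant'")
```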
Is there a word processing program for Windows that's similar to TextEdit on a Mac? I always preferred TextEdit over programs like Microsoft Word or Pages because it loads quickly and you can easily fit it in a small window for writing quick notes. In other words, it's "small", I guess you would say.
Right now I'm using CopyWriter, which is pretty good, but it has two problems: 1) no spell check and 2) no autosave. Mostly I just use Evernote and Google Docs, though.
Any suggestions?
WordPad is the built-in Windows light word processor. Other alternatives that come to mind are SciTE and Notepad++
My priors tell me that the probability of netting $100k a year from statistical arbitrage opportunities in online poker is less than 2% for someone with an IQ of 100, and that it is likely to diminish quickly as the years go by.
A few reasons include: bots are confirmed to be winning players in full-ring and NL games; online poker is mature and has better players; rake; and the ratio of new "fish" to grinders is getting smaller.
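(Some rough rake arithmetic in Python; every stake, win rate, and rakeback figure below is assumed purely for illustration, not taken from any site.)

```python
# Rough, assumption-laden arithmetic on why rake matters for a $100k/year target.
big_blind      = 2.00   # $1/$2 NLHE (assumed)
winrate_bb_100 = 5.0    # big blinds won per 100 hands before rake (assumed)
rake_bb_100    = 7.0    # rake paid per 100 hands (assumed; varies by site and stakes)
rakeback       = 0.30   # fraction of rake returned via rewards (assumed)

net_bb_100      = winrate_bb_100 - rake_bb_100 * (1 - rakeback)
dollars_per_100 = net_bb_100 * big_blind
hands_needed    = (100_000 / dollars_per_100) * 100 if dollars_per_100 > 0 else float("inf")

print(round(dollars_per_100, 2), "dollars per 100 hands")
print(round(hands_needed), "hands per year to net $100k under these assumptions")
```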
Does anyone have thoughts to the contrary? Perhaps more sophisticated software to catch botters? Or new regulations legalizing online poker to increase new fish?
Waffled between putting this here and putting this in the Stupid Questions thread:
Why is the default assumption that a superintelligence of any type will populate its light cone?
I can see why any sort of tiling AI would do this - paperclip maximizers and the like. And for obvious reasons there's an inherent problem with predicting the actions of an alien FAI (friendly relative to alien values).
But it certainly seems to me that a human CEV-equivalent wouldn't necessarily support lightspeed expansion. Certainly, humanity has expanded whenever it has had the opportunity, but not at its maximum speed, nor did entire population centers move. The top few percent of adventurous or less-affluent people leave, and that is all.
On top of this, I ... well, I can't say "can't imagine," but I find it unlikely that a CEV would support mass cloning or generation of humans (though if it supports mass uploading, then accelerated living might produce a population boom sufficient to support luminal expansion). In which case, an FAI that did occupy as much space as possible, as rapidly as possible, would find itself spending resources on planets that wouldn't be used for millennia, when it could instead focus on improving local life.
There is, of course, the intelligence-explosion argument, but I'd think even intelligence would hit diminishing marginal returns eventually.
So to sum up, it seems not unreasonable that certain plausible categories of superintelligences would willingly not expand at near-luminal velocities - in which case there's quite a bit more leeway in the Fermi Paradox.
It's because we want to secure as many resources as possible, before the aliens get to them.
I expect an FAI to expand rapidly, but merely to secure resources and save them for humans to use much later.
Read up on the Dominion Lands Act and the Homestead Act for a historic human precedent.
Right, but I'm not sure that's the right precedent to use. Space is big: it'd be more equivalent to, oh, dumping the Lost Roman Legion in a prehistoric Asia and expecting them to divvy up the continent as fast as they could march.
Hm. Point.
So maybe the Solar System has been secured by an alien-FAI and we're being saved for the aliens to use much later..?
What's the most credible way to set up an information bounty?