All of D_Malik's Comments + Replies

I don't understand that screenshot at all (maybe the resolution is too low?), but from your description it sounds like it's in a similar vein to Zendo and Eleusis and Penultima, which you could get ideas from. Yours seems different, though, and I'd be curious to know more details. I tried implementing some single-player variants of Zendo five years ago, though they're pretty terrible (boring, no graphics, probably not useful for training rationality).

I do think there's some potential for rationality improvements from games, though insofar as they're... (read more)

I've heard of Zendo and I've been looking for someone to play Eleusis with for a while heh (maybe I'll be able to get the local EA group to do it one of these days).

though insofar as they're optimized for training rationality, they won't be as fun as games optimized purely for being fun

Fun isn't a generic substance. Fun is subjective. A person's sense of fun is informed by something. If you've internalised the rationalist ethos, if your gut trusts your mind, if you know deeply that rationality is useful and that trai... (read more)

D_Malik140

Don't let them tell us stories. Don't let them say of the man sentenced to death "He is going to pay his debt to society," but: "They are going to cut off his head." It looks like nothing. But it does make a little difference.

-- Camus

2entirelyuseless
Why is this rational? A great deal of the deterrent value of a criminal justice system consists in telling stories. If you simply state the facts, they might be much less deterring. Thus "they are going to lock him away and feed and house him for free for the next ten years," might look more like an additional benefit than a deterrent.

"From the fireside house, President Reagan suddenly said to me, 'What would you do if the United States were suddenly attacked by someone from outer space? Would you help us?'

"I said, 'No doubt about it.'"

"He said, 'We too.'"

D_Malik170

You assume the utility of getting neither is 0 both before and after the transformation. You need to transform that utility too, e.g. from 0 to 4.
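A quick numeric sketch of this point (the gambles and the affine transform u → 2u + 4 are made-up illustrations): preferences between gambles survive a positive affine transformation only if it is applied to every outcome, including the 0-utility "get neither" outcome.

```python
def eu(p, u_win, u_lose=0):
    """Expected utility of a gamble: u_win with probability p, else u_lose."""
    return p * u_win + (1 - p) * u_lose

a, b = eu(0.9, 10), eu(0.3, 31)      # 9.0 vs 9.3, so gamble B is preferred
t = lambda u: 2 * u + 4              # a positive affine transform

# Transform every outcome, including the 0 baseline: preference preserved.
a_full, b_full = eu(0.9, t(10), t(0)), eu(0.3, t(31), t(0))
assert (b_full > a_full) == (b > a)

# Leave the baseline untransformed at 0 and the preference flips.
a_bad, b_bad = eu(0.9, t(10), 0), eu(0.3, t(31), 0)
assert (b_bad > a_bad) != (b > a)
```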

D_Malik160

Lost 35 lbs in the past 4 months; currently at 11.3% body fat, almost at my goal of ~9% body fat, at which point I'll start bulking again. Average body fat is ~26% for men in my age group. My FFMI (= fat-free mass / height^2) is still somewhere above the 95th percentile for men in my age group.
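For reference, FFMI is computed like BMI but using fat-free mass; a minimal sketch (the weight and height below are illustrative, not the author's stats):

```python
def ffmi(weight_kg, height_m, body_fat_frac):
    """Fat-free mass index: lean mass (kg) divided by height (m) squared."""
    lean_mass = weight_kg * (1 - body_fat_frac)
    return lean_mass / height_m ** 2

# e.g. 75 kg at 11.3% body fat and 1.80 m tall:
# lean mass = 75 * 0.887 ≈ 66.5 kg, so FFMI ≈ 20.5
print(round(ffmi(75, 1.80, 0.113), 1))
```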

Got an indoor treadmill, and have since walked 1100 km in the past 2 months: 18 km/day, 4.5 hours/day on average. Would definitely recommend this.

Scored 2 points short of perfect on the GRE. Got a 3.8 average for college courses over the past year.

You usually avoid unlimited liability by placing a stop order to cover your position as soon as the price goes sufficiently high. Or for instance you can bound your losses by including a term in the contract which says that instead of giving back the stock you borrowed and sold, you can pay a certain price.

6Dagon
Note that for volatile assets (the very ones where you feel uncomfortable about unbounded risk), stop orders are not guaranteed to help. Remember, prices are not continuous - there is a discrete sequence of bids. Price can go from below your stop to MASSIVELY above it before your stop order can be executed. Most often this happens on news when a market is closed, but it can occur intraday as well.
0philh
The stop order feels hackish, to me. I was thinking along the lines of short squeezes even before I learned their name. But also, if I'm expecting a bubble to burst, I won't necessarily be surprised if the price rises massively before it does. I'd be looking for limited exposure without having to chicken out. The contract term sounds like the sort of thing I was looking for.

Introspecting, the way I remember this is that 1 is a simple number, and type 1 errors are errors that you make by being stupid in a simple way, namely by being gullible. 2 is a more sophisticated number, and type 2 errors are ones you make by being too skeptical, which is a more sophisticated type of stupidity. I do most simple memorization (e.g. memorizing differentiation rules) with this strategy of "rationalizing why the answer makes sense". I think your method is probably better for most people, though.

D_Malik10

Whether they believe your confidence vs whether they believe their own evidence about your value. If a person is confident, either he's low-value and lying about it, or he's high-value and honest. The modus ponens/tollens description is unclear, I think I only used it because it's a LW shibboleth. (Come to think of it, "shibboleth" is another LW shibboleth.)

D_Malik00

Sunlight increases risk of melanoma but decreases risk of other, more deadly cancers. If you're going to get, say, 3 times your usual daily sunlight exposure, then sunscreen is probably a good idea, but otherwise it's healthier to go without. I'd guess a good heuristic is to get as much sunlight as your ancestors from 1000 years ago would have gotten.

3Tem42
I've got your citations... right here. This whole article is worth reading, and has a number of counter-intuitive findings.
0[anonymous]
I've seen so many claims about the benefit or lack thereof of sunscreen. Do you have a citation?
0Fluttershy
This is something I'd eventually like to look into. Do you know which cancers sunlight protects against? Might sun exposure after one has applied sunscreen provide some protection against these cancers?
D_Malik10

You don't need to reconstruct all the neurons and synapses, though. If something behaves almost exactly as I would behave, I'd say that thing is me. 20 years of screenshots 8 hours a day is around 14% of a waking lifetime, which seems like enough to pick out from mindspace a mind that behaves very similarly to mine.

5Lumifer
A mind that behaves very similarly to yours while typing at a computer screen. Put that mind in a pub and I doubt it will perform well.
0estimator
Well, I agree, that would help FAI build people similar to you. But why do you want FAI to do that? And what copying precision is OK for you? Would just making a clone based on your DNA suffice? Maybe you don't even have to bother with all these screenshots and photos.
D_Malik150

Confidence is the alief that you have high value, and it induces confidence-signalling behaviors. People judge your value partly by actually looking at your value, but they also take the shortcut of just directly looking at whether you display those signals. So you can artificially inflate your status by having incorrect confidence, i.e. alieving that you're more valuable than you really are. This is called hubris, and when people realize you're doing it they reduce their valuation of you to compensate. (Or sometimes they flip that modus tollens into a mod... (read more)

0[anonymous]
Care to explain what they flip exactly?
0Good_Burning_Plastic
Are you implying there actually is no such mechanism? What about this?
D_Malik70

Allow the AI to reconstruct your mind and memories more accurately and with less computational cost, hopefully; the brain scan and DNA alone probably won't give much fidelity. They're also fun from a self-tracking data analysis perspective, and they let you remember your past better.

D_Malik110
  • Getting an air filter can gain you ~0.6 years of lifespan, plus some healthspan. Here's /u/Louie's post where I saw this.
  • Lose weight. Try Shangri-La, and if that doesn't work consider the EC stack or a ketogenic diet.
  • Seconding James_Miller's recommendation of vegetables, especially cruciferous vegetables (broccoli, bok choy, cauliflower, collard greens, arugula...) Just eat entire plates of the stuff often.
  • Write a script that takes a screenshot and webcam picture every 30 seconds. Save the files to an external hard drive. After a few decades, bury the
... (read more)
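A minimal sketch of the screenshot/webcam logger described in the list above (filenames and directory layout are assumptions; the actual capture call, e.g. via mss or OpenCV, is passed in so the scheduling logic stays testable without a display):

```python
import datetime
import pathlib
import time

def capture_paths(base_dir, now):
    """Timestamped output paths for one screenshot + webcam pair."""
    stamp = now.strftime("%Y%m%d-%H%M%S")
    base = pathlib.Path(base_dir)
    return base / f"screen-{stamp}.png", base / f"cam-{stamp}.jpg"

def run(base_dir, capture, interval=30):
    """Every `interval` seconds, save a screenshot and a webcam frame."""
    pathlib.Path(base_dir).mkdir(parents=True, exist_ok=True)
    while True:
        screen_path, cam_path = capture_paths(base_dir, datetime.datetime.now())
        capture(screen_path, cam_path)  # inject the real grab function here
        time.sleep(interval)
```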
0Ben Pace
That last one sounds like a plan, on top of signing up for Cryonics. Tah.
1estimator
I'm very skeptical of the third. A human brain contains ~10^10 neurons and ~10^14 synapses -- which would be hard to infer from ~10^5 photos/screenshots, esp. considering that they don't convey that much information about your brain structure. DNA and comprehensive brain scans are better, but I guess that getting brain scans with required precision isn't quite easy. Cryonics, at least, might work.
2[anonymous]
What do the webcam and screenshots help with?
5David_Bolin
I do the screenshot / webcam thing, and OCR the screenshots so that my entire computing history is searchable.
6Fluttershy
Interesting; I hadn't thought about air filters at all. Thanks for mentioning them! Five minutes of googling around leaves me highly skeptical that the air filter mentioned in Louie's post, which you linked to, is the one anyone interested in life extension would want to buy, though; the air filter mentioned there is a "HEPA grade" filter, not a "HEPA" filter. The filter in question only claims that it can catch particles larger than 3 microns, while real HEPA filters catch at least 99.97% of particles larger than 0.3 microns. As a quick check, this seems like it might matter, since I'd expect HEPA filters to catch all six of the air contaminants the EPA mentions here, while a filter which only caught particles larger than 3 microns would only stand a chance of catching one of those six. Please pardon the back-of-the-envelope nature of this comment.
D_Malik90

Which country should software engineers emigrate to?

I'm going to research everything, build a big spreadsheet, weight the various factors, etc. over the next while, so any advice that saves me time or improves the accuracy of my analysis is much appreciated. Are there any non-obvious considerations here?

There are some lists of best countries for software developers, and for expats in general. These consider things like software dev pay, cost of living, taxes, crime, happiness index, etc. Those generally recommend Western Europe, the US, Canada, Israel, Aus... (read more)

5Daniel_Burfoot
How ambitious are you? If you want to work very hard and potentially change the world, come to the US. For better or worse (probably worse), the US is basically the place where world-changing technology development happens. If you are more interested in work/life balance, I would recommend Europe (esp. Germany), Canada, Australia or maybe NZ. In particular everyone I've talked to who knows both Australia and the US agrees that while Americans make more money, Aussies have a better lifestyle. There are many, many things about the US that are just astonishingly broken. The immigration system is one of them. Also, there is lots of anti-immigrant turmoil going on in the US now.

Context: I am a software engineer living and working in the US. I am a US citizen and my wife is not, and we are working on getting her permanent residency.

It's definitely worth investigating the difficulty of immigrating to the various countries. Canada and Singapore both have reputations for being a lot easier for skilled workers to immigrate to than the US, but YMMV.

Note that if you do marry an American, be sure to start the process of getting your permanent residency while you're still in the country. My wife and I made the mistake of starting the p... (read more)

D_Malik50

Yes, the effect of diets on weight-loss is roughly mediated by their effect on caloric intake and expenditure. But this does not mean that "eat fewer calories and expend more" is good advice. If you doubt this, note that the effect of diets on weight-loss is also mediated by their effects on mass, but naively basing our advice on conservation of mass causes us to generate terrible advice like "pee a lot, don't drink any water, and stay away from heavy food like vegetables".

The causal graph to think about is "advice → behavior → cal... (read more)

1Brillyant
I think it has a net negative effect on the global dieting discussion that it contains these superfluous steps to the actual causes of weight loss. Having a rational discussion about satiation is one thing. It is a long way from the woo that has been involved in getting people to believe carbs are magically evil. I remember first digging into the Atkins diet. I thought, "No. This is dumb. It's just calorie restriction. Why are they pretending it's more than that?" But I shut my mouth for a while because I didn't understand the science and Atkins and other low carb variants seemed so popular. "Eat less and exercise more" is the best dieting advice (Or perhaps even better, "Create a reasonable calorie deficit over time".) It may be difficult to follow, but it's clear. It allows people to rationally attack the problem of "how" to accomplish weight loss. Everything else is just muddying the waters.
D_Malik00

Replace "if you don't know" with "if you aren't told". If you believe 80% of them are easy, then you're perfectly calibrated as to whether or not a question is easy, and the apparent under/overconfidence remains.

-2Lumifer
I am still confused. You don't measure calibration by asking "Which percentage of this set of questions is easy?". You measure it by offering the questions one by one and asking "Is this one easy? What about that one?". Calibration applies to individual questions, not to aggregates. If, for some reason, you believe that 80% of the questions in the set are easy but you have no idea which ones, you are not perfectly calibrated; in fact your calibration sucks, because you cannot distinguish easy from hard.
D_Malik170

About that survey... Suppose I ask you to guess the result of a biased coin which comes up heads 80% of the time. I ask you to guess 100 times, of which ~80 times the right answer is "heads" (these are the "easy" or "obvious" questions) and ~20 times the right answer is "tails" (these are the "hard" or "surprising" questions). Then the correct guess, if you aren't told whether a given question is "easy" or "hard", is to guess heads with 80% confidence, for every question. Then ... (read more)
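The coin example is easy to simulate — a guesser who always says "heads" at 80% confidence comes out ~80% correct in that confidence bucket, i.e. perfectly calibrated, despite being unable to tell the "easy" flips from the "hard" ones. A quick sketch:

```python
import random

random.seed(0)
n = 100_000
flips = ["H" if random.random() < 0.8 else "T" for _ in range(n)]

# Strategy: guess "H" on every flip, at 80% confidence.
hit_rate = sum(f == "H" for f in flips) / n
print(f"accuracy in the 80%-confidence bucket: {hit_rate:.3f}")
assert abs(hit_rate - 0.8) < 0.01  # ~80% right: perfectly calibrated
```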

2tim
I am probably misunderstanding something here, but doesn't this basically say "if you have no calibration whatsoever"? If there are distinct categories of questions (easy and hard) and you can't tell which questions belong to which category, then simply guessing according to your overall base rate will make your calibration look terrible - because it is.
5cousin_it
Wow, that's a great point. We can't measure anyone's "true" calibration by asking them a specific set of questions, because we're not drawing questions from the same distribution as nature! That's up there with the obvious-in-retrospect point that the placebo effect gets stronger or weaker depending on the size of the placebo group in the experiment. Good work :-)
D_Malik190

Every child has both a mother and a father, and there are about as many men as women, so the mean number of children is about the same for males as for females. But there are more childless men than childless women, because polygyny is more common than polyandry, ultimately because of Bateman's principle.
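A toy simulation of this (the paternity skew below is an arbitrary stand-in for polygyny): each child has exactly one mother and one father, so mean offspring counts are equal by construction, yet concentrating paternity leaves more men than women childless.

```python
import random

random.seed(1)
n = 10_000  # men, women, and children each

# Each child draws a mother uniformly, but a father skewed toward
# low indices (cubing a uniform draw concentrates paternity).
mothers = [random.randrange(n) for _ in range(n)]
fathers = [int(random.random() ** 3 * n) for _ in range(n)]

childless_women = n - len(set(mothers))
childless_men = n - len(set(fathers))

assert len(mothers) == len(fathers)     # same mean children per sex
assert childless_men > childless_women  # but more childless men
```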

D_Malik00

Depends on your feature extractor. If you have a feature that measures similarity to previously-seen films, then yes. Otherwise, no. If you only have features measuring what each film's about, and people like novel films, then you'll get conservative predictions, but that's not really the same as learning that novelty is good.

-3[anonymous]
Thanks. Now I'm trying to learn a bit about (exponential) moving averages. I know moving averages are used in machine learning, but I've also come across them in stock market investing, where they are regarded with derision. Can someone explain what their utility is, and how they can be useful when they aren't in trading?

If my financial knowledge is correct, moving averages only indicate profitable moves when there are linear dependencies between fundamental variables and the stock's price. This is true both empirically and is what most technical analysts assume. However, how do we know that, for a particular security, price behavior isn't better modeled as a random walk? Basically we don't, and for any given stock that's a pretty good generalization. However, asset classes as a whole, or particular indices, often go up over the long run. It frustrates me that technical traders' strategies just create self-fulfilling prophecies. They're like uninformed (in the market-information sense) speculators who act on the same tea leaves.

I don't understand the reason for using moving averages unless you have reason to believe, in advance, that they will be a good model of the physical behaviour of whatever you're trying to predict. But then you wouldn't be conducting predictive analytics. Wouldn't neural networks otherwise dominate them? I would imagine that they would stumble upon a moving-average strategy if that was evidently a good model for the phenomenon in question. And yet the risk metrics associated with the implementation of a simple neural network in Quantopian don't seem that attractive. But what can't neural networks do? They seem like the most perfect learning devices ever!
D_Malik00

Good point. I may be thinking about this wrong, but I think Deutsch self-consistent time travel would still vastly concentrate measure in universes where time travel isn't invented, because unless the measures are exactly correct then the universe is inconsistent. Whereas Novikov self-consistent time travel makes all universes with paradoxes inconsistent, Deutsch self-consistent time travel merely makes the vast majority of them inconsistent. It's a bit like quantum suicide: creating temporal paradoxes seems to work because it concentrates your measure in universes where it does work, but it also vastly reduces your total measure.

2Douglas_Knight
That's why it's not usually called "Deutsch self-consistency." It's not supposed to be a filter on legal universes, but a dynamic rule that each initial condition does lead to a consistent universe. The resolution of the grandfather paradox is a 50-50 superposition of the universe where you are born and leave and the universe where you appear, kill your grandfather, and are never born. You could say that it filters out the 80-20 superposition, but that's like saying that Newton's self-consistency principle filters out universes that don't obey his laws. (Well, maybe that's Lagrange's self-consistency principle...)
D_Malik50

Playing devil's advocate: Archaic spelling rules allow you to quickly gauge other people's intelligence, which is useful. It causes society to respect stupid people less, by providing objective evidence of their stupidity.

But I don't actually think the benefits outweigh the costs there, and the signal is confounded by things like being a native English-speaker.

6fubarobfusco
Spelling is more a gauge of how attentive you were in early schooling than of how intelligent you are. It's basically a form of conspicuous consumption of the scarce resources of childhood attention and teaching time. The cultural notion that bad spelling is an indicator of stupidity is self-reinforcing, though: it prevents English from undergoing spelling reforms like those German, Spanish, Russian, and many other languages have had, because any "reformed" spelling will necessarily look like ignorant spelling. Because English spelling is unusually difficult, it is a challenge. Because it is a challenge, people who have mastered it care about the fact that they have mastered it. And because of that, it can't be made easier.
D_Malik90

If humanity did this, at least some of us would still want to spread out in the real universe, for instance to help other civilizations. (Yes, the world inside the computer is infinitely more important than real civilizations, but I don't think that matters.)

Also, if these super-Turing machines are possible, and the real universe is finite, then we are living in a simulation with probability 1, because you could use them to simulate infinitely many observer-seconds.

0Raiden
This is probably true. I think a lot of people feel uncomfortable with the possibility of us living in a simulation, because we'd be in a "less real" universe or we'd be under the complete control of the simulators, or various other complaints. But if such super-Turing machines are possible, then the simulated nature of the universe wouldn't really matter. Unless the simulators intervened to prevent it, we could "escape" by running an infinite simulation of ourselves. It would almost be like entering an ontologically separate reality.
D_Malik180

Suppose backward time travel is possible. If so, it's probably of the variety where you can't change the past (i.e. Novikov self-consistent), because that's mathematically simpler than time travel which can modify the past. In almost all universes where people develop time travel, they'll counterfactualize themselves by deliberately or accidentally altering the past, i.e. they'll "cause" their universe-instance to not exist in the first place, because that universe would be inconsistent if it existed. Therefore in most universes that allow time t... (read more)

0Yosarian2
Alternately, if a type of time travel is invented where you can change the past, you would expect people to keep meddling with the past until they accidentally changed it so much that time travel had never been invented. This process would continue, over and over again, and the final result is that the final "stable" timeline will be one where time travel is never invented; not because it's not possible, but simply because every timeline where time travel is invented eventually changes its own past until it no longer has time travel.
6Douglas_Knight
Since quantum mechanics is true, Deutsch self-consistency has pretty big advantages over Novikov self-consistency.
6John_Maxwell
One thing I've never seen people mention re: time travel is that if you travel back in time six months, say, you'll find yourself floating out in space with the Earth on the other side of the Sun. (The Sun is in a slow orbit around the Milky Way, which itself is moving, right?) So practical time travel also requires practical space travel?
1Gurkenglas
Some civilization would have thought of that and made sure to direct their research away from time travel and towards AI.
D_Malik60

Interesting. Very small concentrations of the chemical would have to sterilize practically everyone they contacted - else it would just cause humanity to very rapidly evolve resistance, or maybe kill off the developed world.

Reminds me of the decline in testosterone levels over the past couple decades, which might be due to endocrine-disrupting compounds in the water supply and in plastics and food, but which hasn't been enough to sterilize much of the population.

D_Malik00

I think two-boxing in your modified Newcomb is the correct answer. In the smoking lesion, smoking is correct, so there's no contradiction.

One-boxing is correct in the classic Newcomb because your decision can "logically influence" the fact of "this person one-boxes". But your decision in the modified Newcomb can't logically influence the fact of "this person has the two-boxing gene".

3Unknowns
Under any normal understanding of logical influence, your decision can indeed "logically influence" whether you have the gene or not. Let's say there is a 100% correlation between having the gene and the act of choosing -- everyone who chooses the one box has the one boxing gene, and everyone who chooses both boxes has the two boxing gene. Then if you choose to one box, this logically implies that you have the one boxing gene. Or do you mean something else by "logically influence" besides logical implication?
D_Malik30

Random thing that I can't recall seeing on LW: Suppose A is evidence for B, i.e. P(B|A) > P(B). Then by Bayes, P(A|B) = P(A)P(B|A)/P(B) > P(A)P(B)/P(B) = P(A), i.e. B is evidence for A. In other words, the is-evidence-for relation is symmetric.

For instance, this means that the logical fallacy of affirming the consequent (A implies B, and B is true, therefore A) is actually probabilistically valid. "If Socrates is a man then he'll probably die; Socrates died, therefore it's more likely he's a man."
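The symmetry is easy to verify numerically for any joint distribution with P(B|A) > P(B); the probabilities below are arbitrary:

```python
# Arbitrary numbers with P(B|A) > P(B).
p_a = 0.3
p_b_given_a = 0.9
p_b_given_not_a = 0.5

p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a  # total probability
p_a_given_b = p_a * p_b_given_a / p_b                  # Bayes' theorem

assert p_b_given_a > p_b   # A is evidence for B...
assert p_a_given_b > p_a   # ...so B is evidence for A
```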

0ZeitPolizei
See also this.
5Slider
The surprise comes only to those who try to overload probability with roles it should not have. For example, "A implies B" does imply P(B|A)=1, but P(B|A)=1 doesn't imply "A implies B". While it is common that if we know systematically that a certain probability is high, that is a promising line of argument for causations and implications, it doesn't always carry through. (Meta-example: I am in essence arguing that while it's an okayish rule of thumb, it is actually improper to infer causations from probabilities. I am doing this by pointing out that P(causation|probability) ~ 1 and P(causation|probability) < 1, which alone is only suggestive that it is so (or the actual steps are implied).)
3MrMind
Yes, Jaynes talks about this in the first chapter of his book, calling it a "weak syllogism" and using it as a guideline to introduce probability as a kind of extended logic.
D_Malik60

Maybe the differentiable physics we observe is just an approximation of a lower-level non-differentiable physics, the same way Newtonian mechanics is an approximation of relativity.

If physics is differentiable, that's definitely evidence, by symmetry of is-evidence-for. But I have no idea how strong this evidence is because I don't know the distribution of the physical laws of base-level universes (which is a very confusing issue). Do "most" base-level universes have differentiable physics? We know that even continuous functions "usually"... (read more)

D_Malik00

Perhaps they create lots of children, let most of them die shortly after being born (perhaps by fighting each other), and then invest heavily in the handful that remain. Once food becomes abundant, some parents elect not to let most of their children die, leading to a population boom.

In fact, if you squint a little, humans already demonstrate this: men produce large numbers of sperm, which compete to reach the egg first. Perhaps that would have led to exactly this Malthusian disaster, if it weren't for the fact that women only have a single egg to be fertilized, and sperm can't grow to adulthood on their own.

D_Malik00

Agreed. But the Great Filter could consist of multiple Moderately Great Filters, of which the Malthusian trap could be one. Or perhaps there could be, say, only n Quite Porous Filters which each eliminate only 1/n of civilizations, but that happen to be MECE (mutually exclusive and collectively exhaustive), so that together they eliminate all civilizations.

0chaosmage
That seems correct to me, but it is quite different from your original proposal. Can you think of other filters that are MECE with the Malthusian trap? I don't see obvious ones. Maybe a good way out of the Malthusian trap would be mechanisms that limit procreation, and those make interplanetary colonization - which is procreation of biospheres - seem immoral? I don't think that sounds very convincing.
D_Malik00

I think you may have oversimplified bio-engineering to suggest it could arise in such a way before advanced technology.

I think it could be accomplished with quite primitive technology, especially if the alien biology is robust, and if you just use natural brains rather than trying to strip them down to minimize food costs (which would also make them more worthy of moral consideration). Current human technology is clearly sufficient: humans have already kept isolated brains alive, and used primitive biological brains to control robots. If you connect new... (read more)

0Elo
I was referring to the difficulty of growing a brain without a body. Or keeping a brain alive without its natural body.
D_Malik00

I'd agree that the brains of very primitive animals, or brains that have been heavily stripped down specifically to, say, operate a traffic light, aren't really worthy of moral consideration. But you'd probably need more intelligent brains for complex tasks like building cars or flying planes, and those probably are worthy of moral consideration - stripping out sapience while leaving sufficient intelligence might be impossible or expensive.

D_Malik20

Could Malthusian tragedy be the Great Filter? Meaning, maybe most civilizations, before they develop AGI or space colonization, breed so much that everyone is too busy trying to survive and reproduce to work on AGI or spaceflight, until a supernova or meteor or plague kills them off.

Since humans don't seem to be headed into this trap, alien species who do fall into this trap would have to differ from humans. Some ways this might happen:

  • They're r-selected like insects, i.e. their natural reproduction process involves creating lots of children and then all
... (read more)
0Houshalter
This filter applies even before they invent technology. Brains use lots of energy and development time. This is typically selected against. Therefore most organisms only evolve the minimum amount of intelligence they need. And human level intelligence is never an advantage in most environments. So you need some really weird set of conditions to create an environment that selects for high intelligence, and doesn't select too strongly against energy efficiency or development time. I don't know what these conditions are, but they only occurred once on Earth, over hundreds of millions of years. This suggests these conditions occur very rarely, and we might just be very lucky.
0[anonymous]
I think a too-easy reproduction mechanism means not evolving intelligence. The selective pressure for human intelligence could not have come from the environment, or else many species would be similarly intelligent, the way many species can fly. It was some sort of competition inside our species, probably sexual. And when reproduction is easy, sexual competition is not so tight.

In fact, the most likely answer is that the runaway explosion of intelligence that resulted in us was probably a mutually reinforcing process: intelligence made reproduction harder, hence more sexual competition, hence more pressure for intelligence. How did intelligence make reproduction harder? Through big heads making childbirth harder. Frankly, I don't know why evolution didn't just come up with the idea of giving women gigantic hips and large vaginas, but what happened instead is that babies are born far too prematurely, so that their head size is not too big, and thus require a huge amount of care and investment after birth. This, postnatal maternal investment, then intensified sexual competition and thus the pressure for more intelligence. This really rules out the external eggs.
2VoiceOfRa
That doesn't seem like it would lend itself to evolving culture. Specifically, since parents don't invest in their offspring they don't tell them what they've learned. Thus no matter how smart individuals are, knowledge doesn't pass to the next generation.
0Daniel_Burfoot
More generally, you can imagine a lot of failure modes where an alien species evolves to become intelligent, but cannot build technological civilization because it cannot achieve large scale social cooperation.
5Lumifer
Your Malthusian collapse seems to be conditional on some particulars of aliens' biology, but the Great Filter has to be very very general and almost universal.
1Toggle
I have two somewhat contradictory arguments.

First, this is probably a poor candidate for the great filter because it lacks the quality of comprehensiveness. Remember that a threat is not a candidate for a great filter if it merely exterminates 90%, or 99%, of all sentient species. Under those conditions, it's still quite easy to populate the stars with great and powerful civilizations, and so such a threat fails to explain the silence. Humans seem to have ably evaded the malthusian threat so far, in such a way that is not immediately recognizable as a thermodynamic miracle, so it's reasonable to expect that a nontrivial fraction of all civilizations would do so. At least up to our current stage of development.

Second, I'll point out that bullets two and four are traits possessed by digital intelligences in competition with one another (possibly the first as well), and they supplement it with a bullet you should have included but didn't: functional immortality. These conditions correspond to what Nick Bostrom calls a 'multipolar scenario', a situation in which there exist a number of different superintelligences with contradicting values. And indeed, there are many smart people who think about the dangers of these selection pressures to a sufficiently advanced civilization.

So, malthusian pressures on biological systems are unlikely to explain the apparent lack of spacefaring civilizations. On the other hand, malthusian pressures on technologically optimized digital entities (possibly an obligate stage of civilization) may be much more of a threat, perhaps even one deserving the name 'Great Filter'.
D_Malik30

Some of the disgust definitely derives from the imagery, but I think much of it is valid too. Imagine the subjective experience of the car-builder brain. It spends 30 years building cars. It has no idea what cars do. It has never had a conversation or a friend or a name. It has never heard a sound or seen itself. When it makes a mistake it is made to feel pain so excruciating it would kill itself if it could, but it can't because its actuators' range of motion is too limited. This seems far worse than the lives of humans in our world.

By "would these c... (read more)

0chaosmage
You are right that a vat brain life should certainly seem far worse than a human life - to a human. But would a vat brain agree? From its perspective, human lives could be horrible, because they're constantly assaulted by amounts of novelty and physical danger that a vat brain couldn't imagine handling. Humans always need to work for homeostasis from a wildly heterogeneous set of environmental situations. A vat brain wouldn't at all be surprised to hear that human lives are much shorter than vat brain lives.

Do you think that once we know what intelligence is exactly, we'll be able to fully describe it mathematically? Since you're assuming electronics-based superintelligence is possible, it would appear so. Well, if you're right, intelligence is substrate-independent. Your distinction between "single mind" and "higher-order mechanism" is a substrate distinction, so it shouldn't matter. You and I feel it does matter, because we're glorified chimps with inborn intuitions about what constitutes an agent, but math is not a chimp - and if math doesn't care whether intelligence runs on a brain or on a computer system, it shouldn't care whether intelligence runs on one brain or on several.
D_Malik70

I'm surprised that nobody's pointed out the dual phenomenon of "yay fields", whereby a pleasurable stimulus's affect is transferred to its antecedents.

The field of behavior modification calls this "conditioning", and "higher-order conditioning" if the chain has more than two stimuli.

1[anonymous]
I can think of plenty of personal "Ugh fields," but I'm having a hard time finding concrete examples of a "yay field." Would you mind illustrating a little further?
D_Malik90

First, I'd predict that much of the observed correlation between technical proficiency and wealth is just because both of them require some innate smarts. In general, I'm suspicious of claims that some field develops "transferable reasoning abilities", partly because people keep using that to rationalize their fiction-reading or game-playing or useless college degrees. I'm worried that math and physics and theoretical CS are just nerd-snipery / intellectual porn, and we're trying to justify spending time on them by pretending they're in line with... (read more)

4IlyaShpitser
No way, especially not physics. We as a civ need to do more of this stuff, not less, compared to what we are doing now. I can't think of any category of human activity that did more to improve the world than the hard sciences. Maaaaybe some religions in the "convince people to stop killing each other and cooperate long enough to get science off the ground" sense.
0Ben Pace
Hehe, it's rare that intellectual satisfaction is called a 'low' value. Yes, there's also the theory that success in hard math/science fields is a strong signal of intellect, and thus smart people flock to them to signal their intelligence. As someone about to start a CS degree and figuring out what to do with his life, this is a sobering line of thought.
0JonahS
My view is that some degree of technical facility helps a lot. As I recently wrote, I think that learning to read very carefully and not make unwarranted assumptions is a very important skill, and one way to get it is by studying proof-based math. I don't have strong views on how much studying pure math and TCS help after the first 1-2 years. I think that the case for learning advanced statistics & machine learning is much stronger. Separately, I benefited a huge amount from reading and interacting with elite mathematicians. Even though they weren't thinking about the things that I'm doing now directly, I was able to transfer what I had learned from them to the things that I'm currently focused on. That's the peer group effect.
D_Malik10

Turing machines are a big deal because when you change the definition of a Turing machine (by letting it use arbitrarily many symbols, or giving it multiple tapes or a multi-dimensional tape, or letting it operate nondeterministically, or making its tape finite in one direction...) it usually can still solve exactly the same set of problems, which strongly suggests that Turing completeness is a "natural concept". A lot of computational systems are Turing-complete, and all of the discrete computational systems we've built are no more powerful than... (read more)
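The robustness claim is easy to make concrete with a simulator. The sketch below is my own illustration (the function name, encoding, and example machine are not from the comment): a one-tape machine takes only a transition table and a tape, and that tiny fixed machinery is already enough for universal computation.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Run a one-tape Turing machine until it halts (or max_steps)."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit while moving right, halt on blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
```

Here run_tm(flip, "0110") returns "1001". Adding more tapes or symbols makes machines like this easier to write, but (per the equivalence results above) never lets them solve problems this model can't.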

2Viliam
An important property of Turing machines is that they have only one kind of instruction, which is very simple. That comes useful in various mathematical proofs, where you don't have to enumerate many options. (Try to imagine the horror of writing a mathematical proof that something cannot be solved by a C program.)
D_Malik260

Upgraded reflective senses would be really cool. For instance:

  • Levels of various interesting hormones like cortisol, epinephrine, testosterone, etc. For instance, cortisol levels are higher in the morning than in the evening, but this is not obvious. (Or am I lying to prevent hindsight bias?)
  • Various things measured by an implanted EEG. For instance, it would be cool to intuitively know the difference between beta and gamma waves.
  • Metabolism-related things like blood insulin, glucose, ketones.
  • Galvanic skin response. Heart rate variability.

We already ha... (read more)

1Viliam
Could this be trained? In a safe environment, increasing and decreasing the levels quickly, and noticing what happens. So you would learn how exactly "high level of X" and "low level of X" feel for various values of X.
8Ishaan
This is my favorite one so far, unexpected and very practical. You quite plausibly might upgrade your meta-cognitive ability as these upgraded senses improve your ability to notice. Noticing your body reacting to getting anxious or being able to detect certain mental processes (mind wandering, confabulation, etc) is something you'd really want on all the time, at every second.
D_Malik50

Clicking on the tag "open thread" on this post only shows open threads from 2011 and earlier, at "http://lesswrong.com/tag/open_thread/". If I manually enter "http://lesswrong.com/r/discussion/tag/open_thread/", then I get the missing open threads. The problem appears to be that "http://lesswrong.com/tag/whatever/" only shows things posted to Main. "http://lesswrong.com/r/all/tag/open_thread/" seems to behave the same as "http://lesswrong.com/tag/open_thread/", i.e. it only shows things posted to ... (read more)

D_Malik160

I don't think a shutdown is even remotely likely. LW is still the Schelling point for rationalist discussion; Roko-gate will follow us regardless; SSC/Gwern.net are personal blogs with discussion sections that are respectively unusable and nonexistent. CFAR is still an IRL thing, and almost all of MIRI/CFAR's fans have come from the internet.

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

-2SanguineEmpiricist
Part of the reason it is losing steam is there is a small quantity of posters that post wayyyy too much using up everyone's time and they hardly contribute anything. Too many contrarians. We have a lot of regular haters that could use some toning down.
Viliam200

Agreed that LW is slowly losing steam, though. Not sure what should be done about it.

To have a website with content like the original Sequences, we need someone who (a) can produce enough great content, and (b) believes that producing content for a website is the best use of their time.

It already sounds like a paradox: the more rational and awesome a person is, the more likely it is that they can use their time much better than writing a blog.

Well, unless they use the blog to sell something...

I think Eliezer wrote the original Sequences pretty mu... (read more)

3Cariyaga
If Less Wrong is, indeed, losing steam as a community (I wouldn't have considered myself part of it until recently, and hadn't kept up with it before then), there are options to deal with it. First, we could create enjoyable media to be enjoyed by large quantities of people, with rationalistic principles, and link back to Less Wrong in it. HPMOR is already a thing, and certainly does well for its purpose of introducing people to and giving some basic instruction in applied rationality. However, as it's over, the flow of people from the readership it generated has ceased. Other media is a possibility. If people are interested in supporting Less Wrong and CFAR specifically, there could perhaps be a YouTube channel made for it; maybe streaming live discussions and taking questions from the audience. Non-video means are also, obviously, possible. Webcomics are somewhat niche, but could drive readership if a high-quality one was made. I'm loath to suggest getting already-established content creators to read and support Less Wrong, partially because of my own reticence in such, and partially because of a host of problems that would come with that, as our community is somewhat insular, and though welcoming in our own way, Less Wrong often comes off to people as arrogant or elitist. On that note, while I would not suggest lowering our standards for discourse, I think that in appealing to a larger community it's necessary to realize that newer members of the community may not have the background necessary to take constructively the criticisms given. I'm not sure how to resolve this problem. Being told to "go and read such and such, then you'll understand" comes off rudely. Perhaps some form of community primer link on the front page, regarding customs here? The about page is a little cluttered and not entirely helpful. That in addition to a marker next to someone's name indicating they're new to Less Wrong could do a lot to help. Furthermore, a section for the "younger"
D_Malik60

The null hypothesis is always false, and effect sizes are never zero. When he says it's zero you should probably interpret zero as "too small to care about" or "much smaller than most people think". I'll bet the studies didn't say the effect was literally zero, they just said that the effect isn't statistically significant, which is really just saying the effect and the sample size were too small to pass their threshold.

People say a lot of things that aren't literally true, because adding qualifiers everywhere gets annoying. Of course if he doesn't realize that there are implicit qualifiers, then he's mistaken.
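The "too small to pass their threshold" point is easy to make quantitative. Below is a toy one-sample z-test, entirely my own sketch (it assumes the observed effect equals the true effect, which real studies don't get to assume): the same real 0.1-SD effect is "not significant" at n = 20 but overwhelming at n = 10,000.

```python
import math

def z_test_p(effect_sd, n):
    """Two-sided p-value for a one-sample z-test, given a true effect
    of effect_sd standard deviations and sample size n."""
    z = effect_sd * math.sqrt(n)
    # two-sided tail probability of the standard normal, via erf
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

small_n_p = z_test_p(0.1, 20)      # ~0.65: the study "finds no effect"
large_n_p = z_test_p(0.1, 10_000)  # effectively 0: same effect, huge sample
```

So "the effect is zero" and "p > 0.05" are very different claims; the second is mostly a statement about n.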

1SanguineEmpiricist
Yeah, but the abrasiveness of his delivery runs contrary to your goodwill. I'm being polite in his favor.
D_Malik160

That study is observational, not experimental. Maybe genes for disagreeableness make parents abuse their children, and they pass those genes on to their offspring. Probably both nature and nurture contribute.
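The confounding worry can be shown with a toy simulation (entirely my own illustration; the probabilities are made up): a shared "disagreeableness" gene raises both the chance of parental abuse and the child's disagreeableness, while abuse itself has no causal effect in the model. An observational comparison still finds a large gap.

```python
import random

random.seed(1)
n = 50_000

abuse = []
child_disagreeable = []
for _ in range(n):
    gene = random.random() < 0.5  # the confounder, shared by parent and child
    # the gene raises both outcomes; abuse has NO causal arrow to the child here
    abuse.append(random.random() < (0.6 if gene else 0.2))
    child_disagreeable.append(random.random() < (0.6 if gene else 0.2))

abused = sum(abuse)
p_given_abuse = sum(c for a, c in zip(abuse, child_disagreeable) if a) / abused
p_given_none = sum(c for a, c in zip(abuse, child_disagreeable) if not a) / (n - abused)
# p_given_abuse comes out near 0.5 vs. ~0.33 for p_given_none,
# despite abuse having zero causal effect in this model
```

The observed correlation is real, but here it is produced entirely by the confounder, which is exactly why observational data can't settle the nature/nurture question.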

-1Epictetus
Certainly. Correlation isn't causation. One hurdle is that any experimental study of this phenomenon would be highly unethical. But all is not lost. Single-parent households are also associated with higher risk of juvenile delinquency. I'll see if I can dig up a study of children abused by foster parents or step-parents.
D_Malik40

Probably gotten most of the responses it was going to get, so here's a scatter plot:

People seem to think it's worse the more they know about it (except those who know nothing seem slightly more pessimistic than those who know only a little).

Made by running this in IPython (after "import pandas as pd" and "from numpy.random import randn" in .pythonstartup):

!sed "/^#/d" poll.csv >poll-clean.csv
pd.read_csv("poll-clean.csv", names=["user", "pollid", "response", "date"])
_.pivot_table("response", ["user"], ["pollid"])
_ + 0.1*randn(*_.shape) # jitter
_.plot(kind="scatter", x=906, y=907)
plt.xlabel("Net loss.....Net benefit")
plt.ylabel("Nothing.....Expert")
D_Malik10

Agreed, considering "EEA" to mean the African savannah. So for instance if your ancestry is European and you're currently living in California you don't need to spend very much time outside, and if you're dark-skinned and living at a high latitude you should try to get lots of sunlight.

D_Malik40

In the Sleeping Beauty problem, SIA and SSA disagree on the probability that it's Monday or Tuesday. But if we have to bet, then the optimal bet depends on what Ms Beauty is maximizing - the number of bet-instances that are correct, or whether the bet is correct, counting the two bets on different days as the same bet. Once the betting rules are clarified, there's always only one optimal way to bet, regardless of whether you believe SIA or SSA.

Moreover, one of those bet scenarios leads to bets that give "implied beliefs" that follow SIA, and the ... (read more)
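The claim that each betting rule pins down one answer can be checked with a quick simulation (my own sketch, not from the thread): scoring Heads per awakening recovers the SIA-style "thirder" probability, while scoring per experiment recovers the SSA-style "halfer" one.

```python
import random

random.seed(0)
n = 100_000

heads_awakenings = 0   # awakenings at which the coin was heads
total_awakenings = 0
heads_experiments = 0  # experiments in which the coin was heads

for _ in range(n):
    heads = random.random() < 0.5
    # heads: Beauty wakes once (Monday); tails: twice (Monday and Tuesday)
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_awakenings += 1
        heads_experiments += 1

per_awakening = heads_awakenings / total_awakenings  # ~1/3, the SIA-style answer
per_experiment = heads_experiments / n               # ~1/2, the SSA-style answer
```

Same coin, same awakenings; only the scoring rule differs, which is the sense in which clarifying the bet dissolves the disagreement.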

3Manfred
I think the consensus was not so much that phrasing anthropic problems in terms of decision problems is necessary, or that there is a "dissolution" taking place, but merely that it works, which is a very important property to have. One has to be careful when identifying implied beliefs as SSA or SIA, because the comparison is usually made by plugging SSA and SIA probabilities into a naive causal decision theory that assumes 'the' bet is what counts (or reverse-engineering such a decision theory). Anything outside that domain and the labels start to lose usefulness. In the course of answering Stuart Armstrong I put up two posts on this general subject, except that in both cases the main bodies of the posts were incomplete and there's important content in comments I made replying to my own posts. Which is to say, they're absolutely not reader-friendly, sorry. But if you do work out their content, I think you should find the probabilities in the case of Sleeping Beauty somewhat less mysterious. First post on how we assign probabilities given causal information. Second post on what this looks like when applied.
D_Malik30

Is transcranial direct current stimulation technology yet at the point where someone who starts it has higher expected gains than costs? I.e., should more LWers be using it? You can comment and/or answer this poll:

Do you think the average LWer would get a net benefit from using tDCS, taking into account the benefits, costs of equipment, risks, etc.? [pollid:906] How much do you know about this topic? [pollid:907]

Summary of the 2008 state of the art; tDCS subreddit.

4D_Malik
Probably gotten most of the responses it was going to get, so here's a scatter plot:

People seem to think it's worse the more they know about it (except those who know nothing seem slightly more pessimistic than those who know only a little).

Made by running this in IPython (after "import pandas as pd" and "from numpy.random import randn" in .pythonstartup):

!sed "/^#/d" poll.csv >poll-clean.csv
pd.read_csv("poll-clean.csv", names=["user", "pollid", "response", "date"])
_.pivot_table("response", ["user"], ["pollid"])
_ + 0.1*randn(*_.shape) # jitter
_.plot(kind="scatter", x=906, y=907)
plt.xlabel("Net loss.....Net benefit")
plt.ylabel("Nothing.....Expert")
0Manfred
Ooh, having the raw poll data is neat.
D_Malik30

I've been doing the same thing for ~40 minutes of daily peak sunlight, because of heuristics ("make your environment more like the EEA") and because there's evidence it improves mood and cognitive functioning (e.g.). The effect isn't large enough to be noticeable. Sunlight increases risk of skin cancer, but decreases risks of other, less-survivable cancers more; I'm not sure how much of the cancer reduction you could get from taking D3 and not getting sunlight. I guess none of that actually answers your question.

3Ishaan
My vague and untrustworthy impression is that D3 supplementation is better than nothing but has risks related to calcium going to the wrong places, which may be mitigated by Nitric oxide which is also sun linked, and might also be mitigated by not being K2 and magnesium deficient which most people are. I should probably start being better about archiving what I read so that I can stop being vague and untrustworthy. I do notice a muscle and general relaxation effect which is deeper and lasts longer than, say, an equally warm shower. A blood panel I got back when I was not supplemented said I was pretty severely D deficient, so it might be that I feel the effects more. (Though from what I know of the biology of this the NO is more likely to be responsible for the relaxation effect than the D3.)
0polymathwannabe
If you're white, you're no longer adapted to the ancestral environment where humans evolved.
D_Malik30

Raymond Smullyan calls these sorts of puzzles (where characters' ability to solve the puzzle is used by the reader to solve the puzzle) "metapuzzles". There are some more examples in his books.

D_Malik00
  1. Go to his first article, then in the "Article Navigation" menu use the "by author" arrows.
  2. Go to lesswrong.com/user/Yvain, go to the last page (by clicking "next" or by changing the URL in some way), then go back one page at a time.

Haven't tested either of those, but they should work.

D_Malik00

Thanks for the feedback! I'm intending to go into industry, not academia, but this is still helpful.

Load More