Comment author: WalterL 20 October 2016 02:45:03AM 3 points [-]

Sorry, I didn't mean that to be what you took from it.

I used to be fat. ( I still am, but not nearly to the same extent) Like, Jabba fat. My parents got doctors to say that I had an eating disorder, and maybe I did.

Othering my appetite never helped me. Like "I have an eating disorder" focused my energy on something (my disorder) that didn't have a mind. It couldn't get tired, or bored...it didn't exist. It's like "fighting" cancer.

But that doesn't mean that what worked was thinking "I'm a glutton".

When you say that "I am a dumb person", it isn't any closer to a thought you can act on. Kicking yourself when you are down feels good (or, at least, it did for me), it feels like "paying" for the behavior, but that's just thoughts. It doesn't actually change stuff.

I was shooting for more "I am a person who had unprotected sex with sketchy folks at place X". That feels, 'actionable', if you will, to me. Like, if the problem is a sex addiction, I dunno what the solution is. If the problem is being a dumb person, I dunno what the solution is. But if the problem is going to a place and doing stuff, there are a bunch of solutions.

1: Carry protection, everywhere. Put it in something that you carry everywhere (wallet, little thingy on your car keys, cell phone case, whatever). If you ever screw someone sketchy, make sure you take it out and use it. If they aren't willing, maybe that's a spur to reconsider?

2: Enlist the help of the dudes who run the place. Tell them if they see you there, you will give them ten thousand dollars, or however much money would sting. Ask them, as friends, to kick you out. Tell them you have leprosy. Whatever words you have to say to make sure you aren't welcome back there.

3: If this place is pay to play, then ration your funds. Each morning put exactly as much cash as you'll need that day in your wallet, and don't carry a credit card.

I don't know if any of these could work for you, but something similar might. A behavior that you don't want to repeat can always be made more inconvenient. That's what helped me out with eating too much. I hope that you can do a similar thing to get yourself a different habit.

Comment author: Lumifer 18 October 2016 09:29:16PM 3 points [-]

> So your current value can be considered a value and none else?

That objection is not logical :-P

> It's using your brain mechanics seeking for a higher power

Sorry, don't have those. Maybe somewhere in dusty off-line storage, but certainly not activated.

> Because is it really that bad to value logic over all else?

That strikes me as an expression devoid of meaning. Logic is a tool. Tools can be useful or not so much, but tools are not values unto themselves, they just make it easier to reach actual goals.

Do tell, how The One True Value of logic led you to post word salad on LW?

Comment author: ChristianKl 18 October 2016 08:56:25PM 3 points [-]

What empirical evidence have you observed to back your belief that this technique is valuable?

Comment author: Lumifer 18 October 2016 08:33:29PM 3 points [-]

Thank you, I'm not looking for a religious conversion experience.

Nor am I likely to take blind leaps of faith on the say-so of internet strangers. Logic isn't a "value", anyway.

Comment author: Lumifer 18 October 2016 07:57:18PM 3 points [-]

I don't have a most important value.

Comment author: moridinamael 17 October 2016 09:49:54PM 3 points [-]

I am essentially imagining you to be similar to me about five years ago.

It sounds like you are not really excited about anything in your own life. You're probably more excited about far-future hypotheticals than about any project or prospect in your own immediate future. This is a problem because you are a primate who is psychologically deeply predisposed to be engaged with your environment and with other primates.

I used to have similar problems of motivation and engagement with reality. At some point I just sort of became exhausted with it all and started working on "insignificant" projects like writing a book, working on an app, and raising kids. It turns out that focusing on things that are fun and engaging to work on is better for my mental health than worrying about how badly I'm failing to live up to my imagined ideal of a perfectly rational agent living in a Big World.

If I find that I'm having to argue with myself that something is useful and I should do it, then I'm fighting my brain's deeply ingrained and fairly accurate Bullshit Detector Module. If I actually believe that a task is useful in the beliefs-as-constraints-for-anticipated-experience sense of "believe", then I'll just do it and not have any internal dialogue at all.

Comment author: James_Miller 17 October 2016 01:00:05PM *  3 points [-]

Yes, I agree. It shows children are trying to guess the teacher's password and are not doing math. Interestingly, when I asked my son this question he said you couldn't find the answer. When I asked how he knew that he said he has seen other math problems where you don't have enough information to solve.

Comment author: SithLord13 17 October 2016 12:51:29PM 3 points [-]

I think the issue here might be slightly different than posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. All their prior experience with math has been with solvable problems. They've had problems they couldn't solve, and then been shown the failure was a mistake on their part. Without good cause, why would they suddenly assume their priors are wrong, rather than that they're failing to grasp something? Given their priors and information, it's rational to expect that they missed something.

Comment author: kithpendragon 17 October 2016 12:51:35AM 3 points [-]

I wonder how my coworkers will do...

Comment author: gworley 16 October 2016 12:41:31AM 3 points [-]

Medium makes it a little hard to find the RSS feeds, but it's at:

https://medium.com/feed/map-and-territory

Comment author: CronoDAS 15 October 2016 09:49:23PM 3 points [-]

Is there an RSS feed for new posts?

Comment author: WhySpace 15 October 2016 06:42:05PM 3 points [-]

If the majority of minds with moral weight are the result of an intelligent mind's decision, then the link between complexity and frequency may be weak. Pain is a strong motivator for some things, even if it's bad at motivating creativity, so perhaps there would still be an incentive to create more pain. This is extremely speculative though.

The bigger worry would be that forces like Moloch and evolution may favor pain. Wild animals appear to have much more pain in their lives than pleasure. Even if the carrot were a more effective motivator than the stick for something, if pain were simpler and more robust, evolution would still favor it.

This would be especially important for things like Boltzmann brains. It seems unlikely to me that things like trees or insects can suffer, but if they can, we'd have a very hard time relating to minds so different from our own. With so little evidence, the choice of a good prior is crucial, so it would be useful to have a prior for the predominance of suffering over happiness.

Comment author: scarcegreengrass 13 October 2016 11:57:11AM 2 points [-]

Oh, this is much more complete, thanks.

Wow, it's surreal to hear Obama talking about Bostrom, Foom, and biological x risk.

Comment author: DanArmak 12 October 2016 02:55:20PM 3 points [-]

Without commenting on whether this presentation matches the original metaethics sequence (with which I disagree), this summary argument seems both unsupported and unfalsifiable.

  1. No evidence is given for the central claim, that humans can and are converging towards a true morality we would all agree about if only we understood more true facts.
  2. We're told that people in the past disagreed with us about some moral questions, but we know more and so we changed our minds and we are right while they were wrong. But no direct evidence is given for us being more right. The only way to judge who's right in a disagreement seems to be "the one who knows more relevant facts is more right" or "the one who more honestly and deeply considered the question". This does not appear to be an objectively measurable criterion (to say the least).
  3. The claim that ancients, like Roman soldiers, thought slavery was morally fine because they didn't understand how much slaves suffer is frankly preposterous. Roman soldiers (and poor Roman citizens in general) were often enslaved, and some of them were later freed (or escaped from foreign captivity). Many Romans were freedmen or their descendants - some estimate that by the late Empire, almost all Roman citizens had at least some slave ancestors. And yet somehow these people, who both knew what slavery was like and were often in personal danger of it, did not think it immoral, while white Americans in no danger of enslavement campaigned for abolition.
Comment author: WalterL 12 October 2016 02:46:05PM -1 points [-]

Yes, those with my values will live here, in Gondor. Your folks can live other there, in Mordor. Our citizens will no longer come into contact and conflict with one another, and peace will reign forever.

What, these segregated regions THEMSELVES come into conflict? Absurd. What would you even call a conflict that was between large groups of people? That could never happen. Everyone who shares my value system knows that lots of people would die, and we all agree that nothing could be worth that.

Comment author: Lumifer 11 October 2016 06:58:47PM *  3 points [-]

It's socially acceptable to twirl and manipulate small objects in your hands, from pens to stress balls. If you need to get your mouth involved, it's mostly socially acceptable to chew on pens. Former smokers used to hold empty pipes in their mouths, just for comfort, but it's hard to pull off nowadays unless you're old or a full-blown hipster.

Comment author: Lumifer 11 October 2016 06:38:33PM 2 points [-]

We don't know what an AI which maximizes human values is because we don't know what human values are at the necessary level of precision. Not to mention the assumption that the AI will be a maximizer and that values can be maximized.

Comment author: James_Miller 11 October 2016 02:33:02PM 3 points [-]

She could read "The Basic AI Drives" to him at night.

Comment author: ChristianKl 10 October 2016 09:51:24PM 2 points [-]

It's certainly not a blog that tries to pander to the reader.

Comment author: Lumifer 10 October 2016 06:42:04PM 3 points [-]

Brain drain has been a concern of some for a long time.

Comment author: 9eB1 10 October 2016 05:01:32PM 3 points [-]

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with the best posts for a new reader, and a thread on people's favorite things from TLP.

Comment author: pcm 10 October 2016 04:44:24PM 3 points [-]

I suspect attempted telekinesis is relevant.

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points [-]

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise the chances that it will be a friendly AI.

Comment author: ChristianKl 10 October 2016 01:05:28PM 3 points [-]

The standard way to learn massage is through taking a course.

I would also recommend Betty Martin's 3-Minute Game as a secular massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80

Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good part is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

In response to Quantum Bayesianism
Comment author: chron 09 October 2016 06:53:59PM 3 points [-]

Did anyone else find the banner at the top of the article (about preferring secondary and tertiary sources to primary ones) more interesting, as a comment on the problems with Wikipedia, than the article itself?

Comment author: jimmy 08 October 2016 07:11:22PM 3 points [-]

An excellent post, but not Scott :)

http://mindingourway.com/confidence-all-the-way-up/

Comment author: CellBioGuy 07 October 2016 11:09:11PM *  3 points [-]

The short version before I get a chance to write more posts:

Primates appear to be an interestingly potentiated lineage, prone to getting smart when they get large, due to differences in brain development established 50 megayears ago that make their brains much more impressive per unit volume than most larger mammals. The great apes other than humans actually seem to run into energetic limits to feeding their brains and have smaller brains than you'd expect for a primate of their size, while humans are right on the generic primate trendline. Birds are another potentiated lineage - their brains are about 6x as compact as a comparable primate brain.

Eukaryotes are really weird. The one thing that is incontrovertible these days is that the classic 3-domains-of-life idea, with eukaryotes and archaea as sister clades, is turning out to be wrong. Eukaryotes are turning out to have come from a fusion/symbiosis of a bacterium and something that fits entirely within the archaeal domain. Various people who are studying their origin and evolution have their pet models and hold to them too tightly and fight each other bitterly, though some things are finally coming out for sure. A lot of their weird features may come from particular population genetic selective pressures that come from competition between copies of the mitochondrial genome, and a lot of others may come from the fact that they invented sex and have low population sizes, both of which allow types of evolution and genetic drift that you are much less likely to see in the eubacteria or archaebacteria, the two 'primary' domains (whose separation represents the deepest branch in the tree of life). But the fact that ALL eukaryotes have a huge constellation of weird traits with no intermediate forms means their origin was a weird event, and opinions vary on whether that means it was a singular, extremely unlikely event or whether all those weird properties follow logically from how they formed, and on whether there was a strong first-mover advantage.

And, of course, our data on the breadth of life that exists on Earth is incomplete... I love the specialized journals on electron microscopy, now and then you see a paper where someone just sees something really bizarre and reports on what they saw without a clue as to what it is. If you want a really perplexing paper, take a look at this: http://jmicro.oxfordjournals.org/content/61/6/423.full.pdf+html (alternately https://dl.dropboxusercontent.com/u/67168735/a%20unique%20organism%20from%20the%20deep%20sea.pdf ) "A Unique Microorganism from the Deep Sea"

Comment author: Unnamed 07 October 2016 06:15:59AM 3 points [-]
Comment author: Houshalter 07 October 2016 06:11:12AM 3 points [-]

> In that case, the magic wand has a high degree of optimization power. It is extremely good at converting things it touches into gold, in all possible environments. But it is perfectly plain that the wand is not intelligent. So that definition of intelligence is mistaken.

The wand isn't generally intelligent. Maybe by some stretch of the definition we could sorta say it's "intelligent" at the task of turning things to gold. But it can't do any tasks other than turning things into gold. The whole point of AGI is general intelligence. That's what the G stands for.

Humans are generally intelligent. We can apply our brains to widely different tasks, including many that we weren't evolved to be good at at all. From playing Go to designing rockets. Evolution is generally intelligent. It can find remarkably good designs for totally arbitrary objective functions.

I think general optimization ability is a perfectly fine definition of intelligence. It includes things like humans and evolution, and some kinds of simple but general AI, but excludes things like animals and domain-specific AI. It defines intelligence only by results: if you can optimize an arbitrary goal, you are intelligent. It doesn't try to specify what the internal mechanisms should be, just whether or not they work. And it's continuous: you can go from a very stupid optimizer like evolution all the way to very good/intelligent ones like humans.

> Intelligence is the ability to engage in abstract thought. You could characterize that as pattern recognition, except that it is the ability to recognize patterns in patterns in patterns, recursively.

This definition is really vague. You are just shoving the hard problem of defining intelligence into the hard problem of defining "abstract thought". I guess the second sentence kind of clarifies what you mean. But it's not clear at all that humans even meet that definition. Do humans recognize patterns in patterns? I don't think so. I don't think we are consciously aware of the vast majority of our pattern recognition ability.

> The most intelligent AI we have, is not remotely close to that. It can only recognize very particular patterns in very particular sorts of data.

Not really. Deep neural networks are extraordinarily general. The same networks that win at Go could be applied to language translation, driving cars, playing Pac-Man, or recognizing objects in an image.

> One example of that was the case of AlphaGo, where Eliezer called it "superintelligent with bugs,"

The exact quote is "superhuman with bugs". In context, he was describing the fact that the AI plays far above human level but still makes some mistakes a human might not make. And it's not even clear when it is making mistakes: it is so far above human players, and may see things we don't, that those moves may not be mistakes at all.

> The orthogonality thesis is a similar mistake of that kind; something that is limited to seeking a limited goal like "maximize paperclips" cannot possibly be intelligent, because it cannot recognize the abstract concept of a goal.

A paperclip maximizer can recognize the concept of a goal. It's not stupid, it just only cares about paperclips. In the same way humans are programmed by evolution to maximize sex, social status, and similarly arbitrary goals, there is no reason an AI couldn't be programmed to maximize paperclips. Again, perhaps humans are not intelligent by your definition.

> Unless you believe there is some magical point where there is a sudden change from stupid to intelligent, we are still extremely far off from intelligent machines.

Yeah, that seems quite obviously true. Just look at chimpanzees. By some accounts the main difference is that human brains are just scaled-up primate brains, 3 times as large, with somewhat more sophisticated language ability. And suddenly you go from creatures that can barely master simple tools and can't communicate ideas to creatures capable of technological civilization. 500 million years of evolution refined the mammal brain to get chimps, but only about a million more years were needed to go from stupid animals to generally intelligent humans.

I don't see any reason to believe AI progress should be linear. In practice it clearly is not: areas of AI often have sudden discontinuities or increasing rates of progress. I don't see any reason why there can't be a single breakthrough that causes enormous progress, or why even incremental progress must be slow. If evolution can make brains with a bunch of stupid random mutations, surely thousands of intelligent engineers can do much better on a much shorter time scale.

> as is evident in the behavior of children, which passes imperceptibly from stupid to intelligent.

This isn't a valid analogy at all. Baby humans still have human brains running the same algorithms as adult humans. Their brains are just slightly smaller and have had less time to learn and train. Individual AIs may increase in ability linearly as they grow and learn. But the AI algorithms themselves have no such constraint, someone could theoretically figure out the perfect AI algorithm tomorrow and code it up. There is certainly no law of nature that says AI progress must be slow.

Comment author: gjm 06 October 2016 11:58:35PM -1 points [-]

> You are misreading Jacobian

Plausible guess, but actually my error was different: I hadn't noticed the bit of Jacobian's comment you quote there; I read what you wrote and made the mistake of assuming it was correct.

Those words "once you've decided on a course of action" were your words. I just quoted them. It does indeed appear that they don't quite correspond to what Jacobian wrote, and I should have spotted that, but the original misrepresentation of Jacobian's position was yours rather than mine.

(But I should make clear that you misrepresented Jacobian's position by making it look less unreasonable and less easy for you to attack, so there's something highly creditable about that.)

Comment author: Lumifer 06 October 2016 03:11:50PM 3 points [-]

So, if the emotional empathy should be discarded, why should I help all those strangers? The only answer that the link suggests is "social propriety".

But social propriety is a fickle thing. Sometimes it asks you to forgive the debts of the destitute, and sometimes it asks you to burn the witches. Without empathy, why shouldn't you cheer at the flames licking the evil witch's body? Without empathy, if there are some kulaks or Juden standing in the way of the perfect society, why shouldn't you kill them in the most efficient manner at your disposal?

Comment author: Lumifer 06 October 2016 02:52:56PM 3 points [-]

As with any complex phenomenon in a complex system, there is going to be a laundry list of contributing factors, none of which is the cause (in the sense that fixing just that cause will fix the entire problem). We can start with

  • Genetic factors (such as lower IQ)
  • Historical factors, which in turn flow into
  • Cultural factors (such as distrust of the government / law enforcement) and
  • Economic factors (from being poor to having a major presence in the drug trade)

The opinions about the relative weights of these factors are going to differ and in the current political climate I don't think a reasonable open discussion is possible.

Comment author: siIver 06 October 2016 02:35:05PM *  3 points [-]

I think this is the first article in a long time that straight up changed my opinion in a significant way. I always considered empathy a universally good thing – in all forms. In fact I held it as one of the highest values. But the logic of the article is hard to argue with.

I still tentatively disagree that it [emotional empathy] is inherently bad. Following what I read, I'd say it's harmful because it's overvalued/misunderstood. The solution would be to recognize that it's an egoistical thing (as I'm writing this, I can confirm that I think this now), whereas cognitive empathy is the selfless thing.

Doing more self-analysis, I think I already understood this on some level, but I was holding the concept of empathy in such high regards that I wasn't able to consciously criticize it.

I think this article is something that people outside of this community really ought to read.

Comment author: moridinamael 06 October 2016 02:20:37PM 3 points [-]
Comment author: SodaPopinski 06 October 2016 11:52:27AM 3 points [-]

CellBioGuy, all your astrobiology posts are great; I'd be happy to read all of those. This may be off the astrobiology topic, but I would love to see a post with your opinion on the foom question. For example, do you agree with Gwern's post about there not being complexity limitations preventing runaway self-improving agents?

Comment author: philh 05 October 2016 10:40:48AM 2 points [-]

I'd find all of these interesting, particularly the first three and the last.

I'm glad you're back.

Comment author: Manfred 04 October 2016 07:06:49PM *  3 points [-]

The AI has to do what humans mean (rather than e.g. not following your orders and just calculating more digits of pi) before you start talking at it, because you are relying on it interpreting that sentence how you meant it.

The hard part is not figuring out good-sounding words to say to an AI. The hard part is figuring out how to make an actual, genuine computer program that will do what you mean.

Comment author: Gunnar_Zarncke 04 October 2016 04:29:17PM 3 points [-]

Do you have a mathematical formulation for it? (That will be the first question by the physics consultant mentioned above)

Comment author: Fluttershy 04 October 2016 01:43:49AM *  3 points [-]

I'm sorry! Um, it probably doesn't help that much of the relevant info hasn't been published yet; this patent is the best description that will be publicly available until the inventors get more funding. From the patent:

By replacing the volume of the vasculature (from 5 to 10 percent of the volume of tissues, organs, or whole organisms) with a gas, the vasculature itself becomes a “crush space” that allows stresses to be relieved by plastic deformation at a very small scale. This reduces the domain size of fracturing...

So, pumping the organ full of cool gas (not necessarily oxygen) is done for reasons of cooling the entire tissue at the same time, as well as to prevent fracturing, rather than for biological reasons.

ETA: To answer your last question, persufflation would be done on both cooling and rewarming.

Comment author: Lumifer 03 October 2016 09:26:06PM 3 points [-]

Is it falsifiable? Which empirical observations/experiments can falsify it?

Comment author: Florian_Dietz 03 October 2016 08:22:13PM *  3 points [-]

Is there an effective way for a layman to get serious feedback on scientific theories?

I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).

Comment author: Luke_A_Somers 03 October 2016 12:01:11AM 3 points [-]

It might help, though - if you suddenly stop applying the magnetic fields, then it might freeze more abruptly than if you simply lower the temperature. That could reduce the extent of crystallization and thus damage.

Comment author: Viliam 02 October 2016 07:26:27PM *  3 points [-]

I saw the 2015 remake of Death Note, and I was so disappointed. :(

The goal of the remake was probably to make the conflict of two highly intelligent opponents more accessible to an audience of normies. Not a bad idea per se; I actually liked some of the changes.

The problem is that while making the changes, they introduced a few obvious logical errors, probably as a side effect of trying to make some scenes more dramatic. Which matters a lot in a story based on the premise that two highly intelligent opponents are fighting by exploiting each other's smallest mistakes; and then something completely stupid happens and no one notices, most likely because the author of the remake didn't notice it.

I'll try to avoid being unnecessarily specific; but here is the general pattern: In the story universe, it is possible to cast magical spells on other people. If certain preconditions are met, the magic makes people follow a script specified by the caster. If the preconditions are not met, nothing happens. (There is no such thing as partially meeting the preconditions; it's either yes or no.)

However, at least twice in the series the following happens: The mage casts the spell with a sequence of unlikely actions on someone. The victim does the unlikely action A, then does the unlikely action B, and then... as a big surprise... at the last moment it turns out they don't do the remaining unlikely action C! How is that possible? Turns out someone else outsmarted the mage and made some of the preconditions fail, so the magic spell didn't work.

I guess at this moment the audience is supposed to cheer for the smart opponent, but I am left scratching my head: if the preconditions of the spell were not met, how was it possible in the first place that the victim did the unlikely actions A and B? The magic spell was cast in privacy; the victim had no chance to know the values of A, B, C. The victim didn't expect the spell to be cast; in one case the victim's unusual behavior was new information for the opponent. It wasn't a coincidence; in one case the victim went to a specified abandoned place and pretended to be dead.

I generally don't mind something slightly illogical here and there, if the plot requires it. But in this specific case, it ruined the essence of the story. After this, "intelligent opponents cleverly gaining information by exploiting each other's small mistakes" became merely an applause light without substance.

Comment author: turchin 02 October 2016 12:04:33AM 3 points [-]

Trent's article even mentions possible species of dinosaurs that might have been capable of an intelligence explosion. http://www.strangehorizons.com/2009/20090713/trent-a.shtml

Does that mean we could find really interesting (and dangerous) things during excavations in Antarctica?

Comment author: Gram_Stone 01 October 2016 01:31:11AM 3 points [-]

I know it was slightly tangential, but the organ matchmaking software was really interesting to me. I doubt this is how the idea was conceived, but I think when you look at it in a particular way, it seems like a really elegant solution to an important coordination problem.

(Content note: organ trade.)

Currently, a big stumbling block with organ trade is that suppliers can only supply organs in an altruistic context because of moral intuitions about the respective sanctities of life and money; buying and selling organs is impure. This is really bad because it limits most donations to those from family members and those from people who donate their organs upon death. Family members aren't always compatible, and organs from cadavers don't last as long and are more likely to contain cancers. If you're incompatible with a family member or you want to splurge on the extra expected lifespan afforded by a non-cadaveric organ, you can't sell your organ and use the money to buy a compatible/non-cadaveric one from someone else. There seem to be a great many trades that don't take place because of solvable spatial and temporal constraints that would be avoided by the use of a medium of exchange like money. So, the legal organ trade is relatively inefficient in most countries.

The really cynical version of the problem that you're trying to solve, before you ever write the extremely important matchmaking algorithms that are easy to overlook from this point of view, is "How do I efficiently allocate organs without buying or selling them?" The current system of organ donation is practically a barter system: there must be a coincidence of wants between donor and recipient. So, one interpretation of what matchmaking software does is make pledges to donate organs into a medium of exchange when you can't use the normal medium of exchange for political reasons. When you make pledges to donate into a credible signal that you will in fact donate given the satisfaction of a certain set of easily verifiable conditions, you can use that signal in place of money to make more complex trades that you couldn't otherwise make with 'pure barter'.
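
To make the matchmaking step concrete, here is a hypothetical minimal sketch (the pair data, blood-type rules, and field names are all my own invented illustration, not the actual software): each incompatible donor/recipient pair gets matched into a two-way swap whenever each pair's donor can give to the other pair's recipient.

```python
# Each entry is an incompatible (donor, recipient) pair: the donor's blood
# type, and the set of donor types the recipient can accept.
pairs = [
    {"id": "A", "donor_type": "B", "recipient_accepts": {"O", "A"}},
    {"id": "B", "donor_type": "A", "recipient_accepts": {"B"}},
    {"id": "C", "donor_type": "O", "recipient_accepts": {"AB"}},
]

def compatible(giver, receiver):
    # True if giver's donor can donate to receiver's recipient.
    return giver["donor_type"] in receiver["recipient_accepts"]

# Find all two-way swaps (2-cycles): each pair's donor helps the other's
# recipient, so both pledges can be executed simultaneously.
swaps = []
for i, p in enumerate(pairs):
    for q in pairs[i + 1:]:
        if compatible(p, q) and compatible(q, p):
            swaps.append((p["id"], q["id"]))

print(swaps)  # [('A', 'B')]
```

Real systems search for longer cycles and chains and optimize over the whole pool, but even this two-pair version shows how credible pledges can substitute for a price mechanism.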

Kind of useless armchair scholarship I guess, but I thought it was elegant.

(After cursory research I lean pro-legal organ trade (something more market-like than what exists today); willing to expand on this if anyone's interested in collecting a new contrarian opinion.)

Comment author: gjm 01 October 2016 01:02:35AM -1 points [-]

But why should the probability for lower-complexity hypotheses be any lower?

It shouldn't, it should be higher.

If you just meant "... be any higher?" then the answer is that if the probabilities of the higher-complexity hypotheses tend to zero, then for any particular low-complexity hypothesis H all but finitely many of the higher-complexity hypotheses have lower probability. (That's just part of what "tending to zero" means.)
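The finiteness claim can be checked concretely; a minimal sketch, assuming a hypothetical geometric prior over complexity classes:

```python
# Sketch: any prior over hypotheses indexed by complexity k that sums to 1
# must assign probabilities tending to zero, so for a fixed low-complexity
# hypothesis only finitely many hypotheses can beat its probability.

def prior(k):
    return 2.0 ** -(k + 1)  # normalized: sum over k >= 0 is 1

p_low = prior(3)  # probability of a particular complexity-3 hypothesis
beats = [k for k in range(1000) if prior(k) > p_low]
print(beats)  # [0, 1, 2] -- only finitely many hypotheses are more probable
```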

Comment author: ChristianKl 30 September 2016 10:42:42PM 3 points [-]

For all those reasons Nassim Taleb wrote about, it's a bad idea to treat risk like it can be that precisely measured.

Comment author: Elo 30 September 2016 12:48:06AM -2 points [-]

I am no expert, but I wonder if you could run a Monte Carlo simulation on your expected responses. Do the questions you ask give you enough information to yield results?

I'm just not sure if your questions are homing in correctly. Chances are there are people who know better than me.
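To make the suggestion concrete, here's a minimal sketch of that kind of Monte Carlo check (the rates and sample sizes are hypothetical): simulate responses to a yes/no question and see how much the observed rate wobbles at your expected sample size.

```python
# Sketch: if the sampling noise at your expected sample size swamps the
# effect you hope to measure, the question won't give you enough information.
import random

def simulated_yes_rates(true_rate, n, trials=2000):
    """Observed yes-rates across many simulated surveys of n respondents."""
    rates = []
    for _ in range(trials):
        yes = sum(random.random() < true_rate for _ in range(n))
        rates.append(yes / n)
    return rates

random.seed(0)
rates = simulated_yes_rates(true_rate=0.6, n=50)
mean = sum(rates) / len(rates)
spread = max(rates) - min(rates)
print(round(mean, 2), round(spread, 2))
```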

In response to Seven Apocalypses
Comment author: wallowinmaya 29 September 2016 04:30:29PM *  3 points [-]

I don't understand why you exclude risks of astronomical suffering ("hell apocalypses").

Below you claim that those risks are "Pascalian" but this seems wrong.

Comment author: Vaniver 29 September 2016 01:36:18PM 3 points [-]

My impression is that activity begets more activity--if there were 0 posts today, having your self-post be the post for the day is more bothersome than if there were 10 posts today. But we can look at this in a month and see how it turned out.

Comment author: DanArmak 28 September 2016 06:34:06PM -1 points [-]

Thanks for linking this! It was well written and interesting, and I'm glad to have discovered a new blog to read.

Suggestion for a summary of the link:

Some arguments are really signals targeting some in-group. Outsiders frequently miss the intended meaning. These arguments are simply nonsensical if taken literally ("Bashar Assad was a Mossad agent sent to kill Syrian children"). This may be because clearly stating the intended meaning is politically incorrect outside the ingroup, or it may just be a rhetorical device.

It's important to recognize such arguments, even if we are unsure what they are really intended to mean. Consider such alternative explanations when people say what seems to be nonsense or clearly wrong.

Comment author: helldalgo 28 September 2016 06:01:45PM 3 points [-]

A surprising movie that met many of these guidelines: Oculus. It's a horror movie, though, not a happy movie. The characters are smart and empathetic and it has Katie Sackhoff in it.

In response to Linkposts now live!
Comment author: ike 28 September 2016 03:54:41PM 3 points [-]

In feedly, I need to click once to get to the post and a second time to get to the link. Can you include a link within the body of the RSS so I can click to it directly?

Comment author: entirelyuseless 27 September 2016 11:08:17PM 2 points [-]

"A few short lines of code..."

AIXI is not computable.

If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.

Comment author: ChristianKl 27 September 2016 04:11:09PM 2 points [-]

I frequently hear people saying that self-help books are too long, but I don't think that's really true. Changing deep patterns of how you deal with situations is seldom accomplished by reading a short summary of a position.

Comment author: Lightwave 27 September 2016 09:03:08AM 3 points [-]

He's writing an AI book together with Eliezer, so I assume he's on board with it.

Comment author: PECOS-9 27 September 2016 04:50:58AM *  3 points [-]

Anybody have recommendations of a site with good summaries of the best/most actionable parts from self-help books? I've found Derek Sivers' book summaries useful recently and am looking for similar resources. I find that most self-help books are 10 times as long as they really need to be, so these summaries are really nice, and let me know whether it may be worth it to read the whole book.

Comment author: smk 26 September 2016 11:50:14PM *  3 points [-]

Has Sam Harris stated his opinion on the orthogonality thesis anywhere?

Comment author: ChristianKl 26 September 2016 10:21:24PM 2 points [-]

I think Elo and Nancy have moderator rights. Various older members who don't frequent the website, like EY, also have moderator rights.

Comment author: ChristianKl 26 September 2016 10:16:38PM *  3 points [-]

I think that many physiotherapists could do a better job if they would have body scanners.

The BMI is a horrible metric and having cheap body scanners would move us past the BMI and provide us with better targets for weight management.

Given that, wouldn't having lots and lots of these scanners massively increase medical costs by creating many false positives?

In many cases I wouldn't need to go to the doctor if a good body scanner can tell me what's up with me. If the scanner can tell me whether my teeth are alright, I don't have to go to the dentist.

If I can get a body scan instead of a mammogram, from a person who isn't a breast-surgery salesman as in the status quo, a false positive is also less likely to push me into risky treatment.

Comment author: ChristianKl 26 September 2016 10:13:57PM 3 points [-]

Given that a previous US debate resulted in a LW person writing an annotated version that pointed out every wrong claim made during the debate, why do you think that LW shies away from discussing US debates?

Secondly, what do you think "direct coverage" would produce? There's no advantage for rational thinking in covering an event like this live. At least I can't imagine this debate going in a way where my actions significantly change based on what happens in it, and it wouldn't be bad if I gained the information a week later.

Direct coverage is an illness of mainstream media. The most important events in the world aren't known when they happen. We have Petrov Day. How many newspapers covered the event the next day? Or even in the next month?

Comment author: James_Miller 26 September 2016 05:30:42PM 3 points [-]

My understanding of the medical value of body scanners comes from watching the TV show House. Given that, wouldn't having lots and lots of these scanners massively increase medical costs by creating many false positives?

Comment author: DataPacRat 26 September 2016 01:45:17PM 3 points [-]

Music to be resurrected to?

Assume that you are going to die, and some years later, be brought back to life. You have the opportunity to request, ahead of time, some of the details of the environment you will wake up in. What criteria would you use to select those details; and which particular details would meet those criteria?

For example, you might wish a piece of music to be played that is highly unlikely to be played in your hearing in any other circumstances, and is extremely recognizable, allowing you the opportunity to start psychologically dealing with your new circumstances before you even open your eyes. Or you may just want a favourite playlist going, to help reassure you. Or you may want to try to increase the odds that a particular piece survives until then. Or you may wish to lay the foundation for a practical joke, or a really irresistible one-liner.

Make your choice!

Comment author: Sable 26 September 2016 10:08:43AM 3 points [-]

I was at the vet a while back; one of my dogs wasn't well (she's better now). The vet took her back, and after waiting for a few minutes, the vet came back with her.

Apparently there were two possible diagnoses; let's call them x and y, as the specifics aren't important for this anecdote.

The vet specifies that, based on the tests she's run, she cannot tell which diagnosis is accurate.

So I ask the vet: which diagnosis has the higher base rate among dogs of my dog's age and breed?

The vet gives me a funny look.

I rephrase: about how many dogs of my dog's breed and age get diagnosis x versus diagnosis y, without running the tests you did?

The vet gives me another funny look, and eventually replies: that doesn't matter.

My question for LessWrong: Is there a better way to put this? Because I was left kind of speechless after that.

Comment author: MrMind 26 September 2016 09:42:32AM 3 points [-]

I've read it, still practice it and I recommend it.

The only piece of 'equipment' you'll need is a horizontal bar to do pullups (a branch or anything that supports your weight will work just as well).

Comment author: Tommi_Pajala 26 September 2016 09:10:26AM *  3 points [-]

Sounds like Convict Conditioning to me.

I haven't read it myself, but some friends have praised the book and the exercises included.

Comment author: Elo 25 September 2016 07:26:38PM -2 points [-]

not necessarily in lw jargon, but it appeals to some.

In response to Against Amazement
Comment author: Houshalter 23 September 2016 11:23:22AM 3 points [-]

Juergen Schmidhuber has a theory of artificial curiosity. His theory proposes that seeking confusion is actually a good thing. Agents that seek out situations where surprising things happen put their internal models to the test and learn the most. And that's all curiosity is.

Amazement is just a form of curiosity. People who are interested in AlphaGo have had their internal models of AI progress challenged, and are updating them.

Comment author: ChristianKl 23 September 2016 10:32:10AM 3 points [-]

Did Zuckerberg make the right choice by letting a Berkeley, Stanford, and University of California collaboration decide how to spend his money? I guess BioHub will be similar to the NIH in how it allocates funding.

Zuckerberg could also have funded Aubrey de Grey, or funded research on how to make medical research better, the way the Laura and John Arnold Foundation does.

TechCrunch:

The technologies Zuckerberg listed were “AI software to help with imaging the brain…to make progress on neurological diseases, machine learning to analyze large databases of cancer genomes,

Last year we made progress in understanding that the brain contains lymphatic tissue because a surgeon found it. All the standard imaging didn't move us forward. Using machine learning to analyze large databases of cancer genomes is also an already well-funded research area.

Funding AI technology to create <$1000 body scans based on technology like Walabot would likely take us much further in understanding our bodies than the kinds of research that are already well funded, like brain imaging and genome analysis.

Comment author: Elo 21 September 2016 11:30:07PM -2 points [-]

Is it googleable? If you google the name, will you show up easily? That's what having a name is all about, right?

Comment author: Lumifer 21 September 2016 02:55:05PM 3 points [-]

Observational data doesn't allow one to distinguish correlation and causation.

No? If I observe a hammer striking a nail and the nail sinking into the wooden plank, is anyone going to argue that it's mere correlation and not causation?

Observational data doesn't always allow one to distinguish correlation and causation.

I am also a bit confused since you're talking about learning values but your example is not about values but about a causal relationship.

Comment author: gjm 21 September 2016 02:12:24PM *  -1 points [-]

I think this is the second time within a week or two that someone who's never posted to LW before has come along with a video from this same person, asking "what do you think about this?" and the first time the person in question turned out to be here not to inquire but to proselytize.

[EDITED because what I initially wrote in the first paragraph wasn't quite what I intended.]

And I think what I've watched of this video (roughly the first 1/3, at double speed) is incredibly unimpressive: this guy took mind-altering drugs and had an experience that made a big impression on him, as people who take mind-altering drugs often do, and now he wants to tell us what an incredible enlightenment he's had. (And he keeps telling us that it's something we won't be able to understand ... and then goes on to try to explain it.)

Comment author: cousin_it 21 September 2016 10:38:05AM *  3 points [-]

Your first option fits somewhere between 4 and 5. Your second option fits at the end of the scale and I'm not sure why it wasn't included in the OP.

Comment author: hg00 21 September 2016 04:23:09AM 3 points [-]

Is anyone from LW part of a high IQ society that's more exclusive than Mensa? Can you tell us what it's like?

Comment author: James_Miller 20 October 2016 04:00:23PM 2 points [-]

Megyn Kelly walked by me once. If she had handed me a knife and asked me to remove my own heart and give it to her, part of my brain would have felt obligated to comply.

Comment author: siIver 20 October 2016 01:41:10AM *  2 points [-]

This may be a naive and over-simplified stance, so educate me if I'm being ignorant--

but isn't promoting anything that speeds up AI research the absolute worst thing we can do? If the fate of humanity rests on the outcome of the race between solving the friendly AI problem and reaching intelligent AI, shouldn't we only support research that goes exclusively into the former, and perhaps even try to slow down the latter? The link you shared seems to fall into the latter category, aiming for general promotion of the idea and accelerating research.

Feel free to just provide a link if the argument has been discussed before.

Comment author: Manfred 20 October 2016 01:04:15AM *  2 points [-]

Depends on information. If people retain memories, so that each person-moment follows from a previous one, then knowing only that I suddenly find myself in a room means I'm probably in room A. If people are memory-wiped at some interval, then this increases the probability I should assign to being in room B - probability of being in a specific room, given that your state of information is that you suddenly find yourself in a room, is proportional to the number of times "I have suddenly found myself in a room" is somebody's state of information.

The above is in fact true. So here's a fun puzzler for you: why is the following false?

"If you tell me the exact time, then my room must more likely be B, because there are 1000 times more people in room B at that time. Since this holds for all times you could tell me, it is always true that my room is probably B, so I'm probably in room B."

Hint: Assuming that room B residents "live" 1,000,000 times longer than room A residents, how does their probability of being in room B look throughout their life, assuming they retain their memories?

Comment author: Gram_Stone 20 October 2016 12:22:21AM 2 points [-]

Here's a stab: If I understand you correctly, then every observer's experience is indistinguishable from every other's, so my credence in the proposition "I'm in room A" is 0.999 and my decision policy is "Bet that I'm in room A." If 100 trillion + 100 billion people choose room B, then 100 trillion will lose and 100 billion will win. If 100 trillion + 100 billion people choose room A, then 100 billion will lose and 100 trillion will win.
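The 0.999 figure follows directly from the observer counts; a quick sketch of the arithmetic, assuming 100 trillion observers in room A and 100 billion in room B with indistinguishable experiences:

```python
# Credence of "I'm in room A" = (observers in A) / (all observers),
# since every observer's experience is indistinguishable.
in_a, in_b = 100e12, 100e9
credence_a = in_a / (in_a + in_b)
print(round(credence_a, 3))  # 0.999
```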
