Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: DanArmak 28 September 2016 06:34:06PM 0 points [-]

Thanks for linking this! It was well written and interesting, and I'm glad to have discovered a new blog to read.

Suggestion for a summary of the link:

Some arguments are really signals targeting some in-group. Outsiders frequently miss the intended meaning. These arguments are simply nonsensical if taken literally ("Bashar Assad was a Mossad agent sent to kill Syrian children"). This may be because clearly stating the intended meaning is politically incorrect outside the ingroup, or it may just be a rhetorical device.

It's important to recognize such arguments, even if we are unsure what they are really intended to mean. Consider such alternative explanations when people say things that seem nonsensical or clearly wrong.

In response to Linkposts now live!
Comment author: WalterL 28 September 2016 06:25:31PM 2 points [-]

Thanks for this. This was a smart change, and I doubt you were paid for it. I appreciate it.

Comment author: DanArmak 28 September 2016 06:15:56PM 1 point [-]

Maybe we should have subreddits on LW.

Another reason not to do this is that there aren't nearly enough daily posts on LW to further subdivide them.

Comment author: helldalgo 28 September 2016 06:01:45PM 1 point [-]

A surprising movie that met many of these guidelines: Oculus. It's a horror movie, though, not a happy movie. The characters are smart and empathetic and it has Katie Sackhoff in it.

Comment author: Houshalter 28 September 2016 05:58:52PM *  0 points [-]

Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious.

The biggest problem with SSC is that there is no voting. Lesswrong allows the best comments to rise to the top, in principle at least. Still, I think it's a good idea: interesting discussion can be buried in nested comments, or be better than the main post.

Maybe we should have subreddits on LW. I'm not sure about this one.

I really don't like this idea. I think the best model for lesswrong is something like Hacker News. Hacker News has no sections for different topics; it's just vaguely "things of interest to hackers", which includes almost everything. I'd like to see lesswrong become "things of interest to rationalists", which could be everything from SSC posts to genetics research to AI research. I think it would work out well.

Whereas reddit excludes a lot of things that don't neatly fit into the limited topic of any specific large subreddit, and most people don't subscribe to non-default subreddits even if some of the content there might interest them.

In response to comment by ike on Linkposts now live!
Comment author: Vaniver 28 September 2016 05:40:15PM 1 point [-]

Made a github issue.

Comment author: Vaniver 28 September 2016 05:36:45PM 1 point [-]

Agreed that it makes sense to change the default. I think it also shouldn't be too hard to have an 'unread' feed, which works off whether you've clicked through before or the post has attracted enough new comments since you last saw it.

Comment author: Vaniver 28 September 2016 05:35:18PM 1 point [-]

Add a social norm where commenters make short summaries, or quote a couple sentences of new info, without the fluff.

Posting links should be low-friction, and so it should be fine to post links without comment. That said, writing summaries in comments is very useful, and you should feel willing to do that even on links you didn't post.

Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Different subreddits seem best when used to separate norms / rules of discussion rather than topics. (Topics are often overlapping, and thus best dealt with using tags.) I think something like 'cold' and 'warm' subreddits, where the first has a more academic style and the second has a more friendly / improvisational style, might be sensible, but this remains to be seen.

In response to Seven Apocalypses
Comment author: Jude_B 28 September 2016 05:34:34PM 0 points [-]

Thanks for this summation.

Maybe we can divide item 7 into "our universe apocalypse" and "everything that (physically) exists apocalypse", since the two might not be equivalent.

Of course, there might be things that exist necessarily and thus cannot be "apocalypsed out", and it also would be strange if the principle that brought our universe to existence can only operate once.

So while it might be possible to have a Multiverse apocalypse, I think that there will always be something (physical) existing (but I don't know if this thought really can comfort us if we get wiped out...)

By the way, how do you (up)vote here?

Cheers

Comment author: Daniel_Burfoot 28 September 2016 05:34:02PM *  0 points [-]

You should think a lot about Singapore, and maybe also Australia or Taiwan. Your best bet depends a bit on which country has companies that want to hire your skill set.

I'm thinking seriously about moving to SG or Australia, and I'm a US citizen.

FWIW, I think you are reading the geopolitical situation wrong about Chinese military ambitions. If China does anything militaristic, it will get hit hard with sanctions by the international community, which will wreck its export-dependent economy. China's goal is to re-establish itself as the center of the world by dominating the global economy.

Comment author: Houshalter 28 September 2016 05:33:31PM 0 points [-]

I don't know if this is lesswrong material, but I found it interesting. Cities of Tomorrow: Refugee Camps Require Longer-Term Thinking

“the average stay today in a camp is 17 years. That’s a generation.” These places need to be recognized as what they are: “cities of tomorrow,” not the temporary spaces we like to imagine. “In the Middle East, we were building camps: storage facilities for people. But the refugees were building a city,” Kleinschmidt said in an interview. Short-term thinking on camp infrastructure leads to perpetually poor conditions, all based on myopic optimism regarding the intended lifespan of these places.

Many refugees may never be able to return home, and that reality needs to be recognized and incorporated into solutions. Treating their situation as temporary or reversible puts people into a kind of existential limbo; inhabitants of these interstitial places can neither return to their normal routines nor move forward with their lives.

From City of Thorns:

The UN had spent a lot of time developing a new product: Interlocking Stabilized Soil Blocks (ISSBs), bricks made of mud, that could be used to build cheap houses in refugee camps. It had planned to build 15,000 such houses in Ifo 2 but only managed to construct 116 before the Kenyan government visited in December 2010 and ordered the building stopped. The houses looked too much like houses, better even than houses that Kenyans lived in, said the Department for Refugee Affairs, not the temporary structures and tents that refugees were supposed to inhabit.

From reddit:

Peru had an uprising in the 1980s in which the brutality of the insurgents, the Sendero Luminoso, caused mass migration from the Andes down to the coast. Lima's population grew from perhaps a million to its current 8.5 million in a decade. This occurred through settlements in pure desert, where people lived in shacks made of cardboard and reed matting. These were called "young villages", Pueblos Jóvenes.

Today, these are radically different. Los Olivos is now a lower-middle-class suburb, boasting one of the largest shopping malls in South America, gated neighborhoods, mammoth casinos and plastic surgery clinics. All now have schools, clinics, paved roads, electricity and water; and there is not a cardboard house in sight. (New arrivals can now buy prefab wooden houses to set up on more managed spaces, and the state runs in power and water.)

Zaatari refugee camp in Jordan, opened 4 years ago, seems to be well on its way to becoming a permanent city. It has businesses, permanent structures, and its own economy.

Comment author: Pfft 28 September 2016 05:10:41PM 0 points [-]

The claim as stated is false. The standard notion of a UTM takes a representation of a program and interprets it. That's not primitive recursive, because the interpreter has an unbounded loop in it. The thing that is primitive recursive is a function that takes a program and a number of steps to run it for (this corresponds to the U and T in the normal form theorem), but that's not quite what's usually meant by a universal machine.

I think the fact that you just need one loop is interesting, but it doesn't go as far as you claim; if an angel gives you a program, you still don't know how many steps to run it for, so you still need that one unbounded loop.
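To make the distinction concrete, here is a toy sketch (my own illustration, not the formal Kleene construction): running a machine for a fixed number of steps needs only a bounded loop, while finding the halting step requires unbounded search.

```python
def step(state):
    """One step of a toy machine: state is (pc, counter).
    The machine increments the counter until it reaches 10, then halts."""
    pc, counter = state
    if counter >= 10:
        return None  # halted
    return (pc + 1, counter + 1)

def run_bounded(state, n):
    """Primitive-recursive flavour: a loop with a fixed bound n.
    Returns the state after n steps, or the halting state if it halts sooner."""
    for _ in range(n):
        nxt = step(state)
        if nxt is None:
            return state
        state = nxt
    return state

def run_unbounded(state):
    """The mu-operator: search for the halting step with an unbounded while loop."""
    steps = 0
    while step(state) is not None:
        state = step(state)
        steps += 1
    return state, steps
```

The bounded runner mirrors the role of the T predicate (checking a computation of a given length); the unbounded one is the single application of μ that the normal form theorem can't eliminate.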

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 3 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple sentences of new info, without the fluff. The title of the link serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since readers there already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: korin43 28 September 2016 04:51:17PM 0 points [-]

Raising a family in Silicon Valley is notoriously expensive.

It's worth pointing out that Silicon Valley isn't typical though. Jobs there can be worth it if the companies pay enough (see: Netflix, Google, etc.), but there are plenty of reasonable-paying tech jobs in much cheaper areas.

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 5 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least more than any other easy-to-implement change). I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers or even old timers coming back to the site, you want them to see the best links first.

Comment author: DanArmak 28 September 2016 04:22:51PM 0 points [-]

The division into a scanner, and a person who interprets its results, is arbitrary. Both are subcomponents of a single apparatus.

If the scanner produces a hard to interpret picture, and an expert human interprets it (or publishes instructions for doing so), then maybe the scanner itself would be judged legal - although I expect judges would apply a standard similar to "does it have significant noninfringing uses?"

If the scanner attaches to each image a probability of breast cancer, encrypted with a secret key, and the expert human is merely decrypting the result, then the scanner would probably be prohibited too.

These are two points on a smooth gradient where the scanner outsources more or less work to the human. Where along it does the scanner become illegal? Probably at the point someone decides to stop it to make a point.

In response to Linkposts now live!
Comment author: Gram_Stone 28 September 2016 04:13:17PM 3 points [-]

Thank you James Lamine, Vaniver, and Trike Apps.

I also wanted to quote something Vaniver has said, but that was unfortunately downvoted below the visibility threshold at the time:

I've pushed for doing things the right way, even if it takes longer, rather than quicker attempts that are less likely to work.

In response to Linkposts now live!
Comment author: ike 28 September 2016 03:54:41PM 3 points [-]

In feedly, I need to click once to get to the post and a second time to get to the link. Can you include a link within the body of the RSS so I can click to it directly?

Comment author: Lumifer 28 September 2016 02:22:46PM 0 points [-]

is that future-me might be even less trustable to work towards my values

If whoever revives you deliberately modifies you, you're powerless to stop it. And if you're worried that future-you will be different from past-you, well, that's how life works. A future-you in five years will be different from current-you who is different from the past-you of five years ago.

As to precommitment, I don't think you have any power to precommit, and I don't think it's a good idea either. Imagine if a seven-year-old past-you somehow found a way to precommit the current-you to eating a pound of candy a day, every day...

Comment author: TheAncientGeek 28 September 2016 01:34:21PM 0 points [-]

That is, did the designers copy human algorithms for converting sensory inputs into thoughts? If so, then the right kind of experiences would seem to be guaranteed

You seem to be rather sanguine about the equivalence of thoughts and experiences.

(And are we talking about equivalent experiences or identical experiences? Does a tomato have to be coded as red?)

Or did they find new ways to compute similar coarse-grained input/output functions? Then, assuming the creatures have some reflexive awareness of internal processes, they're conscious of something, but we have no idea what that may be like.

It's uncontroversial that the same coarse input-output mappings can be realised by different algorithms, but if you are saying that consciousness supervenes on the algorithm, not the function, then the real possibility of zombies follows, in contradiction to the GAZP.

(Actually, the GAZP is rather terrible, because it means you won't even consider the possibility of a WBE not being fully conscious, rather than refuting it on its own ground.)

Comment author: MrMind 28 September 2016 01:18:30PM 0 points [-]

That's interesting, you think of yourself as an aspiring villain? What does that entail?

Comment author: TheAncientGeek 28 September 2016 01:09:34PM *  2 points [-]

My reply to Cerullo:

"If we exactly duplicate and then emulate a brain, then it has captured what science tells us matter for conscious[ness] since it still has the same information system which also has a global workspace and performs executive functions. "

It'll have what science tells us matters for the global workspace aspect of consciousness (AKA access consciousness, roughly). Science doesn't tell us what is needed for phenomenal consciousness (AKA qualia) , because it doesn't know. Consciousness has different facets. You are kind of assuming that where you have one facet, you must have the others...which would be convenient, but isn't something that is really known.

"The key step here is that we know from our own experience that a system that displays the functions of consciousness (the easy problem) also has inner qualia (the hard problem)."

Our own experience pretty much has a sample size of one, and therefore is not a good basis for a general law. The hard question here is something like: "would my qualia remain exactly the same if my identical information-processing were re-implemented in a different physical substrate such as silicon?". We don't have any direct experience that would answer it. Chalmers's Absent Qualia paper is an argument to that effect, but I wouldn't call it knowledge. Like most philosophical arguments, it's an appeal to intuition, and the weakness of intuition is that it is tied to normal circumstances. I wouldn't expect my qualia to change or go missing while my brain was functioning within normal parameters... but that is the kind of law that sets a norm within normal circumstances, not the kind that is universal and exceptionless. Brain emulation isn't normal; it is unprecedented and artificial.

Comment author: WalterL 28 September 2016 01:07:16PM 0 points [-]

On the one hand, there is no magical field that tells a code file whether the modifications coming into it are from me (a human programmer) or from the AI whose values that code file encodes. So, of course, if an AI can modify a text file, it can modify its source.

On the other hand, most likely the top goal on that value system is a fancy version of "I shall double never modify my value system", so it shouldn't do it.

Comment author: TheAncientGeek 28 September 2016 12:42:49PM 0 points [-]

Everyone should care about pain-pleasure spectrum inversion!

Comment author: ChristianKl 28 September 2016 12:27:11PM 0 points [-]

A straightforward question would be: "What's the probability for diagnosis A and what's the probability for diagnosis B?".

Unfortunately you are likely out of luck because your vet doesn't know basic statistics to give you a decent answer.

Comment author: ChristianKl 28 September 2016 12:22:06PM 0 points [-]

I don't think it's true that nothing of consequence changed for the Iraqis through the election of Bush.

Comment author: TheAncientGeek 28 September 2016 12:18:25PM 0 points [-]

I've been meditating lately on a possibility of an advanced artificial intelligence modifying its value function, even writing some excrepts about this topic. Is it theoretically possible?

Is it possible for a natural agent? If so, why should it be impossible for an artificial agent?

Are you thinking that it would be impossible to code in software, for agents of any intelligence? Or are you saying sufficiently intelligent agents would be able and motivated to resist any accidental or deliberate changes?

With regard to the latter question, note that value stability under self-improvement is far from a given; the Löbian obstacle applies to all intelligences... the carrot is always in front of the donkey!

https://intelligence.org/files/TilingAgentsDraft.pdf

Comment author: ChristianKl 28 September 2016 12:12:22PM 0 points [-]

Legally I think the scanner might not be allowed to tell you whether you have breast cancer but I think it might be allowed to show you a pretty 3D picture.

Comment author: Lightwave 28 September 2016 10:05:12AM *  0 points [-]

He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think.

Also Sam Harris recently did a TED talk on AI, it should be freely available on the internet within a week or two.

Comment author: username2 28 September 2016 08:20:00AM 0 points [-]

Nicolas Jaar - Space Is Only Noise, which starts around the time I regain sound perception.

Comment author: ThoughtSpeed 28 September 2016 08:16:59AM 0 points [-]

I think my go-to here would be Low of Solipsism from Death Note. As an aspiring villain being resurrected, I can't think of anything more dastardly.

Comment author: ThoughtSpeed 28 September 2016 08:08:45AM 0 points [-]

Is that for real or are you kidding? Can you link to it?

Comment author: Clarity 28 September 2016 03:20:27AM 2 points [-]
Comment author: hg00 28 September 2016 01:43:41AM *  3 points [-]

My understanding is that a USA programmer would start at the $20,000-a-year level (?), and that someone with experience can probably get twice that, and a senior one can get $100,000/year.

A pessimistic starting salary for a competent US computer programmer is $60K and senior ones can clear $200K. $100K is a typical starting salary for a computer science student who just graduated from a top university (also the median nationwide salary).

In the US market, foreigners come work as computer programmers by getting H1B visas. The stereotypical H1B visa programmer is from India, speaks mostly intelligible English with a heavy accent, gets hired by a company that wants to save money by replacing their expensive American programmers, and exists under the thumb of their employer (if they lose their job, their visa is jeopardized). I think that the average H1B makes less money than the average American coder. It sounds to me like you'd be a significantly more attractive hire than a typical H1B--you're fluent in English, and you've made contributions to Scheme?

The cost of living in the US is much higher than the Philippines. Raising a family in Silicon Valley is notoriously expensive. Especially if you want your kids to go to a "good school" where they won't be bullied. I don't know what metro has the best job availability/cost of living/school quality tradeoff. It will probably be one of the cities that's referred to as a "startup hub", perhaps Seattle or Austin. If your wife is willing to homeschool, you don't have to worry about school quality.

You can dip your toes in Option 1 without taking a big risk. Just start applying to US software companies. They'll interview you via Skype at first, and if you seem good, the best companies will be willing to pay for your flight to the US to meet the team. To save time you probably want to line up several US interviews for a single visit so you can cut down on the number of flights. Here are some characteristics to look for in companies to apply to:

  • The company has a process in place for hiring foreigners.

  • The company is looking for developers with your skill set.

  • The company's developer team is "clued in". Contributing to Scheme is going to be a big positive signal to the right employer. You can read the company engineering blog, use BuiltWith, or look up the employees on LinkedIn to figure out if the company seems clued in. Almost all companies funded by Y Combinator are clued in. If your interviewer's response to seeing Scheme on your resume is "What is Scheme?", then you're interviewing at the wrong company and you'll be offered a higher salary elsewhere.

  • The company is profitable but not sexy. For example, selling software to small enterprises. (You probably don't want to work for a business that sells software to large enterprises, as these firms are generally not "clued in". See above.) Getting a job at a sexy consumer product company like Google or Facebook is difficult because those are the companies that everyone is applying to. You can interview at those companies for fun, as the last places you look at. And you don't want to apply for a startup that's not yet profitable because then you're risking your wife and kids on an unproven business. I'm not going to tell you how to find these companies--if you use the same methods everyone else uses to find companies to apply to, you'll be applying to the same places everyone else is.

Of course you'll be sending out lots of resumes because you don't have connections. Maybe experiment with writing an email cover letter very much like the post you wrote here, including the word "fucking". I've participated in hiring software developers before, and my experience is that attempts at formal cover letters inevitably come across as stuffy and inauthentic. Catch the interviewer's interest with an interesting email subject line+first few sentences and tell a good story.

Actually you might have some connections--consider reaching out to companies that are affiliated with the rationalist community, posting to the Scheme mailing list if that's considered an acceptable thing to do, etc.

Consider donating some $ to MIRI if my advice ends up proving useful.

Comment author: DataPacRat 28 September 2016 01:33:52AM 0 points [-]

living will

In the latest draft, I've rewritten at least half from scratch, focusing on the reasons why I want to be revived in the first place, and thus under which circumstances reviving me would help those reasons.

future-you

The whole point about being worried about hostile entities taking advantage of vulnerabilities hidden in closed-source software is that future-me might be even less trustable to work towards my values than the future-self of a dieter can be trusted not to grab an Oreo if any are left in their home. Note to self: include the word 'precommitment' in version 0.2.1.

Comment author: almkglor 28 September 2016 12:03:37AM 0 points [-]

Hmm, why is this still a draft.... hmm

Comment author: entirelyuseless 27 September 2016 11:08:17PM 1 point [-]

"A few short lines of code..."

AIXI is not computable.

If we had a computer that could execute any finite number of lines of code instantaneously, and an infinite amount of memory, we would not know how to make it behave intelligently.

Comment author: DanArmak 27 September 2016 09:19:24PM *  0 points [-]

Unfortunately, I don't think we have the regulatory regime to sell them to consumers. At least, not in the US and Europe, and not while making available any information on how the scanner might produce medically relevant data.

Another Musk-level capital investment might be needed to solve this hurdle.

Comment author: username2 27 September 2016 08:23:57PM *  1 point [-]

Something that also makes this point is AIXI. All the complexity of human-level AGI or beyond can be accomplished in a few short lines of code... if you had the luxury of running with infinite compute resources and allow some handwavery around defining utility functions. The real challenge isn't solving the problem in principle, but defining the problem in the first place and then reducing the solution to practice / conforming to the constraints of the real world.
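For reference, the "few short lines" are usually a transcription of Hutter's AIXI expectimax expression, schematically (here U is a universal machine, ℓ(q) the length of program q, and m the horizon):

```latex
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \;\cdots\; \max_{a_m} \sum_{o_m r_m}
\bigl[ r_k + \cdots + r_m \bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The outer alternation of max and sum is the "few lines"; the inner sum over all programs consistent with the history is where the infinite compute resources get spent.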

Comment author: Lumifer 27 September 2016 06:27:40PM *  0 points [-]

You are writing a, basically, living will for a highly improbable situation. Conditional on that situation happening, I think that since you have no idea into which conditions you will wake up, it's best to leave the decision to the future-you. Accordingly, the only thing I would ask for is the ability for your future-you to decide his fate (notably, including his right to suicide if he makes this choice).

Comment author: vallinder 27 September 2016 05:28:28PM 3 points [-]

I don't think it's fair to say that "nobody understood induction in any kind of rigorous way until about 1968." The linked paper argues that Solomonoff prediction does not justify Occam's razor, but rather that it gives us a specific inductive assumption. And such inductive assumptions had previously been rigorously studied by Carnap among others.

But even if we grant that assumption, I don't see why we should find it surprising that science made progress without having a rigorous understanding of induction. In general, successfully engaging in some activity doesn't require having a rigorous understanding of that activity, and making inductive inferences is something that comes very natural to human beings.

Moreover, it seems that algorithmic information theory has (at best) had extremely limited impact on actual scientific practice in the decades since the field was born. So even if it does constitute the first rigorous understanding of induction, the lesson seems to be that scientific progress does not require such an understanding.

Comment author: ChristianKl 27 September 2016 04:11:09PM 1 point [-]

I frequently hear people saying that self-help books are too long, but I don't think that's really true. Changing deep patterns in how you deal with situations is seldom accomplished by reading a short summary of a position.

Comment author: pcm 27 September 2016 03:04:58PM 1 point [-]

See ontological crisis for an idea of why it might be hard to preserve a value function.

Comment author: Brillyant 27 September 2016 02:53:40PM 0 points [-]

I'd argue the importance and consequence of U.S. policy shouldn't require elaboration.

"Following politics" can be a waste of time, as it can be as big a reality show circus as the Kardashians. But it seems to me there are productive ways to discuss the election in a rational way. And it seems to me this is a useful way to spend some time and resource.

Comment author: ChristianKl 27 September 2016 02:48:47PM 0 points [-]

Cheap scales don't measure body fat uniformly; they ignore arm composition. For the purposes of standardization, they give different answers than the expensive devices used in clinical studies.

Fitness studios also measure more than body fat: they measure the circumference of various body regions. I don't think a measurement that doesn't take into account the shape of a body produces a good answer.

why do you think a device two orders of magnitude more expensive would?

Most medical devices that set standards aren't very cheap. Very cheap devices give nobody an incentive to run the studies.

Comment author: Mac 27 September 2016 02:02:00PM 0 points [-]

"Everything in Its Right Place" by Radiohead would capture the moment well; it's soothing yet disorienting, and a tad ominous.

Comment author: Daniel_Burfoot 27 September 2016 01:51:27PM 0 points [-]

This paper makes me think again how amazing it is that science made any progress at all, before the middle part of the 20th century. Science is completely based on induction, and nobody understood induction in any kind of rigorous way until about 1968, but still people managed to make scientific progress. Occam, Bacon, Hume, Popper and others were basically just hand-waving; thankfully this hand-waving was nearly enough correct that it enabled science, but it was still hand-waving.

Comment author: MrMind 27 September 2016 12:53:40PM 0 points [-]

Because primitive recursion is quite easy, and so it is quite easy to get a universal Turing machine. Filling that machine with a useful program is another thing entirely, but that's why we have evolution and programmers...

Comment author: MrMind 27 September 2016 12:51:09PM *  0 points [-]

Yes, but the U() and the T() are primitive recursive. Unbounded search is necessary to get the encoding of the program, but not to execute it, that's why I said "if an angel gives you the encoding".

The normal form theorem indeed says that any partial recursive function is equivalent to two primitive recursive functions / relations, namely U and T, and one application of unbounded search.

Comment author: Drahflow 27 September 2016 10:33:08AM 1 point [-]

A counterexample to your claim: Ackermann(m,m) is a computable function, hence computable by a universal Turing machine. Yet it is designed to be not primitive recursive.

And indeed Kleene's normal form theorem requires one application of the μ-operator, which introduces unbounded search.

Comment author: username2 27 September 2016 09:05:46AM 0 points [-]

Why is this useful to remember?

Comment author: username2 27 September 2016 09:04:33AM 3 points [-]

tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event

Is that actually true? I've lived through many US presidential eras, including multiple ones defined by "change." Nothing of consequence really changed. Why should this be any different? (Rhetorical question, please don't reply as the answer would be off-topic.)

Consider the possibility that if you want to be effective in your life goals (the point of rationality, no?) then you need to do so from a framework outside the bounds of political thought. Advanced rationalists may use political action as a tool, but not for the search of truth as we care about here. Political commentary has little relevance to the work that we do.

Comment author: Lightwave 27 September 2016 09:03:08AM 0 points [-]

He's writing an AI book together with Eliezer, so I assume he's on board with it.

Comment author: username2 27 September 2016 08:57:42AM 0 points [-]

Depends entirely on the agent.

Comment author: MrMind 27 September 2016 07:09:09AM 0 points [-]

This is awesome, thank you!

Comment author: Gunnar_Zarncke 27 September 2016 06:58:50AM 1 point [-]

1 for all relevant possibilities, and an inner paint touching only part of that outer paint.

I don't get what the inner and outer paint stands for.

Comment author: Good_Burning_Plastic 27 September 2016 05:39:58AM 0 points [-]

The BMI is a horrible metric and having cheap body scanners would move us past the BMI

We have had cheap bathroom scales measuring body fat percentages (not terribly accurately, but still better than guessing from the BMI) for a while; if those didn't "move us past the BMI", why do you think a device two orders of magnitude more expensive would?

Comment author: UmamiSalami 27 September 2016 05:23:29AM 0 points [-]

See Omohundro's paper on convergent instrumental drives

Comment author: PECOS-9 27 September 2016 04:50:58AM *  0 points [-]

Anybody have recommendations of a site with good summaries of the best/most actionable parts from self-help books? I've found Derek Sivers' book summaries useful recently and am looking for similar resources. I find that most self-help books are 10 times as long as they really need to be, so these summaries are really nice, and let me know whether it may be worth it to read the whole book.

Comment author: Elo 27 September 2016 02:36:16AM 0 points [-]

yes

Comment author: WhySpace 27 September 2016 02:06:25AM *  6 points [-]

Happy Petrov day!

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983. Wherever you are, whatever you're doing, take a minute to not destroy the world.

  • 2007 - We started celebrating with the declaration above, followed by a brief description of the incident. In short, one man decided to ignore procedure and report an early warning system trigger as a false alarm rather than a nuclear attack.

  • 2011 - Discussion

  • 2012 - Eneasz put together an image

  • 2013 - Discussion

  • 2014 - jimrandomh shared a program guide describing how their rationalist group celebrates the occasion. "The purpose of the ritual is to make catastrophic and existential risk emotionally salient, by putting it into historical context and providing positive and negative examples of how it has been handled."

  • 2015 - Discussion

Comment author: smk 26 September 2016 11:50:14PM *  1 point [-]

Has Sam Harris stated his opinion on the orthogonality thesis anywhere?

Comment author: Ozyrus 26 September 2016 11:25:21PM *  0 points [-]

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its value function, even writing some excerpts about this topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is so, so interesting for me.

My thoughts led me to believe that it is certainly possible in theory to modify it, but I could not come to any conclusion about whether it would want to do so. I seriously lack a good definition of a value function and an understanding of how it is enforced on the agent. I really want to tackle this problem from a human-centric standpoint, but I don't really know if anthropomorphization will work here.

Comment author: ChristianKl 26 September 2016 10:21:24PM 0 points [-]

I think Elo and Nancy have moderator rights. Various older members who don't frequent the website anymore, like EY, also have moderator rights.

Comment author: ChristianKl 26 September 2016 10:16:38PM *  3 points [-]

I think that many physiotherapists could do a better job if they had body scanners.

The BMI is a horrible metric and having cheap body scanners would move us past the BMI and provide us with better targets for weight management.

Given that, wouldn't having lots and lots of these scanners massively increase medical costs by creating many false positives?

In many cases I wouldn't need to go to the doctor if a good body scanner could tell me what's wrong with me. If the scanner can tell me whether my teeth are alright, I don't have to go to the dentist.

If I can get a mammogram from a person who isn't a breast-surgery salesman, as in the status quo, a false positive is also less likely to push me into risky treatment.

Comment author: ChristianKl 26 September 2016 10:13:57PM 3 points [-]

Given that a previous US debate resulted in an LW person writing an annotated version that pointed out every wrong claim made during the debate, why do you think that LW shies away from discussing US debates?

Secondly, what do you think "direct coverage" would produce? There's no advantage for rational thinking in covering an event like this live. At least, I can't imagine this debate going in a way where my actions significantly change based on what happens in it, and where it would be bad to gain the information a week later.

Direct coverage is an illness of mainstream media. Most important events in the world aren't known when they happen. We have Petrov Day. How many newspapers covered the event the next day? Or even in the next month?

Comment author: morganism 26 September 2016 10:13:08PM 0 points [-]

"A disaster is looming for American men"

"On the basis of these factors, I expect that more than one-third of all men between 25 and 54 will be out of work at mid-century. Very likely more than half of men will experience a year of non-work at least one year out of every five."

https://www.washingtonpost.com/news/wonk/wp/2016/09/26/larry-summers-a-disaster-is-looming-for-american-men/

https://www.aei.org/

Comment author: morganism 26 September 2016 10:07:39PM *  0 points [-]

Trait Entitlement: A Cognitive-Personality Source of Vulnerability to Psychological Distress.

"First, exaggerated expectations, notions of the self as special, and inflated deservingness associated with trait entitlement present the individual with a continual vulnerability to unmet expectations. Second, entitled individuals are likely to interpret these unmet expectations in ways that foster disappointment, ego threat, and a sense of perceived injustice, all of which may lead to psychological distress indicators such as dissatisfaction across multiple life domains, anger, and generally volatile emotional responses"

http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/bul0000063

but of course..... Psychiatry as Bullshit

"By even the most charitable interpretation of the concept, the institution of modern psychiatry is replete with bullshit. "

http://www.ingentaconnect.com/contentone/springer/ehpp/2016/00000018/00000001/

story http://www.rawstory.com/2016/09/proven-wrong-about-many-of-its-assertions-is-psychiatry-bullsht/

Comment author: Alejandro1 26 September 2016 09:37:33PM 4 points [-]

Lately it seems that at least 50% of the Slate Star Codex open threads are filled by Trump/Clinton discussions, so I'm willing to bet that the debate will be covered there as well.

Comment author: Brillyant 26 September 2016 07:34:39PM -2 points [-]

'Tis a shame that an event like tonight's debate won't receive, and ostensibly never would have received, any direct coverage/discussion on LW, or any other rationality sites of which I am aware.

I know (I know, I know...) politics is the mind killer, but tonight—and the U.S. POTUS election writ large—is shaping up to be a very consequential world event, and LW is busy discussing base rates at the vet and LPTs for getting fit given limited square footage.

Comment author: Gleb_Tsipursky 26 September 2016 07:11:36PM -1 points [-]

I like those other examples for labeling others, though - might be a nice general strategy to employ.

Comment author: Brillyant 26 September 2016 06:24:30PM *  1 point [-]

The vet honestly doesn't know the answer to your question.

I'd suggest this is likely. Assuming both ailments are relatively common and not obviously known to be rare, I'd bet the vet just doesn't know the data necessary to discuss base rates in a meaningful way that would help determine X or Y.

Side note: My experience is that sometimes the tests needed to narrow down illnesses in animals are prohibitively expensive.

Comment author: gwern 26 September 2016 05:49:54PM 0 points [-]

Harney JW, Leary JD, Barofsky IB. "Behavioral activity of catnip and its constituents: nepetalic acid and nepetalactone", Fed Proc 1974; 33: 481 (/r/scholar)

Comment author: gwern 26 September 2016 05:49:29PM 0 points [-]

Behrman et al 1977, "Controlling for and measuring the effects of genetic and family environment in equations for schooling and labour market success", In Kinometrics, ed. P. Taubman. North Holland: Amsterdam (/r/scholar)

Comment author: James_Miller 26 September 2016 05:30:42PM 3 points [-]

My understanding of the medical value of body scanners comes from watching the TV show House. Given that, wouldn't having lots and lots of these scanners massively increase medical costs by creating many false positives?

Comment author: g_pepper 26 September 2016 05:26:10PM *  0 points [-]

Mightn't the vet have already factored the base rate in? Suppose x is the more common disease, but y is more strongly indicated by the diagnostics. In such a case it seems like the vet could be justified in saying that she cannot tell which diagnosis is accurate. For you to then infer that the dog most likely has x just because x is the more common disease would be putting undue weight on the Bayesian priors.
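The prior-versus-likelihood tradeoff here is a one-line Bayes calculation. With made-up numbers: suppose disease x is ten times more common than y (the base rates), but the test result is twice as likely under y (the likelihoods). All four numbers are hypothetical, purely to show how the two factors combine:

```python
# Hypothetical base rates and likelihoods - illustrative only.
prior = {"x": 0.10, "y": 0.01}        # P(disease): x is 10x more common
likelihood = {"x": 0.3, "y": 0.6}     # P(test result | disease): favors y 2:1

# Bayes' rule: posterior proportional to prior * likelihood.
unnormalized = {d: prior[d] * likelihood[d] for d in prior}
total = sum(unnormalized.values())
posterior = {d: unnormalized[d] / total for d in unnormalized}
# posterior["x"] ≈ 0.83: the 10:1 prior outweighs the 2:1 likelihood
```

With these particular numbers the base rate dominates; a sufficiently lopsided likelihood ratio would flip the answer, which is exactly why neither factor alone settles the diagnosis.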

Comment author: Houshalter 26 September 2016 05:08:24PM 5 points [-]

"Base rate" is statistics jargon. I would ask something like "which disease is more common?" And then if they still don't understand, you can explain that it's probably the disease that is most common, without explaining Bayes' rule.

Comment author: Manfred 26 September 2016 04:58:00PM *  1 point [-]

Mahler's 2nd symphony, for reasons including the obvious.

Comment author: 9eB1 26 September 2016 03:06:19PM 5 points [-]

I have read Convict Conditioning. The programming in that book (that is, the way the overall workout is structured) is honestly pretty bad. I highly recommend doing the reddit /r/bodyweightfitness recommended routine.

  1. It's free.

  2. It has videos for every exercise.

  3. It is a clear and complete program that actually allows for progression (the convict conditioning progression standards are at best a waste of time) and keeps you working out in the proper intensity range for strength.

  4. If you are doing the recommended routine you can ask questions at /r/bodyweightfitness.

The main weakness of the recommended routine is the relative focus of upper body vs. lower body. Training your lower body effectively with only bodyweight exercises is difficult though. If you do want to use Convict Conditioning, /r/bodyweightfitness has some recommended changes which will make it more effective.

Comment author: MrMind 26 September 2016 02:32:40PM 0 points [-]

Who are the current moderators?
