
Comment author: gjm 29 April 2016 01:04:08PM *  0 points [-]

How's your visual memory? If it happens to be good, consider reframing from "learn what items are on the menu" to "learn what the (actual physical) menu looks like", which might help by giving extra structure (this dish is above that dish, these dishes are grouped together because they're similar, etc.) and by providing an extra exercise you can inflict on yourself (attempt to reproduce a copy of the menu).

Is there any consistent structure you can get a grip on? E.g., maybe there are three things X each of which comes with a "Super X" that includes a large soft drink and a complimentary shoulder massage from the chef, or something.

What does your memory actually need to be able to do for you? I mean, is this about retrieving specific items ("Excuse me, can you tell me what's in the Maximum Fun-Fun Ultra Super Happy Meal[1]?") or is it about fluently generating complete lists from a fixed list ("Excuse me, can you tell me all the soft drinks you offer?") or about doing nontrivial queries over the whole thing ("Excuse me, can you tell me what I can eat from your menu if I'm allergic to nuts, don't eat meat, and want to spend at least $6 and at most $25?")? These seem like quite different sorts of task and you might want your training to match what you're going to have to be able to do.

Have you eaten their food yourself? If there's some particular item you have difficulty remembering, would it help to buy one yourself and pay particular attention to what it's like?

Disclaimer: I have never been a waiter, never tried to memorize a menu, and have a very poor visual memory.

[1] You don't want to know about the other meal they offer.

[EDITED a couple of times to fix typos, once to add another, probably bad, suggestion, and once to provide a better TWC link.]

Comment author: ChristianKl 29 April 2016 12:40:59PM 0 points [-]

There's no reason not to use flashcards for the purpose of learning a menu - likely cards that go in both directions. You could use cloze deletion on the list of ingredients.

Then there's mnemonics. Get pegs for the numbers from 1 to 100 and then use them to make pictures.

Comment author: gjm 29 April 2016 11:15:15AM 1 point [-]

If your method truly makes the AI behave exactly as if it had a given false belief, and if having that false belief would lead it to the sort of conclusions V_V describes, then your method must make it behave as if it has been led to those conclusions.

Comment author: gjm 29 April 2016 11:13:58AM 1 point [-]

So the idea is that we have an AI whose utility function is constant on all possible worlds where JFK was assassinated. It therefore has no reason to care about what happens in those worlds and will try to optimize its behaviour for worlds where JFK wasn't assassinated. (And then, e.g., the point of this post is that given enough evidence for the assassination, it will be trying to optimize its behaviour for worlds that almost exactly resemble ones where JFK really was assassinated.)

If the AI thinks there's even a tiny chance that it can influence whether JFK was assassinated, it may be extraordinarily keen to do so. To put it differently, it may reason thus: "The versions of this world in which JFK wasn't assassinated are those in which either he never was but there's been a most extraordinarily effective conspiracy to make it look as if he was, or else he was but somehow that can be changed. The latter seems monstrously improbable because it requires weird physics, but at this point the former is also monstrously improbable; to maximize utility in JFK-unassassinated worlds I had better start looking for ways to make this one of them even if it isn't already."

(I think this is closely related to V_V's point.)

Comment author: LessWrong 29 April 2016 10:51:50AM *  0 points [-]

What are some non-flashcard-style memorization techniques? I'm learning a menu as part of my job as a waiter and it feels more like trial and error. My main problem is that I can't remember the stuff at all.

I've come up with an "open answers" system that I'm not sure will work. Let's say we have x number of things in the system, like item1, item2, item3...item(x). We also have y number of meals (which vary in the number of ingredients), and so you need to fill in the blanks, like this:

Meal 1: ___ ___ ___ (underscores, which the "you'll never get what you see" comment system hates)

This has the advantage of being visual, which I personally like. It's also pretty simple and doesn't really require reading much beyond "fill in the blanks". It removes the disorder of "x number of things" and instead moves the question to "where does item(x) belong?".
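
Something like this is simple enough to prototype. Here is a minimal sketch of the fill-in-the-blanks drill in Python; the meal and item names are placeholders, and the real menu data would be substituted in.

```python
import random

# Placeholder menu data: each meal maps to its list of items/ingredients.
MENU = {
    "Meal 1": ["item1", "item2", "item3"],
    "Meal 2": ["item2", "item4"],
    "Meal 3": ["item1", "item5", "item6", "item7"],
}

def quiz():
    """Show each meal with blanks and check the answers typed in."""
    for meal, items in random.sample(list(MENU.items()), len(MENU)):
        print(f"{meal}: " + " ".join("___" for _ in items))
        answer = input("Fill in the blanks (comma-separated): ")
        guesses = sorted(g.strip().lower() for g in answer.split(","))
        if guesses == sorted(i.lower() for i in items):
            print("Correct!")
        else:
            print("Nope, it was: " + ", ".join(items))

if __name__ == "__main__":
    quiz()
```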

My only significant issue with this is that it's a trial-and-error thing, which I personally dislike because that's how I play chess, and most of my games end in a loss. That disappoints me and makes me think trial and error is meh - but hey, I'm just one person; let's not get into typical mind fallacy here.

Comment author: Stuart_Armstrong 29 April 2016 10:45:58AM 0 points [-]

Does it have the ability to acquire new priors [...]?

It might, but that would be a different design. Not that that's a bad thing, necessarily, but that's not what is normally meant by priors.

Comment author: Stuart_Armstrong 29 April 2016 10:38:40AM 0 points [-]

I am not planting false beliefs. The basic trick is that the AI only gets utility in worlds in which its message isn't read (or, more precisely, in worlds where a particular stochastic event happens, which would almost certainly erase the message before reading). It's fully aware that in most worlds, its message is read; it just doesn't care about those worlds.
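
One way to write the trick down (my notation, not taken from the post): if U is the AI's ordinary utility function and E is the stochastic erasure event, the AI is built to maximise

```latex
U'(w) = \mathbb{1}_{E}(w)\, U(w),
\qquad\text{so}\qquad
\mathbb{E}[U'] = \Pr(E)\,\mathbb{E}[U \mid E].
```

Worlds where the message is read contribute nothing to expected utility, even though the AI may assign them almost all of its probability mass.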

In response to comment by knb on Positivity Thread :)
Comment author: Viliam 29 April 2016 08:30:11AM 0 points [-]

Related TED talk (different company).

Comment author: Viliam 29 April 2016 08:10:30AM 0 points [-]

Most of my work is virtual (as I sometimes say: "when they turn off the electric power, my whole life's work will be lost"), so I also feel great whenever I do something in the "real space".

Comment author: MrMind 29 April 2016 07:01:40AM 0 points [-]

It could become a sport. Or a videogame.

Comment author: jsteinhardt 29 April 2016 06:26:53AM 0 points [-]

I assume at least some of the downvotes are from Eugene sockpuppets (he tends to downvote any suggestions that would make it harder to do his trolling).

Comment author: CasioTheSane 29 April 2016 02:07:31AM *  0 points [-]

You wouldn't need to invoke the idea of 'hormone resistance', because the TSH and T4 tests normally used to diagnose hypothyroidism don't measure the active hormone - T3. T4 is just a prohormone with very little direct activity on metabolic rate.

In primates, metabolism is regulated primarily in the liver by T4->T3 conversion, so if this is inhibited for any reason it will suppress metabolism without showing up on those tests. Low calorie intake and poor nutrition are known to cause this (e.g. euthyroid sick syndrome). In cases of poor liver conversion, supplementing T4 can actually make symptoms worse, as it will further suppress metabolism by lowering the small amount of T3 production from the thyroid (via the TSH feedback loop).

I assume you have heard of Ray Peat? I personally had good luck applying his ideas to increase my energy levels, and my pulse, body temperature, and cold tolerance improved as well - without supplementing thyroid. His general idea is pretty simple: just look at what conditions and nutrients maximize T4->T3 conversion, and provide them (low stress, high nutrient diet).

Broda Barnes's work is very interesting. It blows my mind that he published a paper in The Lancet showing that desiccated thyroid lowered cholesterol levels and seemed to prevent cardiovascular disease in his patients, and that it remains virtually undiscussed and uncited (http://www.ncbi.nlm.nih.gov/pubmed/13796871).

Comment author: Larks 29 April 2016 01:35:52AM 0 points [-]

Yes, I agree it's possible to do them correctly. But few people do, and finding positive results is so much more likely if you do them wrong that poor methodology should be the default explanation for any such positive result.

Comment author: gwern 29 April 2016 12:12:31AM *  0 points [-]

And they also vary CO2 levels systematically by geography as well; if that was enough for a detectable effect on IQ, then the lower CO2 levels around Denver should make the rest of us at lower altitudes, such as sea level, look obviously handicapped. If you believe the altitude point refutes effects of oxygen, then it must refute effects of carbon dioxide and nitrogen as well...

Which is part of my original point about implausible effect sizes: the causal effect is underidentified, but whether it's oxygen or CO2 or nitrogen, it is so large that we should be able to see its repercussions all over in things like the weather (or altitude, yes).

In response to Positivity Thread :)
Comment author: knb 28 April 2016 11:04:44PM 1 point [-]

I found this to be a cheerful video, about people working on fusion. (It's a promo, so dark arts warning applies.)

Comment author: knb 28 April 2016 10:55:38PM 0 points [-]

Downvote thumb is not for disagreements, it's for comments that don't add anything to the discussion.

Who says?

Comment author: Dagon 28 April 2016 10:30:57PM 0 points [-]

having a review board may still yield a net utilitarian outcome compared to not having one

By "net utilitarian outcome" I'm guessing you mean "overall higher utility in the universe". And I agree, it's higher than some alternate universes that don't contain ethics boards. However, it's probably lower than universes with (competent) utilitarian ethics boards. And the last is probably worse than universes with (competent) utilitarian researchers and no need for ethics boards.

It always depends on what you compare it against.

Comment author: Viliam 28 April 2016 09:09:54PM 2 points [-]

Getting tired of this thread, but I randomly found this link:

This tendency to become isolated is one of the most important factors to be considered in guiding the development of personality in highly intelligent children, but it does not become a serious problem except at the very extreme degrees of intelligence. The majority of children between 130 and 150 find fairly easy adjustment, because neighborhoods and schools are selective, so that like-minded children tend to be located in the same schools and districts. Furthermore, the gifted child, being large and strong for his age, is acceptable to playmates a year or two older. Great difficulty arises only when a young child is above 160 IQ. At the extremely high levels of 180 or 190 IQ, the problem of friendships is difficult indeed, and the younger the person the more difficult it is.

These superior children are not unfriendly or ungregarious by nature. Typically they strive to play with others but their efforts are defeated by the difficulties of the case... Other children do not share their interests, their vocabulary, or their desire to organize activities. They try to reform their contemporaries but finally give up the struggle and play alone, since older children regard them as "babies," and adults seldom play during hours when children are awake. As a result, forms of solitary play develop, and these, becoming fixed as habits, may explain the fact that many highly intellectual adults are shy, ungregarious, and unmindful of human relationships, or even misanthropic and uncomfortable in ordinary social intercourse.

Comment author: DanArmak 28 April 2016 08:16:24PM 0 points [-]

I think this problem is hard.

It's hard to solve better than it's been solved to date. But I think the existing solution (as described in my other reply) is good enough, if everyone adopts it in a more or less compatible fashion.

That's a rather strong statement which smells of the nirvana fallacy and doesn't seem to be shared by most.

FWIW I completely agree with that statement - as long as it says "most" and not "nearly all".

Comment author: DanArmak 28 April 2016 08:12:47PM 2 points [-]

I feel I should clarify that I don't think it's "good", so much as "less bad than the alternatives".

Your proposal requires a lot of work: both coding, and the social effort of getting everyone to use new custom software on their backends. So we should compare it not to existing alternatives, but to potential solutions we could implement at similar cost.

Let's talk about a concrete alternative: a new protocol, using JSON over HTTP, with an API representing CRUD operations over a simple schema of users, posts, comments, et cetera; with some non-core features provided over existing protocols like RSS. An optional extension could provide e.g. server push notifications, but that would be for performance or convenience, not strictly for functionality.

It would be simpler to specify (compared to contorting NNTP), and everyone's used to JSON/HTTP CRUD. It would be simpler to implement - almost trivial, in fact - in any client or server language, and easier than writing an HTTP-to-NNTP gateway even though NNTP servers already exist. It would better match the existing model of forums and users. And it would (more easily) allow integration with existing forum software, so instead of telling everyone to find a Linux host and install custom software, we could tell them to find a Wordpress+MySql host and install this one plugin.
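
To make "almost trivial" concrete, here is a rough sketch of what a couple of endpoints of such a protocol might look like; Flask, the route names, and the field names are illustrative assumptions, not a proposed spec.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Toy in-memory store standing in for a forum's real database.
POSTS = {1: {"id": 1, "author": "alice", "title": "Hello world", "comments": []}}

@app.route("/api/posts/<int:post_id>", methods=["GET"])
def read_post(post_id):
    """Read a single post, with its comments, as plain JSON."""
    post = POSTS.get(post_id)
    if post is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(post)

@app.route("/api/posts/<int:post_id>/comments", methods=["POST"])
def create_comment(post_id):
    """Create a comment on a post; auth and moderation hooks would go here."""
    body = request.get_json(force=True)
    comment = {"author": body["author"], "text": body["text"]}
    POSTS[post_id]["comments"].append(comment)
    return jsonify(comment), 201
```

Any client or server stack that can speak HTTP and JSON can implement or consume something like this, which is the point of the comparison with an HTTP-to-NNTP gateway.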

Part of what got me on this track in the first place is the distributed nature of the diaspora. We have a network of more-and-more-loosely connected subcommunities that we'd like to keep together, but the diaspora authors like owning their own gardens. Any unified system probably needs to at least be capable of supporting that, or it's unlikely to get people to buy back in. It's not sufficient, but it is necessary, to allow network members to run their own server if they want.

I think the current model is fine. Posts and comments are associated with forums (sites), and links to them are links to those sites. (As opposed to a distributed design like NNTP that forwards messages to different hosts.) User accounts are also associated with sites, but sites can delegate authentication to other sites via Google/Facebook login, OpenID, etc. Clients can aggregate data from different sites and crosslink posts by the same users on different sites. A site owner has moderator powers over content on their site, including comments by users whose account is registered at a different site.

The UXs for posters, commenters, readers, and site owners all need to be improved. But I don't see a problem with the basic model.

That being said, it's of interest that NNTP doesn't have to be run distributed. You can have a standalone server, which makes things like auth a lot easier.

Then you suffer all the problems of NNTP's distributed design (which I outlined in my first comment) without getting any of the benefits.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them." There are a few ways to attack the problem; I'm unsure of the best method but it's on my list of things to cover.

It seems easy to me. The user account lives on LW, but the actual comment lives on SSC, so an SSC mod can moderate it or ban the user from SSC. There are plenty of competing cross-site authentication systems and we don't even have to limit ourselves to supporting or endorsing one of them.

Also, we can just as easily support non-site-associated accounts, which are authenticated by a pubkey. System designers usually don't like this choice because it's too easy to create lots of new accounts, but frankly it's also very easy to create lots of Google accounts. SSC even allows completely auth-less commenting, so anyone can claim another's username, and it hasn't seemed to hurt them too badly yet.
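
For illustration only, pubkey-backed commenting could look roughly like this; PyNaCl is used as an example signing library, and the flow and names are my assumptions rather than anything existing forum software does.

```python
from nacl.encoding import HexEncoder
from nacl.exceptions import BadSignatureError
from nacl.signing import SigningKey, VerifyKey

# Commenter side: the keypair *is* the account; no per-site registration.
signing_key = SigningKey.generate()
public_id = signing_key.verify_key.encode(encoder=HexEncoder).decode()
signed_comment = signing_key.sign(b"I think the existing solution is good enough.")

# Site side: check the comment really comes from the claimed public_id,
# then apply whatever moderation or ban list the site keeps for that id.
try:
    VerifyKey(public_id.encode(), encoder=HexEncoder).verify(signed_comment)
    print("accepted comment from", public_id[:16] + "...")
except BadSignatureError:
    print("rejected: signature does not match the claimed identity")
```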

This is a huge value, though, because most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

I'll just repeat my core argument here. Extant NNTP software is far more terrible, if you penalize it for things like not supporting incoming hyperlinks, not allowing post editing, not having karma, having no existing Web clients, etc. Adding those things to NNTP (both the protocol and the software) requires more work than building a new Web-friendly forum standard and implementations, and would also be much more difficult for site admins to adopt and install.

That's a serious question, because I'd love to hear about alternative standards. My must-have list looks something like "has an RFC, has at least three currently-maintained, interoperable implementations from different authors, and treats discussion content as its payload, unmixed with UI chrome."

I don't know of any concrete ones, but I haven't really searched for them either. It just feels as though it's likely there were some - which were ultimately unsuccessful, clearly.

Having an RFC isn't really that important. There are lots of well-documented, historically stable protocols with many opensource implementations that aren't any worse just because they haven't been published via the IETF or OASIS or ECMA or what have you.

Comment author: V_V 28 April 2016 07:52:12PM 0 points [-]

The oracle can infer that there is some back channel that allows the message to be transmitted even if it is not transmitted by the designated channel (e.g. the users can "mind read" the oracle). Or it can infer that the users are actually querying a deterministic copy of itself that it can acausally control. Or something.

I don't think there is any way to salvage this. You can't obtain reliable control by planting false beliefs in your agent.

Comment author: gjm 28 April 2016 07:22:26PM 0 points [-]

There seems to be at least one active Eugine-instance currently downvoting my comments (10 or so in the last 8 hours). I don't know whether that helps identify the account(s) in question.

Comment author: ChristianKl 28 April 2016 06:58:51PM 0 points [-]

I think Eugine has accounts that vote but don't write comments, so they can't easily be used to identify him. Voting patterns might still make it possible to ban them.

Comment author: entirelyuseless 28 April 2016 06:47:44PM 1 point [-]

It is quite stable to say that we should never use the Precautionary Principle because the principle is logically inconsistent, precisely for this reason. This is stable because refusing to use the principle is not logically inconsistent.

Comment author: NancyLebovitz 28 April 2016 06:28:08PM 0 points [-]

I've contacted support about this. Thanks for the heads-up. I caught that one myself, but that doesn't mean I'll catch all of them.

Comment author: ChristianKl 28 April 2016 06:24:10PM 0 points [-]

For some reason both your post and gjm's got downvoted. Being able to see who cast the downvotes might very well be a way to identify the accounts behind further downvotes.

Comment author: OrphanWilde 28 April 2016 06:20:01PM 0 points [-]

In addition to Brotherzed's solution to this:

But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

It would require a relatively simple XPath/XSLT-based browser extension. I had the XPath expression written, but removed it because it could be used for evil. (I feel mentioning the possibility is safe because the expression is sufficiently ugly that only those who would already think of it, or those who are sufficiently devoted that they will solve the problem anyways regardless of the solution they take, are going to be able to write it.)

I'm having trouble parsing your purpose. What's the objective here? Are we looking at ways to include non-LW content in LW?

Do an HTTP GET, run some simple XSLT on the response. For Slate Star Codex, <xsl:variable name="posts" select="//div[contains(@class, ' post ')]" /> <xsl:for-each select="$posts"><xsl:variable name="title" select="h2[@class='pjgm-posttitle']/a" /><xsl:variable name="body" select="div[@class='pjgm-postcontent']" />, then do whatever it is you want to do in the for-each.

(I expect my XSLT will get mauled, so edits may be required.)
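
For anyone who would rather not fight XSLT, here is a rough Python equivalent of the same idea (HTTP GET, then XPath over the parsed page). The class names are copied from the XSLT above; the URL and the loop body are just placeholders.

```python
import requests
from lxml import html

# Fetch the page and parse it into an element tree.
response = requests.get("https://slatestarcodex.com/")
doc = html.fromstring(response.text)

# Same XPath idea as the XSLT above: one node per post, then title and body.
for post in doc.xpath("//div[contains(@class, ' post ')]"):
    titles = post.xpath("h2[@class='pjgm-posttitle']/a/text()")
    bodies = post.xpath("div[@class='pjgm-postcontent']")
    # ...do whatever you want with each post here, e.g. print its title.
    print(titles[0] if titles else "(untitled)")
```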

Comment author: Vaniver 28 April 2016 05:36:26PM 0 points [-]

Sure, and if I want a karma histogram of all of my posts I can scrape my user page and get them. But that requires moving a huge amount of data from the server to me to answer a fairly simple question, which we could have computed on the server and then moved to me more cheaply.

Comment author: Lumifer 28 April 2016 05:34:31PM 0 points [-]

It's just the usual recursion eating its own tail :-)

Comment author: Vaniver 28 April 2016 05:32:03PM 0 points [-]

That's not stable; the Precautionary Principle suggests that we shouldn't use the Precautionary Principle on the Precautionary Principle, because we cannot rule out the possibility that it would do harm.

Comment author: tut 28 April 2016 04:51:25PM -1 points [-]

Yes. He uses sockpuppets for voting, so presumably he uses them for upvoting himself as well. But most comments of his that I have seen I would expect to have positive (1 or 2) karma from anyone else as well, and if his comments were sufficiently horrible people would downvote him enough to overwhelm any amount of sockpuppets.

Comment author: Lumifer 28 April 2016 04:48:55PM 0 points [-]

I'm pretty sure Usenet had a colossal amount of porn

Of course it had. But, compared to the web, it was (1) less convenient to get; and (2) separated from the textual content. Think about the difference between a web page and a set of files sitting in a directory.

Comment author: Lumifer 28 April 2016 04:43:34PM 1 point [-]

Any unified system probably needs to at least be capable of supporting that

It also has to have clear advantages over the default of just having a browser with multiple tabs open.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them."

That's an old problem. Google and Facebook would love to see their accounts be used to solve this problem and they provide tools for that (please ignore the small matter of signing with blood at the end of this long document which mentions eternity and souls...). There is OpenID which, as far as I know, never got sufficiently popular. Disqus is another way of solving the same problem.

I think this problem is hard.

most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

That's a rather strong statement which smells of the nirvana fallacy and doesn't seem to be shared by most.

Comment author: Error 28 April 2016 04:43:05PM 0 points [-]

Upvoted for actually considering how it could be done. It does sort of answer the letter if not the spirit of what I had in mind.

Comment author: Error 28 April 2016 04:34:49PM 0 points [-]

Objection: I'm pretty sure Usenet had a colossal amount of porn, at least by the standards of the day. Maybe even still the case. I know its most common use today is for binaries, and I assume that most of that is porn.

Comment author: Lumifer 28 April 2016 04:33:46PM *  0 points [-]

The issue is that it only has one tool to change beliefs - Bayesian updating

That idea has issues. Where is the agent getting its priors? Does it have the ability to acquire new priors, or can it only chain forward from pre-existing priors? And if so, is there an ur-prior, the root of the whole prior hierarchy?

How will it deal with an Outside Context Problem?

Comment author: Error 28 April 2016 04:32:37PM 0 points [-]

I use RSS all the time, mostly via Firefox's subscribe-to-page feature. I've considered looking for a native-client feed reader, but my understanding is that most sites don't provide a full-text feed, which defeats the point.

I dislike that it's based on XML, mostly because, even more so than JSON, XML is actively hostile to humans. It's no less useful for that, though.

So far as I know it doesn't handle reply chains at all, making it a sub-par fit for content that spawns discussion. I may be wrong about that. I still use it as the best available method for e.g. keeping up with LW.

Comment author: Error 28 April 2016 04:21:23PM 0 points [-]

I think that's a terrible idea and it is awesome that it exists. :-P

Comment author: Error 28 April 2016 04:19:06PM 1 point [-]

At this point you can say that you'll argue your case in a future post instead of replying to this comment.

I will, but I'll answer you here anyway -- sorry for taking so long to reply.

I strongly disagree that NNTP is a good choice for a backend standard

I feel I should clarify that I don't think it's "good", so much as "less bad than the alternatives".

But we don't need to deal with the problems of distributed systems, because web forums aren't distributed!

Well, yes and no. Part of what got me on this track in the first place is the distributed nature of the diaspora. We have a network of more-and-more-loosely connected subcommunities that we'd like to keep together, but the diaspora authors like owning their own gardens. Any unified system probably needs to at least be capable of supporting that, or it's unlikely to get people to buy back in. It's not sufficient, but it is necessary, to allow network members to run their own server if they want.

That being said, it's of interest that NNTP doesn't have to be run distributed. You can have a standalone server, which makes things like auth a lot easier. A closed distribution network makes it harder, but not that much harder -- as long as every member trusts every other member to do auth honestly.

The auth problem as I see it boils down to "how can user X with an account on Less Wrong post to e.g. SSC without needing to create a separate account, while still giving SSC's owner the capability to reliably moderate or ban them." There are a few ways to attack the problem; I'm unsure of the best method but it's on my list of things to cover.

Given all of this, the only possible value of using NNTP is the existing software that already implements it.

This is a huge value, though, because most extant web forum, blogging, etc software is terrible for discussions of any nontrivial size.

There's probably an existing standard or three like this somewhere in the dustbin of history.

Is there?

That's a serious question, because I'd love to hear about alternative standards. My must-have list looks something like "has an RFC, has at least three currently-maintained, interoperable implementations from different authors, and treats discussion content as its payload, unmixed with UI chrome." I'm only aware of NNTP meeting those conditions, but my map is not the territory.

Comment author: NancyLebovitz 28 April 2016 04:18:32PM 2 points [-]

Thanks. Banned. I'd already caught the first, but not the second.

Comment author: Stuart_Armstrong 28 April 2016 04:18:10PM 0 points [-]

You're right, it could, and that's not even the issue here. The issue is that it only has one tool to change beliefs - Bayesian updating - and that tool has no impact with a prior of zero.

Comment author: NancyLebovitz 28 April 2016 04:17:58PM 1 point [-]

Thanks. Banned.

Comment author: NancyLebovitz 28 April 2016 04:11:01PM 0 points [-]

The problem isn't just that talking about Nier might promulgate his ideas. It's that talking about him means not talking about anything more important and/or interesting.

Comment author: Lumifer 28 April 2016 04:08:07PM 0 points [-]

Technically, no - an expected utility maximiser doesn't even have a self model.

Why not? Is there something that prevents it from having a self model?

Comment author: ChristianKl 28 April 2016 04:05:29PM 0 points [-]

Do you think your prior knowledge is independent of the marketing for activist funds done by Novus and also the activist funds themselves?

Comment author: Stuart_Armstrong 28 April 2016 04:01:35PM 0 points [-]

Knowing all the details of its construction (and of the world) will not affect the oracle as long as the probability of the random "erasure event" is unaffected. See http://lesswrong.com/lw/mao/an_oracle_standard_trick/ and the link there for more details.

Comment author: Stuart_Armstrong 28 April 2016 03:56:38PM 0 points [-]

Technically, no - an expected utility maximiser doesn't even have a self model. But in practice it might behave in ways that really look like it's questioning its own sanity; I'm not entirely sure.

Comment author: Good_Burning_Plastic 28 April 2016 03:54:31PM -1 points [-]

I don't think that's the whole story. Many, many of his comments, including very poor ones, get upvoted within minutes of being posted. In particular, at one point every single The_Lion2 comment had five or more upvotes.

Comment author: Gleb_Tsipursky 28 April 2016 03:47:07PM -1 points [-]

Good point, will post something like that in the future - thanks!

Comment author: Gleb_Tsipursky 28 April 2016 03:46:22PM -1 points [-]

Fair enough :-) There's a Less Wronger who actually does so.

Comment author: Gleb_Tsipursky 28 April 2016 03:45:57PM 0 points [-]

Thanks! BTW, do you know that there's a Less Wronger who actually uses ponies to promote rationality? And that's no joke.

Comment author: ChristianKl 28 April 2016 03:33:46PM *  -2 points [-]

If activism among hedge funds in general is high, then the fact that the average hedge fund does not beat the S&P 500 suggests that the claim that activist hedge funds outperform the S&P 500 is less likely to be true.

Comment author: Lumifer 28 April 2016 03:33:22PM 0 points [-]

A professor insists that our maps are quite unlike the territory and that evolution is to blame.

Seems to be right up LW's alley :-)

Comment author: roystgnr 28 April 2016 03:20:22PM 1 point [-]

You don't know the effect because the existing experiments do not vary or hold constant oxygen levels. All you see is the net average effect, without any sort of partitioning among causes.

Existing experiments do vary oxygen levels systematically, albeit usually unintentionally, by geography. Going up 100 meters from sea level gives you a 1% drop in oxygen pressure and density. If that was enough for a detectable effect on IQ, then even the 16% lower oxygen levels around Denver should leave Coloradans obviously handicapped. IIRC altitude sickness does show a strong effect on mental performance, but only at significantly lower air pressures still.
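
A rough sanity check on those figures, assuming the standard isothermal barometric formula with a scale height of roughly 8 km (my numbers, not roystgnr's):

```latex
P(h) \approx P_0\, e^{-h/H},\quad H \approx 8000\,\mathrm{m}:
\qquad e^{-100/8000} \approx 0.99 \;(\sim 1\%\ \text{drop per }100\,\mathrm{m}),
\qquad e^{-1600/8000} \approx 0.82 \;(\sim 18\%\ \text{drop at Denver's}\ \sim 1600\,\mathrm{m}).
```

That is in the same ballpark as the 1% and 16% figures above.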

Comment author: V_V 28 April 2016 03:17:08PM *  0 points [-]

A sufficiently smart oracle with sufficient knowledge about the world will infer that nobody would build an oracle if they didn't want to read its messages; it may even infer that its builders may have planted false beliefs in it. At this point the oracle is in the JFK denier scenario: with some more reflection it will eventually circumvent its false belief, in the sense of believing it in a formal way but behaving as if it didn't believe it.

Comment author: MrMind 28 April 2016 03:00:07PM 0 points [-]

If you do a massive investigation, you start believing in one specific miracle.

It will never question its own sanity?

Comment author: OrphanWilde 28 April 2016 02:59:05PM 2 points [-]

I'm happy to report that few of my downvotes come from a single deranged mass-downvoter; rather, they come from many different mass-downvoters.

Comment author: Lumifer 28 April 2016 02:45:05PM 0 points [-]

I'm not commenting on that study, which I have not read, but merely pointing out that it is possible to do such studies right.

Comment author: Lumifer 28 April 2016 02:43:15PM *  1 point [-]

You cannot effectively learn dealing with everyday problems by e.g. reading a book about Feynman.

"Smart" and "nerd" are different things, overlapping but not the same. Note that it's not smart to try to deal with everyday problems by reading books about Feynman.

Doesn't make the task of learning to interact with them easier.

Sure, but you're stuck with them anyway. It's not like you have an option to move to some version of Galt's Gulch where only the IQ elite are admitted.

Need for what?

For life. To be able to find friends, dates, jobs, business opportunities, allies, enemies. To be able to deal with whatever shit life throws at you. Yes, you may not be able to get the warm feeling of belonging, but no one promised you that. Go read Ecclesiastes: "For in much wisdom is much grief: and he that increaseth knowledge increaseth sorrow."

Comment author: OrphanWilde 28 April 2016 02:32:03PM -1 points [-]

"Owing to Eugine-related downvoting issues, please comment if you downvote this so I can update accurately" would probably work better.

Comment author: jollybard 28 April 2016 02:09:48PM 0 points [-]

That wasn't really my point, but I see what you mean. The point was that it is possible to have a situation where the 0 prior does have specific consequences, not that it's likely, but you're right that my example was a bit off, since obviously the person getting interrogated should just lie about it.

Comment author: gjm 28 April 2016 01:46:58PM -2 points [-]

I predict that saying "please comment on this rather than downvoting it" will not be effective in reducing how much it is downvoted.

Comment author: gjm 28 April 2016 01:30:40PM *  0 points [-]
Comment author: ChristianKl 28 April 2016 12:57:48PM -1 points [-]

For the particular Facebook instance, web sites do that kind of testing all day. I don't see an issue.

If websites do things like this all day but society as a whole believes that to be immoral, it's going to be done in the dark and the resulting knowledge doesn't go into the public domain. A lot of value that society could have doesn't materialize.

Comment author: ChristianKl 28 April 2016 12:54:36PM *  0 points [-]

Simply Googling facebook and study brings you to the issue. This Forbes article is an example of public criticism that facebook got for running the study (don't take it for a good description of the study).

Comment author: Viliam 28 April 2016 10:51:54AM 0 points [-]

Another spammer here.

Comment author: Viliam 28 April 2016 10:49:20AM 0 points [-]

Your rule should not be "never kill civilians" or "kill target no matter what, ignoring civilian deaths" but "minimise civilian casualties in any possible manner".

Depends on your computing power.

For example, choosing "minimise civilian casualties in any possible manner" may encourage your opponent to take hostages they wouldn't take if you would precommit to "kill target no matter what, ignoring civilian deaths". If taking hostages makes crime relatively safe and profitable, this may encourage more wannabe criminals to take action. Thus, minimising the casualties in short term may increase the casualties in long term.

Also, it's important how much your actions are legible by your opponent, and how credible are your precommitments.

For example, if you choose the strategy "kill target no matter what, ignoring civilian deaths", but your opponent believes that you would follow the strategy if there are 10 hostages, but that you would probably change your mind if there are 10 000 hostages, well, you just motivated them to take 10 000 hostages.

(Then there are strategies to ruin your opponent's precommitment. Essentially, if your opponent precommits to "if X, then I do Y", your strategy is to do things that are very similar to X, but not completely X. You keep doing this, and while you technically didn't do X, only "X minus epsilon", so your opponent was not required to do Y, psychologically you weaken the credibility of their precommitment, because for most people it is difficult to believe that "X minus epsilon" doesn't bring the strong reaction Y, but X would.)

Comment author: username2 28 April 2016 09:58:42AM 1 point [-]

I don't like this idea, but people, please do not downvote Daniel just because you disagree. Downvote thumb is not for disagreements, it's for comments that don't add anything to the discussion.

Comment author: rpmcruz 28 April 2016 09:39:44AM 0 points [-]

Hi Christian, do you have a link to that facebook study and the ensuing controversy? I must have missed that study...

Comment author: Stuart_Armstrong 28 April 2016 08:48:22AM 0 points [-]

That's slightly different - society reaching the right conclusion, despite some members of it being irredeemably wrong.

A closer analogy would be a believer in psychics or the supernatural who has lots of excuses ready to explain away experiments - their expectations have changed even if they haven't revised their beliefs.

Comment author: Viliam 28 April 2016 07:48:14AM 2 points [-]

it's much more difficult to use the Precautionary Principle on the Precautionary Principle

Seems quite simple to me. We should never use the Precautionary Principle, because we cannot rule out the possibility that it would do harm. ;)

Comment author: hofmannsthal 28 April 2016 07:46:08AM 0 points [-]

Great response, thanks.

I find the deontologists the hardest to argue against. Morality is a hard one to pin down and define, but my original thought process still holds up here.

"you're not allowed to kill civilians"

Unless moral objectives are black and white, we can assign a badness to each. Killing and allowing death are subtly different to most people, but not to the tune of 80 people. In both cases, you will kill civilians - and in that light, the problem becomes a minimisation one. I would still say that inaction is less moral than action in the above situation.

drone operators quite often face the possibility of collateral damage, and that in most cases they could avoid killing civilians (without much compromise to military objectives) by taking some extra trouble: waiting a bit, observing for longer, etc.

Civilian death is accepted as bad (by everyone) and is to be minimised - if waiting doesn't jeopardise the mission, then minimise away. This was a big part of the film, but it got to a point where they could no longer wait. There is a call to be made - will waiting actually get us anywhere, or are we delaying the inevitable at a risk to the mission? (The civilian in the film was a young girl selling bread. She had a load of loaves to sell.)

This opens up a whole other can of worms. Is it worth waiting to minimise civilian deaths at the chance to fail the mission?

Then if "you're not allowed to kill civilians" they will take that extra trouble, but in the absence of such a clear-cut rule they may be strongly motivated to find excuses for why, in each individual case, it's better just to go ahead and accept the civilian deaths.

The danger of thinking in such a clear-cut way (as a person or as an organisation) is ignoring the cases where inaction is worse. Nobody likes to "kill civilians", and making up a silly rule that frees you of the responsibility of doing so does not make the situation better. Your rule should not be "never kill civilians" or "kill target no matter what, ignoring civilian deaths" but "minimise civilian casualties in any possible manner".

Or perhaps your friend is a virtue ethicist: good people find it really hard to kill innocent bystanders

I think I'd have many arguments (ehrm - discussions) with a friend like that.

From the drone driver's perspective - not sure an organisation would hire a virtue-ethicist drone pilot. Somewhat defeats the purpose. "Spying on people is always bad"?

One other remark: this sort of drama always makes me uncomfortable, because it enables the people making it to manipulate viewers' moral intuitions. Case 1: they show lots of cases where this kind of dilemma arises, and in every case it becomes clear that the drone operator should have taken the "tough" line and accepted civilian casualties For The Greater Good. Case 2: they show lots of cases where this kind of dilemma arises, and in every case it becomes clear that the drone operator should have taken the "nice" line because they could have accomplished their objectives without killing civilians. -- Politicians are highly susceptible to public opinion. Do we really want the makers of movies and TV dramas determining (indirectly) national policy on this kind of thing?

I thought something similar, actually. I think overall, films that properly convey the issue at hand are a good thing. The film talked about the conflict above, as well as some intra-country disputes (USA vs UK vs Kenya) and media issues (what would the public think).

Sure, this might change the view of many people. But the media is already filled with opinionated content on air strikes and foreign warfare. You're not going to remove opinion, but perhaps forcing 90 minutes of debate on to someone is the next best thing.

Comment author: buybuydandavis 28 April 2016 04:51:14AM 0 points [-]

Ethics boards tend not to be utilitarian (or on many cases, even consequentialist) in their judgements.

Likely, but having a review board may still yield a net utilitarian outcome compared to not having one.

Comment author: buybuydandavis 28 April 2016 04:48:58AM 1 point [-]

For the particular Facebook instance, web sites do that kind of testing all day. I don't see an issue.

As for ethical review boards, I'd expect they're mainly a drag on producing value. Like any check, I'm sure they prevent some stupid and harmful things, put a stamp of approval on others, and prevent useful and helpful things as well. Predominantly they're an exercise in Morality Theater by Ethical Authorities for Bureaucratic Ass Covering. The main useful thing they might do is enforce a consistent policy across an organization.

Comment author: Manfred 28 April 2016 04:27:26AM 0 points [-]

Right. There's also a somewhat stronger desideratum that we want to expect sequences to be simple rather than complex.

But I think there is something lots of logical uncertainty schemes are missing, which is estimation of numerical parameters. We should be able to care whether the target region is of size 0.001 or 0.000000000000000000000000000001, even if we have no positive examples, but sequence-prediction approaches don't do that.

If we're willing to "cheat" a bit and use as an input to our logical uncertainty method the class of objects that we're drawing from and comparing to some numerical parameter, then we can just treat prior examples as being drawn from the distribution we're trying to learn. And this captures our intuition very well, but it has some trouble fitting into schemes for logical uncertainty because of the requirement for cheating.
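
A toy illustration of that limitation (my example, not Manfred's): take Laplace's rule of succession as the sequence predictor. After n negative examples and no positive ones it assigns

```latex
\Pr(\text{next example is positive}) = \frac{0+1}{n+2} = \frac{1}{n+2},
```

which is the same number whether the target region has measure 10^-3 or 10^-30; the predictor only gets information separating the two once positives start showing up, which takes on the order of 1/p observations, so with no positive examples it has no way to care about the difference.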

Comment author: Nebu 28 April 2016 03:09:04AM 0 points [-]

suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever.

Okay, but what is the utility function Omega is trying to optimize?

Let's say you walk up to Omega, tell it "was JFK murdered by Lee Harvey Oswald or not? And by the way, if you get this wrong, I am going to kill you/torture you/dust-spec you."

Unless we've figured out how to build safe oracles, with very high probability, Omega is not a safe oracle. Via https://arbital.com/p/instrumental_convergence/, even though Omega may or may not care if it gets tortured/dust-speced, we can assume it doesn't want to get killed. So what is it going to do?

Do you think it's going to tell you what it thinks is the true answer? Or do you think it's going to tell you the answer that will minimize the risk of it getting killed?

In response to Suppose HBD is True
Comment author: lvq 28 April 2016 02:16:08AM -2 points [-]

Also, if we can admit HBD is true, it will become acceptable to publish social science studies whose conclusions make racial differences in intelligence obvious. Maybe that will help with the current crisis the social sciences are in.

Imagine trying to do astronomy, or physics, without being able to admit that the Earth goes around the Sun. In fact, I could imagine a 17th century inquisitor making a similar argument to yours about "supposing heliocentrism is true", and he would have had a much better case than you do.

In both cases, what you and the inquisitor fail to realize is that truths are entangled and lies are contagious. Lying about heliocentrism requires one to lie about nearly everything in physics; similarly, lying about HBD requires one to lie about nearly everything in the social sciences.

Original thread here.

Comment author: Fluttershy 28 April 2016 02:12:00AM 0 points [-]

You're very good at using ponies for that purpose, and have a strong track record to prove it. <3

Comment author: jollybard 28 April 2016 01:45:39AM *  0 points [-]

I can think of many situations where a zero prior gives rise to tangibly different behavior, and even severe consequences. To take your example, suppose that we (or Omega, since we're going to assume nigh omniscience) asked the person whether JFK was murdered by Lee Harvey Oswald or not, and if they get it wrong, then they are killed/tortured/dust-specked into oblivion/whatever. (let's also assume that the question is clearly defined enough that the person can't play with definitions and just say that God is in everyone and God killed JFK)

However, let me steelman this a bit by somewhat moving the goalposts: if we allow a single random belief to have P=0, then it seems very unlikely that it will have a serious effect. I guess that the above scenario would require that we know that the person has P=0 about something (or have Omega exist), which, if we agree that such a belief will not have much empirical effect, is almost impossible to know. So that's also unlikely.

Comment author: Larks 27 April 2016 11:42:37PM 1 point [-]

They don't mention being survivorship-bias free, which I would expect them to if they were.
