
Comment author: cleonid 26 April 2015 12:29:13AM 0 points [-]

You raise very relevant points. I’ll try to address them without getting too technical.

the boundary drawing is a moderation choice

Our recommendation system estimates the probability that user A will like comment B. It is then up to user A to decide on the right threshold (read all comments, ignore comments rated below 60%, etc.).

Another naive failure mode is that a user as a whole is bucketed as good or bad.

We use a somewhat more sophisticated method.

it's hard to impossible to differentiate within a subject area

I’m not quite sure what you mean here. Could you elaborate on this?

while it would be text and there would be a low amount of conflict-based interruptions, it would not be that communicative. In order to make transmission of information make sense, you have to be able to send information that the receiver doesn't already have.

We use two methods to solve this problem. The first is to let people choose among several possible filters. For instance, people can sort comments based on recommendations for their own in-group, or they can read comments popular among large outgroups (liberals, conservatives, libertarians, etc.). The second is to split all debate arguments into two groups – pros and cons. Users will then be able to read the best arguments (i.e. those recommended by their own group) against their current position.
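
A minimal sketch of the threshold-based filtering described above (the class and field names are illustrative assumptions, not the actual system's code):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Comment:
    text: str
    like_probability: float  # system's estimate that this user will like the comment

def filter_comments(comments: List[Comment], threshold: float = 0.6) -> List[Comment]:
    """Keep only comments whose estimated like-probability meets the user's threshold."""
    return [c for c in comments if c.like_probability >= threshold]

# Example: a user who chooses to ignore comments rated below 60%
feed = [Comment("A", 0.9), Comment("B", 0.4), Comment("C", 0.7)]
print([c.text for c in filter_comments(feed, threshold=0.6)])  # ['A', 'C']
```

The same interface could serve the group-based filters mentioned above by swapping in like-probabilities estimated for a chosen in-group or outgroup.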

Comment author: knb 26 April 2015 12:21:09AM *  0 points [-]

The Wall Street Journal has an article up claiming that the world economy is currently experiencing an excess of capital, labor, and commodities, and that this is potentially a cause of serious problems.

Could anyone explain to me how it is possible to have an excess of capital and an excess of labor?

Comment author: advancedatheist 25 April 2015 11:39:46PM 0 points [-]

Sigh. Another dead transhumanist. I never met Dan Fredinburg, but I gather from his friends' posts on Facebook that he wanted to upload his mind some day.

And what an unlikely way to die. You put your life at risk by trying to climb Everest under normal conditions. Fredinburg just happened to attempt that when a catastrophic earthquake struck Nepal.

Comment author: Slider 25 April 2015 11:32:28PM 1 point [-]

I would like to note that whether indirect moderation is exercised is really sensitive to the basis on which the automation is done. If you have categories or subject topics then a) it's hard to impossible to differentiate within a subject area, and b) the boundary drawing is a moderation choice (i.e. who gets to suffer from their "neighbours'" bad karma). Another naive failure mode is that a user as a whole is bucketed as good or bad. This would misbehave if the person contributes to one area but gets really button-pushed in another area.

And while it doesn't place that many limitations on the "free" part, it might place limitations on the "speech" part. If you have multiple echo chambers that preach to the choir, the cross-cultural interaction is missed out on even if cultural segregation is fully successful. That is, while it would be text and there would be a low amount of conflict-based interruptions, it would not be that communicative. In order to make transmission of information make sense, you have to be able to send information that the receiver doesn't already have.

Comment author: Mark_Friedenbach 25 April 2015 11:16:56PM *  0 points [-]

Economic policies and historical analysis are not avoided on LW and are totally on-topic. Politics is avoided and would be off-topic. Do you understand the difference?

Comment author: Elo 25 April 2015 11:12:40PM 0 points [-]

That's awesome! I didn't realise!

Comment author: cleonid 25 April 2015 10:56:23PM 0 points [-]

The website is intended for discussion of all ideologically divisive issues that are currently avoided on LW (economic policies, historical analysis etc.).

Comment author: shminux 25 April 2015 10:31:10PM 0 points [-]

Why discuss politics instead of policies?

Comment author: Manfred 25 April 2015 09:32:26PM 0 points [-]

Hm, I think we're talking past each other.

To give an example of what I mean, if a probabilistic programming language really implemented logical uncertainty, you could write a program that computes the last digit of the googolth prime number (possible outputs 0 through 9), and then use some getDistribution method on this deterministic program, and it would return the distribution {0.25 chance of 1, 0.25 chance of 3, 0.25 chance of 7, 0.25 chance of 9} (unless your computer actually has enough power to calculate the googolth prime).
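
As a purely illustrative sketch (the getDistribution method is hypothetical and not any existing library's API), the intended behaviour might look like this:

```python
def last_digit_of_googolth_prime() -> int:
    # Deterministic, but far too expensive to actually evaluate.
    raise NotImplementedError("not enough compute")

def get_distribution(fn, enough_compute: bool = False):
    """What a logically-uncertain reasoner would return for a deterministic program."""
    if enough_compute:
        return {fn(): 1.0}  # with unlimited compute the answer is certain
    # Large primes end in 1, 3, 7 or 9 with (heuristically) equal frequency,
    # so a resource-limited reasoner spreads its probability evenly over them.
    return {1: 0.25, 3: 0.25, 7: 0.25, 9: 0.25}

print(get_distribution(last_digit_of_googolth_prime))
```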

Comment author: Kindly 25 April 2015 09:25:30PM 0 points [-]

So the standard formulation of a Newcomb-like paradox continues to work if you assume that Omega has a merely 99% accuracy.

Your formulation, however, doesn't work that way. If you precommit to suicide when Omega asks, but Omega is sometimes wrong, then you commit suicide with 1% probability (in exchange for having $990 expected winnings). If you don't precommit, then with a 1% chance you might get $1000 for free. In most cases, the second option is better.
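
For concreteness, the comparison works out roughly as follows (treating the 99% figure as exact; the break-even valuation in the last line is illustrative arithmetic, not from the original comment):

```latex
% Illustrative arithmetic, assuming Omega is exactly 99% accurate:
\begin{aligned}
E[\text{money}\mid\text{precommit}]    &= 0.99 \times \$1000 = \$990, & P(\text{death}\mid\text{precommit})    &= 0.01 \\
E[\text{money}\mid\text{no precommit}] &= 0.01 \times \$1000 = \$10,  & P(\text{death}\mid\text{no precommit}) &= 0
\end{aligned}
```

On this accounting the precommitment buys roughly $980 of extra expected winnings at the price of a 1% chance of dying, so it only looks attractive to someone who values their life below about $98,000.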

Thus, the suicide strategy requires very strong faith in Omega, which is hard to imagine in practice. Even if Omega actually is infallible, it's hard to imagine evidence extraordinary enough to convince us that Omega is sufficiently infallible.

(I think I am willing to bite the suicide bullet as long as we're clear that I would require truly extraordinary evidence.)

Comment author: Dorikka 25 April 2015 09:14:14PM 0 points [-]

Did not read grandparent, but Poe's law is more likely to hold if the speaker creates weak signals that a sentence is parody, compared to alternative hypotheses such as holding a curious view. When there is greater variance of views, a stronger signal is needed to provide the same level of evidence.

Comment author: dxu 25 April 2015 09:12:44PM *  0 points [-]

Count me among them. (My actual answer would have been between two poll options--a 4.5/5, so to speak, if I were rating it out of 5--so I selected the leftmost option in the first question and the second leftmost option in the second to average it out.)

In response to comment by oge on Self-verification
Comment author: Nanashi 25 April 2015 08:49:24PM 0 points [-]

Specifically, I planned on imagining what my response would be if I found a message supposedly "from myself" that was transmitted using one of these methods. How likely would I be to truly integrate into my identity this event of which I have no memory?

Comment author: RichardKennaway 25 April 2015 08:34:01PM 0 points [-]

but what to make of a relationship where one party provides regular sex in exchange for food and a place to stay

Sounds like one idea of traditional marriage. The woman promises to provide sex and the man promises to provide. Some feminists (e.g. Germaine Greer) have described this arrangement as prostitution.

Comment author: jacob_cannell 25 April 2015 08:19:23PM 0 points [-]

This doesn't look like what I want - have you read my take on logical uncertainty?

A little - yes approximation is key for practical performance. This is why neural nets and related models work so well - they allow one to efficiently search over model/function/program space for the best approximate models that fit memory/compute constraints.

Probabilistic programming seems to assign distributions to programs that involve random variables, but logical uncertainty assigns distributions to programs without random variables :P

Code and variables are interchangeable, so of course prob programming can model distributions over programs. For example, I could create a VM or interpreter with a big array of 'variables' that define opcodes. Graphical models and networks also encode programs as variables. There is no hard distinction between data/code/variables.
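
A toy illustration of the point (entirely hypothetical code, not a real probabilistic programming system): if the program itself is just an array of integer "opcodes", then any distribution over those integers is a distribution over programs.

```python
import random

def run(program, x):
    """Interpret a list of integer opcodes acting on a single register."""
    for op in program:
        if op == 0:
            x += 1      # increment
        elif op == 1:
            x *= 2      # double
        elif op == 2:
            x = -x      # negate
    return x

# A "random variable over programs": each opcode drawn uniformly at random.
program = [random.randrange(3) for _ in range(5)]
print(program, run(program, 3))
```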

Comment author: Houshalter 25 April 2015 08:06:10PM *  0 points [-]

That's great to say, but much harder to actually do.

For example, suppose Omega either pays people $1,000 or asks them to commit suicide. But it only asks people it knows 100% will not do it; otherwise, it gives them the money.

The best strategy is to precommit to suicide if Omega asks. But if Omega does ask, I doubt most lesswrongers would actually go through with it.

Comment author: ChristianKl 25 April 2015 08:01:11PM 1 point [-]

I notice a lack of humor among LessWrong posters. When I talked to a friend about completing my journey to the Dark Side via prostitute, he laughed at the joke.

In a text-only medium you can't tell at all whether people who read what you write laugh.

In this case laughing at speaking about "the Dark Side" is also a simple mechanism to avoid dealing with the substance of the issue. Laughing to avoid dealing with moral questions is not in the spirit of LW.

Comment author: Manfred 25 April 2015 08:01:04PM *  0 points [-]

That's done or being done. Look into "probabilistic programming".

This doesn't look like what I want - have you read my take on logical uncertainty? I still stand by the problem statement (though I'm more uncertain now about minimizing Shannon vs. Kolmogorov information), even if I no longer quite approve of the proposed solution.

EDIT: If I were to point to what I no longer approve of, it would be that while in the linked posts I focus on what's mandatory for limited reasoners, I now think it should be possible to look at what's actually a good idea for limited reasoners.

Probabilistic programming seems to assign distributions to programs that involve random variables, but logical uncertainty assigns distributions to programs without random variables :P

Comment author: Epictetus 25 April 2015 06:57:35PM 1 point [-]

The existence of prostitution puzzles me, because it looks like a dysfunction of human sexuality in agricultural societies.

Prostitution might not even be a uniquely human phenomenon.

There's also a question of what, exactly, defines prostitution. It's straightforward enough when it's a one-time transaction, but what to make of a relationship where one party provides regular sex in exchange for food and a place to stay (a paleo sugar daddy)?

Comment author: jacob_cannell 25 April 2015 06:18:29PM 0 points [-]

But ultimately I'm looking into this because I want to understand the top-down picture, not because of specific applications

For that, I really really recommend "Representation Learning". Quote from the abstract:

"This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning."

It really covers the big picture view of the entirety of machine learning, and from multiple viewpoints (statistical, geometric, optimization, etc).

Well, integrating logical uncertainty with induction would be nice, but if that works it's more of a theoretical milestone than an application)

That's done or being done. Look into "probabilistic programming".

Comment author: Dorikka 25 April 2015 06:16:34PM 1 point [-]

This sort of thing should really be done on all polls, just in case people have very small error bars around the results...

Comment author: advancedatheist 25 April 2015 04:57:27PM -1 points [-]

I notice a lack of humor among LessWrong posters. When I talked to a friend about completing my journey to the Dark Side via prostitute, he laughed at the joke.

I don't anticipate feeling guilty afterwards. Pissed off, perhaps, because I couldn't make this happen organically in my teens and early 20's with young women I knew in high school or college.

The existence of prostitution puzzles me, because it looks like a dysfunction of human sexuality in agricultural societies. I gather that in some agricultural societies, many men have their first sexual experiences with prostitutes as a rite of passage.

Yet I haven't heard of any hunter-gatherer societies with prostitutes, though I would appreciate references to documented examples if you know of any. If you look to the paleolithic hunter-gatherer as the baseline for human welfare, as in paleonutrition, then a postulated "paleo-sexuality" wouldn't seem to allow for prostitution.

Comment author: TheAncientGeek 25 April 2015 04:51:55PM 0 points [-]

Utilitarianism doesn't have anywhere to place a non-arbitrary level of obligation except at zero and maximum effort. The zero is significant, because it means utilitarianism can't bootstrap obligation... I think that is the real problem, not demandingness.

Comment author: ouinon 25 April 2015 04:49:20PM 0 points [-]

I think that his theory is that the kinds of activity the brain can carry on without the "attention modeling" and its consequent "conscious experience"/awareness are what most other animals haven't managed to progress beyond. The "attention model" (of the parietal junction etc.) is what has enabled the massively sustained attention span of humans compared to other animals, and other advanced kinds of cognitive function, a step up in complexity, rather like the M button on a calculator. It has enabled the human brain to optimise attention processes, plan and organise attention, avoid distraction and, even more importantly compared to other animals, to pay attention to things which are not in front of us, not present in the here and now: to conceive of and focus on imaginary things, which are elsewhere or don't exist/haven't been built yet.

Comment author: Kindly 25 April 2015 04:27:25PM 1 point [-]

Result spoilers: Fb sne, yvxvat nypbuby nccrnef gb or yvaxrq gb yvxvat pbssrr be pnssrvar, naq gb yvxvat ovggre naq fbhe gnfgrf. (Fbzr artngvir pbeeryngvba orgjrra yvxvat nypbuby naq yvxvat gb qevax ybgf bs jngre.)

I haven't done the responsible thing and plotted these (or, indeed, done anything else besides take whatever correlation coefficient my software has seen fit to provide me with), so take with a grain of salt.

Comment author: Romashka 25 April 2015 04:16:02PM 1 point [-]

Are there people who would be interested in a (virtual) reading group for Pearl's Causality?

Comment author: gjm 25 April 2015 03:00:50PM 2 points [-]

It would appear that five people have different opinions of fruit juice and fruit juice.

Comment author: Lukas_Gloor 25 April 2015 02:41:19PM *  0 points [-]

Are you sure? That meaning wasn't obvious to me.

I often got this as an objection to utilitarianism, the other premise being that utilitarianism is impractical for humans. I've talked to lots of people about ethics: I took high school philosophy classes, study philosophy at university, and have engaged in more than a hundred online discussions about ethics. The objection actually isn't that bad if you steelman it: maybe people are trying to say that they, as humans, care about many other things and would be overwhelmed with utilitarian obligations. (But there remains the question whether they care terminally about these other things, or whether they would self-modify to a perfect utilitarian robot if given the chance.)

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped.

There could be in some cases, if people find out they didn't really believe their axiom after all. But it can just as well be that the starting assumptions really are axiomatic. I think that the idea that terminal values are hardwired in the human brain, and will converge if you just give an FAI good instructions to get them out, is mistaken. There are billions of different ways of doing the extrapolation, and they won't all output the same. At the end of the day, the buck does have to stop somewhere, and where else could that be than where a person, after long reflection and an understanding of what she is doing, concludes that x are her starting assumptions and that's it.

I don't quite agree with the prominent LW-opinion that human values are complex. What is complex are human moral intuitions. But no one is saying that you need to take every intuition into account equally. Humans are a very peculiar sort of agent in mind space: when you ask most people what their goal is in life, they do not know, or they give you an answer that they will take back as soon as you point out some counterintuitive implications of what they just said. I imagine that many AI-designs would be such that the AIs are always clearly aware of their goals, and thus feel no need to ever engage in genuine moral philosophy. Of course, people do have a utility-function in the form of revealed preferences (what they would do if you placed them in all sorts of situations), but is that the thing we are interested in when we talk of terminal values? I don't think so! It should at least be on the table that some fraction of my brain's pandemonium of voices/intuitions is stronger than the other fractions, and that this fraction makes up what I consider the rational part of my brain and the core part of my moral self-identity, and that I would, upon reflection, self-modify to an efficient robot with simple values. Personally I would do this, and I don't think I'm missing anything that would imply that I'm making any sort of mistake. Therefore, the view that all human values are necessarily complex seems mistaken to me.

Having multiple epistemologies with equally good answers is something of a disaster.

These different epistemologies have a lot in common. The exercise would always be "define your starting assumptions, then see which moves are goal-tracking, and which ones aren't". Ethical thought experiments, for instance, or distinguishing instrumental values from terminal ones, are things that you need to do either way if you think about what your goals are, e.g. how you would want to act in all possible decision-situations.

I still don't know what you think is bad about bad deontology.

  • It is often vague and lets people get away with not thinking things through. It feels like they have an answer, but most people would have no clue how to set the parameters for an AI that implemented their type of deontology (e.g. when dilemma situations become probabilistic, which is, of course, all the time).

  • It contains discussion stoppers like "rights", even though, when you taboo the term, that just means "harming is worse than not-helping", which is a weird way to draw a distinction, because when you're in pain, you primarily care about getting out of it and don't first ask what the reason for it was. Related: It gives the air of being "about the victim", but it's really more about the agent's own moral intuitions, and is thus, not really other-regarding/impartial at all. This would be ok if deontologists were aware of it, but they often aren't. They object to utilitarianism on the grounds of it being "inhumane", instead of "too altruistic".

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Yes, I see that now. I thought I was mainly preaching to the choir and didn't think the details of people's metaethical views would matter for the main thoughts in my original post. It felt to me like I was saying something at risk of being too trivial, but maybe I should have picked better examples. I agree that this comment does a good job at what I was trying to get at.

Comment author: CurtisSerVaas 25 April 2015 02:21:24PM 0 points [-]

I've edited the LW-wiki to make a list of LWers interested in making debate tools.

In general, I think it'd be useful to make a post similar to the "What are you working on" threads, so that people with similar interests can find each other. What do people think of a "People working on X repository" post?

Comment author: bbleeker 25 April 2015 01:42:49PM 1 point [-]

I like salty, sour, hot, spicy. I like sweet too, but not as much. I don't mind a little bitter. In general, I like strong tastes: the very darkest chocolate, old strong cheese. I like almost all fruit, vegetables and fish, and most meat. I dislike bland, slippery things like fat, butter, new cheese, milk. I'll eat ice cream, but I don't really like it. The two most disgusting things I've ever tried to eat were tripe and cottage cheese. Natto was pretty disgusting too, but not as bad. I drink quite a lot of coffee, carbonated water and sugar-free soft drinks. Until recently, I used to drink a lot of alcohol too, but nowadays I'm saving that for parties (of which I don't have many), as it has a lot of calories and is bad for my blood pressure.

Comment author: tut 25 April 2015 12:57:42PM 0 points [-]

The traditional LW solution to this is that you precommit once and for all to this: whenever I find myself in a situation where I wish that I had committed to acting in accordance with a rule R, I will act in accordance with R.

Comment author: ChristianKl 25 April 2015 12:38:36PM 2 points [-]

The poll provides raw data. It's possible to download that data and see what correlates with what. It just needs a slight bit of R coding.
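
As a hedged sketch of that analysis (the file name and column names are assumptions about the export format, and this uses Python/pandas rather than the R mentioned above):

```python
import pandas as pd

# Suppose each row is one respondent and each column one poll question,
# with answers on the 1-5 scale used in the taste poll.
responses = pd.read_csv("poll_raw_data.csv")  # hypothetical export of the raw data

# Pairwise rank correlations between preferences, e.g. liking alcohol vs. liking coffee.
correlations = responses.corr(method="spearman")
print(correlations.loc["likes_alcohol", "likes_coffee"])
```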

Comment author: Emily 25 April 2015 11:58:55AM 0 points [-]

Interesting. I find grapefruit (which I like better than strawberries) to be quite sour, but not bitter at all.

Comment author: mkf 25 April 2015 11:22:03AM 0 points [-]

You should definitely post it as a top-level post in Main.

Comment author: Houshalter 25 April 2015 11:13:21AM 0 points [-]

Everyone would like better vision but some people can't see more than a few feet without the help of glasses.

Comment author: Elo 25 April 2015 10:57:53AM 0 points [-]

Unfortunately I expect this poll to noise itself out of usefulness. For example: person A dislikes spicy and likes sweet. Person B dislikes sweet and likes spicy. This poll will show one vote of 1 for sweet, one vote of 5 for sweet, one vote of 1 for spicy, and one vote of 5 for spicy.

There would have to be a form that can add another dimension to the results to see any correlation between results. This also limits people's opportunity to comment on what might have caused them to have certain preferences...

Comment author: TheAncientGeek 25 April 2015 10:49:51AM 0 points [-]

That's almost rule consequentialism.

Comment author: NancyLebovitz 25 April 2015 10:40:06AM 2 points [-]

https://hbr.org/2015/04/emotional-intelligence-doesnt-translate-across-borders

A few examples of people from different cultures misreading each other.

Comment author: Houshalter 25 April 2015 10:35:39AM 0 points [-]

I really want to say that you should pay. Obviously you should precommit to not paying if you can, and then the oracle will never visit you to begin with unless you are about to die anyway. But if you can't do that, and the oracle shows up at your door, you have a choice to pay and live or not pay and die.

Again, obviously it's better to not pay and then you never end up in this situation in the first place. But when it actually happens and you have to sit down and choose between paying it to go away or dying, I would choose to pay it.

It's all well and good to say that some decision theory results in optimal outcomes. It's another to actually implement it in yourself: to make sure every counterfactual version of yourself makes the globally optimal choice, even if there is a huge cost to some of them.

Comment author: faul_sname 25 April 2015 10:30:54AM 0 points [-]

It's almost crazy to me that you wouldn't call strawberries sour. Strawberries taste quite sour to me, and quite sweet as well. I've always thought of sourness as relating to acidity (strawberries and grapefruits actually have pretty similar pH's). I perceive bitterness to be entirely different (strawberries are not bitter, grapefruits are slightly to moderately bitter, depending on the grapefruit, kale is very bitter to me but not at all sour).

Comment author: Manfred 25 April 2015 10:21:56AM 0 points [-]

The recent advances in deep learning come in part from the scruffies/experimental researchers saying "screw hard theory" and just forging ahead

Yeah I get that vibe. But ultimately I'm looking into this because I want to understand the top-down picture, not because of specific applications (Well, integrating logical uncertainty with induction would be nice, but if that works it's more of a theoretical milestone than an application). Research on neural networks is actually more specific than what I want to read right now, but it looks like that's what's available :P (barring a literature search I should do before reading more than a few NN papers)

Comment author: TheAncientGeek 25 April 2015 10:21:41AM *  0 points [-]

The way most people use it,

Are you sure? That meaning wasn't obvious to me.

For instance, some people are just interested in finding an "impartial" view that they would choose behind the veil of ignorance, whereas others also want to account for person-specific intuitions and preferences. Neither of these two parties is wrong; they just have different axioms.

There could be further considerations that can be brought to bear. Just because something is claimed as axiomatic doesn't mean the buck has actually stopped. Having multiple epistemologies with equally good answers is something of a disaster.

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

I still don't know what you think is bad about bad deontology.

In general, you need to make many fewer assumptions that what is obvious to you is obvious to everybody.

Comment author: TheAncientGeek 25 April 2015 10:01:07AM 1 point [-]

This is a much better explanation of the OP's point than the OP's own posting.

Comment author: Romashka 25 April 2015 07:16:22AM 0 points [-]

Thank you, edited.

Comment author: Username 25 April 2015 06:06:01AM *  1 point [-]

Let me play devil's advocate for this position.

"defining goals (or meta-goals, or meta-meta-goals) in machine code" or the "grounding everything in code" problems.

  1. An AI that is super intelligent will "know what I mean" when I tell it to do something. The difficulty is specifying the AI's goals (at compile time / in machine code) so that the AI "wants" to do what I mean.
  2. Solving the "specify the correct goals in machine code" problem is thus necessary and sufficient for making a friendly AI. A lot of my arguments depend on this claim.
  3. How to specify goals at compile time is a technical question, but we can do some a priori theorizing as to how we might do it. Roughly, there are two high-level approaches: simple hard-coded goals, and goals fed in from more complex modules. A simple hard-coded goal might be something like current reinforcement learners where the reward signal is human praise (or a simple-to-hard-code proxy for human praise, such as pressing a reward button). The other alternative is to make a few modules (e.g. one for natural language understanding, one for modeling humans) and "use it/them as part of the definition of the new AI's motivation."

  4. Responses to counterarguments:

4.1: needing to specify commands carefully (e.g. "give humans what they really want").

And then of course there's those orders where humans really don't understand what they themselves want...

The whole point of intelligence is being able to specify tasks in an ambiguous way (e.g. you don't have to specify what you want in such detail that you're practically programming a computer). An AI that actually wants to make you happier (since its goals were specified at compile time using a module that models humans) will ask you to clarify your intentions if you give it vague goals.

Some other thoughts:

For it to have any chance of success, we need to be sure that both model-as-definition and the intelligence module idea are rigorously defined.

It will be hard to accomplish this, since nobody knows how to go about building such modules. Modeling language, humans, and human values are hard problems. Building the modules is a technical question. But it is necessary and sufficient to build the modules and feed them into the goal system of another AI to build a friendly AI. In fact, one could make a stronger argument that any AGI that's built with a goal system must have its goal system specified with natural language modules (e.g. reinforcement learning sucks). Thus, it is likely that any built AGIs would be FAIs.

EDITED to add: Tool-AI arguments. If you can build the modules to feed into an AI with a goal system, then you might be able to build a "tool-AI" that doesn't have a goal system. I think it's hard to say a priori that such an architecture isn't more likely than an architecture that requires a goal system. It's even harder to say that a tool-AI architecture is impossible to build.

In summary, I think the chief issues with building friendly AI are technical issues related to actually building the AI. I don't see how decision theory helps. I do think that unfriendly humans with a tool AI is something to be concerned about, but doing math research doesn't seem related to that (Incidentally, MIRI's math research has intrigued people like Elon Musk, which helps with the "unfriendly humans problem").

Comment author: jam_brand 25 April 2015 05:52:28AM *  0 points [-]

Having been asked about people's "preference on a 1 to 5 scale" (rather than, say, "their appreciation on a -2 to +2 scale" or "on a scale from strongly dislike to strongly like"), then seeing the next line begin "I like spicy things", I nearly interpreted the far left to be "I like this only a little" and the far right to be "I like this a lot".

Comment author: Romashka 25 April 2015 05:28:05AM 0 points [-]

The way it was stated in the book, it's just a white spot on the map. (In vitro culture of mycorrhiza. Ed. by Declerk, Strullu and Fortin.)

Comment author: John_Maxwell_IV 25 April 2015 04:23:52AM 0 points [-]
In response to Self-verification
Comment author: oge 25 April 2015 03:45:00AM 0 points [-]

Could you please explain what motivated you to ask this question? It'd help motivate me to play the game...

Comment author: oge 25 April 2015 03:41:59AM 0 points [-]

Hey els, thanks for posting your thoughts. It'd be nice if you put a summary in the first paragraph seeing as the article is so long.

Comment author: oge 25 April 2015 03:23:30AM 0 points [-]

Very interesting post. I liked how your sequence of examples led from things we understand to the thing we're trying to understand.

Also, I recognized myself in the analogy of the scientists trying to redefine reality in order to fly :)

Comment author: satt 25 April 2015 02:49:42AM 1 point [-]

I think what GuySrinivasan's asking is closer to "how do I organize a mass of evidence & ideas about a topic so I can better reason about it" than "how do I grind numerical statistical inferences out of a formal Bayesian model"?

Comment author: satt 25 April 2015 02:07:35AM 1 point [-]

Yeah. The parent & sibling comments here got me curious about exactly what PZ wrote, and whether it'd be a transparently politically motivated fulmination against cryonicists.

But the post, as far as I can see, is just an unfavourable comparison of cryonics to ancient mummification, and Myers calling cryonicists frauds who practice "ritual" & "pseudo-scientific alteration of [a] corpse", frauds sometimes defended with "the transhumanist technofetishist version of Pascal’s Wager". Strong stuff, but I don't see anything in the post about partisan politics, race, nerd culture (unless one counts "transhumanist technofetishist" as a dog-whistle meant to slam nerds in general...?), or sexism or feminism or gender (well, except the reference to the frozen girl as a "girl").

Ctrl-F-ing for "Myers" doesn't reveal anything along those lines either.

I see several comments in the political categories I mentioned but they weren't posted by PZ or cheered by PZ, so I'm a bit surprised by the comments here focusing on PZ to impute political motives to him and psychoanalyze him.

PZ's post all but says he's slamming cryonicists because (to his mind) they're crooks & quacks. (Based on the reference to "tortur[ing] cadavers", maybe there's a purity-violation ick-reaction too. That's still pretty distant from the motivations people are speculating about here.) I don't understand why I'd need a special explanation for that, over & above the more common reasons why people tend to scoff at cryonics (absurdity heuristic, plus scepticism about future technological trends w.r.t. brain preservation & re-instantiation, plus over-generalization from everyday experience of how freezing affects food and the like).

Comment author: Gunnar_Zarncke 24 April 2015 11:29:26PM 2 points [-]

Actually they significantly don't.

Comment author: Gunnar_Zarncke 24 April 2015 11:25:45PM 0 points [-]

Yes, sorry. I noticed, but editing polls is not unproblematic.

Comment author: SilentCal 24 April 2015 11:01:10PM 0 points [-]

I am a moderate but regular drinker. I have a substantial liquor collection and a strong interest in cocktails, as well as beer and wine.

The top conscious motivation for my drinking is exploration of taste, and I usually don't drink to substantial impairment. I suspect the unconscious motives are substantial, but that they have less to do with intoxication and much to do with signalling. That is, I've internalized the idea that appreciating the taste of alcoholic beverages is sophisticated to the point that it doesn't feel like signalling, it just feels like pursuing something inherently interesting. (I also like tasting various teas and coffees.) I have no desire to break this habit, as a) light-to-moderate drinking appears neutral or positive for health and b) given my cultural position it's probably a cheaper hobby than anything I'm likely to replace it with.

There's also a sense of relaxation to drinking. It's not the literal intoxication, as I feel it before I even take a sip. I think it has to do with an association of drinking with recreation and relaxation--like a classic diminished-alertness-as-signal, but to myself.

Though I should say that I do sometimes enjoy the buzz--it can seem to put me in a more spontaneous, moment-focused mood, ideal for time devoted to fun. This does sometimes lead to snowballing where drinking more makes me want to drink more, until I've had quite a bit; this has occurred with diminishing frequency since the end of college. If you're wondering whether to count me as a heavy drinker, I've probably drank heavily like this once in the last six months.

Comment author: jacob_cannell 24 April 2015 11:00:57PM *  2 points [-]

Do you know of any good resources for work exploring optimality and generality proofs for practical unsupervised learners?

Yes. There were a couple of DL theory papers in the last year or so that generated some excitement on r/machinelearning:

"Provable bounds for learning some deep representations."

"On the computational efficiency of training neural networks."

I'm not a huge fan of hard theory and optimality proofs, but even I found the first paper in particular somewhat insightful.

Machine learning was somewhat obsessed with formal theory (provable bounds, convex learning, etc) in the last decade, and I believe it held back progress.

The problem is that to prove any hard results, you typically have to simplify the model down to the point at which it loses all relevance to solving real world problems.

The recent advances in deep learning come in part from the scruffies/experimental researchers saying "screw hard theory" and just forging ahead, and also in part from advances in informal theoretical understanding (what one may call wisdom rather than math). The work of Bengio and Hinton in particular is rich with that kind of wisdom.

In particular see this comment from Hinton, and a related comment here from Bengio.

Comment author: Strangeattractor 24 April 2015 10:26:00PM 0 points [-]

One way to approach it would be to organize the data around the questions "What seems to have an effect on the system? What makes things better, what makes things worse, even if the effect is very small (but reproducible)?" Then, investigate those things.

Doctors are kind of terrible at doing that. They tend to have a tool box of "these are the things I know how to do" and any information that doesn't fit their specific specialty is discarded as irrelevant.

I'm not sure how useful it would be to weight things by evidence if part of the problem is that some things haven't been investigated enough, or are simply not well-enough understood by modern medicine and science.

In response to Weekly LW Meetups
Comment author: PhilGoetz 24 April 2015 10:15:35PM 1 point [-]

I have a suggestion for people near Baltimore: There's a bioprinting symposium tomorrow (April 25) from noon to 5, at the Baltimore Under Ground Science Space, 101 North Haven Street, Suite 105, Baltimore, MD 21224. It is only $75. The organizers are losing a lot of money on this.

You could organize a meetup at this event. HOWEVER, don't walk there, and don't plan to walk around there to get lunch or dinner. I haven't been there, but it looks on the map like this spot is on the edge of the biggest slum in Baltimore.

Comment author: RyanCarey 24 April 2015 10:01:29PM 1 point [-]

For what it's worth, I used xelatex and some of Alex Vermeer's code, but I can't see why either would affect the links, and can't find any suggestions for why this would occur in Sumatra. I'll just sit on this for now, but if more people have a similar issue, I'll look further. Thanks.

Comment author: afeller08 24 April 2015 09:38:49PM 0 points [-]

I changed my mind midway through this post. Hopefully it still makes sense... I started disagreeing with you based on the first two thoughts that come to mind, but I'm now beginning to think you may be right.

So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.

I.

This statement doesn't really fit with the philosophy of morality. (At least as I read it.)

Consequentialism distinguishes itself from other moral theories by emphasizing terminal values more than other approaches to morality do. A consequentialist can have "No murder" as a terminal value, but that's different from a deontologist believing that murder is wrong or a Virtue Ethicist believing that virtuous people don't commit murder. A true consequentialist seeking to minimize the amount of murder that happens would be willing to commit murder to prevent more murder, but neither a deontologist nor a virtue ethicist would.

Contractualism is a framework for thinking about morality that presupposes that people have terminal values and their values sometimes conflict with each other's terminal values. It's a description of morality as a system of adopting/avoiding certain instrumental goals, implicitly negotiated by people for their mutual benefit in attaining their terminal values. It says nothing about what kind of terminal values people should have.

II.

Discussions of morality focus on what people "should" do and what people "should" think, etc. The general idea of terminal values is that you have them and they don't change in response to other considerations. They're the fixed points that affect the way you think about what you want to accomplish with your instrumental goals. There's no point to discussing what kind of terminal values people "should" have. But in practice, people agree that there is a point to discussing what sorts of moral beliefs people should have.

III.

The psychological conditions that cause people to become immoral by most other people's standards have a lot to do with terminal values, but not anything to do with the kinds of terminal values that people talk about when they discuss morality.

Sociopaths are people who don't experience empathy or remorse. Psychopaths are people who don't experience empathy, remorse, or fear. Being able to feel fear is not the sort of thing that seems relevant to a discussion about morality... But that's not the same thing as saying that being able to feel fear is not relevant to a discussion about morality. Maybe it is.

Maybe what we mean by morality, is having the terminal values that arise from experiencing empathy, remorse, and fear the way most people experience these things in relation to the people they care about. That sounds like a really odd thing to say to me... but it also sounds pretty empirically accurate for nailing down what people typically mean when they talk about morality.

Comment author: Manfred 24 April 2015 08:57:24PM *  1 point [-]

Thanks for the recommendations! Do you know of any good resources for work exploring optimality and generality proofs for practical unsupervised learners?

In practice however there is a problem with the whole "predict all of my observations" idea: doing this tends to waste lots of computation on modelling features that have little specific task utility.

Right, this is why I think it's relevant that Solomonoff induction will still do a good job of predicting any computable function of your data - like a noise-filtering function. This gives you an easy proof that certain algorithms will succeed modulo noise.

Comment author: Bound_up 24 April 2015 08:08:26PM 0 points [-]

The first song of this album, "The Father of Death." https://www.youtube.com/watch?v=ZWwGo28FdZ8

Tom: They've waited so long for this day (Albert: They've waited so long for this day)
Someone to take the death away (There is no price they wouldn't pay)
No son would ever have to say, (For someone else to lead them)
"My father worked into his grave." (Don't turn your back on me!)

Men sleep tonight with hands of bone. They will awake with hands of steel. And with these hands we will destroy. And with these hands we will rebuild.

Albert: And we will stand above our city, rising high above her streets. From tops of buildings we will look at all that lies beneath our feet.

And we will raise our hands above us, cold steel shining in the sun, and with these hands that will not bleed, your father's battle will be won!

Comment author: afeller08 24 April 2015 07:41:17PM 2 points [-]

Anti-epistemology is a more general model of what is going on in the world than rationalizations are,

Yes.

so it should all reduce to rationalizations in the end.

Unless there are anti-epistemologies that are not rationalizations.

The general concept of a taboo seems to me to be an example of a forceful anti-epistemology that is common in most moral ideologies and is different from rationalization. When something is tabooed, it is deemed wrong to do, wrong to discuss, and wrong to even think about. The tabooed thing is something that people deem wrong because they cannot think about whether it is wrong without in the process doing something "wrong," so there is no reason to suppose that they would find something wrong with the idea if they were to think about it, and try to consider whether the taboo fit with or ran against their moral sense.

A similar anti-epistemology is when people believe it is right to believe something is morally right... on up through all the meta-levels of beliefs about beliefs, so that they would already be committing the sin of doubt as soon as they begin to question whether they should believe that continuing to hold their moral beliefs is actually something they are morally obliged to do. (For ease of reference, I'll call this anti-epistemology "faith".)

One of the three things that rationalization, taboos, and faith have in common is that they are sufficiently general modes of thought to permit them to be applied to "is" propositions as well as "ought" propositions, and when these modes of thought are applied to objective propositions for which truth-values can be measured, they behave like anti-epistemologies. So in the absence of evidence to the contrary, we should presume that they behave as anti-epistemologies for morality, art criticism, and other subjects -- even though the existence of something stable and objective to be known in these subjects is highly questionable. The modes of thought I just mentioned are themselves inherently flawed. They are not simply flawed ways of thinking about morality, in particular.

If you are looking for bad patterns of thought that deal specifically with ethics, and cannot be applied to other subjects about which truthiness can be more objectively measured, the best objection (I can think of) by which to call those modes of thought invalid is not to try to figure out why they are anti-epistemologies, but instead to reject them for their failure to put forward any objectively measurable claims. There are many more ways for a mode of thought to go wrong than for it to go right, so until some thought pattern has provided evidence of being useful for making accurate judgments about something, it should not be presumed to be a useful way to think about something for which the accuracy of statements is difficult or impossible to judge.

Comment author: ChristianKl 24 April 2015 07:14:04PM 5 points [-]

That's no problem; it gives a test of whether people respond the same way both times.

Comment author: Kindly 24 April 2015 06:32:14PM 0 points [-]

I believe editing polls resets them, so there's no reason to do it if it's just an aesthetically unpleasant mistake that doesn't hurt the accuracy of the results.

Comment author: ChristianKl 24 April 2015 06:07:56PM 0 points [-]

I understand that point. What I don't know is whether you would know about the harm if you knew why the fungus helps a particular plant.

Comment author: RainbowSpacedancer 24 April 2015 06:00:26PM 2 points [-]

Sumatra PDF 3.0 on Windows 8.1 x64. I believe the problem is the same one this user had with the AI to Zombies ebook.

I'll be reading the epub personally (which works fine in Sumatra) on my iPad so it doesn't bother me, but I thought I would mention it, as Sumatra is a relatively popular reader and if this ebook is produced by the same team as the rationality ebook then it seems to be a recurring problem.

Comment author: Romashka 24 April 2015 05:41:33PM 0 points [-]

Once it escapes from the field, it might do lots of unintended harm. Different plants react differently to different fungi.

Comment author: hairyfigment 24 April 2015 05:15:22PM 0 points [-]

What sort of consequences are you thinking of? The idea that ethics can consider two options equally preferable and not care which one you take follows from the idea of an ethical utility function (even a complicated function that only exists in an abstract mathematical sense). We don't need to assume it directly, we can go with the Archimedean property (roughly, that crossing the street can be worth a small chance of death).

Comment author: TheVoraciousObserver 24 April 2015 05:06:12PM *  0 points [-]

Will the existence of supertasters skew the results? I've ordered a supertaster test chemical strip before, and found I was a supertaster, but my brother was not.

http://en.wikipedia.org/wiki/Supertaster

I find I am not tolerant of extreme flavours in either direction. Super-sweet, super-sour, or bitter flavors can make me gag. I can't drink coffee or most teas. I also don't enjoy chocolate. I eat mostly 'vanilla' or bland foods, but find they taste quite good.

Comment author: jacob_cannell 24 April 2015 04:57:34PM 0 points [-]

My problem with the idea of us living in a simulation is that it would be breathtakingly cruel. If we live in a simulation, that means that all the suffering in the world is there on purpose. Our descendants in the far future are purposefully subjecting conscious entities to the worst forms of torture, for their own entertainment.

There is a rather obvious solution/answer: the purpose of the simulation is to resurrect the dead. Any recreation of historical suffering is thus presumably more than compensated for by the immense reward of an actual afterlife.

We could even have an opt out clause in the form of suicide - if you take your own life that presumably is some indicator that you prefer non-existence to existence. On the other hand, this argument really only works if the person committing suicide was fully aware of the facts (ie that the afterlife is certain) and of sound mind.

Comment author: jacob_cannell 24 April 2015 04:47:51PM *  5 points [-]

What you are looking for already exists - look up unsupervised generative models, and in particular temporal autoencoders that attempt to predict their future input sequences. Generative models handle model uncertainty and approximation uncertainty in the way you are looking for. The universal prior concept maps directly to the concept of regularization in machine learning, and the general principle is that your model complexity (in bits) must be less than your data complexity (ie, your model should compress your data).
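
A tiny numpy sketch of the "predict your future inputs" idea: a one-hidden-layer model trained to map x_t to x_{t+1}. Purely illustrative; real temporal autoencoders and generative models are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d, h = 200, 4, 8
X = np.cumsum(rng.normal(size=(T, d)), axis=0)  # a toy correlated time series

W_in = rng.normal(scale=0.1, size=(d, h))
W_out = rng.normal(scale=0.1, size=(h, d))
lr = 1e-3

for epoch in range(50):
    for t in range(T - 1):
        x, y = X[t], X[t + 1]
        z = np.tanh(x @ W_in)        # hidden representation of the current input
        y_hat = z @ W_out            # prediction of the next input
        err = y_hat - y
        # gradient step on the squared prediction error
        W_out -= lr * np.outer(z, err)
        W_in -= lr * np.outer(x, (err @ W_out.T) * (1 - z ** 2))

mse = np.mean((np.tanh(X[:-1] @ W_in) @ W_out - X[1:]) ** 2)
print("mean squared prediction error:", float(mse))
```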

In practice however there is a problem with the whole "predict all of my observations" idea: doing this tends to waste lots of computation on modelling features that have little specific task utility.

Supervised learning (where applicable) is more efficient and generally preferred because it can focus the limited model capacity & computation on learning task relevant features.

Also, you should check out Jürgen Schmidhuber's AMA on reddit, where he explains the connections between ideal/impractical inference algorithms and their more practical/scalable approximations, such as RNNs.

Comment author: dxu 24 April 2015 04:13:19PM *  0 points [-]

No, Golden-rule deontology is very similar to timeless cooperation for instance, and that doesn't strike me as a misguided thing to be thinking about.

Well, there are two things I have to say in response to that:

  1. Timeless decision-making is a decision algorithm; you can use it to maximize any utility function you want. In other words, it's instrumental, not terminal. So it's hard to see how timeless cooperation could be morally significant, since morality usually deals with terminal values, not instrumental goals.
  2. Timeless decision-making is still based on your estimated degree of similarity to other agents on the playing field. I'll only cooperate in the one-shot Prisoner's Dilemma if I suspect my decision and my opponent's are logically connected. So even if you advocate timeless decision-making, "cooperate in PD-like situations" is still not going to be a universal rule like the Golden Rule.
Comment author: Eitan_Zohar 24 April 2015 04:08:44PM *  1 point [-]

Fruit juice is twinned. Can you edit these polls?

Comment author: Lumifer 24 April 2015 03:52:05PM *  -1 points [-]

I am drawing a distinction between epistemology and correct reasoning. They are not synonyms.

Mistakes (deliberate or not) in logic, refusals to draw an obvious conclusion, etc. are not epistemology problems.

Comment author: Lumifer 24 April 2015 03:46:58PM *  0 points [-]

Just that I haven't seen it applied to the simulation hypothesis before

Well, the simulation hypothesis is essentially equivalent to saying our world was made by God the Creator, so a lot of standard theology is applicable X-)

And religion can't really answer this question.

What, do you think, can really answer this question?

Comment author: Lukas_Gloor 24 April 2015 03:37:56PM *  0 points [-]

Did you read my third paragraph? I'm not assuming moral realism and I'm well aware of the issue you mention. I do think there is a meaningful way a person's reasoning about moral issues can be wrong, even under the assumption of anti-realism. Namely, if people use an argument of form f to argue for their desired conclusion, and yet they would reject other conclusions that follow from the argument of form f, it seems like they're deluding themselves. I'm not entirely sure the parallels to epistemology are strong enough to justify the analogy, but it seems worth thinking about it.

Comment author: Gunnar_Zarncke 24 April 2015 03:14:56PM 4 points [-]

Interesting idea. Could be made into a poll to measure breadth and variability of preference.

I will just plain take your points and make each into a poll and add some of my own. Everybody is invited to vote their preference on a 1 to 5 scale (as many as you like, no need to consider all, the list got quite long):

I like spicy things I dislike spicy things

I like chilli I dislike chilli

I like wasabi I dislike wasabi

I like horseradish I dislike horseradish

I like sweets I dislike sweets

I'm addicted to sugar

Very much Not at all

I like chocolate I dislike chocolate

I like dark chocolate I dislike dark chocolate

I like licorice I dislike licorice

I like fruits I dislike fruits

I like vegetables I dislike vegetables

I like whole grain products I dislike whole grain products

I like hot dishes I dislike hot dishes

I like cold dishes I dislike cold dishes

I like creamy/squishy/sauce-like food I dislike creamy/squishy/sauce-like food

I like hard/firm food I dislike hard/firm food

I like crispy food I dislike crispy food

I like beefy food I dislike beefy food

I like ice cream I dislike ice cream

I like cheese I dislike cheese

I like meat I dislike meat

I like fish I dislike fish

I like honey I dislike honey

I like milk I dislike milk

I drink a lot of water I drink water only as part of other drinks

I like coffee I dislike coffee

I like tea I dislike tea

I like to drink caffeinated beverages I don't like to drink caffeinated beverages

I like fruit juice I dislike fruit juice

I like hot drinks I dislike hot drinks

I like cold drinks I dislike cold drinks

I like fruit juice I dislike fruit juice

I like alcoholic beverages I dislike alcoholic beverages

I like the initial effects of alcohol I dislike initial effects of alcohol

I like the ultimate effects of alcohol I dislike ultimate effects of alcohol

I like bitter tastes I dislike bitter tastes

I like sour tastes I dislike sour tastes

I like salty tastes I dislike salty tastes

I like starchy tastes I dislike starchy tastes


Comment author: Diadem 24 April 2015 03:11:28PM 1 point [-]

Well yes. I wasn't claiming that "why is there suffering" is a new question. Just that I haven't seen it applied to the simulation hypothesis before (if it has been discussed before, I'd be interested in links).

And religion can't really answer this question. All they can do is dodge it with non-answers like "God's ways are unknowable". Non-answers like that become even more unsatisfactory when you replace 'God' with 'future humans'.
