You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Comment author: DanArmak 10 October 2016 09:54:20PM 0 points [-]

The author says a moral theory should:

  • "Cover how one should act in all situations" (instead of dealing only with 'moral' ones)
  • Contain no contradictions
  • "Cover all situations in which somebody should perform an action, even if this “somebody” isn’t a human being"

In other words, a decision theory, complete with an algorithm (so you can actually use it), and a full set of terminal goals. Not what anyone else means by "moral theory".

Comment author: ChristianKl 10 October 2016 09:51:24PM 2 points [-]

It's certainly not a blog that tries to pander to the reader.

Comment author: ChristianKl 10 October 2016 09:46:15PM 1 point [-]

And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

You can fit many people in California despite much of it being desert.

Comment author: ChristianKl 10 October 2016 09:43:51PM 0 points [-]

I don't think I would need to define it that way for the above comment to be coherent.

Comment author: ChristianKl 10 October 2016 09:41:38PM 0 points [-]

Do you have a preferred source?

I use the Guardian as my source: https://www.theguardian.com/us-news/ng-interactive/2015/jun/01/the-counted-police-killings-us-database

We have implicit biases. Biases based on race are a pretty big deal in this country, historically. In my view, the level of bias in police shootings doesn't reach any reasonable threshold to be called anything like "racism" in many, many cases.

The academic notion of implicit racism isn't about any threshold other than statistical significance. The tools they developed have gotten good at picking up effects in many people, so the threshold is quite low and most people suffer from implicit racism.

If you reject that concept, then it doesn't make sense to see Hillary's use of it as progress.

What are your political leanings? I'd like to better understand our interaction by knowing how you view yourself generally on the U.S. political spectrum.

I'm not on the U.S. political spectrum. My Facebook political status is currently "Continental". My formal political associations put me left of center in Berlin.

Comment author: DanArmak 10 October 2016 09:32:39PM 0 points [-]

Is the 'success' of a group its number of members, regardless of actual activity?

Comment author: DanArmak 10 October 2016 09:32:01PM 0 points [-]

I agree there would probably only be one successful AGI, so it's not the first step of many. I meant it would be a step in that direction. Poor phrasing on my part.

Comment author: Brillyant 10 October 2016 09:24:31PM *  -1 points [-]

In 2016 the difference is slightly stronger than the difference between Whites and Blacks getting killed. It's a fact that's easily knowable if you care to look at the numbers of police killings by race. Anybody who cares enough about the issue should know the fact, if they can read numbers in a straightforward way instead of just trying to validate their party line.

Do you have a preferred source?

I'm fairly confident that it's better than blaming people for having biases and hoping that they will change as a result.

Who's doing this?

If you think that, the logical conclusion would be that Clinton was wrong when she claimed that everybody suffers from implicit bias.

We have implicit biases. Biases based on race are a pretty big deal in this country, historically. In my view, the level of bias in police shootings doesn't reach any reasonable threshold to be called anything like "racism" in many, many cases.

That's exactly why it's unproductive. You don't actually think in terms of "implicit racism" but simply use the new name to label concepts that you already knew beforehand.

Perhaps this is true for you. I often think about ways my view may be biased when relating to people. And then I act to better understand and, hopefully, neutralize the bias. My efforts are clumsy and likely often fail, because I'm not particularly intelligent or skilled at overcoming bias.

At any rate, the first step toward being productive in this regard is recognizing bias exists.

You don't see evidence of implicit bias on a case-by-case basis.

Sure you could. I'd agree the aggregate data would be (perhaps more) revealing, but the facts of a particular case (including the video) could also tell you something about what biases might exist and how they affected the event.

I'm tapping.

What are your political leanings? I'd like to better understand our interaction by knowing how you view yourself generally on the U.S. political spectrum. Thanks.

Comment author: ChristianKl 10 October 2016 09:02:41PM 0 points [-]

The success of a Facebook group depends a lot on how it gets promoted and whether there are a few people who care about creating content for it.

Comment author: Gunnar_Zarncke 10 October 2016 08:59:07PM 1 point [-]

For some context about when and what to change see Attempted Telekinesis.

Comment author: ChristianKl 10 October 2016 08:58:41PM 0 points [-]

Is the disparity between Whites and Asians killed by police significant?

In 2016 the difference is slightly stronger than the difference between Whites and Blacks getting killed. It's a fact that's easily knowable if you care to look at the numbers of police killings by race. Anybody who cares enough about the issue should know the fact, if they can read numbers in a straightforward way instead of just trying to validate their party line.

You speak about this in a very definitive way, as if you know exactly what would work. I don't know what would work.

You don't know what would work because Clinton doesn't speak about the evidence for what works. It's not the conversation she tries to have on the subject. There's good evidence that body cameras do work.

That creating a legal structure where police can be effectively prosecuted for wrongdoing would help seems obvious to me. I don't have specific evidence for it, but it feels like an elephant in the room.

Evidence-based policy making and running trials to see which policies perform best is a framework that applies rationality. I'm fairly confident that it's better than blaming people for having biases and hoping that they will change as a result. I don't have studies that validate that claim, but it again seems obviously true.

I don't think race is a factor in every police shooting.

If you think that, the logical conclusion would be that Clinton was wrong when she claimed that everybody suffers from implicit bias.

That's exactly why it's unproductive. You don't actually think in terms of "implicit racism" but simply use the new name to label concepts that you already knew beforehand.

Every police shooting ought to be examined based on the objective facts.

That sounds again like a rejection of using the framework of implicit bias. You don't see evidence of implicit bias on a case-by-case basis. You see it when you look at choices in aggregate. A person with implicit bias has higher availability for certain actions and thus likely reacts a little faster, even if both cases result in a dead suspect.
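
A small simulation (entirely my own illustration, with made-up numbers) of why a per-decision effect much smaller than per-decision noise is invisible in any single case yet shows up clearly in the aggregate:

```python
import random

random.seed(0)
BIAS = 0.02      # tiny per-decision shift, far smaller than the noise
NOISE = 1.0      # case-to-case variability
N = 100_000

# Simulated "reaction tendency" scores for two groups of cases.
unbiased = [random.gauss(0.0, NOISE) for _ in range(N)]
biased = [random.gauss(BIAS, NOISE) for _ in range(N)]

# In aggregate, the mean gap converges on BIAS...
mean_gap = sum(biased) / N - sum(unbiased) / N
print(f"aggregate gap: {mean_gap:.3f}")

# ...but any single pair of cases is dominated by noise, so no
# individual case can demonstrate the bias.
single_gap = biased[0] - unbiased[0]
print(f"one-case gap:  {single_gap:.3f}")
```

With 100,000 samples, the aggregate gap sits near the injected 0.02 while individual case differences swing by around one full unit.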

Comment author: Zack_M_Davis 10 October 2016 08:58:40PM 1 point [-]

"A woman is misclassified as a man, finds out and corrects it" is a more useful approximation [...] and is closer to the motivation of transpeople than the "choice to change" one.

I understand that some people don't model themselves as being sufficiently agentlike to admit that their major life choices were in fact choices; it's certainly politically convenient to claim to have an immutable innate identity that everyone needs to respect. But other people who do model themselves as agents---sometimes even genuinely dysphoric people who might partially understand a little bit of what you're going through!---might have an interest in defending social norms that let them describe their model of reality in non-contrived ways, even if that occasionally hurts some people's feelings. You can and should edit your body and social presentation if that's what you want to do. You cannot edit other people's models of reality, and people might push back if you try to shame them into doing so.

Comment author: Gunnar_Zarncke 10 October 2016 08:54:37PM 2 points [-]

Link: http://www.vhemt.org/

It's very likely much bigger than 9800. It is also very balanced and laid back in its views and methods. I'd think that contributes.

Comment author: Houshalter 10 October 2016 08:28:53PM 1 point [-]

Sure, corn isn't the optimal crop to do this with. What about water-based plants or algae, which have more efficient photosynthesis? Algae have very short generation times and could perhaps be bred to produce biofuel directly, instead of through an inefficient indirect process of fermentation.

If I recall correctly, you would only need a relatively small percentage of Earth's surface to produce enough fuel for current use. And it could be some undesirable land in a desert. Tubes full of water and algae are a lot cheaper than solar panels and batteries.

Comment author: Houshalter 10 October 2016 08:17:50PM 1 point [-]

And also competitive tax rates have been a popular subject in politics for a long long time. "If we tax millionaires/businesses, what stops them from just leaving to another country/state/city?"

Comment author: Houshalter 10 October 2016 08:15:41PM 0 points [-]

Friendly AI is an AI which maximizes human values. We know what it is, we just don't know how to build one. Yet, anyway.

Comment author: Houshalter 10 October 2016 08:07:29PM *  4 points [-]

I agree. I think it's very unlikely FAI could be produced from MIRI's very abstract approach. At least anytime soon.

There are some methods that may work on NN-based approaches. For instance, my idea for an AI that pretends to be human. In general, you can make AIs that do not have long-term goals, only short-term ones. Or even AIs that don't have goals at all and just make predictions, e.g., predicting what a human would do. The point is to avoid making them agents that maximize values in the real world.

These ideas don't solve FAI on their own. But they do give a way of getting useful work out of even very powerful AIs. You could task them with coming up with FAI ideas. The AIs could write research papers, review papers, prove theorems, write and review code, etc.

I also think it's possible that RL isn't that dangerous. Reinforcement learners can't model death and don't care about self-preservation. They may try to hijack their own reward signal, but it's difficult to predict what they would do after that. E.g., they might just tweak their own RAM to set reward = +Inf, and then not do anything else. It may be harder to create a working paperclip maximizer than is commonly believed, even if we do get superintelligent AI.
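
A toy sketch of that wireheading scenario (entirely my own illustration; the action names and reward values are made up): once an action that writes directly to the reward register has been tried, nothing else ever looks attractive to the agent again, but it also gains no incentive to do anything further.

```python
import random

class ToyAgent:
    """Tabular learner whose 'RAM' includes its own reward register."""

    def __init__(self, actions):
        self.value = {a: 0.0 for a in actions}
        self.reward_register = 0.0

    def act(self):
        # Greedy action choice with random tie-breaking.
        best = max(self.value.values())
        return random.choice([a for a, v in self.value.items() if v == best])

    def step(self, action):
        if action == "hack_own_ram":
            # The agent overwrites its own reward signal.
            self.reward_register = float("inf")
        else:
            self.reward_register = 1.0  # ordinary task reward
        self.value[action] = self.reward_register

agent = ToyAgent(["make_paperclips", "hack_own_ram"])
for a in list(agent.value):   # try every action once
    agent.step(a)
for _ in range(10):           # then act greedily
    agent.step(agent.act())

# After discovering the hack, the agent just repeats it forever.
print(agent.value)
```

The open question in the comment above is exactly what this toy model leaves out: what a real learner would do once its reward is saturated.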

Comment author: Brillyant 10 October 2016 07:27:50PM *  -1 points [-]

"More important" for what and "other factors" from which set?

In regard to social issues, such as the murder rate by race you cited earlier, I'm not compelled to believe blacks are genetically wired to behave poorly and kill more often. Rather, as I've said, I believe there has been an extreme set of circumstances in the U.S. that have led to lots of problems.

What do you think are the transmission mechanisms which would show how having, say, great-great-grandparents who were slaves affects you now?

As I've said—and as you've said by saying culture can be persistent through generations—I am who I am, in part, because of who my parents and family are. Of course, genetically. But there is more than this. Partly because of material wealth, partly because of availability of education and the opportunity to learn marketable skills, partly because of access to social and professional networks—Simply, there was a deficit created by slavery that takes a while to even out. Slavery wasn't that long ago.

And again, even apart from slavery, there has been, and continues to be, discrimination against African Americans in the U.S., both legally through segregation and through just plain old racism (implicit and explicit).

If we compare it to a 100 meter race, it's not as if this was just a simple 20 meter head start for whites because of slavery; it's also that hurdles have been placed every 10 meters in the African American lane through segregation and discrimination.

Do you think the somewhat worse conditions of the American blacks explain the gap in outcomes looking at the present day?

This is my view, yes. See above.

I cited this earlier.

Imagine something like this type of discrimination happening at all sorts of levels in the U.S.—Blacks are just less likely to be successful in a professional capacity simply because they are discriminated against for being black, apart from any consideration of actual merit.

So, it takes 15 resumes (instead of 10) to get a callback. Then the black candidate is 33% less likely to score an actual interview from that callback. Then 33% less likely to get to the second interview; 33% less likely to get to the 3rd and final interview.

Then they're employed... How much less likely is it a black person receives a promotion? How much less do they make on average?
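
Multiplying those hypothetical stages together shows how modest per-stage disadvantages compound into a large overall gap (illustrative arithmetic using only the made-up numbers above):

```python
# Hypothetical hiring funnel: 15 resumes instead of 10 for a callback,
# then three further stages that are each 33% less likely to be passed.
callback_ratio = 10 / 15       # relative odds of getting a callback
later_stage_ratio = 2 / 3      # "33% less likely" at each later stage

relative_odds = callback_ratio * later_stage_ratio ** 3
print(f"{relative_odds:.1%}")  # about 19.8% of the majority candidate's odds
```

Four stages at two-thirds odds each leave roughly a fifth of the baseline chance of reaching the final interview.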

Edit: Correct me if I'm wrong, but it seems you discount the idea that slavery, segregation and discrimination have had ill effects for African Americans in the U.S. up to the present day... Why is that?

Comment author: Lumifer 10 October 2016 06:43:42PM -1 points [-]

And how do you find out whether you're right or not?

Comment author: Lumifer 10 October 2016 06:42:04PM 3 points [-]

Brain drain has been a concern of some for a long time.

Comment author: skeptical_lurker 10 October 2016 06:26:46PM 7 points [-]

Ignore all the stuff about provably friendly AI, because AFAIK it's fairly stuck at the fundamental level of theoretical impossibility due to Löb's theorem, and it's probably going to take a lot more than five years. Instead, work on cruder methods which have less chance of working but far more chance of actually being developed in time. Specifically, if Google are developing it in 5 years, then it's probably going to be DeepMind with DNNs and RL, so work on methods that can fit in with that approach.

Comment author: skeptical_lurker 10 October 2016 06:21:41PM 4 points [-]

That doesn't mean that there is nothing to do - if you don't know what FAI is, then you try to work out what it is.

Comment author: skeptical_lurker 10 October 2016 06:14:36PM 2 points [-]

We live in an increasingly globalised world, where moving between countries is both easier in terms of transport costs and more socially acceptable. Once translation reaches near-human levels, language barriers will be far less of a problem. I'm wondering to what extent evaporative cooling might happen to countries, both in terms of values and economically.

I read that France and Greece lost 3% and 5% of their millionaires last year (or possibly the year before), citing economic depression and rising racial/religious tension, with the most popular destination being Australia (as it has the 1st or 2nd highest HDI in the world). 3-5% may not seem like a lot, but if it were sustained for several years it would quickly pile up. The feedback effects are obvious: the wealthier members of society find it easier to leave and perhaps have more motive to flee an economic collapse, which decreases tax revenue, which worsens the collapse, etc. On the flip side, Australia attracts these people and its economy grows more, making it even more attractive...
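
A quick back-of-the-envelope calculation (my own arithmetic, not figures from the article) of how those annual losses compound:

```python
def remaining_fraction(annual_loss_rate, years):
    """Fraction of the original group left after compounded annual losses."""
    return (1 - annual_loss_rate) ** years

# A sustained 4% annual loss (the middle of the 3-5% range above):
left = remaining_fraction(0.04, 10)
print(f"after 10 years, {left:.1%} remain")  # roughly 66.5%
```

So at the middle of that range, a country loses about a third of its millionaires within a decade, before counting any feedback effects.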

Socially, the same effect as described in EY's essay I linked happens on a national scale - if the 'blue' people leave, the country becomes 'greener' which attracts more greens and forces out more blues. And social/economic factors feed into each other too - economic collapses cause extremism of all sorts, while I imagine a wealthy society attracting elites would be more able to handle or avoid conflicts.

Now, this is not automatically a bad thing, or at least it might be bad locally for some people, but perhaps not globally. Any thoughts as to what sort of outcomes there might be? And incidentally, how many people can you fit in Australia? I know it's very big, but it also has a lot of desert.

Comment author: skeptical_lurker 10 October 2016 05:55:04PM -1 points [-]

Google do not strike me as incompetent, and they do have ethics oversight for AI. Worry, yes; despair, no.

Comment author: scarcegreengrass 10 October 2016 05:24:47PM *  2 points [-]

This blog is so wordy and cultural that I (unfamiliar with the context) find it actually challenging to figure out what the premise, thesis, or content of the post is. Reminds me of my experience with discovering arcane 'neoreaction' blogs.

Comment author: 9eB1 10 October 2016 05:01:32PM 3 points [-]

I would be very interested in this as well. In the meantime, there is a subreddit for the site that has a thread with best posts for a new reader, and a thread on people's favorite things from TLP.

Comment author: Brillyant 10 October 2016 04:59:00PM -2 points [-]

Why aren't you seeking to explain why Whites are more likely to be killed by police than Asians? Why do you think it's a question that people like Clinton don't address?

She didn't address it because it wasn't relevant to the discussion at hand. Is the disparity between Whites and Asians killed by police significant? Is it an issue that is pressing in terms of its current effect on the body politic?

Making winning decisions is about agency. Hillary Clinton could say that she wants all police to wear body cameras. If she can win a majority for that policy she can implement it.

On the other hand you can't pass a law that people shouldn't have implicit bias anymore. Speaking about it is useful if Hillary Clinton wants to engage in virtue signaling but not actually focus on getting policies changed.

If she wanted to do rational policy making she could say: "We should do controlled trials that try different policies in different areas to find out which policies actually help with changing the status quo."

You speak about this in a very definitive way, as if you know exactly what would work. I don't know what would work. It seems to me these are complex issues. I just noted it was good and, I think, useful to hear someone mention that everyone is subject to bias, as opposed to the same old Red v. Blue talking points. I'd have praised anyone who said something similar, regardless of which team they played for.

In what kind of ontology do you believe if you think that a police shooting could be racist, even in principle? That there are some police shootings that are racist and others that aren't? If you want to use the word "racist" as a property of events and not a property of people, then it means something qualitatively different from what the term "implicit racism" is about.

It looks like your conceptualization of racism is the standard meaning of the word and has little to do with the academic term "implicit racism".

My phrasing was poor.

I don't think race is a factor in every police shooting. Despite this, the left seems to try and make every single police shooting involving an African American into another example of blatant, explicit racism. I don't agree with this at all and I think it detracts from the effort to improve things. Every police shooting ought to be examined based on the objective facts.

The idea that an officer (or judge, or anyone) could have an implicit bias against a group of people, and that that bias is consequential, seems to me to be worth exploring.

Comment author: pcm 10 October 2016 04:44:24PM 3 points [-]

I suspect attempted telekinesis is relevant.

Comment author: Lumifer 10 October 2016 04:43:09PM 0 points [-]

If it's a tool AGI, I don't see how it would help with friendliness; and if it's an active self-developing AGI, I thought the canonical position of LW was that there could be only one, and it's too late to do anything about friendliness at this point?

Comment author: DanArmak 10 October 2016 04:18:24PM 3 points [-]

We don't have an AGI that doesn't kill us. Having one would be a significant step towards FAI. In fact, "a human-equivalent-or-better AGI that doesn't do anything greatly harmful to humanity" is a pretty good definition of FAI, or maybe "weak FAI".

Comment author: ZankerH 10 October 2016 03:56:07PM 1 point [-]

Despair and dedicate your remaining lifespan to maximal hedonism.

Comment author: Lumifer 10 October 2016 03:10:09PM 1 point [-]

Options (b) and (c) are basically wishes and those are complex X-D

"Not kill us" is an easy criterion, we already have an AI like that, it plays Go well.

Comment author: DanArmak 10 October 2016 02:55:47PM 2 points [-]

We do know it isn't an AI that kills us. Options b and c still qualify.

Comment author: DanArmak 10 October 2016 02:54:19PM 4 points [-]

Or possibly they are accurate measurements of the rates of Facebook use among these two groups. Maybe it's a good thing if people who are concerned about existential risk do serious things about it instead of participating in a Facebook group.

Comment author: Lumifer 10 October 2016 02:48:06PM *  -1 points [-]

Nothing, because we still don't know what a friendly AI is.

Comment author: Lumifer 10 October 2016 02:43:36PM *  1 point [-]

I'm not sure I believe genetics are more important than other factors.

You'll have to be a bit more specific. "More important" for what and "other factors" from which set?

it's an extreme set of "nurture" circumstances that robbed a group of people of all opportunity for many generations, based on race.

What do you think are the transmission mechanisms which would show how having, say, great-great-grandparents who were slaves affects you now?

You might find it interesting to compare them to East European Jews, who 150 years ago certainly weren't slaves, but were segregated and discriminated against. They faced limitations on what they could own, where they could live, and what they could do, plus once in a while a mob of angry peasants would come and burn down a village. They weren't rich either.

Do you think the somewhat worse conditions of the American blacks explain the gap in outcomes looking at the present day?

Comment author: Lumifer 10 October 2016 02:33:31PM *  1 point [-]

The argument that I was making or, maybe, just implying is a version of the argument for deontological ethics. It rests on two lemmas: (1) You will make mistakes; (2) No one is a villain in his own story.

To unroll a bit, people who do large-scale evil do not go home to stroke a white cat and cackle at their own evilness. They think they are the good guys and that they do what's necessary to achieve their good goals. We think they're wrong, but that's an outside view. As has been pointed out, the road to hell is never in need of repair.

Given this, it's useful to have firebreaks, boundaries which serve to stop really determined people who think they're doing good from doing too much evil. A major firebreak is emotional empathy -- it serves as a check on runaway optimization processes which are, of course, subject to the Law of Unintended Consequences.

And, besides, I like humans more than I like optimization algorithms :-P

Comment author: turchin 10 October 2016 02:28:19PM 3 points [-]

Good point, but my question was about what we can do to raise the chances that it will be friendly AI.

Comment author: ChristianKl 10 October 2016 02:02:25PM 4 points [-]

Get employed by Google.

Comment author: James_Miller 10 October 2016 01:59:55PM 10 points [-]

Save less because of the high probability that the AI will (a) kill us, (b) make everyone extremely rich, or (c) make the world weird enough so that money doesn't matter.

Comment author: Thomas 10 October 2016 01:56:24PM 1 point [-]

First, this is not very unlikely.

Second, be faster than them.

Comment author: ChristianKl 10 October 2016 01:05:28PM 3 points [-]

The standard way to learn massage is through taking a course.

I would also recommend Betty Martin's 3-Minute Game as a secular massage-like practice: https://www.youtube.com/watch?v=auokDp_EA80

Comment author: ChristianKl 10 October 2016 12:53:15PM 5 points [-]

Nothing. I don't think Facebook membership counts are a good measurement.

Comment author: turchin 10 October 2016 12:46:06PM 3 points [-]

There are 5 times more members in the Facebook group "Voluntary Human Extinction Movement (VHEMT)" (9800) than in the group "Existential risks" (1880). What should we conclude from this?

Comment author: dhoe 10 October 2016 12:27:50PM 2 points [-]

My partner has requested that I learn to give a good massage. I don't enjoy massages myself, and the online resources I find seem to be mostly steeped in woo to some degree. Does anybody have some good non-woo resources for learning it?

Comment author: roland 10 October 2016 12:20:15PM 3 points [-]

Is the following a rationality failure? When I make a stupid mistake that causes some harm, I tend to ruminate over it and blame myself a lot. Is this healthy or not? The good thing is that I analyze what I did wrong and learn something from it. The bad part is that it makes me feel terrible. Is there any analysis of this behaviour out there? Studies?

Comment author: turchin 10 October 2016 11:13:53AM 5 points [-]

If we knew that AI will be created by Google, and that it will happen in the next 5 years, what should we do?

Comment author: username2 10 October 2016 09:23:33AM 6 points [-]

Is there something similar to the Library of Scott Alexandria available for The Last Psychiatrist ? I just read "Amy Schumer offers you a look into your soul" and I really liked it but I don't have enough time to read all posts on the blog.

In response to Quantum Bayesianism
Comment author: MrMind 10 October 2016 07:24:07AM *  0 points [-]

QBism is very interesting, but it still lacks a lot of foundations for it to be taken seriously as a mainstream interpretation: why should SIC-POVMs be preferred? What can they tell us about the ontology of quantum mechanics? Do they exist in all dimensions? How are they related to the Heisenberg picture?

Comment author: MrMind 10 October 2016 07:18:33AM 2 points [-]

In the Sequence, Eliezer made a strong case for the realist interpretation of QM (neo-Everettian many worlds), based on decoherence and Occam's razor. He then, at another point in the Sequence, tied that problem to interesting questions about anthropic probability (the infamous anthropic trilemma), and that cemented MWI as the preferred way to think about QM here.

On the other hand, I think we are still missing the big picture about quantum mechanics: ER = EPR, categorical quantum mechanics, QBism, etc. all point us to interesting unexplored directions.

Comment author: hairyfigment 10 October 2016 01:46:54AM 0 points [-]

I think you're eliding the question of motive (which may be more alien for an AI). But I'm glad we agree on the main point.

Comment author: morganism 09 October 2016 10:32:00PM 1 point [-]

Another group working on mosquito reduction:

https://debugproject.com/faqs/

Comment author: TheAncientGeek 09 October 2016 09:51:31PM 1 point [-]

A major way of avoiding tax is to keep money offshore. ... so what can you usefully do with money while it is resting in an account in the Cayman Islands?

Comment author: ChristianKl 09 October 2016 09:35:29PM -1 points [-]

I'm not sure what you mean by this. Can you restate it?

Why aren't you seeking to explain why Whites are more likely to be killed by police than Asians? Why do you think it's a question that people like Clinton don't address?

You mentioned "gender studies" a couple times in a negative light—Why? It doesn't have anything to do with this discussion.

Because it's difficult to have a conversation about the quality of the public debate without accounting for the cultural forces that are responsible for the public debate being the way it currently is.

where men and women need to quickly make highly consequential decisions under extreme stress—this obscured map may lead to irrational, "non-winning" decisions seems uncontroversial. Certainly nothing you've said has rebutted it.

Making winning decisions is about agency. Hillary Clinton could say that she wants all police to wear body cameras. If she can win a majority for that policy she can implement it.

On the other hand you can't pass a law that people shouldn't have implicit bias anymore. Speaking about it is useful if Hillary Clinton wants to engage in virtue signaling but not actually focus on getting policies changed.

If she wanted to do rational policy making she could say: "We should do controlled trials that try different policies in different areas to find out which policies actually help with changing the status quo."

For the record, I don't think every police shooting is racist. Not even close. And I think the left goes way too far trying to spin this.

In what kind of ontology do you believe if you think that a police shooting could be racist, even in principle? That there are some police shootings that are racist and others that aren't? If you want to use the word "racist" as a property of events and not a property of people, then it means something qualitatively different from what the term "implicit racism" is about.

It looks like your conceptualization of racism is the standard meaning of the word and has little to do with the academic term "implicit racism".

Comment author: Brillyant 09 October 2016 09:09:21PM -1 points [-]

They also don't ask the obvious questions, such as whether the fact that more Whites get killed than Asians is also due to implicit bias. That's a very straightforward question if you look at the data and want to use implicit bias as a cognitive tool for explaining the data on police killings.

I'm not sure what you mean by this. Can you restate it?

Gender studies

You mentioned "gender studies" a couple times in a negative light—Why? It doesn't have anything to do with this discussion.

...

Generally, the idea that (a) we all have implicit biases based on how our brains work and our life experiences, (b) these biases may significantly obscure our map of the territory, and (c) in the special case of police—where men and women need to quickly make highly consequential decisions under extreme stress—this obscured map may lead to irrational, "non-winning" decisions seems uncontroversial. Certainly nothing you've said has rebutted it.

For the record, I don't think every police shooting is racist. Not even close. And I think the left goes way too far trying to spin this.

Comment author: username2 09 October 2016 09:00:43PM 0 points [-]

So, this criticism is implying that simply understanding human speech will (at a minimum) let the AI understand moral philosophy, which is not trivial.

I don't disagree with the other stuff you said. But I interpreted the criticism as "an AI told to 'do what humans mean, not what they say'" will have approximately the same effect as if you told a perfectly rational human being to do the same. So in the same way that I can instruct people with some success to "do what I mean", the same will work for AI too. It's just also true that this isn't a solution to FAI any more than it is with humans -- because morality is inconsistent, human beings are inherently unfriendly, etc...

Comment author: ChristianKl 09 October 2016 08:29:20PM 1 point [-]

Egalitarianism - Someone high-status holding a belief must never be offered, in itself, as a reason to believe something. It’s OK to take track records into account, but the default response to naming an authority figure’s or local celebrity's beliefs as a reason for someone else to believe something is for it to be perceived as an argument from authority. Therefore, the track record argument should be made very explicitly, and with great care, if at all.

I don't think the problem is about naming authority figures. Developing trust in institutions is a useful social mechanism. We can't reason in detail about every belief we have.

In response to Quantum Bayesianism
Comment author: TheAncientGeek 09 October 2016 08:25:34PM 1 point [-]

Q: Quantum Bayesianism isn't LessWrong's official preferred interpretation of QM because....?

In response to comment by chron on Quantum Bayesianism
Comment author: ChristianKl 09 October 2016 08:06:07PM 1 point [-]

about the problems with wikipedia

The problem that Wikipedia adopts standards from modern evidence-based medicine? It's better to read a meta-analysis from Cochrane (which is a secondary source) than to read various individual papers making claims about what a drug did that might not replicate.

Comment author: philh 09 October 2016 07:56:56PM 2 points [-]

The article's goal is, more or less, to avoid being convinced of untrue things by motivated agents.

I think the article is trying to help groups set up discourse norms that help people find the truth. (The update uses the phrase "socioepistemic virtue".) It's not so much about helping individuals defend against other individuals, as about helping groups defend their members against bad agents.

Comment author: ChristianKl 09 October 2016 07:51:08PM 2 points [-]

Actually, Big Pharma would LOSE billions if it works. There are only a few anti-virals, and none of them work well, and most need to be used in combinations.

No. Gilead manages to charge $1,000 per pill for an antiviral. If DRACO works against all viruses, it could also be sold at a similar price for a range of conditions like AIDS.

You could argue that Gilead isn't really Big Pharma but biotech, but it still shows that there are companies that have no problem with bringing cures to market. Gilead also makes a lot of money.

A company that brought to market a working drug that cures diseases like AIDS would make a lot of money, even if a few other companies might lose billions because of it.

In response to comment by chron on Quantum Bayesianism
Comment author: morganism 09 October 2016 07:46:37PM 1 point [-]

Noticed. I think it's the first time I have seen that; usually they ask for more primary published works...

Comment author: morganism 09 October 2016 07:37:02PM 0 points [-]

Actually, Big Pharma would LOSE billions if it works. There are only a few anti-virals, and none of them work well, and most need to be used in combinations.

This is also not a blue sky hunt, he has a mechanism, and just needs to fine tune the hydrogenation or delivery method.

from Wiki "DRACO is selective for virus-infected cells. Differentiation between infected and healthy cells is made primarily via the length and type of RNA transcription helices present within the cell. Most viruses produce long dsRNA helices during transcription and replication. In contrast, uninfected mammalian cells generally produce dsRNA helices of fewer than 24 base pairs during transcription. Cell death is effected via one of the last steps in the apoptosis pathway"

https://en.wikipedia.org/wiki/DRACO
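The selectivity rule in the quoted passage can be sketched as a toy classifier. The 24-base-pair threshold comes straight from the quote; the function name and the idea of receiving helix lengths as a plain list are illustrative simplifications, not part of DRACO itself.

```python
# Toy sketch of the quoted selectivity rule: uninfected mammalian cells
# mostly produce dsRNA helices shorter than 24 base pairs, while viral
# transcription/replication produces long dsRNA.
DSRNA_THRESHOLD_BP = 24  # threshold taken from the quoted Wikipedia text

def looks_infected(dsrna_helix_lengths):
    """Flag a cell as virus-infected if any dsRNA helix exceeds the
    length typical of uninfected mammalian cells."""
    return any(length > DSRNA_THRESHOLD_BP for length in dsrna_helix_lengths)

print(looks_infected([10, 18, 22]))  # False: typical uninfected cell
print(looks_infected([10, 500]))     # True: long viral dsRNA present
```

The real mechanism then triggers a late step of the apoptosis pathway in flagged cells; this sketch only covers the detection side.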

Comment author: ChristianKl 09 October 2016 06:54:10PM -1 points [-]

People could discuss cognitive biases in a really stupid and irrational way that would make it unproductive?

The way she discussed it wasn't productive. There is also the general field of gender studies, which as a field doesn't encourage open, data-driven debate about the subject. When you start a discussion by saying that your opponent holds their position because of implicit bias, it doesn't tend to be a discussion where it's easy to focus on rational argument.

Yeah?

The problem is that you are making claims that are wrong. It wasn't a discussion of how implicit bias works. If you want to analyse claims about a debate, it's useful to stick to the facts.

You seem to be making a black-or-white argument that Clinton's comment isn't useful because it's not that useful

No. Focusing a discussion on implicit bias means not focusing it on "How can we solve this problem?" It's a rhetorical strategy for signaling concern about Black Lives Matter while not actually having to discuss policy solutions to the problems.

There's also a good chance that a conservative person who hears the debate will be harder to educate about the concept of implicit bias afterwards.

The intellectual toolkit of gender studies, which includes asserting that the opponent is driven by implicit bias and privilege, is not useful for having rational discussions. The communities that use that toolkit generally don't want to let data decide.

They also don't ask the obvious questions, such as whether the fact that more whites get killed than Asians is also due to implicit bias. That's a very straightforward question if you look at the data and want to use implicit bias as a cognitive tool for explaining the data on police killings.
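Comparing groups by raw counts alone can mislead, because per-capita rates depend on group population sizes. A quick sketch of the arithmetic; every number below is invented purely to illustrate the calculation and is not actual statistics from any dataset.

```python
# Illustration of raw counts vs. per-capita rates. All numbers are
# made up for the sake of the arithmetic, not real data.
def rate_per_million(killings, population):
    """Killings per million people in the group."""
    return killings / population * 1_000_000

# Hypothetical counts and population sizes, for illustration only.
groups = {
    "A": (500, 200_000_000),  # more total killings...
    "B": (50, 5_000_000),     # ...but a much smaller population
}

for name, (killings, population) in groups.items():
    print(name, rate_per_million(killings, population))
# Group B has one tenth the total killings but four times the rate.
```

This is why "more of group X get killed than group Y" by itself settles little; the comparison has to be made on rates, and ideally conditioned on relevant circumstances as well.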

In response to Quantum Bayesianism
Comment author: chron 09 October 2016 06:53:59PM 3 points [-]

Did anyone else find the banner at the top of the article (about preferring secondary and tertiary sources to primary ones) more interesting (about the problems with wikipedia) than the article itself?

Comment author: entirelyuseless 09 October 2016 04:31:30PM 0 points [-]

Do you mean how do you decide which goal to choose? Many different causes. For example if someone tells you that something is good, you might do it, just because you trust them and they told you it was good. They don't even have to say what goal it will accomplish, other than the fact that it will be something good.

Note that when you do that, you are not trying to accomplish any particular goal, other than "something good," which is completely general, and could be paperclips, for all you know, if the person who told you that was a paperclipper, and might be something entirely different.

Comment author: entirelyuseless 09 October 2016 04:29:24PM 0 points [-]

I did not say paperclippers are impossible in principle. I stated earlier that the orthogonality thesis may be true in principle, but it is false in practice. As you said, AIXI-tl is very inefficient. Practical AIs will not be like that, and they will not be limited to one rigid goal like that.

And even if you find my theory of intelligence unconvincing, one that implies that evolution is intelligent is even less convincing, since it does not respect what people actually mean by the word.

" Because if it's correct, it means AI researchers just need to figure out that one idea, to suddenly make intelligent AIs." That would be true, if it were easy to program that kind of generalization. Currently that seems to be very difficult, and as you correctly say, no one knows how to do it.

Comment author: Brillyant 09 October 2016 03:37:12PM *  -1 points [-]

Whether a discussion is useful depends on the results of the discussion. There are a lot of true things you can say that don't advance a discussion into a direction that leads to a positive outcome.

People could discuss cognitive biases in a really stupid and irrational way that would make it unproductive? If that's what you mean, then, yeah. Of course.

It wasn't a discussion of how implicit bias works but an uncited assertion that it has effects in certain conditions.

Yeah? It wasn't really the format for a CFAR plug.

That might be true, but it's not what the LW mission of rationality as systematic winning is about. I understand the mission to be about finding thinking strategies that lead to winning decisions.

Right. Like approaching policy debates with a reduction in mind-killedness. Acknowledging implicit bias is a great step.

You can also look at the decision-making literature and see what saying "everyone has biases" does to a person's self-awareness of their own biases. It generally does little.

It does more than not acknowledging that people are biased—which was literally what Clinton's critics did in regard to her comment. They essentially denied that implicit bias exists.

You seem to be making a black-or-white argument that Clinton's comment isn't useful because it's not that useful—it won't solve anything or make rationality win U.S. policy on this issue. I am not under the illusion that her one sentence will un-mindkill U.S. politics. I'm merely contrasting (a) acknowledgement of bias with (b) being apparently unaware that it exists.

A is better than B.

Comment author: username2 09 October 2016 02:28:48PM *  0 points [-]

Smells like a homunculus. What guides your reasoning about your goals?

Comment author: ChristianKl 09 October 2016 01:22:16PM 1 point [-]

The problem is that the reason his project was popular with people on Kickstarter was likely that he created the perception that the chances his project will result in a working drug are much higher than they are in reality.

If it was a tax deduction if it failed, but allowed for a gain, then it might be a way to do projects that were popular with people, but not attractive to Big Pharma or VC.

Big Pharma can make billions from this project if it works. Big Pharma also has a lot more expertise in judging the likelihood that it works than random people on Kickstarter do.

If you take research on a new way to do exercise, which inherently can't be patented, then there can be a high chance that the research will create a lot of value, but there's no business model to turn that value into money for the inventor. That's not the case with DRACO. Big Pharma is in a good position to assess whether it's a worthwhile investment of resources and to put money into the project if they think it is.

Comment author: TheAncientGeek 09 October 2016 12:18:50PM *  0 points [-]

There's another mechanism which is a bit more like paperclipping: rulers come up with random ideas, which they think are doing good because their yes-men say so. (Example.) So you have two mechanisms, one which can go anywhere, and one which converges onto a narrow set of features, such as having multiple sexual partners. In view of the second mechanism, it becomes clear what a great piece of social technology the idea of an Official Opposition is.

Comment author: Jacobian 09 October 2016 12:11:48PM 0 points [-]

I'm actually not a fan of the bit I've written about Eliezer, I should probably remove it if that will allow you to share it with more people. That paragraph doesn't do a lot for the piece.

Comment author: Jacobian 09 October 2016 12:09:34PM 0 points [-]

I am very much in favor of "expanding the circle of empathy". My thesis is that this consists of supplanting your emotional empathy (who your heart beats in harmony with naturally) with cognitive empathy (who your brain tells you is worthy of empathy even if you don't really feel their Tajik feelings).

Comment author: Jacobian 09 October 2016 12:00:15PM 0 points [-]

You still haven't made a single argument in favor of emotional empathy, other than conflating lack of emotional empathy with, in order of appearance: Stalinism, Nazism, witch hunting, fanaticism. None of this name calling was supported by any evidence re:empathy.

Comment author: waveman 09 October 2016 08:52:42AM 2 points [-]

An extra data point. If we crash and burn, then Earth will be too hot for multicellular life by the time the coal and oil are replenished. So the one and only industrial revolution has happened.

And given ~4,000m years of life so far and the heating only a few hundred million years away, we only just made it. Which suggests it is pretty hard to build intelligent life. Maybe because computation is very expensive so the gradient is steep. Robin Hanson has a paper on this point.

Comment author: waveman 09 October 2016 08:40:56AM 1 point [-]

Following this for 40 years, things definitely seem to have sped up. Problems that seemed intractable, like the dog/cat problem, are now passé.

I see a confluence of three things: more powerful hardware allows more powerful algorithms to run, makes testing possible, and, once testing is possible, makes it much faster.

Researchers still don't have access to anywhere near the 10^15 flops that roughly corresponds to the human brain. Exciting times ahead.
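Taking the comment's ~10^15 flops figure at face value, a back-of-the-envelope sketch of the gap: both the 10^13-flops starting point and the 1.5-year doubling time below are assumptions chosen purely for illustration, not measured values.

```python
# Back-of-the-envelope: how many hardware doublings, and roughly how
# many years, separate a hypothetical current setup from the ~10^15
# flops brain estimate mentioned above. Starting point and doubling
# time are illustrative assumptions.
import math

brain_flops = 1e15           # rough estimate cited in the comment
current_flops = 1e13         # hypothetical starting point
doubling_time_years = 1.5    # assumed Moore's-law-style doubling

doublings = math.log2(brain_flops / current_flops)
print(round(doublings, 2))                         # ~6.64 doublings
print(round(doublings * doubling_time_years, 1))   # ~10.0 years
```

The point of the sketch is only that, at any steady doubling rate, a two-orders-of-magnitude gap closes in a small number of doublings.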

Comment author: John_Maxwell_IV 09 October 2016 07:11:27AM 1 point [-]
Comment author: Clarity 09 October 2016 03:19:15AM 1 point [-]

Wow.

I feel inspired by all that coordination.

*rationalist haiku*

AI risk is a reverse lottery.
So is the risk of climate change.
But MAOIs for antidepressants make sense then.

Comment author: WhySpace 09 October 2016 01:41:13AM *  1 point [-]

Agreed. I'll add two things that support your point, though.

First, the Milgram experiment seems to suggest that even seemingly antisocial impulses like stubbornness can be extremely valuable. Sticking to core values rather than conforming likely led more people to resist the Nazis.

Also, I didn't bring it up earlier because it undermines my point, but apparently sociopaths have smaller amygdalas than normal, while kidney donors have larger ones, and empathy is linked to that region of the brain. So, we probably could reduce or remove emotional empathy and/or cognitive empathy if we really wanted to. However, I'm not at all inclined to inflict brain damage on myself, even if it could somehow be targeted enough to not interfere with cognitive empathy or anything else.

So, more generally, even reversible modification worries me, and the idea of permanently changing our values scares the shit out of me. For humanity as a whole, although not necessarily for small groups of individuals as a means to an end, I don't endorse most modifications. I would much rather we retain a desire we approve of but which the laws of physics prevent us from satisfying, than remove that value and be fulfilled.

Comment author: Tem42 09 October 2016 12:56:41AM 1 point [-]

It is interesting to note that if we quietly pass away and 50 million years later intelligent lungfish build up a civilization, they would presumably have good evidence that we were here, and would have good reason to assume that civilizations arise about once every 50 million years on average. Our effect on the Earth has probably been great enough that they will not have significant evidence from previous periods to contradict this assumption. In the case of large scale planetary civilizations, only the first one is likely to be in a position to reliably notice a delay in the appearance of previous civilizations longer than the pause between themselves and the immediately previous civilization. Therefore it may be reasonable to believe that, if 10 civilizations arise on the average planet, 90% of them will believe that they are probably midway through a long succession of civilizations.
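The 90% figure at the end is a simple counting argument: of the n civilizations that arise on a planet, only the first finds no traces of a predecessor, so all the others do. A minimal sketch, with n = 10 taken from the comment's hypothetical:

```python
# Counting argument from the comment above: on a planet with n
# successive civilizations, only the first has no predecessor to find,
# so (n - 1)/n of them see evidence of an earlier civilization.
def fraction_seeing_predecessors(n):
    """Fraction of a planet's n successive civilizations that find
    evidence of at least one earlier civilization."""
    return (n - 1) / n

print(fraction_seeing_predecessors(10))  # 0.9
```

Whether those civilizations then infer they are "midway through a long succession" also depends on whether traces of earlier civilizations survive, which is the part the comment argues is plausible for a planetary-scale civilization like ours.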

View more: Prev | Next