All of Fronken's Comments + Replies

Also, what the heck are you talking about?

Wireheading. The term is not a metaphor, and it's not a hypothetical. You can literally stick a wire into someone's pleasure centers and activate them, using only non-groundbreaking neuroscience.

It's been tested on humans, but AFAIK no-one has ever felt compelled to go any further.

(Yeah, seems like it might be evidence. But then, maybe akrasia...)

0 Said Achmiz
Where and what are these "pleasure centers", exactly?

So if we have a heresy, then exposing it as actually true would be good, because we want to know the truth - hang on.

... can't we rewire brains right now? We just ... don't.

1 Said Achmiz
Well, we must not be hedonistic utilitarians then, right? Because if we were, and we could, we would. Edit: Also, what the heck are you talking about?

I think he meant "Jesus myth" proponents, who IIRC are ... dubious.

-1 dthunt
Well, hence "historical Jesus". If I were talking about Jesus mythicists, I would have said that. I ignorantly assume there aren't that many Jesus mythicist camps fighting each other over specific theories of mythicism... I'm actually looking forward to Richard Carrier's book on that, but I do not expect it to settle the mythicism question.

I asked about this a while ago, and apparently the software doesn't support it :/

You're supposed to roleplay a Gatekeeper. There is more than money on the line.

5 jbay
Yes, certainly. This is mainly directed toward those people who are confused by what anyone could possibly say to them through a text terminal that would be worth forfeiting winnings of $10. I point this out because I think the people who believe nobody could convince them when there's $10 on the line aren't being creative enough in imagining what the AI could offer them that would make it worth voluntarily losing the game.

In a real-life situation with a real AI in a box posing a real threat to humanity, I doubt anyone would care so much about a captivating novel, which is why I say it's tongue-in-cheek. But just like losing $10 is a poor substitute incentive for humanity's demise, so is an entertaining novel a poor substitute for what a superintelligence might communicate through a text terminal.

Most of the discussions I've seen so far involve the AI trying to convince the gatekeeper that it's friendly through the use of pretty sketchy in-roleplay logical arguments (like "my source code has been inspected by experts"). Or in-roleplay offers like "your child has cancer and only I can cure it", which is easy enough to disregard by stepping out of character, even though it might be much more compelling if your child actually had cancer. A real gatekeeper might be convinced by that line, but a roleplaying Gatekeeper would not (unless they were more serious about roleplaying than about winning money).

So I hope to illustrate that the AI can step out of the roleplay in its bargaining, even while staying within the constraints of the rules; if the AI actually just spent two hours typing out a beautiful and engrossing story with a cliffhanger ending, there are people who would forfeit money to see it finished. The AI's goal is to get the Gatekeeper to let it out, and that alone, and if they're going all-out and trying to win then they should not handicap themselves by imagining other objectives (such as convincing the Gatekeeper that it'd be safe to let them out). As

The historical Flamel also has an official grave site in France (Paris, if I remember correctly); I want to think he lived to his eighties, but it's been a few months since I last read about him.

I recall hearing that "grave" does not contain a body, although I'm not sure how the person who told me that knew. (They were suggesting using him in fiction, much as HPMOR did.)

Isn't "Dark Side" approximately "effective, but dangerous"?

Well ... isn't it? What others are you thinking of? None spring to my mind.

3 MugaSofer
Upvoted, but I think you misinterpreted the grandparent slightly. If you don't sign up for cryonics, you will have no chance at all of coming back if you die - the claim is literally true - but the grandparent seems to be considering the broader class of "might hopefully delay (or prevent) death": anti-aging techniques, uploading, even time travel.

He never said they were "rejected" or "ruled out". Just weaker than the conversations you, as a cultured political disputant, experience - which I assume is because the average person is much worse at this.

Probably not true, still, unless you have the raw mind power to deduce all the flaws of the human mind from that mere conversation. And even then, only maybe.

Taking it as Bayesian evidence: arguably rational, although it's so small your brain might round it up just to keep track of it, so it's risky; and it may actually be negative (because psychopaths might be less likely to tell you something that might give them away).

Worrying about said evidence: definitely irrational. Understandable, of course, with the low sanity waterline and all...
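(For the curious, a minimal sketch of the update being described. Every number here is a made-up assumption for illustration - the 1% base rate and both disclosure probabilities are invented, not from the thread. The point is that the sign of the evidence depends on the likelihood ratio, which is why "psychopaths hide giveaways" can flip it negative.)

```python
# Hypothetical numbers, purely for illustration: how disclosing a fantasy
# could be weak - or even *negative* - Bayesian evidence of psychopathy.

def posterior(prior, p_disclose_given_psychopath, p_disclose_given_normal):
    """Odds-form Bayes update on observing the disclosure."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_disclose_given_psychopath / p_disclose_given_normal
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

base_rate = 0.01  # assumed prior probability of psychopathy

# If psychopaths are slightly *more* likely to share such a fantasy,
# the update is positive but tiny:
print(posterior(base_rate, 0.06, 0.05))  # ~0.012

# If psychopaths avoid saying anything that might give them away, as the
# comment suggests, the same disclosure is (weak) evidence of innocence:
print(posterior(base_rate, 0.02, 0.05))  # ~0.004
```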

-4 Eugine_Nier
Why?

Upvoted for mention of "applause lights".

Weirded out at the oversharing, obviously.

Assuming the context was one where sharing this somehow fit ... somewhat squicked, but I would probably be squicked by some of their fantasies. That's fantasies for you.

Oh, and some of the less rational ones might worry that this was an indicator that I was a dangerous psychopath. Probably the same ones who equate "pedophile" with "pedophile who fantasises about kidnap, rape, torture and murder" ,':-. I dunno.

-1 Eugine_Nier
Why is this irrational? Having a fantasy of doing X means you're more likely to do X.

I think that "human pleasure" is such a complicated idea that trying to program it in formally is asking for disaster. That's one of the things that you should definitely let the AI figure out for itself.

[...]

Eliezer is aware of this problem, but hopes to avoid disaster by being especially smart and careful. That approach has what I think is a bad expected value of outcome.

Huh, I thought he wanted to use CEV?

3 nshepperd
You are right. I think PhilGoetz must be confused. At least, EY has certainly never suggested programming an AI to maximise human pleasure.

Sorry, I thought you were pointing out something Orphan had acknowledged already - that's a different point. Retracted & upvoted.

2nd try replying to this, since people worried the first was hard to parse:

I think that sexism is mostly folk psychology - false when tested, but not untestable given smart experimenters. Thus, feminism predicts that sexist hypotheses are not the way the world actually is, and that's empirical.

But, there are a lot of people rallying under flags with "feminism" on them, and they vary widely. So many of them probably just assume the current facts as we know them (good) and so merely claim that under those facts certain things may be wrong, ethically. And you have others who actually believe sexist claims but still want to be called feminist. So maybe tabooing is needed.

Ah yeah, "successful" should maybe have been "accepted" or "universal"; or maybe "claims" should have been "arguments". Thanks!

I'd also say: being downvoted by one person is not particularly strong evidence of anything; don't get upset about it.

My first attempt to clarify was downvoted too :(

the obvious diagnosis is that you and Argency disagree about what "feminism" means

... oh. It is a very vague word ... I figured they were just underestimating the coherence of opposing arguments, since that's easy to do when the position in question is discredited enough that you don't encounter them... I'll try asking them what they meant, good idea.

ಠ_ಠ

Each community could have its own standards and this wouldn't pose too much of an issue, and this is more or less the way things worked.

The reply:

I think you are overestimating pre-internet uniformity here [...] Each group has different ideas of what would constitute provocative clothing.

[This comment is no longer endorsed by its author]
3 JoshuaZ
I'm not sure what your point is with those two quotes. Are you trying to say that OrphanWilde already addressed what I was saying? If so, the points are different: Orphan was discussing how distinct groups have different standards. The point I was making was that in small geographic areas one can have a large number of groups with different standards that all have to interact with each other. And the example of the Modern Orthodox showed that, even within a small, superficially uniform group, there can be a lot of variation.

That's why only "in an ideal world", methinks.

I think that's actually the common model: that sex is something women have and men want. So which of the two it is simply depends on whether you're inclined to grant it or not, and on which side you view it from. This may be a phenomenon unrelated to dom/sub (or, alternately, the source of a dom/sub effect).

-2 MugaSofer
That's an interesting "model", but it doesn't seem to be making any predictions here - it fits whether Wilde is right or wrong, so it's probably irrelevant.

OK, I'm downvoted, so I must have missed something. Help, guys?

1 gjm
I think one problem is what wedrifid says: it is difficult to work out what your comment actually means.

* "the empirical claims of feminism are now successful": what does it mean for an empirical claim to be successful? Is that the same as "true", or something else?
* "but they did exist": why "but"? what's the opposition between existing and "being successful"?

I gather that you were disagreeing with Argency's statement that feminism "doesn't (or shouldn't) make any predictions about the way the world actually is or will be" on the grounds that you consider that feminism does (among other things) make claims about how the world is. Fair enough (and for what it's worth I'd agree), but it seems to me that the obvious diagnosis is that you and Argency disagree about what "feminism" means, in which case merely saying "but it does make empirical claims" doesn't achieve much.

So: two problems. A statement whose meaning is hard to make sense of, and treating a disagreement as one about how the world is when it's probably actually more about how to use one particular word. I'd guess that whoever downvoted you had one or both of those in mind.

(I'd also say: being downvoted by one person is not particularly strong evidence of anything; don't get upset about it. But if you find yourself being downvoted a lot, the chances are that either you should change something or else LW just isn't a good place for you.)

Is that going to be harder than coming up with a mathematical expression of morality and preloading it?

Harder than saying it in English, that's all.

EY. It's his answer to friendliness.

No, he wants to program the AI to deduce morality from us; it's called CEV. He seems to still be working out how the heck to reduce that to math.

I would not dare to call that "Dark Arts".

Fortunately someone else already invented the term "Dark Arts" and that's what it means.

Humans are made to do that by evolution; AIs are not. So you have to figure out what the heck evolution did, in ways specific enough to program into a computer.

Also, who mentioned giving AIs a priori knowledge of our preferences? It doesn't seem to be in what you replied to.

... the what.

Ahh I just finished that.

... that is not rationality; that is a mild infohazard trying to hack you into taking actions that make people starve. It should be kept away from people, and counteragents spread to defend against further outbreaks. Seriously, why would you post that as a rationality quote?

-9 Multiheaded

This comment, while pointing out real and serious issues - I agree with it - contains way too much Dark Arts for a LessWrong comment.

Possibly I was placing the zero point between positive and negative higher than you. I don't see sadness as merely a low positive but as a negative. But then I'm not using averages anyway, so I guess that may cover the difference between us.

6 Ghatanathoah
I definitely consider the experience of sadness a negative. But just because someone is having something negative happen to them at the moment does not mean their entire utility at the moment is negative. To make an analogy, imagine I am at the movie theater watching a really good movie, but also really have to pee. Having to pee is painful, it is an experience I consider negative and I want it to stop. But I don't leave the movie to go to the bathroom. Why? Because I am also enjoying the movie, and that more than balances out the pain.

This is especially relevant if you consider that humans value many other things than emotional states. To name a fairly mundane instance, I've sometimes watched bad movies I did not enjoy, and that made me angry, because they were part of a body of work that I wanted to view in its complete form. I did not enjoy watching Halloween 5 or 6, I knew I would not enjoy them ahead of time, but I watched them anyway because that is what I wanted to do.

To be honest, I'm not even sure if it's meaningful to try to measure someone's exact utility at the moment, out of relation to their whole life. It seems like there are lots of instances where the exact time of a utility and disutility are hard to place. For instance, imagine a museum employee who spends the last years of their life restoring paintings, so that people can enjoy them in the future. Shortly after they die, vandals destroy the paintings. This has certainly made the deceased museum employee's life worse; it retroactively made their efforts futile. But was the disutility inflicted after their death? Was the act of restoring the paintings a disutility that they mistakenly believed was a utility?

It's meaningful to say "this is good for someone" or "this is bad for someone," but I don't think you can necessarily treat goodness and badness like some sort of river whose level can be measured at any given time. I think you have to take whole events and timelessly add them up.

But then you kill sad people to get "neutral happiness" ...

2 Ghatanathoah
If someone's entire future will contain nothing but negative utility they aren't just "sad." They're living a life so tortured and horrible that they would literally wish they were dead. Your mental picture of that situation is wrong, you shouldn't be thinking of executing an innocent person for the horrible crime of being sad. You should be thinking of a cancer patient ravaged by disease whose every moment is agony, and who is begging you to kill them and end their suffering. Both total and average utilitarianism agree that honoring their request and killing them is the right thing to do. Of course, helping the tortured person recover, so that their future is full of positive utility instead of negative, is much much better than killing them.

Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.

For "successful" read "accepted". (Some are now accepted as historical facts.)

Considering timelessly, should it not also disprove helping the least happy, because they will always have been sad?

1 Ghatanathoah
No. Our goal is to make people have much more happiness than sadness in their lives, not no sadness at all. I've done things that make me moderately sad because they will later make me extremely happy. In more formal terms, suppose that sadness is measured in negative utilons, and happiness in utilons. Suppose I am a happy person who will have 50 utilons. The only other person on Earth is a sad person with -10 utilons. The average utility is then 20 utilons. Suppose I help the sad person. I endure -5 utilons of sadness in order to give the sad person 20 utilons of happiness. I now have 45 utilons, the sad person has 10. Now the average utility is 27.5. A definite improvement.
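(A minimal sketch of the arithmetic above - the utilon figures are the commenter's own; the helper function is just illustration.)

```python
# Average-utilitarian bookkeeping for the two-person example above.

def average_utility(utilities):
    return sum(utilities) / len(utilities)

before = [50, -10]          # me, the sad person
print(average_utility(before))   # (50 - 10) / 2 = 20.0

after = [50 - 5, -10 + 20]  # I endure -5 to give the sad person +20
print(average_utility(after))    # (45 + 10) / 2 = 27.5
```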
6 AndHisHorse
That raises another question - do we count average utility by people, or by duration? Is utility averaged over persons, or person-hours? In such a case, how would we compare the utilities of long-lived and short-lived people? Should we be more willing to harm the long-lived person, because the experience is a relatively small slice of their average utility, or treat both the long-lived and short-lived equally, as if both of their hours were of equal value?
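(One way to make the persons-vs-person-hours question concrete. The lifespans, per-year utilities, and the year-weighted scheme below are all made-up illustrative assumptions, chosen only to show that the two averaging conventions can disagree.)

```python
# Illustrative only: the same two lives, aggregated per person versus per
# person-year, give different answers - which is the question being raised.

# (years_lived, average utility per year) -- made-up numbers
people = [(80, 1.0),   # long-lived, modestly happy
          (20, 3.0)]   # short-lived, very happy

# Average over persons: each person counts once.
per_person = sum(u for _, u in people) / len(people)

# Average over person-years: weight each person by how long they live.
per_person_year = sum(y * u for y, u in people) / sum(y for y, _ in people)

print(per_person)       # (1.0 + 3.0) / 2 = 2.0
print(per_person_year)  # (80*1.0 + 20*3.0) / 100 = 1.4
```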

Presumably, only if they get born. Although that's tweakable.

Not abstract, to be fair, usually ...

But yes, even those without such skepticism (like myself) tend to notice that the quality is, in fact, low.

I think the empirical claims of feminism are now successful, but they did exist. Sexism, after all, has empirical claims.

0 wedrifid
I can't seem to parse this literally. Do you mean that some past empirical claims of feminism are no longer true due to the success of the political advocacy of feminism? That seems true (with some controversy on the degree of 'some').

But at the same time, the MRAs have a serious problem: in the same way that some people have extremely negative associations with feminism, many have similar issues with the MRAs. If someone were to want to seriously deal with gender inequality issues in custody disputes, I'd strongly advise them to keep themselves away from being associated with the MRAs.

I'm just wandering past your conversation, but I think many people are just offended by the concept of men demanding rights - y'know, because they have enough damn rights already, and so on.

That is, th...

2 Protagoras
Possibly, but as someone with lots of negative associations with MRAs, I'm not sure how big of a factor this is. I'm sympathetic to goals like making custody fairer or paying more attention to male victims of domestic violence. My extremely negative opinion of MRAs is based on what they're actually like, not an abstract skepticism that there could ever be a legitimate cause with that name.

Feel free to point me in the direction of choice-positive feminist blogs, incidentally. My list has gone from six down to one over the past few years. Those six were the best I could find and five of them -still- couldn't refrain from hostility, either towards women, or towards men.

No links in my pocket, but I think I've encountered those. Maybe you were being too strict with the criteria? Few people could live up to that, I think.

But I also despise the position that women aren't -allowed- to be like this

Nobody actively believes this, mind. They just haven't thought about it.

Funny thing, I had the exact same

perfect woman? ,':-.

thought, even though I don't find a lot of those things attractive, come to think of it. Cultural conditioning? Subliminal messages?

That's...damn... that's like the whole bloody point of why we hold empathy for genuinely different/strange/foreign people to be so rare and valuable!

And thus, the quoted piece is ... self-evidently true? One of us is misunderstanding the person they quoted.

Once you specify where I am, who I am with, what kind of body language the man is using, how big he is, and what he is wearing, further specifying what race he is wouldn't matter that much.

Is that true? Depending on the "where I am" part?

There's only so much you can tell about someone from "what kind of body language the man is using, how big he is, and what he is wearing", after all. In the right racially-segregated society, could it provide valuable additional data?

What had the comment been saying before deletion?

Most third graders are probably still in the process of developing the foundational skills that they'll eventually need in order to effectively learn complex topics without guessing the teacher's password.

But then they don't, so we need to try another method, yes?

0 Desrtopa
We need to do something differently, but we need to make the right changes at the right places. If you want kids to better understand rationality when they grow up, you don't want to start by teaching them things like the content of the Sequences, you start with something like "what did you see?"

On the other hand, if you don't tell them, most of them will come to that conclusion anyway. Then they will feel just as depressed, but also alienated from the oppressive adult caste.

I find most avoid considering the question.

Children are often visibly treated more like pets than people, at least in North American society.

Upvoted for quote, though unsure on conclusion. Has this been tried, that you've seen?

I had a 5th grade science teacher who was an idiot, but when I argued with him over his stupidities, he didn't shut me down, he argued back.

This seems lucky; from what I've seen, the standard is lower.

Story ... too awesome ... not to upvote ...

Not sure why it's rational, though.

If you think FAI is not possible, why make an AI anyway?

0 TimS
Personally, I don't think a super-human intelligence AI is possible. But if I'm wrong about that, then making an AI that is or can become super-human is a terrible idea - like the Aztecs sending boats to pick up the Spaniards, only worse.