Comment author: ZacHirschman 25 May 2015 07:07:40AM 5 points [-]

A few thoughts on Mark_Friedenbach's recent departure:

I thought it could be unpacked into two main points. (1) is that Mark is leaving the community. To Mark, or anyone who makes this decision, I think the rational response is, "good luck and best wishes." We are here for reasons, and when those reasons wane, I wouldn't begrudge anyone looking elsewhere or doing other things.

(2) is that the community is in need of growth. My interpretation of this is as follows: the Sequences are not updated, and yet they are still referenced as source material. I wouldn't mind reading if someone took a crack at a Sequences 2.0, or something completely different. Perhaps something with a more empirical/scientific focus (as opposed to foundational/philosophical), as Mark recommended.

Comment author: [deleted] 19 May 2015 06:58:53AM 2 points [-]

One thing I would like to add - maybe it was there and I just missed it - is to tell yourself it is OKAY to feel sad. Let your feelings flow from true thoughts. If it is true that you loved or still love him or her - and a long relationship with a breakup initiated by the other makes it pretty likely - it is perfectly normal to feel sad. It is perfectly normal to not want to get over it, because you cherish the feeling of love even though letting go hurts now.

Another true thought that feelings should flow from - I took it from my former Buddhist practice - is impermanence. ALMOST EVERY relationship ends badly: a break-up, or one of them dies. Humans are fragile and have a shelf-life of about 80 years.

There are rare cases where both die at the same time, or cases like my grandparents', where by the time grandpa died, grandma was so demented that she hardly noticed. Even these are not really happy endings: a double tragedy cannot really be called a happy one, and seeing your loved one become an, um, "old fart", with yourself alongside, has its own bittersweet sadness, I figure. We joke that fifty years from now we will have wheelchair races in the assisted-living center, but in reality we regret that every year our relationship loses a bit more of that youthful sexual energy.

The lesson here is to start and conduct relationships with a consciousness of impermanence so there are no nasty surprises: it will almost certainly end badly. One will grieve over the breakup or death of the other.

OTOH I suspect that being conscious of impermanence plays a role in why I am in something like a constant state of light depression. It is sort of hard to get really enthusiastic over things when you know you will lose everything you cherish one way or another with very high probability.

Comment author: ZacHirschman 19 May 2015 03:56:58PM 2 points [-]

The impermanence of things is an excellent reason to get really enthusiastic about them.

Comment author: [deleted] 19 May 2015 07:11:03AM *  1 point [-]

I must confess I don't like the term "rationalism", as it has Vulcan-Hollywood-Rationalism connotations. In the past, the term was often used to describe attitudes that are highly impractical and ideological. More in the PDF. On the "Oakeshottian scale", LW-Rationalism is far closer to the pragmatic attitude Oakeshott endorses than to the kind of quasi-rationalism he criticizes.

If I had a time machine I would probably try to talk Eliezer into choosing another name. What that name would be, I am not so sure - perhaps Pragmatism; after all, Peirce is a big influence and his philosophy WAS called Pragmatism. (But hold on, the term Pragmatism is also used by Richard Dworkin, and I am not sure I would like that association.)

Bygones are bygones, so now all I can do is recommend that you all spread the ideas WITHOUT putting any kind of label on them - just as good ideas, ideas that work. This makes sense even if you like the term "Rationalism", because you can probably see it is easier to get people to adopt ideas if we are not asking them to identify with any label.

Comment author: ZacHirschman 19 May 2015 01:31:32PM 0 points [-]

I think of it as "improvematism." Maybe "improvementism" would sound more serious.

In response to The ganch gamble
Comment author: Elo 19 May 2015 04:41:38AM 2 points [-]

Arguing for or against a parable will never lead anywhere.

Having said that: cute story. I think the value of this story is that not all problems should be solved in a straightforward way - not with the cooperate-or-defect, win/lose resolution of the story.

What if they had kicked the mirror-maker out of town and rewarded the actual worker? Not so fabulous now, huh...

In response to comment by Elo on The ganch gamble
Comment author: ZacHirschman 19 May 2015 01:22:07PM 2 points [-]

"What if they kicked the mirror-maker out of town and rewarded the actual worker?"

This is the question I keep asking myself. In the story as written, the village rewards the clever skilled worker over the diligent skilled worker. This might work in the short term, and the clever worker's gamble pays off for him personally as he sees increased business from increased prestige. If we consider the village (or the judges) to be actors in the game, however, they act against their own interest by disincentivizing craftsmanship in favor of craftiness. And here I am, arguing for or against a parable...

Comment author: [deleted] 19 May 2015 10:19:46AM *  0 points [-]

It is a powerful slogan, but it can be unpacked into people having different goals. Sometimes the goal is to find truth. Sometimes it is to find the policies with the best outcomes. Sometimes it is to enjoy the thrill of tribe fighting against tribe. That is actually a cool hobby when it happens on a football field or basketball court, but when the teams are called Team Life and Team Choice and the ball is abortion, it is more problematic - then all three get mixed into one. It is usually better to keep these activities separate; I think that is the core lesson.

Comment author: ZacHirschman 19 May 2015 12:32:53PM -1 points [-]

The difference is that on a football field or basketball court, there is a settled outcome of competition, and no sincere value attached to particular outcomes. An average person might prefer that their chosen sports team wins, but I think they would acknowledge that it does not make the world a better place. In politics, however, the preference that a chosen team wins is very closely tied to the view that the win is beneficial for everybody.

Comment author: Richard_Loosemore 18 May 2015 04:52:23PM *  0 points [-]

Let me see if I can deal with the "no true scotsman" line of attack.

The way that fallacy might apply to what I wrote would be, I think, something like this:

  • MIRI says that a superintelligence might unpack a goal statement like "maximize human happiness" by perpetrating a Maverick Nanny attack on humankind, but Loosemore says that no TRUE superintelligence would do such a thing, because it would be superintelligent enough to realize that this was a 'mistake' (in some sense).

This would be a No True Scotsman fallacy, because the term "superintelligence" has been, in effect, redefined by me to mean "something smart enough not to do that".

Now, my take on the NTS idea is that it cannot be used if there are substantive grounds for saying that there are two categories involved, rather than a real category and a fake category that is (for some unexplained reason) exceptional.

Example: Person A claims that a sea-slug caused the swimmer's leg to be bitten off, but Person B argues that no "true" sea-slug would have done this. In this example, Person B is not using a No True Scotsman argument, because there are darned good reasons for supposing that sea-slugs cannot bite off the legs of swimmers.

So it all comes down to whether someone accused of NTS is inventing a fictitious category distinction ("true" versus "non-true" Scotsman) solely for the purpose of supporting their argument.

In my case, what I have argued is right up there with the sea-slug argument. What I have said, in effect, is that if we sit down and carefully think about the type of "superintelligence" that MIRI et al. put into their scenarios, and if we explore all the implications of what that hypothetical AI would have to be like, we quickly discover some glaring inconsistencies in their scenarios. The sea-slug, in effect, is supposed to have bitten through bone with a mouth made of mucus. And the sea-slug is so small it could not wrap itself around the swimmer's leg. Thinking through the whole sea-slug scenario leads us into a mass of evidence indicating that the proposed scenario is nuts. Similarly, thinking through the implications of an AI so completely unable to handle context that it can live with Grade A contradictions at the heart of its reasoning leads us to a mass of unbelievable inconsistencies in the 'intelligence' of this supposed superintelligence.

So, where the discussion needs to be, in respect of the paper, is in the exact details of why the proposed SI might not be a meaningful hypothetical. It all comes down to a meticulous dissection of the mechanisms involved.

To conclude: sorry if I seemed to come down a little heavy on you in my first response. I wasn't upset, it was just that the NTS critique had occurred before. In some of those previous cases the NTS attack was accompanied by language that strongly implied that I had not just committed an NTS fallacy, but that I was such an idiot that my idiocy was grounds for recommending to all not to even read the paper. ;-)

Comment author: ZacHirschman 18 May 2015 08:23:02PM -1 points [-]

"... thinking through the implications of an AI that is so completely unable to handle context, that it can live with Grade A contradictions at the heart of its reasoning, leads us to a mass of unbelievable inconsistencies in the 'intelligence' of this supposed superintelligence."

This is all at once concise, understandable, and reassuring. Thank you. I still wonder whether we are broadening the defined scope of "intelligence" too far, but my wonder comes from gaps in my specific knowledge, not from gaps in your argument.

Comment author: ZacHirschman 18 May 2015 04:05:21PM 2 points [-]

The idea that I find least entangled but still very potentially beneficial is that politics is the mind-killer. I realize it's an old sequence, and it doesn't have much traction here (since LW ostensibly consists of un-killed minds).

Comment author: Richard_Loosemore 18 May 2015 02:18:34AM 1 point [-]

Hey, no problem. I was really just raising an issue with certain types of critique, which involve supposed fallacies that actually don't apply.

I am actually pressed for time right now, so I have to break off and come back to this when I can. Just wanted to clarify if I could.

Comment author: ZacHirschman 18 May 2015 03:57:30PM 0 points [-]

Feel free to disengage; TheAncientGeek helped me shift my paradigm correctly.

Comment author: TheAncientGeek 17 May 2015 07:09:17AM *  0 points [-]

Einstein made predictions about what the universe would look like if there were a maximum speed. Your prediction seems to be that a well-built AI will not misunderstand its goals.

Or that a (likely-to-be-built) AI won't even have the ability to compartmentalise its goals from its knowledge base.

It's not No True Scotsman to say that no competent researcher would do it that way.

Comment author: ZacHirschman 17 May 2015 03:10:52PM 0 points [-]

Thank you for responding and attempting to help me clear up my misunderstanding. I will need to do another deep reading, but a quick skim of the article from this point of view "clicks" a lot better for me.

Comment author: Richard_Loosemore 16 May 2015 04:26:05PM 0 points [-]

The only problem with this kind of "high-level" attack on the paper (by which I mean, trying to shoot it down by just pigeonholing it as a "no true scotsman" argument) is that I hear nothing about the actual, meticulous argument sequence given in the paper.

Attacks of that sort are commonplace. They show no understanding of what was actually said.

It is almost as if Einstein wrote his first relativity paper, and it got attacked with comments like "The author seems to think that there is some kind of maximum speed in the universe - an idea so obviously incorrect that it is not worth taking the time to read his convoluted reasoning."

I don't mean to compare myself to Albert, I just find it a bit well, pointless when people either (a) completely misunderstand what was said in the paper, or (b) show no sign that they took the time to read and think about the very detailed argument presented in the paper.

Comment author: ZacHirschman 16 May 2015 07:02:09PM 0 points [-]

You have my apologies if you thought I was attacking or pigeonholing your argument. While I lack the technical expertise to critique the technical portion of your argument, I think it could benefit from a more explicit avoidance of the fallacy mentioned above. I thought the article was very interesting, and I will certainly come back to it if I ever get to the point where I can understand your distinctions between swarm intelligence and CFAI. I understand you have been facing attacks for your position in this article, but that is not my intention. Your meticulous arguments are certainly impressive, but you do them a disservice by dismissing well-intentioned critique, especially when it applies to the structure of your argument and not the substance.

Einstein made predictions about what the universe would look like if there were a maximum speed. Your prediction seems to be that a well-built AI will not misunderstand its goals (please assume that I read your article thoroughly and that any misunderstandings are benign). What does the universe look like if this is false?

I probably fall under category (a) in your disjunction. Is it truly pointless to help me overcome my misunderstanding? From the large volume of comments, it seems likely that this misunderstanding is partially caused by a gap between what you are trying to say and what was said. Please help me bridge this gap instead of denying its existence or calling such an exercise pointless.
