
Comment author: tukabel 29 July 2017 09:02:33AM 1 point [-]

Exactly ZERO.

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

It may even be proven that "too much intelligence/power" (incl. "dumb" AIs) in the hands of humanimals with their DeepAnimal brains ("values", reward function) is a guaranteed failure, leading sooner or later to some self-destructive scenario. At least up to now it pretty much looks like this, even to an untrained eye.

Most probably the problem will not be artificial intelligence, but natural stupidity.

Comment author: ThoughtSpeed 31 July 2017 11:14:57PM *  3 points [-]

Exactly ZERO.

...

Zero is not a probability! You cannot be infinitely certain of anything!
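
One way to make this concrete (a standard Bayesian observation, not anything specific to this exchange): if you assign a hypothesis a prior probability of exactly zero, Bayes' theorem can never move you off it, no matter what evidence arrives:

$$
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)} \;=\; \frac{P(E \mid H)\cdot 0}{P(E)} \;=\; 0 \qquad \text{for any evidence } E \text{ with } P(E) > 0.
$$

Equivalently, updates are additive in log-odds, and $p = 0$ sits at $-\infty$ on that scale, infinitely far from anywhere a finite amount of evidence can take you.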

Nobody knows what's "friendly" (you can have "godly" there, etc. - with more or less the same effect).

By common usage in this subculture, the concept of Friendliness has a specific meaning-set attached to it that implies a combination of 1) a know-it-when-I-see-it isomorphism to common-usage 'friendliness' (e.g. "I'm not being tortured"), and 2) a deeper sense in which the universe is being optimized according to our own criteria by a more powerful optimization process. Here's a better explanation of Friendliness than I can convey here. You could also substitute the more modern word 'Aligned' for it.

Worse, it may easily turn out that killing all humanimals instantly is actually the OBJECTIVELY best strategy for any "clever" Superintelligence.

I would suggest reading about the following:

- Paperclip Maximizer
- Orthogonality Thesis
- The Mere Goodness Sequence. However, in order to understand it well you will want to read the other Sequences first.

I really want to emphasize the importance of engaging with a decade-old corpus of material about this subject.

The point of these links is that there is no objective morality that any randomly designed agent will naturally discover. An intelligence can accrete around any terminal goal that you can think of.

This is a side issue, but your persistent use of the neologism "humanimal" is probably costing you weirdness points and detracts from the substance of the points you make. Everyone here knows humans are animals.

Most probably the problem will not be artificial intelligence, but natural stupidity.

Agreed.

Comment author: ImmortalRationalist 29 June 2017 06:13:30PM 2 points [-]

I'm surprised that there aren't any active YouTube channels with LessWrong-esque content, or at least none that I am aware of.

Comment author: ThoughtSpeed 01 July 2017 11:52:21PM 0 points [-]

I just started a Facebook group to coordinate effective altruist youtubers. I'd definitely say rationality also falls under the umbrella. PM me and I can add you. :)

Comment author: Zubon2 30 October 2007 12:30:32PM 7 points [-]

Kyle wins.

Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0. For the speck, I am going to blink in the next few seconds anyway.

That in no way addresses the intent of the question, since we can just increase it to the minimum that does not round down. Being poked with a blunt stick? Still hard, since I think every human being would take one stick over some poor soul being tortured. Do I really get to be the moral agent for 3^^^3 people?

As others have said, our moral intuitions do not work with 3^^^3.
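
For readers who haven't seen the notation, here is how 3^^^3 unpacks under Knuth's up-arrow rules (standard arithmetic, not anything specific to the argument above):

$$
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow\left(3^{3^{3}}\right) \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,
$$

i.e. a power tower of 3s that is 7,625,597,484,987 levels tall. Multiplying even a microscopic per-person harm by a number that size swamps any ordinary-sized quantity, which is why the whole disagreement turns on whether a speck is allowed to round down to exactly zero.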

Comment author: ThoughtSpeed 19 June 2017 02:11:54AM 1 point [-]

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0.

Why would that round down to zero? That's a lot more people having cancer than getting nuked!

(It would be hilarious if Zubon could actually respond after almost a decade)

Comment author: lifelonglearner 25 April 2017 11:46:26PM 4 points [-]

Relevant info: I've volunteered at 1 CFAR workshop and hang out in the CFAR office periodically. My views here are my own model of how CFAR is thinking, not CFAR's official position.

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Related to the idea of a prep course, I'll be making a LW post in the next few days about my attempt to create a new sequence on instrumental rationality that is complementary to the sort of self-improvement CFAR does. That may be of interest to you.

Otherwise, I can say that at least at the workshop I was at, there was zero mention of AI safety from the staff. (You can read my review here). It's my impression that there's a lot of cool stuff CFAR could be doing in tandem w/ their workshops, but they're time constrained. Hopefully this becomes less so w/ their new hires.

I do agree that having additional scaffolding would be very good, and that's part of my motivation to start on a new sequence.

Happy to talk more on this as I also think this is an important thing to focus on.

Comment author: ThoughtSpeed 26 April 2017 12:47:55AM 1 point [-]

For 1), you might be interested to know that I recently made a Double Crux UI mockup here. I'm hoping to start some discussion on what an actual interface might look like.

Yep, you were one of the parties I was thinking of. Nice work! :D

Comment author: Lumifer 19 April 2017 12:56:39AM 2 points [-]

I'm quite serious about this.

Are you? Well then, go make it happen.

Otherwise it sounds like entitled whining.

Comment author: ThoughtSpeed 25 April 2017 11:41:58PM *  3 points [-]

What I'm about to say is in the context of you being one of the most frequent commenters on this site.

Otherwise it sounds like entitled whining.

That is really unfriendly to say; honestly the word I want to use is "nasty", but that is probably hyperbolic/hypocritical. I'm not sure if you realize this, but a culture of macho challenging like this discourages people from participating. I think you and the several other commenters who determine the baseline culture of this site should try to be more friendly. I have seen you in particular use a smiley before, so that's good, and you're probably a friendly person along many dimensions. But I want to emphasize how intimidating this can be to newcomers, or to people who are otherwise uncomfortable with what you probably see as joshing around with LW friends. To you it may feel like you are pursuing less-wrongness, but to people who are more neurotic and/or less familiar with this forum it can feel like being hounded, even if only vicariously.

I do not want to pick on people I don't know but there are other frequent commenters who could use this message too.

Comment author: ThoughtSpeed 25 April 2017 11:07:30PM 3 points [-]
  1. Why isn't CFAR or friends building scalable rationality tools/courses/resources? I played the Credence Calibration game and feel like that was quite helpful in making me grok Overconfidence Bias and the internal process of down-adjusting one's confidence in propositions (a rough sketch of the kind of scoring such a tool does appears after this comment). Multiple times I've seen the idea of an app for Double Crux mentioned. That would be quite useful for improving online discourse (seems like Arbital sorta had relevant plans there).

  2. Relatedly: Why doesn't CFAR have a prep course? I asked them multiple times what I can do to prepare, and they said "you don't have to do anything". This doesn't make sense. I would be quite willing to spend hours learning marginal CFAR concepts, even if it was at a lower pacing/information-density/quality. I think the argument is something like 'you must empty your cup so you can learn the material' but I'm not sure.

I am somewhat suspicious that one of the reasons (certainly not the biggest, but one of them) for the lack of these things is that it lets them more readily indoctrinate attendees with AI Safety as a concern. Regardless of whether that's a motivator, I think their goals would be more readily served by developing scaffolding to help train rationality among a broader base of people online (and perhaps by using that as a pipeline for the more in-depth workshop).
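
To make the first request a bit more concrete, here is a minimal sketch of the scoring logic behind a credence-calibration exercise like the Credence game: answer true/false questions with a stated confidence, then look at a Brier score and per-bucket accuracy, which is where overconfidence shows up. This is an illustrative sketch only, not CFAR's or anyone's actual tool; the function names and the data in `session` are made up for the example.

```python
from collections import defaultdict

def brier_score(answers):
    """answers: list of (stated_probability_true, actually_true) pairs. Lower is better."""
    return sum((p - (1.0 if truth else 0.0)) ** 2 for p, truth in answers) / len(answers)

def calibration_table(answers, buckets=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Group answers by stated confidence and report observed accuracy per bucket."""
    grouped = defaultdict(list)
    for p, truth in answers:
        # Fold sub-50% probabilities onto the confident side: a 30% "true" is a 70% "false".
        confidence, correct = (p, truth) if p >= 0.5 else (1 - p, not truth)
        bucket = min(b for b in buckets if confidence <= b)
        grouped[bucket].append(correct)
    return {b: sum(c) / len(c) for b, c in sorted(grouped.items())}

# Hypothetical session: each pair is (stated P(true), whether it was actually true).
session = [(0.9, True), (0.9, False), (0.7, True), (0.6, True), (0.3, False)]
print("Brier score:", brier_score(session))
print("Observed accuracy by confidence bucket:", calibration_table(session))
```

If your 90%-confidence bucket only comes out right 60% of the time, that gap is the overconfidence such a game trains you to notice.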

Comment author: [deleted] 21 December 2016 04:35:36AM 1 point [-]

I've been doing this with an interpersonal issue. I guess that's getting resolved this week.

Comment author: ThoughtSpeed 25 April 2017 10:06:42PM 1 point [-]

Did it get resolved? :)

Comment author: ThoughtSpeed 25 April 2017 09:34:40PM 0 points [-]

I had asked someone how I could contribute, and they said there was a waitlist or whatever. Like others have mentioned, I would recommend prioritizing maximal user involvement. Try to iterate quickly and get as many eyeballs on it as you can so you can see what works and what breaks. You can't control people.

Comment author: Elo 17 April 2017 04:30:59PM 12 points [-]

Hi, I know you mean well, but could you maybe talk to the people behind existing efforts before making a huge declaration like this? False rallying flags are just going to confuse everyone.

It would probably be difficult to be clear on who to ask, but as far as I can tell you didn't even try before posting.

Comment author: ThoughtSpeed 17 April 2017 07:01:42PM 6 points [-]

I do want to heap heavy praise on the OP for Just Going Out And Trying Something, but yes, consult with other projects to avoid duplication of effort. :)

Comment author: madhatter 10 April 2017 05:15:52PM 4 points [-]

Wow, that had for some reason never crossed my mind. That's probably a very bad sign.

Comment author: ThoughtSpeed 16 April 2017 05:00:20AM 0 points [-]

Honestly, it probably is. :) Not a bad sign as in you are a bad person, but bad sign as in this is an attractor space of Bad Thought Experiments that rationalist-identifying people seem to keep falling into because they're interesting.
