
Comment author: curi 03 December 2017 02:55:03PM 0 points

Do you want new material which is the same as previous material, or different? If the same, I don't get it. If different, in what ways and why?

Comment author: ZeitPolizei 03 December 2017 09:48:20PM 0 points

It seems no one on LW is able to explain to you how and why people want different material. To my mind, Kaj's explanation is perfectly clear. I'm afraid it's up to you to figure it out for yourself. Until you do, people will keep giving you invalid arguments, or downvoting and ignoring you.

Comment author: curi 02 December 2017 01:24:35PM *  0 points

Again: I and others already wrote it and they don't want to read it. How will writing it again change anything? They still won't want to read it. This request for new material makes no sense whatsoever. It's not that they read the existing material and have some complaint and want it to be better in some way; they just won't read.

Your community as a whole has no answer to some fairly famous philosophers and doesn't care. Everyone is just like "they don't look promising" and doesn't have arguments.

Comment author: ZeitPolizei 03 December 2017 04:07:28AM 0 points

> how will writing it again change anything?

Why should anyone answer this question? Kaj has already written an answer to this question above, but you don't understand it. How will writing it again change anything? You still won't understand it. This request for an explanation makes no sense whatsoever. It's not that you understand the answer and have some complaint and want it to be better in some way; you just won't understand.

You claim you want to be told when you're mistaken, but you completely dismiss any and all arguments. You're just like "these people obviously haven't spent hundreds of hours learning and thinking about CR, so there is no way they can have any valid opinion about it" and won't engage with their arguments at a level where they are willing to listen and able to understand.

Comment author: ZeitPolizei 09 October 2017 09:38:20AM 2 points

> Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure.

It may be worth collaborating with the EA community on this, since there is considerable overlap, both in participants and in the kinds of surveys people may be interested in.
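
In case anyone wants to prototype this, here is a minimal sketch of the data model, assuming a "sign in, answer sections at your leisure" design. All table and field names are hypothetical, my own invention rather than anything yourmorals actually uses:

```python
import sqlite3

# Hypothetical minimal schema for one long survey split into sections
# that a signed-in user can complete independently, in any order.
conn = sqlite3.connect("survey.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS users (
    id    INTEGER PRIMARY KEY,
    email TEXT UNIQUE NOT NULL
);
CREATE TABLE IF NOT EXISTS sections (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL              -- e.g. "Demographics", "Morals"
);
CREATE TABLE IF NOT EXISTS responses (
    user_id      INTEGER REFERENCES users(id),
    section_id   INTEGER REFERENCES sections(id),
    answers      TEXT,               -- serialized answers for the section
    completed_at TEXT,               -- NULL while still unfinished
    PRIMARY KEY (user_id, section_id)
);
""")

def remaining_sections(user_id):
    """Sections this user has not completed yet, so they can resume later."""
    return conn.execute("""
        SELECT s.id, s.title
        FROM sections s
        LEFT JOIN responses r
               ON r.section_id = s.id
              AND r.user_id = ?
              AND r.completed_at IS NOT NULL
        WHERE r.section_id IS NULL
    """, (user_id,)).fetchall()
```

The composite primary key means each user gets at most one response per section, so "which portions are left to take" reduces to a single query.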

Comment author: ZeitPolizei 14 September 2017 01:38:09AM 0 points

From what I've read so far, I think Information Theory, Inference and Learning Algorithms does a rather good job of conveying the intuitions behind its topics.
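
For a taste of the kind of intuition involved, here is a minimal sketch, my own toy example rather than anything from the book, computing the Shannon entropy of a biased coin:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))  # fair coin: 1.0 bit per flip
print(entropy([0.9, 0.1]))  # biased coin: ~0.47 bits per flip
```

A fair coin carries a full bit per flip, a 90/10 coin less than half a bit; the book builds up this sort of intuition before the heavier machinery.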

Comment author: gilch 09 July 2017 02:16:41AM 1 point

Failure seems like the default outcome. How do we avoid that? Have there been other similar LessWrong projects like this that worked or didn't? Maybe we can learn from them.

Group projects can work without financial incentives. Most contributors to wikis, open-source software, and web forums like this one aren't paid for it.

Assume we've made it work well, hypothetically. How did we do it?

Comment author: ZeitPolizei 09 July 2017 03:46:08AM 0 points

It reminds me a lot of the "mastermind group" thing, where we had weekly hangouts to talk about our goals etc. The America/Europe group eventually petered out (see here for a retrospective by regex); the Eurasia/Australia group appears to be ongoing, albeit with only two (?) participants.

There have also been online reading groups for the Sequences, IIRC. I don't know how those went, though.

> forums, wikis, open source software

I see a few relevant differences:

  • number of participants: If there are very many people, most of whom are only sporadically active, you still get nice progress/activity. The main advantage of this video tutoring idea is personalization, which would not work with many participants.
  • small barrier to entry, small incremental improvements: somewhat related to the last point, people can post a single comment, fix a single bug, or only correct a few spelling mistakes on a wiki, and then never come back, but it will still have helped.
  • independence/asynchronicity: also kind of related to small barrier to entry. For video tutoring you need at least two people agreeing on a time and keeping that time free. In all the other cases everyone can pretty much "come and go" whenever they want. In principle it would be possible to do everything with asynchronous communication. In practice you will also have some real-time communication e.g. via IRC channels.
  • Pareto contribution: I don't actually have data on this, but especially on small open-source projects and wikis, the bulk of the work is probably done by a single contributor who is really passionate about it and keeps it running (a rough way to check this on a real repository is sketched below).
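
A rough, unscientific way to check that guess on a project you have cloned locally (this only parses the output of `git shortlog`, which prints per-author commit counts):

```python
import subprocess

# Commits per author, sorted descending; `git shortlog -sn` prints
# lines like "   123\tAuthor Name".
out = subprocess.run(
    ["git", "shortlog", "-sn", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout

counts = [int(line.strip().split("\t")[0])
          for line in out.splitlines() if line.strip()]
if counts:
    total = sum(counts)
    print(f"top contributor: {counts[0]} of {total} commits "
          f"({counts[0] / total:.0%})")
```
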
Comment author: ZeitPolizei 26 June 2017 09:51:43AM 1 point

I think this is a great idea, likely to have positive value for participants. So, applying the Hamming question to it, I think two things are important.

  1. I think the most likely way this is going to "fail" is that a few people will get together, meet about three times, and then it will just peter out, as participants are not committed enough to participate long-term. Right now, I don't think I personally would participate without a good reason to believe participants will keep showing up, such as financial incentives.
  2. Don't worry too much about doing it the Right Way from the beginning. If you get some people together, just start with the first workable thing that comes to mind and iterate.
Comment author: ZeitPolizei 13 May 2017 03:02:51PM 0 points

> To survive, and increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.

But, as you also point out, evolution "selects on the criterion of ingroup reproductive fitness", which does select for a specific type of mind and ethics, especially if you add the constraint that the agent should be intelligent. As far as I am aware, all of the animals considered the most intelligent are social animals (octopuses may be an exception?). The most important aspect of an evolutionary algorithm is the fitness function. The fitness function the real world imposes seems to be one that, where it selects for intelligence, also selects for sociability, something that generally seems like it would increase friendliness.
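
To make the fitness-function point concrete, here is a toy sketch (the traits, mutation rates, and selection scheme are all invented for illustration, not a model of real evolution) in which swapping the fitness function changes whether sociability is selected at all:

```python
import random

def evolve(fitness, generations=200, pop_size=100):
    """Toy evolutionary loop: truncation selection plus Gaussian mutation.
    Each individual is a dict of two traits clamped to [0, 1]."""
    pop = [{"intelligence": random.random(), "sociability": random.random()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [
            {trait: min(1.0, max(0.0, value + random.gauss(0, 0.05)))
             for trait, value in random.choice(survivors).items()}
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + children
    return pop

# Fitness that rewards raw intelligence only, vs. one where intelligence
# only pays off together with sociability (the "social animals" point).
loners = evolve(lambda ind: ind["intelligence"])
socials = evolve(lambda ind: ind["intelligence"] * ind["sociability"])
print(sum(i["sociability"] for i in loners) / len(loners))    # drifts aimlessly
print(sum(i["sociability"] for i in socials) / len(socials))  # pushed toward 1
```

Under the first fitness function sociability just drifts; under the second, where intelligence only pays off together with sociability, it is driven upward. That is the sense in which the fitness function, not the algorithm, decides what kind of mind gets selected.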

> True, humans rebelled against and overpowered evolution

Evolution among humans is as real as it is among any life form. Until every human being born is actually genetically designed from scratch, there is evolutionary pressure favoring some genes over others.

> Careless evolution managed to create humans on her first attempt at intelligence, but humans, given foresight and intelligence, have an extreme challenge making sure an AI is friendly? How can we explain this contradiction?

Humans are not friendly. There are countless examples of humans who, if they had the chance, would make the world into a place that is significantly worse for most humans. The reason none of them has succeeded yet is that so far no one has had a power/intelligence advantage so large that they could just impose their ideas unilaterally on the rest of humanity.

Comment author: ZeitPolizei 01 March 2017 02:12:57PM 0 points

There is also Bertrand, which is organic. Its ingredient list looks like it would be pretty tasty, but it costs 9€ per day.

