
Comment author: ZeitPolizei 14 September 2017 01:38:09AM 0 points

From what I've read so far, I think Information Theory, Inference, and Learning Algorithms does a rather good job of conveying the intuitions behind the topics it covers.

Comment author: gilch 09 July 2017 02:16:41AM 1 point

Failure seems like the default outcome. How do we avoid that? Have there been similar LessWrong projects that worked or didn't? Maybe we can learn from them.

Group projects can work without financial incentives. Most contributors to wikis, open-source software, and web forums like this one aren't paid for it.

Assume we've made it work well, hypothetically. How did we do it?

Comment author: ZeitPolizei 09 July 2017 03:46:08AM 0 points

It reminds me a lot of the "mastermind group" thing, where we had weekly hangouts to talk about our goals etc. The America/Europe group eventually petered out (see here for a retrospective by regex); the Eurasia/Australia group appears to be ongoing, albeit with only two (?) participants.

There have also been online reading groups for the sequences, iirc. I don't know how those went, though.

forums, wikis, open source software

I see a few relevant differences:

  • Number of participants: if there are very many people, most of whom are only sporadically active, you still get decent progress/activity. The main advantage of this video tutoring idea is personalization, which would not work with many participants.
  • Low barrier to entry, small incremental improvements: somewhat related to the last point. People can post a single comment, fix a single bug, or correct a few spelling mistakes on a wiki and then never come back, but it will still have helped.
  • Independence/asynchronicity: also related to the low barrier to entry. For video tutoring you need at least two people agreeing on a time and keeping that time free. In all the other cases everyone can pretty much come and go whenever they want. In principle, everything could be done with asynchronous communication; in practice there is usually also some real-time communication, e.g. via IRC channels.
  • Pareto contribution: I don't actually have data on this, but especially on small open-source projects and wikis, the bulk of the work is probably done by a single contributor who is really passionate about it and keeps it running.
Comment author: ZeitPolizei 07 July 2017 04:29:13PM * 2 points
Comment author: ZeitPolizei 26 June 2017 09:51:43AM 1 point

I think this is a great idea, likely to have positive value for participants. So, to ask the Hamming questions about this, I think two things are important.

  1. I think the most likely way this is going to "fail" is that a few people will get together, meet about three times, and then it will just peter out, as participants are not committed enough to stick with it long-term. Right now, I don't think I would personally participate without a good reason to believe participants will keep showing up, such as financial incentives.
  2. Don't worry too much about doing it the Right Way from the beginning. If you get some people together, just start with the first workable approach that comes to mind and iterate.
Comment author: ZeitPolizei 13 May 2017 03:02:51PM 0 points

To survive and to increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.

But, as you also point out, evolution "selects on the criterion of ingroup reproductive fitness", which does select for a specific type of mind and ethics, especially if you add the constraint that the agent should be intelligent. As far as I am aware, all of the animals considered the most intelligent are social animals (octopuses may be an exception?). The most important aspect of an evolutionary algorithm is the fitness function (see the toy sketch below). The real world seems to impose a fitness function that, where it selects for intelligence, also selects for sociability, something that generally seems like it would increase friendliness.
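To make that concrete, here is a toy sketch in Python (everything in it, including both fitness functions, is hypothetical and purely for illustration) of how swapping only the fitness function changes what an otherwise identical evolutionary algorithm selects for:

```python
import random

def evolve(fitness, pop_size=50, generations=100):
    """Minimal evolutionary loop: keep the fittest half, mutate it."""
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # rank by fitness
        parents = pop[: pop_size // 2]             # keep the fittest half
        children = [p + random.gauss(0, 0.5) for p in parents]  # mutate
        pop = parents + children                   # next generation
    return max(pop, key=fitness)

# Identical algorithm, two different fitness functions, two very
# different outcomes: the fitness function does all the selecting.
print(evolve(lambda x: -abs(x - 3)))   # converges near 3
print(evolve(lambda x: -abs(x + 7)))   # converges near -7
```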

True, humans rebelled against and overpowered evolution

Evolution among humans is as real as it is among any other life form. Until every human being born is actually genetically designed from scratch, there is evolutionary pressure favoring some genes over others.

Careless evolution managed to create humans on her first attempt at intelligence, but humans, given foresight and intelligence, have an extreme challenge making sure an AI is friendly? How can we explain this contradiction?

Humans are not friendly. There are countless examples of humans who, if they had the chance, would make the world a significantly worse place for most humans. The reason none of them has succeeded yet is that so far no one has had a power/intelligence advantage so large that they could impose their ideas unilaterally on the rest of humanity.

Comment author: ZeitPolizei 01 March 2017 02:12:57PM 0 points

There is also Bertrand, which is organic. Its ingredient list looks like it would be pretty tasty, but it costs 9€ per day.

Comment author: ZeitPolizei 24 January 2017 11:20:29AM 2 points
Comment author: ZeitPolizei 26 November 2016 04:36:08PM 4 points

What's wrong with (instrumental) Rationality?

Comment author: ZeitPolizei 14 August 2016 04:41:09PM * 0 points

Yeah, the estimates will always be subjective to an extent, but whether you choose historic figures, or all humans and fictional characters that ever existed, or whatever, shouldn't make a huge difference to your results, because in Bayes' formula the ratio P(C|E)/P(C)¹ should always be roughly the same regardless of filter. (By Bayes' theorem, P(C|E)/P(C) = P(E|C)/P(E), so the update factor depends only on the likelihoods; see the small sketch after the footnote.)

¹ C: coin exists
E: person existed
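
For concreteness, a minimal numeric sketch of that identity (the probabilities and the two "filters" are made up, purely for illustration):

```python
# By Bayes' theorem, P(C|E) = P(E|C) * P(C) / P(E), so
# P(C|E)/P(C) = P(E|C)/P(E): the update factor depends only on the
# likelihoods, not on the prior P(C), which is why the choice of
# reference class ("filter") largely cancels out.
# C: coin exists, E: person existed (as in the footnote).

def update_factor(p_e_given_c: float, p_e: float) -> float:
    """Return P(C|E)/P(C), which equals P(E|C)/P(E)."""
    return p_e_given_c / p_e

# Two hypothetical filters: a narrow one ("historic figures") and a
# broad one ("all humans and fictional characters"). The individual
# probabilities differ by an order of magnitude, but the ratio is the same.
print(update_factor(p_e_given_c=0.9, p_e=0.3))    # narrow filter -> 3.0
print(update_factor(p_e_given_c=0.09, p_e=0.03))  # broad filter  -> 3.0
```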

Comment author: TheAncientGeek 27 May 2016 07:22:41PM * 0 points

The stronger someone's imaginative ability is, the more their imagining an experience is like actually having it, in terms of brain states... and the less it is a counterexample to anything relevant.

If the knowledge the AI gets from the colour routine is unproblematically encoded in a string of bits, why can't it just look at the string of bits... for that matter, why can't Mary just look at the neural spike trains of someone seeing red?

Comment author: ZeitPolizei 27 May 2016 11:57:02PM 1 point

why can't Mary just look at the neural spike trains of someone seeing red?

Why can't we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?
