Comment author: diegocaleiro 29 November 2015 11:04:21AM 4 points [-]

Yes I am.

Step 1: Learn Bayes

Step 2: Learn reference class

Step 3: Read Zero to One

Step 4: Read The Cook and the Chef

Step 5: Reason about why the billionaires say that the people who do it wrong are basically reasoning probabilistically

Step 6: Find the connection between that and reasoning from first principles, or the gear hypothesis, or whichever other term you have for when you use the inside view, and actually think technically about a problem, from scratch, without looking at how anyone else did it.

Step 7: Talk to Michael Valentine, who has recently been reasoning about this and about how to impart it at CFAR workshops.

Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.

Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
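The first two steps can be made concrete with a small worked example: a Bayes update that combines a reference-class base rate (the outside view) with a likelihood ratio from inside-view evidence. All the numbers below are illustrative, not taken from the comment.

```python
# Minimal Bayes update combining a reference-class base rate with
# inside-view evidence, done in odds form. Numbers are illustrative.

def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability given a prior probability
    and a likelihood ratio P(evidence | H) / P(evidence | not-H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 0.10    # reference class: ~10% of comparable projects succeed
evidence_lr = 3.0   # inside-view evidence judged 3x likelier given success
posterior = bayes_update(base_rate, evidence_lr)
print(round(posterior, 2))  # 0.25
```

Prior odds of 1:9 times a likelihood ratio of 3 give posterior odds of 1:3, i.e. a posterior probability of 25%; the outside view anchors the number and the inside view moves it.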

Comment author: endoself 30 November 2015 07:06:53AM *  2 points [-]

I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn't have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It's a gears model of cognition rather than the object-level phenomenon.

If you don't have gears models at all, then yes, it's just another way to spout nonsense. This isn't because it's useless, it's because people cargo-cult it. Why do people cargo-cult Bayesianism so much? It's not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck.

What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.

Comment author: Yaacov 26 July 2015 04:57:04AM *  13 points [-]

Hi LW! My name is Yaacov, I've been lurking here for maybe 6 months but I've only recently created an account. I'm interested in minimizing human existential risk, effective altruism, and rationalism. I'm just starting a computer science degree at UCLA, so I don't know much about the topic now but I'll learn more quickly.

Specific questions:

What can I do to reduce existential risk, especially that posed by AI? I don't have an income as of yet. What are the best investments I can make now in my future ability to reduce existential risk?

Comment author: endoself 27 July 2015 09:48:37PM 4 points [-]

Hi Yaacov!

The most active MIRIx group is at UCLA. Scott Garrabrant would be happy to talk to you if you are considering research aimed at reducing x-risk. Alternatively, some generic advice for improving your future abilities is to talk to interesting people, try to do hard things, and learn about things that people with similar goals do not know about.

Comment author: [deleted] 08 April 2015 04:05:16PM -3 points [-]

I downvoted this post because it is basically meta discussion built on arguments from authority and tribalism: Andrew Ng and MIRI == good; it turns out Jeff Hawkins influenced Ng and shares some conceptual ideas with MIRI; therefore Hawkins == good. That's faulty reasoning, which has the capability to reinforce wrong beliefs.

Tell me, what about Hawkins/Numenta's work makes it wrong or right on its own merits? Why is it, or isn't it, likely to lead to capable general-purpose intelligences?

Comment author: endoself 09 April 2015 12:38:36AM *  1 point [-]

As far as I can tell, you've misunderstood what I was trying to do with this post. I'm not claiming that Hawkins' work is worth pursuing further; passive_fist's analysis seems pretty plausible to me. I was just trying to give people some information that they may not have on how some ideas developed, to help them build a better model of such things.

(I did not downvote you. If you thought that I was arguing for further work towards Hawkins' program, then your comment would be justified, and in any case this is a worthwhile thing for me to explicitly disclaim.)

Comment author: joaolkf 08 April 2015 05:48:50PM *  3 points [-]

Worth mentioning that some parts of Superintelligence are already a less contrarian version of many arguments made here in the past.

Also note that although some people do believe that FHI is in some sense "contrarian", when you look at the actual hard data on this, the fact is that FHI has been able to publish in mainstream journals (within philosophy at least) and reach important mainstream researchers (within AI at least) at rates comparable to, if not higher than, those of excellent "non-contrarian" institutes.

Comment author: endoself 09 April 2015 12:24:36AM *  2 points [-]

Yeah, I didn't mean to contradict any of this. I wonder how much of a role previous arguments from MIRI and FHI played in changing the zeitgeist and contributing to the way Superintelligence was received. There was a slow increase in uninformed fear-of-AI sentiments over the preceding years, which may have put people in more of a position to consider the arguments in Superintelligence. I think that much of this ultimately traces back to MIRI and FHI; for example, many anonymous internet commenters refer to them or use phrasing inspired by them, though many others don't. I'm more sceptical that this change in zeitgeist was helpful, though.

Of course, specific people who interacted with MIRI/FHI more strongly, such as Jaan Tallinn and Peter Thiel, were helpful in bringing the discourse to where it is today.

Comment author: IlyaShpitser 08 April 2015 06:38:53AM 4 points [-]

At least Ng's career though can be credited to Hawkins.

'At least a part'? Also,

???

Comment author: endoself 08 April 2015 08:36:57AM *  1 point [-]

The quote from Ng is

The big AI dreams of making machines that could someday evolve to do intelligent things like humans could, I was turned off by that. I didn’t really think that was feasible, when I first joined Stanford. It was seeing the evidence that a lot of human intelligence might be due to one learning algorithm that I thought maybe we could mimic the human brain and build intelligence that’s a bit more like the human brain and make rapid progress. That particular set of ideas has been around for a long time, but [AI expert and Numenta cofounder] Jeff Hawkins helped popularize it.

I think it's pretty clear that he would have worked on different things if not for Hawkins. He's done a lot of work in robotics, for example, so he could have continued working on robotics if he didn't get interested in general AI. Maybe he would have moved into deep learning later in his career, as it started to show big results.

Comment author: Sherincall 18 February 2015 08:51:28PM *  11 points [-]

Reddit is giving away 10% of their ad revenue to 10 charities that receive the most votes from the community. You can vote for as many charities as you want, with any account that has been created before 10AM PST today.

You can vote for your favorite charities here. I've had problems with the search by name, so if you don't find something, try searching by EIN instead.

Quick links: CFAR, MIRI

Comment author: endoself 18 February 2015 10:09:01PM *  9 points [-]

GiveWell, GiveDirectly, Evidence Action/Deworm the World. You can vote for multiple charities.

Comment author: endoself 24 October 2014 04:23:01AM 47 points [-]

I took the survey.

Comment author: Wei_Dai 22 July 2014 06:17:44AM 3 points [-]

This looks more like a problem with updating than with MMEU though. It seems possible to design a variant of UDT that uses MMEU, without it wanting to self-modify into something else (at least not for this reason).

Comment author: endoself 22 July 2014 06:53:51AM 3 points [-]

I can't see how this would work. Wouldn't the UDT-ish approach be to ask an MMEU agent to pick a strategy once, before making any updates? The MMEU agent would choose a strategy that makes it equivalent to a Bayesian agent, as I describe. The characteristic ambiguity-averse behaviour only appears if the agent is allowed to update.

Given a Cartesian boundary between agent and environment, you could make an agent that prefers to have its future actions be those that are prescribed by MMEU, and you'd then get MMEU-like behaviour persisting upon reflection, but I assume this isn't what you mean since it isn't UDT-ish at all.

Comment author: endoself 22 July 2014 05:01:59AM *  6 points [-]

MMEU isn't stable upon reflection. Suppose that in addition to the mysterious [0.4, 0.6] coin, you had a fair coin, and I tell you that I'll offer bet 1 ("pay 50¢ to be paid $1.10 if the coin came up heads") if the fair coin comes up heads and bet 2 if the fair coin comes up tails, but you have to choose whether to accept or reject before flipping the fair coin to decide which bet will be chosen. In this case, the Knightian uncertainty cancels out, and your expected winnings are +5¢ no matter which value in [0.4, 0.6] is taken to be the true probability of the mysterious coin, so you would take this bet on MMEU.

Upon seeing how the fair coin turns out, however, MMEU would tell you to reject whichever of bets 1 and 2 is offered. Thus, if I offer to let you see the result of the fair coin before deciding whether to accept the bet, you will actually prefer not to see the coin, for an expected outcome of +5¢, rather than see the coin, reject the bet, and win nothing with certainty. Alternatively, if given the chance, you would prefer to self-modify so as to not exhibit ambiguity aversion in this scenario.
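The arithmetic in this example can be sketched directly. Bet 2 is taken to be the mirror-image bet on tails, as the example implies:

```python
# Bet 1: pay $0.50, win $1.10 if the mysterious coin lands heads.
# Bet 2: the same bet on tails. The coin's heads-probability p is
# only known to lie in the Knightian interval [0.4, 0.6].

def ev_bet1(p):
    return 1.10 * p - 0.50        # net expected value of bet 1 at heads-prob p

def ev_bet2(p):
    return 1.10 * (1 - p) - 0.50  # net expected value of bet 2

def mmeu(value_fn, ps):
    """Maximin expected utility: evaluate at the worst credible p."""
    return min(value_fn(p) for p in ps)

ps = [0.4, 0.5, 0.6]  # endpoints (and midpoint) of the interval

print(round(mmeu(ev_bet1, ps), 2))  # -0.06: reject bet 1 on its own
print(round(mmeu(ev_bet2, ps), 2))  # -0.06: reject bet 2 on its own

# A fair coin flip picks which bet is offered, so the package is a
# 50/50 mixture; the dependence on p cancels out.
combined = lambda p: 0.5 * ev_bet1(p) + 0.5 * ev_bet2(p)
print(round(mmeu(combined, ps), 2))  # 0.05: accept the package
```

The package is worth +5¢ under every p in the interval, yet each component bet has a negative worst case, which is exactly why the MMEU agent prefers not to see the fair coin before committing.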

In general, any agent using a decision rule that is not generalized Bayesian performs strictly worse than some generalized Bayes decision rule. Note, though, that this does not mean that such an agent is forced to accept at least one of bets 1 and 2, since rejecting whichever of them is offered is a Bayes rule; for example, a Bayesian agent who believes that the bookie knows something that they don't will behave in this way. It does mean, though, that there are many situations where MMEU cannot work, such as in my example above, since in such scenarios it is not equivalent to any Bayes rule.

Comment author: Manfred 21 July 2014 10:44:08PM 7 points [-]

Does anyone know how much people are typically willing to pay to switch options in the Ellsberg paradox? Among those that would pay to switch, my expectation is around 10%, not the ~50% predicted by max-min. This sort of mild ambiguity aversion is probably better captured by prospect theory.

Comment author: endoself 22 July 2014 04:57:24AM 6 points [-]

This is a very general point. Most of the uncertainty people face is of the sort that they would naively classify as Knightian, so if people actually behaved according to MMEU, then they would essentially be playing minimax against the world.
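For concreteness, here is the standard one-urn Ellsberg setup evaluated under MMEU. The numbers (30 red balls, 60 black-or-yellow in unknown proportion, $1 prize) are the textbook version of the paradox, not anything specified in the thread:

```python
# One-urn Ellsberg setup: 90 balls, 30 red, 60 black-or-yellow in an
# unknown proportion. Prize $1. MMEU values each bet at its worst case
# over the unknown number of black balls.

def mmeu_value(bet, black_counts):
    # bet maps a possible black-ball count (0..60) to the bet's expected payoff
    return min(bet(b) for b in black_counts)

black_counts = range(0, 61)

bet_red = lambda b: 30 / 90    # $1 if red: probability known exactly
bet_black = lambda b: b / 90   # $1 if black: depends on the unknown count

print(round(mmeu_value(bet_red, black_counts), 3))    # 0.333
print(round(mmeu_value(bet_black, black_counts), 3))  # 0.0
```

MMEU values the ambiguous bet as if the urn were adversarial (zero black balls), so an MMEU agent would pay anything up to the full value of the unambiguous bet to switch, whereas real subjects typically pay only a small premium.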
