In response to LessWrong 2.0
Comment author: username2 07 December 2015 05:19:31PM 7 points [-]

There are an awful lot of ideas in this comment thread, but many have been proposed in the past. Without leadership, nothing's going to happen, and as I understand it the leaders of LW have left. Nate's been contacted? OK, but does he have decision-making power? Is he an appropriate leader to have it? Will he use it? Well, I hope so, but the first step is a deliberate move to take ownership and end the headlessness.

In response to comment by username2 on LessWrong 2.0
Comment author: So8res 13 December 2015 11:55:29PM 12 points [-]

I have the requisite decision-making power. I hereby delegate Vaniver to come up with a plan of action, and will use what power I have to see that that plan gets executed, so long as the plan seems unlikely to do more harm than good (but regardless of whether I think it will work). Vaniver and the community will need to provide the personpower and the funding, of course.

Comment author: iceman 11 December 2015 09:31:35PM 16 points [-]

$1000. (With an additional $1000 because of private, non-employer matching.)

Comment author: So8res 11 December 2015 11:08:03PM 5 points [-]

Thanks! And thanks again for your huge donation in the summer; I was not expecting more.

Comment author: Halfwitz 09 December 2015 09:46:22PM 15 points [-]

200, or 400 if you count matching.

Comment author: So8res 10 December 2015 07:34:16PM 2 points [-]

Thanks!

Comment author: Vaniver 10 December 2015 02:58:03AM 15 points [-]

$250 from me, and another $250 from my employer, though I am not sure exactly when it will arrive.

Comment author: So8res 10 December 2015 07:34:09PM 3 points [-]

Thanks!

MIRI's 2015 Winter Fundraiser!

28 So8res 09 December 2015 07:00PM

MIRI's Winter Fundraising Drive has begun! Our current progress, updated live:

 


 

Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI's research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.

Comment author: philh 08 December 2015 12:44:17PM 18 points [-]

I donated $325 (I think) a few days ago.

Comment author: So8res 08 December 2015 03:28:56PM 3 points [-]

Thanks!

Comment author: James_Miller 08 December 2015 01:00:09AM 18 points [-]

Donated $100.

Comment author: So8res 08 December 2015 04:05:19AM 3 points [-]

Thanks!

Comment author: Benito 07 December 2015 11:23:24PM 21 points [-]

Positive reinforcement for being so open about your spending.

$89 donated.

My first donation to you, and it shall not be my last.

Comment author: So8res 08 December 2015 12:23:55AM 5 points [-]

Thanks!

Comment author: Wei_Dai 26 October 2015 04:57:14AM *  3 points [-]

Sorry, I meant to imply that my faith in UDT has been dropping a bit too, due to lack of progress on the question of whether the UDT-equivalent of the Bayesian prior just represents subjective values or should be based on something objective, like whether some universes have more existence than others (i.e., the "reality fluid" view), and also lack of progress on creating a normative ideal for such a "prior". (There seems to have been essentially no progress on these questions since "What Are Probabilities, Anyway?" was written about six years ago.)

Comment author: So8res 26 October 2015 07:23:49PM 1 point [-]

I mostly agree here, though I'm probably less perturbed by the six year time gap. It seems to me like most of the effort in this space has been going towards figuring out how to handle logical uncertainty and logical counterfactuals (with some reason to believe that answers will bear on the question of how to generate priors), with comparatively little work going into things like naturalized induction that attack the problem of priors more directly.

Can you say any more about alternatives you've been considering? I can easily imagine a case where we look back and say "actually the entire problem was about generating a prior-like-thingy," but I have a harder time visualizing different tacks altogether (ones that don't eventually have some step that reads "then treat observations like Bayesian evidence").

Comment author: Wei_Dai 24 October 2015 10:55:49AM 4 points [-]

This comment isn't directly related to the OP, but lately my faith in Bayesian probability theory as an ideal for reasoning (under logical omniscience) has been dropping a bit, due to lack of progress on the problems of understanding what one's ideal ultimate prior represents and how it ought to be constructed or derived. It seems like one way that Bayesian probability theory could ultimately fail to be a suitable ideal for reasoning is if those problems turn out to be unsolvable.

(See http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/ and http://lesswrong.com/lw/mln/aixi_can_be_arbitrarily_bad/ for more details about the kind of problems I'm talking about.)
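The disagreement here is about what the prior *represents* and where it comes from, not about the update mechanics themselves, which are uncontroversial. As a reference point for readers, here is a minimal sketch of the Bayesian update the thread takes for granted; the hypotheses and numbers are invented for illustration, and every posterior it produces is downstream of whatever prior you feed in — which is exactly why the question of what the ultimate prior represents matters.

```python
def update(prior, likelihoods):
    """Return the posterior over hypotheses after one observation.

    prior: dict mapping hypothesis -> P(h)
    likelihoods: dict mapping hypothesis -> P(observation | h)
    """
    # Bayes' rule: P(h | obs) is proportional to P(h) * P(obs | h).
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnormalized.values())  # normalizing constant P(obs)
    return {h: p / z for h, p in unnormalized.items()}

# Toy example: two hypotheses about a coin, and one observed head.
prior = {"fair": 0.5, "biased": 0.5}
likelihoods = {"fair": 0.5, "biased": 0.9}  # P(heads | h)

posterior = update(prior, likelihoods)
```

Nothing in the machinery above says where `prior` should come from; the unsolved problems Wei_Dai points at live entirely in that input.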

Comment author: So8res 25 October 2015 11:25:40PM 1 point [-]

Yeah, I also have nontrivial odds on "something UDT-ish is more fundamental than Bayesian inference" / "there are no probabilities, only values" these days :-)
