
Comment author: DanArmak 14 June 2016 08:14:07AM 1 point [-]

We can take it as a calculation of the 'number of technological civilizations in our past light cone', i.e. those whose messages we could receive.

Comment author: amcknight 15 June 2016 09:11:51PM 0 points [-]

But the lower bound of this is still well below one. We can't use our existence in the light cone to infer there's at least about one per light cone. There can be arbitrarily many empty light cones.
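A minimal numeric sketch of this point (all numbers are illustrative assumptions, not estimates from any source):

    # Illustrative numbers only: chosen to show the shape of the argument.
    stars_per_light_cone = 1e24      # rough star count in one past light cone
    p_civ_per_star = 1e-30           # assumed chance a star hosts a technological civilization

    expected_per_light_cone = stars_per_light_cone * p_civ_per_star
    print(expected_per_light_cone)   # 1e-06: well below one per light cone

    # If the whole universe contains vastly more stars (perhaps infinitely many),
    # it can still hold many civilizations -- including us -- while most
    # light-cone-sized volumes stay empty.
    stars_whole_universe = 1e36      # assumption: far larger than one light cone
    print(stars_whole_universe * p_civ_per_star)   # 1e+06: many, somewhere

The point is only that our own existence does not pin the per-light-cone expectation above one; a much larger universe can hold the observers.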

Comment author: amcknight 14 June 2016 12:44:10AM *  1 point [-]

They use the number of stars in the observable universe instead of the number of stars in the whole universe. This ruins their calculation. I wrote a little more here.

Comment author: The_Jaded_One 22 February 2016 03:17:59PM 0 points [-]

because the adoption curve of paid-up members or cryopreservations is almost eerily linear over the past 50 years

Does that mean a constant number of new sign ups per year, or an increasing number of new sign ups per year? Also, I'd love to see the data if you can link to it.

Comment author: amcknight 09 March 2016 02:01:42AM 0 points [-]

Here's an eerie line showing about 200 new Cryonics Institute members every 3 years.
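For the question above (constant vs. increasing sign-ups), a quick back-of-the-envelope on that figure, treating the ~200-per-3-years reading as approximate:

    new_members_per_period = 200    # read off the chart, roughly
    years_per_period = 3
    print(new_members_per_period / years_per_period)   # ~67 sign-ups per year
    # A roughly constant yearly rate is what makes the cumulative membership
    # curve linear rather than exponential.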

Comment author: Metus 26 December 2014 07:37:19PM 2 points [-]

Any other fundraisers interesting to LW going on?

Comment author: amcknight 28 January 2015 12:53:53AM 1 point [-]

Charity Science, which fundraises for GiveWell's top charities, needs $35k to keep going this year. They've been appealing to non-EAs from the Skeptics community and lots of other folks, and they kind of work as a pretty front-end for GiveWell. More here. (Full disclosure: I'm on their Board of Directors.)

Comment author: So8res 15 January 2015 12:09:44AM *  6 points [-]

Hmm, you seem to have missed the distinction between environmental uncertainty and logical uncertainty.

Imagine a black box with a Turing machine inside. You don't know which Turing machine is inside; all you get to see are the inputs and the outputs. Even if you had unlimited deductive capability, you wouldn't know how the black box behaved: this is because of your environmental uncertainty, of not knowing which Turing machine the box implemented.
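A toy sketch of that kind of black box (illustrative Python, not anything from the text beyond the setup): the hidden function stands in for the unknown Turing machine, and all you can do is query it.

    import random

    # Environmental uncertainty: you don't know which machine is inside the box.
    _secret = random.choice([lambda x: x + 1, lambda x: 2 * x, lambda x: x * x])

    def black_box(x: int) -> int:
        """All you ever see: inputs in, outputs out."""
        return _secret(x)

    # Observing input/output pairs narrows down the hypotheses, but no amount
    # of pure deduction (without more observations) settles which machine it is.
    print(black_box(2), black_box(3))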

Now imagine a Python program. You might read the program and understand it, but not know what it outputs (for lack of deductive capability). As a simple concrete example, imagine that the program searches for a proof of the Riemann hypothesis using fewer than a googol symbols: in this case, the program may be simple, but the output is unknown (and very difficult to determine). Your uncertainty in this case is logical uncertainty: you know how the machine works, but not what it will do.
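By contrast, here is a toy stand-in for that kind of program (not the proof-search program itself, just an illustration): it is short and fully specified, yet predicting its output without effectively running it takes real deductive work. That gap is logical uncertainty.

    def collatz_steps(n: int) -> int:
        """Number of Collatz steps needed to reach 1 from n."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    # You can read and fully understand this code, but the value it prints
    # is not obvious from inspection alone.
    print(max(collatz_steps(n) for n in range(1, 10_000)))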

Existing methods for reasoning under uncertainty (such as standard Bayesian probability theory) all focus on environmental uncertainty: they assume that you have unlimited deductive capability. A principled theory of reasoning under logical uncertainty does not yet exist.

"impossible possibilities" sounds like a contradition

Consider that Python program that searches for a proof of the Riemann hypothesis: you can imagine it outputting either "proof found" or "no proof found", but one of these possibilities is logically impossible. The trouble is, you don't know which possibility is logically impossible. Thus, when you reason about these two possibilities, you are considering at least one logically impossible possibility.
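A minimal sketch of what that looks like in practice (assuming the proof search has not been run yet; the 50/50 split is an arbitrary illustration):

    # Exactly one of these outcomes is logically impossible -- we just don't
    # yet know which -- and a bounded reasoner still spreads credence over both.
    credence = {"proof found": 0.5, "no proof found": 0.5}

    assert abs(sum(credence.values()) - 1.0) < 1e-9
    print(credence)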

I hope this helps answer your other questions, but briefly:

  1. Fuzzy logic is only loosely related. It's traditionally used in scenarios where the objects themselves can be "partly true", whereas in most simple models of logical uncertainty we consider cases where the objects are always either true or false but you haven't been able to deduce which is which yet. That said, probabilistic logics (such as this one) bear some resemblance to fuzzy logics.
  2. When we say that you know how the machine works, we mean that you understand the full construction of the machine and all the physical rules governing it. That is, we assert that you could write a computer program which would output "0" if the machine drops the ball into the top slot, and "1" if it drops the ball into the bottom slot. (Trouble is, while you could write the program, you may not know what it will output.)
  3. The Rube Goldberg machine thought experiment is used to demonstrate the difference between environmental uncertainty (not knowing which machine is being used) and logical uncertainty (not being able to deduce how the machine acts). I'm sorry if the reference to physical Rube Goldberg machines confused you.
  4. This is an overview document: it mostly just describes the field (which many are ignorant of) and doesn't introduce any particularly new results. Therefore, I highly doubt we'll put it through peer review. But rest assured, I really don't think we're hurting for credibility in this domain :-)

(The field was by no means started by us. If it's arguments from authority that you're looking for, you can trace this topic back to Boole in 1854 and Bernoulli in 1713, picked up in a more recent century by Los, Gaifman, Halpern, Hutter, and many, many more in modern times. See also the intro to this paper, which briefly overviews the history of the field, covers the same topic, and is peer-reviewed. See also many of the references in that paper; it contains a pretty extensive list.)

Comment author: amcknight 15 January 2015 08:23:12AM 0 points [-]

A more precise way to avoid the oxymoron is "logically impossible epistemic possibility". I think 'epistemic possibility' is used in philosophy in approximately the way you're using the term.

Comment author: Error 07 January 2014 04:34:45AM *  7 points [-]

I've completed the first draft of a rather long piece of Chrono Trigger fanfiction -- with significant motivational help from the Less Wrong Study Hall and Beeminder. It still needs polishing, but it's generally excellent to see it done. [Edit: I didn't emphasize that enough. It is awesome to see it done. Like finishing a marathon and looking back down the trail.]

It has no particular rationalist bent. If anyone here is interested in seeing it anyway, you can find the HTML version here, and the PDF version here. It's about 35k words and would probably take a couple hours to read beginning to end. It may be intelligible even to non-players; I'm not sure (but I would be interested to find out).

If you're interested in helping me get it ready for release, please flip a coin to choose the version (so I catch any version-specific problems) and send feedback to error@feymarch.net.

Comment author: amcknight 08 November 2014 07:58:08AM 0 points [-]

Links are dead. Is there anywhere I can find your story now?

Comment author: amcknight 28 October 2014 04:49:03AM 30 points [-]

Done! Ahhh, another year, another survey. I feel like I did one just a few months ago. I wish I knew my previous answers about gods, aliens, cryonics, and simulators.

Comment author: Sophronius 30 August 2014 11:57:07AM 14 points [-]

Can somebody explain to me why people generally assume that the great filter has a single cause? My gut says it's most likely a dozen one-in-a-million chances that all have to turn out just right for intelligent life to colonize the universe. So the total chance would be (1/1,000,000)^12. Yet everyone talks of a single 'great filter' and I don't get why.
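The arithmetic behind that figure, for concreteness:

    p_single = 1e-6      # one-in-a-million chance per filter
    n_filters = 12       # "a dozen" independent filters
    print(p_single ** n_filters)   # 1e-72: the combined chance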

Comment author: amcknight 25 September 2014 03:05:56AM 1 point [-]

I don't have an answer but here's a guess: For any given pre-civilizational state, I imagine there are many filters. If we model these filters as having a kill rate, then my (unreliable stats) intuition tells me that a prior on the kill-rate distribution should be log-normal. I think this suggests that most of the killing happens at the left-most outlier, but someone better at stats should check my assumptions.
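A rough way to check that intuition numerically (a sketch under arbitrary assumptions: 12 independent filters, per-filter log-attrition drawn log-normally; the parameters are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    n_worlds, n_filters = 10_000, 12

    # k[i, j] = log-attrition of filter j in world i (survival_j = exp(-k_j)).
    k = rng.lognormal(mean=0.0, sigma=2.0, size=(n_worlds, n_filters))

    # Share of each world's total attrition contributed by its harshest filter.
    share_of_worst = k.max(axis=1) / k.sum(axis=1)
    print(share_of_worst.mean())   # with a heavy-tailed sigma like this, the
                                   # mean share is far above the uniform 1/12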

Comment author: Sean_o_h 28 December 2013 05:07:59PM 3 points [-]

On point 1: I can confirm that members of CEA have done quite a lot of awareness-spreading about existential risks and long-run considerations, as well as bringing FHI, MIRI and other organisations to the attention of potential donors who have concerns in this area. I generally agree with Will's point, and I think it's very plausible that CEA's work will result in more philanthropic funding coming FHI's way in the future.

On point 2: I also agree. I need to have some discussion with the founders to confirm some points on strategy going forward as soon as the Christmas period's over, but it's likely that additional funds could play a big role in CSER's progress in gaining larger funding streams. I'll be posting on this shortly.

Comment author: amcknight 06 January 2014 05:59:20AM 1 point [-]

It sounds like CSER could use a loan. Would it be possible for me to donate to CSER and to get my money back if they get $500k+ in grants?

In response to Why CFAR?
Comment author: amcknight 31 December 2013 06:43:24AM 2 points [-]

From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there's a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.

More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.

I'm extremely interested in this being spelled out in more detail. Can you point me to any evidence you have of this?
