Comment author: Mitchell_Porter 15 March 2014 10:39:42PM *  1 point

Am I blind or does the linked article not even say who compiled the list?

WealthX.com made a similar list based only on billionaire alumni. "Harvard has graduated some 52 billionaires, with a collective fortune of $205 billion". Compare the figures above: "Harvard has 2,964 alumni worth $200+ million, with a total wealth of $622 billion".

So Harvard has almost 3000 alumni with individual wealth of $200m-$1bn, collectively worth $420 billion; and then it has about 50 alumni with individual wealth >$1bn, collectively worth $205 billion. The average individual in the second group is about thirty times as wealthy as the average individual in the first group.

But wait! Jonah mentioned Gates ($75bn) and Zuckerberg ($30bn). So just two of the Harvard billionaires are worth as much as the other 50 Harvard billionaires combined. And one of those two, Gates, has more than twice the wealth of the guy in second place.

To sum up: (Gates > Zuckerberg) > (50 lesser billionaires) > (a thousand lesser "hundred-millionaires") > you
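
A quick back-of-envelope check of the arithmetic, in Python (all amounts in billions of dollars, using only the figures quoted above):

```python
# Figures quoted above, in billions of USD
total_alumni, total_wealth = 2964, 622      # Harvard alumni worth $200M+
billionaires, billionaire_wealth = 52, 205  # billionaire alumni only

others = total_alumni - billionaires                # ~2912 people
others_wealth = total_wealth - billionaire_wealth   # ~$417B ("$420 billion")

avg_other = others_wealth / others          # ~0.143, i.e. ~$143M each
# (this falls below the $200M floor, so the two sources' figures
#  can't both be exact; the ratio below is still roughly right)
avg_billionaire = billionaire_wealth / billionaires  # ~3.94, i.e. ~$3.9B each
print(avg_billionaire / avg_other)          # ~27.5: "about thirty times"

# Gates + Zuckerberg versus the other 50 billionaires
gates, zuck = 75, 30
print(gates + zuck, billionaire_wealth - (gates + zuck))  # 105 vs 100
```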

Comment author: private_messaging 17 February 2014 10:23:47AM *  3 points

One could take various simulation hypotheses as examples of modernized young universe creationism.

I don't think you can really "refute" that kind of hypothesis. Such hypotheses just stay right where they start, at their priors, not predicting any distinct experiences until a future date.

At most there may be good reasons for the priors to be very low, though you won't get very far with the complexity of gods in general - if our universe can plausibly culminate in the creation of a superintelligence, then the complexity of a god is at most not much higher than that of our universe; and for all we know it might well be lower.

Comment author: Mitchell_Porter 23 February 2014 10:46:08AM *  4 points

Young Earth Simulationism (YES) could find supporters here... (And it can be contrasted with Natural Origins - NO.)

In response to Testing my cognition
Comment author: Mitchell_Porter 20 February 2014 03:45:20AM *  4 points

Can someone please write a song about this - "Testing My Cognition" - to the tune of "Losing My Religion"?

Comment author: Eliezer_Yudkowsky 02 February 2014 03:54:53PM 8 points

Would you agree that you are carrying out a Pascal's Muggle line of reasoning using a leverage prior?

http://lesswrong.com/lw/h8k/pascals_muggle_infinitesimal_priors_and_strong/

If so, you're using it very controversially, compared to disbelieving in a googolplex or an Ackermann number of leverage. A 10^-80 prior is easy for sensory evidence to overcome if your model implies that fewer than a 10^-80 fraction of sentients hallucinate your sensory evidence; this happens every time you flip 266 coins. Conversely, to state that the 10^-80 prior is invincible just restates that you think more than a 10^-80 fraction of sentients are having your experiences, due to Simulation Arguments or some explanation of the Fermi Paradox which involves lots of civilizations like ours within any given Hubble volume. In other words, to say that the 10^-80 prior is not beaten by our sensory experience merely restates that you believe in an alternate explanation for the Fermi Paradox in which our sensory experiences are not rare.
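
For reference, the arithmetic behind the 266-coin figure, sketched in Python:

```python
from math import log2, log10

# How many fair-coin flips give an outcome with probability below 10^-80?
print(80 * log2(10))   # ~265.75, so 266 flips suffice
print(266 * log10(2))  # ~80.07: 2^-266 is about 10^-80.07, just under 10^-80
```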

Comment author: Mitchell_Porter 04 February 2014 12:09:23AM 3 points

I want to respond directly now...

It seems to me that winning the leverage lottery (by being at the dawn of an intergalactic civilization) is not like flipping a few hundred coins and getting a random bitstring that was not generated in that fashion anywhere else in our Hubble volume. It is like flipping a few hundred coins and getting nothing but heads. The individual random bitstring is improbable, but it is not special, and getting some not-special bitstring through the coin-flipping process is the expected outcome.
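
In numbers, the contrast looks something like this (a toy sketch; "special" here stands in for any short list of distinguished strings):

```python
# Out of 2^266 equally likely bitstrings, only a handful are "special"
# (all heads, all tails, other simple patterns). A specific random string
# is improbable, but drawing *some* non-special string is near-certain.
n = 266
total = 2 ** n
special = 2                  # e.g. all-heads and all-tails
p_special = special / total
print(p_special)             # ~1.7e-80: winning the "leverage lottery"
print(1 - p_special)         # ~1.0: the expected, unremarkable outcome
```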

Therefore I think the analogy fails, and the proper conclusion is that models implying a "cosmic manifest destiny" for present-day Earthlings are wrong. How this relates to the whole Mugging/Muggle dialectic I do not know; I haven't had time to see what's really going on there. I am presently more interested in the practical consequences of this conclusion for our model of the universe than I am in the epistemology.

Comment author: Mitchell_Porter 03 February 2014 04:32:44AM 20 points

From the "Desk" of: Snooldorp Gastool V

Attention: Eliezer Yudkowsky
Machine Intelligence Research Institute

Sir, you will doubtlessly be astonished to be receiving a letter from a species unknown to you, who is about to ask a favor from you.

As fifth rectified knigget of my underclan's overhive, I have recently come into possession of an ancient Andromedan passkey, guaranteeing the owner access to no less than 2^419 intergalactic credits. My own species is a trans-cladistic harmonic agglomerate and therefore does not satisfy the anghyfieithadwy of Andromedan culture-law, which stipulates that the titular beneficiary of the passkey (who has first claim on half the credits) must be a natural sophont species. However, we have inherited a trust relationship with a Voolhari Legacy adjudication system, in the vicinity of what you know as the Orion OB1 association, and we have verified that your species is the nearest natural sophont with the technical capacity and cognitive inclinations needed to be our partners in this venture. In order to earn your share of this account, your species should beam by radio telescope its genome, cultural history, and at least two hundred (200) characteristic high-resolution brain maps, to:

Right Ascension 05h 55m 10.3053s, Declination +07° 24′ 25.426″

The Voolhari adjudicator will then process and ratify your source code, facilitating the clemnestration of the passkey's paramancy. The adjudicator has already been notified to expect your transmission.

Please note that, due to the nearby presence of several aging supergiant stars, the adjudicator will most likely be destroyed via supernova within one galactic day (equalling approximately 610,000 Earth years), so this must be done urgently. Please maintain the transmission until we notify you that the clemnestration is complete. Also, again according to Andromedan anghyfieithadwy, the passkey will be invalidated if the beneficiary species becomes postbiological. We therefore request that you halt all technological progress for the duration of the transmission, unless it directly aids the maintenance of the radio signal.

Certain of your epistemologists may be skeptical of our veracity. If the passkey claimed access to 2^(2^419) credits, we would share this skepticism and suspect a Circinian scam. However, a single round of Arcturan Jeopardy easily produces events with a probability of less than 1 in 2^419; therefore, we consider it irrational to doubt our good luck in this case.

We look forward to concluding this venture with an outcome of mutual enrichment and satisfaction!

Feelers touched to yours,

Snooldorp Gastool V, Ensorcelment Overlord, Deneb Octant

Comment author: Vulture 01 February 2014 05:57:01PM *  3 points

Don't assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.

Personally, I assume this as a two-place function; I assume that by my values, "basically a growth medium for humanity" is a good and useful way to think about the universe. Someone with a different value system, e.g. placing greater value than I do on non-human life, might prefer that we not think of it that way. Oh well.

Comment author: Mitchell_Porter 01 February 2014 08:12:17PM 10 points

This is not about values, it is about realism. I am protesting this presumption that the cosmos is just a dumb desert waiting for transhumanity to come and make it bloom in our image. If a line of argument tells you that you are a 1-in-10^80 special snowflake from the dawn of time, you should conclude that there is something wrong with the argument, not wallow in the ecstatic dread of your implied cosmic responsibility. It would be far more reasonable to conclude that there is some presently unknown property of the universe which either renders such expansion physically impossible, or which actively suppresses it when it begins to occur.

In response to On saving the world
Comment author: Mitchell_Porter 01 February 2014 02:01:51AM 7 points

We hold the entire future of the universe in our hands. Is that not justification enough?

It's too much justification. Don't assume that this immense savage universe is just a growth medium for whatever microbe wins the game on Earth.

Comment author: notsonewuser 21 January 2014 03:25:15PM *  19 points

Going by only the data Yvain made public, defining "experienced rationalists" as those people who have 1000 karma or more (this might differ slightly from Yvain's sample, but it looked as if most who had that much karma had been in the community for at least 2 years), and looking only at those experienced rationalists who recorded both a cryonics probability and their cryonics status, we get the following data (note that all figures are percentages - so 50 means 50% confidence (1 in 2), while 0.5 means 0.5% confidence (1 in 200)):

For those who said "No - and do not want to sign up for cryonics", the cryonics success probability estimates (conditional on no global catastrophe) have quartiles (Q1, median, Q3) = (0.03, 1, 1), with mean 0.849 and standard deviation 0.728. This group was size N = 32.

For those who said "No - still considering it", we have (5,5,10), with mean 7.023 and standard deviation 2.633. This group was size N = 44.

For those who wanted to but for some reason hadn't signed up yet (either not available in the area (maybe worth moving for?) or otherwise procrastinating), we have (15,25,37), with mean 32.069 and standard deviation 23.471. This group was size N = 29.

Finally, for the people who have signed up, we have (7,21.5,33), with mean 26.556 and standard deviation 22.389. This group was size N = 18.

If we put all of the "no" people together (those procrastinating, those still thinking, and those who just don't want to), we get (2,5,15), with mean 12.059 and standard deviation 17.741. This group is size N = 105.

I'll leave the interpretation of this data to Mitchell_Porter, since he's the one who made the original comment. I presume he had some point to make.

(I used Excel's population standard deviation computation to get the standard deviations. Sorry if I should have used a different computation. The sample standard deviation yielded very similar numbers.)
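
For anyone who wants to reproduce these numbers, here is a minimal sketch in Python/pandas. The file name and column names ("KarmaScore", "PCryonics", "CryonicsStatus") are hypothetical placeholders, not the actual headers in Yvain's published CSV:

```python
import pandas as pd

df = pd.read_csv("survey.csv")  # hypothetical filename

# "Experienced rationalists": 1000+ karma, with both fields answered
exp = df[(df["KarmaScore"] >= 1000)
         & df["PCryonics"].notna()
         & df["CryonicsStatus"].notna()]

for status, grp in exp.groupby("CryonicsStatus"):
    p = grp["PCryonics"]  # confidence, in percent
    print(status,
          "N =", len(grp),
          "quartiles =", p.quantile([0.25, 0.5, 0.75]).tolist(),
          "mean =", round(p.mean(), 3),
          "sd =", round(p.std(ddof=0), 3))  # population SD, as in Excel
```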

Comment author: Mitchell_Porter 22 January 2014 09:23:23AM 16 points

Thanks for the calculations... and for causing me to learn about quartiles.

Part of Yvain's argument is that "proto-rationalists" have an average confidence in cryonics of 21%, but "experienced rationalists", only 15%. The latter group is thereby described as "less credulous", because the average confidence is lower, but "better at taking ideas seriously", because more of them are actually signed up for cryonics.

Meanwhile, your analysis – if I am parsing the figures correctly! – suggests that "experienced rationalists" who don't sign up for cryonics have an average confidence in cryonics of 12%, and "experienced rationalists" who do sign up for cryonics, an average confidence of 26%.

This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.
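
As a consistency check on that parsing: recombining the two subgroups, using the N's and means from the parent comment, lands close to Yvain's 15% figure for experienced rationalists as a whole:

```python
# Subgroup sizes and mean confidences from the parent comment
n_no, mean_no = 105, 12.059    # experienced, not signed up
n_yes, mean_yes = 18, 26.556   # experienced, signed up

overall = (n_no * mean_no + n_yes * mean_yes) / (n_no + n_yes)
print(round(overall, 1))       # ~14.2%, in the vicinity of Yvain's 15%
```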

Comment author: Mitchell_Porter 21 January 2014 03:30:04AM 23 points

If we distinguish between

"experienced rationalists" who are signed up for cryonics

and

"experienced rationalists" who are not signed up for cryonics

... what is the average value of P(Cryonics) for each of these subpopulations?

Comment author: cousin_it 08 January 2014 08:36:03PM *  4 points

As far as I can tell, the paper is asking this question: if the world is just a wavefunction, why do we see it as a bunch of material things? Tegmark is trying to show that viewing the world as a bunch of material things is somehow special, that it optimizes some physical or mathematical quantity. That's impressive if he can make it work, but I'm not sure it's on the right track. Maybe a better question would be, which ways of looking at the wavefunction are the most likely to contain evolution? After all, minds are optimized for the kind of information processing that is useful for evolution. (Um, what I really meant here was "useful for increasing fitness", thx Mark_Friedenbach.)

Comment author: Mitchell_Porter 09 January 2014 02:33:32PM 2 points

I think you're on the right track in assessing the paper's content. Here's what I retained from a first reading: He considers a quantum density matrix. He decides to separate it in a way which minimizes the mutual information of the two parts, hoping that this might be the amount of conscious information present, but it always turns out to be less than a bit. Also, his method of division tends to produce parts which are static (energy eigenstates). So in dividing up the density matrix, he adds a second condition (alongside "minimize the mutual information") so that the resulting parts will evolve over time. This increases the minimum mutual information, but not substantially.

I regard the paper as a very preliminary contribution to a new approach to quantum ontology. In effect he's telling us how the wavefunction divides into things, if we assume that the division is made according to this balance between minimal mutual information and some dynamics in the parts. Then he can ask whether the resulting things look like objects as we know them (reasonably so) and whether they look like integrated information processors (less success there, in my opinion, even though that was the aim).
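
For concreteness, here is a minimal numpy sketch of the quantity being minimized, the quantum mutual information I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB) of a bipartition, for the simplest case of two qubits. This is just the definition applied to one fixed split, not Tegmark's actual search over ways of factorizing the Hilbert space:

```python
import numpy as np

def entropy(rho):
    # Von Neumann entropy S(rho) = -Tr[rho log2 rho], via eigenvalues
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]               # drop numerical zeros
    return float(-(p * np.log2(p)).sum())

def mutual_information(rho_ab):
    # I(A:B) = S(rho_A) + S(rho_B) - S(rho_AB) for a two-qubit state
    r = rho_ab.reshape(2, 2, 2, 2)    # indices: a, b, a', b'
    rho_a = np.einsum('abcb->ac', r)  # partial trace over B
    rho_b = np.einsum('abad->bd', r)  # partial trace over A
    return entropy(rho_a) + entropy(rho_b) - entropy(rho_ab)

# Product state |00><00|: the parts share no information
psi = np.kron([1.0, 0.0], [1.0, 0.0])
print(mutual_information(np.outer(psi, psi)))    # ~0.0

# Bell state (|00> + |11>)/sqrt(2): maximally correlated parts
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(mutual_information(np.outer(bell, bell)))  # ~2.0 bits
```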
