Comment author: Lumifer 28 March 2016 01:05:05AM 3 points [-]

Not if the "data" is noise.

Comment author: Transfuturist 28 March 2016 11:59:00PM 1 point [-]

Most of those who haven't ever been on Less Wrong will provide data for that distinction. It isn't noise.

Comment author: gjm 27 March 2016 12:22:53AM 1 point [-]

With whom? My understanding is that this is intended to be a survey of people who either are or have been LW participants.

In response to comment by gjm on Lesswrong 2016 Survey
Comment author: Transfuturist 27 March 2016 12:42:42AM 5 points [-]

This is a diaspora survey, for the pan-rationalist community.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Transfuturist 27 March 2016 12:39:57AM 44 points [-]

I have taken the survey. I did not treat the metaphysical probabilities as though I had a measure over them, because I don't.

Comment author: ChristianKl 24 February 2016 09:38:45AM *  -2 points [-]

My main point is that Simon Anholt is a high-status consultant, not a hippy. Lumifer rejects him because he thinks Anholt is simply not a serious person but a hippy. He's paid by governments to advise them on how to achieve foreign policy objectives.

The solution he proposes also happens to be more effective at achieving foreign policy objectives than the status quo.

He also gives data-driven advice in a field where most other consultants don't.

Comment author: Transfuturist 24 February 2016 05:57:16PM *  0 points [-]

I'd guess the rejection is based more on his message seeming to violate deep-seated values on your end about how reality should work than on his work being bullshit.

Lumifer rejects him because he thinks Simon Anholt is simply not a serious person but a hippy.

How about you let Lumifer speak for Lumifer's rejection, rather than tilting at straw windmills?

Comment author: indexador2 22 February 2016 08:55:07PM *  14 points [-]

If evolution is untrue, it changes everything.

Just by reading this phrase, I can conclude that everything else is probably useless.

Comment author: Transfuturist 22 February 2016 11:42:03PM 1 point [-]

The equivocation on 'created' in those four points is enough to ignore them entirely.

Comment author: ChristianKl 22 February 2016 09:19:14PM -2 points [-]

If you are talking about egoistic in the sense that, as a US citizen, you want outcomes that are generally good for US citizens:

Government consultant Simon Anholt argues in his TED talk that when a country does a lot of good in the world, it builds a positive brand. That better reputation then makes a lot of things easier.

You are treated better when you travel in foreign countries. A lot of positive economic trade happens on the back of good brand reputations. Good reputations reduce war and terrorism.

Spending money on EA interventions likely has better per-dollar returns for US citizens than spending money on waging wars like the Iraq war.

Comment author: Transfuturist 22 February 2016 11:06:58PM 0 points [-]

I'm curious why this was downvoted. Was it the last statement, which has political context?

Comment author: Transfuturist 22 February 2016 08:32:13PM *  0 points [-]

Are there any egoist arguments for (EA) aid in Africa? Does investment in Africa's stability and economic performance offer any instrumental benefit to a US citizen that does not care about the welfare of Africans terminally?

Comment author: Transfuturist 20 December 2015 12:06:39AM *  1 point [-]

We don't need to describe the scenarios in precise physical terms. All we need to do is describe them in terms of the agent's epistemology, with the same sort of causal surgery as described in Eliezer's TDT. Full epistemological control means you can test your AI's decision system.

This is a more specific form of the simulational AI box. The rejections of simulational boxing I've seen rely on treating the AI as a black box: free to act and sense with no observation possible, somehow gaining knowledge of the parent world through inconsistencies and probabilities, and escaping via bugs in its containment program. White-box simulational boxing, by contrast, can completely compromise the AI's apparent reality and actual abilities.

Comment author: WalterL 06 October 2015 09:11:24PM 1 point [-]

Lots! But it seems like if we start doing "yay stability" vs. "boo stagnation" we'll be at politics pretty quick.

Comment author: Transfuturist 07 October 2015 04:26:00PM 1 point [-]

Stagnation is actually a stable condition. It's "yay stability" vs. "boo instability," and "yay growth" vs. "boo stagnation."

Comment author: turchin 07 October 2015 07:26:44AM 0 points [-]

So, as I understand you, you favor resurrecting a "sparse coverage of the distribution," which would help prevent an exponential explosion in the number of copies while still covering the most peculiar parts of the landscape of possible copies?

While I can support this case, I see the following problem: suppose I have a partner X who would be better preserved via cryonics, while my information will be partly lost. If 1000 semi-copies of me are created to cover the distribution, 999 of them will be without partner X, and partner X will also suffer because ve will now have to care about my other copies. (Ve could also be copied, but that would require copying the whole world.)

If it were my choice, I would prefer to lose some of my memories or personal traits rather than live in a world with many copies of me.

Comment author: Transfuturist 07 October 2015 03:26:45PM 0 points [-]

(Ve could also be copied, but that would require copying the whole world.)

Why would that be the case? And if it were the case, why would that be a problem?
