Comment author: Ilverin 11 October 2016 06:13:45PM *  0 points

Is there any product like an adult pacifier that is socially acceptable to use?

I am struggling with the self-control not to interrupt people, and I am afraid for my job.

EDIT: In the meantime (or long-term, if it works) I'll use less caffeine (currently 400mg daily) to see if that helps.

Comment author: Ilverin 31 May 2016 04:05:12PM 2 points

Discussion: weighting inside view versus outside view on extinction events

3 Ilverin 25 February 2016 05:18AM

Articles covering the ideas of inside view and outside view:

Beware the Inside View (by Robin Hanson)

Outside View LessWrong wiki article

Article discussing the weighting of inside view and outside view:

The World is Mad (by ozymandias)

 

A couple of potential extinction events which seem easiest to mitigate (the machinery involved is expensive):

Broadcasting powerful messages to the stars: 

Should Earth Shut the Hell Up? (by Robin Hanson)

Arecibo message (Wikipedia)

Large Hadron Collider: 

Anyone who thinks the Large Hadron Collider will destroy the world is a t**t. (by Rebecca Roache)

 

How should the inside view versus the outside view be weighted when considering extinction events?

Should the broadcast of future Arecibo messages (or powerful signals in general) be opposed?

Should increasing the energy levels of the Large Hadron Collider (or its continued operation at all) be opposed?

Comment author: Ilverin 26 August 2015 10:36:05PM 0 points

Efficient charity: you don't need to be an altruist to benefit from contributing to charity

Effective altruism rests on two philosophical ideas: altruism and utilitarianism.

In my opinion, even if you're not an altruist, you might still want to use statistics to learn about charity.

Some people believe that they have an ethical obligation to cause net-zero suffering. Others might believe they have an ethical obligation to cause only an average amount of suffering. In these cases, in order to reduce the suffering you cause to an acceptable level, efficient charity might be for you.

It's possible that in your life you will not come across enough ponds with drowning people whom only you can save, so you will have to pursue other means of reducing suffering. One such method is charity, and statistics can help identify which charities to donate to, and how much.

In order to save money to satisfy your own preferences, you might want to donate as little as possible. You might also calculate that a different time might be best to donate (like after you die). But if you come to either of these conclusions, you're still using the idea of efficient charity.

Comment author: Ilverin 18 May 2015 06:33:40PM *  0 points

Disclaimer: I may not be the first person to come up with this idea

What if dangerous medications (such as, possibly, 2,4-dinitrophenol (DNP)) were stored in a device that would only dispense a dose when it received a time-dependent cryptographic key, generated by a trusted source at a supervised location (the pharmaceutical company, a government agency, or an independent security company)?

Could this be useful to prevent overdoses?
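The scheme described above can be sketched with a standard time-based one-time-password construction (in the style of RFC 6238). Everything here is hypothetical: the `Dispenser` class, the shared-secret handling, and the six-digit codes are illustrative assumptions, not a real product design.

```python
import hashlib
import hmac
import struct
import time


def totp_code(secret, timestep=30, t=None):
    """Derive a 6-digit time-dependent code from a shared secret (RFC 6238 style)."""
    counter = int((time.time() if t is None else t) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"


class Dispenser:
    """Hypothetical locked pill dispenser sharing a secret with a trusted server."""

    def __init__(self, secret):
        self.secret = secret

    def try_dispense(self, code, t=None):
        # Release one dose only if the presented code matches the current window.
        return hmac.compare_digest(code, totp_code(self.secret, t=t))
```

The trusted party holds the same secret and releases one code per approved dose; because each code expires with its time window, a hoarded code cannot unlock additional doses later.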

Comment author: RichardKennaway 08 December 2014 12:13:04PM 10 points

These seem pretty easy to answer even for a non-expert.

It is variously said that we share 99% of our genes with a chimpanzee, 95% of our genes with a random human, and 50% of our genes with a sibling. Explain how these can all be true statements.

Comment author: Ilverin 08 December 2014 03:47:53PM *  0 points

Disclaimer: Not remotely an expert at biology, but I will try to explain.

One can think of the word "gene" as having multiple related uses.

Use 1: "Genotype". Even if we have different-colored hair, we likely both have the same "gene" for hair, which could be considered shared with chimpanzees. If you could rewrite DNA nucleobases, you could change your hair color without changing the gene itself; you would merely be changing the "gene encoding". The word "genotype" refers to a "function" which takes in a "gene encoding" and outputs a "gene phenotype".

Use 2: "Gene phenotype". If we both have the same color hair, we would have the same "Gene phenotype". Suppose the genotype for hair is a gene that uses simple dominance. In this case, we could have the same phenotype even with different gene encodings. Suppose you have the gene encoding "BB" whereas I have the gene encoding "Bb". In this case, we could both have black hair, the same "Gene phenotype", but have different "Gene encodings".

Use 3: "Gene encoding". If we have different color hair, then we have different gene encodings (but we have the same "genotype" as described in "Use 1"). This "gene encoding" is commonly not shared between siblings and less commonly shared between species.

So "we share 99% of our genes with a chimpanzee" likely refers to "Genotype".

"95% of our genes with a random human" likely refers to "Gene phenotype".

"50% of our genes with a sibling" likely refers to "Gene encoding".
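As a toy illustration of how the three uses yield three different percentages for the same pair of genomes; the loci, alleles, and numbers below are invented for illustration, not real genetic data.

```python
# Two invented "genomes": each locus maps to an allele pair (its encoding).
human   = {"hair": "Bb", "eyes": "bb", "height": "Tt", "blood": "AO"}
sibling = {"hair": "BB", "eyes": "bb", "height": "Tt", "blood": "AA"}
chimp_loci = {"hair", "eyes", "height"}  # the chimp lacks one locus in this toy model

# Use 1 (genotype / shared loci): do both species carry the same gene at all?
shared_loci = len(set(human) & chimp_loci) / len(human)  # 3/4 = 0.75

# Use 3 (gene encoding): are the exact allele strings identical?
shared_encodings = sum(human[k] == sibling[k] for k in human) / len(human)  # 2/4 = 0.50


# Use 2 (gene phenotype): "BB" and "Bb" both yield black hair under simple dominance,
# so encodings can differ while phenotypes match.
def hair_phenotype(encoding):
    return "black" if "B" in encoding else "blond"


same_hair = hair_phenotype(human["hair"]) == hair_phenotype(sibling["hair"])  # True
```

Each comparison counts something different over the same data, which is how three very different percentages can all be true at once.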

Comment author: Mestroyer 25 February 2014 07:07:39PM 4 points

Downvoted for the fake utility function.

"I won't let the world be destroyed, because then rationality can't influence the future" is an attempt to avoid weighing your love of rationality against anything else.

Think about it. Is it really that rationality isn't in control any more that bugs you, not everyone dying, or the astronomical number of worthwhile lives that will never be lived?

If humanity dies to a paperclip maximizer, which goes on to spread copies of itself through the universe to oversee paperclip production, each of those copies being rational beyond what any human can achieve, is that okay with you?

Comment author: Ilverin 26 February 2014 04:54:49PM 2 points

Thank you. I initially wrote my function as one of many possible "lower bounds" on how bad things could possibly get before debating dishonestly becomes necessary. Later, I mistakenly thought that it worked fine as a general theory, not just a lower bound.

Thank you for helping me think more clearly.

Comment author: Ilverin 24 February 2014 05:25:53PM *  -2 points

"How dire [do] the real world consequences have to be before it's worthwhile debating dishonestly"?

~~My brief answer is:~~

One lower bound is:

If the amount that rationality affects humanity and the universe is decreasing over the long term. (Note that if humanity is destroyed, the amount that rationality affects the universe probably decreases).

~~This is also my answer to the question "what is winning for the rationalist community?"~~

~~Rationality is winning if, over the long term, rationality increasingly affects humanity and the universe.~~

Comment author: Ilverin 13 December 2013 02:11:52PM 7 points

If the author could include a hyperlink to Richard Wiseman when he is first mentioned, it might help readers realize that you are describing actual research. (I failed to realize this for about half of the article.)

Comment author: Ilverin 24 June 2013 08:30:12PM *  0 points

I wonder if there's a chance of the program that always cooperates winning or tying.

If all the other programs are extremely well-written, they will all cooperate with the program that always cooperates (or else they aren't extremely well-written, or they are violating the rules by attempting to trick other programs).
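A minimal sketch of the setup being discussed. The payoff values and the convention of passing each bot the other's source code as a string are assumptions for illustration; the actual tournament rules may differ.

```python
# Standard prisoner's-dilemma payoffs (first element is the first player's score);
# the exact values here are illustrative assumptions.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}


def always_cooperate(opponent_source):
    """Ignores the opponent's source code and cooperates unconditionally."""
    return "C"


def play(bot_a, src_a, bot_b, src_b):
    """One round: each bot inspects the other's source, then both move at once."""
    return PAYOFF[(bot_a(src_b), bot_b(src_a))]
```

If every other entry is well-written enough to recognize an unconditional cooperator and cooperate back, `always_cooperate` scores the mutual-cooperation payoff in every match.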
