Comment author: IlyaShpitser 18 December 2015 08:33:26PM *  3 points [-]

Is CFAR going to market themselves like this?

[at the workshop]:

"Look to the left of you, now to the right of you, now in 12 other directions. Only one of you will have a strong positive effect from this workshop."

Comment author: Academian 19 December 2015 04:09:51AM *  9 points [-]

I would expect not for a paid workshop! Unlike CFAR's core workshops, which are highly polished and get median 9/10 and 10/10 "are you glad you came" ratings, MSFP

  • was free and experimental,

  • produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and

  • produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.

Comment author: Academian 15 September 2014 12:07:42AM *  5 points [-]

1) Logical depth seems super cool to me, and is perhaps the best way I've seen for quantifying "interestingness" without mistakenly equating it with "unlikeliness" or "incompressibility".

2) Despite this, Manfred's brain-encoding-halting-times example illustrates a way a D(u/h) / D(u)-optimized future could be terrible... do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be penalized?

I think it would be easy to rationalize/over-fit our intuitions about this formula to convince ourselves that it matches our intuitions about what is a good future. More realistically, I suspect that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u).

Comment author: ChristianKl 16 June 2014 09:11:29AM 13 points [-]

I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea.

In which year did you do the asking?

Comment author: Academian 29 June 2014 04:58:36PM 9 points [-]

Great question! It was in the winter of 2013, about a year and a half ago.

Comment author: Pablo_Stafforini 09 May 2014 05:06:53PM *  1 point [-]

So I found this paper by Gelman, King, and Boscodarin (1998)

The link is dead. Here's the paper.

Note that the last name of the third author is Boscardin, not Boscodarin.

Comment author: Academian 02 June 2014 01:19:41AM 0 points [-]

Thanks, fixed!

Comment author: MrMind 19 February 2013 10:40:11AM 0 points [-]

Let's clear things up a little: you cannot apply the category of "quantum random" to an actual coin flip, because for an object to be truly quantum-random it must be in a superposition of at least two different pure states, a situation that has yet to be achieved with a coin at room temperature (and will remain so for a very long time). So let's talk about classical randomness from a Bayesian point of view: randomness is when you have no prior information that correlates with the outcome of an event. That's the case with the coin flip (and also with the quantum case, according to the many-worlds interpretation).
Since which face lands up depends not only on thumb movement but also on the exact starting position and the movement of air molecules, it's surely not possible for you to know all this information precisely enough at the outset to deduce which side will land up. In this situation, your "throw the coin" motor impulse and the coin's landing are uncorrelated, and so the coin flip is random (from your perspective).
But the degree to which the coin depends on factors you don't control is quite low: with enough practice, you can control the movement of your thumb so that the coin lands the side you want, say, 9 times out of 10. In that case you have formed a better model of the coin traveling through the air and learned to control your thumb more precisely; the correlation with your motor cortex is much higher, and the coin flip is of course no longer random.

Comment author: Academian 19 February 2013 11:11:00AM *  2 points [-]

you cannot apply the category of "quantum random" to an actual coin flip, because for an object to be truly quantum-random it must be in a superposition of at least two different pure states, a situation that has yet to be achieved with a coin at room temperature (and will remain so for a very long time).

Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn't quite make sense. If I am entangled with a state that you are not entangled with, it may "be superposed" from your perspective but not from either of my various perspectives.

For example: a projection of the universe can be in state

(you observe NULL)⊗(I observe UP)⊗(photon is spin UP) + (you observe NULL)⊗(I observe DOWN)⊗(photon is spin DOWN) = (you observe NULL)⊗((I observe UP)⊗(photon is spin UP) + (I observe DOWN)⊗(photon is spin DOWN))

The fact that your state factors out means you are disentangled from the joint state of me and the particle, and so together the particle and I are "in a superimposed state" from "your perspective". However, my state does not factor out here; there are (at least) two of me, each observing a different outcome and not a superimposed photon.
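As a quick numerical sanity check of the factorization above (a sketch in Python, using hypothetical 2-dimensional stand-ins for the three subsystems: NULL for "you", and UP/DOWN doubling as my observation states and the photon's spin states):

```python
import numpy as np

# Basis vectors standing in for the states named in the comment (illustrative).
NULL = np.array([1.0, 0.0])  # "you observe NULL"
UP, DOWN = np.eye(2)         # my two observation states / the photon's two spins

def kron3(a, b, c):
    """Tensor product of three subsystem states."""
    return np.kron(np.kron(a, b), c)

# Left-hand side: sum of the two product terms, normalized.
lhs = (kron3(NULL, UP, UP) + kron3(NULL, DOWN, DOWN)) / np.sqrt(2)

# Right-hand side: "you" factored out of the entangled (me, photon) pair.
me_photon = (np.kron(UP, UP) + np.kron(DOWN, DOWN)) / np.sqrt(2)
rhs = np.kron(NULL, me_photon)

print(np.allclose(lhs, rhs))  # True: "your" state factors out of the sum
```

The "you" factor separates cleanly, while the (me, photon) part cannot be written as a product of two single-system states — which is exactly the asymmetry the comment describes.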

Anyway, having cleared that up, I'm not convinced that there is enough mutual information connecting my frontal lobe and the coin for the state of the coin to be entangled with me (i.e. not "in a superposed state") before I observe it. I realize this is testable, e.g., if the state amplitudes of the coin can be forced to have complex arguments differing in a predictable way so as to produce an expected and measurable interference pattern. This is what we have failed to produce at a macroscopic level, and it is this failure that you are talking about when you say

a situation that has yet to be achieved with a coin at room temperature (and will remain so for a very long time).

I do not believe I have been shown a convincing empirical test ruling out the possibility that the coin is, from my brain's perspective, in a superposition of vastly many states with amplitudes whose complex arguments are difficult to predict or control well enough to produce clear interference patterns, half of which are "heads" states and half of which are "tails" states. But I am very ready to be corrected on this, so if anyone can help me out, please do!

Comment author: shokwave 25 December 2012 10:04:46AM 0 points [-]

I find it strange that you feel evolutionary causation is adequate to justify something, but I guess I won't question that.

Not justify: instead, explain. I understood that previously, handoflixue felt that status was dirty, but that through understanding it, handoflixue has come to feel that it's just part of human nature (for most people, as the post points out).

Comment author: Academian 12 January 2013 08:07:55PM 0 points [-]

Not justify: instead, explain.

I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.

Comment author: Academian 11 January 2013 11:40:57PM *  3 points [-]

If you're curious about someone else's emotions or perspective, first, remember that there are two ways to encode knowledge of how someone else feels: by having a description of their feelings, or by empathizing and actually feeling them yourself. It is more costly --- in terms of emotional energy --- to empathize with someone, but if you care enough about them to afford them that cost, I think it's the way to go. You can ask them to help you understand how they feel, or help you to see things the way they do. If you succeed, they'll appreciate having someone who can share their perspective.

In response to Macro, not Micro
Comment author: Academian 08 January 2013 07:21:54PM *  2 points [-]

My summary of this idea has been that life is a non-convex optimization problem. Hill-climbing will only get you to the top of the hill you're on; getting to other hills requires periodic re-initialization. Existing non-convex optimization techniques are often heuristic rather than provably optimal, and the ones that do carry guarantees tend to be slow.
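A minimal sketch of the analogy (illustrative Python, with a made-up two-hill objective): plain hill-climbing gets stuck on whichever hill it starts on, while re-initializing from random starting points lets the search find the taller one.

```python
import math
import random

def hill_climb(f, x, step=0.5, iters=500):
    """Greedy local search: accept a random nearby move only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def with_restarts(f, n_starts=20, lo=-10.0, hi=10.0):
    """Periodic re-initialization: climb from many random starts, keep the best."""
    best = hill_climb(f, random.uniform(lo, hi))
    for _ in range(n_starts - 1):
        x = hill_climb(f, random.uniform(lo, hi))
        if f(x) > f(best):
            best = x
    return best

# Two hills: a short one near x = -5 and a taller one near x = +5.
f = lambda x: math.exp(-(x - 5) ** 2) + 0.5 * math.exp(-(x + 5) ** 2)

random.seed(0)
print(round(with_restarts(f), 1))  # lands near the taller hill at x = 5
```

A single `hill_climb` started on the short hill will never cross the valley, since greedy search only accepts improving moves; the restarts are what buy access to the other basin.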

Comment author: Qiaochu_Yuan 06 January 2013 12:55:43AM *  10 points [-]

skipping over the mechanisms for filtering good ideas from bad leaves me confused about the point of the post.

The point of the post is that most people, in most domains, should not trust that they are good at filtering good ideas from bad.

Comment author: Academian 07 January 2013 06:46:50PM *  2 points [-]

And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them without regard for what filters that argument has been through, or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And certainly not to have people ignore their instincts; e.g. CFAR constantly recommends Thinking Fast and Slow by Kahneman, and teaches exercises to extract more information from emotional and physical senses.

Comment author: Wei_Dai 07 January 2013 09:12:10AM 3 points [-]

What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI? That seems reasonable, but it conflicts with the other axioms. For example, suppose there are two agents: A gets 1 util if 90% of the universe is converted into paperclips, 0 utils otherwise, and B gets 1 util if 90% of the universe is converted into staples, 0 utils otherwise. Without an FAI, they'll probably end up fighting each other for control of the universe, and let's say each has a 30% chance of success. An FAI that doesn't make either of them worse off has to prefer a 50/50 lottery of the universe turning into either paperclips or staples to a certain outcome of either, but that violates VNM rationality.

And things get really confusing when we also consider issues of logical uncertainty and dynamical consistency.
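The arithmetic in the example above can be spelled out (a Python sketch using the comment's hypothetical numbers, with a 30% chance of winning the fight absent an FAI):

```python
P_WIN = 0.3  # each agent's chance of seizing the universe with no FAI

baseline = (P_WIN, P_WIN)  # (E[u_A], E[u_B]) without an FAI

# Expected utilities (to A, to B) under each candidate FAI policy.
policies = {
    "certain paperclips": (1.0, 0.0),
    "certain staples":    (0.0, 1.0),
    "50/50 lottery":      (0.5, 0.5),
}

def nobody_worse_off(payoffs):
    """Does this policy leave both agents at least as well off as no FAI?"""
    ua, ub = payoffs
    return ua >= baseline[0] and ub >= baseline[1]

for name, payoffs in policies.items():
    print(name, nobody_worse_off(payoffs))
# Only the 50/50 lottery leaves neither agent worse off. But a VNM-rational
# agent values a lottery at the average of the pure outcomes' utilities, so
# it cannot strictly prefer the mixture to both of the pure outcomes.
```

Each certain outcome drops one agent from 0.3 expected utils to 0, so only the mixture satisfies the "nobody worse off" constraint, which is exactly the VNM conflict the comment points to.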

Comment author: Academian 07 January 2013 06:32:04PM *  3 points [-]

What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI?

I don't think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I'm okay with some agents being worse off with the FAI, if that's the kind of agents they are.

Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.
