Comment author: torekp 18 March 2016 10:44:22PM 4 points [-]

particularly that they measure the variance of a factor rather than its absolute importance (and hence you get results like variation in nutrition being almost invisible as an explanation for variation in height)

Excellent point, which deserves some elaboration. Suppose that very high doses of vitamin K dramatically increase height, but that almost nobody is experimenting with such doses. Then a heritability study will find that environment contributes little to the variation in height - but that's usually not what we want to know. What we want to know is more likely something like, what steps can I take to have tall children?
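A toy simulation makes the distinction concrete (all numbers here are hypothetical, chosen only to illustrate the point): even if vitamin K has a large causal effect on height, a variance decomposition attributes almost nothing to it when dosage barely varies across the population.

```python
import random
import statistics

random.seed(0)
N = 10_000

# Hypothetical model: height (cm) = 150 + genetic effect + 2 cm per unit of vitamin K dose.
# The causal effect of dose is large, but almost everyone takes nearly the same dose,
# so dose contributes almost nothing to population variance.
genes = [random.gauss(0, 7) for _ in range(N)]      # wide genetic variation
doses = [random.gauss(1, 0.05) for _ in range(N)]   # almost no variation in dose
heights = [150 + g + 2 * d for g, d in zip(genes, doses)]

var_total = statistics.variance(heights)
var_from_dose = statistics.variance([2 * d for d in doses])
share = var_from_dose / var_total
print(f"share of height variance explained by dose: {share:.4%}")
# A heritability-style decomposition attributes essentially all the variance to genes,
# even though raising everyone's dose would make everyone measurably taller.
```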

Comment author: [deleted] 05 March 2016 03:16:06AM *  -2 points [-]

Don't waste your time with the AI foom stuff. The common opinion among experts who work with actual AI technology is that any hard takeoff would have a timeline that is not short on human scales, at least with the technology that exists today.

You need only do a few back-of-the-envelope calculations using various AI techniques and AGI architectures to see that learning cycles are measured in hours or days, not milliseconds.
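For instance, a sketch along these lines (every number below is a hypothetical assumption for illustration, not a measurement of any real system):

```python
# Back-of-the-envelope sketch with purely hypothetical numbers: time for one
# full training pass (one "learning cycle") of a large neural network.
params = 1e9                   # assumed model size: 1 billion parameters
examples = 1e8                 # assumed dataset size: 100 million examples
flops_per_example = 6 * params # rough rule of thumb: ~6 FLOPs per parameter per example
hardware_flops = 1e13          # assumed sustained throughput: 10 TFLOP/s
seconds = examples * flops_per_example / hardware_flops
print(f"one training pass ≈ {seconds / 3600:.0f} hours")
```

Under these assumptions a single pass takes on the order of hours, which is the scale the comment is gesturing at.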

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: torekp 08 March 2016 02:15:16AM 1 point [-]

I suggest a different reason not to waste your time with the foom debate: even a non-fooming process may be unstoppable. Consider the institutions of the state and the corporation. Each was a long time coming. Each was hotly contested and had plenty of opponents, who failed to stop it or radically change it. Each changed human life in ways that are not obviously and uniformly for the better.

Comment author: Houshalter 07 March 2016 02:28:02AM 7 points [-]

I don't know if I'm saying anything that hasn't been said before elsewhere, but looking at the massive difference in intelligence between humans seems like a strong argument for FOOM to me. Humans are basically all the same. We have 99.99% the same DNA, the same brain structure, size, etc. And yet some humans have exceptional abilities.

I was just reading about Paul Erdos. He could hold 3 conversations at the same time, with mathematicians on highly technical subjects. He was constantly having insights into mathematical research left and right. He produced more papers than any other mathematician.

I don't think it's a matter of culture. I don't think an average person could "learn" to have a higher IQ, let alone be Erdos. And yet he very likely had the same brain structure as everyone else. Who knows what would be possible if you were allowed to move far outside the space of humans.


But this isn't the (main) argument Yudkowsky uses. He relies on this intuition that I don't think was explicitly stated or argued strongly enough. This one intuition is central to all the points about recursive self improvement.

It's that humans kind of suck. At least at engineering and solving complicated technical problems. We didn't evolve to be good at them. There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases, of course, but it shows we are far from perfect. Even in the areas where we do well, we have trouble keeping track of many different things in our heads. Much of the time we are very bad at prediction and pattern matching compared to even small machine learning algorithms.

I think this intuition that "humans kind of suck" and "there are a lot of places we could make big improvements" is at the core of the FOOM debate and most of these AI risk debates. If you really believe this, then it seems almost obvious that AI will very rapidly become much smarter than humans. People who don't share this intuition seem to believe that AI progress will be very slow, perhaps with steep diminishing returns.

Comment author: torekp 08 March 2016 02:05:39AM *  1 point [-]

There are many cases where simple genetic algorithms outperform humans. Humans outperform GAs in other cases of course, but it shows we are far from perfect.

To riff on your theme a little bit, maybe one area where genetic algorithms (or other comparably "simplistic" approaches) could shine is in the design of computer algorithms, or some important features thereof.
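For concreteness, here is a minimal genetic-algorithm sketch on a toy problem (OneMax: maximize the number of 1-bits in a string); the parameters are arbitrary illustrative choices, not tuned values:

```python
import random

random.seed(0)

# Minimal genetic algorithm on the OneMax toy problem.
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 40, 30, 60, 0.02

def fitness(genome):
    return sum(genome)  # count of 1-bits

def mutate(genome):
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)  # single-point crossover
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP_SIZE // 2]  # truncation selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", GENOME_LEN)
```

The same loop structure, with a fitness function that scores candidate programs or algorithm parameters, is the kind of thing the comment has in mind.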

Comment author: gjm 09 February 2016 03:38:24PM 12 points [-]

I simply want to convince you to entertain the possibility that people might profess to believe in God for reasons other than indoctrination or stupidity.

Why do you think any convincing is necessary?

arguing against religious beliefs on logical grounds [...] spirituality is not about logic. It's about subjective experiences [...]

Religious beliefs and subjective experiences are quite separate things, at least in principle. If someone simply says "I went to church and had this amazing experience", I don't think even the strawmanniest Spockiest stereotypical rationalist would have much quarrel with that. But here in the real world, actual religious people tend not just to say "I had this amazing experience" but to go further and say "I believe in God, the Father Almighty, creator of all things seen and unseen, and in one Lord Jesus Christ", or "Hear, O Israel: the Lord is our God, the Lord is one", or whatever.

(They not infrequently go further still and say "you must do X and not do Y, because God says so", or attempt to get laws made requiring X and forbidding Y, or in very extreme cases blow things up in an attempt to intimidate people into doing X rather than Y, and that sort of behaviour tends to be what provokes the louder sort of unbeliever, rather than mere professions of belief. But let's ignore that for now.)

So, consider someone who has these amazing experiences and reacts to them by (not merely appreciating the experiences, but) declaring that those experiences give him special insight into the nature of reality, and professing belief in a particular religion's doctrines. There are (crudely) three possibilities.

  • Perhaps he means what he says at something like face value: he actually intends to make claims about how the actual world actually is.
    • In this case, arguing against those claims isn't a matter of misunderstanding What Spirituality Is About; our hypothetical religious person really is making (alleged) factual claims which may be right or wrong, supported or undermined by the evidence, etc., and argument is an appropriate response (at least in some contexts).
  • Or perhaps he doesn't mean to make actual factual claims; when he says "I believe in one holy, catholic and apostolic church" he really means "I had an experience where I felt like I was one with the universe"; when he says "Muhammad is the messenger of God" he really means "something ineffably indescribable happened to me".
    • In this case, indeed arguing against the claims he makes may be a mistake. But it might be perfectly reasonable to argue against using those claims to express those experiences. Because, really, take a look at typical religious professions of faith, theological writings, etc.; do they look to you like good ways of expressing ineffable overwhelming religious experiences? They don't to me.
  • Or, finally, perhaps he actually doesn't make those claims at all; or, at most, he makes them when required to make them by some ritual he participates in, and otherwise refrains.
    • In this case, finally, I do agree: the usual sort of religious argument may be entirely irrelevant to this person. But it seems to me that (1) most people who profess religious belief are not like this person, and (2) most people who engage in argument against religious beliefs are, most of the time, not doing so in discussion with someone like this.

In this sense, God is everywhere.

In this sense, we are all Spartacus. In this sense, the Singularity is here. In this sense, I am the walrus.

Comment author: torekp 07 March 2016 10:52:35PM 2 points [-]

Religious beliefs and subjective experiences are quite separate things

I would like to take this opportunity to note that "religious beliefs" is not redundant; that belief is not even a particularly important part of many religions. Not that you said anything to the contrary. But to a lot of readers of this site, Bible-thumping Christians, to whom belief is paramount, are over-represented in the mental prototype of "religion".

Comment author: lisper 05 March 2016 06:03:40PM 1 point [-]

(DO:A) raises the probability of B.

Yes, but there's still some terminological sleight-of-hand going on here. It is only fair to say that a future A affected a past B if P(B) is well defined without reference to A. In this case it's not. Because B is defined in terms of correlations between measurements made at T1 (noon) and measurements made at T2 (evening) then B cannot be said to have actually happened until T2.

correlation is a two-way street

No, it's an n-squared-minus-one-way street. It appears to be a two-way street in one (very common) special case (two macroscopic systems mutually entangled with each other), but weak measurements are interesting precisely because they do not conform to the conditions of that special case. When you go beyond the conditions of the common special case you can't keep using the rhetoric and intuitions that apply only to the special case and hope to come up with the right answer.

Comment author: torekp 07 March 2016 10:44:53PM 0 points [-]

if P(B) is well defined without reference to A

You're right. Good point.

it's an n-squared-minus-one-way street

Don't you mean n-factorial? Anyway, ... hmm, I need to think about this more.

Comment author: lisper 02 March 2016 06:02:07PM -1 points [-]

So I read the paper, and it is kind of a cool experiment, but it does not show that "future choices can affect a past measurement's outcome." Explaining why would require a separate article (maybe time to re-open main!). But the TL;DR version is this: if you want to argue that A affects B then you have to show a causal relationship that runs from A to B. If you can do that, then you can always come up with some encoding that will allow you to transmit information from A to B. That's what "causal relationship" means. But that is (unsurprisingly) not what Aharonov et al. have done. They have merely shown correlations between A and B, and then argued on purely intuitive grounds that there must have been some causal relationship between A and B because "Bell's theorem forbids spin values to exist prior to the choice of the orientation measured." While this is true, it's misleading because it implies that spin values do exist after a strong measurement. But that is not true. There is no fundamental difference between a strong and a weak measurement. There is a smooth continuum between weak and strong measurements, and at no point during the transition from weak to strong does the spin value begin to "actually exist" (a.k.a. wavefunction collapse).

Comment author: torekp 05 March 2016 02:15:57PM *  0 points [-]

That's what "causal relationship" means.

I disagree. Following Pearl, I define "A causes B" to mean something like: (DO:A) raises the probability of B.
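A toy structural model (purely illustrative, not drawn from the paper under discussion) shows what this definition cashes out to: simulate the system under an intervention do(A) and check whether P(B) changes.

```python
import random

random.seed(0)
N = 100_000

def sample_b(do_a=None):
    # Hypothetical structural model: confounder C influences A; both A and C influence B.
    c = random.random() < 0.5
    if do_a is None:
        a = random.random() < (0.8 if c else 0.2)   # observational regime
    else:
        a = do_a                                    # intervention severs the C -> A arrow
    p_b = 0.3 + (0.4 if a else 0.0) + (0.1 if c else 0.0)
    return random.random() < p_b

p1 = sum(sample_b(do_a=True) for _ in range(N)) / N
p0 = sum(sample_b(do_a=False) for _ in range(N)) / N
print(f"P(B | do(A=1)) = {p1:.3f}")
print(f"P(B | do(A=0)) = {p0:.3f}")
# "A causes B" in Pearl's sense: setting A by intervention changes the probability of B.
```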

Bob's choice in the evening to make strong measurements along the beta-axis raises the probability that Alice's noon measurements along the beta-axis were the ones that showed the best correlation. It doesn't raise the probability of any individual measurement being up or down, but that's OK. Even on a many-worlds interpretation, where perhaps every digital up/down pattern happens in some "world" and the overall multi-world distribution is invariant, "probability" refers to what happens in our "world", so again that's OK.

Correlation can only be observed after the fact, in the evening, not at noon. So isn't this just a case of Bob affecting Bob-and-Alice's immediate future, when they go over the results? Why do I say Bob's choice affected Alice's results? Because correlation is a two-way street, and in this case there isn't much traffic in the forward direction: Alice's measurements only weakly affect Bob's results.

Comment author: torekp 02 March 2016 01:58:37AM *  0 points [-]

Thanks, this helped me fill in some gaps. In Ron Garret's piece that you linked above, a comment has a link to a very nice article by Aharonov et al titled Can a Future Choice Affect a Past Measurement's Outcome?. (Hint: yes.)

Comment author: Bound_up 18 February 2016 06:14:51PM *  1 point [-]

Alright, we'll be meeting 7 pm at Amer's at 611 Church St, Ann Arbor, MI 48104. They have deli sandwiches and frozen yogurt.

It's right off the east side of Central Campus, across from the Pizza House we've met at before.

I understand there's another Amer's off of South Campus. That's not the one.

See you all there :)

Comment author: torekp 20 February 2016 03:24:22AM 0 points [-]

Thanks for organizing! It was lots of fun.

Comment author: Bayeslisk 14 February 2016 06:06:27AM 0 points [-]

On the advice of a new friend, I think that I will be coming to this, but will need some help navigating, since I currently live in Chicago and have been to Ann Arbor exactly once in my life, 5 years ago.

Comment author: torekp 20 February 2016 03:23:48AM 0 points [-]

Hi Paul, I'm the other Paul (the p in torekp stands for Paul). Glad you made it.

Comment author: torekp 18 February 2016 11:16:21PM 1 point [-]

OK, still 7pm, right? See you.
