Comment author: KatjaGrace 30 September 2014 01:09:01AM *  2 points [-]

Who are you? Would you like to introduce yourself to the rest of us? Perhaps tell us about what brings you here, or what interests you.

Comment author: kgalias 30 September 2014 10:10:39PM 3 points [-]

Hello! My name is Christopher Galias and I'm currently studying mathematics in Warsaw.

I figured that using a reading group would be helpful in combating procrastination. Thank you for doing this.

Comment author: KatjaGrace 30 September 2014 12:40:30PM 3 points [-]

‘We can also say, with greater confidence than for the AI path, that the emulation path will not succeed in the near future (within the next fifteen years, say) because we know that several challenging precursor technologies have not yet been developed. By contrast, it seems likely that somebody could in principle sit down and code a seed AI on an ordinary present-day personal computer; and it is conceivable - though unlikely - that somebody somewhere will get the right insight for how to do this in the near future.’ - Bostrom (p36)

Why is it more plausible that a person can sit down and invent a human-level artificial intelligence than that they can sit down and invent the technical means to produce brain emulations?

Comment author: kgalias 30 September 2014 04:41:21PM 2 points [-]

This is the part of this section I find least convincing.

Comment author: KatjaGrace 28 September 2014 06:28:32PM *  3 points [-]

To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?

Some other technological topics that hadn't happened in real life when people became concerned about them:

  • Nuclear weapons had The World Set Free, though I'm not sure how well known it was (it may have been seen as frivolous by most at first - I'm not sure, but by the time there were serious projects to build them, I think not)
  • Extreme effects from climate change, e.g. massive sea level rise, freezing of Northern Europe, no particular popular culture franchise (not very frivolous)
  • Recombinant DNA technology; the public's concern was somewhat motivated by The Andromeda Strain (not frivolous, I think).

Evidence seems mixed.

Comment author: kgalias 29 September 2014 11:11:53PM 1 point [-]

‘To be clear, you are saying that a thing will seem frivolous if it does have a relevant franchise, but hasn't happened in real life?’

Yes, that was my (tentative) claim.

We would need to know whether the examples were seen as frivolous after they came into being, but before the technology started being used.

Comment author: Vulture 26 September 2014 03:01:13PM 5 points [-]

It seems to me that if we're going to be formalizing the idea of the relative "moral importance" of various courses of action to different moral theories, we'll end up having to use something like utility functions. It's unfortunate, then, that deontological rules (which are pretty common) can't be specified with finite utility functions because of the timelessness issue (i.e., a deontologist who doesn't lie won't lie even if doing so would prevent them from being forced to tell ten lies in the future).

Comment author: kgalias 26 September 2014 03:39:35PM *  3 points [-]

Can't we use a hierarchy of ordinal numbers and a different ordinal sum (e.g. maybe something of Conway's) in our utility calculations?

That is, lying would be infinitely bad, but lying ten times would be infinitely worse.
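One way to make this ordinal idea concrete (a hypothetical sketch, not anything from the thread itself - the `disutility` function and its tuple encoding are my own illustration) is to put rule violations on a strictly higher tier than ordinary utility and compare lexicographically, which mimics sums like ω·(lies) + (ordinary loss):

```python
# Sketch: encode utilities lexicographically as (lies_told, ordinary_loss),
# with lower being better. Python compares tuples elementwise, so the
# first component dominates the second entirely - any number of lies
# outweighs any finite ordinary loss, yet one lie still beats ten.

def disutility(lies: int, ordinary_loss: float) -> tuple:
    """Return a lexicographic disutility; lower compares as better."""
    return (lies, ordinary_loss)

# Telling one lie is worse than any finite ordinary loss...
assert disutility(1, 0.0) > disutility(0, 1_000_000.0)
# ...but still better than telling ten lies later.
assert disutility(1, 0.0) < disutility(10, 0.0)
# Within the same number of lies, ordinary utility breaks the tie.
assert disutility(0, 1.0) < disutility(0, 2.0)
```

This captures "lying is infinitely bad, but lying ten times is infinitely worse" with finite machinery, though it only handles one tier of rules; a full hierarchy of ordinals would need longer tuples or a genuine ordinal-arithmetic library.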

Comment author: KatjaGrace 25 September 2014 09:19:30PM 1 point [-]

War is taken fairly seriously in reporting, though there are a wide variety of war-related movies in different styles.

Comment author: kgalias 26 September 2014 01:14:13PM 3 points [-]

OK, but war happens in real life. For most people, the only time they hear of AI is in Terminator-like movies.

I'd rather compare it to some other technological topic, but which doesn't have a relevant franchise in popular culture.

Comment author: KatjaGrace 23 September 2014 01:16:21AM 2 points [-]

Was there anything in particular in this week's reading that you would like to learn more about, or think more about?

Comment author: kgalias 23 September 2014 06:52:43PM *  2 points [-]

As a possible failure of rationality (curiosity?) on my part, this week's topic doesn't really seem that interesting.

Comment author: KatjaGrace 23 September 2014 01:02:48AM 1 point [-]

I think an important fact for understanding the landscape of opinions on AI is that AI is often taken as a frivolous topic, much like aliens or mind control.

Two questions:

1) Why is this?

2) How should we take it as evidence? For instance, if a certain topic doesn't feel serious, how likely is it to really be low value? Under what circumstances should I ignore the feeling that something is silly?

Comment author: kgalias 23 September 2014 06:51:29PM 1 point [-]

What topic are you comparing it with?

When you specify that, I think the relevant question is: does the topic have an equivalent of a Terminator franchise?

Comment author: KatjaGrace 22 September 2014 03:12:04AM 1 point [-]

Apologies; I didn't mean to imply that the economics related arguments here were central to Bostrom's larger argument (he explicitly says they are not) - merely to lay them out, for what they are worth.

Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.

Comment author: kgalias 22 September 2014 04:37:54PM 1 point [-]

No need to apologize - thank you for your summary and questions.

‘Though it may not be central to Bostrom's case for AI risk, I do think economics is a good source of evidence about these things, and economic history is good to be familiar with for assessing such arguments.’

No disagreement here.

Comment author: gallabytes 16 September 2014 09:07:54PM 1 point [-]

Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence.

That said, I'm very interested in any data that points to other causes behind the Flynn effect, so if you have any to post, don't hesitate.

Comment author: kgalias 16 September 2014 11:27:20PM 2 points [-]

I'm just trying to make sure I understand - I remember being confused about the Flynn effect and about what Katja asked above.

How does the Flynn effect affect our belief in the hypothesis of accumulation?

Comment author: gallabytes 16 September 2014 01:28:04AM 4 points [-]

I would bet heavily on the accumulation. National average IQ has been going up by about 3 points per decade for quite a few decades, so there have definitely been times when Koko's score might have been above average. Now, I'm more inclined to say that this doesn't mean great things for the IQ test overall, but I put enough trust in it to say that it's not differences in intelligence that prevented the gorillas from reaching the prominence of humans. It might have slowed them down, but given this data it shouldn't have kept them pre-Stone-Age.

Given that the most unique aspect of humans relative to other species seems to be the use of language to pass down knowledge, I don't know what else it really could be. What other major things do we have going for us that other animals don't?

Comment author: kgalias 16 September 2014 09:44:14AM 2 points [-]

‘It is possible, then, that exposure to complex visual media has produced genuine increases in a significant form of intelligence. This hypothetical form of intelligence might be called "visual analysis." Tests such as Raven's may show the largest Flynn gains because they measure visual analysis rather directly; tests of learned content may show the smallest gains because they do not measure visual analysis at all.’

Do you think this is a sensible view?
