
Comment author: Sherincall 25 August 2014 05:54:08PM 4 points [-]

Are the European meetups in English or the native language? I'm moving to Germany soon, and would love to attend some closer meetups (Germany/Netherlands/Belgium), iff they are English by default.

Comment author: Kaj_Sotala 25 August 2014 06:06:59PM 4 points [-]

The policy in the Finnish meetups has been "Finnish if everyone is Finnish, English if there are foreigners present". I would expect the meetups in the other countries to be similar.

Comment author: Kaj_Sotala 25 August 2014 04:03:37PM 4 points [-]

So I'm apparently a fictional spaceship now [1 2]. Also someone who's been instructed to keep an eye on it.

Comment author: atorm 22 August 2014 03:02:27AM 2 points [-]

This argument seems like something I would need to think long and hard about, which I see as a good thing: it seems rare to me that non-trivial things are simple and apparent. I don't see any glaring misinterpretation of natural selection. I would be interested in working on it in a "dialogue intellectually and hammer out more complete and concrete ideas" sense. I'm answering this quickly in a tired state because I'm not on LW as much as I used to be and I don't want to forget.

I'm getting a PhD in a biological field that is not Evolution. Both this and my undergraduate education covered evolution because it underlies all the biological fields. I have one publication out that discusses evolution but is not actually specifically relevant to this topic. I'll happily share more detail in private communications if you can't find an explicitly evolutionary biologist.

Comment author: Kaj_Sotala 23 August 2014 05:04:28PM 1 point [-]

That sounds good to me. :-)

E-mail me at xuenay@gmail.com and we can talk about things in more detail?

Comment author: Kaj_Sotala 18 August 2014 05:37:17PM 6 points [-]

I suspect that the closest thing to a "general intelligence module" might be a "skill acquisition module" - e.g. not something that would be generally intelligent by itself, but rather something that could generate more specialized modules that were optimized for some specific domain.

E.g. humans are capable of acquiring a wide variety of very different and specialized skills with sufficient practice and instruction, but we're probably constrained by whether the domain can be easily mapped into one of our native architectures. My hunch is that if you could give a baseline human access to a few more domains that they could natively think in, and also enable them to map new concepts into those domains (and vice versa), they could easily come across as superintelligent by coming up with modes of thought that are completely unfamiliar to us and applying them to problems that aren't easily handled with normal modes of thought. (On the other hand, they would have to come up with those mappings by themselves, since there would be nobody around to give them instructions communicated in terms of those domains.)

Comment author: Kaj_Sotala 18 August 2014 05:25:23PM 3 points [-]

Related paper: How Intelligible is Intelligence?

Abstract: If human-level AI is eventually created, it may have unprecedented positive or negative consequences for humanity. It is therefore worth constructing the best possible forecasts of policy-relevant aspects of AI development trajectories—even though, at this early stage, the unknowns must remain very large.

We propose that one factor that can usefully constrain models of AI development is the “intelligibility of intelligence”—the extent to which efficient algorithms for general intelligence follow from simple general principles, as in physics, as opposed to being necessarily a conglomerate of special case solutions. Specifically, we argue that estimates of the “intelligibility of intelligence” bear on:

  • Whether human-level AI will come about through a conceptual breakthrough, rather than through either the gradual addition of hardware, or the gradual accumulation of special-case hacks;

  • Whether the invention of human-level AI will, therefore, come without much warning;

  • Whether, if AI progress comes through neuroscience, neuroscientific knowledge will enable brain-inspired human-level intelligences (as researchers “see why the brain works”) before it enables whole brain emulation;

  • Whether superhuman AIs, once created, will have a large advantage over humans in designing still more powerful AI algorithms;

  • Whether AI intelligence growth may therefore be rather sudden past the human level; and

  • Whether it may be humanly possible to understand intelligence well enough, and to design powerful AI architectures that are sufficiently transparent, to create demonstrably safe AIs far above the human level.

The intelligibility of intelligence thus provides a means of constraining long-term AI forecasting by suggesting relationships between several unknowns in AI development trajectories. Also, we can improve our estimates of the intelligibility of intelligence, e.g. by examining the evolution of animal intelligence, and the track record of AI research to date.

Comment author: Kaj_Sotala 18 August 2014 12:14:08PM 9 points [-]

Malcolm Ocean defines a thought hook as:

...a module in your brain that gets activated by you thinking/saying/hearing a certain phrase or structure of sentence. [...]

So one example (that will be familiar to some CFAR alumni at least) is when you encounter the word “later” and your brain instantly responds “THAT’S NOT A TIME.” Val, a CFAR instructor, while teaching a course on the planning fallacy and contingency planning, has described how he’s very averse to the word “later”. Why? Because it’s dangerous. It looks like a time but doesn’t act like a time. You can schedule something for “later” but that won’t actually cause the thing to happen because later never comes, even though the word works grammatically and type-sensitively (“schedule for X” requires that X refers to some point(s) in time, which “later” does).

I seem to have managed to install in myself the thought hook "if something feels uncomfortable, but doing it involves no real risk, then the discomfort is a reason to do it". This has been a useful way to get myself to do comfort zone expansion. So far, it has caused me to do things like 1) walk a route that I've sometimes avoided because I sometimes run into a neighbor coming the opposite way and I feel social anxiety over not knowing the right distance for making eye contact and saying hi 2) go into a store selling women's clothing and shop for a new dress 3) wear dresses and cat ears in public 4) make food when I'm at home and feeling sufficiently low on energy that I'd rather just go to a nearby fast food place than prepare anything myself.
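The type analogy in the quoted passage ("schedule for X" requiring that X refer to a point in time) can be sketched in code. `schedule_for` is an illustrative name, not a real scheduling API:

```python
# A sketch of the "'later' is not a time" point: a scheduler whose
# argument must be an actual point in time. "later" is grammatically
# valid as an argument but fails the type check, so nothing gets scheduled.
from datetime import datetime

def schedule_for(when):
    """Accept only a concrete point in time, never a vague word."""
    if not isinstance(when, datetime):
        raise TypeError(f"{when!r} is not a time")
    return f"scheduled for {when.isoformat()}"

schedule_for(datetime(2014, 8, 25, 18, 0))  # works: a concrete time
# schedule_for("later")  # would raise TypeError: 'later' is not a time
```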

Comment author: Viliam_Bur 18 August 2014 06:52:08AM *  25 points [-]

Thanks for the trust. I can imagine making unpleasant decisions based on data. I am not sure how to get that data, but I guess I would speak with the developers and ask them to make me some database reports. Or Kaj could explain to me what he did.

I accept the nomination. I am okay with doing this either alone or with someone else -- would slightly prefer to have a second opinion, but could act without it, too.

Comment author: Kaj_Sotala 18 August 2014 11:59:41AM 4 points [-]

but I guess I would speak with the developers and ask them to make me some database reports. Or Kaj could explain to me what he did.

I've exchanged e-mails with jackk and asked him to pull out voting data for me. In other words, spoken with developers and asked them to make some database reports. :-)

Comment author: paper-machine 17 August 2014 05:32:30PM 3 points [-]

I've already started visiting LW less often because I dread having new investigation requests to deal with.

Then commit to never working on another investigation, no matter what the request.

Comment author: Kaj_Sotala 17 August 2014 07:06:04PM 15 points [-]

Yeah, this post is kinda my way of doing that.

Comment author: Stefan_Schubert 17 August 2014 05:18:18PM 28 points [-]

Thanks for your work! Much appreciated.

Comment author: Kaj_Sotala 17 August 2014 06:00:53PM 17 points [-]

Thanks!

Comment author: shminux 17 August 2014 05:54:08PM 6 points [-]

Well, it is clear that modding a forum is not for you. Thanks for trying, however ineffectually (Eugene never stopped downvoting), to make this place better. I appreciate you stepping in when no other mod would, and stepping down when many others in this situation would keep making mistakes. Administration and content creation are completely different skill sets, and I hope this negative experience does not deter you from posting more insightful content here in the future.

Thanks again!

Comment author: Kaj_Sotala 17 August 2014 06:00:37PM 18 points [-]

Thanks!

(Eugene never stopped downvoting)

He should have stopped: banned users have been blocked from voting as of a patch deployed on July 24th. (I also verified that the patch works by creating a new account, banning it, and trying to vote with it.)
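The verification procedure described in the parenthetical amounts to a simple permission test. The `Account` class and `can_vote` function below are hypothetical stand-ins, not the actual site codebase:

```python
# Minimal sketch of the "banned users cannot vote" rule, mirroring the
# manual test described above: create an account, ban it, check that
# voting permission is now denied.

class Account:
    def __init__(self, name, banned=False):
        self.name = name
        self.banned = banned

def can_vote(account):
    """A banned account may not cast votes."""
    return not account.banned

fresh = Account("test-account")
assert can_vote(fresh)       # a new account can vote
fresh.banned = True          # ban it, as in the manual test
assert not can_vote(fresh)   # voting is now blocked
```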
