Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, August 7 - August 13, 2017

1 Post author: Thomas 07 August 2017 08:07AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (35)

Comment author: username2 10 August 2017 12:56:10PM 4 points [-]

How is development of the new LW platform/closed beta coming along? Does it look like it will actually get off the ground?

I realize username2 will not be welcome there, but I am very interested in signing up with a normal username when it launches, if there's anything to sign up for. I'm hoping all the action there has simply moved out of public view rather than subsided, as it appears from outside.

Comment author: username2 12 August 2017 02:30:45AM 0 points [-]

Why is anonymous posting not welcome?

Comment author: username2 14 August 2017 10:47:32AM 0 points [-]

Fellow username2s, I think that's the least of our worries. Recent comments there: 7 days ago; 19 days ago; 23 days ago; a month ago. Although to be fair, it's not really about the content at this stage.

Comment author: Daniel_Burfoot 08 August 2017 11:24:59PM *  3 points [-]

Theory of programming style incompatibility: it is possible for two or more engineers, each of whom is individually highly skilled, to be utterly incapable of working together productively. In fact, the problem of style incompatibility might actually increase with the skill level of the programmers.

This shouldn't be that surprising: Proust and Hemingway might both be gifted writers capable of producing beautiful novels, but a novel co-authored by the two of them would probably be terrible.

Comment author: Lumifer 09 August 2017 02:46:39PM 1 point [-]

That seems rather obvious to me.

Comment author: WalterL 09 August 2017 02:49:36AM 0 points [-]

Kind of...

Like, part of being 'highly skilled' as a programmer is being able to work with other people. I mean, I get what you are saying, but working with assholes is part of a dev's tool bag, or he hasn't been a dev very long.

Comment author: Screwtape 10 August 2017 05:35:12PM 0 points [-]

Is that really a programming skill though? Aren't most fields of human endeavor theoretically improved by being able to work with people, making it something of a generic skill? Alternatively, if cooperation is domain-specific enough to be a 'programming' skill, then it seems like some programmers are amazing even if they lack that skill.

Various novels have been written by two authors, but I wouldn't say the inability to co-write with an arbitrary partner makes one a terrible author. Good Omens was amazing, but I'm not sure that Pratchett and Stephen King hypothetically failing to work well together would make either of them a bad writer. This is less obvious in less clearly subjective fields, but I think it might still be true.

It's worth noting that "Gah, I can't work with that guy, I'm too highly skilled in my own amazing paradigm!" is more often a warning sign of a different problem than a correct diagnosis of this one.

Comment author: MrMind 07 August 2017 01:26:00PM *  1 point [-]

"Inscrutable", related to the meta-rationality sphere, is a word that gets used a lot these days. On the fun side, set theory has a perfectly scrutable definition of indescribability.
Very roughly: the trick is to divide your language into stages, so that stage n+1 is strictly more powerful than stage n. You can then say that a concept (a cardinal) k is n-indescribable if every n-sentence true in a world where k is true is also true in a world where a lower concept (a lower cardinal) is true. In this way, no true n-sentence can distinguish a world where k is true from a world where something less than k is true.
Then you can say that k is totally indescribable if the above property holds for every finite n.

Total indescribability is not even such a strong property, in the grand scheme of large cardinals.
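For the curious, here is my paraphrase of the usual textbook formalization (the "stages" above correspond to the levels of second-order sentences, written $\Pi^1_n$):

```latex
% \kappa is \Pi^1_n-indescribable if no \Pi^1_n sentence, even with an
% extra predicate A, can tell the world V_\kappa apart from every smaller V_\alpha:
\forall A \subseteq V_\kappa \;\; \forall \varphi \in \Pi^1_n :\quad
  (V_\kappa, \in, A) \models \varphi
  \;\Longrightarrow\;
  \exists \alpha < \kappa \;\; (V_\alpha, \in, A \cap V_\alpha) \models \varphi
% \kappa is totally indescribable if this holds for every finite n.
```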

Comment author: Thomas 07 August 2017 08:09:16AM *  1 point [-]

Here is this week's problem to think about.

Comment author: gjm 08 August 2017 02:46:16PM *  2 points [-]

I wrote a little program to attack this question for a smaller number of primes. The results don't encourage me to think that there's a "principled" answer in general. I ran it for (as it happens) the first 1002 primes, up to 7933. The visibility counts fluctuate wildly; it looks as if there may be a tendency for "typical" visibility counts to decrease, but the largest is 256 for the 943rd prime (7451) which not coincidentally has a large gap before it and smallish gaps immediately after.

It seems plausible that the winner with a billion primes might be the 981,765,348th prime, which is preceded by a gap of length 38 (the largest in the first billion primes), but I don't know and I wouldn't bet on it. With 1200 primes you might think the winner would be at position 1183, after the first ever gap of size 14 -- but in fact that gap is immediately followed by another of size 12, and the prime after that does better even though it can't see its next-but-one neighbour, and both are handily beaten by the 943rd prime which sees lots of others above as well as below.

It's still feeling to me as if any solution to this is going to involve more brute force than insight. Thomas, would you like to tell us whether you know of a solution that doesn't involve a lot of calculation? (Since presumably any solution will at least need to know something about all the first billion primes, maybe I need to be less vague. If the solution looked like "the winning prime is the prime p_i for which p_{i+1}-p_{i-1} is greatest" or something similarly simple, I would not consider it to be mostly brute force.)
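For concreteness, the brute-force check gjm describes can be sketched like this (my own reconstruction, not his program; the function names are mine). Tower i has height p_i at x = i, and, following the clarification elsewhere in the thread, collinear tops count as visible:

```python
def first_primes(n):
    """Return the first n primes by simple trial division (fine for small n)."""
    primes = []
    cand = 2
    while len(primes) < n:
        if all(cand % p != 0 for p in primes if p * p <= cand):
            primes.append(cand)
        cand += 1
    return primes

def visibility_counts(heights):
    """counts[i] = number of other tower tops visible from the top of tower i.

    Tower j is visible from tower i iff no intermediate tower top lies
    strictly above the sightline between their tops; collinear tops are
    treated as visible.
    """
    n = len(heights)
    counts = [0] * n
    for i in range(n):
        steepest = float('-inf')  # steepest slope seen so far, looking right
        for j in range(i + 1, n):
            slope = (heights[j] - heights[i]) / (j - i)
            if slope >= steepest:   # nothing in between blocks the view
                counts[i] += 1
                counts[j] += 1      # visibility is symmetric
            steepest = max(steepest, slope)
    return counts
```

For the first five primes every tower sees every other top; fluctuations in the gaps start blocking views soon after, which is where the "intricate landscape" comes from.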

Comment author: Thomas 09 August 2017 12:21:52PM 0 points [-]

But then again: now I think that there is a non-brute-force solution.

Comment author: Thomas 08 August 2017 03:39:38PM 0 points [-]

Well, congratulations for what you have done so far.

I had hoped it would be something like this: an intricate landscape of prime towers. I don't have a solution yet because I invented this problem only this Monday morning. Like, "Oh my God, it's Monday morning, I have to publish another problem on my blog and cross-post it on LessWrong...".

I did some Googling to keep my brain from plagiarizing too much, and that was all.

I doubt that there is a clever solution; probably only brute-force solutions are possible here. But one has to be clever to perform a brute-force solution in this case.

Which you guys are.

Comment author: gjm 07 August 2017 01:08:45PM 2 points [-]

Initial handwaving:

Super-crudely the n'th prime number is about n log n. If this were exact then each tower would see all the others, because the function x -> x log x is convex. In practice there are little "random" fluctuations which make a difference. It's possible that the answer to the question depends critically on those random fluctuations and can be found only by brute force...

Comment author: IlyaShpitser 09 August 2017 10:17:02PM 0 points [-]

Isn't this literally asking for the largest increasing prime-gap sequence between 1 and a billion? Probably some number theorist knows this.

Comment author: Thomas 10 August 2017 06:24:35AM 1 point [-]

If we ask for only the smallest 3000 primes, the answer is then the first tower, which is 2 in height. From there you can see 592 tops.

No major prime gap around 2.

Comment author: IlyaShpitser 10 August 2017 10:13:26PM 0 points [-]

Got it, it's a combination of gap size and prime magnitude. That's a weird question, but there might still be a theorem on this.

Comment author: Thomas 11 August 2017 06:54:53AM 0 points [-]

Perhaps there is. But there are billions of such relatively simple constructions possible, such as this "Prime Towers Problem".

I wonder how many of those are already covered by some older proven theorem, or perhaps by some unproven conjecture. I think many of them are still unvisited and unrelated to anything known. Whether this one is such, I don't know. It might be. Just might.

Comment author: MrMind 07 August 2017 12:25:51PM 0 points [-]

The intuitive answer seems to me to be: the last one. It's the tallest, so it witnesses exactly one billion towers. Am I misinterpreting something?

Comment author: Oscar_Cunningham 07 August 2017 12:43:44PM 0 points [-]

I guess some of the towers might block each other from view.

Comment author: gjm 07 August 2017 12:43:19PM 0 points [-]

Yes: merely being lower isn't enough to guarantee visibility, because another intermediate tower might be (lower than the tallest but still) tall enough to block it. Like this, if I can get the formatting to work:

#
# #
# #
# #
# # #

You can't see the third tower from the first, because the second is in the way.

Comment author: Thomas 07 August 2017 01:04:55PM 0 points [-]

Yes, exactly so.

There is another small ambiguity here. The towers 2, 3, and 4 have collinear tops. But this is the only case, and it is not important for the solution.

Comment author: gjm 08 August 2017 01:37:58PM 1 point [-]

This is not the only case. For instance, the tops of the towers at heights 11, 17, 23 are collinear (both height differences are 6, both pairs are 2 primes apart).

Even if it turns out not to be relevant to the solution, the question should specify what happens in such cases.

Comment author: Thomas 08 August 2017 01:58:25PM 0 points [-]

Yes. You are right, I thought I might be wrong about this.

Okay. If they are collinear, then they are visible.

Comment author: username2 08 August 2017 01:42:00PM 0 points [-]

As you say, there are indeed many examples, even of three literally consecutive primes: https://en.wikipedia.org/wiki/Balanced_prime

Comment author: [deleted] 15 August 2017 02:50:35PM 0 points [-]

Question: How do you make the paperclip maximizer want to collect paperclips? I have two slightly different understandings of how this might ultimately be programmed: 1) there's a function that says "maximize paperclips"; 2) there's a function that says "getting a paperclip = +1 good point".

Given these two understandings, isn't the inevitable result for a truly intelligent paperclip maximizer to just hack itself? On understanding 1), it makes itself /think/ that it's getting paperclips, because that's what it really wants; there's no way to make it value ACTUALLY getting paperclips as opposed to just thinking that it's getting them. On understanding 2), it finds a way to directly award itself "good points", because that's what it really wants.

I think my understanding is probably flawed somewhere, but I haven't been able to figure out where, so please point it out.

Comment author: mortal 11 August 2017 05:22:45PM *  0 points [-]

Here is a video of someone allegedly creating a hybrid of a chicken and a human by fertilizing chicken eggs with human sperm. The 'father' of this homunculus kills the thing eventually on camera.

https://youtu.be/-Cto1DXXHAc

The best part is the comments. This is uncanny valley territory, and one comment especially reminded me of you guys - 'For a moment I thought, what if it is real?'

It reminded me of the idea of AIs torturing simulations. Wew.

Comment author: Tenoke 09 August 2017 05:19:37PM 0 points [-]

Karpathy mentions offhand in this video that he thinks he has the correct approach to AGI but doesn't say what it is. Before that he lists a few common approaches, so I assume it's not one of those. What do you think he is suggesting?

P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.

Comment author: ChristianKl 09 August 2017 11:08:10PM 0 points [-]

P.S. If this worries you that AGI is closer than you expected, do not watch Jeff Dean's overview lecture of DL research at Google.

The overview lecture doesn't really get me worried. It basically means that we are at the point where we can use machine learning to solve well-defined problems with plenty of training data. At the moment that seems to require a human machine-learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine-learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Comment author: Tenoke 10 August 2017 12:35:57AM *  1 point [-]

At the moment that seems to require a human machine-learning expert, and recent Google experiments suggest that they are confident they can develop an API that can do this without machine-learning experts being involved.

At a recent LW discussion someone told me that this kind of research doesn't even count as an attempt to develop AGI.

Not in itself, sure, but yeah, there was the bit about the progress made so you won't need an ML engineer to develop the right net to solve a problem. However, there was also the bit where they have nets doing novel research (e.g. new activation functions with better performance than the state of the art, novel architectures, etc.). And for going further in that direction, they just want more compute, which they're going to be getting more and more of.

I mean, if we've reached the point where AI research is a problem tractable by (narrow) AI, which can further benefit from that research and apply it to make further improvements faster/with more accuracy... then maybe there is something to potentially worry about.

Unless of course you think that AGI will be built in such a different way that no/very few DL findings are likely to be applicable. But even then I wouldn't be convinced that this completely separate AGI research won't also be the kind of problem that DL can handle, as AGI research is, in the end, a "narrow" problem.

Comment author: ChristianKl 10 August 2017 09:41:02AM 0 points [-]

To me the question isn't whether new DL findings are applicable but whether they are sufficient. I don't think they are sufficient to be able to solve problems where there isn't a big dataset available.

Comment author: Manfred 09 August 2017 05:57:14PM 0 points [-]

I think I don't know the solution, and if so it's impossible for me to guess what he thinks if he's right :)

Maybe he's thinking of something vague like CIRL, or hierarchical self-supervised learning with generation, etc. But I suspect he's thinking of some kind of recurrent network. So maybe he has some clever idea for unsupervised credit assignment?

Comment author: rxs 07 August 2017 12:50:28PM 0 points [-]

Is there an alternative to predictionbook.com for private predictions? I'd like to have all the nice goodies like updatable predictions in scicast/metaculus, but for private stuff.

Alternative question: is there an off-line version of PredictionBook (command line or GUI)?

Comment author: gwern 07 August 2017 05:19:39PM 1 point [-]

You can set PB predictions to be private. Of course, this doesn't guarantee privacy, since there are so many ways to hack websites, and PB is not the best-maintained codebase, nor has it ever been audited... You could encrypt your private predictions, which would offer security, but you'd lose the reminders+scoring.

I don't know of any offline CLI versions but the core functionality is pretty trivial so you could hack one up easily.
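The core really is tiny. A minimal offline sketch of it (the file format, function names, and choice of the Brier scoring rule here are my own, not PB's actual internals):

```python
import json

def record(path, statement, prob):
    """Append one prediction as a JSON line; outcome gets filled in later."""
    with open(path, "a") as f:
        f.write(json.dumps({"statement": statement,
                            "prob": prob,
                            "outcome": None}) + "\n")

def brier_score(predictions):
    """Mean squared error of stated probabilities against outcomes.

    predictions: iterable of (probability, outcome) pairs, outcome 1 if the
    event happened and 0 if not. Lower is better; always answering 0.5
    scores exactly 0.25.
    """
    pairs = list(predictions)
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)
```

Resolving predictions and drawing a calibration curve is not much more work on top of this.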

Comment author: disconnect 07 August 2017 08:25:28PM 0 points [-]

For mobile, there's LW Predictions on Android.

Comment author: rxs 08 August 2017 05:01:29AM 0 points [-]

Thanks!