
Comment author: Alicorn 17 March 2017 01:46:56AM 21 points [-]

If you like this idea but have nothing much to say please comment under this comment so there can be a record of interested parties.

Comment author: gallabytes 17 March 2017 04:23:47AM 5 points [-]

I'm fairly interested but don't really want to be around children.

Comment author: PeteMichaud 17 December 2015 08:30:52PM 25 points [-]

(I'm Pete, I work for CFAR, and I'd like to post things, so please upvote this comment to 20 karma.)

Comment author: gallabytes 17 December 2015 09:05:39PM 2 points [-]

I feel like this comment belongs on the LessWrong 2.0 article (to the point that I assumed that's where it was when I was told about it), but it doesn't actually matter.

Comment author: 11kilobytes 03 June 2015 08:28:50AM 5 points [-]

Would a series of posts explaining the basics of Homotopy Type Theory be well accepted here?

Comment author: gallabytes 03 June 2015 08:15:45PM 1 point [-]

I'd be interested to read another take on it if there's some novel aspect to the explanation. Do you have a particular approach to explaining it that you think the world doesn't have enough of?

Comment author: gallabytes 27 January 2015 08:14:52AM 0 points [-]

The tournament definitely made for an interesting game, but more as a conversation starter than as a game in itself. I suspect part of the issue was how short this particular game was, but the constant cooperation got old pretty quickly. Some ideas for the next time someone wants to run a dilemma tournament:

1) Bracket play - you play with another person until one of you defects while the other cooperates. Example match: both cooperate, both cooperate, both defect, both defect, player 1 defects and player 2 cooperates, player 1 moves on. This has a trivial vulnerability in that always defecting always wins or ties, so there'd have to be some other incentive to encourage cooperation.

2) Factions - each person is part of a faction, and your score includes a minor term for the total utility of your faction at the end of the game. You could go further and add a negative (even smaller) term for the total utility of other factions, or maybe add the option of giving points to other players. (A rough sketch of both mechanics follows below.)
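
As mentioned above, here is a rough Haskell sketch of how both mechanics might work; the types, weights, and function names are made up for illustration, not anything the meetup actually used:

data Move = Cooperate | Defect deriving (Eq, Show)

-- Bracket play: scan the rounds of a match until exactly one player defects
-- while the other cooperates; that defector moves on.
bracketWinner :: [(Move, Move)] -> Maybe Int
bracketWinner ((Defect, Cooperate) : _) = Just 1
bracketWinner ((Cooperate, Defect) : _) = Just 2
bracketWinner (_ : rest)                = bracketWinner rest
bracketWinner []                        = Nothing   -- no decisive round yet

-- Faction scoring: your own points, a minor bonus for your faction's total,
-- and an even smaller penalty for rival factions' totals.
factionScore :: Double -> Double -> Double -> Double
factionScore own faction rivals = own + 0.1 * faction - 0.02 * rivals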

Unrelated note: in the future there should probably be some coordination around walking to and from the venue. While nothing happened this time, Civic Center did not feel like the safest place to be walking around alone late at night.

Comment author: KatjaGrace 25 September 2014 03:19:36AM 1 point [-]

"What's your reason for thinking weak AI leads to strong AI?"

I doubt there is a sharp distinction between them, so I think probably trying to make increasingly useful weak AIs will lead to strong AI.

Comment author: gallabytes 26 September 2014 06:14:58AM 3 points [-]

"I doubt there is a sharp distinction between them"

Actually, let's taboo weak and strong AI for a moment.

By weak AI I mean things like video game AI, self-driving cars, WolframAlpha, etc.

By strong AI I think I mean something that can create weak AIs to solve problems. Anything that does this likely includes a general inference engine. While a self-driving car can use its navigation programs to figure out lots of interesting routes from A to B, if you tell it to go from California to Japan it won't start building a boat.

Comment author: KatjaGrace 23 September 2014 01:01:30AM 5 points [-]

'One result of this conservatism has been increased concentration on "weak AI" - the variety devoted to providing aids to human thought - and away from "strong AI" - the variety that attempts to mechanise human-level intelligence' - Nils Nilsson, quoted on p. 18.

I tend to think that 'weak AI' efforts will produce 'strong AI' regardless, and not take that much longer than if people were explicitly trying to get strong AI. What do you think?

Comment author: gallabytes 24 September 2014 12:11:01AM 4 points [-]

What's your reason for thinking weak AI leads to strong AI? Generally, weak AI seems to take the form of domain-specific creations, which provide only very weak general abstractions.

One example that people previously thought would lead to general AI was chess playing. And sure, the design of chess-playing AI forced some interesting development in efficiently traversing large search spaces, but as far as I can tell it has only done so in a very weak way, and hasn't contributed meaningfully to anything resembling the efficiency of human-style chunking.

Comment author: KatjaGrace 23 September 2014 01:03:42AM 3 points [-]

Relatedly, Scott Alexander criticizes the forms of popular reporting on dangers from AI. Why does reporting take these forms?

Comment author: gallabytes 23 September 2014 11:58:43PM 3 points [-]

AGI takeoff is an event we as a culture have never seen before, except in popular culture. So, with that in mind, reporters draw on the only good reference points the population has: sci-fi.

What would sane AI reporting look like? Is there a way to talk about AI to people who have only been exposed to the cultural background (if even that) in a way that doesn't either bore them or look at least as bad as this?

Comment author: paulfchristiano 23 September 2014 02:33:08AM 5 points [-]

Excellence at baking cakes is certainly helpful. I agree that there are other cake-experts who might be better poised to predict the future of the cake industry. I don't know who their analogs are in the case of artificial intelligence. Certainly it seems like AI researchers have access to some important and distinctive information (contra the cake example).

Comment author: gallabytes 23 September 2014 11:53:56PM 1 point [-]

Perhaps people who are a step removed from the actual AI research process? When I say that, I'm thinking of people like Robin Hanson and Nick Bostrom, whose work depends on AI but isn't explicitly about it.

Comment author: KatjaGrace 23 September 2014 01:06:57AM 7 points [-]

If a person wanted to make their prediction of human-level AI entirely based on what was best for them, without regard to truth, what date would be the best one to name? Is twenty years out really the sweet spot?

I think this kind of exercise is helpful for judging the extent to which people's predictions really are influenced by other motives - I fear it's tempting to look at whatever people predict and see a story about the incentives that would drive them there, and take their predictions as evidence that they are driven by ulterior motives.

Comment author: gallabytes 23 September 2014 11:49:12PM *  2 points [-]

The answer to this question depends really heavily on my estimation of MIRI's capability as an organization and on how hard the control problem turns out to be. My current answer is "the moment the control problem is solved and not a moment sooner", but I don't have enough of a grip on the other difficulties involved to say more concretely when that would be.

Comment author: gelisam 20 September 2014 01:38:20PM *  7 points [-]

When you make a data declaration like this, you are essentially postulating (A → □ A) and (□ A → A), for any formula A. This removes the distinction between A (the formula A is "true") and □ A (the formula A is "provable"), meaning you are not working inside the modal logic you think you are working in. We made the same mistake in the /r/haskell thread at first.

It's an easy mistake to make, because the first rule of inference for provability logic is

⊢ A
------
⊢ □ A

So it looks like (A → □ A) should be a theorem, but notice that the left-hand side of the turnstile is empty. This means that the rule is only valid when there are no assumptions in context; or equivalently in a programming language, when there are no variables in scope. Since Agda doesn't know how to talk about which Agda variables are in scope, we cannot implement this rule as an Agda function, which would be available whether the scope is empty or not.

A few examples would probably help. A valid use of inference rule 1 would look like this:

----- variable rule
A ⊢ A
--------- deduction rule
⊢ (A → A)
----------- inference rule 1
⊢ □ (A → A)

Here, we have used inference rule 1 and a few other common rules to prove that the trivial implication (A → A) is "provable". By comparison, here is what happens when we try to use the same technique to prove that all "true" statements are "provable", i.e. (A → □ A). (I put "true" and "provable" in quotes in order to distinguish "provable according to the map" from provable in the territory. Provability logic is a map we use to understand provability.)

----- variable rule
A ⊢ A
------- inference rule 1 (invalid, the context is not empty)
A ⊢ □ A
--------- deduction rule
⊢ A → □ A

As you can see, the proof is invalid. And that's a good thing, too, because if we could prove that all "true" statements were "provable", then our provability logic would have allowed us to prove something which we know to be false, so the logic would have been a poor model of real-world provability.

Anyway, the conclusion of all this is that we should not represent inference rule 1 by an Agda function from A to □ A, but by an object of type "Theorem (A ⇾ Provable A)", where "⇾" is a syntactic construct representing implication, not an Agda function.
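
For comparison, here is a minimal Haskell sketch of that idea (the names and the rule set are illustrative, not the code from the thread): theorems live in their own type whose values can only be built by the inference rules, so necessitation consumes a Theorem rather than an arbitrary formula.

-- Illustrative deep embedding of provability-logic formulas.
data Formula
  = Var String
  | Formula :->: Formula   -- syntactic implication ("⇾"), not a function arrow
  | Box Formula            -- □ A, "A is provable"
  deriving (Eq, Show)

infixr 5 :->:

-- A Theorem wraps a formula derived under the empty context; hiding the
-- constructor would force clients to go through the rules below.
newtype Theorem = Theorem Formula deriving Show

-- Inference rule 1 (necessitation): from a theorem of A, conclude □ A.
-- Its argument is a Theorem, not a Formula, so it only applies to formulas
-- already derived with no assumptions in scope.
nec :: Theorem -> Theorem
nec (Theorem a) = Theorem (Box a)

-- Modus ponens: from theorems of (A ⇾ B) and A, conclude B.
mp :: Theorem -> Theorem -> Maybe Theorem
mp (Theorem (a :->: b)) (Theorem a')
  | a == a' = Just (Theorem b)
mp _ _ = Nothing

The crucial point is that nothing has type Formula -> Theorem: there is no way to get from A to □ A without first exhibiting a closed derivation of A.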

Comment author: gallabytes 20 September 2014 07:40:51PM *  2 points [-]

I'm not really seeing the importance of the separate construct Theorem (A → B) vs (A → B). It seems to get manipulated in the exact same way in the code above. Is there some extra capability of functions that Theorem + postulates doesn't include?

Also, I think you're misunderstanding what a function A → B means in Agda. A → □ A doesn't mean "all true statements are provable"; it means "all statements we can construct a proof term for are provable" - you don't get an element of type A to apply it to without a proof of A, so it's actually a pretty trivial implication. I'm tempted to say that the notion of provability is wholly unnecessary here because of the Curry-Howard correspondence, and that the real issue is the construction of the fixpoint.
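
To make the triviality concrete, here is a minimal Haskell sketch assuming a shallow Curry-Howard reading in which □ is just another type constructor (the names are mine, not from the original code):

-- Shallow reading: □ A is just a wrapper around a proof term of A.
newtype Box a = Box a

-- "A → □ A" is trivially inhabited: to call it at all you must already
-- hold a proof term of type a, so the □ layer adds no information.
box :: a -> Box a
box = Box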
