Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Open thread, Jul. 03 - Jul. 09, 2017

1 Post author: MrMind 03 July 2017 07:20AM
If it's worth saying, but not worth its own post, then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

Comments (22)

Comment author: CellBioGuy 06 July 2017 11:07:25PM *  9 points [-]

Postdoctoral position acquired. May be doing some work off a NASA astrobiology grant, eventually.

Comment author: Viliam 09 July 2017 01:17:54PM 3 points [-]

I think we had a debate about the exact definition of blackmail, so here is an interesting legal opinion:

Blackmail is surprisingly hard to define

at the heart of blackmail law lies what some call the blackmail paradox: Blackmail — which I’ll define here as threatening to reveal an accurate embarrassing fact about a person unless he does what you demand — generally involves (a) threatening to do something that you have every legal right (even a constitutional right) but no legal obligation to do, in order to (b) get someone to do what he has every legal right to do.

Nor can we resolve this by saying that coercive threats, even threats to do something legal, are generally criminal. “Pay me $10,000 or I’ll stop doing business with you” is perfectly legal (assuming that the threat comes from a sole proprietor, rather than someone lining his own pockets at the expense of his employer). “Pay me $10,000, neighbor, or I’ll sell my house, which is next to yours, to someone you dislike” is perfectly legal, too. Much legitimate hardball negotiation involves threats aimed at getting someone to do something, including threats of financial ruin. It’s just when the threat is to reveal embarrassing information that it becomes blackmail (or, as some statutes label it, coercion or extortion).

Of course, there are lots of possible theoretical and pragmatic responses to this objection; and the law does punish blackmail, though the definition varies from state to state. But the theoretical paradox, and specifically the fact that so much legal and commonplace behavior is very similar to blackmail, causes practical problems. [An overly literal interpretation of such a law] would even make it a crime to say, “Pay back the money you took from me, or I’ll sue you to get it back,”

Comment author: Daniel_Burfoot 04 July 2017 03:05:26AM 2 points [-]

I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).

Here is a diagram expressing the Merge Sort algorithm

Here is the underlying source code.

I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.
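The linked diagram and source aren't reproduced in this archive. For reference, here is a plain merge sort in ordinary Python (my own sketch; it is not written in the tool's required convention, which the comment doesn't specify):

```python
def merge_sort(xs):
    """Recursively split the list, sort each half, then merge in order."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    # Merge: repeatedly take the smaller head of the two sorted halves.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]
```

The recursive call structure (split, recurse, merge) is exactly the kind of control flow a sequence-diagram extractor would trace.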

Comment author: ChristianKl 04 July 2017 05:14:38PM 1 point [-]

Most code documentation happens in text files. Maybe it's worth drawing the diagram in ASCII or Unicode characters?

Comment author: jackk 06 July 2017 02:30:21AM 0 points [-]

You might be interested in Conal Elliott's work on Compiling to Categories, which enables automatic diagram extraction (among a bunch of other things) for Haskell.

Comment author: cousin_it 09 July 2017 08:16:01AM *  1 point [-]

Realistic AI risk scenario similar to The Matrix: ad tech eats the world and keeps humans around for clicks. Clickbots won't do, because clickbot detection evolves as part of ad tech.

Comment author: CurtisSerVaas 04 July 2017 07:04:44PM 1 point [-]

I created a 1dollarscan subscription for "100 sets" (each set is 100 pages, so I paid $99 for the ability to scan up to 100sets*100pages/set = 10,000 pages), but I'm not going to use all of the sets, so if you have dead tree books that you'd like to destroy/convert to PDFs, PM me. My subscription ends on July 15th, and you'd have to mail in the books so that they arrive before then.

Comment author: whpearson 03 July 2017 08:51:12PM 1 point [-]

I've decided to work on a book while I also work on the computer architecture. It pulls together a bunch of threads of thinking I've had around the subject of autonomy. Below is the TL;DR. If lots of people are interested I can try to blogify it. If not many people are, I might seek your opinions on drafts.


We are entering an age where questions of autonomy become paramount. We have created computers with a certain amount of autonomy and are exploring how to give more autonomy to them. We simultaneously think that autonomous computers are overhyped and that autonomous computers (AI) could one day take over the earth.

The disconnect in views is due to a choice made early in computing's history that requires a programmer or administrator to look after a computer by directly installing programs and stopping and removing bad programs. The people who are worried about AI are worried that the computers will become more autonomous and no longer need an administrator. People embedded in computing cannot see how this would happen as computers, as they stand, still require someone to control the administrative function and we are not moving towards administrative autonomy.

Can we build computer systems that are administratively autonomous? Administration can be seen as a resource allocation problem, with an explicit administrator serving the same role as a dictator in a command economy. An alternative computer architecture is presented that relies on a market-based allocation of resources to programs, based on human feedback. This architecture, if realized, would allow programs to experiment with new programs in the machine and would lead to a more efficient, adaptive computer that didn't need an explicit administrator. Instead it would be trained by a human.

However making computers more autonomous can either lead to more autonomy for each of us by helping us or it could lead to computers being completely autonomous and us at their mercy. Ensuring the correct level of autonomy in the relationship between computers and people should be a top priority.

The question of more autonomy for humans is also a tricky one. On the one hand it would allow us to explore the stars and safeguard us from corrupt powers. On the other hand more autonomy for humans might lead to more wars and existential risks, due to the increase in the destructive power of individuals and the decrease in interdependence.

Autonomy is currently ill-defined. It is not an all-or-nothing affair. During this discussion, what we mean by autonomy will be broken down, so that we can have a better way of discussing it and charting our path to the future.
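The market-based administration idea above could be sketched very roughly like this (my own toy illustration, not the book's actual design): programs hold budgets funded by human feedback, and each tick a slice of CPU goes to the highest bidder rather than to an explicit administrator's schedule.

```python
# Toy sketch (my invention, not the book's design): programs bid their own
# budget for CPU time; human feedback refunds the programs found useful.
class Program:
    def __init__(self, name, budget):
        self.name, self.budget = name, budget

    def bid(self):
        # Naive bidding strategy: stake 10% of the remaining budget.
        return self.budget * 0.1

def run_tick(programs):
    """Auction one CPU slice: highest bidder wins and pays its bid."""
    winner = max(programs, key=Program.bid)
    winner.budget -= winner.bid()
    return winner

def human_feedback(program, reward):
    # Useful programs get their spending refunded (and more) by the human.
    program.budget += reward

progs = [Program("editor", 10.0), Program("spam-bot", 1.0)]
w = run_tick(progs)        # "editor" outbids "spam-bot" for the slice
human_feedback(w, 2.0)
print(w.name, round(w.budget, 2))  # editor 11.0
```

A useless program's budget only ever drains, so it is starved out of the machine without anyone explicitly uninstalling it.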

Comment author: Thomas 03 July 2017 08:38:51AM 1 point [-]
Comment author: Gurkenglas 03 July 2017 01:09:35PM *  1 point [-]

Your Huffman codes with essential indifference are full binary trees (each node has 0 or 2 children), counted up to isomorphism.

Let f(n) be the number of trees with n leaves.

f(1)=1
f(2n+1)=sum from i=1 to n of f(i)*f(2n+1-i)
f(2n)=f(n)*(f(n)+1)/2 + sum from i=1 to n-1 of f(i)*f(2n-i)

Here are the first 25 such tree counts:

[1,1,1,2,3,6,11,23,46,98,207,451,983,2179,4850,10905, 24631,56011,127912,293547,676157,1563372,3626149,8436379,19680277]
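The recurrence above can be checked directly with a short memoized implementation (a Python sketch; the `lru_cache` memoization is my addition):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(m):
    """Number of full binary trees with m leaves, up to mirror isomorphism,
    using the recurrence from the comment above."""
    if m == 1:
        return 1
    n, r = divmod(m, 2)
    if r == 1:
        # m = 2n+1: split into unordered pairs (i, m-i) with i < m-i
        return sum(f(i) * f(m - i) for i in range(1, n + 1))
    # m = 2n: the pair (n, n) is an unordered choice with repetition,
    # hence the f(n)*(f(n)+1)/2 term; other pairs have i < m-i as before
    return f(n) * (f(n) + 1) // 2 + sum(f(i) * f(m - i) for i in range(1, n))

print([f(m) for m in range(1, 26)])
```

The output matches the list above (it is OEIS A001190, the Wedderburn-Etherington numbers).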

Comment author: Thomas 03 July 2017 03:26:36PM 0 points [-]

It's something. But what are the codes? An algorithm to create them would suffice. A faster one is better, of course.

Comment author: Gurkenglas 03 July 2017 08:54:15PM *  1 point [-]

The same control flow generates them. In Haskell:

import Data.List (tails)

data T = N T T | L deriving Show

ts :: Int -> [T]
ts 1 = [L]
ts k | (n, 1) <- divMod k 2 = [N x y | i <- [1..n ], x <- ts i, y <- ts (k-i)]
ts k | (n, 0) <- divMod k 2 = [N x y | i <- [1..n-1], x <- ts i, y <- ts (k-i)]
                              ++ [N x y | ys@(x:_) <- tails (ts n), y <- ys]

<Gurkenglas> > ts 4
<lambdabot> [N L (N L (N L L)),N (N L L) (N L L)]

(Beware, I had to use U+2800 to almost align the code block in spite of LW's software eating whitespace. Source here)

Edit: See also: oeis, where you enter an integer sequence and it tells you where people have seen it.

Comment author: Thomas 05 July 2017 07:37:32AM 0 points [-]

Very well, congratulations again!

Perhaps a nonrecursive function would be faster.

Comment author: Gurkenglas 05 July 2017 02:36:31PM *  0 points [-]

Not really, the sequence grows quickly enough to outstrip the recursive overhead. To calculate the overhead, replace the * in f(i)*f(2n+1-i) with a +. Memoizing is of course trivial anyway, using memoFix.

Comment author: madhatter 08 July 2017 03:40:26PM 0 points [-]

Where does the term at the top of page three of this paper, following "a team's chance of winning increases by", come from?

https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf

Comment author: madhatter 07 July 2017 06:13:18AM 0 points [-]

Two random questions.

1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?

2) Is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI, or not really?

Comment author: ChristianKl 08 July 2017 06:48:59PM 1 point [-]

The kind of money that projects like DeepMind or OpenAI cost seems to be within the budget of a Russian billionaire who strongly cares about the issue.

But there seem to be many countries that are stronger than Russia: https://futurism.com/china-has-overtaken-the-u-s-in-ai-research/

Comment author: MrMind 07 July 2017 03:57:38PM 0 points [-]

On 2, I'd say not really: fuzzy logic is a logic which has a continuum of truth values. Logical uncertainty works by imposing, on classical logic, a probability assignment that is as "nice" as possible.
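The distinction can be made concrete with a toy contrast (my own illustration, not MIRI's formalism): fuzzy logic grades truth itself, while logical uncertainty keeps propositions classically true or false and grades our belief in them.

```python
def fuzzy_and(a, b):
    """One common fuzzy conjunction (the 'min' t-norm, among several choices)."""
    return min(a, b)

# Fuzzy: "warm" has truth degree 0.7, "humid" 0.4, so "warm and humid"
# has truth degree 0.4 -- no probabilities involved anywhere.
print(fuzzy_and(0.7, 0.4))  # 0.4

# Logical uncertainty: each statement is definitely true or false, we just
# don't know which, so we assign probabilities; a conjunction needs a joint
# distribution (independence assumed here purely for illustration).
p_a, p_b = 0.7, 0.4
print(round(p_a * p_b, 2))  # 0.28
```

Note the values combine by entirely different rules: a fuzzy conjunction is a function of the two degrees alone, while a probability of a conjunction is not determined by the two marginals.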

Comment author: WalterL 06 July 2017 01:24:36PM 0 points [-]

This might be better saved for a 'dumb questions' thread, but whatever.

So...I've had a similar experience a couple of times. You go to the till, make a purchase, something gets messed up and you need to void out. The cashier has to call a manager.

This one time I had a cashier who couldn't find her manager, so she put the transaction through, then put a refund through. Neither of these required a manager.

Why is it that you need a manager code to void a transaction, while the cashier is presumed competent to handle sales and refunds?

Comment author: drethelin 07 July 2017 06:34:02PM 1 point [-]

Voiding a transaction deletes it (I'm pretty sure), which removes the information trail. The other way records the transactions, so if they end up being criminal, the cashier in question is caught.

Comment author: WalterL 07 July 2017 06:37:29PM 0 points [-]

That sounds right, thanks.

Comment author: turchin 04 July 2017 10:20:44AM 0 points [-]

Do we have any non-science-fiction link on the global risk that a narrow-AI virus affects robotic hardware, like self-driving cars or home robots, and makes them attack humans?