Gunnar_Zarncke comments on Open thread, Jan. 25 - Jan. 31, 2016 - Less Wrong Discussion

Post author: username2, 25 January 2016 09:07PM

Comment author: MrMind, 28 January 2016 11:17:21AM, 0 points

What follows are random spurts of ideas that emerged while thinking about AlphaGo. I make no claim of validity, soundness, or even sanity, but these are directions that are fun for me to investigate, and they might turn out to be interesting for you too:

  • AlphaGo uses two deep neural networks (a policy network and a value network) to prune the enormous search tree of a Go position; the networks were trained on expert human games and then refined by self-play (a toy pruning sketch follows this list).
  • Information geometry allows us to treat information theory as geometry: a parametric family of probability distributions becomes a Riemannian manifold under the Fisher information metric (worked example below).
  • Neural networks allow us to partition high-dimensional data; a ReLU layer, for instance, carves its input space into polyhedral cells (sketch below).
  • Pruning a search tree is also strangely similar to dual intuitionistic logic.
  • Deep neural networks can thus apply a sort of paraconsistent probabilistic deduction.
  • Probabilistic self-reflection is possible.
  • Could deep neural networks then operate a sort of paraconsistent probabilistic self-reflection?
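
To make the first bullet concrete, here is a minimal toy sketch (all names, weights, and probabilities are hypothetical; this is not AlphaGo's actual code) of how a policy network's move probabilities can prune a search tree by expanding only the few highest-prior moves:

```python
import numpy as np

def prune_moves(policy_probs, legal_moves, top_k=5):
    """Keep only the top_k moves ranked by policy-network probability.

    Loosely mimics how a learned prior narrows the branching factor
    of a Go search tree before any deep search happens.
    """
    ranked = sorted(legal_moves, key=lambda m: policy_probs.get(m, 0.0),
                    reverse=True)
    return ranked[:top_k]

# Stand-in for a real policy network: random probabilities over 361 points.
rng = np.random.default_rng(0)
moves = [f"move_{i}" for i in range(361)]          # 19x19 board
probs = dict(zip(moves, rng.dirichlet(np.ones(len(moves)))))
print(prune_moves(probs, moves, top_k=5))
```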
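
For the information-geometry bullet, the standard construction equips a parametric family of distributions with the Fisher information metric. As a one-parameter worked example, the Bernoulli family gives

$$ g(\theta) = \mathbb{E}_{x \sim p(\cdot \mid \theta)}\!\left[ \left( \frac{\partial}{\partial \theta} \log p(x \mid \theta) \right)^{2} \right] = \frac{1}{\theta(1-\theta)}, $$

so distances between nearby distributions blow up near θ = 0 and θ = 1, exactly where small parameter changes are most informative.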
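
And for the partitioning bullet, a single ReLU layer already carves its input space into polyhedral cells: two points land in the same cell exactly when they switch on the same units. A minimal sketch with toy random weights (my own construction, not from any paper):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 2))   # 8 ReLU units over a 2-D input
b = rng.standard_normal(8)

def cell(x):
    # The on/off pattern of the units identifies the polyhedral region
    # of input space that x falls into.
    return tuple((W @ x + b > 0).astype(int))

for p in rng.standard_normal((5, 2)):
    print(p.round(2), "->", cell(p))
```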
Comment author: Gunnar_Zarncke, 29 January 2016 10:18:42PM, 0 points

See the AlphaGo discussion post.