
Clarity comments on Open Thread August 31 - September 6 - Less Wrong Discussion

5 Post author: Elo 30 August 2015 09:26PM


Comment author: Clarity 03 September 2015 11:57:22AM 1 point

I have yet to see a treatise, whether for strategic managers or from academics of any domain, on the game-theoretic implications of data science and data-driven firm behaviour in general.

I, for one, would expect data-driven organisations to act more rationally and therefore more predictably, meaning that game-theoretically optimal strategic behaviour (or rather an approximation of it, since many data-driven organisations, like many poker players, will play stupidly) forming a Nash equilibrium would maximise expected utility. However, I don't see how machine learning on its own provides an avenue for firms to inform their strategic multi-agent decisions. They instead need to consider artificial intelligence techniques more broadly, and to be able to frame machine learning in that context. This, I suspect, will lead to the gold rush for AGI development. As soon as the potential for this becomes common knowledge, LinkedIn losers will start hailing AI expert as the sexiest job of the 21st century. MIRI, take heed of my warning: if you are not more transparent with your research agenda (which, for those who don't know, is still secret in part), you may find yourself developing FAI solutions far too slowly.
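To make the "game-theoretically optimal behaviour" claim above concrete, here is a minimal sketch (not from the original comment) of what an optimally rational agent does in the simplest strategic setting: in a 2x2 zero-sum game, it randomises so that its opponent is indifferent between responses. The function name and the matching-pennies payoff matrix are illustrative choices, not anything the comment specifies.

```python
def mixed_equilibrium_2x2(A):
    """Return the row player's equilibrium probability of playing row 0
    in a 2x2 zero-sum game, given the row player's payoff matrix A.

    Uses the standard indifference condition: the row mix p is chosen so
    the column player earns the same payoff against either column.
    """
    denom = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    if denom == 0:
        raise ValueError("no fully mixed equilibrium for this matrix")
    return (A[1][1] - A[1][0]) / denom

# Matching pennies: the unique equilibrium is to randomise 50/50,
# which is exactly the kind of unexploitable play strong poker
# players approximate and weak ones deviate from.
A = [[1, -1], [-1, 1]]
print(mixed_equilibrium_2x2(A))  # 0.5
```

A firm playing this mix cannot be exploited by any opponent, which is the sense in which data-driven rationality makes behaviour predictable in distribution even when individual actions are randomised.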

Release your agenda and let others work on your problems cooperatively. Maybe you'll even get a more heterogeneous audience at the Intelligent Agents Forum. Maybe mainstream researchers can craft work on the mathematical foundations of AI or UAI that you can actually use. I suspect the reason that this community blog, albeit devoted to human rationality rather than machine rationality, devolves into topics like 'polygamy' is that we don't have shared problems to solve.

Human rationality is a very, very awkward construct, and its problem space is unclear and tangential to, albeit related to, MIRI's work, which, let's admit, is the very reason this place exists. Let us run wild, and perhaps LessWrongers will start alternative agendas, like developing criminal networks and intelligence networks so that potentially hostile AI could be detected in advance and stopped coercively. I'm just giving the first example I could think of.

My point is: you don't have any significant proprietary hard assets, so why shouldn't I, or any other potential funder, instead create a prize or award for a more transparent FAI research organisation to pivot off your incredible work? I'm not in a position to judge whether or not your ongoing contributions are essential, but this could also be a good opportunity for the community to discuss what will happen if or when you die or become incapable of contributing to the community. The same goes for other critical members of the community. Are there intellectual succession processes in place?