
Comment author: James_Miller 24 April 2016 06:13:37PM 6 points [-]

I will probably vote for Trump if he wins the Republican nomination, and I don't think the article was anti-Trump.

Comment author: Thomas 25 April 2016 09:16:14AM *  2 points [-]

I am not an American and I will not vote. I hate the intelligentsia's attitude toward the man.

Comment author: Gleb_Tsipursky 24 April 2016 04:07:23PM 0 points [-]

Nancy, I could have certainly made similar points about Sanders, although less emphatically. To some extent, all candidates are appealing to anger and fear, although Trump is the clearest and most strident example. This is why at the end of the article, I noted that "he is not the only candidate doing so. Whatever candidate you are considering, my fellow Americans, I hope you deploy intentional thinking and avoid the predictable errors in making your political decisions."

Good question on checking whether I'm right. I didn't go into this in depth in the article, due to space limitations, but I read quite a few primary sources on why people are voting for Trump. I have a scholarly background in studying emotions and applied that methodology to this topic.

Comment author: Thomas 24 April 2016 05:47:20PM 3 points [-]

Should we expect your Anti-Trump campaigning here until November, or what?

In response to comment by [deleted] on AIFoom Debate - conclusion?
Comment author: turchin 05 March 2016 09:55:38AM *  3 points [-]

I tried to explain in my recent post that at the current level of technology human-level AGI is possible, but foom is not yet, in particular because of problems with size, speed, and the way neural nets learn.

Also, human-level AGI is not powerful enough to foam. Human science is developing, but it includes millions of scientists; a foaming AI would need the same complexity but run 1000 times faster. We don't have such hardware. http://lesswrong.com/lw/n8z/ai_safety_in_the_age_of_neural_networks_and/

But the field of AI research is foaming, with a doubling time of one year now.
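To make the comment's back-of-envelope arithmetic concrete, here is a small sketch; all figures are the commenter's rough assumptions, not measured data:

```python
# Rough numbers from the comment above (assumptions, not measurements).
scientists = 1_000_000   # "millions of scientists" doing human science
speedup = 1000           # a fooming AI running 1000x faster than humans

# Equivalent human-researcher-years per year such an AI would represent:
equivalent_capacity = scientists * speedup
print(equivalent_capacity)  # 1000000000

# A field growing with a doubling time of one year multiplies 2**n-fold in n years:
def growth_factor(years, doubling_time=1.0):
    return 2 ** (years / doubling_time)

print(growth_factor(10))  # 1024.0
```

So under these assumptions the hardware gap is roughly nine orders of magnitude, though a field doubling yearly would close three of those per decade.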

Comment author: Thomas 06 March 2016 09:52:50AM 2 points [-]

foom, not foam, right?

Comment author: philh 02 December 2015 02:59:31PM 1 point [-]

So I can trade one currency for another, and then trade back, and the amount I now have in the first currency can be arbitrarily high. This doesn't feel like it particularly changes anything.

Comment author: Thomas 02 December 2015 05:33:42PM 0 points [-]

You are welcome!

Comment author: philh 02 December 2015 10:15:15AM 2 points [-]

Repeating my question from late in the previous thread:

It seems to me that if you buy a stock, you could come out arbitrarily well-off, but your losses are limited to the amount you put in. But if you short, your payoffs are limited to the current price, and your losses could be arbitrarily big, until you run out of money.

Is this accurate? If so, it feels like an important asymmetry that I haven't absorbed from the "stock markets 101" type things that I've occasionally read. What effects does it have on markets, if any? (Running my mouth off, I'd speculate that it makes people less inclined to bet on a bubble popping, which in turn would prolong bubbles.) Are there symmetrical ways to bet a stock will rise/fall?
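(To make the asymmetry in the question concrete, here is a hypothetical numerical sketch; the entry price and final prices are made up, and fees, margin, and interest are ignored:)

```python
# Payoff asymmetry between a long and a short position on one share.
entry_price = 100.0

def long_pnl(final_price):
    # Long: worst case is losing the entire purchase price (final_price = 0);
    # the upside is unbounded as final_price grows.
    return final_price - entry_price

def short_pnl(final_price):
    # Short: best case is the stock going to zero (gain = entry_price);
    # losses grow without bound as final_price rises.
    return entry_price - final_price

for p in (0.0, 50.0, 100.0, 200.0, 1000.0):
    print(p, long_pnl(p), short_pnl(p))
```

At a final price of 1000 the long position has gained 900 while the short has lost 900, but the long could never have lost more than its initial 100.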

Comment author: Thomas 02 December 2015 10:41:39AM *  -1 points [-]

if you buy a stock, you could come out arbitrarily well-off, but your losses are limited to the amount you put in

You never only buy; at the same time you have traded your dollars, euros, or whatever currency for that stock.

There is no such thing as just "buying" or "shorting" - it's always trading: swapping two "currencies".

Comment author: Thomas 14 November 2015 08:35:20AM -1 points [-]

The idea that the slits, the electrons, the detector, and everything else near the experiment all influence the outcome looks very good to me.

Comment author: turchin 29 October 2015 09:30:38AM -2 points [-]

It would also be interesting to note that the program can't run and optimise itself simultaneously. Probably it needs to copy its source code, edit it, then terminate itself and start the new code. Or edit only a subagent which is not in use at the current moment.

Comment author: Thomas 29 October 2015 10:53:12AM 0 points [-]

the program can't run and optimise itself simultaneously

I think hot updating should be considered as well.
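For what it's worth, here is a minimal sketch of hot updating in Python: the source is edited on disk and reloaded without the process terminating. The module name `hot_mod` and the `VERSION` constant are hypothetical, purely for illustration.

```python
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # force reload to recompile from source

# Create a throwaway module file standing in for the editable component.
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "hot_mod.py")
with open(path, "w") as f:
    f.write("VERSION = 1\n")

sys.path.insert(0, tmpdir)
import hot_mod
print(hot_mod.VERSION)  # 1

# "Edit" the source on disk, then hot-reload it in the running process.
with open(path, "w") as f:
    f.write("VERSION = 2  # edited while the process keeps running\n")
importlib.reload(hot_mod)
print(hot_mod.VERSION)  # 2
```

The caveat in the comment above still applies: code that is executing mid-reload keeps running the old version until it next calls into the module.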

Comment author: username2 14 October 2015 04:29:44AM 4 points [-]

To be fair, it seems that recently almost anyone can speak before some kind of UN panel.

Comment author: Thomas 15 October 2015 09:22:34AM 1 point [-]

Which is good. The last thing I want is the UN to mess with AI. So, if it is just another UN panel, I don't have to worry.

New Year's Prediction Thread (2014)

9 Thomas 01 January 2014 09:38AM

It's time to look back and see what was predicted a year ago and how successful those predictions were.

But even more, it's time for the fresh predictions for the following year, 2014.

Open Thread, October 7 - October 12, 2013

5 Thomas 07 October 2013 02:52PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
