
Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Dallas 27 March 2016 01:27:28PM 31 points [-]

In the interests of identity obfuscation, I have rolled a random number between 1 and 100 and waited for some time afterwards.

On a 1-49: I have taken the survey, and this post was made after a uniformly random period of up to 24 hours.

On a 50-98: I will take the survey after a uniformly random period of up to 72 hours.

On a 99-100: I have not actually taken the survey. Sorry about that, but this really has to be a possible outcome.
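For concreteness, here is a minimal Python sketch of the randomization scheme described above. The thresholds and delay windows come from the comment; the function name and return convention are my own illustrative choices.

```python
import random

def obfuscated_survey_disclosure():
    """Sketch of the randomized disclosure scheme described in the comment above.

    Returns (took_or_will_take_survey, delay_hours): whether the survey is (or
    will be) taken, and how long to wait, so the comment's posting time reveals
    little about the actual completion time.
    """
    roll = random.randint(1, 100)
    if roll <= 49:
        # 1-49: survey already taken; the announcement is delayed by up to 24 hours.
        return True, random.uniform(0, 24)
    elif roll <= 98:
        # 50-98: survey will be taken after a uniformly random delay of up to 72 hours.
        return True, random.uniform(0, 72)
    else:
        # 99-100: no survey taken; this branch has to be possible for the scheme to work.
        return False, 0.0
```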

Comment author: Dallas 05 March 2015 10:27:08PM *  19 points [-]

Protips:

  • Given both demographics and recent discourse, you are going to want vegetarian and vegan options for food.
  • HPMOR has a large hatedom, for various reasons. Key vectors for trolls are photos, videos, and flyers. Be more conscious than usual about personal boundaries and privacy.
  • Public events are going to bring together people with varying viewpoints; be emotionally prepared for having your bubble popped by culture shock.
  • Betting pools on the number of clueless attendees who showed up for the Potter and forgot about the Rationality are generally frowned upon by the general public. (That means you, Hanson!)
  • Don't be gross, in either appearance or manners.
  • Don't hand out pamphlets to the general public; it looks, you know...
In response to 2014 Survey Results
Comment author: Gunnar_Zarncke 04 January 2015 10:43:12AM *  2 points [-]

I think one logical correlation that follows from the Simulation Argument is underappreciated in the reported correlations.

I spotted this in the uncorrelated data already:

  • P Supernatural: 6.68 + 20.271 (0, 0, 1) [1386]

  • P God: 8.26 + 21.088 (0, 0.01, 3) [1376]

  • P Simulation: 24.31 + 28.2 (1, 10, 50) [1320]

Shouldn't evidence for simulations - and apparently the median belief in simulation is 10% - also be evidence for supernatural influences, for which the median belief is 0% (not even 0.01)? After all, a simulation implies a simulator, and thus a more complex 'outer world' doing the simulating, which disables Occam's-razor-style arguments against gods.
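To make the step explicit: by the law of total probability, P(Supernatural) is bounded below by P(Supernatural | Simulation) * P(Simulation), so a 10% median for Simulation is hard to square with a 0% median for Supernatural unless simulators are judged almost certainly not supernatural. A toy calculation (the conditional probability below is a made-up illustrative number, not a survey figure):

```python
# Lower bound from the law of total probability:
#   P(Supernatural) = P(Supernatural | Simulation) * P(Simulation)
#                     + P(Supernatural | ~Simulation) * P(~Simulation)
#                  >= P(Supernatural | Simulation) * P(Simulation)

p_simulation = 0.10          # the survey's median P(Simulation)
p_super_given_sim = 0.5      # hypothetical: chance a simulator counts as "supernatural"

lower_bound = p_super_given_sim * p_simulation
print(f"P(Supernatural) >= {lower_bound:.0%}")   # 5% with these inputs, well above a 0% median
```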

Admittedly there is a small correlation:

  • P God/P Simulation .110 (1296)

Interestingly this is on the same order as

  • P Aliens/P Simulation .098 (1308)

but there is no correlation listed for P Aliens/P God. Thus my initial hypothesis, that the 0.11 correlation is driven by the idea of aliens running the simulation in the role of gods, is invalid.
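For reference, figures like "P God/P Simulation .110 (1296)" are presumably correlation coefficients computed over the respondents who answered both questions, which is why the sample sizes vary from pair to pair. A minimal sketch of that computation with hypothetical data and column names (assuming pandas; Pearson correlation is the pandas default):

```python
import pandas as pd

# Hypothetical answers (percent probabilities); real column names and data differ.
df = pd.DataFrame({
    "P_God":        [5, 0, 20, 1, None, 10],
    "P_Simulation": [10, 1, 50, 5, 25, None],
})

# Keep only respondents who answered both questions, which is why the
# reported sample sizes differ between pairs (e.g. 1296 vs. 1308).
both = df[["P_God", "P_Simulation"]].dropna()
r = both["P_God"].corr(both["P_Simulation"])
print(f"r = {r:.3f} (n = {len(both)})")
```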

Note that I mentioned simulation as a weak argument for theism earlier.

Comment author: Dallas 04 January 2015 03:46:12PM 0 points [-]

I actually calibrated my P(God) and P(Supernatural) based on P(Simulation), figuring that an exact figure for the (~Simulation & Supernatural) cases would basically be noise.

I have forgotten what I actually defined "God" as for my probability estimate, as well as the estimate itself.

Comment author: XiXiDu 26 November 2014 12:23:27PM *  18 points [-]

Since you have not yet replied to my other comment, here is what I have done so far:

(1) I removed many more posts and edited others in such a way that no mention of you, MIRI or LW can be found anymore (except an occasional link to a LW post).[1]

(2) I slightly changed your given disclaimer and added it to my about page:

Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret those posts, and leave this note here as an archive to that regret.

The reason for this alteration is that my blog has been around since 2001, and for most of the time it did not contain any mention of you, MIRI, or LW. For a few years it even contained positive referrals to you and MIRI. This can all be checked by looking at e.g. archive.org for domains such as xixidu.com. I estimate that much less than 1% of all content over those years has been related to you or MIRI, and even less was negative.

But my previous comment, in which I asked you to consider that your suggested header would look really weird and confusing if added to completely unrelated posts, still stands. If that's what you desire, let me know. But I hope you are satisfied with the actions I took so far.

[1] If I missed something, let me know.

Comment author: Dallas 26 November 2014 11:24:54PM -2 points [-]

Your updates to your blog, as of this post, seem to replace "Less Wrong", "MIRI", or "Eliezer Yudkowsky" with the generic term "AI risk advocates".

This just sounds more insidiously disingenuous.

Comment author: Dallas 23 November 2014 08:25:31PM 7 points [-]

I've spent the weekend dealing with the stress you are contributing to the broader perception of transhumanism, and that is on top of preexisting mental problems. (Whether MIRI/LW is actually representative of this is entirely orthogonal to the point; public perception has shifted, and is still shifting, towards viewing the broader context of futurism as run by neoreactionaries and beige-os with parareligious delusions.)

Of course, that's no reason to stop anything. People are going to be stressed by things independent of their content.

But you are expecting an entity which you have devoted most of your blog to criticizing to care enough about your psychological state to take time out to write header statements for each of your posts?

If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme, and do something less directly escalatory in the reputation war and more productive, like hunting down Nazis and creating alternatives to the current decision-theoretic paradigm. (I don't think anybody's going to get that upset over abstract discussions of Newcomb's Problem. At least, I hope.)

Comment author: Error 22 November 2014 10:06:08PM 2 points [-]

I feel the need to switch from Nerd Mode to Dork Mode and ask:

Which would win in a fight, a basilisk or a paperclip maximizer?

Comment author: Dallas 22 November 2014 11:21:38PM 0 points [-]

Paperclip maximizer, obviously. Basilisks typically are static entities, and I'm not sure how you would go about making a credible anti-paperclip 'infohazard'.

Comment author: Dallas 25 October 2014 03:02:42PM 37 points [-]

I completed the survey. (Did not do the digit ratio questions due to lack of available precise tools.)

Comment author: Dallas 23 September 2014 01:57:36AM 1 point [-]

Can you be slightly more specific on the context? Like, at least the vague fields of study it might apply to? This would allow us to make an informed decision.

Comment author: Alejandro1 24 July 2014 09:40:55AM 22 points [-]

"However, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma", however, yields an even better joke (due to an extra meta level) when preceded by its quotation and a comma.

In response to comment by Alejandro1 on Jokes Thread
Comment author: Dallas 25 July 2014 04:17:45PM 31 points [-]

"Is a even better joke than the previous joke when preceded by its quotation" is actually much funnier when followed by something completely different.

[Link] Zack Weinersmith's One-Liner Generator

-1 Dallas 25 March 2014 03:30PM

Zack Weinersmith of SMBC fame has suggested an interesting artificial intelligence project: generate jokes by observing utility as a function of rate of change in initial understanding over time.

(source)
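Read literally, the proposal scores a joke by how sharply the listener's understanding changes as the joke unfolds. A toy sketch of that objective (my own illustrative framing, not Weinersmith's specification):

```python
from typing import Sequence

def joke_utility(understanding: Sequence[float]) -> float:
    """Toy objective: score a joke by the steepest single jump in the
    listener's understanding over time (the sudden getting-it moment).

    `understanding` is a sequence of values in [0, 1] sampled as the joke unfolds.
    """
    deltas = [b - a for a, b in zip(understanding, understanding[1:])]
    return max(deltas, default=0.0)

# A punchline that snaps understanding from 0.2 to 0.9 scores higher
# than one where understanding only creeps up gradually.
print(joke_utility([0.0, 0.1, 0.2, 0.9, 1.0]))   # -> 0.7
```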
