
Comment author: The_Jaded_One 19 March 2017 11:42:06AM 2 points [-]

We had this problem at work quite a few times. Bosses are reluctant to let me do something that will make things run more smoothly; they want new features instead.

Then when things break, they're like "What! Why is it broken again?!"

In response to LessWrong Discord
Comment author: username2 14 March 2017 05:13:43PM 5 points [-]

Can we please not push a closed-source, Electron-based app with no options for encryption on the community? We already have an IRC channel, which is on a non-Tor-friendly network, and a Slack, which is practically the same thing when it comes to the frontend stack, with a few differences in features. (I may be wrong about Slack.)

Why not go for something based on the Matrix protocol, which currently has support for bridges to both IRC and Slack? Why must we fragment the community yet again over a temporarily popular chat application that gained traction just because gamers jumped on it like they jumped on GamerGate?

https://matrix.org/blog/2017/03/11/how-do-i-bridge-thee-let-me-count-the-ways/
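
To give a sense of how simple the client side is, here is a minimal sketch of posting a message to a room over Matrix's client-server HTTP API (the homeserver URL, room ID, access token and message below are all placeholders, not anything set up for this community):

    import requests  # Matrix is just JSON over HTTP, so a plain HTTP client is enough

    homeserver = "https://matrix.example.org"   # placeholder homeserver
    room_id = "!someroom:example.org"           # placeholder room ID
    access_token = "YOUR_ACCESS_TOKEN"          # placeholder token
    txn_id = "1"                                # client-chosen transaction ID

    # PUT /_matrix/client/r0/rooms/{roomId}/send/m.room.message/{txnId}
    url = "%s/_matrix/client/r0/rooms/%s/send/m.room.message/%s" % (
        homeserver, room_id, txn_id)
    resp = requests.put(
        url,
        params={"access_token": access_token},
        json={"msgtype": "m.text", "body": "hello from an open protocol"},
    )
    resp.raise_for_status()
    print(resp.json())  # on success: {"event_id": "..."}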

It even has a meme app for those afraid of their computers, based on... you guessed it, Electron. Why of course we're going to write our desktop applications in JavaScript and CSS and use a whole copy of a browser as a runtime for them.

Comment author: The_Jaded_One 14 March 2017 09:57:20PM 0 points [-]

no options for encryption on the community

I've heard the CIA, the FBI and the Illuminati are all onto us. Strong encryption is not negotiable.

Why not go for something based on the Matrix protocol

Maybe not everyone is ready to take the red pill?

In response to The Semiotic Fallacy
Comment author: The_Jaded_One 22 February 2017 05:43:38PM 2 points [-]

Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.

But you could equally well write a post on the "anti-semiotic fallacy" where you only think about the immediate and obvious consequences of an action, and not about the signals it sends.

I think that rationalists are much more susceptible to the anti-semiotic fallacy in our personal lives, and also, to an extent, when thinking about global or local politics and economics.

For example, I suspect that I suffered a lot of bullying at school for exactly the reason given in this post: being keen to avoid conflict in early encounters at a school (among other factors).

Comment author: Jiro 17 February 2017 10:03:36PM 3 points [-]

I don't believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn't.

It is a fallacy to think of AI risk as like Balrogs because someone has written a plausible-sounding story comparing it to Balrogs. And that seems to be the main effect of the Balrog analogy.

Comment author: The_Jaded_One 18 February 2017 10:24:03AM 2 points [-]

I don't believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn't.

I disagree; I think there is value in analogies when used carefully.

It is a fallacy to think of AI risk as like Balrogs because someone has written a plausible-sounding story comparing it to Balrogs.

Yes, I also agree with this; you have to be careful of implicitly using fiction as evidence.

Comment author: Jiro 16 February 2017 04:07:23PM 2 points [-]

In other words, if you set up the allegory so as to force a particular conclusion, that proves that that's the proper conclusion in real life, because we all know that the allegory must be correct.

Comment author: The_Jaded_One 17 February 2017 07:14:01PM 3 points [-]

I think this is more useful as a piece that fleshes out the arguments: a philosophical dialogue.

Comment author: Dr_Manhattan 14 February 2017 01:40:31PM *  1 point [-]

or even used it to hire a wizard to work on an admittedly long-shot, Balrog control spell

I put a higher probability on a group of very dedicated wizards succeeding; it's worth re-doing the above decision analysis with those assumptions.

Comment author: The_Jaded_One 15 February 2017 08:58:58PM 1 point [-]

I put a higher probability on a group of very dedicated wizards succeeding; it's worth re-doing the above decision analysis with those assumptions.

Then there is still the problem of how much time we leave for the wizards, and which mithril-mining approaches we should pursue (risky vs. safe).

Comment author: jsalvatier 10 February 2017 10:35:45PM 0 points [-]

There are certainly people who meet it better than others.

Comment author: The_Jaded_One 15 February 2017 06:23:49PM 0 points [-]

Yes, definitely. The longer you are in such a community, the more you can do this.

Comment author: lifelonglearner 02 February 2017 03:08:35AM 1 point [-]

Cool, thanks for writing this up! I vaguely remember someone at CFAR bringing up something about argument-norms of this kind ("convince or be convinced"). Was that in reference to you?

Comment author: The_Jaded_One 02 February 2017 11:01:12PM 0 points [-]

convince or be convinced

Isn't this kind of like Aumann's agreement theorem?
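
(Roughly stated, as I understand it: if two Bayesian agents start from a common prior, and their posterior probabilities for some event A are common knowledge between them, then those posteriors must be equal, i.e. P_1(A) = P_2(A); honest Bayesians cannot "agree to disagree".)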

Are there any humans who meet that lofty standard?

Comment author: The_Jaded_One 02 February 2017 10:32:15PM 2 points [-]

It seems like common sense that a small group of people using violence against a very large, well-armed group is going to have a tough time.

Comment author: sarahconstantin 28 January 2017 07:00:07PM 0 points [-]

Yep! I want to distinguish between "deep learning by itself is probably not general intelligence" (which I believe) and "nobody is making progress towards general intelligence" (which I'm uncertain about and definitely don't think is safe to assume.)

Comment author: The_Jaded_One 29 January 2017 09:20:49PM 0 points [-]

It is definitely true that progress towards AGI is being made, if we count the indirect progress of more money being thrown at the problem. Importantly, perceptual challenges being solved means that there is now going to be a greater ROI for symbolic AI progress.

A world with lots of stuff that is just waiting for AGI-tech to be plugged into it is a world where more people will try hard to make that AGI-tech. Examples of 'stuff' would include robots, drones, smart cars, better compute hardware, corporate interest in the problem (and money), highly refined perceptual algorithms that are fast and easy to use, lots of datasets, things like DeepMind's Universe, etc.

A lot of the stuff that was created from 1960 to 1990 helped to create the conditions for machine learning: the internet, Moore's law, databases, operating systems, open-source software, a computer science education system, etc.
