Comment author: gjm 22 February 2016 08:35:38PM 0 points [-]

The organization that put this out has a pretty clear sociopolitical agenda.

(The second and third links there are from sites with a definite leftish tilt. It doesn't look to me as if they're telling any lies about the Austin Institute, but they're unlikely to be sympathetically disposed.)

Comment author: ete 22 February 2016 09:07:42PM 0 points [-]

Of course, they're very clearly trying to push a right wing traditional morals agenda, with a bit of dressing up to make it appear balanced to the unobservant. Their other major video is even more overtly propaganda.

I just find it fascinating to watch this kind of attempt at manipulating people's views, especially when a bunch of smart people have clearly tried to work out how to get their message across as effectively as possible. Being aware of those tricks seems likely to offer some protection against their being used to push me in ways to which I may be more susceptible, and knowing the details of what has been used to shape certain opinions means I am better prepared if I get into a debate with people who have been persuaded by them.

In response to The Talos Principle
Comment author: Viliam 22 February 2016 09:26:28AM 8 points [-]

Sharing our own culture, history and language don't make him Human as well?

What is your true question here? What are you trying to achieve by getting a correct answer to this question? The answer depends on that, because the "question you should have asked" itself depends on that.

If by "Human" you mean "member of the Homo sapiens species", of course sharing culture and history and language doesn't achieve any of that. This question and answer could be useful, for example, in the context of medicine -- just because the robot with liquid metal in his veins speaks our language, it wouldn't be a good idea to try transplanting his organs into humans, or vice versa.

If by "Human" you mean "someone we could interact with just like we do with ordinary humans", it seems you have already answered positively. (Except for the fact that the robot is programmed to obey Europa. That would perhaps make him more analogous to a slave, who is always at risk of having his wishes trumped by his master's, or perhaps to a drug addict, whose promises are quite unreliable because using the drug always takes priority. It depends on how often, and in what manner, Europa influences the robot's behavior in everyday life.)

Downloading and reading this free book could help you answer many similar questions.

In response to comment by Viliam on The Talos Principle
Comment author: ete 22 February 2016 06:54:36PM 1 point [-]

Seconding this recommendation. The questions you are starting to ask are ones which have been considered here, and we mostly feel we have sound answers to them (or dissolutions of the questions). Chapters likely to be relevant to your current thoughts:

  • N — A Human's Guide to Words
  • O — Lawful Truth
  • P — Reductionism 101
  • R — Physicalism 201
Comment author: ete 22 February 2016 06:21:08PM -1 points [-]

Anti-polyamory propaganda which clearly had some thought put into constructing a persuasive argument, while performing lots of subtle or not-so-subtle manipulations. It's always interesting to observe which emotional/psychological threads this kind of thing tries to pull on.

Comment author: TheAltar 22 February 2016 05:42:05PM 2 points [-]

I didn't find any results for that username, but I did get some for aaronsw.

Comment author: ete 22 February 2016 05:56:19PM 1 point [-]

Fixed typo, thanks!

Comment author: ete 22 February 2016 05:19:43PM *  1 point [-]

You can put aaronsw into the LessWrong karma tool to see Aaron Swartz's post history, and read his most highly rated comments. I bet some of them would be good to spread more widely.

Comment author: ete 22 February 2016 02:49:50PM 2 points [-]

Aaron Swartz's highest ranked LW posts can be found with this. I bet a lot of people would love to be able to find his highest rated posts, and share some more widely.

In response to LessWrong 2.0
Comment author: ete 08 December 2015 11:08:13PM 4 points [-]

So compared to when most things were either posted or crossposted to LW, it seems like we currently spend too little attention on aggregating and unifying content spread across many different places. If most of the action is happening offsite, and all that needs to be done is link to it, Reddit seems like the clear low-cost winner. Or perhaps it makes sense to try to do something like an online magazine, with an actual editor. (See Viliam's discussion of the censor role in an online community.) I note that FLI is hiring a news website editor (but they're likely more x-risk focused than I'm imagining).

It shouldn't be extraordinarily hard to re-enable the link-submission ability that this site's software came with (i.e. make both https://www.reddit.com/submit?selftext=false and https://www.reddit.com/submit?selftext=true work), and to run a bot which scrapes a list of blogs/Tumblrs/etc. and auto-submits those links. The list could be drawn from the XML export of a wiki page, protected so only wiki admins can edit it, with a request thread requiring x currently recognized people to vouch for you before you're added, or with someone (or a small team) designated to handle those requests.
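The scraping half of that bot is the mechanically simple part. A minimal sketch in Python, using only the standard library; the hard-coded feed list is a placeholder for the wiki-page export, and the actual submission step (which would go through LW's Reddit-derived codebase) is left out entirely:

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

# Hypothetical source list -- in the proposal this would be drawn from
# the XML export of a protected wiki page, not hard-coded.
FEED_URLS = [
    "http://example-rationality-blog.com/rss",
]

def extract_links(rss_xml):
    """Pull (title, link) pairs out of an RSS 2.0 feed document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title", default=""), item.findtext("link", default=""))
        for item in root.iter("item")
        if item.findtext("link")
    ]

def scrape_all(feed_urls=FEED_URLS):
    """Fetch each feed and collect every linked post, ready for submission."""
    links = []
    for url in feed_urls:
        with urlopen(url) as resp:
            links.extend(extract_links(resp.read()))
    return links
```

In practice the bot would also need to remember which links it has already submitted (a small local database of seen URLs would do), but that bookkeeping doesn't change the shape of the loop above.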

Then put top-rated posts on the front page, with reasonable turnover and the ability to see past ones, and LW once again becomes the best place to rapidly check for new content across the wider rationality community.

Comment author: ete 12 September 2015 12:58:55PM 4 points [-]

Does anyone know where I could find a steelmanned version of the pro-death arguments which people often bring up in discussions (around stagnation, inequality, etc.), written by someone who has thought about a post-singularity world?

Comment author: Gondolinian 09 June 2015 04:41:17PM *  0 points [-]

Perhaps official downvote policies messaged to a user the first time they pass that would help too.

Anything with messages could be implemented by a bot account, right? That could be made without having to change the Less Wrong code itself.

Maybe we could send a message with downvoting guidelines to users every time they downvote something? This would gently discourage heavy and/or poorly reasoned downvoting, likely without doing too much damage to the kind of downvoting we want. One issue with this is that it would likely be very difficult, or practically impossible, for a bot account to know when someone downvotes something without changing the LW code. (Though it probably wouldn't require a very big change, and things could be limited to just the bot account(s).)


Comment author: ete 09 June 2015 09:14:12PM 0 points [-]

Every time someone downvotes would probably be too much, but maybe the first time, or if we restrict downvotes only for users with some amount of karma then when they hit that level of karma?

Comment author: Gondolinian 08 June 2015 07:46:55PM 4 points [-]

Is anyone in favor of creating a new upvote-only section of LW?


Comment author: ete 09 June 2015 04:04:07PM 1 point [-]

Another approach would be not leaving downvoting open to all users. On the Stack Exchange network, for example, you need a certain amount of reputation to downvote someone. I'd bet that a very large majority of the discouraging/unnecessary/harmful downvotes come from users who don't have above, say, 5-15 karma in the last month. Perhaps an official downvote policy, messaged to users the first time they pass that threshold, would help too.

This way involved users can still downvote bad posts, and the bulk of the problem is solved.

But it requires technical work, which may be an issue.
