
Comment author: MrMind 12 April 2017 07:29:59AM *  1 point [-]

"Arbiter of truth" is too big of a word.
People easily forget two important things:

  1. Facebook is social media, emphasis on media: it allows the dissemination of content; it does not produce it;

  2. Facebook is a private, for-profit enterprise: it exists to generate revenue, not to provide a service to citizens.

Force 1 obviously acts against any censoring or control beyond what is strictly illegal, but force 2 pushes for the creation of a customer-friendly environment. That is the only reason there is some form of control over the content published: doing otherwise would lose customers.

People are silly if they delegate the responsibility of verifying the truth of content to the transport layer, and the only reason a flag button is present is that doing otherwise would lose customers.
That said, to answer your question:

No, Facebook does not have any responsibility beyond what is strictly illegal. The idea that from power comes responsibility is a silly implication written in a comic book; it's not true in real life (it's almost the opposite). As a general rule of life, do not acquire your facts from comics.

Comment author: denimalpaca 12 April 2017 06:36:11PM 1 point [-]

"That from power comes responsibility is a silly implication written in a comic book, but it's not true in real life (it's almost the opposite). "

Evidence? I 100% disagree with your claim. Looking at governments or businesses, the people with more power tend to have a lot of responsibility, both to other people in the gov't/company and to the gov't/company itself. The only kind of power I can think of that doesn't come with some responsibility is gun ownership. Even Facebook's power of content distribution comes with a responsibility to monetize, which then has downstream responsibilities.

Comment author: DryHeap 12 April 2017 02:59:02PM *  1 point [-]

Right now, Facebook does very little to identify content, only provide it.

They certainly do identify content, and indeed alter the way that certain messages are promoted.

Example.

They faced criticism for allowing fake news to spread on the site

Who decides what is and is not fake news?

Comment author: denimalpaca 12 April 2017 06:31:22PM 1 point [-]

Not quite what I meant about identifying content but fair point.

As for fake news, the most reliable way to tell is whether the piece states information as verifiable fact, and whether that fact is actually verified. Basically, there should be at least some sort of verifiable info in the article, or else it's just narrative. One side's take may be "real" to half the world and the other side's take "real" to the other half, but there should be some piece of actual information that both sides look at and agree is real.

Comment author: Lumifer 11 April 2017 07:00:50PM *  6 points [-]

what do people generally think about Facebook being an arbiter of truth?

It's a horrible idea.

does Facebook have any responsibility to label/monitor content

No.

great power (showing you anything you want)

You're confusing FB and Google (and a library, etc.)

how would you design around the issue of spreading false information?

I wouldn't.

I recommend acquiring some familiarity with the concept of the freedom of speech.

Comment author: denimalpaca 12 April 2017 06:19:26PM 4 points [-]

I'm actually very familiar with freedom of speech and I'm getting more familiar with your dismissive and elitist tone.

Freedom of speech applies, in the US, to the relationship between the government and the people. It doesn't apply to the relationship between Facebook and users, as exemplified by their terms of use.

I'm not confusing Facebook and Google; Facebook also has a search feature, and quite a lot of content can be found within Facebook itself.

But otherwise thanks for your reply; its stunning lack of detail gave me no insight whatsoever.

Comment author: denimalpaca 11 April 2017 06:51:15PM 0 points [-]

Maybe this has been discussed ad absurdum, but what do people generally think about Facebook being an arbiter of truth?

Right now, Facebook does very little to identify content, only provide it. They faced criticism for allowing fake news to spread on the site, they don't push articles that have retractions, and they've only just added a "contested" flag that's less informative than Wikipedia's.

So the questions are: does Facebook have any responsibility to label/monitor content given that it can provide so much? If so, how? If not, why doesn't this great power (showing you anything you want) come with great responsibility? Finally, if you were to build a site from ground-up, how would you design around the issue of spreading false information?

Comment author: Viliam 11 April 2017 04:45:19PM 0 points [-]

That is pretty much it. Except that describing it as zombies makes it seem like the dangers are all fictional, and therefore the people who worry about them are silly.

But the real world contains real dangers, so I would expect that people who got hurt in the past will be more likely to adopt the "conservative" mindset, while people who lived relatively sheltered lives will be more likely to adopt the "liberal" mindset. (Reality check: most liberal people? Trust fund kids at expensive colleges. Most conservative people? Working class.)

Comment author: denimalpaca 11 April 2017 05:43:08PM 0 points [-]

Reality check: most liberal people? trust fund kids at expensive colleges. most conservative people? working class.

Really disagree there. Plenty of trust fund kids are conservative, and plenty of scholarship students are liberal, even at the same university. If you want to generalize, I think the more apt generalization is city vs. rural areas. There are tons of "working class" liberals; they work in service industries instead of coal mines. The big difference is proximity to actual diversity: when you work with, live with, and see diverse people every day, you get acclimated to it and accept it as the norm; when you live in a rural area with few people, nearly all of whom are white, you get acclimated to that. When the societal norm of rural areas is a more conservative, Christian mindset, and that of the cities is a more liberal one, it follows naturally that people in each area would generally develop the mindset that dominates there.

I'm not sure your claim that people who got hurt in the past are more likely to be conservative in the future is true, either. Your conclusion doesn't directly follow from the premise, and I can think of numerous personal and historical examples that run counter to it. Same with "liberals are sheltered": you offer no evidence linking your premise to your conclusion, and there are tons of counterexamples.

Comment author: lmn 10 April 2017 04:52:14PM *  4 points [-]

Liberals see the potatoes, recognize that some people still die even when they eat potatoes like their ancestor, and decide they need more crops.

Like, say, kudzu to enhance the soil and help prevent erosion.

However, unlike the people who actually introduced kudzu, liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia.

Comment author: denimalpaca 10 April 2017 09:14:00PM 1 point [-]

"liberals aren't even willing to admit they made a mistake after the fact and will insist that the only reason people object to having their towns and houses completely overgrown with kudzu is irrational kudzuphobia."

I think this is a drastic overgeneralization taken in bad faith.

Comment author: Lumifer 10 April 2017 02:35:23PM 0 points [-]

But at the core

Would it boil down to risk preferences / risk aversion then?

Comment author: denimalpaca 10 April 2017 08:58:37PM 0 points [-]

Yes, I think that's exactly right. Scott Alexander's framing of it in terms of living in a zombie world makes this point really clear: do we risk becoming zombies to save someone, or not?

Comment author: denimalpaca 08 April 2017 06:17:07PM 3 points [-]

Seems to me both liberals and conservatives are social farmers; it's a matter of what crop is grown. Conservatives want their one crop, say potatoes, not because it's the most nutritious, but because it's been around forever and it's allowed their ancestors to survive. (If we assume what you do about Christianity, then we also have that God Himself Commanded They Grow Potatoes.) Liberals see the potatoes, recognize that some people still die even when they eat potatoes like their ancestors did, and decide they need more crops. Maybe they grow fewer potatoes, and maybe they grow yellow potatoes instead of brown or some such triviality, but the idea, as you state, is to not privilege the people who are inherently better at digesting potatoes, by growing other things as well. This is naturally heresy to conservative potato growers, because you shouldn't fix something that isn't broken (and if God didn't say it's broken, then it's not; exclude the idea of God and you just get potato-digesting-enzyme supremacy).

Comment author: denimalpaca 03 April 2017 09:19:33PM 7 points [-]

I thought OpenAI was more about open sourcing deep learning algorithms and ensuring that a couple of rich companies/individuals weren't the only ones with access to the most current techniques. I could be wrong, but from what I understand OpenAI was never about AI safety issues as much as balancing power. Like, instead of building Jurassic Park safely, it let anyone grow a dinosaur in their own home.

Comment author: dogiv 03 April 2017 02:06:24PM 0 points [-]

I guess where we disagree is in our view of how a simulation would be imperfect. You're envisioning something much closer to a perfect simulation, where slightly incorrect boundary conditions would cause errors to propagate into the region that is perfectly simulated. I consider it more likely that if a simulation has any interference at all (such as rewinding to fix noticeable problems) it will be filled with approximations everywhere. In that case the boundary condition errors aren't so relevant. Whether we see an error would depend mainly on whether there are any (which, like I said, is equivalent to asking whether we are "in" a simulation) and whether we have any mechanism by which to detect them.
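(As an illustrative aside, here is a minimal sketch of the boundary-condition intuition being discussed, using a toy 1-D diffusion model rather than anything from the actual exchange; all names and numbers are made up. The point it shows is that a tiny error in a boundary value does not stay at the boundary but spreads into the interior of an otherwise exact simulation.)

    # Purely illustrative sketch: error propagation from an imperfect boundary
    # in a simple explicit finite-difference diffusion simulation.
    import numpy as np

    def simulate(boundary_left, steps=500, n=101, alpha=0.25):
        """Run a 1-D diffusion model with fixed values at both ends."""
        u = np.zeros(n)
        for _ in range(steps):
            u[0], u[-1] = boundary_left, 0.0              # boundary conditions
            u[1:-1] += alpha * (u[2:] - 2 * u[1:-1] + u[:-2])
        return u

    exact = simulate(boundary_left=1.0)                   # "true" boundary
    perturbed = simulate(boundary_left=1.0 + 1e-3)        # slightly wrong boundary

    # The discrepancy is no longer confined to the edge; it has diffused
    # into the interior of the domain.
    print(np.abs(exact - perturbed)[:10])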

Comment author: denimalpaca 03 April 2017 04:23:07PM 0 points [-]

Everyone has different ideas of what a "perfectly" or "near perfectly" simulated universe would look like; I was trying to go off of Douglas's idea of it, where I think the boundary errors would have an effect.

I still don't see how rewinding would be interference. I imagine interference would be some part of the "above ours" universe getting inside this one, say a particle with quantum entanglement spanning across the universes (although it would really also just be in the "above ours" universe, since that universe would have to be a superset of ours; it's just also a particle that we can observe).
