I think we are entering an interesting political equilibrium in which a significant number of voters either (a) are not truth-oriented and care mostly about the emotional vibe coming from candidates, or (b) believe that candidates would be foolish to tell the truth when it would disadvantage them with type (a) voters. The more voters fall into types (a) and (b), the less worried candidates will be about telling the truth, and the eventual equilibrium is one where almost all voters are type (a) or (b).
I suspect we're returning to such a dynamic? That is, this seems like the emotional variant of the corrupt patronage systems of earlier days.
There are two lists of moderators, one for Discussion and one general LW list. The only difference is that Alexei doesn't appear as a "Discussion" moderator. It's hard to know who on the list is actually active in moderating - site policy seems to be very hands-off except in truly exceptional circumstances, and most of the people listed are no longer active here.
edited to add: Of those listed, only EY and Elo display the tag "Editor" when user profile is displayed (under the total/monthly karma listing)
edited to add: Of those listed, only EY and Elo display the tag "Editor" when user profile is displayed (under the total/monthly karma listing)
Right, but they're not the only ones. (Check out my profile.)
You might think that the editors list would contain Elo or myself, but it doesn't.
Who are the moderators here, again? I don't see where to find that information. It's not on the sidebar or About page, and search doesn't yield anything for 'moderator'.
There's an out-of-date page that's not linked anywhere. It's unclear to me why it isn't automatically generated.
But roughly only half of accidents can be blamed on each car's driver, so even the safest driver would get only about a 50 percent reduction in accident rate. Other, reckless drivers could still rear-end him or even T-bone him.
But roughly only half of accidents can be blamed on each car's driver, so even the safest driver would get only about a 50 percent reduction in accident rate.
Sure, if you include when and where people drive as part of what you blame on them. (Safety-conscious people might move to particular places, or spend evenings in, and so on; so even if they're only just as good at avoiding accidents conditional on conditions, the total distribution is weighted by conditions, which they have some control over.)
Why is car safety not advertised as a car's main selling point?
Tesla suffered its first fatal accident in self-driving mode after 130 million miles of driving, while the average mileage between fatal accidents in the US is 90 million miles. This is presented as evidence of Tesla's safety.
However, fatal-accident rates differ by as much as a factor of 1000 between different classes of cars.
The Kia Rio has about one fatal accident per 10 million miles, while the Subaru Legacy has less than one per billion km (in fact zero).
The latest data on the risks of different car models is here: http://www.iihs.org/iihs/topics/driver-death-rates
I did some calculations based on the presented data and the assumption that a typical car is driven 20,000 miles a year.
The Dodge Caravan's risk is about one fatal accident per 10 billion miles. (I saw it in a similar sheet before.)
These cars are 3-5 times more expensive than the Kia, and thanks to their greater mass, strength, and build quality they provide much greater safety.
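To make these figures comparable, here is a minimal arithmetic sketch (using the numbers as quoted in this comment, which I have not verified) that converts each claim into fatal accidents per billion miles and, under the 20,000-miles-a-year assumption, into driver-years between fatal accidents:

```python
# Hedged sketch: the miles-per-fatal-accident inputs below are the
# commenter's claims, not verified data.

MILES_PER_YEAR = 20_000  # the "typical car" assumption above

def per_billion_miles(miles_per_fatal_accident):
    """Fatal accidents per billion miles driven."""
    return 1e9 / miles_per_fatal_accident

rates = {
    "Tesla (self-driving mode)": per_billion_miles(130e6),
    "US average":                per_billion_miles(90e6),
    "Kia Rio (claimed)":         per_billion_miles(10e6),
    "Dodge Caravan (claimed)":   per_billion_miles(10e9),
}

for model, rate in rates.items():
    years_between = (1e9 / rate) / MILES_PER_YEAR
    print(f"{model}: {rate:.2f} fatal accidents per billion miles "
          f"(~{years_between:,.0f} driver-years between fatal accidents)")
```

On these numbers the claimed spread between the Kia Rio and the Dodge Caravan really is a factor of 1000, which dwarfs the Tesla-vs-US-average difference.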
Why is car safety not advertised as a car's main selling point?
I think there's a major selection effect when safety comes into play; that is, there is a sizable fraction of drivers who do prioritize safety, they buy the cars that are reputed to be safest, and then those cars appear even safer in the statistics. (For example, there are some engineering differences between the Subaru and other cars, but the differences between Subaru drivers and the drivers of other cars are probably larger.)
The problem I see in using the past as evidence is that the further we go from our era, the more what we know is mostly made up.
True, we have documents and evidence and so on, but they paint only a relatively sketchy picture of what the society was like; we mostly fill in the details in a plausible-seeming way. Plus we don't get any statistical data on things like happiness, income, etc.
The risk of mistaking noise for signal is so high that it's probably worth throwing it all away, especially when the starting point of the conversation is "people were happier / sadder in the Xth century, so we should / shouldn't do as they did".
How can you possibly know?
The problem I see in using the past as evidence is that the further we go from our era, the more what we know is mostly made up.
Sure, quality of data degrades with distance, both in space and time. But I don't think it degrades to the point where it actually is worth throwing it all away.
How can you possibly know?
Is this a serious question, or a statement of anti-epistemology? (That is, all knowledge is uncertain, and so the right question is "how did you get to the level of uncertainty you have" rather than "how do you justify pretending that there is no uncertainty?")
I've been reading a slice of Neoreactionary - Anti-Neoreactionary discussions on Slate Star Codex.
A problem I've seen is that people are too hung up on a positive or negative affective association with the passage of time. The controversy seems to revolve mostly around "the past was good / the past was bad".
Who cares how the past was?
Just tell me what your values are and what political / social system you think serves them best!
It doesn't matter if it comes from the past, the Bible, Lord of the Rings or utopian literature. Just discuss the model! It's mostly fiction anyway.
(This mini-rant is directed at nobody in particular. I'll likely never have the occasion to debate a Neoreactionary.)
I think a lot of political questions hinge on what's possible, and also what the consequences of policies are. If someone says "I think we should arrange marriages instead of letting individuals pick," then the immediate questions to settle are 1) will people allow such a policy to be put in place / comply with it, and 2) what will the consequences be?
(There's also the "does this align with principles" deontological question, but this is relatively easy to answer without looking at the past or present so I'll ignore it.)
And the past provides our primary data source to answer those sorts of questions. Yes, we can imagine multiple different causal effects of attempting to arrange marriages, but how those interplay with each other and shake out is hard to know. But other people tried that for us, and so we can investigate their experiments and come to a judgment.
So, "changes brain waves" and similar things are mostly worthless statements. We don't know enough about what our sensor readings mean to assign a direction to any particular change, and if you don't know if a change is good or bad and you don't expect things to stay the same... what can you say?
As for the point that you can harm function: well, obviously! For tDCS in particular, the idea is that you can either lower or raise excitability in a particular region, and you would expect the effect of lowering to be roughly the opposite of the effect of raising. So it's no surprise that, say, reversing the polarity could be very relevant. (The article makes the more subtle point that instead of just boosting overall function, one is likely making tradeoffs; but those sorts of tradeoffs represent increased mental flexibility that's overall good, because you can shape your mind to complement whatever task you're currently facing.)
Say you have a school with about 100 teachers, 1000 students, 25 rooms ... each with their own demands and constraints.
Now, you want an optimal schedule - who doesn't? For that I have software that builds it automatically. Not semi-automatically like everyone else's.
I want to test it on several real-life examples from North American and Australian primary and secondary schools. For free, of course.
I am looking for a principal or an assistant principal to try this together over Skype.
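For readers unfamiliar with the problem: automatic timetabling is commonly modeled as a constraint-satisfaction problem. The commenter doesn't say how his software works, so the following is only a toy backtracking sketch with invented lessons, teachers, groups, rooms, and time slots, not the actual product:

```python
# Toy CSP sketch of school timetabling (illustrative data, not real software):
# assign each lesson a (slot, room) pair such that no teacher, class group,
# or room is double-booked in any slot.
from itertools import product

lessons = ["math-7A", "math-7B", "english-7A", "english-7B"]
teacher = {"math-7A": "Alice", "math-7B": "Alice",
           "english-7A": "Bob", "english-7B": "Bob"}
group = {"math-7A": "7A", "math-7B": "7B",
         "english-7A": "7A", "english-7B": "7B"}
slots = ["Mon-1", "Mon-2"]
rooms = ["R1", "R2"]

def consistent(assign, lesson, slot, room):
    """True if adding (slot, room) for lesson double-books nothing."""
    for other, (s, r) in assign.items():
        if s != slot:
            continue
        if r == room:                          # room clash
            return False
        if teacher[other] == teacher[lesson]:  # teacher clash
            return False
        if group[other] == group[lesson]:      # class-group clash
            return False
    return True

def solve(assign=None):
    """Depth-first backtracking over lessons in fixed order."""
    assign = {} if assign is None else assign
    if len(assign) == len(lessons):
        return assign
    lesson = lessons[len(assign)]
    for slot, room in product(slots, rooms):
        if consistent(assign, lesson, slot, room):
            assign[lesson] = (slot, room)
            result = solve(assign)
            if result is not None:
                return result
            del assign[lesson]  # backtrack
    return None

schedule = solve()
print(schedule)
```

Real timetablers layer many soft constraints (teacher preferences, gaps, room capacity) on top of this hard-constraint core, typically with heuristics or local search rather than naive backtracking.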
For that I have software that builds it automatically. Not semi-automatically like everyone else's.
What optimization method are you using under the hood, if you don't mind me asking?
Any thoughts? "Musk-backed startup that wants to give away its artificial intelligence research, also wants to make sure AI isn’t used for nefarious purposes. That’s why it wants to create a new kind of police force: call them the AI cops."
http://www.wired.com/2016/08/openai-calling-techie-cops-battle-code-gone-rogue/
Previous LW discussion of OpenAI here (which, I think, doesn't include any mention of the AI cops idea).