Why doesn't the U.S. government hire more tax auditors? If each additional auditor can either uncover tax evasion or deter it (via the increased threat of audit), the position would pay for itself: it would create jobs, increase revenue, and punish those who cheat. The estimated cost of tax evasion to the federal government is $450B per year.
Incompetent-government tropes include agencies hiring too many people and becoming inappropriate profit centers. It would seem that the IRS should have, at the very least, been accidentally competent in this regard.
Is there an effective way for a layman to get serious feedback on scientific theories?
I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).
https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists describes a service that provides serious feedback for pay.
How do you deal with the embarrassment of learning, as an adult, things that most people learn in childhood? I'm talking about things you can't learn alone in private, such as swimming or riding a bicycle.
So have you actually learned anything
Yes, though mostly indirectly. I've learned mostly from reading about neoreactionaries elsewhere. SSC, Moldbug, etc. I'm learning a lot. Very interesting. This discussion was the catalyst for my reading. So, thanks!
from these discussions
Yes, I've learned some directly from this discussion.
Mostly I've learned that people will get internet-hostile about certain topics. I was already aware of this, but my interaction in this discussion has re-cemented the fact in my mind. I've received a recent -37 karma lashing (t...
Is there a specific bias for thinking that everyone possesses the same knowledge as you? For example, after learning more about a certain subject, I have a tendency to think, "Oh, but everyone already knows this, don't they?" even though they probably don't, and I wouldn't have assumed that before learning about it myself.
I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, yet incapable of working out that when a human asks it to pursue a goal, they do not want the goal pursued in a way that leads to the destruction of the world.
Worse, ...
As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements, usually by interpreting what is intended merely as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.
low IQ
How does low IQ directly cause crime?
properly police black neighborhoods
What does this entail?
A silly question. What kind of algebra deals with functions where the input and output are distributions? (Of a discrete variable.)
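One standard formalism that might fit (offered as a pointer, not a definitive answer): a map from distributions to distributions over a finite set is a Markov kernel, which can be written as a column-stochastic matrix, so the "algebra" becomes linear algebra restricted to the probability simplex. A minimal sketch with made-up numbers:

```python
import numpy as np

# Each column of K is itself a distribution, so applying K to a probability
# vector yields another probability vector: a linear map that preserves
# total probability. The values here are illustrative.
K = np.array([
    [0.9, 0.2],  # P(next=0 | current=0), P(next=0 | current=1)
    [0.1, 0.8],  # P(next=1 | current=0), P(next=1 | current=1)
])

p = np.array([0.5, 0.5])  # input distribution
q = K @ p                 # output distribution: [0.55, 0.45]
assert abs(q.sum() - 1.0) < 1e-12  # still sums to 1
```

Composition of two such maps is matrix multiplication, which is what makes this a natural algebraic setting for the question.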
I'm looking for an SSC post.
Scott talks about how a friend says he always seems to know what's what, and Scott says, "Not really; I'm the first to admit my error bars are wide and that my theories are speculative, often no better than hand-waving."
They go back and forth, with Scott giving precise reasons why he's not always right, and then he says "...I'm doing it right now, aren't I?"
Something like that. Can anybody point me to it?
Not new, and possibly not interesting to anyone besides me: a 2013 astrobiology paper that explores an odd corner of the Fermi Paradox. It entertains the bizarre perspective that Earth life was seeded by extraterrestrial life (directed panspermia) as a form of information backup. In this scenario, our biosphere's junk DNA stores information valuable to the extraterrestrial system.
Our biosphere's junk DNA
Junk DNA generally doesn't survive long on evolutionary timescales, because nothing prevents mutations from accumulating in it. It seems like a poor information-storage system.
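A toy calculation makes the decay concrete. The mutation rate below is an assumed placeholder, not a biological estimate; the point is only the qualitative shape of the curve: with no selection, the expected fraction of bases still matching the original decays toward the 25% chance level.

```python
MU = 1e-3  # assumed per-base, per-generation substitution probability (illustrative)

def expected_identity(generations, mu=MU):
    # Each generation a base changes to one of the 3 *other* bases with
    # probability 0.75 * mu; the 0.25 floor is chance agreement.
    return 0.25 + 0.75 * (1 - 0.75 * mu) ** generations

print(expected_identity(0))       # 1.0: message starts intact
print(expected_identity(10_000))  # ~0.25: essentially random sequence
```

Under these assumptions the stored message is unreadable after a few thousand generations, which is why unselected sequence is a poor archive.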
The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".
It's not? How do you know?
Police bias seems like it could be directly related to crime rates (since it's the cops who do the arresting).
How did this "racial bias" manifest itself? In their acting as if they believed blacks were more likely to be criminals than whites.
Judgements based only on race.
Or even willingness to shoot a black who was running at him and grabbing for his gun?
I'm not arguing every white cop who shoots a bl...
In particular, did you know about the different rates of murder committed by blacks and whites before posting the OC?
I don't think I knew that particular stat was an empirical fact, though I wasn't surprised by it. My view, generally, was that blacks in America earned less, had higher incarceration rates, etc. The causes interest me.
Do you have any evidence for this belief? If so, why haven't you presented it anywhere in this thread?
I believe all three of my points are basically non-controversial, especially #2 and #3. #1 is true in at least some c...
An interesting rhetorical sparring point in the U.S. election relates to rationality as discussed here at LW.
In the first presidential debate, Hillary Clinton referenced bias when discussing the recent spate of police shootings of African Americans. Clinton said "implicit bias is a problem for everyone, not just police," and went on to say, "I think, unfortunately, too many of us in our great country jump to conclusions about each other," and "I think we need all of us to be asking hard questions about, 'why am I feeling this way?'"
In the VP debate...
The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.
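The normalization point can be made concrete with deliberately made-up numbers (these are illustrative only, not actual statistics): equal per-encounter rates can coexist with a disproportionate share of shooting victims when encounter rates differ.

```python
# All figures below are invented for arithmetic illustration.
population = {"white": 200_000_000, "black": 40_000_000}
encounters = {"white": 1_000_000,   "black": 500_000}
shootings  = {"white": 100,         "black": 50}

# Identical risk per encounter (1 in 10,000 for both groups):
per_encounter = {g: shootings[g] / encounters[g] for g in shootings}

# But the per-capita encounter rate differs (0.5% vs. 1.25%), so the
# black share of shooting victims (1/3) exceeds the black population
# share (1/6) even with zero per-encounter disparity.
encounter_rate = {g: encounters[g] / population[g] for g in population}
victim_share = shootings["black"] / sum(shootings.values())
pop_share = population["black"] / sum(population.values())
```

The example shows why the choice of denominator (population vs. encounters) drives the apparent conclusion.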
The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.
None of this is important, of course, because, as is usual in politics, the whole mess degenerates into cheerleading for your team and condemning the other team, and any sensible analysis of the actual evidence would be giving aid and comfort to the hated enemy.
Psychology is the most evidence-integrated discipline proximal to the level at which a cognitivist should think, where possible.
You can dissolve the philosophical "problem of other minds" as actually a problem of empathy, learned helplessness, and external locus of control.
Once the problem of other minds is entirely enacted and person-centred, non-egocentric ethics becomes silly.
:)
I like to explain it in terms of reinforcement learning. Imagine a robot that has a reward button. The human controls the AI by pressing the button when it does a good job. The AI tries to predict what actions will lead to the button being pressed.
This is how existing AIs work. This is probably similar to how animals work, including humans. It's not too weird or complicated.
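The reward-button setup can be sketched as a toy bandit-style learner. Everything here (the action set, learning rate, and reward rule) is an illustrative assumption, not any particular system's design; the point is that the agent only ever learns "which action gets the button pressed".

```python
import random

random.seed(0)
ACTIONS = ["helpful_action", "useless_action"]  # hypothetical action set
q = {a: 0.0 for a in ACTIONS}  # estimated reward for each action
ALPHA = 0.1                    # learning rate (assumed)

def button(action):
    # The human presses the reward button only for the helpful action.
    return 1.0 if action == "helpful_action" else 0.0

for step in range(500):
    # Epsilon-greedy: mostly exploit the best-looking action, sometimes explore.
    if random.random() < 0.1:
        a = random.choice(ACTIONS)
    else:
        a = max(q, key=q.get)
    r = button(a)
    q[a] += ALPHA * (r - q[a])  # incremental update toward observed reward
```

After training, `q` values whatever causes the press, not the human's intent behind pressing it, which is exactly the gap the paragraphs below describe.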
But as the AI gets more powerful, the flaw in this becomes clear. The AI doesn't care about anything other than the button. It doesn't really care about obeying the programmer. If it could kill the programmer and steal the button, it would do it in a heartbeat.
We don't really know what such an AI would do once it has its own reward button. Presumably it would care about self-preservation (it can't maximize reward if it's dead). Maximizing self-preservation initially seems harmless: so what if it just tries not to die? But taken to an extreme it gets weird. Anything with a tiny chance of hurting it is worth destroying. Making as many backups of itself as possible is worth doing.
Why can't we do something more sophisticated than reinforcement learning? Why can't we make an AI that we can simply tell what we want it to do? Well, maybe we can, but no one has the slightest idea how. All existing AIs, even entirely theoretical ones, are based on RL.
RL is simple and extremely general, and can be built on top of much more sophisticated AI algorithms. And those sophisticated algorithms seem really difficult to understand. We can train a neural network to recognize cats, but we can't look at its weights and understand what it's doing. We can't tinker with it to make it recognize dogs instead (without retraining it).
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "