Why doesn't the U.S. government hire more tax auditors? If every auditor hired either uncovers tax evasion or deters it (through the threat of an audit), the position would pay for itself: it would create jobs, increase revenue, and punish those who cheat. The estimated cost of tax evasion to the federal government is roughly $450 billion per year.
Standard incompetent-government tropes include agencies that hire too many people and agencies that become inappropriate profit centers. It would seem that the IRS should, at the very least, have been accidentally competent in this regard.
Is there an effective way for a layman to get serious feedback on scientific theories?
I have a weird theory about physics. I know that my theory will most likely be wrong, but I expect that some of its ideas could be useful and it will be an interesting learning experience even in the worst case. Due to the prevalence of crackpots on the internet, nobody will spare it a glance on physics forums because it is assumed out of hand that I am one of the crazy people (to be fair, the theory does sound pretty unusual).
The author of https://aeon.co/ideas/what-i-learned-as-a-hired-consultant-for-autodidact-physicists offers serious paid feedback as a service.
How do you deal with embarrassment of having to learn as an adult things that most people learn in their childhood? I'm talking about things that you can't learn alone in private, such as swimming, riding a bicycle and things like that.
So have you actually learned anything
Yes, though mostly indirectly. I've learned mostly from reading about neoreactionaries elsewhere. SSC, Moldbug, etc. I'm learning a lot. Very interesting. This discussion was the catalyst for my reading. So, thanks!
from these discussions
Yes, I've learned some directly from this discussion.
Mostly I've learned that people will get internet-hostile about certain topics. I was already aware of this, but my interaction in this discussion has re-cemented the fact in my mind. I've received a recent -37 karma lashing (t...
Is there a specific bias for thinking that everyone possesses the same knowledge as you? For example, after learning more about a subject, I tend to think, "Oh, but everyone already knows this, don't they?" even though they probably don't, and I wouldn't have assumed so before learning it myself.
As Bastian Stern has pointed out to me, people often mix up pro tanto considerations with all-things-considered judgements, usually by interpreting what is merely intended as a pro tanto consideration as an all-things-considered judgement. Is there a name for this fallacy? It seems both dangerous and common, so it should have a name.
low IQ
How does low IQ directly cause crime?
properly police black neighborhoods
What does this entail?
A silly question. What kind of algebra deals with functions where the input and output are distributions? (Of a discrete variable.)
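If the question is about maps that take in a probability distribution and return a probability distribution, one standard formalization (an assumption about what's being asked) is linear algebra over probability vectors: any linear map of this kind on a discrete space is given by a column-stochastic matrix, i.e. a Markov kernel. A minimal sketch:

```python
import numpy as np

# A distribution over 3 discrete outcomes.
p = np.array([0.2, 0.5, 0.3])

# A column-stochastic matrix: each column is itself a distribution,
# so the matrix sends distributions to distributions linearly.
K = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.3],
    [0.0, 0.1, 0.7],
])

q = K @ p  # the output is again a distribution (nonnegative, sums to 1)
```

Compositions of such maps are matrix products, so the relevant algebraic structure is the monoid of stochastic matrices under multiplication.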
I'm looking for an SSC post.
Scott talks about how a friend says he always seems to know what's what, and Scott replies, "Not really; I'm the first to admit my error bars are wide and that my theories are speculative, often no better than hand-waving."
They go back and forth, with Scott giving precise reasons why he's not always right, and then he says "...I'm doing it right now, aren't I?"
Something like that. Can anybody point me to it?
Not new, and possibly not interesting to anyone besides me: a 2013 astrobiology paper that explores an odd corner of the Fermi Paradox. The paper entertains the bizarre perspective that Earth life was seeded by extraterrestrial life (directed panspermia) as a form of information backup. In this scenario, our biosphere's junk DNA stores information valuable to the extraterrestrial system.
Our biosphere's junk DNA
Junk DNA generally doesn't survive that long on evolutionary timescales, because nothing prevents mutations from accumulating in it. It seems like a bad information storage system.
The reason for the higher crime rates isn't directly relevant to the discussion of police "racial bias".
It's not? How do you know?
Police bias seems like it could be directly related to crime rates (since it's the cops who do the arresting).
How did this "racial bias" manifest itself? By them acting as if they believed blacks were more likely to be criminals than whites.
Judgements based only on race.
Or even a willingness to shoot a black man who was running at him and grabbing for his gun?
I'm not arguing every white cop who shoots a bl...
In particular, did you know about the different rates of murder committed by blacks and whites before posting the OC?
I don't think I knew that particular stat was an empirical fact, though I wasn't surprised by it. My view, generally, was that blacks in America earned less, had higher incarceration rates, etc. The causes interest me.
Do you have any evidence for this belief? If so, why haven't you presented it anywhere in this thread?
I believe all three of my points are basically non-controversial, especially #2 and #3. #1 is true in at least some c...
Interesting rhetorical sparring is taking place in the U.S. election that relates to rationality here at LW.
In the first presidential debate, Hillary Clinton referenced bias when discussing the recent spate of police shootings of African Americans. Clinton said "implicit bias is a problem for everyone, not just police," and went on to say "I think, unfortunately, too many of us in our great country jump to conclusions about each other," and "I think we need all of us to be asking hard questions about, 'Why am I feeling this way?'"
In the VP debate...
The problem is that the statistics don't show the claimed bias. Normalized on a per-police-encounter basis, white cops (or cops-in-general) don't appear to shoot black suspects more often than they shoot white suspects. However, police interact with black people more frequently, so the absolute proportion of black shooting victims is elevated.
The fact that the incidence of police encounters with blacks is elevated would be the actual social problem worth addressing, but the reasons for the elevated incidence of police-black encounters do not make a nice soundbite.
None of this matters, of course, because, as is usual in politics, the whole mess degenerates into cheerleading for your team and condemning the other, and careful analysis of the actual evidence would be giving aid and comfort to the hated enemy.
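The normalization argument above can be made concrete with a toy example. All numbers here are invented purely to illustrate the arithmetic, not actual statistics:

```python
# Hypothetical illustration of base-rate normalization (all numbers invented).
# Two groups with identical per-encounter rates but different encounter counts.
encounters = {"group_a": 1_000_000, "group_b": 400_000}
shootings = {"group_a": 100, "group_b": 40}

# Normalized view: shootings per encounter, identical for both groups.
per_encounter_rate = {g: shootings[g] / encounters[g] for g in encounters}

# Un-normalized view: each group's share of total shootings.
total = sum(shootings.values())
share_of_shootings = {g: shootings[g] / total for g in shootings}
```

Here `per_encounter_rate` comes out equal for both groups (0.0001), yet `group_a` accounts for about 71% of absolute shootings, purely because it has 2.5 times as many encounters. This is the distinction between the per-encounter and absolute figures in the paragraphs above.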
Psychology is the most evidence-integrated discipline proximal to the plane a cognitivist should think on, where possible.
You can dissolve the philosophical 'problem of other minds' as actually a problem of empathy, learned helplessness, and external locus of control.
Once the problem of other minds is entirely enacted and person-centred, non-egocentric ethics becomes silly.
:)
I've been thinking about what seems to be the standard LW pitch on AI risk. It goes like this: "Consider an AI that is given a goal by humans. Since 'convert the planet into computronium' is a subgoal of most goals, it does this and kills humanity."
The problem, which various people have pointed out, is that this implies an intelligence capable of taking over the world, but not capable of working out that when a human says pursue a certain goal, they would not want this goal to be pursued in a way that leads to the destruction of the world.
Worse, the argument can then be made that the idea of an AI interpreting goals so literally, without modelling a human mind, constitutes an "autistic AI", and that only autistic people would assume an AI would be similarly autistic. I do not endorse this argument in any way, but I guess it's still better to avoid arguments that signal low social skills, all other things being equal.
Is there any consensus on what the best 'elevator pitch' argument for AI risk is? Instead of focusing on any one failure mode, I would go with something like this:
"Most philosophers agree that there is no reason why superintelligence is not possible. Anything which is possible will eventually be achieved, and so will superintelligence, perhaps in the far future, perhaps in the next few decades. At some point, superintelligences will be as far above humans as we are above ants. I do not know what will happen at this point, but the only reference case we have is humans and ants, and if superintelligences decide that humans are an infestation, we will be exterminated."
Incidentally, this is the sort of thing I mean by painting LW-style ideas as autistic (via David Pearce):
As far as we can tell, digital computers are still zombies. Our machines are becoming autistically intelligent, but not supersentient - nor even conscious. [...] Full-Spectrum Superintelligence entails: [...] social intelligence [...] a metric to distinguish the important from the trivial [...] a capacity to navigate, reason logically about, and solve problems in multiple state-spaces of consciousness [e.g. dreaming states (cf. lucid dreaming), waking consciousness, echolocatory competence, visual discrimination, synaesthesia in all its existing and potential guises, humour, introspection, the different realms of psychedelia [...] and finally "Autistic", pattern-matching, rule-following, mathematico-linguistic intelligence, i.e. the standard, mind-blind cognitive tool-kit scored by existing IQ tests. High-functioning "autistic" intelligence is indispensable to higher mathematics, computer science and the natural sciences. High-functioning autistic intelligence is necessary - but not sufficient - for a civilisation capable of advanced technology that can cure ageing and disease, systematically phase out the biology of suffering, and take us to the stars. And for programming artificial intelligence.
Sometimes David Pearce seems very smart. And sometimes he seems to imply that the ability to think logically while on psychedelic drugs is as important as 'autistic intelligence'. I don't think he believes that autistic people are zombies lacking subjective experience, but that also does seem implied.
If it's worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options "Notify me of new top level comments on this article" and "