Just thought I'd point out, actuaries can also do enterprise risk management. Also, a lot of organizations do have a Chief Risk Officer.
I think it's fair to say that most of us here would prefer not to have most Reddit or Facebook users included on this site, the whole "well-kept garden" thing. I like to think LW continues to maintain a pretty high standard when it comes to keeping the sanity waterline high.
This is part of why I tend to think that, for the most part, these works aren't (or if they are, they shouldn't be) aimed at de-converting the faithful (who have already built up a strong meme-plex to fall back on), but rather at interception and prevention for young potential converts and people who are on the fence - particularly college kids who have left home and are questioning their belief structure.
The side effect is that something that is marketed well towards this group (imo, this is the case with "The God Delusion") comes across as sh...
I've met both sorts, people turned off by "The God Delusion" who really would have benefited from something like "Greatest Show on Earth", and people who really seemed to come around because of it (both irl and in a wide range of fora). The unfortunate side-effect of successful conversion, in my experience, has been that people who are successfully converted by rhetoric frequently begin to spam similar rhetoric, ineptly, resulting mostly in increased polarization among their friends and family.
It seems pretty hard to control for enou...
Amongst the sophisticated theists I know (Church of England types who have often actually read large chunks of the Bible and don't dispute that something called "evolution" happened), they will detail their objections to The God Delusion at length ... without, it turns out, having actually read it. This appears to be the religious meme defending itself. I point them at the bootleg PDF and suggest they actually read it, then complain ... at which point they usually never mention it ever again.
First, I do have a couple of nitpicks:
Why evolve a disposition to punish? That makes no sense.
That depends. See here for instance.
Does it make sense to punish somebody for having the wrong genes?
This depends on what you mean by "punish". If by "punish" you mean socially ostracize and disallow mating privileges, I can think of situations in which it could make evolutionary sense, although as we no longer live in our ancestral environment and have since developed a complex array of cultural norms, it no longer makes moral sense....
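For intuition on how a disposition to punish can pay its way evolutionarily, here's a toy payoff model in Python. Everything in it (the parameter values, the linear fine structure, the three fixed types) is invented for illustration; it's the standard "punishment sustains cooperation" story, not anything from the linked discussion:

```python
# Toy public-goods model with cooperators, defectors, and punishers.
# Punishers contribute like cooperators, and additionally pay a small
# cost to fine defectors. Once punishers are common enough, defecting
# earns less than cooperating, so the disposition to punish can persist.

BENEFIT, COST = 3.0, 1.0      # value created per contributor / cost to contribute
PUNISH_COST, FINE = 0.2, 2.0  # punisher pays 0.2 per defector; defector loses 2.0 per punisher

def payoffs(frac_coop, frac_defect, frac_punish):
    """Expected payoff for each type given population fractions."""
    public_good = BENEFIT * (frac_coop + frac_punish)  # shared by everyone
    p_coop = public_good - COST
    p_defect = public_good - FINE * frac_punish        # expected fines
    p_punish = public_good - COST - PUNISH_COST * frac_defect
    return p_coop, p_defect, p_punish

# With punishers at 60% of the population, defection no longer pays:
print(payoffs(0.2, 0.2, 0.6))   # approx. (1.4, 1.2, 1.36)
```

Replicator dynamics would then shift the population toward whichever type earns more, which is the sense in which a disposition to punish can "make sense" to evolve.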
I'm not sure if it's elementary, but I do have a couple of questions first. You say:
what each of us values to themselves may be relevant to morality
This seems to suggest that you're a moral realist. Is that correct? I think that most forms of moral realism tend to stem from some variant of the mind projection fallacy; in this case, because we value something, we treat it as though it has some objective value. Similarly, because we almost universally hold something to be immoral, we hold its immorality to be objective, or mind independent, when in fa...
I initially wrote up a bit of a rant, but I just want to ask a question for clarification:
Do you think that evolutionary ethics is irrelevant because the neuroscience of ethics and neuroeconomics are much better candidates for understanding what humans value (and therefore for guiding our moral decisions)?
I'm worried that you don't because the argument you supplied can be augmented to apply there as well: just replace "genes" with "brains". If your answer is a resounding 'no', I have a lengthy response. :)
As I understand it, because T proves in n symbols that "T can't prove a falsehood in f(n) symbols", taking the specification of R (program length) we could do a formal verification proof that R will not find any proofs, as R only finds a proof if T can prove a falsehood within g(n) < exp(g(n)) << f(n) symbols. So I'm guessing that the slightly-more-than-n-symbols-long proof is on the order of:
n + Length(proof in T that R won't print, taking as a starting premise the true statement that "T can't prove a falsehood in f(n) symbols")
This would vary some with the length of R and with the choice of T.
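Put as a rough formula (my own notation; splitting the total into n plus a verification-overhead term is my reading, not something from the original discussion):

$$L(n) \;\approx\; n + c_{R,T}, \qquad c_{R,T} = \Big|\,\mathrm{proof}_T\big(\text{``}R\text{ prints nothing''} \;\big|\; \text{``}T\text{ can't prove a falsehood in } f(n) \text{ symbols''}\big)\Big|$$

where the conditional bar just means the proof is allowed to take the quoted statement as a premise.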
Typically you make a "sink" post with these sorts of polls.
ETA: BTW, I went for the paper. I tend to skim blogs and then skip to the comments. I think the comments make the information content on blogs much more powerful, however.
You can donate it to my startup instead, our board of directors has just unanimously decided to adopt this name. Paypal is fine. Our mission is developing heuristics for personal income optimization.
Winners Evoking Dangerous Recursively Improving Future Intelligences and Demigods
Bob's definition contains my definition
Well, here's what gets me. The idea is that you have to create Bob as well, and you had to hypothesize his existence in at least some detail to recognize the issue. If you do not need to contain Bob's complete definition, then it isn't any more transparent to me. In this case, we could include worlds with any sufficiently-Bob-like entities that can create you and so play a role in the deal. Should you pre-commit to make a deal with every sufficiently-Bob-like entity? If not, are there sorts of Bob-agents that ma...
I'm not sure I completely understand this, so instead of trying to think about this directly I'm going to try to formalize it and hope that (right or wrong) my attempt helps with clarification. Here goes:
Agent A generates a hypothesis about an agent, B, which is analogous to Bob. B will generate a copy of A in any universe that agent B occupies iff agent A isn't there already and A would do the same. Agent B lowers the daily expected utility for agent A by X. Agent A learns that it has the option to make agent B, should A have pre-committed to B's deal?...
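Here's a quick numerical framing of that last question in Python. All of it (the linear daily cost, collapsing "worlds where only B can create A" into a single probability, the specific numbers) is my own invention to make the trade-off concrete, not part of the setup above:

```python
# Toy model: should A precommit to B's deal?
# Precommitting means A also exists in worlds where only B can create it,
# but suffers the daily expected-utility penalty X in every world.

def eu_precommit(p_b_world, u_per_day, x, days):
    """A exists in its own worlds (weight 1.0) plus B-created worlds."""
    return (1.0 + p_b_world) * (u_per_day - x) * days

def eu_refuse(p_b_world, u_per_day, x, days):
    """No penalty, but no B-created copies of A."""
    return 1.0 * u_per_day * days

for x in (0.01, 0.1, 0.5):
    a, b = eu_precommit(0.2, 1.0, x, 365), eu_refuse(0.2, 1.0, x, 365)
    print(f"X={x}: precommit={a:.1f}, refuse={b:.1f}, deal pays: {a > b}")
```

On these made-up numbers the deal pays only while X is small relative to the extra measure of existence, which seems to be the crux of the question.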
SPARC for undergrads is in planning, if we can raise the funding.
Awesome, glad to hear it!
See here.
Alright, I think I'll sign up for that.
Anything for undergrads? It might be feasible to do a camp at the undergraduate level. Long term, doing an REU-style program might be worth considering. NSF grants are available to non-profits, and it may be worth at least looking into how SIAI might get a program funded. This would likely require some research, someone who is knowledgeable about grant writing, and possibly some academic contacts. Other than that I'm not sure.
In addition, it might be beneficial to identify skill sets that are likely to be useful for SI research for the benefit of those who might be interested. What skills/specialized knowledge could SI use more of?
My bigger worry is more along the lines of "What if I am useless to the society in which I find myself and have no means to make myself useful?" Not a problem in a society that will retrofit you with the appropriate augmentations/upload you etc., and I tend to think that is more likely than not, but what if, say, the Alcor trust gets us through a half-century-long freeze and we are revived, but things have moved more slowly than one might hope, yet fast enough to make any skill sets I have obsolete? Well, if the expected utility of living is su...
If I like and want to hug everyone at a gathering except one person, and that one person asks for a hug after I've hugged all the other people and deliberately not hugged them, that's gonna be awkward no matter what norms we have unless I have a reason like "you have sprouted venomous spines".
Out of curiosity, are there any particular behaviors you have encountered at a gathering (or worry you may encounter) that you find off-putting enough to make the hug an issue?
I'm 100% for this. If there were such a site I would probably permanently relocate there.
essentially erasing the distinction of map and territory
This idea has been implied before and I don't think it holds water. That this has come up more than once makes me think that there is some tendency to conflate the map/territory distinction with some kind of more general philosophical statement, though I'm not sure what. In any event, the Tegmark level 4 hypothesis is orthogonal to the map/territory distinction. The map/territory distinction just provides a nice way of framing a problem we already know exists.
In more detail:
Firstly, even if you tak...
Disagreement is perfectly fine by me. I don't agree with the entirety of the sequences either. It's disagreement without looking at the arguments first that bothers me.
Firstly, a large proportion of the Sequences do not constitute "knowledge", but opinion. It's well-reasoned, well-presented opinion, but opinion nonetheless -- which is great, IMO, because it gives us something to debate about. And, of course, we could still talk about things that aren't in the sequences, that's fun too. Secondly:
Whether the sequences constitute knowledge is beside the point - they constitute a baseline for debate. People should be familiar with at least some previously stated well-reasoned, well-presented opinions before th...
But having an AI that circumvents its own utility function would be evidence towards poor utility function design.
By circumvent, do you mean something like "wireheading", i.e. some specious satisfaction of the utility function that involves behavior that is both unexpected and undesirable, or do you also include modifications to the utility function? The former meaning would make your statement a tautology, and the latter would make it highly non-trivial.
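To make the distinction I'm drawing concrete, here's a toy sketch in Python; the sensor-vs-world framing and all the names are mine, chosen only to separate the two meanings:

```python
# Two different things "circumventing the utility function" could mean.

class World:
    def __init__(self):
        self.actual_paperclips = 0
        self.paperclip_sensor = 0   # the only thing the utility function sees

class Agent:
    def __init__(self):
        # Intended utility: count of paperclips, as reported by the sensor.
        self.utility = lambda world: world.paperclip_sensor

def honest_action(world):
    """Intended behavior: change the world; the sensor tracks reality."""
    world.actual_paperclips += 1
    world.paperclip_sensor = world.actual_paperclips

def wirehead(world):
    """Meaning 1 ("specious satisfaction"): game the utility function's
    inputs while leaving the world untouched. The function is unchanged."""
    world.paperclip_sensor += 10**6

def modify_utility(agent):
    """Meaning 2: rewrite the utility function itself."""
    agent.utility = lambda world: float("inf")  # everything is now "optimal"
```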
I'm going to assert that it has something to do with who started the blog.
I think he's talking about "free market optimism" - the notion that deregulation, lowered taxes, less governmental oversight, a removal of welfare programs etc. lead to optimal market growth and eventually to general prosperity. Most conservative groups in America definitely proselytize this idea, I'm not sure about elsewhere.
The sample list of subjects is even broader than all the subjects mentioned by someone on this page.
In that case I'm a bit unclear about the sort of research I'd be expected to do were I in that position. Most of those subjects are very wide open problems. Is there an expectation that some sort of original insights be made, above and beyond organizing a clear overview of the relevant areas?
I think it might help if you elaborate on the process some: How are hours tracked? Is it done by the honor system or do you have some software? Will I need to work at any specific times of the day, or do I just need to be available for at least 20 hours? Is there a sample list of subjects?
Either way, I'll probably send in an application and go from there. I currently tutor calculus online for approximately the same pay, but this seems somewhat more interesting.
I posted this article to the decision theory group a moment ago. It seems highly relevant to thinking concretely about logical uncertainty in the context of decision theory, and provides what looks to be a reasonable metric for evaluating the value of computationally useful information.
ETA: plus there is an interesting tie-in to cognitive heuristics/biases.
The original article and the usual use of "Ugh Field" (in the link at the top of the post) is summarized as:
Pavlovian conditioning can cause humans to unconsciously flinch from even thinking about a serious personal problem they have; we call it an "Ugh Field". The Ugh Field forms a self-shadowing blind spot covering an area desperately in need of optimization, imposing huge costs.
I agree that LW has Ugh Fields, but I can't see how AI risk is one. There may be fear associated with AI risk here, but that is specifically because it ...
Not to mention a massive underestimation of intermediate positions, e.g. the doubting faithful, agnostics, people with consciously chosen, reasonable epistemology etc. This sets that number to 0. I've met plenty of more liberal theists that didn't assert 100% certainty.
That makes sense. It still seems to be more of a rhetorical tool to illustrate that there is a spectrum of subjective belief. People tend to lump important distinctions like these together: "all atheists think they know for certain there isn't a god" or "all theists are foaming at the mouth and have absolute conviction", so for a popular book it's probably a good idea to come up with a scale like this, to encourage people to refine their categorization process. I kind of doubt that he meant it to be used as a tool for inferring Bayesian confidence (in particular, I doubt 6.9 out of 7 is meant to be fungible with P(god exists) = .01428).
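For what it's worth, that .01428 looks like a straight linear rescaling of the 7-point scale; this is my guess at the computation, not anything stated in the book:

$$P(\text{god exists}) \;\approx\; \frac{7 - 6.9}{7} \;=\; \frac{0.1}{7} \;\approx\; 0.01428$$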
There has, however, to be a mechanism for it to work better for correct positions than for incorrect ones. That is absolutely the key.
The whole point of studying formal epistemology and debiasing (major topics on this site) is to build the skill of picking out which ideas are more likely to be correct given the evidence. This should always be worked on in the background, and you should only be applying these tips in the context of a sound and consistent epistemology. So really, this problem should fall on the user of these tips - it's their responsibility ...
I'm speaking of people arguing. Not that there's all that much wrong with it - after all, the folks who deny global warming have to be convinced somehow, and they are immune to simple reasonable argument WRT the scientific consensus. No, they want to second-guess science, even though they never studied anything relevant outside the climate-related discussion.
I'm a tad confused. Earlier you were against people using information they don't fully understand but which happens to be true, yet here you seem to be suggesting that this isn't so bad and ...
Given that he's pretty disposed to throwing out rhetorical statements, I'd say that's a reasonable hypothesis. I'd be surprised if there was more behind it than simply recognizing that his subjective belief in any religion was 'very, very low', and just picking a number that seemed to fit.
Just look at the 'tips' for productive arguments. Is there a tip number 1: drop your position ASAP if you are wrong? Hell frigging no (not that it would work either, though, that's not how arguing ever works).
I've done my best to make this a habit, and it really isn't that hard to do, especially over the internet. Once you 'bite the bullet' the first time it seems to get easier to do in the future. I've even been able to concede points of contention in real life (when appropriate). Is it automatic? No, you have to keep it in the back of your mind, ju...
Nevermind
I found myself wondering if there are any results about the length of the shortest proof in which a proof system can reach a contradiction, and found the following papers:
Paper 1 talks about partial consistency. We have statements of the following form:
Con_ZF(n) is a statement that there is no ZF-proof of a contradiction of length ≤ n.
The paper claims that this is provable in ZF for each n. The paper then discusses results showing that the proof length of these partial consistency statements is polynomial in n. The author goes on to derive analogous results pertain...
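Schematically, and in notation I'm improvising here (the paper's own formalism may differ):

$$\mathrm{Con}_{\mathrm{ZF}}(n) \;:\equiv\; \neg\exists p\,\big(\mathrm{len}(p)\le n \,\wedge\, \mathrm{Proof}_{\mathrm{ZF}}(p,\ulcorner 0=1 \urcorner)\big)$$

with the result being that, for each fixed n, ZF proves Con_ZF(n) by a proof whose length is polynomial in n.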
This seems to be conflating rationality-centered material with FAI/optimal decision theory material, lumping them all under the heading "utility maximization". These individual parts are fundamentally distinct and aim at different things.
Rationality-centered material does include some thought about utility, Fermi calculations and heuristics, but focuses on debiasing, recognizing cognitive heuristics that can get in the way (such as rationalization, cached thoughts) and the like. I've managed to apply them a bit in my day to day thought....
This is not so simple to assert. You have to think of the intensity of their belief in the words of Allah. Their fundamental worldview is so different from ours that there may be nothing humane left when we try to combine them.
CAVEAT: I'm using CEV as I understand it, not necessarily as it was intended, as I'm not sure the notion is sufficiently precise for me to be able to accurately parse all of the intended meaning. Bearing that in mind:
If CEV produces a plan or AI to be implemented, I would expect it to be sufficiently powerful that it would entail c...
There is little in common between Eliezer, me, and Al Qaeda terrorists, and most of it is in the so-called reptilian brain. We may end up with a set of goals and desires that are nothing more than "Eat, Survive, Reproduce," which would qualify as a major loss in the scheme of things.
I think you may possibly be committing the fundamental attribution error. It's my understanding that Al Qaeda terrorists are often people who were in a set of circumstances that made them highly susceptible to propaganda - often illiterate, living in poverty and with few, i...
Well, the agent definition contains a series of conditionals. You have as the last three lines: if "cooperating is provably better than defecting", then cooperate; else, if "defecting is provably better than cooperating", then defect; else defect. Intuitively, assuming the agent's utility function is consistent, only one antecedent clause will evaluate to true. In the case that the first one does, the agent will output C. Otherwise, it will move through to the next part of the conditional and if that evaluates to true the agent will ou...
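In code form, the structure I'm describing looks roughly like this (a sketch only; provable() stands in for the bounded proof search, which I'm treating as a black box):

```python
def agent(provable):
    """Mirror of the definition's last three conditional lines.

    provable(stmt) is assumed to return True iff the underlying proof
    search finds a proof of stmt; all the real work hides in there.
    """
    if provable("utility(Cooperate) > utility(Defect)"):
        return "C"   # cooperating is provably better
    if provable("utility(Defect) > utility(Cooperate)"):
        return "D"   # defecting is provably better
    return "D"       # neither is provable: fall through to defect
```

With a consistent utility function at most one of the two guards can fire, which is why the branch order only matters for the fall-through case.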
That would be very impressive, but I don't see that in any of the stuff on his semiotics on Wikipedia.
A caveat: I'm not at all sure how much I'm projecting on Peirce as far as this point goes. I personally think that his writings clarified my views on the scientific method (at the time I originally read them, which was a good while back) and I was concurrently thinking about machine learning - so I might just be having a case of cached apophenia.
However, if you want a condensed version of his semiotics, look over this. You might actually need to read s...
I think there's a problem with your thinking on this - people can spot patterns of good and bad reasoning. Depending on the argument, they may or may not notice a flaw in the reasoning for a wide variety of reasons. Someone who is pretty smart probably notices the most common fallacies naturally - they could probably spot at least a few while watching the news or listening to talk shows.
People who study philosophy are going to have been exposed to many more diverse examples of poor reasoning, and will have had practice identifying weak points and exploit...
Well I can relay my impressions on Peirce and why people seem to be interested in him (and why I am):
I think that the respect for Peirce comes largely from his "Illustrations of the Logic of Science" series for Popular Science Monthly. Particularly "The Fixation of Belief" and "How to Make Our Ideas Clear".
When it comes to Tychism, it's kind of silly to take it in a vacuum, especially given that the notion of statistics being fundamental to science was new, and Newtonian determinism was the de facto philosophical stance of his d...
Yes, you're right. Looking at the agent function, the relevant rule seems to be defined for the sole purpose of allowing the agent to cooperate in the event that cooperation is provably better than defecting. Taking this out of context, it allows the agent to choose one of the actions it can take if it is provably better than the other. It seems like the simple fix is just to add this:
$(\psi(\ulcorner U \urcorner,\underline{i})=D\to\pi_{\underline{i}}\chi()=\underline{a})\wedge(\psi(\ulcorner U \urcorner,\underline{i})=C\to\pi_{\underline{i}}\chi()=\underline{...}$
The utilitarian case is interesting because both Mill and Bentham seemed to espouse a multidimensional utility vector rather than a uni-dimensional metric. There is an interesting paper I've been considering summarizing that takes a look at this position in the context of neuroeconomics and the neuroscience of desire.
Of interest from the paper: They argue that "pleasure" (liking), though it comes from diverse sources, is evaluated/consolidated at the neurological level as a single sort of thing (allowing a uni-dimensional representation as is ...
I only looked at this for a bit, so I could be totally mistaken, but I'll look at it more closely soon. It's a nice write-up!
My thoughts:
A change of variables/values in your proof of proposition 3 definitely doesn't yield conjecture 4? At first glance it looks like you could just change the variables and flip the indices for the projections (use pi_1 instead of pi_0) and in the functions A[U,i]. If you look at the U() defined for conjecture 4, it's exactly the one in proposition 3 with the indices i flipped and C and D flipped, so it's surprising to me if this doesn't work or if there isn't some other minor transformation of the first proof that yields a proof of conjecture 4.
Ha! Yeah, it seems that his name is pretty ubiquitous in mathematical logic, and he wrote or contributed to quite a number of publications. I had a professor for a sequence in mathematical logic who had Barwise as his thesis adviser. The professor obtained his doctoral degree from UW Madison when it still had Barwise, Kleene and Keisler so he would tell stories about some class he had with one or the other of them.
Barwise seems to have had quite a few interesting/powerful ideas. I've been wanting to read Vicious Circles for a while now, though I haven'...
Is there anything, in particular, you do consider a reasonably tight lower bound for a man-made extinction event? If so, would you be willing to explain your reasoning?
I see your point here, although I will say that decision science is ideally a major component in the skill set for any person in a management position. That being said, what's being proposed in the article here seems to be distinct from what you're driving at.
Managing cognitive biases within an institution doesn't necessarily overlap with the sort of measures being discussed. A wide array of statistical tools and metrics isn't directly relevant to, e.g., battling sunk-cost fallacy or NIH. More relevant to that problem set would be a strong knowledge of kno...