I think you, EY, and most others use the term "faith" in its historical, religious context rather than in its definitional context, i.e. the epistemological question of trust in an idea or claim.
The best definition I have found so far for faith is this:
Faith is to commit oneself to act based on sufficient experience to warrant belief, but without absolute proof.
So I have no problem using "faith" and "induction" interchangeably, because faith is used just as you say:
...inferring the future from the past (or the past from the present), which basically requires th
Intuition (what you call "faith") is evidence.
If you will, please define intuition as you understand it.
As I understand intuition, it is knowledge for which the origin cannot be determined. I have certainly experienced the "I know I read something about that somewhere but I just can't remember" feeling before and been right about it. However, I have just as often been wrong about conclusions I came to through this means.
I think your entire post gives the same sort of visceral description someone would give about having...
confidence level.
Most people do not understand what confidence intervals or confidence levels are, at least in my interactions. Unless you have had some statistics (even basic), you probably haven't heard of them.
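For anyone who hasn't had that basic statistics, here is a minimal sketch of what a 95% confidence level means; the sample data is invented purely for illustration:

```python
import math
import statistics

# Invented sample data, purely for illustration.
sample = [9.8, 10.2, 10.1, 9.7, 10.4, 10.0, 9.9, 10.3]

mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean

z = 1.96  # z-score for a 95% confidence level under a normal approximation
low, high = mean - z * sem, mean + z * sem

# "95% confidence" means: if we repeated the sampling procedure many times,
# about 95% of intervals built this way would cover the true mean. It does
# not mean the true mean has a 95% chance of being in this one interval.
print(f"95% CI for the mean: ({low:.2f}, {high:.2f})")
```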
I think it improperly relabels "uncertainty" as "faith."
Perhaps. The way I see it, for almost any claim there will be a reasonable counterclaim, and dismissing the counterclaim in order to accept the premise is faith in the same sense.
The only thing one truly must have faith in (and please correct me if you can; I'd love to be wrong) is induction, and if you truly lacked faith in induction, you'd literally go insane.
Intuition and induction are in my view very similar to wha...
Sure. What's not rational is to believe ... politicians
I think that is likely the best approach.
Your argument seems to conclude that:
It is impossible to reason with unreasonable people
Agreed. Now what?
Ostensibly your post is about how to swing the ethos of a large group of people towards behaving differently. I would argue that has never been necessary and still is not.
A good hard look at any large political or social movement reveals a small group of very dedicated and motivated people, and a very large group of passive, marginally interested people who agree with whatever sounds like it is in their best interest without them really doing too muc...
I am generally not a fan of internet currency in all its forms, because it draws attention away from the argument.
Reddit, which this site is based on, moved to disabling subtractive karma for all submissions and comments. Submissions with more downvotes than upvotes just don't go anywhere, while negatively voted comments get buried, similar to how they do here. That seems like a good way to organize the system.
Was it implemented as signaling for other users, or is it just an artifact of the Reddit API? Would disabling the act...
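As a toy sketch (assuming nothing about Reddit's actual code), the rule described above amounts to something like:

```python
def visible(upvotes: int, downvotes: int) -> bool:
    """Hypothetical version of the net-score rule described above: a
    submission with more downvotes than upvotes just doesn't go anywhere."""
    return downvotes <= upvotes

print(visible(3, 5))  # False: buried
print(visible(5, 3))  # True: stays visible
```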
I have no clever reply to most of your comment, but:
I personally do not submit more responses and posts because of the karma system.
In my case, it's very much a motivating factor. In fact, I do not think I would have ever been led to comment or post at all without karma. I think this is primarily because I consider it exceptionally valuable, easy-to-read instant feedback on how I'm being received, which I'm normally bad at discerning and find a very important component of any sort of interaction. I virtually never comment on other blogs at all.
The most important of which is: if you only do what feels epistemically "natural" all the time, you're going to be, well, wrong.
Then why do I see the term "intuitive" used around here so much?
I say this by way of preamble: be very wary of trusting in the rationality of your fellow humans, when you have serious reasons to doubt their conclusions.
Hmm, I was told here by another LW user that the closest thing humans have to truth is consensus.
Somewhere there is a disconnect between your post and much of the consensus, at least in practice, of LW users.
From my understanding, Mr. Yudkowsky has two separate but linked interests: rationality, which predominates in his writings and blog posts, and designing AI, which is his interaction with SIAI. While I disagree with their particular approach (or lack thereof), I can see how it is rational to pursue both simultaneously toward similar ends.
I would argue that rationality and AI are really the same project at different levels, with different stated outcomes. Even if an AI never develops, increasing rationality is a good enough goal in and of itself.
I suppose my post was poorly worded. Yes, in this case Ω is the reference set of possible world histories.
What I was referring to was the baseline of w as an accurate measure. It is a normalizing reference, though not a set.
The main problem I have always had with this is that the reference set is "actual world history" when in fact that is the exact thing that observers are trying to decipher.
We all realize that there is in fact an "actual world history"; however, if it were known, then this wouldn't be an issue. Using it as a reference set, then, seems spurious for all practical purposes.
The most obvious way to achieve it is for the two agents to simply tell each other I(w) and J(w), after which they share a new, common information partition.
I think that summatio...
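A minimal sketch of the step quoted above, with an invented state space and partitions: each agent reports the cell of its own partition containing the true world w, and intersecting the two cells gives the common information.

```python
# Toy Omega: a small set of possible world histories, invented for illustration.
omega = {1, 2, 3, 4, 5, 6}

# Each agent's information partition: the states it cannot tell apart.
partition_i = [{1, 2}, {3, 4}, {5, 6}]
partition_j = [{1, 3}, {2, 4}, {5, 6}]

def cell(partition, w):
    """I(w): the cell of the partition that contains the true state w."""
    return next(c for c in partition if w in c)

w = 2  # the actual world history, unknown to the agents as such
i_w, j_w = cell(partition_i, w), cell(partition_j, w)

# After truthfully exchanging I(w) and J(w), both agents know only that
# w lies in the intersection: a cell of their new, common (joint) partition.
common = i_w & j_w
print(common)  # {2}: here the exchange happens to pin w down exactly
```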
1) In the pursuit of truth, you must always be on the lookout for the motive force of the resource-seeking that hinges on not finding the truth.
I think this sums up the "follow the money" axiom quite nicely.
There is a fantastic 24-part CBC podcast called How to Think About Science (mp3 format here). It interviews 24 different research scientists and philosophy-of-science experts on the history of and different views on the scientific process, historical trends, and the role of science in society. It is beyond well worth the time to listen to.
I have found that the series confirms what scientists have already known: researchers rarely behave differently as a group than any other profession, yet they are presented as a non-biased, objective, homogeneous group by mo...
In no way do I think that the parapsychologists have good hypotheses or reasonable claims. I am also a firm adherent of the ethos that extraordinary claims must have extraordinary proofs. However, to state the following:
one in which the null hypothesis is always true.
is to make a bold statement about your level of knowledge. You are going so far as to say that there is no possible way that any hypothesis which has yet to be described could be understood through the methodology of this particular subgroup. This exercise seems to me to be rejectin...
I have never seen a parapsychology study, so I will go look for one. However, does every single study have massive flaws in it?
Damien Broderick's Outside the Gates of Science summarizes a number of parapsychology studies, noting that several of the studies do indeed seem quite solid. It doesn't come to any definite conclusion over whether psi phenomena are actually real or if there's just something wrong with our statistical techniques, but it does seem like there might be enough to warrant more detailed study. See also e.g. Ben Goertzel's review of the ...
See my response here
You want to consider the utility of the terrorists, at the appropriate level of detail.
Huh? Yes it will. You mean "you will still find it undesirable and/or hard for you to understand."
What are the units for expected utility? How do you measure them? Can you graph my utility function?
I can look at people's behavior and say that on this day Joe bought 5 apples and 4 oranges, on that day he bought 2 kiwis, 2 apples, and no oranges, etc., but that data doesn't reliably forecast expected utility for oranges. There are so many ...
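To make the complaint concrete, here is the textbook expected-utility computation the term refers to; every number in it is invented, which is rather the point, since nothing in Joe's purchase data hands you these values:

```python
# Invented (probability, utility) pairs for the outcomes of buying an orange.
outcomes = {
    "good orange": (0.7, 5.0),   # utility measured in... what units?
    "bad orange":  (0.3, -1.0),
}

expected_utility = sum(p * u for p, u in outcomes.values())
print(expected_utility)  # 3.2, a number on an arbitrary, unitless scale

# vNM utility is only defined up to a positive affine transform (a*u + b, a > 0),
# so the same preferences can be represented by entirely different numbers:
print(sum(p * (2 * u + 10) for p, u in outcomes.values()))  # 16.4
```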
efficient markets quite by definition are allowing greater progress along individual value scales than inefficient markets, though not necessarily as much progress as some further refinement
Inefficient markets are great for increasing individual wealth of certain groups. I think Rothbard would disagree with the second point (regulation) - as would I.
...In short, I, and much of the modern profession of economics, hold little attachment to the origins of economic theory (though I am surprised that you didn't include Smith's Wealth of Nations in your list,
The description you gave of economic theory completely ignores the origins of micro and macro economics, price theory and comparative economics.
The assumptions that underlie these disciplines are normative.
Steve Levitt's finding that the availability of abortion caused a lagged decrease in crime.
Actually, that is descriptive statistics. Just as I pointed out before: economics without normative conclusions is statistics.
Doubtful, but in your undergrad you might have read one of the following:
Adam Smith's Theory of Moral Sentiments
John Maynard Keynes' Ge...
Economics can conclude "If you want X then you should do Y".
This is what economists are trying to do now. Yet implicit in their advice are normative economic principles that comprise the list of X: full employment, lower inflation, lower taxes, higher revenue, etc. Obviously whoever wants X is normatively seeking a solution; as a result, the analysis must be normative as well, and it is implicit in the formulation.
The economists themselves may have no feelings one way or another, but they are using economic and statistical principles toward normat...
Murder can increase utility in the economist's utility function
That is really immaterial, though, and computationally moot. OK, so his "utility function" is negative; is that it, is that the difference? Besides, I would argue that reevaluating it in those terms does a poor job of actually describing motivation coherently.
Yet murdering is a net negative in the ethicist's utility function.
It isn't in the economist's? These things aren't neutral.
The broader aspect that economists seek is normative. You said it yourself in the economist's a...
As I asked in response to your other argument: Who has given utility this new definition?
I think perhaps there is a disconnect between the origins of utilitarianism and how people who are not economists (even some economists) understand it.
You, as well as black belt bayesian, are making the point that utilitarianism as used in the economic sense is somehow non-ethics-based, which could not be more incorrect, as utilitarianism was explicitly developed with goal-seeking behavior in mind, stated by Bentham as the greatest hedonic happiness. It was not derived as a ...
Any AGI will have all the dimensions required for human-level or greater intelligence. If it is indeed smarter, then it will be able to figure the theory out itself if the theory is obviously correct, or find a way to acquire it more efficiently.
I'm trying to be Friendly, but I'm having serious problems with my goals and preferences.
So is this an AGI or not? If it is, then it's smarter than Mr. Yudkowsky and can resolve its own problems.
[P]resent only one idea at a time.
Most posts do present one idea at a time. However, it may not seem like it, because most of the ideas presented are additive: you have to have a fairly good background in topics presented previously in order to understand the current one. OB and LW are hard to get into for the uninitiated.
To provide more background and context, with the necessarily larger numbers of ideas being presented, while still getting useful feedback from readers.
That is what the sequences were designed to do - give the background needed.
it just takes the understanding that five lives are, all things being equal, more important than four lives.
Your examples rely too heavily on "intuitively right" and ceteris paribus conditioning. It is not always the case that five are more important than four, and the mere idea has been debunked several times.
if people agree to judge actions by how well they turn out general human preference
What is the method you use to determine how things will turn out?
...similarity can probably make them agree on the best action even without complete a
You know the Nirvana fallacy and the fallacy of needing infinite certainty before accepting something as probably true? How the solution is to accept that a claim with 75% probability is pretty likely to be true, and that if you need to make a choice, you should choose based on the 75% claim rather than the alternative? You know how if you refuse to accept the 75% claim because you're virtuously "waiting for more evidence", you'll very likely end up just accepting a claim with even less evidence that you're personally biased towards?
Morality work...
The economist's utility function is not the same as the ethicist's utility function
According to whom? Are we just redefining terms now?
As far as I can tell, your definition is the same as Bentham's, only with rules that bind the practitioner more weakly.
I think someone started (incorrectly) using the term and it has taken hold. Now a bunch of cognitive dissonance is fancied up to make it seem unique because people don't know where the term originated.
This is a problem for both those who'd want to critique the concept, and for those who are more open-minded and would want to learn more about it.
Anyone who is sufficiently technically minded undoubtedly finds it frustrating to read books which give broad-brush counterfactuals about decision making and explanation without delving into the details of their processes. I am thinking of books like Freakonomics, The Paradox of Choice, Outliers, Nudge, etc.
These books are very accessible but lack the in-depth analysis which is expected to be thoroughly cri...
Thus if we want to avoid being arbitraged, we should cleave to expected utility.
Sticking with expected utility works in theory if you have a discrete set of options, can discriminate between all of them such that they can be judged on equal terms, and the cost (in time or whatever) is not greater than the marginal gain from the process. Here is an example I like: go to the supermarket and optimize your expected utility for breakfast cereal.
The money pump only works if your "utility function" is static, or more accurately, if your prefer...
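For readers who haven't seen the money-pump argument spelled out, here is a minimal sketch with invented preferences: if your preferences cycle (A over B, B over C, C over A) and stay static, a trader can charge you a fee per swap around the cycle forever.

```python
# Invented cyclic preferences: the agent pays a small fee to swap to
# whichever item it prefers over the one it currently holds.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}
offer_for = {"B": "A", "C": "B", "A": "C"}  # always offer the preferred item

def run_pump(start, swaps, fee=1.0):
    """Total fees extracted after `swaps` trips around the cycle."""
    item, paid = start, 0.0
    for _ in range(swaps):
        offer = offer_for[item]
        if (offer, item) in prefers:  # agent prefers the offer, so it pays to swap
            item, paid = offer, paid + fee
    return paid

print(run_pump("B", swaps=10))  # 10.0: arbitrarily much, if preferences never shift
```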
This might have something to do with how public commitment may be counterproductive: once you've effectively signaled your intentions, the pressure to actually implement them fades away.
I was thinking about this today in the context of Kurzweil's future predictions, and I wonder if there is some overlap. Obviously Kurzweil is not designing the systems he is predicting, but the people who are designing them will likely read his predictions.
I wonder, if they see the timelines he predicts, whether they will potentially think: "oh, w...
As I replied to Tarleton, the Not for the Sake of Happiness (Alone) post does not address how he came to his conclusions via specific decision-theoretic optimization. He gives very loose, subjective terms for his conclusions:
The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.
which is why I worded my question as I did the first time. I don't think he has done the same amount of thinking on his epistemology as he has on his TDT.
Yes, I remember reading both and scratching my head, because both seemed to beat around the bush and not address the issues explicitly. Both lean too much on addressing the subjective aspect of non-utility-based calculations, which in my mind is a red herring.
Admittedly I should have referenced it, and perhaps the issue has been addressed as well as it will be. I would rather see this become a discussion, as in my mind it is more important than any of the topics dealt with here daily; however, that may not be appropriate for this particular thread.
Thanks, I followed up below.
You'll have to forgive me, because I am an economist by training, and for me mentions of utility refer very specifically to Jeremy Bentham.
Your definition of what "maximizing utility" means and Bentham's definition (he was the originator) are significantly different. If you don't know his, I will describe it (if you do, sorry for the redundancy).
Jeremy Bentham devised the felicific calculus, a hedonistic philosophy which seeks as its defining purpose to maximize happiness. He was of the opinion that it was possible in theory t...
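For reference, Bentham's seven circumstances were intensity, duration, certainty, propinquity, fecundity, purity, and extent. A crude sketch of the arithmetic, with invented scores; Bentham prescribed no canonical weights or units, which is part of the problem:

```python
# The seven circumstances Bentham listed; all scores here are invented.
pleasure = {
    "intensity":   6,
    "duration":    4,
    "certainty":   0.8,  # probability the pleasure actually occurs
    "propinquity": 0.9,  # nearness in time, as a discount factor
    "fecundity":   2,    # tendency to produce further pleasures
    "purity":      0.7,  # tendency NOT to be followed by pains
    "extent":      3,    # number of people affected
}

# One possible aggregation (Bentham gave no exact formula):
hedons = (pleasure["intensity"] * pleasure["duration"]
          * pleasure["certainty"] * pleasure["propinquity"]
          + pleasure["fecundity"] * pleasure["purity"]) * pleasure["extent"]
print(hedons)  # ~56, a "hedon" count on an arbitrary scale
```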
Ha, fair enough.
I often see references to maximizing utility and individual utility functions in your writing, and it would seem to me (unless I am misinterpreting your use) that you are implying that hedonic (felicific) calculation is the optimal way to determine what is correct when applying counterfactual outcomes to optimizing decision making.
I am asking how you determined (if that is the case) that the best way to judge the optimality of decision making was through utilitarianism as opposed to, say, ethical egoism or virtue (not to equivocate). Or perhaps your reference is purely abstract and does not invoke the felicific calculation.
Since you and most around here seem to be utilitarian consequentialists, how much thought have you put into developing your personal epistemological philosophy?
Worded differently, how have you come to the conclusion that "maximizing utility" is the goal to optimize, as opposed to, say, virtue-seeking?
Inconsistency is a general, powerful case of having reason to reject something. Inconsistency brings with it the guarantee of being wrong in at least one place.
I would agree if the laws of the universe or of the system, political or material, were also consistent and completely understood. I think history shows us clearly that few laws, under enough scrutiny, remain consistent in their known form; hence exogenous variables and stochastic processes.
I looked into that, but it lacks the database support that this project would want. With LW owning the XML or PHP database, closest-match algorithms can be built which optimize meeting locations for particular members.
That said, if the current LW developer wants to implement this I think it would at least be a start.
I thought so too; however, not in the implementation that I think is most user-friendly.
I am currently working on a Google Maps API application which will allow LW/OB readers to add their locations, hopefully encouraging those around them to form their own meetups. That might also make determining the next Singularity Summit location easier.
If there are any PHP/MySQL programmers who want to help, I could definitely use some.
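As a sketch of the kind of closest-match computation meant above (the actual project is PHP/MySQL; the Python here, and all member data and names, are hypothetical): given member coordinates, one crude rule is to pick the member location minimizing total travel distance.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical member rows, as they might come out of the MySQL table.
members = {"alice": (37.77, -122.42), "bob": (37.87, -122.27), "carol": (37.34, -121.89)}

def best_meeting_point(members):
    """Candidate meetup spot: the member location with the smallest
    total distance to everyone else (a crude 1-median rule)."""
    return min(
        members,
        key=lambda m: sum(haversine_km(*members[m], *members[o]) for o in members),
    )

print(best_meeting_point(members))  # "alice" for this invented data
```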
Perhaps this could be expanded into a Q&A with the people readers agree would elucidate comparably on all matters rationality/AGI, such as Wei Dai and Nesov, rather than a single person.
To me this gives a broader perspective and has the added benefit of eliminating any semblance of cultishness, despite Mr. Yudkowsky's protests against such a following.
Would it be inappropriate to put this list somewhere on the Less Wrong Wiki?
I think that would be great if we had a good repository of mind games.
I think a lot of it has to do with your experience with computer based games and web applications.
This is why I say it would have to be a controlled study: those with significant computer and gaming experience have a distinct edge over those who do not. For example, many gamers would automatically go to the WASD control pattern (which some first-person shooters use) on the "alternate control" level.
5:57:18 with 15 deaths here
A few months ago I stumbled upon a game wherein the goal is to guide an elephant from one side of the screen to a pipe; perhaps you have seen it:
Here's the rub: The rules change on every level. In order to do well you have to be quick to change your view of how the new virtual world works. That takes a flexible mind and accurate interpretation of the cues that the game gives you.
I sent this to some of my colleagues and have concluded, anecdotally, that their mental flexibility roughly correlates with their results in the game. I ...
I probably came off as more "anticapitalist" or "collectivist" than I really am, but the point is important: betraying your partners has long-term consequences which aren't apparent when you only look at the narrow version of this game.
This is actually the real meaning of "selfishness." It is in my own best interest to do things for the community.
Collectivists and anti-capitalists seem to either not realize or ignore the fact that greedy people aren't really acting in their own best interest if they are making enemies in the process.
With mechanical respiration, survival with ALS can be indefinitely extended.
What a great opportunity to start your transhuman journey (that is, if you are indeed a transhumanist). Admittedly these are not the circumstances you or anyone would have chosen, but here we are nonetheless.
If you decide to document your process, then I look forward to watching your progression out of organic humanity. I think it is people like you who have both the impetus and the knowledge to really show how transhuman technology can bolster our society.
Cheers!
Upon reading that link (which I imagine is now fairly outdated?), his theory falls apart under the weight of its coercive nature, as the questioner points out.
It is understood that the impact of an AI will be on all of humanity, regardless of its implementation, if it is used for decision making. As a result, consequentialist utilitarianism still holds a majority-rule position, as the link discusses, which implies that the decisions the AI would make would favor a "utility" calculation (spare me the argument about utilons; as an economis...
"Utilons" are a stand-in for "whatever it is you actually value"
Of course - which makes them useless as a metric.
we tend to support decision making based on consequentialist utilitarianism
Since you seem to speak for everyone in this category - how did you come to the conclusion that this is the optimal philosophy?
Thanks for the link.
Maybe I'm just dense, but I have been around a while and searched, yet I haven't stumbled upon a top-level post or anything of the like here, on FHI, SIAI (other than ramblings about what AI could theoretically give us), OB, or otherwise which either breaks it down or gives a general consensus.
Can you point me to what you are talking about?
I never see discussion on what the goals of the AI should be. To me this is far more important than any of the things discussed on a day to day basis.
If there is not a competent theory on what the goals of an intelligent system will be, then how can we expect to build it correctly?
Ostensibly, the goal is to make the correct decision. Yet there is nearly no discussion of what constitutes a correct decision. I see lots of contributors talking about calculating utilons, which demonstrates that most contributors are hedonistic consequentialist utilitarians....
I had not read that part. Thanks.
I do not see any difference between inductive bias as it is written there and the dictionary and Wikipedia definitions of faith: