Comment author: Fhyve 04 August 2014 01:44:14AM *  3 points [-]

The most charitable take on it that I can form is similar to Scott's on MBTI (http://slatestarcodex.com/2014/05/27/on-types-of-typologies/). It might not be validated by science, but it provides a description language with a high degree of granularity for something that most people don't have a good description language for. On this interpretation, it is more of a theory in the social-sciences sense: a lens through which to look at human motivation, behaviour, etc. This is probably a different, and much weaker, claim than the one people at Leverage would make.

I don't know how I feel about the allegations at the end. It seems that, connection theory aside, Leverage is doing good work, and having more money is generally better. I would neither endorse nor criticize their use of it, but since I don't want those tactics used by arbitrary people, I'd fall on the side of criticizing. I would also recommend that the aforementioned creator not be so open about his ulterior motives and some other things he has mentioned in the past. All in all, Connection Theory is not what Leverage is selling it as.

Edit: The above only addresses the theory side. As for the therapy side (or however they are framing the actual practice): a therapy doesn't need its underlying theory to be correct in order to be effective. I am rather confident that actually doing the connection theory exercises will be fairly beneficial, though actually doing a lot of things that come out of psychology will probably be fairly beneficial too. And, the hole in your wallet aside, talking to the aforementioned creator probably is as well.

Comment author: Jiro 01 August 2014 09:32:38PM 13 points [-]

It sounds to me like some of the criticisms of this can be extended to much of LW's unusual ideas, such as the nature of the unfriendly AI danger and its solution: invented by amateurs, not peer reviewed, unknown outside of this community, etc.

Comment author: Fhyve 04 August 2014 01:25:16AM 8 points [-]

I'd say that Nick Bostrom (a respected professor at Oxford) writing Superintelligence (and otherwise working on the project), this tweet (https://twitter.com/elonmusk/status/495759307346952192), and some high-profile research associates and workshop attendees (Max Tegmark, John Baez, quite a number of Google engineers) give FAI much more legitimacy than connection theory has.

Comment author: gwern 01 August 2014 02:55:14PM *  18 points [-]

I didn't realize Connection Theory was even still around (the last time I remember reading anything about Leverage Research doing anything was when they failed at polyphasic sleep, which was like a year or two ago now?). Good to have some criticism of it, I suppose.

Comment author: Fhyve 04 August 2014 01:19:13AM 2 points [-]

If you want a more precise date for whatever reason, it was right at the end of the July 2013 workshop, which ran July 19-23. A number of Leverage folk there had just started the experiment.

Comment author: Fhyve 04 August 2014 12:48:59AM 10 points [-]

I'm currently interning at MIRI; I've had a short technical conversation with Eliezer, a multi-hour conversation with Michael Vassar, and other people seem to be taking me as somewhat of an authority on AI topics.

Comment author: Douglas_Knight 13 March 2014 05:22:22AM 5 points [-]

You could do this with polls instead of karma. The advantage of karma is that it provides an incentive for people to play to win. The disadvantage is hiding comments.

Comment author: Fhyve 14 March 2014 04:31:47AM 2 points [-]

I agree. I want to comment on some of the downvoted posts, but I don't want to pay the karma.

Comment author: Fhyve 14 March 2014 04:21:08AM *  -1 points [-]

Irrationality Game:

Politics (in particular, the actions of large governments such as the US, China, and Russia) is a major threat to the development of friendly AI. Conditional on FAI progress having stopped, I give a 60% chance that it was because of government interference rather than existential risk or some other problem.

Comment author: Eliezer_Yudkowsky 19 February 2014 06:21:55AM 15 points [-]

Who are these mysterious straw Bayesians who refuse to use algorithms that work well and could easily turn out to have a good explanation later? Bayes is epistemological background, not a toolbox of algorithms.

Comment author: Fhyve 22 February 2014 09:07:37PM 4 points [-]

"Bayes is epistemological background not a toolbox of algorithms."

I disagree: I think you are lumping together two things that don't necessarily belong together. There is Bayesian epistemology, which is philosophy describing in principle how we should reason, and there is Bayesian statistics, something that certain career statisticians use in their day-to-day work. I'd say that frequentism does fairly poorly as an epistemology, but it can be pretty useful in statistics if used "right". It's nice to have nice principles underlying your statistics, but sometimes ad hoc methods, experience, and intuition just work.
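To illustrate with a toy example (hypothetical numbers, not from the comment above): when estimating a binomial proportion, the frequentist maximum-likelihood estimate and a Bayesian posterior mean under a flat prior nearly coincide, which is one reason the philosophical disagreement need not change routine statistical practice much.

```python
# Hypothetical example: estimating a coin's bias from 100 flips.
# Frequentist MLE vs. Bayesian posterior mean under a flat Beta(1, 1) prior.
heads, flips = 62, 100

# Frequentist maximum-likelihood estimate: observed proportion.
mle = heads / flips

# Bayesian: Beta(1, 1) prior + binomial likelihood gives a
# Beta(1 + heads, 1 + tails) posterior, whose mean is:
posterior_mean = (heads + 1) / (flips + 2)

print(round(mle, 3))             # → 0.62
print(round(posterior_mean, 3))  # → 0.618
```

With any reasonable amount of data, the two answers differ only in the third decimal place; the prior's pull vanishes as the sample grows.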

Comment author: James_Miller 02 February 2014 09:51:59PM 0 points [-]

There are probably autistic LW readers who would score relatively low on IQ tests because they would do poorly on some subsections.

Comment author: Fhyve 03 February 2014 08:03:48AM 0 points [-]

Depending on the IQ test, I don't think your overall score will go down much if you don't do well on a subsection or two. This is low confidence, though, and based on only one data point: I have subsection scores ranging from 102 to 136, and my total score somehow comes out to 141.

Comment author: ChrisHallquist 22 November 2013 05:08:21PM -2 points [-]

As soon as I saw that in the post, I began checking it. In my head. Have I mentioned I'm good at math?

Comment author: Fhyve 25 November 2013 08:19:50AM -1 points [-]

That only means you are good at arithmetic. Can you prove, say, that there are no perfect squares of the form

3^p + 19(p-1)

where p is prime?
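As a numerical sanity check (a sketch, not a proof; the actual proof goes through Fermat's little theorem and a congruence mod 4), one can verify that no small prime yields a perfect square:

```python
from math import isqrt

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def is_square(n):
    """Exact perfect-square test using integer square root."""
    r = isqrt(n)
    return r * r == n

# Look for primes p with 3**p + 19*(p - 1) a perfect square.
counterexamples = [p for p in primes_up_to(200)
                   if is_square(3 ** p + 19 * (p - 1))]
print(counterexamples)  # → []
```

The empty list is what the claim predicts: no prime up to 200 produces a perfect square.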

Comment author: Ishaan 19 November 2013 04:50:48AM *  1 point [-]

You appear to have missed the point of my reply.

Let's check: "I can only have preferences over things that exist. The ship probably exists, because my memory of its departure is evidence. The parallel worlds have no similar evidence for their existence." Is that correct paraphrasing?

Before the ship leaves, you know that sometime in the future there will be a future-ship in a location where it cannot interact with future-you.

By the same token, you can observe the laws of physics and the present-state of the universe. If, for some reason, your interpretation of those laws involves Many Worlds splitting off from each other, then, before the worlds split, you know that sometime in the future there will be a future-world unable to interact with future you.

For future-you, the existence of the future-ship is not a testable theory, but the fact that you have a memory of the ship leaving counts as evidence.

For future-you, the existence of the Other-Worlds is not a testable theory, but if Many-Worlds is your best model, then your memory of the past-state of the universe, combined with your knowledge of physics, counts as evidence for the existence of certain specific other worlds.

In your Faeries example, the Faeries do not merit consideration because it is impossible to get evidence for their existence. That's not true in the quantum bomb scenario - if we accept Many Worlds, then for the survivors of the quantum bomb, the memory of the quantum bomb's existence is evidence that there exist many branches with Other Worlds in which everyone was wiped out by the bomb.

So, the actual question should be:

1) Does Many-Worlds fit in our ontology - that is, do universes on other branches, constructed in the Many-Worlds format, even fit within the definition of "Reality"? (For example, if you told me there was a parallel universe which never interacted with us in any way, I'd say that your universe wasn't Real by definition. Many Worlds branches are a gray area because they do interact: current Other Worlds only interact with the past, and the present only interacts with future Other Worlds, not current ones.)

2a) If we decide that the Other Worlds from Many Worlds qualify as "Real", can Many Worlds ever be a hypothesis parsimonious enough not to be Pascal-Wager-ish? The Faeries qualify as "Real" because they do cause the raindrops to fall, but by the nature of that hypothesis it can never be parsimonious enough to rise above Pascal-Wager thresholds. Is Many-Worlds the same way? (From your reply, I gathered that your answer is "yes", but I disagreed with your reason - see the paragraph beginning with "In your Faeries example..." - which is why I pointed out in my first reply that if you accept Many Worlds you can have evidence that points to certain sorts of worlds existing.)

2b) If we decide that the other branches do not qualify as Real, can we make a definition of reality that does not exclude light-cone-leaving-spaceships?

3) And how do we construct our preferences, in relation to what we have defined as "Real"? (For example, we could simply say that despite having an ontology that acknowledges all the branches of Many Worlds as Real, our preferences only care about the world that we end up in.)

Comment author: Fhyve 25 November 2013 07:30:02AM 0 points [-]

The spaceship "exists" (I don't really like using "exists" in this context because it is confusing) in the sense that, in the futures where someone figures out how to travel faster than light, I know I can interact with the spaceship. What is the probability that I will be able to travel faster than light in the future?

Then for Many Worlds, what is the probability that I will be able to interact with one of the Other Worlds?

I would not care more about things if I gain information that I can influence them, unless I also gain information that they can influence me. If I gain credence in Many Worlds, then I only care about Other Worlds to the extent that it might be more likely for them to influence my world.
