All of WikiLogicOrg's Comments + Replies

The issues you raised are interesting but actually make this a pretty good example of my problem: how do you account for weak evidence and assign it a proper likelihood? One way I am testing this is by taking an example which I think is agreed to be 'most likely' (that he existed as opposed to not existing). Then I want to work backwards and see if there is a method for assessing probability that seems to work well on small-scale questions, like the probabilities of minted coins, and gives me the expected answer when I add it all together.

At this point I am ... (read more)
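Not part of the original comment, but a minimal Python sketch of the "add it all together" idea described above: if each piece of weak evidence (a coin hoard, a text, an inscription) is treated as roughly independent given the hypothesis, its strength can be written as a likelihood ratio and combined with a prior via Bayes' rule. The function and every number below are invented for illustration.

```python
import math

def combine_evidence(prior_prob, likelihood_ratios):
    """Combine a prior with several independent pieces of evidence.

    Each likelihood ratio is P(evidence | hypothesis) / P(evidence | not hypothesis).
    Working in log-odds turns "adding up the evidence" into literal addition.
    """
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Hypothetical numbers: an agnostic prior plus three weak-to-moderate pieces of evidence.
prior = 0.5
evidence = [3.0,   # coins naming the person (weak: coins can postdate or mythologize)
            5.0,   # near-contemporary written sources
            2.0]   # sites traditionally attributed to him
print(combine_evidence(prior, evidence))  # ~0.97
```

The catch, as the reply below points out, is the independence assumption: whether a coin even counts as evidence depends on background beliefs about the hypothesis itself.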

0DanArmak
My point was that 'probability of minted coins' isn't a much "smaller-scale" question than 'probability of Alexander', that is, it isn't much simpler and easier to decide. In our model of the world, P(coins) doesn't serve as a simple 'input' to P(Alexander). Rather, we use P(Alexander) to judge the meaning of the coins we find. This is true not only on the Bayesian level, where all links are bidirectional, but in our high-level conscious model of the world, where we can't assign meaning to a coin with the single word Alexander on it without already believing that Alexander did all the things we think he did. There's very little you can say about these coins if you don't already believe in Alexander.
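Not part of DanArmak's comment, but a toy numerical illustration of the point: the likelihood assigned to "a coin with this name turns up" is itself a function of the rest of your model, so the same find supports the hypothesis strongly under one set of background assumptions and barely at all under another. All numbers are invented.

```python
def posterior(prior, p_coin_given_h, p_coin_given_not_h):
    """Bayes' rule for a single observation: a coin bearing the name."""
    numerator = prior * p_coin_given_h
    return numerator / (numerator + (1 - prior) * p_coin_given_not_h)

prior = 0.5  # arbitrary starting point

# Background model A: such coins are almost always struck for real, reigning rulers.
print(posterior(prior, p_coin_given_h=0.9, p_coin_given_not_h=0.05))  # ~0.95

# Background model B: coins are often struck for legendary or deified figures too.
print(posterior(prior, p_coin_given_h=0.9, p_coin_given_not_h=0.6))   # ~0.60
```

The choice between model A and model B is not something the coin itself can settle, which is the circularity being pointed at.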

I find it more likely that the times it degenerates into a fight are due to a lack of ability on the part of one of the debaters. The alternative is to believe that people like ourselves are somehow special. It is anecdotal, but I used to be incredibly stubborn until I met some good teachers and mentors. Now I think the burden of proof lies on the claim that, despite our apparent similarities, a large portion of humans are incapable of being reasoned with no matter how good the teacher or delivery. Of course I expect some people physically cannot reason due to brai... (read more)

0entirelyuseless
You seem to be proposing a simplistic theory of goals, much like the simplistic theory of goals that leads Eliezer to the mistaken conclusion that AI will want to take over the world. In particular, happiness is not one unified thing that everyone is aiming at, that is the same for them and me. If I admit that I do what I do in order to be happy, then a big part of that happiness would be "knowing the truth," while for them, that would be only a small part, or no part at all (although perhaps "claiming to possess the truth" would be a part of it for them -- but it is really not the same to value claiming to possess the truth, and to value the truth.) Additionally, using "happiness" as it is typically used, I am in fact less happy on account of valuing the truth more, and there is no guarantee that this will ever be otherwise.
1ChristianKl
I guess when you say stubborn you mean that you tried to be independent and didn't listen to other people. That's not the issue with the person who's religious because most of his friends are religious. A good teacher who teaches well can get a lot of people to adopt a specific belief, but that doesn't necessarily mean that the students arrive at the belief through "reasoning". If the teacher taught a different belief on the same topic, he would also get that across. What evidence do you have that education in fallacies or biases helps people think better? There seem to be many people who want to believe that's true, but as far as I know the decision science literature doesn't consider that belief to be true.

Yes I feel that you are talking in vague but positive generalities.

First, on a side note, what do you mean by "but positive"? As in idealistic? Excuse my vagueness; I think it comes from trying to cover too much at once. I am going to pick one fundamental idea I have and see your response, because if you update my opinion on this, it will cover many of the other issues you raised.

I wrote a small post (www.wikilogicfoundation.org/351-2/) on what I view as the starting point for building knowledge. In summary, it says our only knowledge is that o... (read more)

0TheAncientGeek
What I mean by "vague but positive" is that you keep saying there is no problem, but not saying why. That's a standard starting point. I am not seeing anything that dissolves the standard problems. We all have the same meta-desire, whilst having completely different object level desires. How is that helping?

Thanks for taking the time to write all that for me. This is exactly the nudge in the right direction I was looking for. I will need at least the next few months to cover all this and all the further Google searches it sends me down. Perfect, thanks again!

Thanks for the links and info. I actually missed this last time around, so I cannot comment much more until I get a chance to research Jaynes and read that link.

Who decides on what information is relevant? If I said I want to use men without beards and Alexander never had one, that would be wrong (at least my intuition tells me it would be) as I am needlessly disregarding information that skews the results. You say use all the info, but what about collecting info on items such as a sword or a crown? I feel that is not relevant and I think most would agree. But where to draw the line? Gram_Stone pointed me to the reference class problem, which is exactly the issue I face.
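Not from the original comment, but a toy illustration of why the choice matters: the base rate you get depends entirely on which reference class you put the case into, and nothing in the data itself says which class is "right". All counts below are invented.

```python
# Hypothetical counts: how often figures in each reference class turn out to be
# historical rather than legendary. The numbers are made up; the point is that
# the estimate swings with the chosen class, not with the evidence about the case.
reference_classes = {
    "named on near-contemporary coins":       (95, 100),  # (historical, total)
    "subject of later heroic legends":        (40, 100),
    "beardless rulers in surviving statuary": (70, 100),  # intuitively irrelevant trait
}

for name, (historical, total) in reference_classes.items():
    print(f"{name}: base rate {historical / total:.2f}")
```

Which classes to include, and where to stop, is exactly the reference class problem mentioned above; the arithmetic is trivial once the class is fixed, but fixing it is the hard part.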

From the correct perspective, it is more extraordinary that anyone agrees.

Correct by whose definition? In a consistent reality that is possible to make sense of, one would expect evolved beings to start coming to the same conclusions.

Corrected by whose definition of correct?

From this question I assume you are getting at our inability to know things and the idea that what is correct for one may not be for another. That is a big discussion, but let me say that I premise this on the idea that a true skeptic realizes we cannot know anything for sure an... (read more)

0TheAncientGeek
I wouldn't necessarily expect that, for the reasons given. You have given a contrary opinion, not a counter-argument. I don't see how it addresses the circularity problem. Or that. Is everyone going to be on the same axioms? The existence of a single reality isn't enough to guarantee convergence of beliefs, for the reasons given. That doesn't make sense. The fact that something was settled eventually doesn't mean that your problems are going to be settled at a time convenient for you. Yes, I feel that you are talking in vague but positive generalities.

I think the probability is close to zero because trying to "drill down" to force agreement between people results in fights, not in agreement.

We are not in agreement here! Do you think it's possible to discuss this and have one or both of us change our initial stance, or will that attempt merely result in a fight? Note, I am sure it is possible for it to result in a fight, but I do not think it's a foregone conclusion. On the contrary, I think most worthwhile points of view were formed by hearing one or more opposing views on the topic.

they will each support their own position by reasons which are effective for them but not for the other person

... (read more)
0entirelyuseless
I agree that we are not in agreement. And I do think that if we continue to respond to each other indefinitely, or until we agree, it will probably result in a fight. I admit that is not guaranteed, and there have been times when people that I disagree with changed their minds, and times when I did, and times when both of us did. But those cases have been in the minority.

"We are all trying to reach a certain goal and a truer map of reality helps us get there..." The problem is that people are interested in different goals and a truer map of reality is not always helpful, depending on the goal. For example, most of the people I know in real life accept false religious doctrines. One of their main goals is fitting in with the other people who accept those doctrines. Accepting a truer map of reality would not contribute to that goal, but would hinder it. I want the truth for its own sake, so I do not accept those doctrines. But they cannot agree with me, because they are interested in a different goal, and their goal would not be helped by the truth, but hindered.

Thanks for the suggestion. Added to reading list and commented on the stats site.

Sure, but why will they disagree? If I say there is a 60% chance of x and you say no, it is more like 70%, then I can ask you why you think it's 10% more likely. I know many will say "it's just a feeling", but what gives that feeling? If you ask enough questions, I am confident one can drill down to the reasoning behind the feeling of discomfort at a given estimate. Another benefit of WL is it should help people get better at recognizing and understanding their subconscious feelings so they can be properly evaluated and corrected. If you do not agree, it would be really interesting to hear your thoughts on this. Thanks
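Not in the original comment, but a minimal sketch of what "drilling down" on a 60% versus 70% disagreement could look like in practice: decompose the headline estimate into sub-claims each person can put a number on, and see which sub-claim actually carries the disagreement. The decomposition and all the numbers are entirely hypothetical.

```python
# Hypothetical decomposition of P(project succeeds) into conjunctive sub-claims.
# Each person assigns their own probability to each step; multiplying them
# (assuming independence) shows where the 60%-vs-70% gap actually comes from.
sub_claims = {
    "enough contributors join":      {"me": 0.85, "you": 0.90},
    "arguments can be formalized":   {"me": 0.80, "you": 0.90},
    "moderation keeps quality high": {"me": 0.88, "you": 0.86},
}

for person in ("me", "you"):
    p = 1.0
    for claim, estimates in sub_claims.items():
        p *= estimates[person]
    print(f"{person}: overall estimate {p:.2f}")  # ~0.60 vs ~0.70
```

Once the gap is localized to one sub-claim (here, whether arguments can be formalized), the conversation has something concrete to examine instead of a bare feeling.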

0entirelyuseless
I don't agree. If you're right, we can do it right here and now, since we do not agree, which means that we are giving different probabilities of your project working -- in particular, I say the probability of your project being successful is very close to zero. You presumably think it has some reasonable probability. I think the probability is close to zero because trying to "drill down" to force agreement between people results in fights, not in agreement.

But to address your argument directly, each person will end up saying "it is just a feeling" or some equivalent, in other words they will each support their own position by reasons which are effective for them but not for the other person. You could argue that in this case they should each adopt a mean value for the probability, or something like that, but neither will do so. And since I have given my answer, why do you think there is a reasonable probability that your project will succeed?
3TheAncientGeek
From the correct perspective, it is more extraordinary that anyone agrees. Yes, but that is not where the problems stop, it is where they get really bad. Object-level disagreements can maybe be solved by people who agree on an epistemology. But people aren't in complete agreement about epistemology. And there is no agreed meta-epistemology to solve epistemological disputes; that's done with the same epistemology as before. And that circularity means we should expect people to inhabit isolated, self-sufficient philosophical systems. Corrected by whose definition of correct? Do you not see that you are assuming you will suddenly be able to solve the foundational problems that philosophers have been wrestling with for millennia?

Thanks for an excellent, in-depth reply!

https://wiki.lesswrong.com/wiki/Debate_tools

Brilliant resource! Thanks for pointing it out.

You bring up a few worries, although I think you also realize how I plan to deal with them. (Whether I am successful or not is another matter!)

One problem here is that some people are simply better debaters even though their ideas may be unsound

One part of this project is to make some positive aspects of debating skills easy to pick up by newbies using the site. Charisma and confidence are worthless in a written forma... (read more)

Hello!

I am new to this site but judging from HPMOR and some articles I read here, I think I have come to the right place for some help.

I am working on the early stages of a project called WikiLogic which has many aims. Here are some that may interest LW readers specifically:

-Make skills such as logical thinking, argument construction and fallacy recognition accessible to the general public

-Provide a community created database of every argument ever made along with their issues and any existing solutions (see the data-structure sketch after this list)

-Highlight the dependencies between different fields i... (read more)
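As flagged in the list above, here is a rough sketch of how a "database of every argument" might be represented: claims as nodes, with support/attack links recording their dependencies, so that issues with one argument propagate to everything built on it. The field names and structure are guesses for illustration, not a description of the actual WikiLogic design.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in a hypothetical argument graph."""
    claim_id: str
    text: str
    supports: list = field(default_factory=list)     # ids of claims this one supports
    attacks: list = field(default_factory=list)      # ids of claims this one challenges
    open_issues: list = field(default_factory=list)  # unresolved objections

# A tiny example graph.
claims = {
    "c1": Claim("c1", "Alexander the Great existed"),
    "c2": Claim("c2", "Coins bearing his name were minted in his lifetime",
                supports=["c1"]),
    "c3": Claim("c3", "Coins can be struck long after a legendary figure's era",
                attacks=["c2"]),
}

def dependents(claims, claim_id):
    """Claims that lean on the given claim, so doubts can propagate upward."""
    return [c.claim_id for c in claims.values() if claim_id in c.supports]

print(dependents(claims, "c1"))  # ['c2']
```

Highlighting dependencies between fields would fall out of the same structure: a claim from one domain that supports a claim in another shows up as a cross-domain edge.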

3Regex
Welcome! I've seen these sorts of argument maps before:

https://wiki.lesswrong.com/wiki/Debate_tools
http://en.arguman.org/

It seems there is some overlap with your list here. Generally what I've noticed about them is that they focus very hard on things like fallacies. One problem here is that some people are simply better debaters even though their ideas may be unsound. Because they can better follow the strict argument structure they 'win' debates, but actually remain incorrect. For example: http://commonsenseatheism.com/?p=1437 He uses mostly the same arguments debate after debate and so has a supreme advantage over his opponents. He picks apart the responses, knowing full well all of the problems with typical responses. There isn't really any discussion going on anymore. It is an exercise in saying things exactly the right way without invoking a list of problem patterns. See: http://lesswrong.com/lw/ik/one_argument_against_an_army/

Now, this should be slightly less of an issue since everyone can see what everyone's arguments are, and we should expect highly skilled people on both sides of just about every issue. That said, the standard for actual solid evidence and arguments becomes rather ridiculous. It is significantly easier to find some niggling problem with your opponent's argument than to actually address its core issues. I suppose I'm trying to describe the effects of the 'fallacy fallacy.' Thus a significant portion of manpower is spent on wording and putting the argument exactly right instead of dealing with the underlying facts.

You'll also have to deal with the fact that if a majority of people believe something, then the sheer amount of manpower they can spend on shoring up their own arguments and poking holes in their opponents' will make it difficult for minority views to look like they hold water. What are we to do with equally credible citations that say opposing things?

'Every argument ever made' is a huge goal. Especially with th