Jasnah Kholin

Comments
In my own frame, Yudkowsky's post is a central example of Denying Reality. Duncan's Fabricated Options are another example of Denying Reality. When reality is too hard to deal with, people... deny reality. They refuse to accept it.

The only protection I know is to leave a line of retreat - and it's easier if you do it as an algorithm, even when you honestly believe it's not needed.

Not all your examples are Denying Reality by my categorization; others have a different kind of Unthinkable thing. And sometimes they mix together - the Confused Young Idealist may be actually confused. There are two kinds of Unthinkables: the one where, if someone points it out to you, you say "wow, I would never have thought of that myself!" and then understand, and the one where the reaction is angry denial (and of course it's not actually two; there is a lot of space on the spectrum between them).


Not very helpful, but... I'm struggling with how to talk to people who do that. I tried various strategies, and came back to telling it as it is. It actually gets me better results than trying to sneak around it. Not that I got good results, but... I think it reveals useless conversations faster, AND lets potentially good conversations actually occur.

Are you sure the math holds up? There are a bunch of posts about how to spend money to buy time, and if I need to choose between wasting 50 HOURS on investigation and just buying the more expensive product, it's pretty obvious to me that the second option is better. Maybe not in this example, though I see it as a false dichotomy - I tend to go with "ask in a specialized, good-looking Facebook group" as the way to choose when the stakes are high.

In the last few years I have internalized more and more that I was raised by poorer people than I am now, that my heuristics just don't count all the time I waste comparing products or seeking trusted professionals, and that it would have been best for me to just buy the expensive phone instead of asking people for recommendations and specs.
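To make that trade-off concrete, here is a rough back-of-the-envelope sketch of the comparison I have in mind (all the numbers are made up for illustration):

```python
# Rough break-even check: is the research time worth the price difference?
# All numbers here are made-up placeholders.

hours_spent_comparing = 50    # time spent investigating the cheaper options
value_of_an_hour = 30         # what an hour of my time is worth to me, in dollars
price_premium = 400           # extra cost of just buying the more expensive product

cost_of_research = hours_spent_comparing * value_of_an_hour   # 1500
if cost_of_research > price_premium:
    print("Just buy the more expensive product.")
else:
    print("The comparison shopping pays for itself.")
```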

Also, and this is important - the interpersonal dynamics of trust networks can be so much more expensive than mere money. I preferred to work and pay for my degree myself rather than ask my parents for help. I watch in real time as one of my friends, who depends on reputation for her work, constantly censors herself and frets over whether she should censor herself.

Basically, I would give my past self the opposite advice, and what I want is an algorithm - how do you know whether you want more trust networks or more markets?

Or, actually, I want a BETTER MAP. Facebook recommendations are not exactly a trust network, but not a market either. I don't think this distinction cuts reality at the joints. There is a lot to explore here - although I'm not the one who should do the exploring. It will not be useful for me, as I try to move in the direction of spending less time and more money on things.

It sometimes happens in conversations that people talk past each other, don't notice that they both use the word X to mean two different things, and behave as if they agree on what X is but disagree on where to draw the boundary.

From my point of view, you said some things that make it clear you mean a very different thing than I do by "illegible". A proof of a theorem can't be illegible to SOMEONE. Illegibility is a property of the explanation, not of the explanation-and-person pair. I have encountered papers and posts above my knowledge in math and computer science; I didn't understand them despite them being legible.


You also have a different approach to concepts in general. I don't hold a concept because it makes it easier for people to debug. I try to find concepts that reflect the territory most precisely. That is the point of concepts TO ME.

I'm not sure it's worth it to go all the way back, and I have no intention of going over your post and adding "to you" in all the places where it should be added, to make it clearer that goals are something people have, not a property of the territory. But if you want to do half of that work, we can continue this discussion.

This is one of the posts where I wish for three examples of the thingy described, because I see two options:
1. This is a weakman of the position I hold, in which I seek ways to draw a map that corresponds to the territory, have my own estimations of what works and what doesn't, and disagree with someone about that - and that someone, instead of providing evidence that his method provides good predictions or insights, just says I should have more slack.

All your description of why to believe in things sounds anti-Bayesian. It's not boolean believe-disbelieve; update yourself incrementally! If I believe something provides zero evidence I will not update; if the evidence is dubious, I will update only a little (a small sketch of what I mean follows below, after the two options). And then the question is how much credence you assign to which evidence, and what methods you use to find evidence.

2. It's a different-worlds situation, where the post writer encountered a problem I didn't.

And I have no way to judge that without at least one, and preferably more, actual examples of the interaction - preferably linked to, not described by the author.
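As a small illustration of the incremental updating I mean in option 1 (the numbers here are only illustrative):

```python
# Incremental Bayesian updating in odds form (illustrative numbers only):
# posterior odds = prior odds * likelihood ratio.

def update(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.3
print(update(prior, 1.0))    # zero evidence: no update (stays 0.30)
print(update(prior, 1.2))    # dubious evidence: a small update (~0.34)
print(update(prior, 10.0))   # strong evidence: a large update (~0.81)
```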

A list of implicit assumptions in the post that I disagree with:

 

  • that there is a significant number of people who see advice and whose cached thought is "that can't work for me".
  • that this cached thought is a bad thing.
  • that you should try to apply every piece of advice you encounter to yourself.
  • that it's hard.
  • that the fact that it's hard is evidence that it's a good and worthy thing to do.
  • that "being a kind of person" is a good category to think in, or a good framing to have.

 

I also have a lot of problems with the example - which is an example of advice that most people try to follow but shouldn't; they should think about their probability of success by looking at the research, and not by thinking "you can be any kind of person" - a statement whose truth value is obviously false.

This is not how the third conversation should go, in my opinion. Instead, you should inquire of your Inner Simulator, and then say that you expect learning GTD will make them more anxious, or will work for two weeks and then stop so the initial investment of time won't pay off, or that in the past you encountered people who tried it and it made them crush down parts of themselves, or that you expect it will work too well and lead to burnout.

It is possible to compare illegible intuitions - by checking what different predictions they produce, and by comparing possible differences in the sorting of the training data.

In my experience, different illegible intuitions come from people seeing different parts of the whole picture, and it's valuable to try to understand that better. Also, making predictions, describing the differences between the world where you're right and the world where you're wrong, and having at least two different hypotheses are all ways to make illegible intuitions better.

One of the things that I searched for in EA and didn't find, but think should exist: an algorithm, or algorithms, to decide how much to donate, as a personal-negotiation thing.

There is Scott Alexander's post about 10% as a Schelling point and a way to placate anxiety, and there is the Giving What We Can calculation, but neither has anything to do with personal values.

I want an algorithm that is about introspection - about not smashing your altruistic and utilitarian parts, but not the other parts either, about finding what number is the right number for me, by my own Utility Function.

And I just... didn't find those discussions.
 

In dath ilan, where people expect to be able to name a price for more or less everything, and did extensive training to have the same answer to the questions 'how much would you pay to get this extra?', 'how much additional payment would you forgo to get this extra?', 'how much would you pay to avoid losing this?', and 'how much additional payment would you demand if you were losing this?', there are answers.
 

What is the EA analog? How much am I willing to pay if my parents will never learn about it? If I could press a button and pay 1% more in taxes that would go to top GiveWell charities, without all the second-order effects except the money, what number would I choose? What if negative numbers were allowed? What about the creation of a city with rules of its own, that takes taxes for EA causes - how much would I accept then?
 

Where are the "how to figure out how much money you want to donate in a Lawful way" exercises?
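For example, here is the kind of exercise I imagine (the framings echo the dath ilan questions above; the rest is my own made-up sketch, not an existing EA tool):

```python
# A hypothetical self-elicitation exercise (my own invention, not from any EA source):
# answer the same "how much per year?" question under several framings,
# then check how consistent the answers are.

framings = [
    "How much would you donate if nobody ever learned about it?",
    "How much extra tax (going to top GiveWell charities) would you vote for?",
    "How much would you defend to your most altruistic friend?",
    "How much would you defend to your most selfish friend?",
]

def elicit(answers):
    """answers: one yearly amount per framing, filled in by introspection."""
    low, high = min(answers), max(answers)
    midpoint = (low + high) / 2
    if high - low > 0.5 * midpoint:
        return f"Answers spread from {low} to {high}: the number isn't settled yet."
    return f"Answers cluster around {midpoint}: a workable starting number."

for question in framings:
    print(question)
print(elicit([1000, 3000, 5000, 500]))  # hypothetical answers, one per framing
```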
 

Or maybe it's because far too many people prefer, and try, to have their thinking, logical part win the internal battle against the other, more egotistical parts?
 

Where are all the posts about "how to find out what you really care about in a Lawful way"? The closest I have come is Internal Double Crux and the multi-agent model of the soul and all its versions. But where are my numbers?
 

So, I'm at the same time happy there is an answer, but can't be happy with the answer itself. Which is to say, I tried to go and find the points I agree with, and found one point of disagreement after another. But I also believe this post deserves a more serious answer, so I will try to write at least part of my objections.

I do believe that x-risk, and societies destroying themselves as they become more clever than wise, is a real problem. But I disagree with the framing that the ants are the ones to blame. It's running from the problem. If the grasshoppers are to grow, even if more slowly, they too may bring atomic winter.

And you just... assume it away, in the way of the worst Utopian writing, where societies have features that present-day people hate and find bad, but somehow everyone is happy and no one has any problem with it and everything is okay. It just... feels cheap to me.

And if you assume no growth at all, then... what about all the people who value growth? There are a lot of us in the world. If it's actually "steady-state existence" - not sustainable growth, but everything staying the same way - it's really, really, really bad by my utility function, and the one good thing I can say about it is that such a state doesn't look stable to me. There have always been innovators and progressors. You can't have your stable society without some Dystopian repression of them.

But you can have dath ilan. This was my main problem with the original parable: it was very black-and-white. dath ilan didn't come to the ants and ask for food; instead, it offered food. But it is definitely not a steady state, and to my intuition it looks both possible and desirable.

And it also doesn't assume that the ants throw decision theory out the window; the original parables explicitly mentioned it. I find the representation of ants that forgo cooperation in a prisoner's dilemma strawmanish.

But besides all that, there is another, meta-level point. There was prediction after prediction about peak oil and its consequences, and they all proved wrong, as did other predictions from that strand of socialism. From my point of view, the algorithm generating these predictions is untrustworthy. I don't think Less Wrong is the right place for all those discussions.

And I don't plan to write my own, dath-ilani reply to the parables.

But I don't think some perspectives are missing; I think they were judged false and ignored afterwards. And the way in which the original parables felt fair to the ants, while these don't, is evidence that this is a good rule to follow.

It's not a bubble; it's trust in the ability to have a fair discussion, or the absence of that trust. Because a discussion in which my opinions are assumed to be the result of a bubble and not honest disagreement... I don't have words to describe the sense of ugliness, of wrongness, that this creates. It's the same sense that came from feeling the original post was honest and fair, and this one underhanded and strawmanish.

(Everything written here is not very certain and not a precise representation of my opinions, but I already took way too much time to write it, and I think it's better to write it than not.)

This would be much closer to the Pareto frontier than our current social organization! Unfortunately, this is NOT how society works. If you operate like that, you will lose almost all your resources.

But it's more complicated than that - why not gate this on cooperation? Why should I give 1 dollar for 2 of someone else's dollars, when they will not do the same for me?

And this is why the whole scheme doesn't work. Every such plan needs to account for defectors, and it doesn't look like you address that anywhere.
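Here is a toy sketch of what "gating on cooperation" could look like, just to make the defector problem concrete (entirely my own toy model, not something from the post):

```python
# Toy model of gating the transfer on cooperation (my own sketch, not from the post):
# I give up 1 dollar to create 2 dollars for the other side only if they have
# committed to the same rule toward me; otherwise no transfer happens.

def conditional_transfer(my_dollars, other_committed):
    """Return (my_cost, their_gain) under a reciprocity-gated 1:2 transfer rule."""
    if other_committed:
        return my_dollars, 2 * my_dollars
    return 0, 0

print(conditional_transfer(1, other_committed=True))   # mutual commitment: (1, 2) in each direction
print(conditional_transfer(1, other_committed=False))  # defector: (0, 0), nothing to exploit
```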

On the issue of politics - most people who get involved in politics make things worse. Before declaring that it's people's duty to do something, it's important to verify that it's a net-positive thing to do. If I look at the people involved in politics and decide that less politics would have been better for society, then my duty is to NOT get involved in politics - or at least, not to get involved beyond the level that I believe is the right level of involvement.

But... I really don't see how all this politics is even connected to the first half of the post, about the right ratio of my utility to the other person's utility.

 

Regarding the first paragraph - Eliezer is not criticizing the Drowning Child story in our world, but in dath ilan: dath ilan, which is utilitarian in such questions, where more or less everyone is utilitarian when children's lives are at stake. We don't live in dath ilan. In our world, it's often the altruistic parts that hammer down the selfish parts, or the warm-fuzzies parts that hammer down the utilitarian ones as heartless and cruel.

EA sometimes does the opposite - there are a lot of stories of burnout.

And in the grand scheme of things, what I want is a way to find which actions in the world will represent my values to the fullest - but this is a problem where I can't learn from dath ilan, which has a lot of things fungible that are not fungible on Earth.

 
