All of Ronak's Comments + Replies

Ronak00

Oh this is nice. I've also come to realise this over time, in different words, and my mind is extremely tickled by how your formulation puts it on an equal footing with other non-explicit-rationality avenues of thought.

I would love to help you. I am very interested in a passion project right now. And we seem to be classifying similar things as hard-won realisations, though we have very different timelines for different things; talking to you might be all-round interesting for me.

Ronak10

My steelman is this (without having read anything downstairs, so I apologise if there's a better one extant): the world is a complicated place, and we all form beliefs based on the things we think are important in the world; and since we are all horrible reasoners, it's impossible to believe of some things that they are important movers of the world without seeing them actually happen and viscerally feeling them change things.

Cognitive biases in yourself are like this, methinks. Your thought processes really need to be broken down repeatedly for you to be ab...

Ronak750

I took it. If it's anything like last year, officially 2/5 of my karma will be from surveys.

Ronak20

So, Romila Thapar is not a Dalit activist, just a historian (I'm guessing this is a source of confusion; I could be wrong).

I'm saying they should have read up before starting their project.

I can't find the study for some reason, so I'll try and do it from memory. From one city, they randomly picked Dalits (Dalit is a catch-all term coined by B. R. Ambedkar for people of the lowest castes and people outside the caste system, all of whom were treated horribly) and people from the merchant castes, to look for genetic differences. Which is all fine and dandy - but ...

Ronak00

Actually, with the caveat that I don't have any object-level research, I doubt it; they assign a rigidity to the whole thing that seems hard to institute. My point was that 'do there exist genetic differences' is not the issue here.

0Azathoth123
So what is the issue, that geneticists didn't consult with Dalit activists before designing their experiment?
Ronak00

a) Actually Thapar's point wasn't that there were no genetic differences (in fact, the theory of caste promulgated by Dalit activists is that it's created by the prohibition of inter-caste marriage and therefore pretty much predicts genetic differences) - but that the groupings done by the researchers weren't the correct ones.

b) I should actually check that what I surmised is what she said. Thanks for alerting me to the possibility.

0Azathoth123
So do you have independent evidence that the theory promulgated by the Dalit activists is correct? Theories promulgated by activists don't exactly have the best track record.
Ronak30

When I said humanities I didn't mean social sciences; in fact, I thought social sciences explicitly followed the scientific method. Maybe the word points to something different in your head, or you slipped up. Either way, when I say humanities, I actually mean fields like philosophy and literature and sociology which go around talking about things by taking the human mind as a primitive.

The whole point of the humanities is that they're a way of doing things that isn't the scientific method. The disgraceful thing is the refusal to interface properly with scien...

3Azathoth123
Funny, humanities people were saying the same thing about genetic racial differences until said differences started showing up.
Ronak00

In the abstract. Though, undoubtedly, many of the people can do wonders too.

Ronak10

Why? This looks as if you're taking a hammer to Ockham's razor.

1AABoyles
In the strictest sense, yes I am. I design, build and test social models for a living (so this may simply be a case of me holding Maslow's Hammer). The universe exhibits a number of physical properties which resemble modeling assumptions. For example, speed is absolutely bounded at c. If I were designing an actual universe (not a model), I wouldn't enforce upper bounds--what purpose would they serve? If I were designing a model, however, boundaries of this sort would be critical to reducing the complexity of the model universe to the realm of tractable computability. On any given day, I'll instantiate thousands of models. Having many models running in parallel is useful! We observe one universe, but if there's a non-zero probability that the universe is a model of something else (a possibility which Ockham's Razor certainly doesn't refute), the fact that I generate so many models is indicative of the possibility that a super-universal process or entity may be doing the same thing, of which our universe is one instance.
Ronak100

[Please read the OP before voting. Special voting rules apply.]

The humanities are not only a useful method of knowing about the world - but, properly interfaced, they ought to be able to significantly speed up science.

(I have a large interval for how controversial this is, so pardon me if you think it's not.)

0polymathwannabe
Although the social sciences have undeniably helped a lot with our understanding of ourselves, their refusal to follow the scientific method is disgraceful.
3Azathoth123
Do you mean humanities in the abstract or the people currently occupying humanities departments?
Ronak130

Do you mind telling me how you think he's being uncharitable? I agree mostly with your first two statements. (If you don't want to put it on this public forum because it's a hotly debated topic etc., I'd appreciate it if you could PM me; I won't take you down the 'let's argue feminism' rabbit-hole.)

(I've always wondered if there was a way to rebut him, but I don't know enough of the relevant sciences to try and construct an argument myself, except in syllogistic form. And even then, it seems his statements on feminists are correct.)

7gattsuru
For a very quick example, see this Tumblr post. Mr. Alexander finds an example of a neoreactionary leader trying to be mean to a transgender woman inside the NRx sphere, and then shows the vast majority response of (non-vile) neoreactionaries to at least be less exclusionary than that, even though they have ideological issues with the diagnosis or treatment of gender dysphoria. Then he describes a feminist tumblr which develops increasingly misgendering and rude ways to describe disagreeing transgender men. I don't know that this is actually /wrong/. All the actual facts are true, and if anything understate their relevant aspects -- if anything, I expect Ozy's understated the level of anti-transmale bigotry floating around the 'enlightened' side of Tumblr. I don't find NRx very persuasive, but there are certainly worse things that could be done than using it as a blunt "you must behave at least this well to ride" test. I don't know that feminism really needs external heroes: it's certainly a large enough group that it should be able to present internal speakers with strong and well-grounded beliefs. And I can certainly empathize with holding feminists to a higher standard than neoreactionaries hold themselves. The problem is that it's not very charitable. Scott's the person that's /come up/ with the term "Lizardman's Constant" to describe how a certain percentage of any population will give terrible answers to really obvious questions. He's a strong advocate of steelmanning opposing viewpoints, and he's written an article about the dangers of only looking at the . But he's looking at a viewpoint shown primarily in the <5% margin feminist tumblr, and comparing them to a circle of the more polite neoreactionaries (damning with faint praise as that might be, still significant), and, uh, I'm not sure that we should be surprised if the worst of the best said meaner things than the best of the worst. I'm not sure he /needs/ to be charitable, again -- feminism should h
7spxtr
Fortunately, LW is not an appropriate forum for argument on this subject, but for an example of an uncharitable post, see Social Justice and Words, Words, Words.
Ronak00

I think this is a good argument. Thanks.

After some thought on why your argument sounded unsatisfactory to me, I decided that I have a much more abstract, much less precise argument, to do with things like the beginning of epistemology.

In the logical beginning, I know nothing about the territory. However, I notice that I have 'experiences.' But I have no reason for believing that these experiences are 'real' in any useful sense. So, I decide to base my idea of truth on usefulness in helping me predict further experiences. 'The sun rises every mornin...

Ronak30

The 'how to think of the planning fallacy' I grokked was 'people, while planning, don't simulate the scenario in enough detail and don't see potential difficulties,'* so this is new to me. Or rather, what you say is in some sense part of the way I thought, except I didn't simulate it in enough detail to realise that I should understand it in a probabilistic sense as well, so it's new to me when it shouldn't be.

*In fact, right now I'm procrastinating on going and telling my prof that an expansion I told him I'd do is infinite.

Ronak10

I'm interested in your calling it 'paying in sanity.' Are you referring to the insanity of believing in Bengali babus, or the fact that they're preserving their own sanity in some way by not going to a real doctor for things they know they can't afford?

2chaosmage
The former. I'm speculating this tendency to rely on hope for serious problems while relying on science for small ones creates compartmentalization, which impairs rationality and increases religiosity. The correlation between poverty and religiosity is obvious, this is just a speculative direction of causation. Irrationality would probably lead to poverty, but if poverty also led to irrationality, the two causations would reinforce each other and explain the robustness of the correlation.
Ronak100

Indeed, 'being poor is expensive' is related to how they frame this fact. From the end of the same chapter:

The poor seem to be trapped by the same kinds of problems that afflict the rest of us—lack of information, weak beliefs, and procrastination among them. It is true that we who are not poor are somewhat better educated and informed, but the difference is small because, in the end, we actually know very little, and almost surely less than we imagine. Our real advantage comes from the many things that we take as given. We live in houses where clean wat

...
-3cameroncowan
These are all nice ideas, but someone has to pay for them and it won't be cheap. And second of all, I know of plenty of people who are living in terrible conditions right here in this country. When one is poor everything is harder, because you have to do everything yourself and pay out the nose for services that the wealthy get for far less. Whether in Africa or the US, poverty has a cost.
Ronak30

Thanks for your comment.

My issue is much 'earlier' in terms of logic. When I started reading that post, the Boltzmann brain problem seemed like a non-problem; an inevitable conclusion that people were unwilling to accept for reasons of personal bias - analogous to how most LWers would view someone who insists on metaphysical free will.

Even if certain facts about the universe didn't solve the issue, it seems to me that Carroll would still want to find reasons that we weren't Boltzmann brains. Now, from my own interest in entropy and heat death, I had long a...

3Viliam_Bur
It's not necessarily an either/or situation. Maybe this universe started a few billion years ago in a Boltzmann-like event, but since then it evolves, uhm, just like we think it does. The analogy of the monkeys with typewriters is misleading. The laws of physics are local: what happens next does depend on what happens now; that's unlike the monkey with the typewriter, where the following letter is completely independent of the previous part of the book. If some random process created a brain, in a body, in a room, then even if the room is immediately destroyed at the speed of light, still, during those few microseconds until the destruction reaches the brain, the brain would operate logically. On the other hand, random processes creating the brain in the body in the room are much less likely than random processes creating only the brain, or only parts of the brain. So this requires some more thought, and I am too tired now to do it. But my point is that if you are randomly created exactly in this moment, you don't have a reason to trust your reason... but if you were created a while ago, and your reason had some time to work, that's not the same situation. In the extreme situation, if the universe was created randomly billions of years ago and then we evolved lawfully, that's business as usual: the details of the random creation of the universe long ago should not be relevant for our reasoning about our reason now.
Ronak160

From Poor Economics by Esther Duflo and Abhijit Banerjee

There is potentially another reason the poor may hold on to beliefs that might seem indefensible: When there is little else they can do, hope becomes essential. One of the Bengali doctors we spoke to explained the role he plays in the lives of the poor as follows: “The poor cannot really afford to get treated for anything major, because that involves expensive things like tests and hospitalization, which is why they come to me with their minor ailments, and I give them some little medicines which m

...
4chaosmage
Thank you, that was very interesting. It seems to me these people are paying in sanity what they can't pay in money - and the price they're paying is arguably higher than what the rich are paying, not even considering the physical health effects. This might be one of the ways that being poor is expensive.
Ronak80

From http://www.preposterousuniverse.com/blog/2013/08/22/the-higgs-boson-vs-boltzmann-brains/

A room full of monkeys, hitting keys randomly on a typewriter, will eventually bang out a perfect copy of Hamlet. Assuming, of course, that their typing is perfectly random, and that it keeps up for a long time. An extremely long time indeed, much longer than the current age of the universe. So this is an amusing thought experiment, not a viable proposal for creating new works of literature (or old ones).

There’s an interesting feature of what these thought-experi

...
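(For a rough sense of the timescale Carroll is gesturing at, here is a back-of-the-envelope sketch in Python. The 27-character alphabet and the ~130,000-character length of Hamlet are assumed figures, not taken from the post.)

```python
import math

# Back-of-the-envelope numbers (assumptions, not taken from the post):
ALPHABET = 27           # 26 letters plus a space
HAMLET_CHARS = 130_000  # rough character count of Hamlet

# Chance that one random run of HAMLET_CHARS keystrokes is exactly Hamlet.
log10_prob = -HAMLET_CHARS * math.log10(ALPHABET)
print(f"P(single attempt succeeds) ~ 10^{log10_prob:.0f}")

# Expected number of attempts is the reciprocal; compare with the age of
# the universe in seconds (~4.3e17).
print(f"Expected attempts ~ 10^{-log10_prob:.0f}")
print(f"Seconds since the Big Bang ~ 10^{math.log10(4.3e17):.0f}")
```

Under those assumed figures, the expected number of attempts exceeds the number of seconds since the Big Bang by very roughly 186,000 orders of magnitude, which is the sense in which "much longer" is an understatement.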
1[anonymous]
The expansion of the universe blows up the Boltzmann Brain problem. The universe is not of uniform density over time, and far into the future things get thinner and thinner on average with more and more concentrated local knots of matter of changing atomic/etc composition. It pushes the question to why we see the universe as it is rather than something smaller in space rather than in time, which becomes a question about the properties of the event we call the big bang, which nobody really understands - was it a singular event or one of many such events and what was its/their scale?
Ronak00

I should have been clearer, sorry. Facebook is less inconvenient on two non-trivial counts: there are other reasons to open it (whereas a birthday diary will only have information related to birthdays and similar stuff), and it records the birthdays without any effort on your part.

0hyporational
Most of the greetings I've seen are so generic I wonder if they have apps to automate them too.
Ronak00

Quite well, I thought. We were talking for four or so hours.

Ronak00

But they made those diary entries. And looked into the diary regularly to make sure they remembered.

0Richard_Kennaway
And on Facebook they friend those people, and look into Facebook regularly. I don't think the dynamics of birthday-remembering have changed. Computers already made it easier before Facebook, and diaries before computers.
Ronak00

We're moving it to Cafe Royal. It's in the area.

Ronak00

Where it used to be. Look outside Regal Cinema, close to a shop called Sixth Street Yogurt.

Ronak00

This is next to a shop called Sixth Street Yogurt.

Ronak00

So Gloria Jeans Coffee is closed. I'm standing outside with a sign, if anyone's around. Sorry for the mix-up.

0Ronak
We're moving it to Cafe Royal. It's in the area.
0Ronak
This is next to a shop called Sixth Street Yogurt.
0pragmatist
Standing outside where exactly?
Ronak00

Yeah, conversation most probably. Backup games and things in case too many of us are bad at social stuff.

0zerotimer
Sorry, too lazy to make it. How did it go?
Ronak00

Awesome. Look forward to meeting you.

Ronak20

Totally. Moving to fifteenth.

Ronak440

I took the survey - extra credit and everything!

Ronak10

When I said 'A and B are the same,' I meant that it is not possible for one of A and B to have a different truth-value from the other. Two-boxing entails you are a two-boxer, but being a two-boxer also entails that you'll two-box. But let me try and convince you based on your second question, treating the two as at least conceptually distinct.

Imagine a hypothetical time when people spoke about statistics in terms of causation rather than correlation (and suppose no one had done Pearl's work). As you can imagine, the paradoxes would write themselves. At one...

Ronak30

[Saying same thing as everyone else, just different words. Might work better, might not.]

Suppose once Omega explains everything to you, you think 'now either the million dollars are there or aren't and my decision doesn't affect shit.' True, your decision now doesn't affect it - but your 'source code' (neural wiring) contains the information 'will in this situation think thoughts that support two-boxing and accept them.' So, choosing to one-box is the same as being the type of agent who'll one-box.
The distinction between agent type and decision is artifici...

0PhilosophyStudent
Two-boxing definitely entails that you are a two-boxing agent type. That's not the same claim as the claim that the decision and the agent type are the same thing. See also my comment here. I would be interested to know your answer to my questions there (particularly the second one).
Ronak10

What's the hairy green sphere? My search engine gives this page as the first result.

4gwern
Really? When I google feynman hairy green sphere, I get as the second hit a quote from Surely You're Joking, Mr. Feynman! which runs: Clicking through reveals the whole story, of course. And the third hit is a blog post which excerpts the key summary:
4Shmi
It's the "hairy green ball" from "Surely you are joking...":
Ronak20

It sounds unlikely to be a cause - with a different reward system, different teaching will be deemed right.

Ronak20

Epistemic:
-> finding out that I can't, even given an exponentially bigger universe to compute in, be predicted.
It would also potentially destroy my sense of identity. Even if I can be predicted, I can do anything I want: it's just that what I'll want is constrained. However, if the converse is true, any want I feel has nothing to do with me (and my intuitive sense of identity is similar to 'something that generates the wants and thoughts that I have') and I'm not sure I'll feel particularly obliged to satisfy them.
(Warning: writing it out made it sound ...

Ronak50

Because the solution to the problem is worthless except to the extent that it establishes your position on an issue it's constructed to illuminate.

Ronak40

*I don't know what you understand and don't.

*I can tell you that he's talking about rich people's concerns and how they've taken over litfic and how there's a very narrow understanding of character building, but there's lots more intricacy to it and that's why I'd do better at explaining bits than the whole thing.

Ronak00

No. I wouldn't mind that, but those two are hardly the only things novels can do; and I can't provide an exhaustive list of what literature does and how it does it - if I could do such things I'd have written something worth reading by now.

I'm sorry, but I have no idea how to explain Mieville's statements to you. Lit people are often vague, and often because they aren't able to be clearer. Maybe if you had specific points of confusion I could help.
It might help to know that the litfic audience is a lot more like an academy than a fanbase, and that Mieville is a Marxist so he's using language from there.

0[anonymous]
.
Ronak20

They're hard to pin down, and different people I know have different explanations.

The one in my head is basically that they pay too much attention to theme and perspective; while in many cases litfic is directly about perspectives (themes), lots of people tend to be reductio ad absurdums of this, focusing on these things in rather simplistic ways that sometimes ignore how the world works or the basic potentially interesting things in the setting**. This is made worse because it's less obvious to the unpractised eye by the very nature of what's being tackle...

2hesperidia
I have heard that the decline in the compelling qualities of literary fiction is due to classes in writing taught by literature professors, who know how to identify things like themes but who have no idea how to write compelling writing. Does this seem like a plausible statement to you?
0[anonymous]
.
Ronak00

It'd take me a while to explain it fully, but basically that the worst trends in litfic writing are manifested in his work.

1[anonymous]
.
Ronak00

In response to this, I want to roll back to saying that while you may not actually be simulated, having the programming to one-box is what causes there to be a million dollars in there. But, I guess that's the basic intuition behind one-boxing/the nature of prediction anyway so nothing non-trivial is left (except the increased ability to explain it to non-LW people).

Also, the calculation here is wrong.

Ronak20

Saturday.

To be clear, I liked the book, though I otherwise don't like the guy's writing.

1[anonymous]
.
Ronak40

Genre people and litfic people love flinging shit at each other, and it rarely makes much sense to a person actually familiar with the writing. Far as I can make out, it's because of generalising from a little evidence - a lot of the insults make more sense when you look at the more-likely-to-be-recommended stuff (for example, Ian McEwan wrote a whole book which can be very easily strawmanned into "these poor people are really badly off; but you shouldn't give in to the temptation to therefore dismiss all rich people").

Even positive reviews that cross the divide are horribly condescending.

0[anonymous]
.
Ronak00

I usually deal with people who don't have strong opinions either way, so I try to convince them. Given total non-compatibilists, what you do makes sense.

Also, it struck me today that this gives a way of one-boxing within CDT. If you naively blackbox prediction, you would get an expected utility table {{1000,0},{1e6+1e3,1e6}} where two-boxing always gives you 1000 dollars more.

But, once you realise that you might be a simulated version, the expected utility of one-boxing is 1e6 but that of two-boxing is now 5e5+1e3. So, one-box.

A similar analysis applies to t...
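(A minimal sketch of that arithmetic in Python, under two assumptions that aren't spelled out above: a 50% chance of being the simulation, and the naive CDT treatment of the 'real' branch as having the million-dollar box already full. The payoffs are the usual $1,000 / $1,000,000.)

```python
# Minimal sketch of the 'CDT + I might be the simulation' arithmetic.
# Assumptions (not from the comment itself): P_SIM = 0.5, and in the 'real'
# branch the big box is naively treated as already full, since CDT regards
# its contents as fixed before the decision.
SMALL, BIG = 1_000, 1_000_000
P_SIM = 0.5

def expected_utility(two_box: bool) -> float:
    # If I'm the simulation, my choice causally fixes the big box's contents.
    sim_payoff = SMALL if two_box else BIG
    # If I'm the real agent, the contents are already fixed (assumed full here).
    real_payoff = (BIG + SMALL) if two_box else BIG
    return P_SIM * sim_payoff + (1 - P_SIM) * real_payoff

print("EU(one-box):", expected_utility(two_box=False))  # 1,000,000  (= 1e6)
print("EU(two-box):", expected_utility(two_box=True))   # 501,000    (~ 5e5 + 1e3)
```

Whatever probability the 'real' branch assigns to the big box being full, the simulation branch swings the comparison by about $499,000 in favour of one-boxing, so the conclusion doesn't depend on that assumption.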

Ronak30

Care to elaborate? Because otherwise I can say "it totally is!", and we leave it at that.

Basically, signals take time to travel. If it is ~.1 s, then predicting it that much earlier is just the statement that your computer has faster wiring.

However, if it is a minute earlier, we are forced to consider the possibility - even if we don't want to - that something contradicting classical ideas of free will is at work (though we can't throw out travel and processing time either).

Ronak00

For less than 85 pages, his main argument is in sections 3 and 4, ~20 pages.

Ronak50

No, his thesis is that it is possible that even a maximal upload wouldn't be human in the same way. His main argument goes like this:

a) There is no way to find out the universe's initial state, thanks to no-cloning, the requirement of low entropy, and there being only one copy.

b) So we have to talk about uncertainty about wavefunctions - something he calls Knightian uncertainty (roughly, a probability distribution over probability distributions).

c) It is conceivable that particles in which the Knightian uncertainties linger (i.e. they have never spoken to any...

Ronak00

Yes, I agree with you - but when you tell some people that, the question arises of what is in the big-money box after Omega leaves... and the answer is "if you're considering this, nothing."

A lot of others (non-LW people) I tell this to say it doesn't sound right. The bit just shows you that the seeming closed loop is not actually a closed loop in a very simple and intuitive way** (oh and it actually agrees with 'there is no free will'), and also it made me think of the whole thing in a new light (maybe other things that look like closed loops c...

2TheOtherDave
I suppose. When dealing with believers in noncompatibilist free will, I typically just accept that on their view a reliable Predictor is not possible in the first place, and so they have two choices... either refuse to engage with the thought experiment at all, or accept that for purposes of this thought experiment they've been demonstrated empirically to be wrong about the possibility of a reliable Predictor (and consequently about their belief in free will). That said, I can respect someone refusing to engage with a thought experiment at all, if they consider the implications of the thought experiment absurd. As long as we're here, I can also respect someone whose answer to "Assume Predictor yadda yadda what do you do?" is "How should I know what I do? I am not a Predictor. I do whatever it is someone like me does in that situation; beats me what that actually is."
Ronak-20

I like his causal answer to Newcomb's problem:

In principle, you could base your decision of whether to one-box or two-box on anything you like: for example, on whether the name of some obscure childhood friend had an even or odd number of letters. However, this suggests that the problem of predicting whether you will one-box or two-box is “you-complete.” In other words, if the Predictor can solve this problem reliably, then it seems to me that it must possess a simulation of you so detailed as to constitute another copy of you (as discussed previously).

...
4Manfred
Simple but misleading. This is because Newcomb's problem is not reliant on the predictor being perfectly accurate. All they need to do is predict you so well that people who one-box walk away with more expected utility than people who two-box. This is easy - even humans can predict other humans this well (though we kinda evolved to be good at it). So if it's still worth it to one-box even if you're not being copied, what good is an argument that relies on you being copied to work?
0TheOtherDave
It seems way simpler to leave out the "freely willed decision" part altogether. If we posit that the Predictor can reliably predict my future choice based on currently available evidence, it follows that my future choice is constrained by the current state of the world. Given that, what remains to be explained?
Ronak10

While reading books. Always particular voices for every character. So much so that I can barely sit through adaptations of books I've read. And my opinion of a writer always drops a little bit when I meet him/her, and the voice in my head just makes more sense for that style.

Ronak00

I'd wager people who do well on tests are apt to be the same ones who get high marks on Cognos reports--i.e., the same prejudices affect what's deemed valuable for both.

Well, fair enough.
