All of Viktor Riabtsev's Comments + Replies

If you were dead in the future, you would be dead already. Because time travel is not ruled out in principle.

Danger is a fact about fact density and your degree of certainty. Stop saying things with the full confidence of being afraid. And start simply counting the evidence.

Go back a few years. Start there.

Yeah, if you use religious or faith-based terminology, it might trigger negative signals (downvotes). Though it would be harder to distinguish whether that is because the information you meant to convey was being disagreed with, or because the statements themselves are actually more ambiguous overall.

Some kinds of careful reasoning processes vibe with the community, and imo yours is that kind. Questioning each step separately on its merits, being sufficiently skeptical of premises leading to conclusions.

Anyways, back to the subject of f and inferring... (read more)

Somehow, he has to populate the objective function whose maximum is what he will rationally try to do. How he ends up assigning those intrinsic values relies on methods of argument that are neither deductive nor observational.

In your opinion, does this relate in any way to the "lack of free will" arguments, like those advanced by Sam Harris? The whole: I can ask you what your favourite movie is, and you will think of one. You will even try to justify your choice if asked about it, but ultimately you had no control over what movies popped into your head.

3Jim Pivarski
This is a good example of needing to watch my words: the same sentence, interpreted from the point of view of no-free-will, could mean the complex function of biochemical determinism playing out, resulting in what the human organism actually does. What I meant was the utility function of consequentialism: for each possible goal x, you have some preference of how good that is f(x), and so what you're trying to do is to maximize f(x) over x. It's presupposing that you have some ability to choose x1 instead of x2, although there are some compatibilist views of free will and determinism that blur the line. My point in that paragraph, though, is that you might have a perfectly rational machinery for optimizing f, but one has to also choose f. The way you choose f can't be by optimizing over f. The reasons one has for choosing f also can't be directly derived from scientific observations about the physical world, because (paraphrasing David Hume), an "is" does not imply an "ought." So the way we choose f, whatever that is, requires some kind of argumentation or feeling that is not derivable from the scientific method or Bayes' theorem.
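A minimal sketch of the distinction being drawn here, with everything below (the candidate goals and the two example utility functions) made up purely for illustration: the optimization machinery can maximize any given f over x, but nothing in that machinery tells you which f to use.

```python
# Toy illustration: maximizing a given utility function f over candidate goals x.
# The goals and the two example utility functions are hypothetical.

candidate_goals = ["write a novel", "learn physics", "run a marathon"]

def maximize(f, goals):
    """Return the goal with the highest utility under f."""
    return max(goals, key=f)

# Two different choices of f rank the same goals differently.
f_curiosity = lambda x: {"write a novel": 1, "learn physics": 3, "run a marathon": 2}[x]
f_fitness   = lambda x: {"write a novel": 1, "learn physics": 2, "run a marathon": 3}[x]

print(maximize(f_curiosity, candidate_goals))  # -> "learn physics"
print(maximize(f_fitness, candidate_goals))    # -> "run a marathon"

# The maximization step is identical either way; choosing between f_curiosity
# and f_fitness is not something maximize() itself can do.
```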

I feel like there are local optima: getting to a different stable equilibrium involves having to "get worse" for a period of time, to question existing paradigms and assumptions. I.e. performing the update feels terrible, in that you get periodic glimpses of "oh, my current methodology is clearly inadequate", which feels understandably crushing.

The "bad mental health/instability" is an interim step where you are trying to integrate your previous emotive models of certain situations, with newer models that appeal to you intelligently (i.e. feels like t... (read more)

1Noosphere89
I definitely agree that the goal should be to be emotionally healthy while accepting reality as it is, but my point really is that the two goals may not always come together. I suspect that truths that could cause bad mental health/instability probably have the following properties:

1. Non-local belief changes must be made. That is, you can't compartmentalize the changes to a specific area.
2. Extreme implications. That is, it implies much more than your previous beliefs did.
3. It contradicts what you deeply believe or value.

These are the properties I expect to cause mental health problems for truths.

No, that's fair.

I was mostly having trouble consuming that 3-4-5 stage paradigm. Afraid that it's not a very practically useful map; i.e. it doesn't actually help you instrumentally navigate anywhere. But I realized halfway through composing that argument that it's very possible I'm just wrong. So I decided to ask for an example of someone using this framework to actually successfully orient somewhere.

So the premise is that there are goals you can aim for. Could you give an example of a goal you are currently aiming for?

1Sean Aubin
I am irrationally/disproportionately insecure about discussing my mediocre/generic goals in a public forum, so I'd rather discuss this in-person at the meetup. :apologetic-emoji

Would it be okay to start some discussion about the David Chapman reading in the comments here?

Here are some thoughts I had while reading.

When Einstein produced general relativity, the success criterion was "it produces Newton's laws of gravity as a special-case approximation". I.e. it had to produce the same models that had already been verified as accurate to a certain level of precision.

If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that's not a problem with rationality. Quoting from a lesswrong ... (read more)

1Noosphere89
I conjecture roughly the opposite: that sometimes, in the pursuit of winning or truth with rationality, there will be things that are more likely to be right but also cause bad mental health/instability. In other words, there are truths that are both important and likely to cause bad mental health.
1Sean Aubin
I think David's primary concern is choosing the goals in "systematically finds a better path to goals", which he wants to name "meta-rationality" for the sake of discussion, but which I think could be phrased as part of the rationality process?

I found the character sheet system to be very helpful. In two words, it's just a ranked list of "features"/goals you're working towards, with a comment slot (it's just a Google Sheet).

I could list personal improvements I was able to gain from the regular use of this tool, like weight loss/exercise habits etc., but that feels too much like bragging. Also, I can't separate correlation from causation.

The cohort system provides a cool social way to keep yourself accountable to yourself.

Dead link for "Why Most Published Research Findings Are False". Googling just the url parameters yields this.

Did anyone else get so profoundly confused that they googled "Artificial Addition"? Only when I was halfway through the bullet point list did it click that the whole post is a metaphor for common beliefs about AI. And that was on the second reading; the first time I gave up before that point.

I shall not make the mistake again!

You probably will. I think these biases don't disappear even when you're aware of them. It's a generic human feature. I think self-critical awareness will always slip at the crucial moment; it's important to remember this and acknowledge it. Big things vs small things, as it were.

3lesswronguser123
  The point is to be lesswrong! :) 

On my more pessimistic days I wonder if the camel has two humps.

Link is dead. Is this the new link?

It seems less and less like a Prisoner's Dilemma the more I think about it. Chances are this is an "oops" and I messed up.

I still feel like the thing with famous names like Sam Harris is that there is a "drag" force on his penetration into the culture nowadays, because there is a bunch of history that has been (incorrectly) publicized. His name is associated with controversy, despite his best efforts to avoid it.

I feel like you need to overcome a "barrier to entry" when listening to him. Unlike Eliezer, whose public image (in my limited opinion) is actually friendly to new users.

Some

... (read more)

I could be off base here, but a lot of classical cooperate-vs-defect stories involve two parties who hate each other's ideologies.

Could you then not say: "They have to first agree and/or fight a Prisoner's Dilemma on an ideological field"?

4Sniffnoy
I think you're going to need to be more explicit. My best understanding of what you're saying is this: Each participant has two options -- to attempt to actually understand the other, or to attempt to vilify them for disagreeing, and we can lay these out in a payoff matrix and turn this into a game. I don't see offhand why this would be a Prisoner's Dilemma, though I guess that seems plausible if you actually do this. It certainly doesn't seem like a Stag Hunt or Chicken, which I guess are the other classic cooperate-or-don't games. My biggest problem here is the question of how you're constructing the payoff matrices. The reward for defecting is greater ingroup acceptance, at the cost of understanding; the reward for both cooperating is increased understanding, but likely at the cost of ingroup acceptance. And the penalty for cooperating and being defected on seems to be in the form of decreased outgroup acceptance. I'm not sure how you make all these commensurable to come up with a single payoff matrix. I guess you have to somehow, but that the result would be a Prisoner's Dilemma isn't obvious. Indeed it's actually not obvious to me here that cooperating and being defected on is worse than what you get if both players defect, depending on one's priorities, which would definitely not make it a Prisoner's Dilemma. I think that part of what's going on here is that different people's weighting of these things may substantially affect the resulting game.
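For reference, the standard ordering that makes a two-player cooperate/defect game a Prisoner's Dilemma (the labels below are the generic textbook ones, not anything from this thread): with T the temptation payoff, R the reward for mutual cooperation, P the punishment for mutual defection, and S the sucker's payoff for cooperating while being defected on,

$$
\begin{pmatrix} (R,R) & (S,T) \\ (T,S) & (P,P) \end{pmatrix}, \qquad T > R > P > S .
$$

Sniffnoy's last point corresponds to a player for whom S is not actually below P, which breaks this ordering and so means the game is not a Prisoner's Dilemma for them.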

So ... a prisoner's dilemma but on a meta level? Which then results in primary consensus.

5Sniffnoy
What does this have to do with the Prisoners' Dilemma?

Yep. Just have to get into the habit of it.

Less Wrong consists of three areas: The main community blog, the Less Wrong wiki and the Less Wrong discussion area.

Maybe redirect the lesswrong.com/r/discussion/ link & description to the "Ask a Question" beta?

5TheWakalix
This is the old version, kept for the sake of not deleting old things. It is not meant to be an accurate description of modern LW.

That was a great read.

figure out what was going on rather than desperately trying to multiply and divide all the numbers in the problem by one another.

That one hits home. I've been doing a bit of math lately, nothing too hard, just some derivatives/limits, and I've found myself spending inordinate amounts of time taking derivatives and doing random algebra. Just generally flailing around hoping to hit the right strategy instead of pausing to think first: "How should this imply that?" or "What does this suggest?" before doing rote algebra.

UV meters! Thank you! Seems like such an obvious idea in hindsight.

Why wonder blindly when you can quantify it? I'll look into getting one.

2Douglas_Knight
Or you could just look at the weather report, now that you know what to look for.

Dead link for "scientists shouldn't even try to take ethical responsibility for their work"; it is now here

2Raemon
fixed

I did that a couple of minutes ago. Then I tried to fix the formatting, and I think I subsequently undid your formatting fixes.

5Ben Pace
ahaha Added: I fixed it again.

Related:

“Sometimes a hypocrite is nothing more than a man in the process of changing.” ― Brandon Sanderson, Oathbringer (spoken by Dalinar Kholin)

Umm, it's a real thing: ECC memory (https://en.m.wikipedia.org/wiki/ECC_memory). I'm sure it isn't 100% foolproof (coincidentally the point of this article), but I imagine it reduces the error probability by orders of magnitude.

I'd say there are mental patterns/heuristics that can be learned from video games that are in fact useful.

Persistence, optimization, patience.

I won't argue there aren't all sorts of exciting pitfalls and negatives that could also be experienced; I would just point at something like Dark Souls and claim: "yeah, that one does it well enough on the positives".

That's one large part of the traditional approach to Santa-ism, yeah. But it doesn't have to be, as Eliezer describes in the top comment.

it is still relatively unlikely that a person disagree for an opportunity to refine their model of the universe.

It still does happen though. I've only gotten this far in the Recommended Sequences, but I've been reading the comments whenever I finish a sub-sequence; and they (a) definitely add to the understanding, and (b) expose occasional comment threads where two people arrive at mutual understanding (clear up lexical miscommunication, etc.). "Oops" moments are rare, but the whole karma system seems great for occasional productive di... (read more)

35 - 8 = 20 + (15 - 8)

Wow. I've never even conceived of this (on its own or) as a simplification.

My entire life has been the latter simplification method.
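Spelling out the quoted identity step by step (just basic arithmetic, expanding the same example rather than adding anything new): split 35 into 20 + 15 so that the subtraction lands on the convenient part,

$$
35 - 8 = (20 + 15) - 8 = 20 + (15 - 8) = 20 + 7 = 27 .
$$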

My favorite thing to do in physics/math classes, all the way up to 2nd year in university (I went into engineering), was to ask others how they fared on tests, in order to then figure out why my answers were wrong.

I found genuine pleasure in understanding where I went wrong. Yet this seemed taboo in high school, and (slightly less so) frowned upon in university.

I feel like rewarding the student who messed up, however much or little, with some fraction of the total test score, like 10%, would be a great idea. You gain an incentive to figure out what you missed, even if you care little about it. That's better than nothing.

Reading these comment chains somehow strongly reminds me of listening to Louis CK.

I found a reference to a very nice overview of the mathematical motivations for Occam's Razor on Wikipedia.

It's Chapter 28: Model Comparison and Occam's Razor, from (page 355 of) Information Theory, Inference, and Learning Algorithms (a legally free-to-read PDF) by David J. C. MacKay.

The Solomonoff Induction stuff went over my head, but this overview's discussion of the trade-off between communicating more model parameters vs. having to communicate smaller residuals (i.e. offsets from the real data) was very informative.
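A compressed version of the chapter's idea, as I understand it (notation mine, for a single parameter θ with a broad prior of width σ_θ and a posterior peaked with width σ_{θ|D}): the evidence for a model H given data D integrates the fit over all parameter settings, which automatically penalizes models whose prior parameter range is much wider than the range the data actually allows,

$$
P(D \mid H) = \int P(D \mid \theta, H)\, P(\theta \mid H)\, d\theta \;\approx\; \underbrace{P(D \mid \hat{\theta}, H)}_{\text{best fit}} \times \underbrace{\frac{\sigma_{\theta \mid D}}{\sigma_\theta}}_{\text{Occam factor}} .
$$

The second factor is small for over-flexible models, which is the "communicate more parameters vs. communicate smaller residuals" trade-off in probabilistic form.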

then your model says that your beliefs are not themselves evidence, meaning they

I think this should be more like "then your model offers weak evidence that your beliefs are not themselves evidence".

If you're Galileo and find yourself incapable of convincing the church about heliocentrism, this doesn't mean you're wrong.

Edit: g addresses this nicely.

Upvoted for the "oops" moment.

Thank you. I tried using http://archive.fo/, but no luck.

I'll add https://web.archive.org/ to bookmarks too.

3Said Achmiz
Archived version.

Yeah, you never know if someone in the process of reading the Sequences won't periodically go back and try to read all the discussions. Like, I am not going to read the twenty posts with 0 karma and 0 replies; but ones with comments? Opposing ideas and discussions spark invigorating thought. Though it does get a bit tedious on the more popularized articles, like this one.

I am going to try and sidetrack this a little bit.

Motivational speeches, pre-game speeches: these are real activities that serve to "get the blood flowing" as it were. Pumping up enthusiasm, confidence, courage and determination. These speeches are full of cheering lines, applause lights etc., but this doesn't detract from their efficacy or utility. Bad morale is extremely detrimental to success.

I think that "Joe has utility-pumping beliefs" in that he actually believes the false fact "he is smart and beautiful"; is the w... (read more)

Show him how to send messages using flashing mirrors.

Oh god. That is actually just humongous in its possible effect on warfare.

I mean, add simple ciphers to it and you literally add another whole dimension to warfare.

Communication lines set up this way are almost like adding radio. Impractical in some situations, but used in regional warfare with multiple engagements? This is empire-forming stuff: reflective stone plus semi-trivial education equals dominance.
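Just for fun, a toy sketch of the "simple cipher plus flashes" idea. Everything here (the three-letter shift, the 5-bit flash encoding, the example message) is made up for illustration and is not any real historical signalling protocol.

```python
# Toy sketch: a Caesar-style shift cipher plus a made-up "flash" encoding
# (each letter becomes 5 long/short flashes). Purely illustrative.

def caesar_shift(text: str, shift: int = 3) -> str:
    """Shift each letter A-Z by `shift` places, wrapping around; keep other characters."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
        else:
            out.append(ch)
    return ''.join(out)

def to_flashes(text: str) -> str:
    """Encode each letter as 5 binary digits: '-' for a long flash, '.' for a short one."""
    flashes = []
    for ch in text:
        if ch.isalpha():
            bits = format(ord(ch) - ord('A'), '05b')
            flashes.append(''.join('-' if b == '1' else '.' for b in bits))
    return ' '.join(flashes)

message = "ATTACK AT DAWN"
ciphered = caesar_shift(message)
print(ciphered)              # DWWDFN DW GDZQ
print(to_flashes(ciphered))  # five flashes per letter, ready for the mirror
```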

LessWrong FAQ

Hmm, couldn't find a link directly on this site. Figured someone else might want it too (although a google search did kind of solve it instantly).

I suggest the definition that biases are whatever cause people to adopt invalid arguments.

False or incomplete/insufficient data can cause the adoption of invalid arguments.

Contrast this with:

The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concl
... (read more)

drag in Bayes's Theorem and

The link was moved to http://yudkowsky.net/rational/bayes/, but Eliezer seems to suggest https://arbital.com/p/bayes_rule/?l=1zq over it (and it's really, really good).

Thanks. I bookmarked http://archive.fo/ for these kinds of things.

The Simple Truth link should be http://yudkowsky.net/rational/the-simple-truth/

2habryka
Thanks, fixed!

I am guessing that the link "what truth is." is meant to be http://yudkowsky.net/rational/the-simple-truth

3habryka
Thanks, fixed as well!

The "something terrible happens" link is broken. It was moved to http://yudkowsky.net/other/yehuda/

2habryka
Also fixed!
3Vladimir_Nesov
There's an archived copy here.