Dagon

Just this guy, you know?

Comments

Can you simplify "idiosyncratic triggers of internal states"? Also, if most people are bad observers, then wouldn't it be more helpful for them to have direct experience with it?

It matters a lot what "it" is.  Common targets of "just try it" are mystic or semi-mystic experiences around drugs, meditation, religion, etc.  These tend to be hard to communicate because they're not actually evidence of outside/objective phenomena; they're evidence of an individual's reaction to something.  I have no clue whether that applies here or not - that's my primary point: one size does not fit all.

Note that Bob is making an error if he flatly denies Alice's experiences, rather than acknowledging that the experiences can be real without the underlying model being correct.

Answer by Dagon · Mar 18, 2024

Probably crazy, yes.  Don't feel bad, all humans are.  

But when you lead with 'I can't shake my belief', that indicates an internal conflict: part of you doesn't believe it.  And since there's no evidence that can resolve the question, you could probably use professional help to figure out how to believe more mainstream illusions that are easier and more satisfying for most humans.

Yes, different "it" will have VASTLY different costs and potential evidence from trying, so the discussion doesn't generalize very well.  "you are reasoning too much" implies "you are empirically testing too little", which could easily be true or false, or neither (it could be "you are reasoning badly from evidence we agree on" or "you need to BOTH measure and reason a lot more clearly").

For some (but not all) topics, "just try it" is INCREDIBLY unhelpful - most people are pretty bad observers, and a lot of experiences don't separate into elements in a way that makes it easy to analyze which parts are evidence about the world and which parts are idiosyncratic triggers of internal states.

I suppose it's because Bob isn't aware of all the things he needs to say before posting the question, and Alice makes assumptions about what he needs while he thinks he doesn't need it.

Without actual specifics, it's hard to know WHY the disconnect is happening.  It does seem that Alice and Bob aren't in agreement over what the question is, but it's unclear which (if either) is closer to something useful.

This seems WAY over-abstracted.  There are important differences in what kinds of evidence are obtainable by what techniques for problems in very different domains.

Also, this seems unnecessarily adversarial between Bob and Alice.  Have they not agreed on the problem or on what would constitute a solution?  If they can reframe to a shared seeking of knowledge, it may be easier to actually talk about what each believes and why.

Is this in a situation where you're limited in time or conversational turns?  It seems like the follow-up clarification was quite successful, and for many people it would feel more comfortable than the more specific and detailed query.

In technical or professional contexts, saving time and conveying information more efficiently gets a bit more priority, but even then this seems like over-optimizing.

That said, I do usually include additional information or a conversational follow-up hook in my "I don't know" answers.  You should expect to hear from me "I don't know, but I'd go at least 2 hours early if it's important", or "I don't know, what does Google Maps say?", or "I don't know, what time of day are you going?" or the like.

I'd love to see some reasoning and value calculations or sketches of what to do INSTEAD of the things you eschew (planning, saving, and working toward slight improvements in chances).  

Even if the likelihood is small, it seems like the maximum value activities are those which prepare for and optimize a continued future.  Who knows, maybe the horse will learn to sing!

Causal commitment is similar in some ways to counterfactual/updateless decisions.  But it's not actually the same from a theory standpoint.

Betting requires commitment, but it's part of a causal decision process (decide to bet, communicate commitment, observe outcome, pay).  In some models, the payment is a separate decision, with breaking of commitment only being an added cost to the 'renege' option.

There's some subtlety here about exactly what "zooming" means.  In standard implementations, zooming recalculates a small area of the current view, such that the small area has higher precision ("zoomed"), but the rest of the space ("unzoomed") goes out of frame and the memory gets reused.  The end result is the same number of sampled points ("pixels" in the display) at each zoom level.
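
For concreteness, here's a minimal sketch of that standard approach (my own illustration, not from anyone in the thread; the grid size, iteration cap, and center point are arbitrary): a fixed sample grid gets recomputed over a smaller and smaller viewport, so every zoom level costs the same number of samples.

```python
# Minimal sketch: the pixel grid stays fixed while the viewport shrinks,
# so each zoom level recomputes the same WIDTH * HEIGHT points.

WIDTH, HEIGHT = 80, 40          # fixed sample grid ("pixels")
MAX_ITER = 200                  # escape-time iteration cap

def escape_count(c: complex) -> int:
    """Iterations before z escapes |z| > 2, or MAX_ITER if it never does."""
    z = 0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2:
            return n
    return MAX_ITER

def render(center: complex, scale: float) -> list[list[int]]:
    """Sample a WIDTH x HEIGHT grid of the view centered on `center`."""
    grid = []
    for row in range(HEIGHT):
        line = []
        for col in range(WIDTH):
            re = center.real + (col / WIDTH - 0.5) * scale
            im = center.imag + (row / HEIGHT - 0.5) * scale
            line.append(escape_count(complex(re, im)))
        grid.append(line)
    return grid

# Zooming: each level halves the viewport around the same center and
# recomputes the grid; the old, wider view simply goes out of frame.
center, scale = -0.743643887037151 + 0.131825904205330j, 3.0
for level in range(5):
    frame = render(center, scale)            # same WIDTH*HEIGHT samples every level
    print(level, frame[HEIGHT // 2][WIDTH // 2])
    scale /= 2                                # higher precision over a smaller area
```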

There’s a saying about investing which somewhat applies here. “The market can stay irrational longer than you can stay solvent”. Another is “in the long run, we’re all dead.”

Nothing is forever, but many things can outlast your observations. Eventually everything is steady state, fine. But there can be a LOT of signal before then.

Note that your computer doesn’t run out of bits when exploring the Mandelbrot set. Bits can encode an exponential number of states, and a few megabytes is enough to not terminate for millennia if it’s only zooming in and recalculating thousands of times per second. Likewise with your job - if it maxes or mins a hundred years out, rather than one, it’s a very different frame.
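
As a rough back-of-envelope for the "exponential number of states" point (the megabyte figure and the thousands-per-second rate are just the ones mentioned above, not measurements):

$$
2\ \text{MB} \approx 1.6\times 10^{7}\ \text{bits} \;\Rightarrow\; 2^{1.6\times 10^{7}}\ \text{distinct states},
\qquad
10^{3}\ \tfrac{\text{steps}}{\text{s}} \times 3.15\times 10^{10}\ \text{s (a millennium)} \approx 3\times 10^{13}\ \text{steps} \ll 2^{1.6\times 10^{7}}.
$$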

It's surprising that it's taken this long, given how good public AI coding assistants were a year ago.  I'm skeptical of anything with only closed demos and not interactive use by outside reviewers, but there's nothing unbelievable about it.

As a consumer, I don't look forward to the deluge of low-quality apps that's coming (though we already have it to some extent with the sheer number of low-quality coders in the world).  As a developer, I don't like the competition (mostly for "my" junior programmers, not yet me directly), and I worry a lot about whether the software profession can make great stuff ever again.
