Comment author: Zack_M_Davis 03 October 2016 10:24:54PM 1 point [-]

That sounds like "thesis is true" or "thesis is not true" are reasonable positions. Bayesian beliefs have probabilities attached to them.

Sometimes, even people who understand Bayesian reasoning use idiomatic phrases like "believe is true" as a convenient shorthand for "assign a high probability to"! I can see how that might be confusing!

Comment author: ChristianKl 03 October 2016 02:02:14PM 0 points [-]
  1. They don't force you to sit inside, rather allow you to go to the loo or just for a little walk.

It might be true for the seminar that you attended, but historically there have been limits on how often people were allowed to go to the toilet.

All the participants benefitted a lot from this program.

That's a silly statement. You very likely don't have access to metrics to know whether it's true.

P.S - They don't force you to advertise their courses, they just ask you to make a difference in someone else's life whom you care about. Also, we signed up for the advance courses ourselves.

They tell people that they can say "No" but they do exert psychological pressure on people to advertise the seminar.

Landmark quite often pretends that they care about graduates inviting friends and relatives to a graduate party, and then treats that party as a sales event.

Comment author: Alfred30 03 October 2016 09:34:39AM 0 points [-]

"I completed my Landmark Forum around 7 weeks ago and it was one hell of an experience!

A few things I would like to bring to your notice- 1. This course is held in a hall with no windows due to air conditioning. 2. They do provide breaks and ask you to stretch in-between the seminar. 3. They don't force you to sit inside, rather allow you to go to the loo or just for a little walk.

It all started on a Friday morning and lasted for 3 long, non-tiring days. We were welcomed by the volunteers and name tags for easier conversations. We were hammered and questioned at every point but the results were remarkable. It was a life-changing adventure and we learnt a lot of things. The vocabulary used was a little different but the message was effective and helpful. We basically worked as a team and helped each other out, analysed the matter and gained a new perspective. We all experienced a huge transformation till the end of the third day. All the participants benefitted a lot from this program.

P.S - They don't force you to advertise their courses, they just ask you to make a difference in someone else's life whom you care about. Also, we signed up for the advance courses ourselves. (It was irresistible).

At the end, I would like to say that Landmark Forum is of great value; it changes your life and makes it a happier journey. Somehow we all are stuck in some part or the other and this program helps us get rid of all the blockages. Many people rave about them and it's definitely worth a try!"

Comment author: nshepperd 03 October 2016 04:50:52AM 0 points [-]

"For a true Bayesian, it is impossible to seek evidence that confirms a theory"

The important part of the sentence here is *seek*. This isn't about falsificationism, but about the fact that no experiment you can do can confirm a theory without having some chance of falsifying it too. So an observation can only provide evidence for a hypothesis if a different outcome could have provided the opposite evidence.

For instance, suppose that you flip a coin. You can seek to test the theory that the result was HEADS, by simply looking at the coin with your eyes. There's a 50% chance that the outcome of this test would be "you see the HEADS side", confirming your theory (P(HEADS | you see HEADS) ~ 1). But this only works because there's also a 50% chance that the outcome of the test would have shown the result to be TAILS, falsifying your theory (P(HEADS | you see TAILS) ~ 0). And in fact there's no way to measure the coin so that one outcome would be evidence in favour of HEADS (P(HEADS | measurement) > 0.5), without the opposite result being evidence against HEADS (P(HEADS | ¬measurement) < 0.5).
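A tiny numeric sketch of the coin example (idealizing the measurement as noiseless): the two possible outcomes move the posterior in opposite directions, and on average the posterior equals the prior, which is the conservation-of-expected-evidence point.

```python
prior = 0.5          # P(HEADS) before looking
p_see_heads = 0.5    # predictive probability that the test shows HEADS

post_if_heads = 1.0  # P(HEADS | you see HEADS), idealizing perfect vision
post_if_tails = 0.0  # P(HEADS | you see TAILS)

expected_posterior = (p_see_heads * post_if_heads
                      + (1 - p_see_heads) * post_if_tails)
print(expected_posterior)  # 0.5: on average, looking can't be expected to confirm
```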

Comment author: lucidfox 02 October 2016 10:44:01PM *  0 points [-]

It is correct that we can never find enough evidence to make our certainty in a theory exactly 1 (though we can get it very close to 1). If we were absolutely certain of a theory, then no amount of counterevidence, no matter how damning, could ever change our mind.
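A small sketch with illustrative likelihoods (the 0.9 and 0.5 are assumed numbers, not from the comment) makes both halves of this concrete: repeated confirmations drive the posterior toward 1 without reaching it, while a probability of exactly 1 can never be moved by counterevidence.

```python
def update(p, lik_true, lik_false):
    """Bayes' rule: posterior on the theory after one observation."""
    return p * lik_true / (p * lik_true + (1 - p) * lik_false)

# The theory predicts each observation with probability 0.9; the
# alternative predicts it with probability 0.5 (illustrative numbers).
p = 0.5
for _ in range(20):
    p = update(p, 0.9, 0.5)
print(p)  # very close to 1, but still strictly below it

# A certainty of exactly 1 is immune to even damning counterevidence:
q = update(1.0, 0.001, 0.5)
print(q)  # stays exactly 1.0
```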

Comment author: ChristianKl 02 October 2016 07:17:02PM *  1 point [-]

Like I just said, modern science started with an extreme outlier.

There's a lot of scholarship in the history of science, and it generally doesn't find that everything hinged on one figure like Newton.

Comment author: DittoDevolved 02 October 2016 04:13:27PM *  0 points [-]

Hi, new here.

I was wondering if I've interpreted this correctly:

'For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.'

Does this mean that it is impossible to prove the truth of a theory? Because the only evidence that can exist is evidence that falsifies the theory, or supports it?

For example, something people know about gravity and objects under its influence is that on Earth objects will accelerate at roughly 9.81 m/s^2. If we dropped a thousand different objects and observed their acceleration, and found it to be 9.81 m/s^2, we would have a thousand pieces of evidence supporting the theory and zero pieces falsifying it. We all believe that 9.81 is correct, and we teach that it is the truth, but we can never really know, because new evidence could someday appear that challenges the theory, correct?

Thanks

Comment author: wafflepudding 02 October 2016 09:04:04AM 0 points [-]

Gotcha. So, assuming that the actual Isaac Newton didn't rise to prominence*, are you thinking that human life would usually end before his equivalent came around and the ball got rolling? Most of our existential risks are manmade AFAICT. Or you think that we'd tend to die in between him and when someone in a position to build the LHC had the idea to build the LHC? Granted, him being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries.

To summarize, yes, some chance factor would definitely prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm condoning this idea that Newton started science.

Comment author: hairyfigment 02 October 2016 01:33:56AM 0 points [-]

I am strongly disagreeing with you. The cultures that existed on Earth for tens of millennia or more were recognizably human; one of them built an LHC "eventually", but any number of chance factors could have prevented this. Like I just said, modern science started with an extreme outlier.

Comment author: wafflepudding 02 October 2016 01:04:03AM 0 points [-]

Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.

Comment author: hairyfigment 29 September 2016 10:15:19PM 0 points [-]

...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today.

Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.

Comment author: curtd59 29 September 2016 06:03:43PM 0 points [-]

WHERE PHILOSOPHY (rational instrumentalism) MEETS SCIENCE (physical instrumentalism)?

Philosophy and Science are identical processes until we attempt to use one of them without the other.

That point of demarcation is determined by the limits beyond which we cannot construct either (a) logical or (b) physical instruments with which to eliminate error, bias, wishful thinking, suggestion, loading and framing, obscurantism, propaganda, pseudorationalism, pseudoscience, and outright deceit.

Comment author: Good_Burning_Plastic 29 September 2016 08:03:41AM 2 points [-]

Computing can't harm the environment in any way

Well...

Comment author: wafflepudding 29 September 2016 02:28:39AM 0 points [-]

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Comment author: SilasBarta 29 September 2016 01:52:03AM 1 point [-]

My favorite one: burning wood for heat. Better than fossil fuels for the GW problem, but really bad for local air quality.

Comment author: TheAncientGeek 28 September 2016 02:02:22PM 0 points [-]

Given that qualia are what they appear to be, are you denying that qualia can appear simple, or that they are just appearances?

Comment author: Vaniver 27 September 2016 09:38:12PM 2 points [-]

There shouldn't be any conflicts between VoI and Bayesian reasoning; I thought of all of my examples as Bayesian.

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a bernoulli 2-armed bandit:

I don't think that example describes the situation you're talking about. Remember that VoI is computed in a forward-looking fashion; when one has a (1, 1) beta distribution over the arm, one thinks it is equally likely that the true propensity of the arm is above .5 and below .5.

The VoI comes into that framework by being the piece that agitates for exploration. If you've pulled arm1 seven times and gotten four heads and three tails, and haven't pulled arm2 yet, the expected value of pulling arm1 is higher than that of pulling arm2, but there's a fairly substantial chance that arm2 has a higher propensity than arm1. Heuristics that say to do something like pull the lever with the higher 95th-percentile propensity bake in the VoI from pulling arms with lower means but higher variances.


If, from a forward-looking perspective, gaining information would decrease one's subjective value of the decision situation, then one shouldn't gain that information. That is, it's a bad idea to pay for a test if you don't expect the additional value to cover the cost of the test. (Maybe you'll continue to pull arm1 regardless of the results of pulling arm2, as in the case where arm1 has delivered heads 7 times in a row. Then switching means taking a hit for nothing.)

One thing that's important to remember here is conservation of expected evidence--if I believe now that running an experiment will lead me to believe that arm1 has a propensity of .1 and arm2 has a propensity of .2, then I should already believe those are the propensities of those arms, and so there's no subjective loss of well-being.
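To make the forward-looking calculation concrete, here's a sketch with assumed numbers (a three-pull horizon and a simple commit-after-test policy, neither of which is specified in the comments above): arm1 has been pulled seven times for four heads and three tails, giving posterior Beta(5, 4) on a uniform prior, while arm2 is untouched at Beta(1, 1).

```python
from fractions import Fraction as F

mean1 = F(5, 9)   # posterior mean of arm1, Beta(5, 4)
horizon = 3       # pulls remaining (assumed)

# Exploit: always pull arm1.
exploit = horizon * mean1

# Explore: spend one pull testing arm2, then commit to the
# better-looking arm for the remaining pulls.
p_heads = F(1, 2)        # predictive P(arm2 pays off) under Beta(1, 1)
mean2_heads = F(2, 3)    # posterior mean of Beta(2, 1) after a success
mean2_tails = F(1, 3)    # posterior mean of Beta(1, 2) after a failure
rest = horizon - 1
explore = (p_heads * 1                                        # the test pull itself
           + p_heads * rest * max(mean2_heads, mean1)         # success: arm2 now looks better
           + (1 - p_heads) * rest * max(mean2_tails, mean1))  # failure: keep pulling arm1

voi = explore - exploit
print(exploit, explore, voi)  # 5/3 31/18 1/18: exploring is worth 1/18 of a pull
```

Notably, with only two pulls remaining these particular numbers happen to tie exactly (both policies give 10/9), which illustrates the forward-looking point: information is only worth paying for to the extent you still have decisions left that it could change.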

Comment author: capybaralet 26 September 2016 10:48:41PM *  1 point [-]

Does anyone have any insight into how VoI plays with Bayesian reasoning?

At a glance, it looks like the VoI is usually not considered from a Bayesian viewpoint, as it is here. For instance, wikipedia says:

""" A special case is when the decision-maker is risk neutral where VoC can be simply computed as; VoC = "value of decision situation with perfect information" - "value of current decision situation" """

From the perspective of avoiding wireheading, an agent should be incentivized to gain information even when this information decreases its (subjective) "value of decision situation". For example, consider a bernoulli 2-armed bandit:

If the agent's prior over each arm's payoff is uniform over [0,1], its current value is .5 (playing arm1). But suppose that after many observations it learns (with high confidence) that arm1 has reward .1 and arm2 has reward .2. It should be glad to know this (so it can change to the optimal policy of playing arm2), BUT the subjective value of this decision situation is less than when it was ignorant, because .2 < .5.

Comment author: So8res 26 September 2016 06:39:53PM 2 points [-]

Thanks!

Comment author: gucciCharles 26 September 2016 05:02:56AM 0 points [-]

Isn't teaching itself a skill? So what that she was a bad musician, she was obviously a first rate teacher (independent of the subject that she taught).
