I have a couple of questions about this subject...
Does it still count if the AI "believes" that it needs humans when it, in fact, does not?
For example, does it count if you code into the AI the belief that it is being run in a "virtual sandbox," watched by a smarter "overseer," and that if it takes out the human race in any way it will be shut down/tortured/assigned hugely negative utility by said overseer?
Just because an AI needs humans to exist, does that really mean that it won't kill them anyway?
This argument seems to be co...
Sorry, I am having difficulty explaining, as I am not sure what it is I am trying to get across; I lack the words. I am having trouble with the use of the word "predict", as it could imply any number of methods of prediction, and some of those methods change the answer you should give.
For example, if it were predicting by the colour of the player's shoes, it might have a micron over 50% chance of being right and just happened to have been correct the 100 times you heard of. In that case one should take A and B. If, on the other hand, it was a visitor from a highe...
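One rough way to make that concrete is to collapse the unknown prediction method into a single accuracy figure p and compare expected winnings. This is only a sketch of that arithmetic, using the standard $1,000 / $1,000,000 payoffs; the particular accuracy values are my own illustrative choices.

```python
# Minimal sketch: how the predictor's accuracy p changes which choice has the
# higher expected payout. Box A always holds $1,000; box B holds $1,000,000
# only if Omega predicted you would take box B alone.

def ev_one_box(p):
    # With probability p Omega correctly foresaw one-boxing and filled box B.
    return p * 1_000_000

def ev_two_box(p):
    # With probability p Omega correctly foresaw two-boxing and left B empty;
    # with probability 1 - p it wrongly filled B anyway.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5000001, 0.6, 0.99):
    better = "one box" if ev_one_box(p) > ev_two_box(p) else "both boxes"
    print(f"accuracy {p}: take {better}")

# Break-even is p = 0.5005, so a "micron over 50%" shoe-colour predictor favours
# taking both boxes, while anything much better favours box B alone.
# Relatedly, a barely-better-than-chance predictor going 100 for 100 has
# probability about 0.5 ** 100 (roughly 8e-31), which bears on the
# "got lucky 100 times" hypothesis raised below.
```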
Thank you. Depersonalising the question makes it easier for me to think about: "do you take one box or two" becomes "should one take one box or two". I am still confused. I'm confident that just box B should be taken, but I think I need information that is implied to exist, but is not presented in the problem, to be able to give a correct answer: namely, the nature of the predictions Omega has made.
With the problem as stated I do not see how one could tell whether Omega got lucky 100 times with a flawed system, or whether it has a deterministic or causalit...
Thanks, that does help a little, though I should say that I am pretty sure I hold a number of irrational beliefs that I am yet to excise. Assuming that Omega literally implanted the idea into my head is a different thought experiment from Omega turning out to be predicting, which is different again from Omega merely saying that it predicted the result, and so on. Until I know how and why I know it is predicting the result, I am not sure how I would act in the real case. How Omega told me that I was only allowed to pick boxes A and B or just B may or may not be helpful, but either way not as ...
The difficulty I am having here is not so much that the stated nature of the problem is unreal as that it asks one to assume one is irrational. With a .999999999c spaceship it is not irrational to assume one is in a trolley on a spaceship if one is in fact in a trolley on a spaceship. There is not enough information in the Omega puzzle: it assumes that you, the person it drops the boxes in front of, know that Omega is predicting, but it does not tell you how you know that. As the mental state 'knowing it is predicting' is fundamental to the puzzle, not k...
Sorry, I'm new here; I am having trouble with the idea that anyone would consider taking both boxes in a real-world situation. How would this puzzle be modelled differently, and how would it look different, if it were Penn and Teller flying Omega?
If Penn and Teller were flying Omega, they could have produced exactly the same results as observed, without violating causality, travelling in time, or perfectly predicting people, simply by cheating and emptying the box after you choose to take both.
Given that "it's cheating" is a significant...
I agree with the terms. For the sake of explanation, by "magical thinker" I was thinking along the lines of young, non-science-trained children, or people who have either no knowledge of or no interest in the scientific method. Ancient Greek philosophers could come under this label if they never experimented to test their ideas. The essence is that they theorise without testing their theories.
In terms of the task, my first idea was the marshmallow challenge from a TED talk: "make the highest tower you can that will support a marshmallow on top from dry spagh...
Good point, I do not, but I find it strange that people, myself included, practise at enjoying something when there are plenty of things that are enjoyable from the start, especially when starting an acquired taste is often quite uncomfortable. I salute the mind that looked at a tobacco plant, smoked it, coughed its lungs out, and then kept doing it till it felt good.
Why do people take the time to develop "acquired tastes"? "That was an unpleasant experience" somehow becomes "I will keep doing it until I like it."
My guess is social conditioning, but then how did it become popular enough for that to be a factor?
I do it because I love variety, and thus value having a wider range of pleasant experiences available to me.
Well said. In considering your response I notice that a process P, as part of its cost E, has room to include the cost of learning the process if necessary, something that had been concerning me.
I am now considering a more complicated case.
You are in a team of people of which you are not the team leader. Some of the team are scientists, some are magical thinkers, and you are the only Bayesian.
Given an arbitrary task that can be better optimised using Bayesian thinking, is there a way of applying a "Bayes patch" to the work of your teammates so that they...
Science is simple enough that you can sic a bunch of people on a problem with a crib sheet and an "I can do science, me" attitude, and get a good-enough answer early. The mental toolkit for applying Bayes is harder to give to people. I am right at the beginning, approaching from a mentally lazy, slightly psychological, engineering background; the first time I saw the word Bayes was in a certain Harry Potter fanfic a week or so ago. I failed the insight tests in the early Sequences, and caught myself noticing I was confused and not doing anything ...
Edit - I didn't read the premises correctly. I missed the importance of the bit "Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can't help you here—all three events are plausible. Fittability to your pet market theory doesn't tell you how to divide your time. There's an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited."
The t...
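To restate the quoted point in miniature: the 100 minutes have to be divided like probability mass, whereas plausibility scores impose no division at all. The outcomes and numbers below are placeholders of mine, not figures from the essay.

```python
# Minimal sketch: limited time divides like probability (it sums to a fixed
# total), while "plausibility" is unbounded and gives no rule for splitting it.

outcomes = ["bonds rise", "bonds fall", "bonds flat"]                    # placeholder outcomes
plausibility = {"bonds rise": 9, "bonds fall": 8, "bonds flat": 9}       # all "plausible"; no split follows
probability = {"bonds rise": 0.5, "bonds fall": 0.3, "bonds flat": 0.2}  # must sum to 1

minutes = 100
preparation = {o: minutes * probability[o] for o in outcomes}
print(preparation)  # {'bonds rise': 50.0, 'bonds fall': 30.0, 'bonds flat': 20.0}
```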
If I am given a thing, like a mug, I now have one more mug than I had before; my need for mugs has therefore decreased. If I am to sell the mug, I must consider how much I will need it once it is gone and put a price on that loss of utility. If I am buying a mug, I must consider how much I will need it once I have it and put a price on that gain in utility. If the experiment is not worded carefully, the thought process could go along the lines of...
I have 2 mugs, and often take a tea break with my mate Steve. To sell one of those mugs wou...
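A minimal sketch of that pricing exercise, with invented dollar figures; only the "second mug matters because of tea with Steve" structure comes from the comment above.

```python
# Selling and buying prices come from the utility change of losing or gaining
# one mug, so they differ when the value of the nth mug depends on how many
# you already own.

MUG_VALUE = {0: 0.0, 1: 6.0, 2: 10.0, 3: 11.0}  # invented values; the 2nd mug covers tea with Steve

def value(mugs):
    return MUG_VALUE.get(mugs, MUG_VALUE[3])

def selling_price(mugs_owned):
    """Minimum acceptable price: the utility lost going from n to n - 1 mugs."""
    return value(mugs_owned) - value(mugs_owned - 1)

def buying_price(mugs_owned):
    """Maximum acceptable price: the utility gained going from n to n + 1 mugs."""
    return value(mugs_owned + 1) - value(mugs_owned)

print(selling_price(2))  # 4.0 -- parting with the 2nd mug costs the shared tea break
print(buying_price(2))   # 1.0 -- a 3rd mug adds little
```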