I don't see how this is a problem. Do you think it is a problem? If so, why specifically, and do you have any ideas for a solution?
To be fair, it's really hard to figure out WTF is going on when humans are involved. Their reasoning is the result of multiple motivations and a vast array of potential reasoning errors. If you don't believe me, try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any game that involves secrets and lying.
Edited phrasing to make it clearer.
Your phrasing also makes it look like a plausible mistake for someone in a new situation with little time to consider things.
A story for the masses is necessary, and this doesn't appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened. He might have already done that, and this is their plan. How much time did Harry have to do things before needing to show up, anyhow (40m? 50m?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth before this seems like a bad idea.
Lucius is both dead and warm. I think he's dead dead unless Eliezer has someone like Harry do something in a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.
If you want to make the scenario more realistic, then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hardest part for Voldemort is solving this problem in a short time span and NOT coming up with a solution that foils Harry. The reason experienced soldiers/gamers with little to no intelligence still win against highly intelligent combatants with no experience is that TIME matters when you're limited to a single human's processing power. In virtually every combat situation one is forced to make decisions fast...
I begin to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up. However, I wonder if there are enough chapters to do these things justice. I also wonder whether Eliezer will do anything like was done for Worm, where the author invited suggestions for epilogues for specific characters.
I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has encountered a problem where he's arguing with someone/something he knows is far smarter than himself. He still believes Harry isn't yet as smart as he is.
You should look at reddit to coordinate your actions with others. One idea I like is to organize the proposal of all reasonable ideas and minimize duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/
I agree that this is a far "easier task than a standard AI box experiment". I attacked it from a different angle, though (HarryPrime can easily and honestly convince Voldemort he is doomed unless HarryPrime helps him):
http://lesswrong.com/r/discussion/lw/lsp/harry_potter_and_the_methods_of_rationality/c206
Quirrellmort would be disgusted with us if we refused to consider 'cheating', and would certainly kill us for refusing to 'cheat' if cheating was likely to be extremely helpful.
"Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully."
Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem: 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is 'good', 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways in which Harry could make the result 'good'.
Why hasn't Voldemort suspended Harry in air? He floated himself into the air as a precaution against proximity, line-of-sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in air, partial transfiguration options would be vastly reduced.
Why hasn't Voldemort rendered Harry effectively blind/deaf/etc.? Harry is gaining far more information in real time than is necessary for Voldemort's purposes.
Also, it seems prudent not to let Harry get all over the place by shooting him, smashing him, etc. without...
I like this exercise. It is useful in at least two ways.
It might also be interesting to come up with a cherished group view and try to take that apart (e.g., cryonics after death is a good idea - perhaps start with the possibility that the future is likely to be hostile to you, such as one with unfriendly AI).
Anecdotal evidence amongst people I've questioned falls into two main categories. The first is the failure to think the problem through formally: many simply focus on the fact that whatever is in the box remains in the box. The second is some variation of failure to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no 'free will / soul'). Many conversations simply devolve into 'Omega can't actually make such an accurate prediction about my choice' or 'I'd normally two-box, so I'm not getting my million anyhow'.
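For people willing to accept the premise, the expected-value arithmetic that favors one-boxing can be sketched in a few lines. The payoff amounts and the 99% predictor accuracy below are the standard illustrative assumptions for Newcomb's problem, not figures from any of the conversations above:

```python
# Expected value of one-boxing vs. two-boxing in Newcomb's problem,
# assuming Omega predicts your choice with probability `accuracy`.
def expected_value(one_box: bool, accuracy: float = 0.99) -> float:
    MILLION, THOUSAND = 1_000_000, 1_000
    if one_box:
        # Box B contains $1M iff Omega predicted one-boxing.
        return accuracy * MILLION
    else:
        # Two-boxers always get the $1K, plus $1M only on a mispredict.
        return THOUSAND + (1 - accuracy) * MILLION

print(expected_value(True))   # one-boxing: roughly $990,000
print(expected_value(False))  # two-boxing: roughly $11,000
```

Even at much lower predictor accuracies (anything above about 50.05% with these payoffs), one-boxing dominates in expectation, which is precisely the result people find hard to swallow.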
Game of Thrones and the new Battlestar Galactica appear to me to have characters that are either shallow and/or conflicted by evil versus evil. Yet they are very popular and, as far as I can tell, character-driven. I was wondering what that means. One thought I had was that many people are interested in relationship conflicts, and that the characters don't need to be deep; they just need to reflect, across the main character cast, the personalities of the audience (as messed up as the audience might be).
It is not so much that they haven't given an argument or stated their position. It is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation, this person has decided to stop discussing the matter and gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible - assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to the 'Are you crazy?! You should __.' and skip all the hard thinking.
Given the 'Sorry if it offends you' and the 'Like... no', I think your translation is in error. When a person says either of those things they are (A) saying 'I no longer care about keeping this discussion civil/cordial' and (B) 'I am firmly behind (insert their position here)'. What you have written is much more civil and makes no demands on the other party, as opposed to what they said: "... you should ...".
That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.
Do you have any suggestions on how to limit this? I find meetings often meander from someone's pet issue to trivial / irrelevant details while the important broader topic withers and dies despite the meeting running 2-3x longer than planned.
In meetings where I have some control, I try to keep people on topic, but it's quite hard. In meetings where I'm the 'worker bee' it's often hopeless (don't want to rub the boss the wrong way).
"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."
Let me translate: "You should do what I say because I said so." This is an attempt to overpower you, and it is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with, then you just have to play the political game humans have been playing for ages.
Let me offer a different translation: "You are proposing something that is profoundly inhuman to my sensibilities and is likely to have bad outcomes."
Rukifellth below has, I think, a much more likely reason for the reaction presented.
A more charitable translation would be "I strongly disagree with you and have not yet been able to formulate a coherent explanation for my objection, so I'll start off simply stating my disagreement." Helping them state their argument would be a much more constructive response than confronting them for not giving an argument initially.
I agree that they should uphold strict standards for numerous reasons. That doesn't prevent CFAR from discussing potential benefits (and side effects) of different drugs (caffeine, aspirin, modafinil, etc.). They could also recommend discussing such things with a person's doctor as well as what criteria are used to prescribe such drugs (they might already for all I know).
My current stance, which I'll push for quite strongly unless and until I encounter enough evidence against to update significantly, is that CFAR would do very poorly to talk explicitly about any drugs that the USA has a neurosis about. We can talk at a layer of abstraction above: "How might you go about determining what kinds of effects a given substance has on you?" But I am pretty solidly against CFAR listing potential benefits and drawbacks of any drugs that have become rallying cries for law enforcement or political careers.
Ah, I thought it was an over the counter drug.
It is, some places. Just not the USA, where CFAR is operating now and for the foreseeable future. I'm a big fan of modafinil, as you might guess, but if CFAR were even idly considering providing or condoning modafinil use, I'd smack them silly (metaphorically); organizations must obey different standards than individuals.
I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?
What about trying bright lighting?: http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/
I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?
As a Schedule IV drug, it's surely some sort of crime to offer or accept. Some people will not want to associate with such people or organizations on moral grounds, risk-aversion grounds, or fear of other people's disapproval on either ground, etc.
I'm glad to hear it is working well and is well received!
Once there has been some experience running these workshops, I really hope CFAR can design something for meetup groups to try/implement, and/or an online version.
Is there a CFAR webpage that covers this particular workshop and how it went?
It is useful to consider because if AI isn't safe when contained to the best of our ability, then no method reliant on AI containment is safe (i.e., chat boxing and all the other possibilities).
My draft attempt at a comment. Please suggest edits before I submit it:
The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory and thought 10x, 100x, 1,000,000x faster than anyone ...
"1. Life is better than death. For any given finite lifespan, I'd prefer a longer one, at least within the bounds of numbers I can reasonably contemplate."
Have you included estimates of possible negative utilities? One thing we can count on is that if you are revived you will certainly be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?
I think CFAR is a great idea with tons of potential, so I'm curious: are there any updates on how the meetup went and what sorts of things were suggested?
I'm confused as to what the point of the gatekeeper is. Let us assume (for the sake of argument) everything is 'safe' except the gatekeeper, who may be tricked/convinced/etc. into letting the AI out.
With the understanding that I only have a few minutes to check for research data:
http://www.ncbi.nlm.nih.gov/pubmed/1801013
http://www.ncbi.nlm.nih.gov/pubmed/21298068 - "cognitive response ... to light at levels as low as 40 lux, is blue-shifted"
In the context of "what is the minimal amount of information it takes to build a human brain," I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code, where it is very hard to tell what individual bits do and what long-range effects a change may have.
Do we know how much of the human genome can definitely be replaced with random code without problem?
In addition, do we know how much information is contained in the structure of a cell? You can't just put the DNA of our genome in ...
If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.
Unless this is a standard definition for describing DNA, I do not agree that such DNA is 'junk'. If the DNA serves a purpose, it is not junk. There was a time when it was believed (as many still do) that the nucleus was mostly a disorganized package of DNA and associated 'stuff'. However, it is becoming increasingly clear that it is highly structured and that structure is critical for...
If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.
This is false. Just because we do not know what function a lot of DNA performs does not mean it is 'almost certainly junk'. There is far more DNA that is critical than just the 30,000 gene-coding regions. You also have: genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even in cases where...
I would bet this is totally impractical for most studies. In the medical sciences the cost is prohibitive and for many other studies you need permission to experiment on organisms (especially hard when humans or human tissues are involved). Perhaps it would be easier for some of the soft sciences, but even psychology studies often work with human subjects and that would require non-trivial approval.
I look forward to the results of this study. Quite frankly, most soft science fields could use this sort of scrutiny. I'd also love to see how reproducible the studies done by medical doctors (as opposed to research scientists) are. Even the hard sciences have a lot of publications with problems; however, these erroneous results, especially if they are important to current topics of interest, are relatively quickly discovered, since other labs often need to reproduce the results before moving forward.
I would add one caution. Failure to ...
Long-term caffeine tolerance can be problematic. To combat this problem, every 2-4 months I stop taking caffeine for about 2 weeks (carefully planned for less hectic weeks). In my experience, and that of at least one other colleague, this method significantly lowers and possibly removes the caffeine tolerance. Two people do not make a study, but if you need to combat caffeine tolerance it may be worth a try.
How do you propose organizing a 'master list' of solutions, relevant plot pieces, etc., given the current forum format? Some people have made such lists, but they are often quickly buried beneath other comments. I'm also not familiar enough with how things work to know if a post can be edited days after it has been posted. One obvious solution is that a HPMOR reader who likes making webpages puts up a wiki page for this. Can this be done on LessWrong.com?
...Eliezer Yudkowsky's Author Notes, Ch. 81
This makes me worry that the actual chapter might’ve come as an anticlimax, especially with so many creative suggestions that didn’t get used. I shall poll the Less Wrong discussants and see how they felt before I decide whether to do this again. This was actually intended as a dry run for a later, serious “Solve this or the story ends sadly” puzzle – like I used in Part 5 of my earlier story Three Worlds Collide – but I’ll have to take the current outcome into account when deciding whether to go...
If that is the case, then the hat didn't actually say "it couldn't tell if Harry had any false memories." It said it couldn't detect deleted memories, and it seems to imply that 'sophisticated analysis' of all of his memories for 'inconsistencies' would be required to do so. The false memory given to Hermione is at the forefront of her mind and doesn't require the hat to scan her memories (though Hermione could presumably replay memories of the event for the hat). In addition, the false memory is entirely out of character with Hermione's personality, which...
The hat says specifically: "I can go ahead and tell you that there is definitely nothing like a ghost - mind, intelligence, memory, personality, or feelings - in your scar. Otherwise it would be participating in this conversation, being under my brim." It says memory specifically. Both a false memory and a 'scar memory' could at this point be treated as 'foreign' to Hermione.
Are you referring to this slightly earlier quote: "Anyway, I have no idea whether or not you've been Obliviated. I'm looking at your thoughts as they form, not reading...
If the Sorting Hat has enough access to one's mind to sort children into their appropriate house, then it seems entirely possible that it has enough access to identify a false memory. The Sorting Hat is an extremely powerful artifact, which implies that the false memory would have to be the work of a significantly greater power for us to conclude at this point that it can remain hidden from the Sorting Hat.
The Sorting Hat, when it was on Harry, said that it couldn't tell whether Harry had any false memories and that it just looks at thoughts as they form. So it is unlikely it can do much to detect such issues.
I'd like to "Hold Off on Proposing Solutions" or in this case hold off on advocating answers. I don't have time to list all the important bits of data we should be considering or enumerate all the current hypotheses, but I think both would be quite valuable.
Some quick hypotheses:
-Mr. Hat & Cloak is Quirrellmort & responsible for Hermione's 'condition'
-Mr. Hat & Cloak is Lucius & responsible for Hermione's 'condition'
-Mr. Hat & Cloak is Voldemort, but not the Quirrell body.
-Mr. Hat & Cloak is Quirrellmort and trying to ...
One of the primary problems with rationalists, humanists, atheists, skeptics, etc. is that there is no higher-level organization, and thus we tend to accomplish very little compared to most other organizations. I fully support efforts to fix this problem.
If I understand this correctly, your 'AI' is biased to do random things, but NOT as a function of its utility function. If that is correct, then your 'AI' simply does random things (according to its non-utility bias), since its utility function has no influence on its actions.
I consider all of the behaviors you describe as basically transform functions. In fact, I consider any decision maker a type of transform function, where input data is run through a transform function (such as a behavior-executor, utility-maximizer, weighted goal system, a human mind, etc.) and output data is generated (and, in the case of humans, sent to our muscles, organs, etc.). The reason I mention this is that trying to describe a human's transform function (i.e., what people normally call their mind) as mostly a behavior-executor or just...
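A minimal sketch of the framing above: both a behavior-executor and a utility-maximizer fit the same input-to-output function signature. All names, percepts, and payoffs here are purely illustrative assumptions of mine, not anything from the original discussion:

```python
from typing import Callable, Dict

# A decision maker as a transform: input data (a percept) -> output action.
Transform = Callable[[Dict[str, int]], str]

def behavior_executor(percept: Dict[str, int]) -> str:
    # Fixed stimulus-response rule; no utilities are computed.
    return "flee" if percept.get("threat") else "forage"

def utility_maximizer(percept: Dict[str, int]) -> str:
    # Scores candidate actions and picks the highest-utility one.
    utilities = {
        "flee": -percept.get("threat", 0),
        "forage": percept.get("food", 0),
    }
    return max(utilities, key=utilities.get)

# Both are Transforms; only their internals differ.
agents: list[Transform] = [behavior_executor, utility_maximizer]
for agent in agents:
    print(agent({"threat": 1, "food": 2}))
```

The point of the shared `Transform` type is that from the outside both agents are just input-output mappings; a human mind is plausibly some messy mixture of such mechanisms rather than purely either one.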
I am having trouble scanning the HPMoR thread for topics I'm interested in, due to both its length and the lack of a hierarchical organization by topic. I would appreciate any help with this problem, since I do not want to make comments that are simply duplicates of previous comments I failed to notice. With that in mind, is there a discussion forum or some method to scan the HPMoR discussion thread that doesn't involve a lot of effort? I have not found organizing comments by points to be useful in this respect.
Edit: I'm new and this is my 1st comment. I've read a lot of the sequences, but I don't know my way around yet. It's quite possible I'm missing a lot about how things work here.
Yes, yes there is :). http://boardgamegeek.com/boardgame/37111/battlestar-galactica