All of Duncan's Comments + Replies

0Dorikka
Mmhmm. I find it quite fun, despite having no familiarity with the show.
Duncan00

I don't see how this is a problem. Do you think it is a problem? If so, why specifically, and do you have any ideas for a solution?

Duncan120

To be fair, it's really hard to figure out WTF is going on when humans are involved. Their reasoning is the result of multiple motivations and a vast array of potential reasoning errors. If you don't believe me try the following board games with your friends: Avalon, Coup, Sheriff of Nottingham, Battlestar Galactica, or any that involve secrets and lying.

0Transfuturist
There's a Battlestar Galactica board game? :D
Duncan50

Your phrasing makes it also look like a plausible mistake for someone in a new situation with little time to consider things.

2Luke_A_Somers
I was aiming for it to be a mistake that someone could make even in a relatively familiar situation with ample time to consider.
3Luke_A_Somers
I'm not sure what you mean here.
Duncan90

A story for the masses is necessary, and this doesn't appear to be a bad stab at one. Harry can always bring trusted others on board by telling them what actually happened. He might have actually done that already, and this could be their plan. How much time did Harry have to do stuff before needing to show up, anyhow (40m? 50m?)? Also, Prof. McGonagall is terrible at faking anything, so telling her the truth before this seems like a bad idea.

Duncan60

Lucius is both dead and warm. I think he's dead dead unless Eliezer has someone like Harry do something in a very narrow time window. Dumbledore is a much easier problem to solve (story-wise) and can be solved at the same time as the Atlantis story thread, if that is what the author plans.

0linkhyrule5
All he really has to do is convince Lucius to be a rock for about five minutes while he would have been summoned. Heal anything with transfigurative healing + the Stone.
6Nornagest
I doubt we can do justice to Atlantis in five chapters of plot; the last few chapters only resolved as much as they did because Eliezer fired almost all of the available Chekhov's guns. We might get some hints, a sketch of a solution, but we're not going to see it in detail.
Duncan60

If you want to make the scenario more realistic then put more time pressure on Voldemort or put him under more cognitive stress some other way. The hardest part for Voldemort is solving this problem in a short time span and NOT coming up with a solution that foils Harry. The reason experienced soldiers/gamers with little to no intelligence still win against highly intelligent combatants with no experience is that TIME matters when you're limited to a single human's processing power. In virtually every combat situation one is forced to make decisions fas... (read more)

Duncan50

I begin to wonder exactly how the story will be wrapped up. I had thought the source of magic would be unlocked or the Deathly Hallows riddle would be tied up. However, I wonder if there are enough chapters to do these things justice. I also wonder whether Eliezer will do anything like what was done for Worm, where the author invited suggestions for epilogues for specific characters.

1skeptical_lurker
There aren't enough chapters, and didn't Harry say that it might take decades of work? In rationalist fiction there's no reason for all plot threads to come together at once. We've got four chapters. Maybe one chapter dealing with what happens when Hermione wakes up, one for Draco dealing with the death of his father, one for McGonagall giving a speech as the new Headmistress, and one for Harry considering his plans for the future.
Duncan20

I see your point, but Voldemort hasn't encountered the AI Box problem, has he? Further, I don't think Voldemort has encountered a problem where he's arguing with someone or something he knows is far smarter than himself. He still believes Harry isn't as smart yet.

1TobyBartels
Sure, but now your argument, it seems to me, is 6) Harry is playing against the intelligent but naïve Voldemort instead of against the intelligent and experienced Nathan Russell. (Actually, I don't know who Russell is apart from being the first person to let EY out of the box, but he may well be experienced with this problem, for all I know, and he's probably intelligent if he got into this stuff at all.)
Duncan90

You should look at reddit to coordinate your actions with others. One idea I like is to organize the proposal of all reasonable ideas and minimize duplication. Organization thread here: http://www.reddit.com/r/HPMOR/comments/2xiabn/spoilers_ch_113_planning_thread/

1JenniferRM
Thanks for the URL :-)
Duncan10

I agree that this is an "easier task than a standard AI box experiment". I attacked it from a different angle, though (HarryPrime can easily and honestly convince Voldemort that he is doomed unless HarryPrime helps him):

http://lesswrong.com/r/discussion/lw/lsp/harry_potter_and_the_methods_of_rationality/c206

Duncan90

Quirrellmort would be disgusted with us if we refused to consider 'cheating', and would certainly kill us for refusing to 'cheat' if cheating was likely to be extremely helpful.

"Cheating is technique, the Defense Professor had once lectured them. Or rather, cheating is what the losers call technique, and will be worth extra Quirrell points when executed successfully."

Duncan60

Actually, this isn't anywhere near as hard as the AI Box problem. Harry can honestly say he is the best option for eliminating the unfriendly AGI / Atlantis problem. 1) Harry just swore the oath that binds him, 2) Harry understands modern science and its associated risks, 3) Harry is 'good', 4) technological advancement will certainly result in either AGI or the Atlantis problem (probably sooner rather than later), and 5) Voldemort is already worried about prophecy immutability, so killing Harry at this stage means the stars still get ripped apart, but without all the ways in which Harry could make the result 'good'.

2TobyBartels
Other than (5), these are all things that are liable to be true of an AI asking to be let out of the box:

  1. Code that appears Friendly but has not been proved Friendly
  2. Advanced intelligence of the AI
  3. General programming goals, much weaker than (1) really
  4. True verbatim in the standard AI box experiment (and arguably in the real world right now)
Duncan70

Why hasn't Voldemort suspended Harry in air? He floated himself into the air as a precaution against proximity, line of sight problems, and probably magics that require a solid substance to transmit through. If Harry were suspended in air partial transfiguration options would be vastly reduced.

Why hasn't Voldemort rendered Harry effectively blind/deaf/etc. - Harry is gaining far more information in real time than necessary for Voldemort's purposes?

Also, it seems prudent not to let Harry get all over the place by shooting him, smashing him, etc. without... (read more)

Duncan00

I like this exercise. It is useful in at least two ways.

  1. Help me take a critical look at my current cherished views. Here's one: work hard now and save for retirement; it is still cherished, but I already know of several lines of attack that might work if I think them through.
  2. Help me take time to figure out how I'd hack myself.

It might also be interesting to come up with a cherished group view and try to take that apart (e.g., cryonics after death is a good idea - perhaps start with the possibility that the future is likely to be hostile to you, such as an unfriendly AI).

Duncan00

Anecdotal evidence amongst people I've questioned falls into two main categories. The 1st is a failure to think the problem through formally: many simply focus on the fact that whatever is in the box remains in the box. The 2nd is some variation of a failure to accept the premise of an accurate prediction of their choice. This is actually counterintuitive to most people, and for others it is very hard to even casually contemplate a reality in which they can be perfectly predicted (and therefore, in their minds, have no 'free will / soul'). Many conversations simply devolve into 'Omega can't actually make such an accurate prediction about my choice' or 'I'd normally two-box, so I'm not getting my million anyhow'.
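The second failure mode is easier to discuss once the expected-value arithmetic is on the table. A minimal sketch, using the standard illustrative payoffs ($1,000,000 and $1,000 - these figures are assumptions, not from the comment above), where `p` is the probability that Omega predicts your choice correctly:

```python
# Expected payoffs in Newcomb's problem as a function of predictor accuracy p.

def expected_one_box(p):
    # If the prediction is correct (probability p), the opaque box holds $1M.
    return p * 1_000_000

def expected_two_box(p):
    # If the prediction is correct, the opaque box is empty and you keep $1,000;
    # if Omega errs (probability 1 - p), you get both the $1M and the $1,000.
    return p * 1_000 + (1 - p) * 1_001_000

# Even a 90%-accurate predictor makes one-boxing the better bet:
print(round(expected_one_box(0.9)))  # 900000
print(round(expected_two_box(0.9)))  # 101000
```

The point of the sketch is that the premise does the work: anyone who accepts a highly accurate predictor should one-box, so the devolved conversations above are really disputes about the premise, not the arithmetic.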

Duncan90

Game of Thrones and the new Battlestar Galactica appear to me to have characters that are either shallow and/or conflicted by evil versus evil. Yet they are very popular and as far as I can tell, character driven. I was wondering what it means. One thought I had was that many people are interested in relationship conflicts and that the characters don't need to be deep, they just need to reflect, between the main character cast, the personalities of the audience (as messed up as the audience might be).

2FiftyTwo
There's a difference between deep, well-written, and compelling characters. GoT and BSG gained praise for having well-written/compelling characters, and particularly for making them realistic. A real person or a well-written character may have a single overriding obsession that means they are not deep or complex, but are very compelling to watch. Conversely, someone can be deep but dull (trivial example: composed of a hundred sub-agents obsessed with different accounting standards).
4DavidAgain
I think the characterisation in BSG is actually surprisingly deep. Not that good characterisation always has to mean very complex characters: well-drawn simple characters can be very effective. But I think the personalities of the main characters in BSG are much more realistic and plausible than most other things I've seen. Can't think of many evil vs. evil characters either. Many of the main characters are struggling with their place on the principle vs. pragmatism spectrum. In terms of 'evil' characters, there's one I can think of who's pretty much straightforward Freudian evil (totally evil, but somewhat justified psychologically) and one who just seems to not have ever obtained any ethics or indeed empathy at all, while only ever doing one noticeably evil thing that I can remember (as being without empathy doesn't instantly make you a mad axeman). GoT, I dunno. Some of them are deeper than others. For a lot they just have widely divergent senses of good: they might just care about a single person, or about family, or about a grudge... But I have nothing against boldly drawn characters of that type; they can be very enjoyable to read. Interesting point at the end about the personalities of the audience: if the 'shallow' or 'evil vs. evil' characters are capturing real, damaged personalities and plausible relationship conflicts (and the BSG relationships are definitely plausible to me), then surely they're doing something right?
2John_D
I don't think it is an indicator that the audience is messed up. I haven't seen Battlestar Galactica but regarding Game of Thrones, if the boards are any indicator of the audience, then most people seem to root for the more morally acceptable (good) guys, and are disappointed that they keep getting screwed over. The show is also known for unexpected character deaths, so it could just be an indicator of the audience wanting to be surprised or in a state of suspense.
0aelephant
Many of the characters seem straightforward, but you could almost imagine each House as being an individual, and the members of each House as the "parts", each with competing (but somewhat aligned) morals, goals, methods, etc.
0[anonymous]
That got me curious whether you consider Commander Adama more evil than good.
Duncan00

It is not so much that they haven't given an argument or stated their position; it is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation, this person has decided to stop discussing the matter and gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible - assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to 'Are you crazy?!! You should __.' and skip all the hard thinking.

1AlexMennen
It sounds like the generic "you" to me. So "you shouldn't apply this stuff to society" means "people shouldn't apply this stuff to society." I don't see anything objectionable about statements like that.
Duncan00

Given the 'Sorry if it offends you' and the 'Like... no', I think your translation is in error. When a person says either of those things they are saying (A) 'I no longer care about keeping this discussion civil/cordial' and (B) 'I am firmly behind (insert their position here).' What you have written is much more civil and makes no demands on the other party, as opposed to what they said: "... you should ...."

That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.

0AlexMennen
"Like..." = "I'm about to explain myself, but need a filler word to give myself more time to formulate the sentence." "no" = "whoops, couldn't think of what to say quick enough to avoid an awkwardly long pause; I'd better tie off that sentence I just suggested I was about to start." I'm not quite sure what to make of "Sorry if it offends you", but I don't see how you can get from there to "I'm not even trying to be polite."
Duncan00

Do you have any suggestions on how to limit this? I find meetings often meander from someone's pet issue to trivial / irrelevant details while the important broader topic withers and dies despite the meeting running 2-3x longer than planned.

In meetings where I have some control, I try to keep people on topic, but it's quite hard. In meetings where I'm the 'worker bee' it's often hopeless (don't want to rub the boss the wrong way).

5Qiaochu_Yuan
When I've been in such meetings I've been fairly insistent that we write down whatever it is we're discussing (e.g. on a blackboard) and point to it periodically. No sense in keeping everything we're thinking inside our heads. It also helps to appoint a competent moderator explicitly from the start.
Duncan200

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.

0ChristianKl
Their conversation was longer than one sentence. If his discussion partner hadn't backed up his point in any way, I doubt mszegedy would have felt enough cognitive dissonance to contemplate suicide. "You should do what I say because I said so." generally doesn't make people feel cognitive dissonance that strong.
[anonymous]100

Let me offer a different translation: "You are proposing something that is profoundly inhuman to my sensibilities and is likely to have bad outcomes."

Rukifellth below has, I think, a much more likely reason for the reaction presented.

A more charitable translation would be "I strongly disagree with you and have not yet been able to formulate a coherent explanation for my objection, so I'll start off simply stating my disagreement." Helping them state their argument would be a much more constructive response than confronting them for not giving an argument initially.

Duncan10

I agree that they should uphold strict standards for numerous reasons. That doesn't prevent CFAR from discussing potential benefits (and side effects) of different drugs (caffeine, aspirin, modafinil, etc.). They could also recommend discussing such things with a person's doctor as well as what criteria are used to prescribe such drugs (they might already for all I know).

Valentine290

My current stance, which I'll push for quite strongly unless and until I encounter enough evidence against to update significantly, is that CFAR would do very poorly to talk explicitly about any drugs that the USA has a neurosis about. We can talk at a layer of abstraction above: "How might you go about determining what kinds of effects a given substance has on you?" But I am pretty solidly against CFAR listing potential benefits and drawbacks of any drugs that have become rallying cries for law enforcement or political careers.

Duncan10

Ah, I thought it was an over the counter drug.

gwern110

It is, some places. Just not the USA, where CFAR is operating now and for the foreseeable future. I'm a big fan of modafinil, as you might guess, but if CFAR were even idly considering providing or condoning modafinil use, I'd smack them silly (metaphorically); organizations must obey different standards than individuals.

Duncan20

I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?

What about trying bright lighting?: http://lesswrong.com/lw/gdl/my_simple_hack_for_increased_alertness_and/

gwern160

I'm curious as to why caffeine wasn't sufficient, but also why modafinil would offend people?

As a schedule IV drug, it's surely some sort of crime to offer or accept. Some people will not want to associate with such people or organizations on moral grounds, risk-aversion grounds, or fear of other people's disapproval on either ground etc.

Duncan40

I'm glad to hear it is working well and is well received!

Once there has been some experience running these workshops I really hope there is something that CFAR can design for meetup groups to try / implement and/or an online version.

Is there a CFAR webpage that covers this particular workshop and how it went?

3Valentine
This is definitely on our horizon. Not yet. I'm not sure putting it on our website is the right thing to do either. We might send out a summary in our newsletter, though. You can subscribe to it by clicking the letter icon at the top of our website.
Duncan120

It is useful to consider because if AI isn't safe when contained to the best of our ability then no method reliant on AI containment is safe (i.e., chat boxing and all the other possibilities).

Duncan30

My draft attempt at a comment. Please suggest edits before I submit it.:

The AI risk problem has been around for a while now, but no one in a position of wealth, power, or authority seems to notice (unless it is all kept secret). If you don't believe AI is a risk or even possible, consider this: we ALREADY have more computational power available than a human brain. At some point, sooner rather than later, we will be able to simulate a human brain. Just imagine what you could do if you had perfect memory, thought 10x, 100x, 1,000,000x faster than anyone ... (read more)

2timtyler
In a word, IARPA. In a sentence: They are large and well-funded.
Duncan-20

"1. Life is better than death. For any given finite lifespan, I'd prefer a longer one, at least within the bounds of numbers I can reasonably contemplate."

Have you included estimates of possible negative utilities? One thing we can count on is that if you are revived you will certainly be at the mercy of whatever revived you. How do you estimate the probability that what wakes you will be friendly? Is the chance at eternal life worth the risk of eternal suffering?

Duncan20

I think CFAR is a great idea with tons of potential, so I'm curious if there are any updates on how the meetup went and what sorts of things were suggested.

Duncan10

I'm confused as to what the point of the gatekeeper is. Let us assume (for the sake of argument) everything is 'safe' except the gatekeeper, who may be tricked/convinced/etc. into letting the AI out.

  1. If the point of the gatekeeper is to keep the AI in the box, then why has the gatekeeper been given the power to let the AI out? It would be trivial to include 'AI DESTROYED' functionality as part of the box.
  2. If the gatekeeper has been given the power to let the AI out, then isn't the FUNCTION of the gatekeeper to decide whether to let the AI out or not?
... (read more)
1handoflixue
Here's another comment-thread discussing that
4Qiaochu_Yuan
A text channel is already enough power to let the AI out. The AI can print its own source code and convince the gatekeeper to run it on a machine that has internet access.
Duncan80

With the understanding that I only have a few minutes to check for research data:

http://www.ncbi.nlm.nih.gov/pubmed/1801013

http://www.ncbi.nlm.nih.gov/pubmed/21298068 - "cognitive response ... to light at levels as low as 40 lux, is blue-shifted"

3Alex_Altair
Sample sizes are dismal, but at least they tried. Thanks for looking this up!
Duncan00

In the context of "what is the minimal amount of information it takes to build a human brain," I can agree that there is some amount of compressibility in our genome. However, our genome is a lot like spaghetti code where it is very hard to tell what individual bits do and what long range effects a change may have.

Do we know how much of the human genome can definitely be replaced with random code without problem?

In addition, do we know how much information is contained in the structure of a cell? You can't just put the DNA of our genome in ... (read more)

Duncan10

If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.

Unless this is a standard definition for describing DNA, I do not agree that such DNA is 'junk'. If the DNA serves a purpose, it is not junk. There was a time when it was believed (as many still do) that the nucleus was mostly a disorganized package of DNA and associated 'stuff'. However, it is becoming increasingly clear that it is highly structured and that structure is critical fo... (read more)

5philh
I think the term "junk" has fallen out of favour. Fair enough, let's taboo that word. If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, it contains no useful information - or at least, no more than it takes to say "a megabase of arbitrary DNA goes here". The context is roughly "how much information does it take to express a brain?" It's true that we can't completely ignore those regions unless we're confident that they could be completely removed, but they only add O(1) complexity instead of O(n).
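The "a megabase of arbitrary DNA goes here" argument amounts to a description-length comparison, which can be made concrete. A minimal sketch (the megabase size and two-bits-per-base encoding are standard simplifications for illustration, not figures from this thread):

```python
import random

# A DNA region that could be swapped for any random sequence of the same
# length carries roughly the information of its *description*, not of the
# sequence itself.

random.seed(0)
filler = "".join(random.choice("ACGT") for _ in range(1_000_000))

# Storing the sequence literally costs ~2 bits per base (4 possible symbols).
literal_bits = 2 * len(filler)

# Describing it costs a small, constant number of bits (here, 8 bits per
# character of the description) regardless of the region's length.
description = "a megabase of arbitrary DNA goes here"
description_bits = 8 * len(description)

print(literal_bits)      # 2000000
print(description_bits)  # 296
```

The literal encoding grows linearly with the region's length, while the description stays constant - which is the sense in which such regions add O(1) rather than O(n) complexity to "how much information does it take to express a brain?"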
Duncan30

If you actually look at the genome, we’ve got about 30,000 genes in here. Most of our 750 megabytes of DNA is repetitive and almost certainly junk, as best we understand it.

This is false. Just because we do not know what role a lot of DNA performs does not mean it is 'almost certainly junk'. There is far more DNA that is critical than just the 30,000 gene-coding regions. You also have genetic switches, regulation of gene expression, transcription factor binding sites, operators, enhancers, splice sites, DNA packaging sites, etc. Even in cases whe... (read more)

5philh
Your objections are correct, but Eliezer's statement is still true. The elements you list, as far as I know, take up even less space than the coding regions. (If a section of DNA is serving a useful purpose, but would be just as useful if it was replaced with a random sequence of the same length, I think it's fair to call it junk.) Comparison with the mouse genome shows at least 5% of the human genome is under selective pressure, whereas only something like 2% has a purpose that we've discovered. But at the same time, there's a lot that we're pretty sure really is junk.
Duncan20

I would bet this is totally impractical for most studies. In the medical sciences the cost is prohibitive and for many other studies you need permission to experiment on organisms (especially hard when humans or human tissues are involved). Perhaps it would be easier for some of the soft sciences, but even psychology studies often work with human subjects and that would require non-trivial approval.

5beoShaffer
Finding participants is already one of the biggest bottlenecks in psychology research, and it would get worse in shend's scenario, because the supply of participants is fairly inelastic.
Duncan60

I look forward to the results of this study. Quite frankly, most soft-science fields could use this sort of scrutiny. I'd also love to see how reproducible the studies done by medical doctors (as opposed to research scientists) are. Even the hard sciences have a lot of publications with problems; however, these erroneous results, especially if they are important to current topics of interest, are relatively quickly discovered, since other labs often need to reproduce the results before moving forward.

I would add one caution. Failure to ... (read more)

Duncan00

Long term caffeine tolerance can be problematic. To combat this problem, every 2-4 months I stop taking caffeine for about 2 weeks (carefully planned for less hectic weeks). In my experience and that of at least one other colleague this method significantly lowers and possibly removes the caffeine tolerance. Two people does not make a study, but if you need to combat caffeine tolerance it may be worth a try.

Duncan00

How do you propose organizing a 'master list' of solutions, relevant plot pieces, etc. given the current forum format? Some people have made some lists, but they are often quickly buried beneath other comments. I'm also not familiar enough with how things work to know if a post can be edited days after it has been posted. One obvious solution is that a HPMOR reader who likes making webpages puts up a wiki page for this. Can this be done on Lesswrong.com?

0Rejoyce
If we held off proposing solutions, the first two days of analysis wouldn't get buried in the first place. And to answer your question: forum posts can be edited, and the date posted is marked with an asterisk if it was. A wiki sounds sensible, but it might be a little too complex for those who are unfamiliar with it, not to mention there'd be tons of editing conflicts going on. I propose Google Docs, for its real-time collaboration, or any other similar alternative. Etherpad?
Duncan10

Eliezer Yudkowsky's Author Notes, Chp. 81
This makes me worry that the actual chapter might've come as an anticlimax, especially with so many creative suggestions that didn't get used. I shall poll the Less Wrong discussants and see how they felt before I decide whether to do this again. This was actually intended as a dry run for a later, serious "Solve this or the story ends sadly" puzzle - like I used in Part 5 of my earlier story Three Worlds Collide - but I'll have to take the current outcome into account when deciding whether to go

... (read more)
Duncan30

If that is the case then the hat didn't actually say "it couldn't tell if Harry had any false memories." It said it couldn't detect deleted memories and seems to imply that 'sophisticated analysis' of all of his memories for 'inconsistencies' would be required to do so. The false memory given to Hermione is at the forefront of her mind and doesn't require the hat to scan her memories (though Hermione could replay memories of event for the hat presumably). In addition the false memory is entirely out of character with Hermione's personality whi... (read more)

Duncan10

The hat says specifically: "I can go ahead and tell you that there is definitely nothing like a ghost - mind, intelligence, memory, personality, or feelings - in your scar. Otherwise it would be participating in this conversation, being under my brim." It says memory specifically. Both a false memory and a 'scar memory' could at this point be treated as 'foreign' to Hermione.

Are you referring to this slightly earlier quote: "Anyway, I have no idea whether or not you've been Obliviated. I'm looking at your thoughts as they form, not readin... (read more)

0JoshuaZ
Yes, I'm referring to the second bit.
Duncan20

If the sorting hat has enough access to one's mind to sort children into their appropriate house, then it seems entirely possible that it has enough access to identify a false memory. The sorting hat is an extremely powerful artifact, which implies that the false memory would have to come from a significantly greater power for us to conclude at this point that it can remain hidden from the sorting hat.

JoshuaZ110

The Sorting Hat when it was on Harry said that it couldn't tell if Harry had any false memories and that it just looks at thoughts as they form. So it is unlikely it can do much to detect such issues.

Duncan-20

I'd like to "Hold Off on Proposing Solutions" or in this case hold off on advocating answers. I don't have time to list all the important bits of data we should be considering or enumerate all the current hypotheses, but I think both would be quite valuable.

Some quick hypotheses:

-Mr. Hat & Cloak is Quirrellmort & responsible for Hermione's 'condition'

-Mr. Hat & Cloak is Lucius & responsible for Hermione's 'condition'

-Mr. Hat & Cloak is Voldemort, but not the Quirrell body.

-Mr. Hat & Cloak is Quirrellmort and trying to ... (read more)

0Anubhav
Anyone going to explain this? Sounds important, but I have no idea what it's referring to. Edit: The groundhog day attack has been rewritten.
4Eneasz
I completely read her tone of voice as "in shock and brain is stuck." Very much like Buffy after the 6:30min mark in The Body. Telling the paramedics "Good luck", cleaning up the vomit, etc.
8Xachariah
The movie 'Groundhog Day' is about a man who relives the same day over and over again. Because the day is reset, he is able to replay each interaction with any person repeatedly until he can convince them of whatever he wants or work around them. E.g., he finds the hottest woman in town. The first day, he hits on her and is shot down, but learns of her high school. The second day, he says 'hey, didn't we go to high school together at...?' He is quickly shot down again, but gets more information to keep the conversation longer. This repeats until he eventually gets her to have sex with him. In chapter 77, H&C performs a similar hack. He tries to convince her, then obliviates her memory and uses his gained information to convince her even more, etc. Instead of resetting the day, he is resetting her mind back again and again. After enough iterations, he'll know exactly the right things to say to convince her to do whatever it is he wishes. As hinted in Chapter 77, what we viewed was neither the first nor the last iteration of the attack.
Duncan10

One of the primary problems with rationalists, humanists, atheists, skeptics, etc. is that there is no higher-level organization, and thus we tend to accomplish very little compared to most other organizations. I fully support efforts to fix this problem.

Duncan20

If I understand this correctly, your 'AI' is biased to do random things, but NOT as a function of its utility function. If that is correct, then your 'AI' simply does random things (according to its non-utility bias), since its utility function has no influence on its actions.

Duncan20

I consider all of the behaviors you describe as basically transform functions. In fact, I consider any decision maker a type of transform function where you have input data that is run through a transform function (such as a behavior-executor, utility-maximizer, weighted goal system, a human mind, etc.) and output data is generated (and in the case of humans sent to our muscles, organs, etc.). The reason I mention this is that trying to describe a human's transform function (i.e., what people normally call their mind) as mostly a behavior-executor or jus... (read more)

0BrandonReinhart
Is "transform function" a technical term from some discipline I'm unfamiliar with? I interpret your use of that phrase as "operation on some input that results in corresponding output." I'm having trouble finding meaning in your post that isn't redefinition.
Duncan60

I am having trouble scanning the HPMoR thread for topics I'm interested in, due to both its length and the lack of a hierarchical organization by topic. I would appreciate any help with this problem, since I do not want to make comments that are simple duplicates of previous comments I failed to notice. With that in mind, is there a discussion forum or some method to scan the HPMoR discussion thread that doesn't involve a lot of effort? I have not found organizing comments by points to be useful in this respect.

Edit: I'm new and this is my 1st comment. I've read a lot of the sequences, but I don't know my way around yet. It's quite possible I'm missing a lot about how things work here.

2Unnamed
You're right, the MOR discussion threads aren't very well organized for that. They work well enough for having an ongoing discussion, but not so well as an archive of the discussion that's already happened. If you have a particular subject in mind and you want to see what's been posted about it, the simplest thing is probably to search the thread(s) for relevant keywords, including the chapter number. You could either use ctrl+f on each one of the threads that might contain relevant discussion, or the site's search function. Don't worry so much about duplicating previous comments. It's worth doing a quick search to try to avoid it, but when it happens it's not so bad (especially with threads like these ones). If you don't have a particular subject in mind and you just want to skim the discussion to see what's interesting, I don't have anything better to suggest than sorting by karma points.