If it's worth saying, but not worth its own post, even in Discussion, it goes here.
NEW GAME:
After reading some mysterious advice or seemingly silly statement, append "for decision theoretic reasons" to the end of it. You can now pretend it makes sense and earn karma on LessWrong. You are also entitled to feel wise.
Variants:
"due to meta level concerns."
"because of acausal trade."
Unfortunately, I must refuse to participate in your little game on LW - for obvious decision theoretic reasons.
The priors provided by Solomonoff induction suggest, for decision-theoretic reasons, that your meta-level concerns are insufficient grounds for acausal karma trade.
The most merciful thing in the world, I think due to meta level concerns, is the inability of the human mind to correlate all its contents.
I've been trying-and-failing to turn up any commentary by neuroscientists on cryonics. Specifically, commentary that goes into any depth at all.
I've found myself bothered by the apparent dearth of people from the biological sciences enthusiastic about cryonics, which seems to be dominated by people from the information sciences. Given the history of smart people getting things terribly wrong outside of their specialties, this makes me significantly more skeptical about cryonics, and somewhat anxious to gather more informed commentary on information-theoretic death, etc.
Somewhat positive:
Ken Hayworth: http://www.brainpreservation.org/
Rafal Smigrodzki: http://tech.groups.yahoo.com/group/New_Cryonet/message/2522
Mike Darwin: http://chronopause.com/
It is critically important, especially for the engineers, information technology, and computer scientists who are reading this to understand that the brain is not a computer, but rather, it is a massive, 3-dimensional hard-wired circuit.
Aubrey de Grey: http://www.evidencebasedcryonics.org/tag/aubrey-de-grey/
Ravin Jain: http://www.alcor.org/AboutAlcor/meetdirectors.html#ravin
Lukewarm:
Sebastian Seung: http://lesswrong.com/lw/9wu/new_book_from_leading_neuroscientist_in_support/5us2
Negative:
kalla724: comments http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/
The critique reduces to a claim that personal identity is stored non-redundantly at the level of protein post-translational modifications. If there were actually good evidence that this is how memory/personality is stored, I'd expect it to be better known. Besides, if this were the case, how has LTP been shown to be sustained following vitrification and re-warming? I await kalla724's full critique.
There are skeptics, such as Kenneth Storey, http://www4.carleton.ca/jmc/catalyst/2004/sf/km/km-cryonics.html
Wow. Now there's a data point for you. This guy's an expert in cryobiology and he still gets it completely wrong. Look at this:
Storey says the cells must cool “at 1,000 degrees a minute,” or as he describes it somewhat less scientifically, “really, really, really fast.” The rapid temperature reduction causes the water to become a glass, rather than ice.
Rapid temperature reduction? No! Cryonics patients are cooled VERY SLOWLY. Vitrification is accomplished by high concentrations of cryoprotectants, NOT rapid cooling. (Vitrification caused by rapid cooling does exist -- this isn't it!)
I'm just glad he didn't go the old "frozen strawberries" road taken by previous expert cryobiologists.
Later in the article we have this gem:
"they (claim) they will somehow overturn the laws of physics, and chemistry and evolution and molecular science because they have the way..."
This guy apparently thinks we are planning to OVERTURN THE LAWS OF PHYSICS. No wonder he dismisses us as a religion!
When it comes to smart people getting something horribly wrong that is ou...
Why do the (utterly redundant) words "Comment author:" now appear in the top left corner of every comment, thereby pushing the name, date, and score to the right?
Can we fix this, please? This is ugly and serves no purpose. (If anyone is truly worried that someone might somehow not realize that the name in bold green refers to the author of the comment/post, then this information can be put on the Welcome page and/or the wiki.)
To generalize: please no unannounced tinkering with the site design!
Can a moderator please deal with private_messaging, who is clearly here to vent rather than provide constructive criticism?
You currently have 290 posts on LessWrong and Zero (0) total Karma.
I don't care about opinion of a bunch that is here on LW.
Others: please do not feed the trolls.
Standing rules are to make a user's comments bannable if their comments are systematically and significantly downvoted, and the user keeps making a whole lot of the kind of comments that get downvoted. In that case, after giving the user notice, a moderator can start banning future comments of the kind that clearly would be downvoted, or that did get downvoted, primarily to prevent the development of discussions around those comments (which would incite further downvoted comments from the user).
So far, this rule has only been applied to crackpot-like characters who got something like minus 300 points within a month and generated ugly discussions. private_messaging is not within that cluster, and it's still possible that he'll either go away or calm down in the future (e.g. stop making controversial statements without arguments, which is the kind of thing that gets downvoted).
I'm going to reduce (or understand someone else's reduction of) the stable AI self-modification difficulty related to Löb's theorem. It's going to happen, because I refuse to lose. If anyone else would like to do some research, this comment lists some materials that presently seem useful.
The slides for Eliezer's Singularity Summit talk are available here; reading them is considerably nicer than squinting at flv compression artifacts in the video of the talk, also available at the previous link. A transcription of the video can be found here as well.
On provability logic by Švejdar. A little introduction to provability logic. This and Eliezer's talk are at the top because they're reference material. The remaining links are organized by my reading priority:
LessWrong/Overcoming Bias used to be a much more interesting place. Note how lacking in self-censorship Vassar is in that post, talking about sexuality and the norms surrounding it like we would any other topic. Today we walk on eggshells.
A post of that kind is impossible today, despite the great personal benefit it would bring to (in my estimation) at least 30% of the users of this site, and despite making better predictive models of social reality available to all users.
If I understand correctly, the purpose of the self-censorship was to make this site more friendly for women. Which creates a paradox: the idea that one can speak openly with men, but that with women self-censorship is necessary, is itself kind of offensive to women, isn't it?
(The first rule of Political Correctness is: you don't talk about Political Correctness. The second rule: you don't talk about Political Correctness. The third rule: when someone says stop, or expresses outrage, the discussion of the given topic is over.)
Or maybe this is too much of a generalization. What other topics are we self-censoring, besides sexual behavior and politics? I don't remember. Maybe it is just politics being self-censored, with sexual behavior being a sensitive political topic. Problem is, any topic can become political if, for whatever reason, "Greens" decide to identify with a position X, and "Blues" with a position non-X.
We are taking the taboo on political topics too far. Instead of avoiding mindkilling, we avoid the topics completely.
Although we have traditional exceptions: it is allowed to talk about evolution and atheism, despite the fact that some people might consider these topics...
As to political correctness, its great insidiousness lies in the fact that while you can complain about it in the manner of a religious person complaining abstractly about hypocrites and Pharisees, you can't ever back up your attack with specific examples, since if you do, you are violating sacred taboos, which means you lose your argument by default.
The pathetic exception to this is attacking very marginal and unpopular applications that your fellow debaters can easily dismiss as misguided extremism or even a straw man argument.
The second problem is that as time goes on, if reality happens to be politically incorrect on some issue, any other claim that points to the truth on that issue becomes potentially tainted by the label as well. You actively have to keep thinking up new models of why the dragon is indeed obviously in the garage. You also need good models of how well other people can reason about the absence of the dragon, to see where exactly you can walk without concern. This is a cognitively straining process in which everyone slips up.
I recall my country's Ombudsman once visiting my school for a talk wearing a T-shirt that said "After a close up no one looks ...
As to political correctness, its great insidiousness lies in the fact that while you can complain about it in the manner of a religious person complaining abstractly about hypocrites and Pharisees, you can't ever back up your attack with specific examples
My fault for using a politically charged word for a joke (but I couldn't resist). Let's do it properly now: what exactly does "political correctness" mean? It is not just any set of taboos (we wouldn't refer to e.g. religious taboos as political correctness). It is a very specific set of modern-era taboos. So perhaps it is worth distinguishing between taboos in general, and political correctness as a specific example of taboos. The similarities are obvious; what exactly are the differences?
I am just making a quick guess now, but I think the difference is that the old taboos were openly known as taboos. (It is forbidden to walk in a sacred forest, but it is allowed to say: "It is forbidden to walk in a sacred forest.") The modern taboos pretend to be something other than taboos. (An analogy would be that everyone knows that when you walk in a sacred forest, you will be tortured to death, but if you say: "It is forbidden to w...
Political correctness (without hypocrisy) feels from the inside like a fight against factual incorrectness with dangerous social consequences. It's not just "you are wrong", but "you are wrong, and if people believe this, horrible things will happen".
Mere factual incorrectness will not invoke the same reaction. If one professor of mathematics admits to believing that 2+2=5, and another admits to believing that women on average are worse at math than men, both could be fired, but people will not be angry at the former. It's not just about fixing an error, but also about saving the world.
Then, what is the difference between a politically incorrect opinion, and a factually incorrect opinion with dangerous social consequences? In theory, the latter can be proved wrong. In real life, some proofs are expensive or take a lot of time; also, many people are irrational, so even a proof would not convince everyone. But I still suspect that in the case of a factually incorrect opinion, opponents would at least try to prove it wrong, and would expect support from experts; while in the case of a politically incorrect opinion, an experiment would be considered dangerous and the experts unreliable. (Not completely sure about this part.)
I hope you realize that by picking the example of race you make my above comment look like a clever rationalization for racism if taken out of context.
Also, you are empirically just plain wrong about the average online community. Give me one example of a public figure who has done this. If people like Charles Murray or Arthur Jensen can't pull it off, you need to be a rather remarkable person to do so in a random internet forum, where standards of discussion are usually lower.
As to LW, it is hardly a typical forum! We have plenty of overlap with the GNXP and the wider HBD crowd. Naturally there are enough people who will upvote such an argument. On race we are actually good: we are willing to consider arguments, and we don't seem to have racists here either, which is pretty rare online.
Ironically, our being good on race is the reason I don't want us talking about race too much in articles: it attracts the wrong contrarian cluster to come visit, fries the brains of newbies, and creates room for "I am offended!" trolling.
Even if, for the sake of argument, I granted this point, it doesn't directly address any part of my description of the phenomena and of how they are problematic.
Summary of an IRC conversation in the unofficial LW chatroom.
On the IRC channel I noted that there are several subjects on which discourse was better or more interesting on OB/LW in 2008 than today, yet I can't think of a single topic on which LW 2012 has better dialogue or commentary. Another LWer noted that it is in the nature of all internet forums to "grow more stupid over time". I don't think LW is stupider; I just think it has grown more boring, and it definitely isn't a community with a higher sanity waterline today than back then, despite many individuals levelling up formidably in the intervening period.
Some new place started by the same people. Before LW was OB; before OB was SL4; before that was... I don't know.
This post is made in the hopes people will let me know about the next good spot.
Random thought: if we assume a large universe, does that imply that somewhere/somewhen there is a novel that just happens to perfectly resemble our lives? If so, I am so going to acausally break the fourth wall. Bonus question: how does this intersect with the rules of the internet?
I am interested in reading on a fairly specific topic, and I would like suggestions. I don't know any way to describe this other than by giving the two examples I have thought of:
Some time ago my family and I visited India. There, among other things, we saw many cows with an extra, useless leg growing out of their backs near the shoulders. This mutation is presumably not beneficial to the cow, but it strikes me as beneficial to the amateur geneticist. Isn't it incredibly interesting that a leg can be the by-product of random mutation? Doesn't that tell us a lot about the way genes are structured - namely that somewhere out there is a gene that encodes things at nearly the level of limbs - some small number of genes corresponds nearly directly to major, structural components of the cow. It's not all about molecules, or cells, or even tissues! Genes aren't like a bitmap image - they're hierarchical and structured. Wow!
Similarly, there are stories of people losing specific memory 'segments', say, their personal past but not how to read and write, how to drive, or how to talk. Assuming that these stories are approximately true, that suggests that some forms of memory loss are not random...
Related to: List of public drafts on LessWrong
Is meritocracy inhumane?
Consider how meritocracy drains the lower and middle classes of highly capable people, and how this increases the actual differences, both in culture and in ability, between the various parts of a society, widening the gap between them. It seems to make sense that, ceteris paribus, the classes will live more segregated from each other than ever before.
Now merit has many dimensions, but let's take the example of a trait that helps you with virtually anything. Highly intelligent people have positive externalities they don't fully capture. Always using the best man for the job should produce more wealth for society as a whole. It also appeals to our sense of fairness: isn't it better that the most competent man gets the job, rather than the one with the highest title of nobility, or from the right ethnic group, or the one who got the winning lottery ticket?
Let us leave aside problems with utilitarianism for the sake of argument and ask: does this automatically mean we have a net gain in utility? The answer seems to be no. A transfer of wealth and quality of life not just from the less deserving to the more deserving but from th...
I see at least two other major problems with meritocracy.
First, a meritocracy opens for talented people not only positions of productive economic and intellectual activity, but also positions of rent-seeking. So while it's certainly great that meritocracy in science has given us von Neumann, meritocracy in other areas of life has at the same time given us von Neumanns of rent-seeking, who have taken the practices of rent-seeking to an unprecedented extent and to ever more ingenious, intellectually involved, and emotionally appealing rationalizations. (In particular, this is also true of those areas of science that have been captured by rent-seekers.)
Worse yet, the wealth and status captured by the rent-seekers are, by themselves, the smaller problem here. The really bad problem is that these ingenious rationalizations for rent-seeking, once successfully sold to the intellectual public, become a firmly entrenched part of the respectable public opinion -- and since they are directly entangled with power and status, questioning them becomes a dangerous taboo violation. (And even worse, as it always is with humans, the most successful elite rent-seekers will be those who honestly inte...
After a painful evening, I got an A/B test going on my site using Google Website Optimizer*: testing the CSS max-width property (800, 900, 1000, 1200, 1300, & 1400px). I noticed that most sites seem to set it much more narrowly than I did, e.g. Readability. I set the 'conversion' target to be a 40-second timeout, as a way of measuring 'are you still reading this?'
Overnight each variation got ~60 visitors. The original 1400px converts at 67.2% ± 11% while the top candidate 1300px converts at 82.3% ± 9.0% (an improvement of 22.4%) with an estimated 92.9% chance of beating the original. This suggests that a switch would materially increase how much time people spend reading my stuff.
(The other widths: currently, 1000px: 71.0% ± 10%; 900px: 68.1% ± 10%; 1200px: 66.7% ± 11%; 800px: 64.2% ± 11%.)
This is pretty cool - I was blind but now can see - yet I can't help but wonder about the limits. Has anyone else thoroughly A/B-tested their personal sites? At what point do diminishing returns set in?
* I would prefer to use Optimizely or Visual Website Optimizer, but they charge just ludicrous sums: if I wanted to test my 50k monthly visitors, I'd be paying hundreds of dollars a month!
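(For the curious: here is a minimal sketch of one standard way such a "chance of beating the original" figure can be computed, using Beta posteriors over each variation's true conversion rate. The visitor and conversion counts below are round numbers back-calculated from the percentages above, not the actual GWO data, and GWO's own method may differ.)

```python
# Minimal sketch: Bayesian "chance to beat original" for two variations.
import numpy as np
from scipy import stats

visitors = 60                      # roughly 60 visitors per variation
conv_a = round(0.672 * visitors)   # original, 1400px -> 40 conversions
conv_b = round(0.823 * visitors)   # candidate, 1300px -> 49 conversions

# A Beta(1,1) prior plus binomial data gives a Beta posterior per arm.
post_a = stats.beta(1 + conv_a, 1 + visitors - conv_a)
post_b = stats.beta(1 + conv_b, 1 + visitors - conv_b)

# Monte Carlo estimate of P(true rate of 1300px > true rate of 1400px).
rng = np.random.default_rng(0)
n = 100_000
p_b_beats_a = np.mean(post_b.rvs(size=n, random_state=rng)
                      > post_a.rvs(size=n, random_state=rng))
print(f"Estimated chance 1300px beats 1400px: {p_b_beats_a:.1%}")
```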
New heuristic: When writing an article for LessWrong assume the casual reader knows about the material covered in HPMOR.
I used to think one could assume they had read the sequences and some other key stuff (Hanson etc.), but looking at debates, this simply can't be true for more than a third of current LW users.
I find it pretty easy to pursue a course of study and answer assessment questions on the subject. Experience teaches me that such assessment problems usually tell you how to solve them (either implicitly or explicitly), and that I won't gain proper appreciation for the subject until I use it in a more poorly-defined situation.
I've been intending to get a decent understanding of the HTML5 canvas element for a while now, and last week I hit upon the idea of making a small point & click adventure puzzle game. This is quite ambitious given my past experience...
A fellow LessWrong user on IRC: "Good government seems to be a FAI-complete problem. "
I just read the new novel by Terry Pratchett and Stephen Baxter, The Long Earth. I didn't like it and don't recommend it (I read it because I loved other books by Pratchett, but there's no similarity here).
There was one thing in particular that bothered me. I read the first 10 reviews of the book that Google returns, and they were generally negative and complained about many things, but never mentioned this issue. Many described Baxter as a master of hard sci fi, which makes it doubly strange.
Here's the problem: in this near-future story, gurer vf n Sbbz...
A usual idea of utopia is that chores-- repetitive, unsatisfying, necessary work to get one's situation back to a baseline-- are somehow eliminated. Weirdtopia would reverse this somehow. Any suggestions?
Some more SIAI-related work: looking for examples of costly real-world cognitive biases: http://dl.dropbox.com/u/85192141/bias-examples.page
One of the more interesting sources is Heuer's Psychology of Intelligence Analysis. I recommend it, for the unfamiliar political-military examples if nothing else. (It's also good background reading for understanding the argument diagramming software coming from the intelligence community, not that anyone on LW actually uses them.)
I read quite a bit, and I really like some of the suggestions I found on LW. So, my question is: is there any recent, or not-so-recent-but-really-good, book you would recommend? Topics I'd like to read more about are:
I'm happy to read pop-sci, as long as it's written with a skeptical, rationalist mindset. E.g. I liked Linden's The Accidental Mind, but take Gladwell's writings with a rather big grain of salt.
I'm feeling fairly negative on lesswrong this week. Time spent here feels unproductive, and I'm vaguely uncomfortable with the attitudes I'm developing. On the other hand there are interesting people to chat with.
Undecided what to do about this. Haven't managed to come up with anything to firm up my vague emotions into something specific.
Perhaps I'll take a break and see how it feels.
After a week long vacation at Disney World with the family, it occurs to me there's a lot of money to be made in teaching utility maximization to families...mostly from referrals by divorce lawyers and family therapists.
I'm trying to memorise mathematics using spaced repetition. What's the best way to transcribe proofs onto Anki flashcards to make them easy to learn? (i.e. what should the question and answer be?)
Did the site CSS just change the font used for discussion (not Main) post bodies? It looks bad here.
Edit: it only happens with some posts. Like these:
http://lesswrong.com/r/discussion/lw/dd0/hedonic_vs_preference_utilitarianism_in_the/ http://lesswrong.com/r/discussion/lw/dc4/call_for_volunteers_publishing_the_sequences/
But not these:
http://lesswrong.com/r/discussion/lw/ddh/aubrey_de_grey_has_responded_to_his_iama_now_with/ http://lesswrong.com/r/discussion/lw/dcy/the_fiction_genome_project/
Is it perhaps a formatting change applied when posting?
Also, whe...
One more item for the FAI Critical Failure Table (humor/theory of lawful magic):
37. Any possibility automatically becomes real, whenever someone justifiably expects that possibility to obtain.
Discussion: Just expecting something isn't enough, so crazy people don't make crazy things happen. The anticipation has to be a reflection of real reasons for forming the anticipation (a justified belief). Bad things can be expected to happen as well as good things. What actually happens doesn't need to be understood in detail by anyone, the expectation only has to be...
What is the meaning of the three-digit codes in American university courses? For example: "Building a Search Engine (CS101)", "Crunching Social Networks (CS215)", "Programming A Robotic Car (CS373)", currently on Udacity.
It seems to me that 101 is always the introduction to the subject. But what about the other numbers? Do they correspond to some (subject-specific) standard? Are they arbitrary (perhaps with a general trend of giving more difficult courses higher numbers)?
We often hear about how professional philanthropy is a very good way to improve others' lives. Have any LWers actually gone this route?
I would like to try some programming in Lisp; could you give me some advice? I have noticed that in the programming community this topic is prone to heavy mindkilling, which is why I ask on LW instead of somewhere else.
There are many variants of Lisp. I would prefer to learn one that is really used these days for developing real-world applications. Something I could use to make e.g. a Tetris-like game. I will probably need some libraries for input and output; which ones do you recommend? I want free software that works out of the box; preferably on a Win...
My research suggests Clojure is the Lisp-like language most suited to your requirements. It runs on the JVM, so it should be relatively low-hassle on Windows. I believe there's some sort of Eclipse support, but I can't confirm it.
If you do end up wanting to do something with Common Lisp, I recommend Practical Common Lisp as a good free introduction.
I'm Xom#1203 on Diablo 3. I have a lvl 60 Barb and a lvl ~35 DH. I'm willnewsome on chesscube.com, ShieldMantis on FICS. I like bullet 960 but I'm okay with more traditional games too. Currently rated like 2100 on chesscube, 1600 or something on FICS. Rarely use FICS. I'd like to play people who are better than me, gives me incentive to practice.
Suggestion:
I consider tipping to be a part of the expense of dining - bad service bothers me, but not tipping also bothers me, as I don't feel like I've paid for my meal.
So I've come up with a compromise with myself, which I think will be helpful for anybody else in the same boat:
If I get bad service, I won't tip (or tip less, depending on how bad the service is). But I -will- set aside what I -would- have tipped, which will be added to the tip the next time I receive good service.
Double bonus: When I get bad service at very nice restaurants, the waiter at the Steak and Shake I more regularly eat at (it's my favored place to eat) is going to get an absurdly large tip, which amuses me to no end.
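(Spelling out the bookkeeping as a toy sketch; the 18% base rate and the example bills are invented parameters, not anything from the comment above.)

```python
# Toy model of the "banked tip" scheme: withhold the tip for bad service,
# then release the accumulated amount at the next good-service meal.
class TipBank:
    def __init__(self, base_rate=0.18):   # assumed typical tip rate
        self.base_rate = base_rate
        self.banked = 0.0                  # withheld tips awaiting good service

    def tip(self, bill, good_service):
        owed = bill * self.base_rate
        if good_service:
            payout = owed + self.banked    # release everything banked so far
            self.banked = 0.0
            return payout
        self.banked += owed                # bad service: tip nothing, set it aside
        return 0.0

bank = TipBank()
print(bank.tip(80.00, good_service=False))  # nice restaurant, bad service -> 0.0
print(bank.tip(12.00, good_service=True))   # Steak and Shake -> 16.56
```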
What would a poster designed to spread awareness of a Less Wrong meetup look like? How can it appeal to non-technophiles / students of the social sciences?
I don't really follow or understand the "timeless decision" topic on LW, but I have a feeling that a significant part of it is one agent predicting what another agent would do, by simulating the other agent's algorithm. (This is my very uninformed understanding of the "timeless" part: I don't have to wait until you do X, because I can already predict whether you would do X, and behave accordingly. And you don't have to wait for my reaction, because you can already predict it too. So let's predict-cause each other to cooperate, and win mutually.)
If I am corre...
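(A minimal sketch of that "predict-cause each other to cooperate" loop, under the crude assumption that a fixed recursion depth with an optimistic base case can stand in for the real, much harder problem of simulating an agent that is simulating you.)

```python
# Toy model: an agent cooperates iff it predicts its opponent cooperates.
# The depth limit breaks the simulate-the-simulator regress; real
# decision-theory proposals handle this far more carefully.

def agent(opponent, depth):
    if depth == 0:
        return "C"  # base case: assume cooperation to cut off the regress
    prediction = opponent(agent, depth - 1)  # simulate the other agent
    return "C" if prediction == "C" else "D"

# Two copies of the same agent predict each other into cooperating:
print(agent(agent, 5))  # -> C
```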
Is anyone familiar with any statistical or machine-learning-based evaluations of the "Poverty of Stimulus" argument for language innateness (the hypothesis that language must be an innate ability, because children aren't exposed to enough language data to learn it properly in the time they actually take)?
I'm interested in hearing what actually is and isn't possible to learn, from someone in a position to actually know (i.e. not a linguist).
Does anyone know of a good guide to Gödel's theorems along the lines of the Cartoon Guide to Löb's Theorem?
Has anybody here changed their mind on the matter of catastrophic anthropogenic global warming, and if so, what evidence or arguments made you reconsider your original position?
I've bounced back and forth on the matter several times, and right now I'm starting to doubt global warming itself, never mind catastrophic or anthropogenic, since the sources I read most frequently are biased against it, and the sources which support it have a bad habit of deleting any comments that disagree or criticize the evidence, which has led to my taking them less seriously. The ideal for me would be arguments or evidence that changed somebody's mind towards supporting the theory.
Blogs by LWers:
Note: About this list. New suggestions are welcome. Anyone searching for interesting blogs that may not be written by LWers, check out this or maybe this threa...
I find that, sporadically, I act like a total attention whore around people whom I respect and may talk to more or less freely - whether I know them or we're only distantly acquainted. This mostly includes my behavior in communities like this, but also in class and wherever else I can interact informally with a group of equals. I talk excitedly about myself, about various things that I think my audience might find interesting, etc. I know it might come across as uncouth, annoying and just plain abnormal, but I don't even feel a desire to stop. It's not due...
With regard to Optimal Employment, what does everyone think of the advice given in this article?
"...There are career waiters in Los Angeles and they’re making over $100,000 a year.”
That works out (for the benefit of other Europeans) at €80,000 - an astonishing amount of money, to me at least. LA seems like a cool place, with a lot of culture and more places of interest within easy traveling distance than Dublin offers.
Why don't people like markets?
A very interesting read in which the author speculates on possible reasons why people seem to be biased against markets. To summarize:
Positive Juice seems to have several posts related to rationality. (Look under "most viewed posts" on the sidebar.)
A question about acausal trade
(btw, I couldn't find a good introductory link on acausal trade; I would be grateful for one)
We discussed this at a LW Seattle meetup. It seems like the following is an argument for why all AIs with a decision theory that does acausal trade act as if they have the same utility function. That's a surprising conclusion to me, one I hadn't seen before, but it also doesn't seem too hard to come up with, so I'm curious where I've gone off the rails. This argument has a very Will_Newsomey flavor to it, to me.
Let's say we're in a big universe with many, many chances for intelligent life, but most of them are so far apart that they will never meet each other. Let's also say that UDT/TDT-like decision theories are in some sense the obviously correct decision theories to follow, so that many civilizations, when they build an AI, use something like UDT/TDT. At their inception, these AIs will have very different goals, since the civilizations that built them would have very different evolutionary histories.
If many of these AIs can observe that the universe is such that there will be other UDT/TDT AIs out there with different goals, then each AI will trade acausally with the AIs it thinks will be out there. Presumably each AI will have to study the universe and figure out a probability distribution over the goals of those AIs. Since the universe is large, each AI will expect many other AIs to exist, and will thus bargain away most of its influence over its local area. Thus, the starting goals of each AI will have only a minor influence on what it does; each AI will act as if it has some combined utility function.
What are the problems with this idea?
Substitute the word causal for acausal. In a situation of "causal trade", does everyone end up with the same utility function?
One problem is that, in order to actually get specific about utility functions, the AI would have to simulate another AI that is simulating it - that's like trying to put a manhole cover through its own manhole by putting it in a box first.
If we assume that the computational problems are solved, a toy model involving robots laying different colors of tile might be interesting to consider - in fact, there's probably a post in there; a sketch of such a model follows below. The effects will be of different sizes for different classes of utility functions over tiles. In the case of infinitely many robots with cosmopolitan utility functions, you do get an interesting sort of agreement, though.
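(Taking up that suggestion, here is a minimal sketch of such a toy model, under the strong assumption that the acausal bargain amounts to acting on the average of the utility functions each AI expects to exist; the weights are random placeholders.)

```python
# Tile-laying robots: each AI has a private utility (a weight vector over
# tile colors). If the bargain averages the utility functions each AI
# expects to exist, every robot ends up optimizing the same combined
# function, whatever its own weights were.
import numpy as np

rng = np.random.default_rng(0)
n_ais, n_colors = 5, 3

# One row per AI: its private utility weights over the tile colors.
private_utils = rng.random((n_ais, n_colors))

# Here every AI estimates the distribution of goals perfectly, so they
# all compute the same combined (averaged) utility function.
combined = private_utils.mean(axis=0)

for i, u in enumerate(private_utils):
    selfish = int(np.argmax(u))        # color it would lay on its own
    traded = int(np.argmax(combined))  # color it lays after the trade
    print(f"AI {i}: selfish color {selfish}, post-trade color {traded}")
```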