Here.

Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.

The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and more completely describe it in its own discussion post when more posts are up, hopefully including a few from people besides me, and when the archive will give a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the url.

I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong should probably be put as comments on this discussion post. Thanks all!

75 comments
[-]Logos130

To summarize my reasons for downvoting, after first reading the entire contents of the linked blog:

There are standard scenarios in which our world is a hoax, e.g. a computer simulation or stage-managed by aliens. These are plausible enough to be non-negligible in their most general form, although claims of weird specific hoaxes are unlikely. Given some weird observation, like waking up with a blue tentacle, a claim of a weird specific hoax is the most likely non-delusory explanation.

Because of the schizophrenia you have previously mentioned here, you make a lot of weird observations, and have trouble interpreting mundane coincidences as mundane. You also picked up a lot of ideas from the Less Wrong community. So you reach out to the hoax hypotheses to justify your delusions and hallucinations, and go on to encrust them with theological language. This is both a common tendency in paranoid schizophrenics, and a way to assert opposition to and claim superiority to Less Wrong, per your usual self-admitted trolling.

This approach seems unlikely to lead to fruitful or pleasant reading. And empirically, the ratio of nonsense, "raving crank style," and insanity to interesting ideas (all available elsewhere) is far too high. The situation is sad, but I want to see less of this, including posts linking to it, so I downvoted.

3Will_Newsome
Perhaps I should also note that I disagree with your analysis on various points. I'm schizotypal I suppose, but not schizophrenic given the standard definition. I don't think I have any trouble interpreting mundane coincidences as mundane. Not especially so, actually. No, I honestly prefer something like Thomism to tricky hoaxes. At Computational Theology I haven't even really gotten into theology yet, and I certainly haven't claimed that any supposed paranormal influences are or aren't related to God. I'm not sure what "this" is that you're referring to. Theological language? I don't think schizophrenics commonly try to "justify" their delusions by couching them in terms of theological language. What would the point be? I don't get it. Note that talking about the abstract nature of God and so on is completely unrelated to common schizophrenic symptoms like thinking one is God or that one is somehow an ontologically privileged person. No, I don't represent LessWrong as a thing in that way. Some on LessWrong are very interesting, some aren't. I try to only talk to the interesting folk, even if they have serious disagreements with me. I certainly don't think I'm "superior" to sundry people who participate on LessWrong. I rarely troll—few of my LessWrong comments are downvoted. Is trolling relevant to the post? I don't think the writing style and content of the post smacks of superiority, and I don't think it's trolling. It seems to me to be an argument made in good faith in the hopes of calling attention to a hypothesis that is rightly or wrongly seen as neglected. Which approach? I don't think I'm trolling, or condescending. Regarding pleasantness, is there something else wrong with my writing style? Regarding fruitfulness, is it that you're not interested in the things I discuss for whatever reason, or, more likely, is it that I generally don't come up with ideas that catalyze further fruit-bearing insights for you? If the latter, I agree this is a problem,

I rarely troll—few of my LessWrong comments are downvoted.

(Empirical data: According to a karma histogram program someone posted some months ago (I saved a copy locally, but regrettably have forgotten the author's identity), 294 of 2190 of your recent comments (about 13.4%) have negative karma as of around 1735 PDT today.)

[Edited to add: However, as Will points out in the child, it might be misleading to simply count downvoted comments, because it is believed that some users mass-downvote the comments of certain others rather than judging each comment individually; only 80 out of the 2190 comments under consideration (about 3.7%) were voted to -4 or below.]
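(The tallies above amount to simple threshold counts over a list of per-comment karma scores. For concreteness, a minimal sketch of the arithmetic; the function name and sample scores are illustrative, not the original histogram program:)

```python
def karma_summary(scores):
    """Fraction of comments falling past each karma threshold."""
    n = len(scores)
    return {
        "negative": sum(s < 0 for s in scores) / n,          # any downvoted comment
        "at_or_below_-4": sum(s <= -4 for s in scores) / n,  # heavily downvoted
        "at_or_above_+4": sum(s >= 4 for s in scores) / n,   # well received
    }

# Illustrative scores, not the real 2190-comment dataset.
sample = [-5, -1, 0, 2, 4, 6, -4, 1, 3, 5]
summary = karma_summary(sample)
```

Applied to the real score list, the three fractions would reproduce the 13.4%, 3.7%, and (below) 19.2% figures discussed in this subthread.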

-1Will_Newsome
Thanks! Note that much of that is likely due to karmassassination, not legitimate downvoting.

Note that much of that is likely due to karmassassination, not legitimate downvoting.

Disagree. I approve of the downvoting of most of your comments that were downvoted to -2 or below, for reasons triggered by those particular comments. This makes it plausible that they were downvoted for similar reasons, rather than in a way insensitive to the qualities of the individual comments.

-1Will_Newsome
Right, but I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No? Zack's statistic of -4 or below is the most pertinent. It's at 3.7%. People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%. So there's clearly a very large asymmetry. What one makes of it depends on a lot of other background stuff.

I also know that karmassassination has occurred at various points, and any karmassassination is likely to take up a disproportionate chunk of the downvotes. No?

Not necessarily. Taboo "karmassassination": what were you actually observing? One scenario is that some comments you make draw attention, people look over your recent N posts and judge them individually, but it turns out that the judgment is mostly negative. Another is that people who want to discourage a certain type of comment downvote multiple already-downvoted posts without paying too much attention, expecting that the downvotes already present carry sufficient evidence in the context. Both cases result in surges of negative votes which remain sensitive to the qualities of individual comments.

People will naturally wish to compare this with the percentage of my comments that are +4 or more. Zack tells us that this percentage is 19.2%.

You're drifting from the topic; I'm not discussing a net perception of your participation, only explanations for the negatively judged contributions. Your writing them off as not particularly meaningful (an effect of "karmassassination" rather than of the comments' negative qualities) seems like a rationalization, given the observations above.

3Will_Newsome
Like, I'm not trying to avoid the knowledge that I often make contributions to LessWrong that aren't well-received. It happens, more for me than for others. I was just pointing out that I've also noticed strict karmassassination sometimes, not necessarily often in my 2190 most recent comments. It's just a thing to take into account. The karmassassination I have experience with is often not of the sort that you describe. But I'm perfectly willing to accept such explanations sometimes, and I've already noticed that they explain a few big chunks lost a few months back.
0Will_Newsome
I don't write all of them off as meaningless, of course! Didn't mean to imply that. Some comments just aren't positive contributions to LessWrong. It happens, and it happens to me more than others. I'm not denying that at all.
0Zack_M_Davis
Oh, that's a good point---I've added an addendum to the grandparent.
1Will_Newsome
I have a request, which you're not at all obligated to fulfill of course. But could you tell me what percentage of my 2190 most recent comments have received 4 or more upvotes?
8Zack_M_Davis
19.2% (And I am sorry if it was rude of me to have initiated this exchange at all, but surely it will be understood that this is the type of venue where if someone uses a word like most or few and one happens to have the actual data easily available, then one should be encouraged to share it.)
2Will_Newsome
Not at all! I very much appreciate the data. Thank you for sharing.
1Will_Newsome
The linked argument doesn't require blue-tentacle-like psi phenomena. See the three bullet points that apply when there's no superintelligent influence. The planetarium hypothesis is completely disjunctive with psi arguments, and explains the Fermi paradox even in the absence of psi. It's also not just my hypothesis—there's historical precedent, as has been linked to in the post. ETA: I hope that the second, Fermi-centric half of the linked post can be judged on its own terms and inspire debate about its arguments, regardless of the various theological or paranormal claims that might exist elsewhere on the blog. [My primary interpretation of the downvotes for this comment is basically: "I want to discourage people from talking about psi, parapsychology, or anything like that—we all know that magic doesn't exist, so we should try to explain phenomena that actually exist and that are therefore actually interesting. Admittedly you (Will_Newsome) didn't spontaneously bring up psi in your comment, and your comment is a more-or-less reasonable reply to its parent, but downvoting this comment is the easiest way to punish you for associating LessWrong with blatantly irrational speculation."]
3gwern
I'm a tad annoyed that it apparently breaks my space bar - arrow keys and pgup/pgdwn work, but space does nothing. Anyway, my basic reaction is that you give no interesting reasons for preferring a planetarium over a simulation besides philosophy of mind (most of which theories, I believe, would not predict any output difference in the absence of real qualia in a simulation) or efficiency (which, to the extent we can analyze it at all, weighs in strongly for simulation being more efficient). I also don't understand how such an entity would even build a planetarium in the first place. Wouldn't any physical shell badly interfere with predictions of planetary or cometary orbits? Or cause parallax? etc. What would the timing be, and are there really no natural records that would throw off a planetarium constructed just in time for humans to be fooled (akin to testing the fine structure constant by looking at natural nuclear reactors from millions/billions of years ago)?
2JoshuaZ
Can you expand on this? This isn't obvious to me.
3gwern
Existing matter seems highly redundant, and building a full-scale 1:1 replica, as it were, means that by definition you cannot opt for any amount of approximation or possible optimization. I would draw an analogy to NP problems: yes, the best way to solve the pathologically hardest instances of any NP problem is brute force, just as there are probably arrangements of matter which cannot be calculated more efficiently by computronium than by the actual arrangement of matter. But nevertheless, SAT solvers run remarkably fast on many real-world problems and far faster than anyone focused on the general asymptotic behavior would expect, and we have no reason to believe the world itself is a pathological instance of worlds.
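(The practical-vs-asymptotic point can be made concrete with a toy solver. On structured instances, here a chain of implications in the Horn fragment, unit propagation alone finds the unique satisfying assignment in a linear number of clause visits, while naive enumeration wades through all 2^n assignments. A hedged sketch, not a real SAT solver:)

```python
from itertools import product

# A clause is a list of (variable_index, polarity) literals.

def brute_force_sat(n_vars, clauses):
    """Naive enumeration: worst case checks all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        if all(any(bits[v] == pos for v, pos in c) for c in clauses):
            return list(bits)
    return None  # unsatisfiable

def unit_propagate_sat(n_vars, clauses):
    """Solve instances that fall to unit propagation alone (e.g. Horn chains)."""
    assign = {}
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assign.get(v) == pos for v, pos in clause if v in assign):
                continue  # clause already satisfied
            open_lits = [(v, pos) for v, pos in clause if v not in assign]
            if not open_lits:
                return None  # conflict: clause falsified
            if len(open_lits) == 1:  # unit clause forces an assignment
                v, pos = open_lits[0]
                assign[v] = pos
                changed = True
    # Remaining variables are unconstrained; default them to False.
    return [assign.get(v, False) for v in range(n_vars)]

# x0, plus x_i -> x_{i+1}: the only model sets every variable True.
n = 10
chain = [[(0, True)]] + [[(i, False), (i + 1, True)] for i in range(n - 1)]
```

On `chain`, both solvers agree, but the propagation-based one never branches, which is the sense in which "real-world" structure beats the worst-case bound.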
1Will_Newsome
One possible objection: what if humans are doing hypercomputation? E.g., being created by evolution (which is fundamentally "tied into" reality's computation) lets humans tap into the latent computation of the universe in a way that an algorithmic AI can't emulate, so it keeps humans around to use as hypercomputers. Various people have proposed similar hypotheses. I think this objection can be met, though.
8gwern
The usual anti-Penrose point comes to mind: if quantum microtubules are really that useful, we can probably just build them into chips, and better, and the problem goes away. Unless you mean the "tieing into" somehow requires a prefrontal cortex, at least 1 kidney, a working gallbladder, etc, in which case I think that's just sheer privileging of hypothesis with not a scrap of evidence for it.
-1Will_Newsome
Former, not the latter. And yes, the anti-Penrose point applies, but we can skirt it by postulating that the superintelligence is limited in its decision theory—it can recognize good results when it sees them, much as TDT can recognize that UDT beats it at counterfactual mugging, but it's architecturally constrained not to self-modify into the winning thing. So humans might run native hypercomputation or native super-awesome decision theory that an AI could exploit but that the AI would know it couldn't emulate given its knowledge of its own limited architecture.
2gwern
I guess you're distantly alluding to the old discussion of 'what would AIXI do if it ran into a hypercomputing oracle?' in modern guise. I'm afraid I know too little about TDT or UDT to appreciate the point. It just seems a little far-fetched: not only are we thinking about hypercomputation, which I believe is generally regarded as being orders of magnitude less likely than, say, P=NP, but we're also thinking about a superintelligent and superpowerful agent with a decision theory that just happens to be broken in the right way. If we were being mined for our computational potential, I can't help but feel human lives ought to be less repetitive than they are.
3Will_Newsome
Haven't seen any surveys, but I don't think so. I think hypercomputation is considered by some important people to be more likely than P=NP. I believe very few people have really considered it, so you shouldn't take anyone's off-the-cuff impressions as meaning very much unless you know they've thought a lot about the limitations of theoretical computer science. I don't really have any ax to grind on the matter, but I think hypercomputation is neglected. I think my points were supposed to be disjunctive, not conjunctive. A broken decision theory or a limited theory of computation can both result in humans outcompeting superintelligences on certain very specific decision problems or (pseudo-)computations. Wei Dai's "Metaphilosophical Mysteries" is relevant. Given some models, yes. Given other models, the AI might not be able to locate what parts of the system have the special sauce and what parts don't, so it's more likely to let humans be.
2gwern
Your link isn't a stupid person, but to some extent, the lack of interest in hypercomputation says what the field thinks of it. Compare it to quantum computation, where people were avidly researching it and coming up with algorithms decades before even toy quantum computers showed up in cutting-edge labs. Wei Dai's link is pretty controversial.
1Will_Newsome
Not sure, but it seems that whenever I get into discussions with you it's usually about some potentially-important edge case or something. Strange. But anyway, yeah. I just want to flag hypercomputation as a speculative thing that it might be worth taking an interest in, much like mirror matter. One or two of my default models are probably very similar to yours when it comes down to betting odds.
0Eugine_Nier
But only after it was discovered that the theory of quantum mechanics implied it was theoretically possible.
0Eugine_Nier
My understanding of the history is that everyone believed the extended Church-Turing thesis until someone noticed that the (already established) theory of quantum mechanics contradicted it.
2gwern
I don't think I've ever seen anyone invoke the extended Church-Turing thesis by either name or substance before quantum computing came around.
0Eugine_Nier
People were talking about P-time before quantum computing and implicitly assuming that it applied to any computer they could build.
0gwern
I don't see how one would apply "P-time" to "any computer they could build".
0Eugine_Nier
I meant "apply" in the sense that one applies a mathematical model to a phenomenon. Specifically, it was implicitly assumed that the mathematical notion of polynomial time captured what it was actually possible to compute efficiently on a physical computer.
0Eugine_Nier
Um, you do realize you're comparing apples and oranges there, since one is a statement about physics and the other a statement about mathematics.
0gwern
In this area, I do not think there is such a hard and fast distinction.
1Eugine_Nier
So, how would you phrase the existence of hypercomputation as a mathematical statement?
0gwern
Presumably something involving recursively enumerable functions...
0Eugine_Nier
As someone who understands computational theory, I strongly suspect you're seriously confused about how computational complexity theory works. As I don't have the time or interest to give a course in computational complexity, might I recommend asking the original question on mathoverflow if you are interested. Apologies if that came off as rude.
0JoshuaZ
I don't find this argument persuasive or even strong. n qubits can't simulate n+1 qubits in general. In fact, n qubits can't even in general simulate n+1 bits. This suggests that if our understanding of the laws of physics is close to correct for our universe and the larger universe (whether holographic planetarium or simulationist), simulation should be tough.
6gwern
That may be, but such a general point would be about arbitrary qubits or bits, when a simulation doesn't have to work over all or even most arrangements.
2JoshuaZ
Hmm, so thinking about this more, I think that Holevo's theorem can probably be interpreted in a way that much more substantially restricts what one would need to know about the other n bits in order to simulate them, especially since one is apparently simulating not just bits but qubits. But I don't really have a good understanding of this sort of thing at all. Maybe someone who knows more can comment? Another issue which backs up simulation being easier: if one cares primarily about life forms, one doesn't need a detailed simulation of the inside of planets and stars. The exact quantum state of every iron atom in the core of the planet, for example, shouldn't matter that much. So if one is mainly simulating the surface of a single planet in full detail, or even just the surfaces of a bunch of planets, that's a lot less computation. One other issue is that I'm not sure you can have simulations run that much faster than your own physical reality (again assuming that the simulated universe uses the same basic physics as the underlying universe). See for example this paper, which shows that most classical algorithms don't get major speedup from a quantum computer beyond a constant factor. That constant factor could be big, but this is a pretty strong result even before one is talking about general quantum algorithms. Of course, if the external world didn't quite work the same (say, different constants for things like the speed of light) this might not be much of an issue at all.
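(The cost of simulating qubits classically can be made concrete: a dense statevector for n qubits stores 2^n complex amplitudes, so each additional qubit doubles the memory bill. A back-of-the-envelope sketch; the 16 bytes per amplitude assumes a complex128 representation:)

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory for a dense n-qubit statevector: 2^n complex amplitudes."""
    return (2 ** n_qubits) * bytes_per_amplitude

# Each extra qubit doubles the cost ...
assert statevector_bytes(31) == 2 * statevector_bytes(30)
# ... so around 50 qubits already exceeds a petabyte of dense storage.
assert statevector_bytes(50) > 10 ** 15
```

This is the dense worst case; structured or approximable states can often be compressed, which is exactly the approximation question at issue in this subthread.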
0JoshuaZ
Hmm, that's a good point. So it would then come down to how much of an expectation of what the simulation is likely to do you would need in order to get away with using fewer qubits. I don't have a good intuition for that, but the fact that BQP is likely to be fairly small compared to all of PSPACE suggests to me that one can't really get that much out of it. But that's a weak argument. Your remark makes me update in favor of simulationism being more plausible.
-2Will_Newsome
Google's fault. Thanks for letting me know, though. Right—the argument is pretty modest. It's mostly just that the planetarium hypothesis is on par with other hypotheses like the simulation argument. Yeah, I left this to "a wizard did it"—if you accept simulation, then you can mix and match bigger and smaller planetariums around your brain or around the solar system to pose various physical problems. The planetarium hypothesis is sort of continuous with the simulation hypothesis if you like simulationistic assumptions. [ETA: And I didn't address any of those problems at any scale, because there's a problem for each scale. Factor your intuitions about the improbability of actually engineering a planetarium into your a posteriori estimate, to get a custom fit probability.]

I like the idea, certainly not as a preferred explanation of the Fermi paradox, but as an addition to the list of explanations. But as gwern points out, getting the "planetarium" to work isn't so easy. Comets and planets ought to feel its mass, in fact comets ought to collide with it on the way out. It has to produce radiation patterned so as to imitate interstellar parallax. And it has to physically emit very high energy particles such as we detect on earth in cosmic rays. It's one form of the hypothesis "there's an invisible wall right there, projecting the appearance of a world beyond." And the main issue facing such a hypothesis is, what about the things that go into or come out of the wall?

6Eugine_Nier
It doesn't have to be "perfect". Keep in mind the old joke about the experimental and theoretical physicist:
1Will_Newsome
That's my take as well. Personally, my pet hypothesis is the Thomistic God, but there are three or so solutions that I treat as live. I'm not committed to any of 'em.
2Mitchell_Porter
One version of the idea, that I do normally favor, is the "cranium hypothesis", which says that my brain is surrounded by a wall and that everything I experience is a sort of reconstruction of what's on the other side of that wall, rather than being the thing itself. But that doesn't explain the Fermi paradox.
1Will_Newsome
But you agree that a significantly bigger wall could explain the Fermi paradox in theory? Also I figured you might be partial to naive realism. I am, if only because I'd have considered it obviously completely retarded a year ago. IIRC the Thomists have a solution to some problem of intentionality where you directly perceive something's form itself. (Er, it's not a form, what's it called? Weird word, starts with an 'h'.) Seems like it fits well with monadology, but I guess not quantum monadology. ...You know, that monads don't change at all is really quite important. I know you know that, but still, "quantum monadology" is a pretty meh name.
0Mitchell_Porter
It's certainly a way to have a universe full of dark megastructures efficiently harvesting energy on behalf of ancient superintelligences, coexisting with a planet of yokels who just see a wilderness of stars squandering their radiative output. But I would rate 1. Great Filter 2. the "wilderness" is actually alive and busy but the yokels don't know how to see it that way 3. appearances are even more thoroughly illusory than in the planetarium scenario, all as more likely. That would make hallucination impossible. I think we have direct awareness of something, but not the outside world. The "something" is either part of us or it's alongside "us" in the brain.
1Will_Newsome
Luckily the point at which we start sending conscious beings out beyond the solar system is by one hypothesis the point at which we reach a technological singularity. How a planetarium would interact with an AI, only God knows. But for things like Voyager, it's of course no problem: the superintelligence eats the Voyager, and in its place sends back the signals the Voyager would have sent back if it hadn't gotten eaten.

Reading this piece is difficult.

The first sentence of the second paragraph starts off

But because theology has traditionally been mostly Christian

That's not true. It might be that you are only aware of Christian theology, but very similar issues have been extensively discussed in other religions. Islamic theology is a pretty strong example.

I'm going to skip commenting on most of the theological discussion (aside from noting that sentences being grammatically well-formed doesn't mean they have content) and per your request move directly to the part ...

4Will_Newsome
How so? I'm posting on the blog to practice my writing in preparation for writing a treatise, so any suggestions for improvement would be greatly appreciated. I also wrote it while on Adderall, which affects my style in various ways. Are you referring to the sentences I wrote, or to sentences like "I talked to a spirit the other day"? Hm? You disagree with the "mostly"? Maybe you're thinking of the majority—I was thinking of the mode. Do you agree that the mode of theology is Christian, given some informal, intuitive measure? Yes, and I'm not sure but I think similar arguments are made by religious folk. It's just a possibility of course, and it relies heavily on the notion that at least on some topics we don't have strong introspective access to our preferences. I'm of course aware of people who search for God or gods in good faith and don't find Him/them, and that is indeed a counterargument, but how strong a counterargument it is depends on other unmentioned variables. I leave it to the reader to fill in the values for those variables. Right, I was just sharing my impression to wrap up the post. It could easily be wrong.

Reading this piece is difficult.

How so?

For me, it was primarily because you had large stretches with low communication per word.

For example:

Though Logos is always involved somehow, today's post will be mostly pneumatological. Wik tells us that pneumatology is "the study of spiritual beings and phenomena, especially the interactions between humans and God." In Christian theology pneumatology is always about the Holy Spirit, but here at Computational Theology we're not quite that pigeonholed, so we'll discuss the interactions between humans and all spiritual beings, who may or may not be God. ('Cuz after all, how could you tell? We'll discuss that problem—the problem of discernment—in future posts. Expect some algorithmic information theory.) And if you accept Crowley's rule—to interpret every phenomenon as a particular dealing of God with your soul—then all phenomena are subject to pneumatology anyway.

Compare with

This post will be primarily about the interaction between humans and spirits, e.g. gods or invisibly-acting AIs.

4Will_Newsome
Also, I have to keep in mind that many people have complained that my writing is much too compressed, relying too much on hidden or external concepts or inferences. Hopefully I can strike a balance between inscrutable esotericity and belaboring the point.
1Will_Newsome
Thanks! Yeah, that's the Adderall talking. I'm planning to write a book (a treatise), where there's more room to expand and explain. But I suppose I should practice my skills on the appropriate medium. So I'll try to cut down on excursions like the above in the future. [ETA: Actually, I won't. There were good reasons to have the quoted part in there.]
0Will_Newsome
(Upon further reflection, replied here.)
0Manfred
Aw. How about at least treating my impression as evidence, rather than dismissing it.
0Will_Newsome
Of course I'm treating it as evidence. I'm not insane. For me especially, it's not even possible for me to dismiss someone's impression without treating it as evidence.
0Manfred
Great :D
0Will_Newsome
Mostly. It also causes a lot of stress, due to, e.g., a total inability to disregard negative social judgments. This has been true my whole life, and it's caused me to become a very strange person. That said, I find it entirely worth it, because I think it makes me a better rationalist and a better person, at least in the limit.
0JoshuaZ
Manfred summarized the issues with readability pretty well, but the issue is slightly more complicated. There were also sections, in the theology bit especially, where it felt like there were a lot of unstated premises. In that case, I'm not sure, and I suspect that any intuition is going to be drastically impacted by availability bias. For example, I know intellectually that there's a lot of Hindu theology out there, but my rough intuition for how much is out there for different groups is wildly in favor of the Abrahamic religions, and then a little bit to Buddhism, and that only because I took an intro Buddhism class in college. I suspect that any sort of judgment about such a mode is more a statement about what religions one has been exposed to than anything else. Overall, I think this would have been much better received if it had not made any mention of theology at all and had simply presented the second half as a discussion of variants of the Zoo/Planetarium hypotheses.
0Will_Newsome
I didn't flag them? Usually I'll flag assumptions, and then you can choose to take them on or not. If I'm not flagging them then they shouldn't be used further down in the post. Were they? Sorry if I'm unjustifiably crowdsourcing.
0JoshuaZ
Well: This seems likely, but I know that some denominations have discussed the nature of angels and their interaction with humans. If you said "God" or the "Christian God" here that might be OK, but you seem to be trying to smuggle in a notion about deities that simply isn't true for the lowercase gods. There were other points, I think, that showed up the first time I read it, but I'm not reading it as carefully now (reading this is a bit exhausting).
2Will_Newsome
Okay, given those two examples I think your objections are nitpicks. I think you're probably unsatisfied with the piece for other, unmentioned reasons that you might not have introspective access to. Same with the people who upvoted Manfred's comment, which singles out the only paragraph in the piece that could really be interpreted as containing much too much fluff, and even then I explicitly recommended that people who weren't interested in the meta stuff about the blog skip ahead to the discussion of the solution only. Overall, given the criticisms of the piece, I think I should be satisfied that I didn't leave out anything important, and that people who are unsatisfied with it are mostly not the people I want in my audience anyway. I'm left thinking that my primary aim should be to experiment with writing style more.
0Will_Newsome
It also applies to the AI risk debate. I've made the argument in that context before here on LW. I believe User:Dmytry started to champion it at some point.
0JoshuaZ
Yes, I've seen it in that sort of context. It seems much less plausible that an AI would try to reach a Schelling point of that sort. It requires it to have a very human notion of intervention. While it is plausible that other evolved entities would have such a notion, figuring out how to get an AI to understand that notion seems like it could be extremely difficult.

I opened the link to your blog and had an initial negative aesthetic/readability reaction, which is a typical problem I've encountered when jumping away from Less Wrong. LW is highly optimized for clean readability. How does your cathedral background image help quickly communicate the ideas of your post? Also, the italicized text in particular is hard to read. The visual jump from LW to your blog's layout is jarring, and this immediately sets up an internal negative 'ugh' reaction. I'm attentive to these aesthetic details because I've encountered the ...

0Will_Newsome
On the main page it seems unmotivated—vague connections to God and architecture—but if you click "About" you see more of the photo. The photo was selected because it implies God, but its emphasis is architecture, and the highly organized structure of the architecture is supposed to evoke formalism and technology, thus linking God to computationalism. I couldn't think of anything better, and honestly I quite like the photo. Any suggestions? And yes, if you're willing to accept the simulation solution, then the planetarium hypothesis just isn't as good an explanation. The planetarium hypothesis is mostly for people who are skeptical of simulationism, or people who, like myself, want to have a backup hypothesis in case simulationism doesn't work. I generally prefer something like simulationism, but the planetarium hypothesis is my second favored hypothesis. ETA: I've changed the typeface to Times New Roman, which should make the italics easily readable. Thanks for the feedback; I wasn't sure whether the olde font was appropriate or not.
4jacob_cannell
To me the full background picture is visually distracting. I find it aesthetically jarring/unpleasing, and probably subconsciously associate that visual style with hastily constructed blogs, or at least blogs outside of my typical reading preference. I prefer the background image to be constrained to just the top of the blog, in the typical fashion of blogs like LW. If you really like the photo, have a link to it or embed it in the article somewhere. Your wording suggests you view these hypotheses as tools required to achieve some predetermined objective, rather than just as beliefs subject to observational revision.
0Will_Newsome
Thanks for the feedback. I'll go for a solid background. (ETA: Changed to timeless black, to be a little easier on the eyes than some websites. Unfortunately I can't change the page's color to a light grey directly; I'll have to use some CSS. I'll consider further optimization later.) Both and neither. I have many different epistemic practices, and I also try to switch up my epistemological approaches often. Coherentism, pragmatism, correspondence, whatever—ultimately I think the foundations of epistemology are to be found in decision theory, and any other epistemological approaches are just phenomenal shards of the fundamental nature of rationality. Hypotheses can be tools, hypotheses can be correspondences—whatever leads to intellectual fruit. "May we not forget interpretations consistent with the evidence, even at the cost of overweighting them." Similarly, may we not forget epistemologies consistent with potentially optimal decisions, even at the cost of overweighting them. We must be meta, we must be large.
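For what it's worth, the light-grey tweak is a one-line rule in Blogger's custom-CSS box. A minimal sketch, assuming a plain `body` selector works (Blogger's templates may wrap content in their own elements, in which case the selector would need adjusting):

```css
/* Sketch: set the page background to a light grey.
   The body selector is an assumption; Blogger themes
   sometimes require targeting their own wrapper divs. */
body {
  background-color: #eeeeee;
}
```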

This post has thus far gotten an upvote and two [eta:3] downvotes. Downvoters: what do you dislike about this post? Please let me know so I can accommodate your discussion-section-content preferences in the future. Thanks for any feedback!

6FAWS
You mostly talk about your new blog instead of the idea the post claims to be about, and the post largely sounds like an advertisement. Two paragraphs summarizing your idea and one sentence talking about the blog (preferably worded as a disclaimer instead of an advertisement) would have been better.
3Oscar_Cunningham
No rationality info!
0Will_Newsome
Thanks.

By the way, your "Leibniz' monads" link is broken.

2Will_Newsome
fixed thx

Aesthetics issue: that slide-up animation at the beginning is bad; it makes me feel a bit queasy.

0Will_Newsome
I don't like it either. I think I'll have to edit the CSS to change it. Hopefully that'll work. Google's aesthetics are pretty decent overall, but sometimes they really goof. The Blogger platform has recently been revamped, so hopefully they'll make a few tweaks soon. Overall I like the new platform, and Blogger makes it pretty easy to have guest authors post whenever they want to. (ETA: There are alternative layouts of course, but they're not as beautiful. I'd prefer to keep the current layout but just somehow get rid of those annoying transitions.)
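In case it helps, one blunt way to try killing the transitions from the custom-CSS box is a global override. This is a sketch under the assumption that the effect is CSS-driven (if Blogger animates it in JavaScript instead, this won't catch it), and it suppresses every transition on the page, not just the slide-up:

```css
/* Heavy-handed sketch: disable all CSS transitions and animations
   site-wide, including the slide-up effect on page load.
   !important is needed to beat the theme's own rules. */
* {
  transition: none !important;
  animation: none !important;
}
```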