How would any epistemic insights change the terminal values you'd want to maximize? They'd change your actions pursuant to maximizing those terminal values, certainly, but your own utility function? Wouldn't that be orthogonality-threatening? edit: I remembered this may be a problem with e.g. AIXI[tl], but with an actual running AGI? Possibly.
If you cock up and define a terminal value that refers to a mutable epistemic state, all bets are off. Like Asimov's robots on Solaria, who act in accordance with the First Law, but have 'human' redefined not to include non-Solarians. Oops. Trouble is that in order to evaluate how you're doing, there has to be some coupling between values and knowledge, so you must prove the correctness of that coupling. But what is correct? Usually not too hard to define for the toy models we're used to working with, damned hard as a general problem.
I have a comment waiting in moderation on the isteve post Konkvistador mentioned, the gist of which is that the American ban on the use of genetic data by health insurers will cause increasing adverse selection as these services get better and cheaper, and that regulatory restrictions on consumer access to that data should be seen in that light. [Edit: it was actually on the follow-up.]
A pertinent question is what problem a government or business (not including a general AI startup) may wish to solve with a general AI that is not more easily solved by developing a narrow AI. 'Easy' here factors in the risk of failure, which will at least be perceived as very high for a general AI project. Governments and businesses may fund basic research into general AI as part of a strategy to exploit high-risk high-reward opportunities, but are unlikely to do it in-house.
One could also try and figure out some prerequisites for a general AI, and see what would lead to them coming into play. So for instance, I'm pretty sure that a general AI is going to have long-term memory. What AIs are going to get long-term memory? A general AI is going to be able to generalize its knowledge across domains, and that's probably only going to work properly if it can infer causation. What AIs are going to need to do that?
Consider those charities that expect their mission to take years rather than months. These charities will rationally want to spread their spending out over time. Particularly for charities with large endowments, they will attempt to use the interest on their money rather than depleting the principal, although if they expect to receive more donations over time they can be more liberal.
This means that a single donation slightly increases the rate at which such a charity does good, rather than enabling it to do things it could not otherwise do. So the scaling factor of the endowment is restored: donating $1000 to a charity with a $10m endowment increases the rate at which it can sustainably spend by 1000/10^7 = 0.01%.
This does not mean that a charity will say: look, if our sustainable spending rate were a fraction higher we'd have enough available this year to fund the 'save a million kids from starvation' project, oh well. They'll save the million kids and spend a bit less next year, all other things being equal. In other words, by maximising the good it does with the money it has, the charity smooths out the change in its utility for small differences in spending relative to the size of its endowment, i.e. the higher-order derivatives are low. So long as the utility you get from a charity comes from it fulfilling its stated mission, your utility will also vary smoothly with small spending differences.
Likewise, with rational collaborating charities, they will each adjust their spending to increase any mutually beneficial effects. So mixed derivatives are low, too.
The upshot is that unless your donation is large enough to permanently and significantly raise the spending power of such a charity, you won't leave the approximately linear neighbourhood in utility-space. So if you're looking for counterexamples, you'll need to find one of:
- charities with both low endowments and low donation rates, which nevertheless can produce massive positive effects with a smallish amount of money
- charities which must fulfil their mission in a short time and are just short of having the money to do so.
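The marginal-rate arithmetic above can be sketched in a few lines of Python (a toy illustration of the linear approximation, not a model of any real charity):

```python
# Toy check of the linearity argument: a donation that is small relative
# to the endowment raises the sustainable spending rate by roughly
# donation/endowment.
def marginal_rate_increase(donation, endowment):
    """Fractional increase in sustainable spending from a one-off donation."""
    return donation / endowment

# $1000 against a $10m endowment:
print(marginal_rate_increase(1000, 10**7))  # 0.0001, i.e. 0.01%
```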
I don't think you should write the post. Reason: negative externalities.
Then let me make it as easy as possible:
--- a/r2/r2/models/subreddit.py
+++ b/r2/r2/models/subreddit.py
@@ -173,6 +173,8 @@ class Subreddit(Thing, Printable):
             return True
         elif self.is_banned(user):
             return False
+        elif self == Subreddit._by_name('discussion') and user.safe_karma < 5:
+            return False
         elif self.type == 'public':
             return True
         elif self.is_moderator(user) or self.is_contributor(user):
Note that this is a bit of a hack; the right thing to do is to replace the karma_to_post global variable with a member of Subreddit, and make some UI for adjusting it. One side-effect of doing it this way is that the message users get when they don't have enough karma (from r2/r2/templates/newlink.html:81) always says they need 20, regardless of which section they tried to post to.
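For concreteness, here is a minimal sketch of that "right thing", with the threshold stored per Subreddit rather than in a global. The names min_karma_to_post and can_submit are illustrative, not the actual Reddit codebase's API:

```python
# Hypothetical sketch: store the posting-karma threshold on each
# Subreddit instead of using the karma_to_post global, so the UI can
# quote the real limit for whichever section the user tried to post to.
class Subreddit:
    def __init__(self, name, min_karma_to_post=0):
        self.name = name
        self.min_karma_to_post = min_karma_to_post

    def can_submit(self, user):
        """Per-subreddit check against this section's own threshold."""
        return user.safe_karma >= self.min_karma_to_post
```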
It looks like wezm has followed your suggestion, with extra hackishness - he added a new global variable.
Task: Write a patch for the Less Wrong codebase that hides deleted/banned posts from search engines.
Deadline: Sunday, 30 January.
Just filed a pull request. Easy patch, but it took a while to get LW working on my computer, to get used to the Pylons framework and to work out that articles are objects of class Link. That would be because LW is a modified Reddit.
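As a rough illustration of the approach (not the actual pull request; the helper name and hook point are hypothetical), one way to hide such pages from crawlers is to send a noindex directive via the standard X-Robots-Tag HTTP header for deleted or banned articles:

```python
# Hypothetical helper: ask search engines not to index deleted/banned
# articles by setting the standard X-Robots-Tag response header.
# The attribute names _deleted and _spam mirror Reddit's Thing flags,
# but this is a sketch, not the code actually submitted.
def add_robots_header(response, article):
    """Mark deleted or banned articles as noindex for crawlers."""
    if getattr(article, '_deleted', False) or getattr(article, '_spam', False):
        response.headers['X-Robots-Tag'] = 'noindex, nofollow'
```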
There's an additional problem: the banned posts are still showing up in Google. This means that Less Wrong's failure to deal with spam is harming people outside LW who search Google for certain classes of products. Any complete solution should also include actually removing these pages from LW.
I'm also worried about what this says about our actual level of instrumental rationality: we've had this problem from a single set of spammers for a fair bit of time, we all agree there are problems, we've had multiple threads about it, and we still have done absolutely nothing.
My model of the Peverells has them substantially earlier than Hogwarts (because the Elder Wand seems like a more powerful artifact than the Sword of Gryffindor).
Aha! The prophecy we just heard in chapter 96 is Old English. However, by the 1200s, when, according to canon, the Peverell brothers were born, we're well into Middle English (which Harry might well understand on first hearing). I was beginning to wonder if there was not some old wizard or witch listening, for whom that prophecy was intended.
There's still the problem of why brothers with an Anglo-Norman surname would have Old English as a mother tongue... well, that could happen rather easily with a Norman father and English mother, I suppose.
And the coincidence of Canon!Ignotus Peverell being born in 1214, the estimated year of Roger Bacon's birth, seemed significant too... I shall have to go back over the chapters referring to his diary.