All of Peter_de_Blanc's Comments + Replies

I'm really excited about software similar to Anki, but with task-specialized user interfaces (vs. self-graded tasks) and better task-selection models (incorporating something like item response theory), ideally to be used for both training and credentialing.
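
(A minimal sketch of what the item-response-theory piece could look like, using the one-parameter Rasch model; the function names and the maximum-information selection rule below are illustrative assumptions, not a description of any existing system.)

```python
import math

def p_correct(ability, difficulty):
    # Rasch (1PL) item response model: probability of a correct response.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def item_information(ability, difficulty):
    # Fisher information of a 1PL item; largest when difficulty matches ability.
    p = p_correct(ability, difficulty)
    return p * (1.0 - p)

def pick_next_task(ability, difficulties):
    # Adaptive task selection: present the task that is most informative
    # about the learner's current ability estimate.
    return max(difficulties, key=lambda d: item_information(ability, d))

# A learner estimated at ability 0.3 gets the closest-matched task.
print(pick_next_task(0.3, [-2.0, -0.5, 0.4, 1.5]))  # -> 0.4
```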

2cicatriz
I've explored using spaced repetition in various web-based learning interfaces, which are described at http://cicatriz.github.com. I'd love to talk more with anyone who's interested. Based on my experiences, I have reservations about when and how exactly spaced repetition should be used and don't believe there's a general solution using current techniques to quickly go from content to SRS cards. But with a number of dedicated individuals working on different domains, there's certainly potential for better learning. I've been working on writing up a series of articles about this. Again, contact me if you want to be notified when that is released.
1sinak
I've just posted a couple of ideas that involve Anki-like systems; it'd be great to get your feedback. In general I think that anything that promotes wider use of spaced repetition and optimized learning techniques has massive societal benefits.
6Will_Newsome
If you're looking for an institution around which to organize people and projects, I recently created Kilometa Labs for that kinda thing. Will start promoting it in a week or so after we think more about strategy &c. We have a few cool projects in the pipeline as well, so you'd be associating with cool people and cool stuff. I think you should talk to muflax and Zak Vance about the Anki-like ideas. Will email you with more info, I've already mentioned your ideas to them.

No hypothesis is a prefix of another hypothesis.

Shannon is a network hub. I spent some time at her previous house and made a lot of connections, including my current partners.

What happens when an antineutron interacts with a proton?

3RolfAndreassen
Very complicated things. Both the antineutron and the proton are soups of gluons and virtual quarks of all kinds surrounding the three valence quarks Dreaded_Anomaly mentions; all of which interact by the strong force. The result is exceedingly intractable. Almost anything that doesn't actually violate a conservation law can come out of this collision. The most common case, nonetheless, is pions - lots of pions. This is also the most common outcome from neutron-proton and neutron-antiproton collisions; the underlying quark interactions aren't all that different.
5Dreaded_Anomaly
There are various possibilities depending on the energy of the particles. An antineutron has valence quarks ū, d̄, d̄. A proton has valence quarks u, u, d. There are two quark-antiquark pairs here: u + ū and d + d̄. In the simplest case, these annihilate electromagnetically: each pair produces two photons. The leftover u + d̄ becomes a positively-charged pion (π⁺). The π⁺ will most often decay to an antimuon + muon neutrino, and the antimuon will most often decay to a positron + electron neutrino + muon antineutrino. (It should be noted that muons have a relatively long lifetime, so the antimuon is likely to travel a long distance before decaying, depending on its energy. The π⁺ decays much more quickly.) There are many other paths the interaction can take, though. The quark-antiquark pairs can interact through the strong force, producing more hadrons. They can also interact through the weak force, producing other hadrons or leptons. And, of course, there are different alternative decay paths for the annihilation products that will occur in some fraction of events. As the energy of the initial particles increases, more final states become available. Energy can be converted to mass, so more energy means heavier products are allowed. Edit: thanks to wedrifid for the reminder of LaTeX image embedding.
0wedrifid
Good question. I'm going to tender the guess that you get a kaboom (energy release equivalent to the mass of two protons) and a leftover positron and neutrino spat out kind of fast.

I now realise you might be asking "how does this demonstrate hyperbolic, as opposed to exponential, discounting", which might be a valid point, but hyperbolic discounting does lead to discounting the future too heavily, so the player's choices do sort of make sense.

That is what I was wondering. Actually, exponential discounting values the (sufficiently distant) future less than hyperbolic discounting. Whether this is too heavy depends on your parameter (unless you think that any discounting is bad).

Another player with Hyperbolic Discounting went further: he treated any city near him while carrying 5 red city cards in his hand, pointing out, in response to entreaties to cure red, that red wasn't much of an issue right now.

How does this demonstrate hyperbolic discounting?

4bentarm
I don't know if you've played the game. There are 4 diseases: red, blue, yellow and black. "Curing red" doesn't automatically eliminate the disease - it just makes it easier to deal with, and possible to eliminate in the future (and also is part of the win condition). Treating people who have a disease right now helps them right now. Curing red has only future benefits. I now realise you might be asking "how does this demonstrate hyperbolic, as opposed to exponential, discounting", which might be a valid point, but hyperbolic discounting does lead to discounting the future too heavily, so the player's choices do sort of make sense.
0Rhwawn
Discounting with distance, I assume. Nearby people are extremely important, while it takes 100 African cities to get his attention, etc.

What's special about a mosquito is that it drinks blood.

Phil originally said this:

My point was that vampires were by definition not real - or at least, not understandable - because any time we found something real and understandable that met the definition of a vampire, we would change the definition to exclude it.

Note Phil's use of the word "because" here. Phil is claiming that if vampires weren't unreal-by-definition, then the audience would not have changed their definition whenever provided with a real example of a vampire as defined. It ...

0MinibearRex
Ah. That makes more sense.

I understand that Phil was not suggesting that all non-real things are vampires. That's why my example was a mosquito that isn't real, rather than, say, a Toyota that isn't real.

-1MinibearRex
But there's nothing particularly special about a mosquito. It's still an incorrect application of modus tollens. We have: If something is a vampire, then it is not real. From this, we can infer (from modus tollens) that if something is real, then it is not a vampire. Thus, if a certain mosquito is real, it is not a vampire. However, there is nothing here that justifies the belief that if a certain mosquito is imaginary, then it is a vampire.
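
(In symbols, with V = "is a vampire" and R = "is real", the valid and invalid steps are:)

\[ (V \rightarrow \lnot R) \;\equiv\; (R \rightarrow \lnot V), \qquad \text{but} \qquad (V \rightarrow \lnot R) \;\not\vdash\; (\lnot R \rightarrow V). \]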

My point was that vampires were by definition not real

So according to you, a mosquito that isn't real is a vampire?

1MinibearRex
His point is that: P(not real | vampire) ~= 1, which is not the same as: "vampire = not real". It's an if-then relationship, not a logical equivalency.

My fencing coach emphasizes modeling your opponent more accurately and setting up situations where you control when stuff happens. Both of these skills can substitute somewhat for having faster reflexes.

3gwern
Yeah. But thinking about it some more, TKD was probably not the best example - I actually have thought, quite a few times, during fencing that 'man I wish I had faster reflexes, he's ridiculous'. (Weapons are a lot faster than legs.)

This argument does not show that.

0byrnema
Which argument? I meant the argument loosely defined as the one where you count which fraction of innocent people are jailed to determine if the probability of guilt at 87% is appropriate. steven0461 correctly pointed out that the target space for the fraction isn't all people in jail, but then you modify the target space to all people judged guilty with probability 87% and the argument 'works'.

I still don't see why you would want to transform probabilities using a sigmoidal function. It seems unnatural to apply a sigmoidal function to something in the domain [0, 1] rather than the domain R. You would be reducing the range of possible values. The first sigmoidal function I think of is the logistic function. If you used that, then 0 would be transformed into 1/2.

I have no idea how something like this could be a standard "game design" thing to do, so I think we must not be understanding Chimera correctly.

0Jonathan_Graehl
Yes, no fixed sigmoidal would have the effect I assumed was his intent. You could set a very steep sigmoidal filter that just catches the modal region of the pdf, but that's clunky, and you have to have exactly the right filter for the particular distribution. A simpler way to achieve "sharpening" a non-uniform probability distribution (making the modal region even more likely to pay off) is to raise it to some power >1, then renormalize.
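
(A minimal sketch of the power-sharpening trick described above; the exponent and the example distribution are arbitrary choices.)

```python
def sharpen(probs, power=2.0):
    # Raise each probability to a power > 1, then renormalize.
    # This concentrates mass on the modal outcomes.
    raised = [p ** power for p in probs]
    total = sum(raised)
    return [r / total for r in raised]

# A mildly peaked distribution becomes noticeably more peaked.
print(sharpen([0.5, 0.3, 0.2]))  # -> [0.658, 0.237, 0.105] (approx.)
```

Power 1 leaves the distribution unchanged, and powers below 1 flatten it, so the same knob can soften outcomes as well as sharpen them.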

The standard "game design" thing to do would be push the probabilities through a sigmoid function (to reward correct changes much more often than not, as well as punish incorrect choices more often than not).

I don't understand. You're applying a sigmoid function to probabilities... what are you doing with the resulting numbers?

2Jonathan_Graehl
Hopefully normalizing them so they sum to 1 again and using them to draw an outcome :) I assume the intent was to say that in normal games, if the goal is to choose the "smartest" action (highest EV, or whatever the objective fn is) under uncertainty, and the player makes the optimal choice, they should always, on average, be noticeably better rewarded (not just slightly more rewarded). It's fine (maybe more addictive?) for right decisions to sometimes not result in a win, so long as there are enough chances for a masterful player to recover from bad luck.

The setting in my paper allows you to have any finite amount of background knowledge.

There are robots that look like humans, but if you want an upload to experience it as a human body, you would want it to be structured like a human body on the inside too, e.g. by having the same set of muscles.

3DanielLC
If you want them to experience it as a human body, just add software that makes it feel like they're moving muscles that aren't really there. Also, if you can find the right part of the brain to mess with, make them incapable of noticing something's wrong with their body. Besides, pneumatic motors are tiny. It's not that hard to add more. Granted, they'd have to carry around a compressed air tank or air compressor if they want to walk around a lot, but that's not going to be a problem if you just want to hug your grand-daughter.
1steven0461
In the sense that evolution came up with my mind, or in some more direct sense?

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

0steven0461
What would have come up with them instead?

Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences?

What makes you think a mind came up with them?

0steven0461
I don't understand what point you're making; could you expand?

There was a specific set of algorithms that got me thinking about this topic, but now that I'm thinking about the topic I'd like to look at more stuff. I would proceed by identifying spaces of policies within a domain, and then looking for learning algorithms that deal with those sorts of spaces. For sequential decision-making problems in simple settings, dynamic Bayesian networks can be used both as models of an agent's environment and as action policies.
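
(The last sentence is abstract, so here is a deliberately tiny stand-in for "a network as an action policy": a single conditional distribution P(action | state), sampled each step. A real dynamic Bayesian network would wrap this node in two-slice temporal structure; the states, actions and numbers below are invented for illustration.)

```python
import random

# A one-node "policy network": P(action | state) as a lookup table.
policy = {
    "low_battery": {"recharge": 0.9, "explore": 0.1},
    "charged":     {"recharge": 0.1, "explore": 0.9},
}

def act(state):
    # Sample an action from the conditional distribution for this state.
    actions, probs = zip(*policy[state].items())
    return random.choices(actions, weights=probs)[0]

print(act("charged"))  # usually -> "explore"
```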

I'd be interested in talking. You can e-mail me at peter@spaceandgames.com.

It is not available. The thinking on this matter was that sharing a bibliography of (what we considered) AGI publications relevant to the question of AGI timelines could direct researcher attention towards areas more likely to result in AGI soon, which would be bad.

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

This tracks how good a god you are, and seems to make the paradox disappear.

How? Are you assuming that P(N) goes to zero?

1Stuart_Armstrong
Yes. This avoids assuming there is a non-zero probability that someone has infinite power; even the dark lords of the matrix couldn't grant me unlimited utility. I think the single "I am a god" forced one into an over-strict dichotomy. (there is a certain similarity to the question as to whether we should give non-zero probability to there existing non-computable oracles)

LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you're claiming that no bounded utility function reflects your preferences accurately.

That doesn't sound like an expected utility maximizer.

It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.

0orthonormal
LCPW isn't even necessary: do you really think that it wouldn't make a difference that you'd care about?

what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

It's not worth what?

2orthonormal
A small risk of losing the utility it was previously counting on. Of course you can do intuition pumps either way - I don't feel like I'd want the AI to sacrifice everything in the universe we know for a 0.01% chance of making it in a bigger universe - but some level of risk has to be worth a vast increase in potential fun.
1drethelin
resources, whether physical or computational. Presumably the AI is programmed to utilize resources in a parsimonious manner, with terms governing various applications of the resources, including powering the AI, and deciding on what to do. If the AI is programmed to limit what it does at some large but arbitrary point, because we don't want it taking over the universe or whatever, then this point might end up actually being before we want it to stop doing whatever it's doing.

Depth perception can be gained through vision therapy, even if you've never had it before. This is something I'm looking into doing, since I also grew up without depth perception.

2[anonymous]
I should have been more precise. I was born without a fully formed right eye - it has no lens and does not transmit a signal to my brain. Therefore, no "therapy" can improve my Vision Onefold. People in my situation (monocular blindness from birth) are extremely rare, so your assumption is understandable. I can get around in 3D space just fine, and I'm extremely good at first-person shooters, so I know I'm not missing much. (Coincidentally, I have no interest in physical sports.) The wiggle images do "work" for me. (On the other hand, "Possession of a single Eye is said to make the bearer equivalent to royalty.")

Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong.

You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not "both right" or "both wrong", but "both -3 decibels."

2gjm
No; in what I wrote "resolving a disagreement" means "agreeing to hold the same position, or something very close to it". Deciding "cheaply" that you'll both set p=1/2 (note: I assume that's what you mean by -3dB here, because the other interpretations I can think of don't amount to "agreeing to disagree") is no more rational than (even the least rational version of) "agreeing to disagree". If the evidence is very evenly balanced then of course you might end up doing that not-so-cheaply, but in such cases what more often happens is that you look at lots of evidence and see -- or think you see -- a gradual accumulation favouring one side. Of course you could base your position purely on the number of people on each side of the issue, and then you might be able to reach p=1/2 (or something near it) cheaply and not entirely unprincipledly. Unfortunately, that procedure also tells you that Pr(Christianity) is somewhere around 1/4, a conclusion that I think most people here agree with me in regarding as silly. You can try to fix that by weighting people's opinions according to how well they're informed, how clever they are, how rational they are, etc. -- but then you once again have a lengthy, difficult and subjective task that you might reasonably worry will end up giving you a confident wrong answer. I should perhaps clarify that what I mean by "wouldn't be much more likely to leave us both right than to leave us both wrong" is: for each of the two people involved, who (at the outset) have quite different opinions, Pr(reach agreement on wrong answer | reach agreement) is quite high. And, once again for the avoidance of doubt, I am not taking "reach agreement" to mean "reach agreement that one definite position or another is almost certainly right". I just think that empirically, in practice, when people reach agreement with one another they more often do that than agree that Pr(each) ~= 1/2: I disagree with you about "the most common result" unless "cheaply" is ta...

Obviously I didn't mean that being broke (or anything) is infinite disutility.

Then what asymptote were you referring to?

4ata
It was in response to the "indefinitely" in the parent comment, but I think I was just thinking of the function and not about how to apply it to humans. So actually your original response was pretty much exactly correct. It was a silly thing to say. I wonder if it's correct, then, that the marginal disutility (according to whatever preferences are revealed by how people actually act) of the loss of another dollar actually does eventually start decreasing when a person is in enough debt. That seems humanly plausible.

I thought human utility over money was roughly logarithmic, in which case loss of utility per cent lost would grow until (theoretically) hitting an asymptote.

So you're saying that being broke is infinite disutility. How seriously have you thought about the realism of this model?

1ata
Obviously I didn't mean that being broke (or anything) is infinite disutility. Am I mistaken that the utility of money is otherwise modeled as logarithmic generally?
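
(For concreteness, here is the asymptote the exchange is about, assuming the simple model U(w) = log w with w wealth in dollars:)

\[ U'(w) = \frac{1}{w} \to \infty \quad\text{and}\quad U(w) = \log w \to -\infty \quad\text{as } w \to 0^{+}, \]

so the utility lost per additional cent grows without bound as wealth approaches zero, but the model is silent about debt (w ≤ 0), which is where the question about eventually decreasing marginal disutility lives.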

I praise you for your right action.

First, I imagine a billion bits. That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year - for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
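
(A rough check on that comparison, assuming a stream of about 1 Mbit/s, a plausible rate for "high quality" video at the time: 10^9 bits ÷ 10^6 bits/second = 1000 seconds, or roughly 17 minutes.)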

5ArisKatsaris
Perhaps I don't understand your usage of the word 'imagine', because this example doesn't really help me 'imagine' them at all. I can imagine their result (the high quality video), sure, but not the bits themselves.

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

1TimFreeman
I have a really poor intuition for time, so I'm the wrong person to ask. I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangement of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years. In order to have desires about something, you have to have a compelling internal representation of that something so you can have a desire about it. X didn't say "I can too imagine a billion years!", so none of this pertains to my point.

Instead of a strict straight/bi/gay split, I prefer to think of it as a spectrum where 0 is completely straight, 5 is completely bisexual and 10 is completely gay.

Hah! You're trying to squish two axes into one axis. Why not just have an "attraction to males" axis and an "attraction to females" axis? After all, it is possible for both to be zero or negative.

4Strange7
I would say there are more than two axes which could be meaningfully considered, here. Male and female body types, personalities, and genitals can exist in a variety of combinations, and any given combination can (in principle) be considered sexy or repulsive separate from the others. For example, there are those who prefer [feminine/curvy/penis] having sex with [masculine/buff/vagina] over all other thus-far-imagined pairings.
4Cyan
Dimension reduction is not automatically an illegitimate move. That said, I grant that in this case it's worthwhile to keep at least two axes.
3TheOtherDave
In a similar spirit, many discussions of sexuality separate "attraction" from "identity" from "experience" onto different axes to get at the differences between a man who is occasionally attracted to men but identifies as straight, vs. a man who is equally often attracted to men but identifies as bi, or various other possible combinations.
2Kaj_Sotala
An excellent point.

OK, I guess my biggest complaint is this:

"If this approximation is close enough to the true value, the rest of the argument goes through: given that the sum Δx+Δy+Δz is fixed, it's best to put everything into the charity with the largest partial derivative at (X,Y,Z)."

What does "close enough" mean? I don't see this established anywhere in your post.

I guess one sufficient condition would be that a single charity has the largest partial derivative everywhere in the space of reachable outcomes.

8Anatoly_Vorobey
I'm unsure of what more I could have done, to be honest. The math involved is just Taylor's formula, and I pointed at its exact form in Wikipedia. Would it be better if I wrote out the exact result of substituting n=1 into the equation? I figured anyone who knows what a partial derivative is can do that on their own, and I wouldn't be helping much to those who don't know that, so it'd just be a token effort.
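
(For reference, here is the n = 1 substitution written out; U denotes the outcome measure being maximized, which is my notation rather than the original post's:)

\[ U(X+\Delta x,\, Y+\Delta y,\, Z+\Delta z) \;\approx\; U(X,Y,Z) + \frac{\partial U}{\partial x}\,\Delta x + \frac{\partial U}{\partial y}\,\Delta y + \frac{\partial U}{\partial z}\,\Delta z. \]

With \(\Delta x + \Delta y + \Delta z\) fixed, the right-hand side is maximized by putting the whole sum on the largest partial derivative; "close enough" then means the dropped higher-order terms stay small over the region a realistic donation can actually reach.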

Weirdtopia: sex is private. Your own memories of sex are only accessible while having sex. People having sex in public will be noticed but forgotten. Your knowledge of who your sex partners are is only accessible when it is needed to arrange sex. You will generally have warm feelings towards your sex partners, but you will not know the reason for these feelings most of the time, nor will you be curious. When you have sex, you will take great joy in realizing/remembering that this person you love is your sex partner.

Your knowledge of who your sex partners are is only accessible when it is needed to arrange sex.

As a result of the necessity of some degree of masturbation for efficient planning, nearly everyone has a fetish for rigorously accurate schedules. Phrases of the form "[politician] made the trains run on time" are provocative and disorienting to the point of being completely socially unacceptable.

If you want to predict how someone will answer a question, your own best answer is a good guess. Even if you think the other person is less intelligent than you, they are more likely to say the correct answer than they are to say any particular wrong answer.

Similarly, if you want to predict how someone will think through a problem, and you lack detailed knowledge of how that person's mind happens to be broken, then a good guess is that they will think the same sorts of thoughts that a non-broken mind would think.

3sark
Yay! I got it! Thanks for putting up with me.

This paper says its variance is from mutation-selection balance: intelligence is a highly polygenic trait with a huge mutational target size, which makes it hard for natural selection to remove the variance.

That's what I said in the comment you are replying to.

0sark
OK. I just fail to see the utility of this concept of 'prototypical human intelligence' for issues touched on in the OP.

There is no contradiction in believing that a prototypical human is smarter than most humans. Perhaps the variance in human intelligence is mostly explained by different degrees of divergence from the prototype due to developmental errors.

0sark
I don't get this. Surely the variance is on both sides of the mean. I'm guessing the prototype is not the mean, but then I don't see how the variance relates to the prototype being smarter than most humans. What determines the prototype? Wouldn't it make more sense for us to model others on the mean? That ideal prototype seems a silly prior. Incidentally, intelligence follows a bell curve. This paper says its variance is from mutation-selection balance: intelligence is a highly polygenic trait with a huge mutational target size, which makes it hard for natural selection to remove the variance.

Yeah, that bothered me too. But maybe Old One didn't know how long it would take to activate the Countermeasure.

2Daniel Kokotajlo
Or maybe when the Blight fleet reaches the Countermeasure it can undo it and revive the Blight, so by delaying the fleet arrival they make it possible for the Tines to build up fleets of their own and defend the Countermeasure?

This sounds reasonable. What sort of thought would you recommend responding with after noticing oneself procrastinating? I'm leaning towards "what would I like to do?"

1Jonathan_Graehl
Sounds great. Or, "what will I do now?". Obviously, with curiosity, not frustration.

Offhand, I'm guessing the very first response ought to be "Huzzah! I caught myself procrastinating!" in order to get the reverse version of the effect I mentioned. Then go on to "what would I like to do?"

I think it would be helpful to talk about exactly what quantities one is risk averse about. If we can agree on a toy example, it should be easy to resolve the argument using math.

For instance, I am (reflectively) somewhat risk averse about the amount of money I have. I am not, on top of that, risk averse about the amount of money I gain or lose from a particular action.

Now how about human lives?

I'm not sure if I am risk averse about the amount of human life in all of spacetime.

I think I am risk averse about the number of humans living at once; if you added...
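
(A minimal toy example of the distinction drawn above; the square-root utility function is an arbitrary illustrative choice. With u(w) = \(\sqrt{w}\) and current wealth w = 100, a 50/50 gamble of plus-or-minus 10 dollars gives)

\[ \tfrac{1}{2}\sqrt{110} + \tfrac{1}{2}\sqrt{90} \;\approx\; \tfrac{1}{2}(10.488 + 9.487) \;\approx\; 9.99 \;<\; 10 = \sqrt{100}, \]

so the gamble is (mildly) declined. Risk aversion about total wealth already penalizes the gamble; no separate risk aversion about the gain or loss from this particular action is needed to produce that behavior.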

-3timtyler
I'm an ethical egoist - so my opinions here are likely to be off topic. Perhaps that makes non-utilitarian preferences seem less unreasonable to me, though. If someone prefers saving 9 lives at p = 0.1 to 1 life with certainty - well, maybe they just want to make sure that somewhere in the multiverse is well-populated. It doesn't necessarily mean they don't care - just that they don't care in a strictly utilitarian way. If you are risk-neutral, I agree that there is no reason to diversify.

Real analysis is the first thing that comes to mind. Linear algebra is the second thing.

Lately I've been thinking about whether and how learning math can improve one's thinking in seemingly unrelated areas. I should be able to report on my findings in a year or two.

2patrissimo
This seems like a classic example of the standard fallacious defense of undirected research (that it might and sometimes does create serendipitous results)? Yes, learning something useless/nonexistent might help you learn useful things about stuff that exists, but it seems awfully implausible that it helps you learn more useful things about existence than studying the useful and the existing. Doing the latter will also improve your thinking in seemingly unrelated areas...while having the benefit of not being useless. If instead of learning the clever tricks of combinatorics as an undergraduate, I had learned useful math like statistics or algorithms, I think I would have had just as much mental exercise benefit and gotten a lot more value.

Converting Go positions from SGF to LaTeX.

Writing the above comment got me thinking about agents having different discount rates for different sorts of goods. Could the appearance of hyperbolic discounting come from a mixture of different rates of exponential discounting?

I remembered that the same sort of question comes up in the study of radioisotope decay. A quick google search turned up this blog, which says that if you assume a maximum-entropy mixture of decay rates (constrained by a particular mean energy), you get hyperbolic decay of the mixture. This is exactly the answer I was looking for.
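
(The calculation is short enough to reproduce. Assume the maximum-entropy mixture is an exponential density over decay rates \(\lambda\) with mean \(k\), and write \(D(t)\) for the mixed discount factor at delay \(t\):)

\[ D(t) \;=\; \int_0^{\infty} \frac{1}{k}\, e^{-\lambda/k}\, e^{-\lambda t}\, d\lambda \;=\; \frac{1}{k}\cdot\frac{1}{\,t + 1/k\,} \;=\; \frac{1}{1 + k t}, \]

which is exactly the standard hyperbolic discount curve.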

In this metaphor, are learning and knowing investments that will return future cash? Why should there be different discount rates?

By learning, I mean gaining knowledge. Humans can receive enjoyment both from having stuff and from gaining stuff, and knowledge is not an exception.

It's true that a dynamically-consistent agent can't have different discount rates for different terminal values, but bounded rationalists might talk about instrumental values using the same sort of math they use for terminal values. In that context it makes sense to use different discount rates for different sorts of goods.

6Peter_de_Blanc
Writing the above comment got me thinking about agents having different discount rates for different sorts of goods. Could the appearance of hyperbolic discounting come from a mixture of different rates of exponential discounting? I remembered that the same sort of question comes up in the study of radioisotope decay. A quick google search turned up this blog, which says that if you assume a maximum-entropy mixture of decay rates (constrained by a particular mean energy), you get hyperbolic decay of the mixture. This is exactly the answer I was looking for.

Peter isn't the only person on that twitter account.

How did you figure that out?

3shokwave
I misunderstood how retweeting works.