To check understanding: if, in the first timeline, we use radiation that doesn't exceed double the heteropneum's EFS, then there remains one timeline. But if we do exceed it, there are multiple timelines that aren't distinguishable ... except that the ones with <2x the EFS can't have been the original timeline, because otherwise there wouldn't be branching. I guess I'm confused
I'm confused by the predictions of death rates for the global population -- seems like that's what would happen only if 50% of the world population were infected all at once. Is it just exponential growth that's doing the work there? I'm also confused about how long contagion remains well-modelled as exponential
To the extent this is a correct summary, I note that it's not obvious to me that agents would sharpen their reasoning skills via test cases rather than establishing proofs on bounds of performance and so on. Though I suppose either way they are using logic, so it doesn't affect the claims of the post
Here is my attempt at a summary of (a standalone part of) the reasoning in this post.
Most of the rituals were created by individuals that did actually understand the real reasons for why certain things had to happen
This is not part of my interpretation, so I was surprised to read this. Could you say more about why you think this? (Either why you think this is being argued for in Vaniver's / Scott's posts, or why you believe it is fine; I'm mostly interested in the arguments for this claim.)
For example, Scott writes:
...How did [culture] form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European
This link (and the one for "Why do we fear the twinge of starting?") is broken (I think it's an admin view?).
Yes, you're quite right!
The intuition becomes a little clearer when I take the following alternative derivation:
Let us look at the change in expected value when I increase my capabilities. From the expected value stemming from worlds where I win, we have . For the other actor, their probability of winning decreases at a rate that matches my increase in probability of winning. Also, their probability of deploying a safe AI doesn't change. So the change in expected value stemming from worlds where they win is .
We should be indifferent
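(Sketching that derivation in symbols I'm introducing here, none of which appear above: let p(c) be my probability of winning at capability level c, s_1(c) my probability of deploying a safe AI if I win, s_2 the other actor's probability of deploying a safe AI (assumed independent of my capabilities), with a safe outcome valued at 1 and an unsafe one at 0.)

```latex
% Assumed normalization: safe outcome worth 1, unsafe outcome worth 0.
EV(c) = p(c)\,s_1(c) + \bigl(1 - p(c)\bigr)\,s_2

% Change coming from worlds where I win:
\frac{d}{dc}\Bigl[p(c)\,s_1(c)\Bigr] = p'(c)\,s_1(c) + p(c)\,s_1'(c)

% Change coming from worlds where the other actor wins
% (their win probability falls at the rate mine rises; s_2 is unchanged):
\frac{d}{dc}\Bigl[\bigl(1 - p(c)\bigr)\,s_2\Bigr] = -\,p'(c)\,s_2

% Indifference is where the two changes cancel:
p'(c)\,s_1(c) + p(c)\,s_1'(c) = p'(c)\,s_2
```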
...It seems like keeping a part 'outside' the experience/feeling is a big piece of this for you. Does that sound right? (Similar to the unblending Kaj talks about in his IFS post, or clearing a space in Focusing)
Now of course today's structure/process is tomorrow's content
Do you mean here that as you progress, you will introspect on the nature of your previous introspections, rather than more 'object-level' thoughts and feelings?
This points to a lot of what the difference feels like to me! It jibes with my intuition for the situation that prompted this question.
I was mildly anxious about something (I forget what), and stopped myself as I was about to move on to some work (in which I would have lost the anxiety). I thought it might be useful to be with the anxiety a bit and see what was making me so anxious about the situation. This felt like it would be useful, but then I wondered if I would get bad ruminative effects. It seemed like I wouldn't, but I wasn't sure why.
I'm not sure if I shoul
...I came back to this post because I was thinking about Scott's criticism of subminds where he complains about "little people who make you drink beer because they like beer".
I'd already been considering how your robot model is nice for seeing why something submind-y would be going on. However, I was still confused about treating these various systems as basically people who have feelings and should be negotiated with, using much the same techniques I'd use to negotiate with actual people.
Revisiting, the "Personalized characters" section was pretty useful
...Section of an interesting talk by Anna Salamon relating to this. Makes the point that if an AI's ability to improve its model of fundamental physics is not linear in the amount of the Universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our Universe)
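One way to make that point precise, under my own assumption (not necessarily the talk's) that "not linear" means the AI's value u(x) for controlling a fraction x of the Universe is concave: by Jensen's inequality,

```latex
\mathbb{E}\bigl[u(X)\bigr] \;\le\; u\bigl(\mathbb{E}[X]\bigr)
```

so for any gamble X over fractions of the Universe, the AI weakly prefers receiving the expected fraction for certain, i.e. it is (weakly) risk-averse over such gambles.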
I really enjoyed this post, and starting with the plausible robot design was very helpful for me in accessing the IFS model. I also enjoyed reflecting on your previous objections as a structure for the second part.
The part with repeated unblending sounds reminiscent of the "Clearing a space" stage of Focusing, in which one acknowledges and sets slightly to the side the problems in one's life. Importantly, you don't "go inside" the problems (I take 'going inside' to be more-or-less experiencing the affect associated with the problems). This seems pretty simil
...I think this is a great summary (EDIT: this should read "I think the summary in the newsletter was great").
That said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.
Yes, I agree. The best indicator I had of making a mathematical mistake was whether my intuition agreed in hindsight
Probably the most valuable nets are those used by people who already have malaria, to prevent it from spreading to mosquitoes, and thus to more people
I hadn't thought about this! I'd be interested in learning more about this. Do you have a suggested place to start reading or more search term suggestions (on top of Ewald)?
Also, can animals harbour malaria pathogens that harm humans? This section of the wiki page on malaria makes me think not, but it's not explicitly stated
Parasites in general and malaria in particular are pretty specific. For example, humans developed immunity shortly after speciation from chimps and malaria only jumped back 30kya (but probably did so multiple times to produce the several species of malaria). It's pretty clear that it doesn't have other hosts in the New World because the strategy of treating all humans in an area for 3 weeks wipes it out. But it's hard to rule out the possibility that it has other hosts in Africa.
Ewald has written lots of great papers. Here is a paper summari...
your decision theory maps from decisions to situations
Could you say a little more about what a situation is? One thought I had is that maybe a situation is the result of a choice? But then it sounds like your decision theory would decide whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that's not it
But the point is that each theory should be capable of standing on its own
Can you say a little more about how ADT doesn't stand on its own? After all, ADT is just defined as:
...An A
So I think an account of anthropics that says "give me your values/morality and I'll tell you what to do" is not an account of morality + anthropics, but has actually pulled out morality from an account of anthropics that shouldn't have had it. (Schematically, rather than `define adt(decisionProblem) = chooseBest(someValues, decisionProblem)`, you now have `define adt(values, decisionProblem) = chooseBest(values, decisionProblem)`.)
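A toy Python rendering of that schematic (every name here is illustrative; this is not an actual ADT implementation):

```python
# Toy illustration of factoring values out of the anthropic decision procedure.

def choose_best(values, decision_problem):
    """Pick the action whose outcome scores highest under `values`."""
    return max(decision_problem["actions"],
               key=lambda a: values(decision_problem["outcome"](a)))

SOME_VALUES = lambda outcome: outcome  # a stand-in value function, baked into the first version

def adt_with_builtin_values(decision_problem):
    # "Morality baked in": the procedure quietly fixes someValues itself.
    return choose_best(SOME_VALUES, decision_problem)

def adt(values, decision_problem):
    # "Morality factored out": the caller must supply their own values.
    return choose_best(values, decision_problem)

# Example: two actions, outcomes are just numbers, values = identity.
problem = {"actions": ["stay", "switch"], "outcome": {"stay": 1, "switch": 3}.get}
print(adt(lambda o: o, problem))  # -> "switch"
```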
Perhaps you think that an account that makes mention of morality ends up being (partly) a theory of morality? And that also we shou
...It seems to me that ADT separates anthropics and morality. For example, Bayesianism doesn't tell you what you should do, just how to update your beliefs. Given your beliefs, what you value decides what you should do. Similarly, ADT gives you an anthropic decision procedure. What exactly does it tell you to do? Well, that depends on your morality!
As I read through, the core model fit well with my intuition. But then I was surprised when I got to the section on religious schisms! I wondered why we should model the adherents of a religion as trying to join the school with the most 'accurate' claims about the religion.
On reflection, it appears to me that the model probably holds roughly as well in the religion case as the local radio intellectual case. Both of those are examples of "hostile" talking up. I wonder if the ways in which those cases diverge from pure information sharing explains the differ
...When it comes to disclosure policies, if I'm uncertain between the "MIRI view" and the "Paul Christiano" view, should I bite the bullet and back one approach over the other? Or can I aim to support both views, without worrying that they're defeating each other?
My current understanding is that it's coherent to support both at once. That is, I can think that possibly intelligence needs lots of fundamental insights, and that safety needs lots of similar insights (this is supposed to be a characterisation of a MIRI-ish view). I can think that work done on figu
...I think you've got a lot of the core idea. But it's not important that we know that the data point has some ranking within a distribution. Let me try and explain the ideas as I understand them.
The unbiased estimator is unbiased in the sense that, for any actual value of the thing being estimated, the expected value of the estimate across the possible data is the true value.
To be concrete, suppose I tell you that I will generate a true value, and then add either +1 or -1 to it with equal probability. An unbiased estimator is just to report back the value y
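A quick simulation of that +1/-1 example (the true value and sample size here are just illustrative):

```python
import random

TRUE_VALUE = 7.0  # the value I generate and then perturb

def observe():
    # Observation is the true value plus +1 or -1 with equal probability.
    return TRUE_VALUE + random.choice([+1, -1])

def unbiased_estimate(y):
    # "Just report back the value y."
    return y

estimates = [unbiased_estimate(observe()) for _ in range(100_000)]
print(sum(estimates) / len(estimates))  # averages out to ~7.0, i.e. the true value
```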
...I am also pretty interested in 2 (ex-post giving). In 2015, there was impactpurchase.org. I got in contact with them about it, and the major updates Paul reported were a) being willing to buy partial contributions (not just from people who were claiming full responsibility for things) and b) being more focused on what's being funded (for example, only asking for people to submit claims on blog posts and articles).
I realise that things like impactpurchase are possibly framed in terms of a slightly divergent reason for 2 (it seems more focused on changing the i
...I'm interested in the predictors' incentives.
One problem with decision markets is that you only get paid for your information about an option if the decision is taken, which can incentivise you to overstate the case for an option (if its predicted benefit is X, its true benefit is X+k, and it would have to be at X+k+l to be chosen, then if l < k you will want to move the predicted benefit to X+k+l and make a k-l profit).
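To put made-up numbers on that arithmetic: take X = 10, k = 3, l = 1, so

```latex
\text{current prediction} = X = 10,\qquad
\text{true benefit} = X + k = 13,\qquad
\text{threshold to be chosen} = X + k + l = 14 .
```

Since l = 1 < k = 3, pushing the prediction up to 14 gets the option chosen, and on this accounting the predictor nets k - l = 2.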
Maybe you avoid this if you pay for participation in PAES, but then you might risk people piling on to obvious judgments to get paid. M
...On estimating expected value, I'm reminded of some of Hanson's work where he suggests predicting later evaluation (recent example: http://www.overcomingbias.com/2018/11/how-to-fund-prestige-science.html). I think this is an interesting subcase of the evaluating subprocess. It also fits nicely with this post by PC
Thanks for the video! I had already skimmed this post when I noticed it, and then I watched it and reread the post. Perhaps my favourite thing about it was that it was slightly non-linear (skipping ahead to the diagram, non-linearity when covering sections).
Could you say a bit more about your worries with (scaling) prediction markets?
Do you have any thoughts about which experiments have the best expected information value per $?
This was really interesting. I've thought of this comment on-and-off for the last month.
You raised an interesting reason for thinking that transhumans would have high anthropic measure. But if you have a reference-class based anthropic theory, couldn't transhumans have a lot of anthropic measure, but not be in our reference class (that is, for SSA, we shouldn't reason as if we were selected from a class containing all humans and transhumans)?
Even if we think that the reference class should contain transhumans, do we have positive reasons for thinking that
...Yes, that seems an important case to consider.
You might still think the analysis in the post is relevant if there are actors that can shape the incentive gradients you talk about: Google might be able to focus its sub-entities in a particular way while maintaining profit or a government might choose to implement more or less oversight over tech companies.
Even with the above paragraph, it seems like the relative change-over-time in resources and power of the strategic entities would be important to consider, as you point out. In this case, it seems like (known) fast takeoffs might be safer!
I talked to a couple of people in relevant organisations about possible info hazards from talking about races (not because this model is sophisticated or non-obvious, but because such talk contributes to general self-fulfilling chattering). Those I talked to were not worried about (a) simple pieces with at least some nuance in general, or (b) this post in particular
Having the rules in the post made me think you wanted new suggestions in this thread. The rest of the post and habryka's comment point towards new comments in the old thread.
If you want people to update the old thread, I would either remove the rules from this post, or add a caveat like "Remember, when you go to post in that thread, you should follow the rules below"
I've been trying this for a couple of weeks now. It's hard! I often have a missing link in the distraction chain: I know what came at point X in the chain and what came at X-n, for n > 1, but not the steps in between. When I try to probe the missing part it's pretty uncomfortable. Like using or poking a numb limb. It can be pretty aversive, so I can't bring myself to do this meditation every time I meditate.
This changed my mind about the parent comment (I think the first paragraph would have done so, but the example certainly helped).
In general, I don't mind added concreteness even at the cost of some valence-loading. But seeing how well "sanction" works and some other comments that seem to disagree on the exact meaning of "punch", I guess not using "punch" would have been better
Does your program assume that the Kelly bet stays a fixed size, rather than changing?
Here's a program you can paste in your browser that finds the expected value from following Kelly in Gurkenglas' game (it finds EV to be 20)
https://pastebin.com/iTDK7jX6
(You can also fiddle with the first argument to `experiment` to see some of the effects when 4 doesn't hold.)
It sounds like in the first part of your post you're disagreeing with my choice of reference class when using SSA? That's reasonable. My intuition is that if one ends up using a reference class-dependent anthropic principle (like SSA) that transhumans would not be part of our reference class, but I suppose I don't have much reason to trust this intuition.
On anthropic measure being tied to independently-intelligent minds, what is the difference between an independently- and dependently-intelligent mind? What makes you think the mind needs to be specifically independently-intelligent?
Yes, I suppose the only way this would not be an issue is if the aliens are travelling at a very high fraction of the speed of light and the expansion of the Universe means that they will never reach spatially distant parts of it in time for this to matter.
In SETI-attack, is the idea that the information signals are disruptive and cause the civilisations they may annihilate to be too disrupted (perhaps by war or devastating technological failures) to defend themselves?
Yeah, that's a good point. I will amend that part at some point.
Also, the analysis might have some predictions if civilisations don't pass through a (long) observable stage before they start to expand. It increases the probability that a shockwave of intergalactic expansion will arrive at Earth soon. Still, if the region of our past light cone where young civilisations might exist is small enough, we probably just lose information on where the filter is likely to be
I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.
To be clear about what I mean by those constraints, here's an example. One way you might be able to implement this function is if you can enumerate all the values of A and then pick the one whose B is maximal according to some ordering. If you can't enumerate A, you might have some strategy for searching through it.
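A toy Python sketch of that enumerate-A, order-B strategy (the domain and scoring function are stand-ins of my own, not from the post):

```python
# Enumerate every candidate in A and pick the one whose B-value is maximal
# under B's ordering.

def argmax_by_enumeration(domain, f):
    """Return the element a of `domain` with the largest f(a)."""
    return max(domain, key=f)

# Example: A = 0..9, B = ints with the usual ordering, f scores each candidate.
print(argmax_by_enumeration(range(10), lambda a: -(a - 6) ** 2))  # -> 6
```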
But that's not the only feasible strategy. For example, if you can order B, take two elements of B to C and order C, you might do
...I feel a pull towards downvoting this. I am not going to, because I think this was posted in good faith, and as you say, it's clear a lot of time and effort has gone into these comments. That said, I'd like to unpack my reaction a bit. It may be you disagree with my take, but it may also be there's something useful in it.
[EDIT: I should disclaim that my reaction may be biased from having recently received an aggressive comment.]
First, I should note that I don't know why you did these edits. Did sarahconstantin ask you to? Did you think a good post was bein
...If you can’t viscerally feel the difference between .1% and 1%, or a thousand and a million, you will probably need more of a statistics background to really understand things like “how much money is flowing into AI, and what is being accomplished, and what does it mean?”
I'm surprised at the suggestion that studying statistics strengthens gut sense of the significance of probabilities. I've updated somewhat towards that based on the above, but I would still expect something more akin to playing with and visualising data to be useful for this
I think this can be resolved by working in terms of packages of property (in this case, uninterrupted ownership of the land), where the value can be greater than the sum of its parts. If someone takes a day of ownership, they have to be willing to pay in excess of the difference between "uninterrupted ownership for 5 years" and "ownership for the first 3 years and 2 days", which could be a lot. Certainly this is a bit of a change from Harberger taxes, since it needs to allow people to put valuations on extended periods.
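With made-up numbers: suppose the owner values uninterrupted 5-year ownership at 100 and "the first 3 years and 2 days" at only 30. Then to take that single day, the interrupter has to bid

```latex
\text{price of the interrupting day} \;>\; V(\text{5 years uninterrupted}) - V(\text{first 3 years and 2 days}) \;=\; 100 - 30 \;=\; 70,
```

which is far more than a naive per-day valuation of the land would suggest.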
It also doesn't really resolve Gwern's case below, where the value to an actor of some property might be less than the amount of value they have custody over via that property.