All of rk's Comments + Replies

I think this can be resolved by working in terms of packages of property (in this case, uninterrupted ownership of the land), where the value can be greater than the sum of its parts. If someone takes a day of ownership, they have to be willing to pay in excess of the difference between "uninterrupted ownership for 5 years" and "ownership for the first 3 years and 2 days", which could be a lot. Certainly this is a bit of a change from Harberger taxes / needs to allow people to put valuations on extended periods.

It also doesn't really resolve Gwern's case below, where the value to an actor of some property might be less than the amount of value they have custody over via that property.
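A minimal numeric sketch of the package-valuation idea above; the dollar figures are hypothetical:

```python
# Hypothetical declared valuations under a "packages of property" scheme.
value_uninterrupted_5y = 100_000   # "uninterrupted ownership for 5 years"
value_with_gap         = 60_000    # same period with a single day carved out

# To take that one day, a buyer must beat the drop in declared package value,
# not just a pro-rata share of the 5-year value.
minimum_bid  = value_uninterrupted_5y - value_with_gap
pro_rata_day = value_uninterrupted_5y / (5 * 365)

print(minimum_bid)          # 40000
print(round(pro_rata_day))  # ~55
```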

1Rachel Shu
I don't think those are separate things? The value of a roof is the value of everything underneath it when it rains.

To check understanding: if in the first timeline, we use a radiation that doesn't exceed double the heteropneum's EFS, then there remains one timeline. But if we do, there are multiple timelines that aren't distinguishable ... except that the ones with <2x the EFS can't have been the original timeline, because otherwise there wouldn't be branching. I guess I'm confused

3abstractapplic
If your EFS is more than double a heteropneum's amplitude, you can get a (perfectly accurate) recording of what your EFS would have been had you used a different resonance on it. The in-universe justification for this is that Sphere scientists can observe - and infer things about - alternate timelines under the right conditions.

I'm confused by the predictions of death rates for the global population -- seems like that's what would happen only if 50% of the world population were infected all at once. Is it just exponential growth that's doing the work there? I'm also confused about how long contagion is well-modelled as exponential

To the extent this is a correct summary, I note that it's not obvious to me that agents would sharpen their reasoning skills via test cases rather than establishing proofs on bounds of performance and so on. Though I suppose either way they are using logic, so it doesn't affect the claims of the post

Here is my attempt at a summary of (a standalone part of) the reasoning in this post.

  • An agent trying to get a lot of reward can get stuck (or at least waste data) when the actions that seem good don't plug into the parts of the world/data stream that contain information about which actions are in fact good. That is, an agent that restricts its information about the reward+dynamics of the world to only its reward feedback will get less reward
  • One way an agent can try and get additional information is by deductive reasoning from propositions (if they can r
... (read more)

Somehow you're expecting to get a lot of information about task B from performance on task A

Are "A" and "B" backwards here, or am I not following?

3abramdemski
Backwards, thanks!
Answer by rk30

A→B is true iff one of (i) A is false or (ii) B is true. Therefore, if B is some true sentence, A→B is true for any A. Here, B is "the 87,653rd digit of π is a 7".

2Shmi
OK, so the trouble with logical induction is assuming mathematical realism, where "the claim that the 87,653rd digit of π is a 7" is either true or false even when not yet evaluated by someone, and the paper is discussing a way to assign a reasonable probability to it (e.g. 1/10 in this case if you know nothing about digits or pi apriori) using the trading market model. In which case the implication condition does not hold ever (since the chance of making an error in calculating the 87,653rd digit of π is always larger than in calculating 1+1). So they are treating logical uncertainty as environmental then. It makes sense if so.
2Charlie Steiner
To elaborate, A->B is an operation with a truth table:

A  B  A->B
T  T   T
T  F   F
F  T   T
F  F   T

The only thing that falsifies A->B is if A is true but B is false. This is different from how we usually think about implication, because it's not like there's any requirement that you can deduce B from A. It's just a truth table. But it is relevant to probability, because if A->B, then you're not allowed to assign high probability to A but low probability to B.

EDIT: Anyhow I think that paragraph is a really quick and dirty way of phrasing the incompatibility of logical uncertainty with normal probability. The issue is that in normal probability, logical steps are things that are allowed to happen inside the parentheses of the P() function. No matter how complicated the proof of φ, as long as the proof follows logically from premises, you can't doubt φ more than you doubt the premises, because the P() function thinks that P(premises) and P(logical equivalent of premises according to Boolean algebra) are "the same thing."
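A quick sketch spelling out the table and the probability constraint; the world-probabilities below are made up:

```python
from itertools import product

# Material implication: A -> B is (not A) or B, matching the truth table above.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

for a, b in product([True, False], repeat=2):
    print(a, b, implies(a, b))   # only A=True, B=False gives False

# If you are certain of A -> B, the world (A=True, B=False) gets probability 0,
# so P(B) can't be lower than P(A). Check with arbitrary world-probabilities:
worlds = {(True, True): 0.5, (True, False): 0.0, (False, True): 0.2, (False, False): 0.3}
p_a = sum(p for (a, b), p in worlds.items() if a)
p_b = sum(p for (a, b), p in worlds.items() if b)
assert p_b >= p_a   # 0.7 >= 0.5
```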

Most of the rituals were created by individuals that did actually understand the real reasons for why certain things had to happen

This is not part of my interpretation, so I was surprised to read this. Could you say more about why you think this? (Either why you think this is being argued for in Vaniver's / Scott's posts or why you believe it is fine; I'm mostly interested in the arguments for this claim).

For example, Scott writes:

How did [culture] form? Not through some smart Inuit or Fuegian person reasoning it out; if that had been it, smart European

... (read more)

This link (and the one for "Why do we fear the twinge of starting?") is broken (I think it's an admin view?).

(Correct link)

1Eli Tyre
They should both be fixed now. Thanks!

Yes, you're quite right!

The intuition becomes a little clearer when I take the following alternative derivation:

Let us look at the change in expected value when I increase my capabilities. From the expected value stemming from worlds where I win, we have p′q + pq′. For the other actor, their probability of winning decreases at a rate that matches my increase in probability of winning. Also, their probability of deploying a safe AI doesn't change. So the change in expected value stemming from worlds where they win is −p′rq0.

We should be indifferent

... (read more)
1BurntVictory
Oh wait, yeah, this is just an example of the general principle "when you're optimizing for xy, and you have a limited budget with linear costs on x and y, the optimal allocation is to spend equal amounts on both." Formally, you can show this via Lagrange-multiplier optimization, using the Lagrangian L(x,y) = xy − λ(ax + by − M). Setting the partials equal to zero gets you λ = y/a = x/b, and you recover the linear constraint function ax + by = M. So ax = by = M/2. (Alternatively, just optimizing x(M − ax)/b works, but I like Lagrange multipliers.)

In this case, we want to maximize pq + (1−p)rq0 = p(q − rq0) + rq0, which is equivalent to optimizing p(q − rq0). Let's define w = q − rq0, so we're optimizing pw. Our constraint function is defined by the tradeoff between p and w. p(k) = (.5 − p0)k + p0, so k = (p − p0)/(.5 − p0). w(k) = (r − 1)q0·k + q0 − rq0 = (r − 1)q0(k − 1), so k = −w/((1 − r)q0) + 1 = (p − p0)/(.5 − p0). Rearranging gives the constraint function ((.5 − p0)/((1 − r)q0))·w + p = .5. This is indeed linear, with a total 'budget' M of .5 and a p-coefficient b of 1. So by the above theorem we should have 1·p = .5/2 = .25.
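A numeric check of the final step, with made-up values of p0, q0 and r (the grid search should land on p ≈ .25 regardless):

```python
import numpy as np

# Made-up parameter values; the answer should not depend on them.
p0, q0, r = 0.1, 0.8, 0.5
c = (0.5 - p0) / ((1 - r) * q0)        # coefficient on w in the constraint c*w + p = .5

p_grid = np.linspace(p0, 0.5, 100001)  # p ranges from p0 (at k=0) up to .5 (at k=1)
w_grid = (0.5 - p_grid) / c            # w implied by the constraint
best_p = p_grid[np.argmax(p_grid * w_grid)]
print(best_p)                          # ~0.25
```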

It seems like keeping a part 'outside' the experience/feeling is a big part for you. Does that sound right? (Similar to the unblending Kaj talks about in his IFS post or clearing a space in Focusing)

Now of course today's structure/process is tomorrow's content

Do you mean here that as you progress, you will introspect on the nature of your previous introspections, rather than more 'object-level' thoughts and feelings?

5Gordon Seidoh Worley
Yeah, I do sometimes make an inside/outside distinction as a metaphor for talking about the subject/object distinction because things that are object can in a certain sense be said to be outside the self and thus available for manipulation and considering by the self and those things that are subject as inside and cannot as easily be manipulated and seen, just as it's easier for me to see and manipulate the cup on my desk than to see and manipulate the stomach inside my body. Most progress with insight meditation consists of gradually (or suddenly!) moving what was subject/inside to object/outside, and a way to do that is by engaging with it in this way through a deliberative introspective process as part of meditation.

Yes, and also more broadly that what was once skillful inspection of, say, observable behavior, can later become unskillful excess attention on behavior when you should now be paying more attention to the precursors of behavior because those are more readily accessible to you.

I think that though one may use the techniques looking for a solution (which I agree makes them solution-oriented in a sense), it's not right to say that in, say, Focusing, you introspect on solutions rather than causes. So maybe the difference is more the optimism than the area of focus?

This points to a lot of what the difference feels like to me! It jibes with my intuition for the situation that prompted this question.

I was mildly anxious about something (I forget what), and stopped myself as I was about to move on to some work (in which I would have lost the anxiety). I thought it might be useful to be with the anxiety a bit and see what was so anxious about the situation. This felt like it would be useful, but then I wondered if I would get bad ruminative effects. It seemed like I wouldn't, but I wasn't sure why.

I'm not sure if I shoul

... (read more)
4Raemon
I feel like I do two types of things, that feel conceptually similar. (Maybe only one of them is rumination?)

* Thinking about the state of the world and being stressed by it
* Thinking about a particular social situation that is stressing me out, and rehearsing what I want to say to that person.

The former is more classical rumination, but they feel related. In the second case, my brain is clearly trying to get to a state where it feels like it knows what to do the next time I encounter the social situation, which is action-oriented. Even in the first case... while I may not be planning any actions, it still feels like it's oriented around action. Like, I'm feeling trapped and unable to act, but the whole thought process is still oriented around "man, I wish I could act." Or "man, I'm worried about how other people are acting."

I came back to this post because I was thinking about Scott's criticism of subminds where he complains about "little people who make you drink beer because they like beer".

I'd already been considering how your robot model is nice for seeing why something submind-y would be going on. However, I was still confused about thinking about these various systems as basically people who have feelings and should be negotiated with, using basically the same techniques I'd use to negotiate with people.

Revisiting, the "Personalized characters" section was pretty useful

... (read more)

Not Ben, but I have used X Goodhart more than 20 times (summing over all the Xs)

rk100

Section of an interesting talk relating to this by Anna Salamon. Makes the point that if ability to improve its model of fundamental physics is not linear in the amount of Universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our Universe)
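A toy illustration of that point, assuming the returns are diminishing; the square-root utility below is just a stand-in for "not linear":

```python
from math import sqrt

# If the value of controlling a fraction x of the universe grows like sqrt(x)
# (a made-up concave curve), a 50/50 gamble over fractions 0 and 1 is worth less
# than holding the fraction 0.5 for certain: risk aversion over such gambles.
ev_gamble  = 0.5 * sqrt(0.0) + 0.5 * sqrt(1.0)   # 0.5
ev_certain = sqrt(0.5)                           # ~0.707
print(ev_gamble < ev_certain)                    # True
```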

I really enjoyed this post and starting with the plausible robot design was really helpful for me accessing the IFS model. I also enjoyed reflecting on your previous objections as a structure for the second part.

The part with repeated unblending sounds reminiscent of the "Clearing a space" stage of Focusing, in which one acknowledges and sets slightly to the side the problems in one's life. Importantly, you don't "go inside" the problems (I take 'going inside' to be more-or-less experiencing the affect associated with the problems). This seems pretty simil

... (read more)
6Kaj_Sotala
Thanks, that's very nice and specific feedback. :) Yeah, these feel basically like the same kind of thing. I find that Focusing and IFS have basically blended into some hybrid technique for me, with it being hard to tell the difference anymore. Possibly combined with other related practices, such as Focusing: Elimination of internal conflicts, increased well-being due to improved access to Self, better ability to do things which feel like worth doing. The personal examples in my other comment may give a better idea.

I think this is a great summary (EDIT: this should read "I think the summary in the newsletter was great").

That said, these models are still very simplistic, and I mainly try to derive qualitative conclusions from them that my intuition agrees with in hindsight.

Yes, I agree. The best indicator I had of making a mathematical mistake was whether my intuition agreed in hindsight

2Rohin Shah
(Fyi, this was only my opinion, the summary is in the newsletter. I usually don't post the summary on the post itself, since it is typically repeating the post in a manner that doesn't generate new insights.)

Thanks! The info on parasite specificity/history of malaria is really useful.

I wonder if you know of anything specifically about the relative cost-effectiveness of nets for infected people vs uninfected people? No worries if not

3Douglas_Knight
I don't know. My claim was based on reasoning from first principles. It was intended as an illustrative example that there could be positive externalities, not to measure them. If you have to triage nets, it's probably the way to go, but if you're triaging nets, you've probably made a bad decision.

I can think of so many reasons to concentrate nets in one village, rather than spreading them out and micro-managing the deployments in the villages. One reason is habit formation. Another is the cost of distribution, which is probably low for marginal nets and high for a new village. A third is that the positive externalities compound, at least if you cross over the threshold of locally wiping out malaria. (Under that threshold, I'm not sure.)

Probably the most valuable nets are those deployed on people who already have malaria, to prevent it from spreading to mosquitoes, and thus to more people

I hadn't thought about this! I'd be interested in learning more about this. Do you have a suggested place to start reading or more search term suggestions (on top of Ewald)?

Also, can animals harbour malaria pathogens that harm humans? This section of the wiki page on malaria makes me think not, but it's not explicitly stated

Parasites in general and malaria in particular are pretty specific. For example, humans developed immunity shortly after speciation from chimps and malaria only jumped back 30kya (but probably did so multiple times to produce the several species of malaria). It's pretty clear that it doesn't have other hosts in the New World because the strategy of treating all humans in an area for 3 weeks wipes it out. But it's hard to rule out the possibility that it has other hosts in Africa.

Ewald has written lots of great papers. Here is a paper summari... (read more)

your decision theory maps from decisions to situations

Could you say a little more about what a situation is? One thing I thought is maybe that a situation is a result of a choice? But then it sounds like your decision theory decides whether you should, for example, take an offered piece of chocolate, regardless of whether you like chocolate or not. So I guess that's not it

But the point is that each theory should be capable of standing on its own

Can you say a little more about how ADT doesn't stand on its own? After all, ADT is just defined as:

An A

... (read more)

So I think an account of anthropics that says "give me your values/morality and I'll tell you what to do" is not an account of morality + anthropics, but has actually pulled out morality from an account of anthropics that shouldn't have had it. (Schematically, rather than define adt(decisionProblem) = chooseBest(someValues, decisionProblem), you now have define adt(values, decisionProblem) = chooseBest(values, decisionProblem))

Perhaps you think that an account that makes mention of morality ends up being (partly) a theory of morality? And that also we shou

... (read more)
2Chris_Leong
The way I see it, your morality defines a preference ordering over situations and your decision theory maps from decisions to situations. There can be some interaction there is that different moralities may want different inputs, ie. consequentialism only cares about the consequences, while others care about the actions that you chose. But the point is that each theory should be capable of standing on its own. And I agree with probability being somewhat ambiguous for anthropic situations, but our decision theory can just output betting outcomes instead of probabilities.

It seems to me that ADT separates anthropics and morality. For example, Bayesianism doesn't tell you what you should do, just how to update your beliefs. Given your beliefs, what you value decides what you should do. Similarly, ADT gives you an anthropic decision procedure. What exactly does it tell you to do? Well, that depends on your morality!

2Chris_Leong
The point is that ADT is a theory of morality + anthropics, when your core theory of anthropics conceptually shouldn't refer to morality at all, but should be independent.

As I read through, the core model fit well with my intuition. But then I was surprised when I got to the section on religious schisms! I wondered why we should model the adherents of a religion as trying to join the school with the most 'accurate' claims about the religion.

On reflection, it appears to me that the model probably holds roughly as well in the religion case as the local radio intellectual case. Both of those are examples of "hostile" talking up. I wonder if the ways in which those cases diverge from pure information sharing explains the differ

... (read more)
2ozziegooen
Good points; I would definitely agree that people are generally reluctant to blatantly deceive themselves. There is definitely some cost to incorrect beliefs, though it can vary greatly in magnitude depending on the situation. For instance, just say all of your friends go to one church, and you start suspecting your local minister of being less accurate than others. If you actually don't trust them, you could either pretend you do and live as such, or be honest and possibly have all of your friends dislike you. You clearly have a strong motivation to believe something specific here, and I think generally incentives trump internal honesty.[1]

On the end part, I don't think that "hostile talking up" is what the hostile actors want to be seen as doing :) Rather, they would be trying to make it seem like the people previously above them are really below them. To them and their followers, they seem to be at the top of their relevant distribution.

1) There's been a lot of discussion recently about politics being tribal, and I think it makes a lot of pragmatic sense. link

When it comes to disclosure policies, if I'm uncertain between the "MIRI view" and the "Paul Christiano" view, should I bite the bullet and back one approach over the other? Or can I aim to support both views, without worrying that they're defeating each other?

My current understanding is that it's coherent to support both at once. That is, I can think that possibly intelligence needs lots of fundamental insights, and that safety needs lots of similar insights (this is supposed to be a characterisation of a MIRI-ish view). I can think that work done on figu

... (read more)

I think you've got a lot of the core idea. But it's not important that we know that the data point has some ranking within a distribution. Let me try and explain the ideas as I understand them.

The unbiased estimator is unbiased in the sense that for any actual value of the thing being estimated, the expected value of the estimation across the possible data is the true value.

To be concrete, suppose I tell you that I will generate a true value, and then add either +1 or -1 to it with equal probability. An unbiased estimator is just to report back the value y

... (read more)
4Chris_Leong
"So, though the asymmetry is doing some work here (the further we move above 0, the more likely that +1 rather than -1 is doing some of the work), it could still be that 23,000 is the smallest of the values I sampled" - That's very interesting. So I looked at the definition on Wikipedia and it says: "An estimator is said to be unbiased if its bias is equal to zero for all values of parameter θ." This greatly clarifies the situation for me as I had thought that the bias was a global aggregate, rather than a value calculated for each value of the parameter being optimised (say basketball ability). Bayesian estimates are only unbiased in the former, weaker sense. For normal distributions, the Bayesian estimate is happy to underestimate the extremeness of values in order to narrow the probability distribution of predictions for less extreme values. In other words, it is accepting a level of bias in order to narrow the range.

I am also pretty interested in 2 (ex-post giving). In 2015, there was impactpurchase.org. I got in contact with them about it, and the major updates Paul reported were a) being willing to buy partial contributions (not just for people who were claiming full responsibility for things) and b) more focused on what's being funded (like for example, only asking for people to submit claims on blog posts and articles).

I realise that things like impactpurchase is possibly framed in terms of a slightly divergent reason for 2 (it seems more focused on changing the i

... (read more)

I'm interested in the predictors' incentives.

One problem with decision markets is that you only get paid for your information about an option if the decision is taken, which can incentivise you to overstate the case for an option (if you see its predicted benefit at X, its true benefit is X+k, and it would have to be at X+k+l to be chosen, then if l < k you will want to move the predicted benefit to X+k+l and make a k-l profit).

Maybe you avoid this if you pay for participation in PAES, but then you might risk people piling on to obvious judgments to get paid. M

... (read more)
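Plugging hypothetical numbers into the incentive worry above (the k − l profit figure just echoes the comment's own accounting, not a fully worked-out market mechanism):

```python
X, k, l = 10.0, 3.0, 1.0    # displayed prediction, undervaluation, gap to the decision threshold
true_benefit = X + k        # 13: what an honest forecast would say
threshold    = X + k + l    # 14: what the forecast must reach for the option to be chosen
if l < k:
    profit = k - l          # 2: overstating to the threshold still pays
    print(true_benefit, threshold, profit)
```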
2ozziegooen
I'm happy to talk theoretically, though have the suspicion that there are a whole lot of different ways to approach this problem and experimentation really is the most tractable way to make progress on it. That said, ideally, a prediction system would include ways of predicting the EVs of predictions and predictors, and people could get paid somewhat accordingly; in this world, high-EV predictions would be ones which may influence decisions counterfactually. You may be able to have a mix of judgments from situations that will never happen, and ones that are more precise but only applicable to ones that do. I would be likewise suspicious that naive decision markets that use one or two techniques like that would be enough to really make a system robust, but could imagine those ideas being integrated with others for things that are useful.

On estimating expected value, I'm reminded of some of Hanson's work where he suggests predicting later evaluation (recent example: http://www.overcomingbias.com/2018/11/how-to-fund-prestige-science.html). I think this is an interesting subcase of the evaluating subprocess. It also fits nicely with this post by PC

2ozziegooen
Good find. I didn't see that post (it came out a day after I published this, coincidentally). I'm surprised it came out so recently but imagine he probably had similar ideas, and likely wrote them down, much earlier. I definitely recommend it for more details on the science aspect. From the post: "For each scientific paper, there is a (perhaps small) chance that it will be randomly chosen for evaluation in, say, 30 years. If it is chosen, then at that time many diverse science evaluation historians (SEH) will study the history of that paper and its influence on future science, and will rank it relative to its contemporaries. To choose this should-have-been prestige-rank, they will consider how important was its topic, how true and novel were its claims, how solid and novel were its arguments, how influential it actually was, and how influential it would have been had it received more attention. .... Using these assets, markets can be created wherein anyone can trade in the prestige of a paper conditional on that paper being later evaluated. Yes traders have to wait a long time for a final payoff. But they can sell their assets to someone else in the meantime, and we do regularly trade 30 year bonds today. Some care will have to be taken to make sure the base asset that is bet is stable, but this seems quite feasible."

Thanks for the video! I had already skimmed this post when I noticed it, and then I watched it and reread the post. Perhaps my favourite thing about it was that it was slightly non-linear (skipping ahead to the diagram, non-linearity when covering sections).

Could you say a bit more about your worries with (scaling) prediction markets?

Do you have any thoughts about which experiments have the best expected information value per $?

1ozziegooen
I'm not too optimistic about traditional prediction markets; I have feelings similar to Zvi. I haven't seen prediction markets be well subsidized for even a few dozen useful variables; in prediction augmented evaluation systems they would have to be done for thousands+ variables. They seem like more overhead per variable than simply stating one's probability and moving on.

My next step is just messing around a lot with my own prediction application and seeing what seems to work. I plan to gradually invite people, but let them mostly do their own testing. At this point, I want to get an intuitive idea of what seems useful, similar to my experiences making other experimental applications. I'm really not sure what ideas I may come up with, with more experimentation.

That said, I am particularly excited about estimating expected values of things, but realize I may not be able to make all of these public, or may have to keep things very apolitical. I expect it to be really easy to anger people if estimates that are actually important are public.

https://www.lesswrong.com/posts/a4jRN9nbD79PAhWTB/prediction-markets-when-do-they-work

This was really interesting. I've thought of this comment on-and-off for the last month.

You raised an interesting reason for thinking that transhumans would have high anthropic measure. But if you have a reference-class based anthropic theory, couldn't transhumans have a lot of anthropic measure, but not be in our reference class (that is, for SSA, we shouldn't reason as if we were selected from a class containing all humans and transhumans)?

Even if we think that the reference class should contain transhumans, do we have positive reasons for thinking that

... (read more)

Yes, that seems an important case to consider.

You might still think the analysis in the post is relevant if there are actors that can shape the incentive gradients you talk about: Google might be able to focus its sub-entities in a particular way while maintaining profit or a government might choose to implement more or less oversight over tech companies.

Even with the above paragraph, it seems like the relative change-over-time in resources and power of the strategic entities would be important to consider, as you point out. In this case, it seems like (known) fast takeoffs might be safer!

I talked to a couple of people in relevant organisations about possible info hazards for talking about races (not because this model is sophisticated or non-obvious, but because it contributes to general self-fulfilling chattering). Amongst those I talked to, they were not worried about (a) simple pieces with at least some nuance in general and (b) this post in particular

Comment here if you have structure/writing complaints for the post

Comment here if you are worried about info-hazard-y-ness of talking about AI races

4BurntVictory
I think your solution to "reckless rivals" might be wrong? I think you mistakenly put a multiplier of q instead of a p on the left-hand side of the inequality. (The derivation of the general inequality checks out, though, and I like your point about discontinuous effects of capacity investment when you assume that the opponent plays a known pure strategy.)

I'll use slightly different notation from yours, to avoid overloading p and q. (This ends up not mattering because of linearity, but eh.) Let p0, q0 be the initial probabilities for winning and safety|winning. Let k be the capacity variable, and without loss of generality let k start at 0 and end at km. Then p(k) = ((.5 − p0)/km)·k + p0, and q(k) = ((rq0 − q0)/km)·k + q0.

So p′ = (.5 − p0)/km, so p/p′ = p·km/(.5 − p0). And q′ = (rq0 − q0)/km, so −q′/q = q0(1 − r)/(q·km). Therefore, the left-hand side of the inequality, −pq′/(p′q), equals (p/(.5 − p0))·(q0(1 − r)/q). At the initial point k = 0, this simplifies to (p0/(.5 − p0))(1 − r).

Let's assume α = 1. The relative safety of the other project is β = rq0/q, which at k = 0 simplifies to r. Thus we should commit more to capacity when 1 − r > (p0/(.5 − p0))(1 − r), or 1 > p0/(.5 − p0), or .25 > p0. This is a little weird, but makes a bit more intuitive sense to me than q0 + p0 or q0 − p0 mattering.
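A numeric sanity check of the p0 < .25 threshold, using the objective pq + (1 − p)rq0 from the other comment above; the parameter values below are arbitrary:

```python
# The expected value p(k)*q(k) + (1 - p(k))*r*q0 should be increasing in k at
# k = 0 exactly when p0 < 0.25. Parameter choices are arbitrary (0 < r < 1).
def ev_slope_at_zero(p0, q0, r, km=1.0, eps=1e-6):
    def ev(k):
        p = (0.5 - p0) / km * k + p0
        q = (r * q0 - q0) / km * k + q0
        return p * q + (1 - p) * r * q0
    return (ev(eps) - ev(0.0)) / eps

for p0 in [0.1, 0.2, 0.25, 0.3, 0.4]:
    print(p0, ev_slope_at_zero(p0, q0=0.8, r=0.6) > 0)   # True only for p0 < 0.25
```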

Having the rules in the post made me think you wanted new suggestions in this thread. The rest of the post and habryka's comment point towards new comments in the old thread.

If you want people to update the old thread, I would either remove the rules from this post, or add a caveat like "Remember, when you go to post in that thread, you should follow the rules below"

I've been trying this for a couple of weeks now. It's hard! I often will have a missing link in the distraction chain: I know something that came at point X in the distraction chain and X-n, for n > 1. When I try and probe the missing part it's pretty uncomfortable. Like using or poking a numb limb. It can be pretty aversive, so I can't bring myself to do this meditation every time I meditate.

This changed my mind about the parent comment (I think the first paragraph would have done so, but the example certainly helped).

In general, I don't mind added concreteness even at the cost of some valence-loading. But seeing how well "sanction" works and some other comments that seem to disagree on the exact meaning of "punch", I guess not using "punch" would have been better

I did indeed! So I guess this game fails (5) out of Zvi's criteria.

Does your program assume that the Kelly bet stays a fixed size, rather than changing?

Here's a program you can paste in your browser that finds the expected value from following Kelly in Gurkenglas' game (it finds EV to be 20)

https://pastebin.com/iTDK7jX6

(You can also fiddle with the first argument to experiment to see some of the effects when 4 doesn't hold)

3Oscar_Cunningham
I believe you missed one of the rules of Gurkenglas' game, which was that there are at most 100 rounds. (Although it's possible I misunderstood what they were trying to say.) If you assume that play continues until one of the players is bankrupt then in fact there are lots of winning strategies. In particular betting any constant proportion less than 38.9%. The Kelly criterion isn't unique among them. My program doesn't assume anything about the strategy. It just works backwards from the last round and calculates the optimal bet and expected value for each possible amount of money you could have, on the basis of the expected values in the next round which it has already calculated. (Assuming each bet is a whole number of cents.)
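A generic sketch of the backwards-from-the-last-round calculation described above. The game parameters here (double-or-nothing at a 60% win rate, 10 rounds, wealth capped at 200 cents) are made up, since the thread doesn't spell out Gurkenglas' actual rules; only the dynamic-programming shape follows the description:

```python
WIN_PROB, ROUNDS, MAX_WEALTH = 0.6, 10, 200   # hypothetical game, wealth in whole cents

# value[w] = best achievable expected final wealth with w cents and no rounds left
value = list(range(MAX_WEALTH + 1))

# Work backwards: each pass computes the optimal bet for every wealth level
# from the expected values already computed for the following round.
for _ in range(ROUNDS):
    new_value = []
    for w in range(MAX_WEALTH + 1):
        best = value[w]                        # betting nothing is always allowed
        for bet in range(1, w + 1):
            win_wealth = min(w + bet, MAX_WEALTH)
            best = max(best, WIN_PROB * value[win_wealth] + (1 - WIN_PROB) * value[w - bet])
        new_value.append(best)
    value = new_value

print(value[100])   # expected final wealth starting from 100 cents
```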

It sounds like in the first part of your post you're disagreeing with my choice of reference class when using SSA? That's reasonable. My intuition is that if one ends up using a reference class-dependent anthropic principle (like SSA) that transhumans would not be part of our reference class, but I suppose I don't have much reason to trust this intuition.

On anthropic measure being tied to independently-intelligent minds, what is the difference between an independently- and dependently-intelligent mind? What makes you think the mind needs to be specifically independently-intelligent?

6mako yass
Mm. I think I oppose that intuition. It's hard to see how there can be much of a distinction between existing at low measure and simply existing less, or being less likely to have occurred, or to have been observed. So, for a garden to be considered successful I would expect its caretakers to at least try to ensure that its occupants have high anthropic measure, and at least some of the time they would succeed.

Incisive question... All I can think of is... human organizations are often a lot more conscious, behaviorally, than any individual pretends to be, and I find that I am an individual rather than an organization. I am immersed in the sensory experience of one human sitting at one terminal, rather than the immense, abstract sensory experience of, say, wikipedia, or the US intelligence community. It's conceivable that organizations with tightly integrated knowledge-bases and decisionmaking processes do have a lot of anthropic measure, but maybe there just aren't very many of them yet. I'm trying to imagine speaking to some representative of the state of knowledge of a highly integrated organization, and hearing it explain that its subjective experience anthropic measure prior for organizations is higher than its anthropic measure for individuals (multiplied by the number of individuals), but I don't know what a hive-mind representative would even act like; at what point does it stop saying "we" and start saying "I"? Humans' orgs are more like ant colonies than brains, at this point; there is collective intelligence but there's no head to talk to.

Yes, I suppose the only way that this would not be an issue is if the aliens are travelling at a very high fraction of the speed of light and the accelerating expansion of the Universe means that they will never reach spatially distant parts of the Universe in time for this to be an issue.

In SETI-attack, is the idea that the information signals are disruptive and cause the civilisations they may annihilate to be too disrupted (perhaps by war or devastating technological failures) to defend themselves?

4avturchin
The idea is that aliens purposely send dangerous AI code aimed at self-replication and at transmitting the code farther. There are a lot of technical details of how this could happen, which I described in the recently published article, available here: https://philpapers.org/rec/TURTRC

Yeah, that's a good point. I will amend that part at some point.

Also, the analysis might have some predictions if civilisations don't pass through a (long) observable stage before they start to expand. It increases the probability that a shockwave of intergalactic expansion will arrive at Earth soon. Still, if the region of our past light cone where young civilisations might exist is small enough, we probably just lose information on where the filter is likely to be

1avturchin
If the shock wave is anything below c, something like 0.9c, then we could observe the incoming shockwave; and also, because of the t^4 volume rule, the chances that we are in the outer volume of the cone where we could observe the incoming shock wave are larger, around 0.35 for 0.9c. I think that the shock originators know all this and try to send information signals ahead of physical starships, in what I call SETI-attack.
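For what it's worth, the 0.35 figure looks consistent with taking the fraction of a t^4-weighted volume that lies outside 0.9 of the full radius; this reading is an assumption, not something stated above:

```python
v = 0.9            # shockwave speed as a fraction of c
print(1 - v**4)    # ~0.344, roughly the quoted 0.35
```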

I wonder if there are any plausible examples of this type where the constraints don't look like ordering on B and search on A.

To be clear about what I mean about those constraints, here's an example. One way you might be able to implement this function is if you can enumerate all the values of A and then pick the maximum B according to some ordering. If you can't enumerate A, you might have some strategy for searching through it.

But that's not the only feasible strategy. For example, if you can order B, take two elements of B to C and order C, you might do

... (read more)
2MrMind
Yes, as I showed in my post, such operators must know at least an element of one of the domains of the function. If it knows at least an element of A, a constant function on that element has the right type. Unfortunately, it's not very interesting.

I wasn't aware that CFAR had workshops in Europe before this comment. I applied for a workshop off the back of this. Thanks!

rk120

I feel a pull towards downvoting this. I am not going to, because I think this was posted in good faith, and as you say, it's clear a lot of time and effort has gone into these comments. That said, I'd like to unpack my reaction a bit. It may be you disagree with my take, but it may also be there's something useful in it.

[EDIT: I should disclaim that my reaction may be biased from having recently received an aggressive comment.]

First, I should note that I don't know why you did these edits. Did sarahconstantin ask you to? Did you think a good post was bein

... (read more)
2Elo
I want this level of feedback culture to be more common. I want every writer to be able to grow from in-depth pulling apart of their words and putting them back together. Quality writing comes from iteration, often on the small details like the hedges and the examples and the flow. I don't know how to do the blatant thing without words, and my other option of posting without comment didn't have the same effect.

It is probably true that those are the places with most engagement. However, as someone without Facebook, I'm always grateful for things (also) being posted in non-FB places (mailing lists work too, but there is a longer lag on finding out about things that way).

2habryka
Oh, definitely agree. I wasn't advocating for not posting this here, I was advocating for also posting it other places.

It seems like the images of the gears have disappeared. Are they still available anywhere? EDIT: They're back!

If you can’t viscerally feel the difference between .1% and 1%, or a thousand and a million, you will probably need more of a statistics background to really understand things like “how much money is flowing into AI, and what is being accomplished, and what does it mean?”

I'm surprised at the suggestion that studying statistics strengthens gut sense of the significance of probabilities. I've updated somewhat towards that based on the above, but I would still expect something more akin to playing with and visualising data to be useful for this
