All of antanaclasis's Comments + Replies

Isn't the counterfactual trolley problem setup backwards? It should be counterfactual Omega giving you the better setup (not tying people to the tracks) if it predicts you'll take the locally "worse" option in the actual case, not the other way around, right?

Because with the current setup you just don't pull and Omega doesn't tie people to tracks.

2Tapatakt
Oops, you're absolutely right, I accidentally dropped a "not" when I was rewriting the text. Fixed now. Thank you!

As an example of differentiating different kinds of footnotes, waitbutwhy.com uses different appearances for “interesting extra info” notes vs “citation” notes.

Both kinds also appear as popups when interacted with (certainly an advantage of the digital format).

3gwern
(Specifically, asides are given superscript large blue circles with numbers inside; while mere citations/sources are given instead a small faded gray box with numbers. They are separately numbered. The footnote popups themselves are fairly standard click-to-popover.)

Somehow I missed that bit.

That makes the situation better, but there’s still an issue: the refund is not earning interest, but your liabilities are.

Take the situation with owing $25 million. Say there’s a one-year gap between the tax being assessed and your asset going to $0 (at which point you claim the refund). During that year, the $25 million loan you took out is accruing interest. Let’s say it does so at 4% per year; by the time you get your $25 million refund, you therefore have $26 million in loans.

So you still end up $1 million in debt due to “gains” that you were never able to realize.
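To spell out the arithmetic (a quick sketch, assuming simple annual compounding at the 4% rate above):

$$\$25\text{M} \times (1 + 0.04) = \$26\text{M}, \qquad \$26\text{M} - \$25\text{M refund} = \$1\text{M shortfall}.$$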

1gb
That’d be a problem indeed, but only because the contract you’re proposing is suboptimal. Given that the principal is fully guaranteed, it shouldn’t be terribly difficult for you to borrow at >4% yearly with a contingency clause that you don’t pay interest if the asset goes to ~0.

Scenario: you have equity worth (say) $100 million in expectation, but of no realized value at the moment.

You are forced to pay unrealized gains tax on that amount, and so are now $25 million in the hole. Even if you avoid this crashing you immediately (such as by getting a loan), if your equity goes to $0 you’re still out the $25 million you paid, with no assets to back it.

The fact that this could be counted as a prepayment for a hypothetical later unrealized gain doesn’t help you: you can’t actually get your money back.

1gb
But the OP explicitly said (as quoted in the parent) that the proposal allows for refunds if the basis is not (fully) realized, which would cover the situation you’re describing.

But if UDT starts with a broad prior, it will probably not learn, because it will have some weird stuff in its prior which causes it to obey random imperatives from imaginary Lizards.

I don’t think this necessarily follows? For there to be a systematic impact on UDT’s behavior there would need to be more Lizard-Worlds that reward X than Anti-Lizard-Worlds that penalize X, so this is only a concern if there is reason to believe that there are “more” worlds (in an abstract logical-probability sense) that favor a specific direction.

Clearly this could still potentially cause problems, but (at least to me) it doesn’t seem like the problem is as ubiquitous as the essay makes it out to be.
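As a toy illustration of why the symmetry matters (the world list, priors, and payoffs below are entirely made up): UDT’s net pull toward X is a prior-weighted sum over worlds, so paired Lizard/Anti-Lizard worlds cancel unless the prior measure is lopsided.

```python
# Toy model: each "world" pairs a prior weight with the payoff it grants
# for taking action X. UDT's net pull toward X is the weighted sum.
worlds = [
    ("Lizard-World rewarding X",       0.01, +1.0),
    ("Anti-Lizard-World penalizing X", 0.01, -1.0),  # symmetric counterpart
    ("ordinary world, X irrelevant",   0.98,  0.0),
]

net_pull = sum(prior * payoff for _, prior, payoff in worlds)
print(net_pull)  # 0.0 -- the weird worlds cancel; only an asymmetry in the
                 # prior measure produces a systematic distortion of behavior
```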

2abramdemski
You're right, I was overstating there. I don't think it's probable that everything cancels out, but a more realistic statement might be something like "if UDT starts with a broad prior which wasn't designed to address this concern, there will probably be many situations where its actions are more influenced by alternative possibilities (delusional, from our perspective) than by what it knows about the branch that it is in".

My benchmark for thinking about the experience machine: imagine a universe where only one person and the stuff they interact with exist (with any other “people” they interact with being non-sapient simulations) and said person lives a fulfilling life. I maintain that such a universe has notable positive value, and that a person in an experience machine is in a similarly valuable situation to the above person (both being sole-moral-patients in a universe not causally impacting any other moral patients).

This does not preclude the possibility of improving on ... (read more)

Also related: Yudkowsky on making Solvable Mysteries:

If you have not called upon your readers explicitly to halt and pay attention, they are already reading the next sentence. Even if you do explicitly ask them to pay attention, they are already reading the next sentence. If you have your character think, “Hm… there’s something funny about that story, I should stop and think about that?” guess what your reader does next? That’s right, your reader goes on to read the next sentence immediately, to see what the character thinks about it.

You can’t just trivially scale up the angular resolution by bolting more sensors together (or similar methods); engineering the lenses and sensors to meet ever-higher specs gets progressively more difficult.

And aside from that, the problem behaves nonlinearly with the amount of atmosphere between you and the plane: distortions in the air along the way compound, potentially harshly limiting how far away you can get any useful image. AI reconstruction from highly distorted images might work around this, but it’s far from trivial on the face of it.
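For a rough sense of the scaling, here’s the standard Rayleigh diffraction limit (the distance and detail size are made-up illustration numbers, and this ignores the atmospheric problem entirely):

```python
# Rayleigh diffraction limit: theta ~ 1.22 * wavelength / aperture_diameter.
# Resolving a feature of size s at distance d needs theta <= s / d,
# i.e. an aperture of at least D = 1.22 * wavelength * d / s.
wavelength = 550e-9  # green light, in metres
d = 10_000           # hypothetical distance to the plane, metres
s = 0.05             # hypothetical detail size to resolve, metres

D = 1.22 * wavelength * d / s
print(f"minimum aperture: {D:.2f} m")  # ~0.13 m, before any atmospheric
# distortion -- which compounds with distance and quickly dominates
```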

My guess is that the largest contributor is the cultural shift toward expecting much more involved parenting (example: the various cases where parents had CPS called on them for letting their kids do things that the parents themselves were allowed to do independently as kids).

Another big thing is that you can’t get tone-of-voice information via text. The way that someone says something may convey more to you than what they said, especially for some types of journalism.

I’d imagine that once we see the axis it will probably (~70%) have a reasonably clear meaning. Likely not as obvious as the left-right axis on Twitter but probably still interpretable.

I think a lot of the value that I’d get out of something like that being implemented would be getting an answer to “what is the biggest axis along which LW users vary” according to the algorithm. I am highly unsure about what the axis would even end up being.

1Shankar Sivarajan
Would that even be a meaningful question? Thinking of it as a kind of PCA, there will be some axis, with a lot of correlations, and how you interpret that is up to you.
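To make the PCA framing concrete (a minimal numpy sketch on a made-up user-by-post vote matrix; the real site data and algorithm would differ):

```python
import numpy as np

# Hypothetical vote matrix: rows = users, columns = posts,
# entries in {-1, 0, +1} for downvote / no vote / upvote.
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  0, -1,  1],
    [-1, -1,  1,  0],
    [ 0, -1,  1, -1],
])

centered = votes - votes.mean(axis=0)
# The first right-singular vector is the direction of greatest variance
# (PC1); projecting users onto it places them along "the biggest axis".
_, _, vt = np.linalg.svd(centered, full_matrices=False)
user_positions = centered @ vt[0]
print(user_positions)  # what the axis *means* is still up to interpretation
```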

To lay out some of the foundation of public choice theory:

We can model the members of an organization (such as the government) as being subject to the dynamics of natural selection. In particular, in a democracy elected officials are subject to selection whereby those who are better at getting votes can displace those who are worse at it, through elections.

This creates a selection dynamic where over time the elected officials will become better at vote-gathering, whether through conscious or unconscious adaptation by the officials to their circumstances, o... (read more)
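A toy sketch of the selection dynamic (purely illustrative numbers; “ability” here abstracts away everything except vote-gathering):

```python
import random

# Toy selection dynamic: each election, incumbents with lower
# vote-gathering ability are displaced by challengers drawn at random.
random.seed(0)
officials = [random.random() for _ in range(100)]  # vote-gathering ability

for election in range(50):
    challengers = [random.random() for _ in officials]
    officials = [max(o, c) for o, c in zip(officials, challengers)]

print(round(sum(officials) / len(officials), 3))  # drifts toward 1.0:
# over time the population is dominated by ever-better vote-gatherers,
# regardless of whether any individual consciously adapts
```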

1M. Y. Zuo
Thanks, you've listed some plausible downsides, but the upsides need to be enumerated too, followed by likely several stages of synthesis to arrive at a final, persuasive argument one way or the other. I'm not saying you have to do all this work, just that someone does in order to advance the argument. So far I've never seen such a thing anywhere online.

Just because the US government contains agents that care about market failures does not mean that it can be accurately modeled as itself being agentic and caring about market failures.

The more detailed argument would be public choice theory 101, about how the incentives that people in various parts of the government are faced with may or may not encourage market-failure-correcting behavior.

1M. Y. Zuo
I agree, just the fact that it contains such agents does not necessarily imply anything for or against. E.g. it's entirely possible for two or more far-flung branches of the USG to work towards opposite ends and end up entirely negating each other. Can you lay out this argument in more detail?

For chess in particular, the piece-trading nature of the game also makes piece handicaps huge in impact. Compare to shogi: in shogi, having multiple non-pawn pieces handicapped can still be a moderate handicap, whereas giving up multiple non-pawns in chess is basically a predestined loss unless there is a truly gargantuan skill difference.

I haven’t played many handicapped chess games, but my rough feel for it is that each successive “step” of handicap in chess is something like 3 times as impactful as the comparable shogi handicap. This makes chess handicaps harder to use as there’s much more risk of over- or under-shooting the appropriate handicap level and ending up with one side being highly likely to win.
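To make the over/under-shooting risk concrete (a sketch using the standard Elo expected-score formula; the Elo values per handicap step are hypothetical illustrations, not measured figures):

```python
# Standard Elo expected score: P(win) = 1 / (1 + 10 ** (-diff / 400)).
def win_prob(elo_diff: float) -> float:
    return 1 / (1 + 10 ** (-elo_diff / 400))

# Suppose the weaker player is 200 Elo behind, and each handicap step is
# worth ~100 Elo in shogi vs ~300 in chess (hypothetical numbers).
shogi_step, chess_step = 100, 300
for steps in (1, 2, 3):
    print(steps,
          f"shogi: {win_prob(steps * shogi_step - 200):.2f}",
          f"chess: {win_prob(steps * chess_step - 200):.2f}")
# Shogi steps walk smoothly past the fair point (0.36, 0.50, 0.64);
# chess steps leap over it (0.64, 0.91, 0.98), so it's easy to overshoot.
```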

Also note that socks with sandals being uncool is not a universal thing. For example, in Japan it is reasonably common to wear (often split-toed) socks with sandals, though it’s more associated with traditional garb than modern fashion.

A way of implementing the serving-vs-kitchen separation that avoids that problem (and actually the way of doing it I initially envisioned after reading the post) would be that within each workplace there is a separation, but different workplaces are split between the polarities of separation. That way any individual’s available options of workplace are, at worst, ~half of what they could be with mixed workplaces, regardless of their preference.

(Caveat that an individual’s options could end up being less than half the total if there is a workplace-gender co... (read more)

3Viliam
In theory this would be a great solution; in practice I would expect coordination problems, as most (almost all?) people who start companies would simply go with the majority model. It's analogous to the current situation, where people often say "if you believe that X are discriminated against in industry Y, why don't you make a Y company that employs only X?" That sounds like a reasonable proposal -- people cannot discriminate against X at a workplace where everyone is X, and if you are the only company providing great working conditions for X, you should be able to pick the greatest talent without having to pay them more. Sounds like win/win! And yet, calls to create such companies are not answered by examples of people who have actually done it. So this proposal sounds like something that people approve of verbally, but no one wants to run the experiment with their own company.

It kind of passed without much note in the post, but isn’t the passport non-renewal one of the biggest limiters here? $59,000 divided by 10 years is $5,900 per year, so unless you’re willing to forgo having a passport that’s the upper limit of how much you could benefit from non-payment (exclusive of the tax liability reduction strategies). That seems like a pretty low amount per year in exchange for having to research and plan this, then having your available income and saving methods limited (which could easily lower your income by more than $5,900 just by limiting the jobs available to you).

2David Gross
Last I heard, about 40% of U.S. citizens don't have passports to begin with, so I expect that at least for some readers, this isn't such a big deal. For the rest it is certainly a consideration to factor in. Note that it typically takes some time before it becomes a problem: you accumulate $59,000 (actually more, as this number is inflation-adjusted) in delinquent taxes, the I.R.S. notices you're over the limit and submits paperwork to the State Department, then somewhere down the line your passport expires and you're unable to renew it until you resolve the tax delinquency (and go through a State Department paperwork dance of your own).

One other way of putting the reverse order, though it sounds a bit stilted in English: “beagles have Fido”. I don’t think it’s used commonly at all but it came to mind as a form in the reverse order without looping.

1Bill Benzon
Sure, we can do all sorts of things with language if we put our minds to it. That's not the point. What's important is how people actually use language. In the corpus of texts used to train, say, GPT-4, how many times is the phrase "beagles have Fido" likely to have occurred?
Answer by antanaclasis

I would be interested in this, probably in role A (but depending on the pool of other players, possibly one of the other roles; I have no objection to any of them). I play chess casually with friends, and am probably somewhere around 1300 Elo (based on my win rate against one friend who plays online).

To add to this, if the ranked-choice voting is implemented with a “no confidence” option (as it should be, to prevent the vote-in vote-out cycle described above), then you could easily end up in the same situation the House is currently in, where no candidate manages to beat out “no confidence”.

SIA can be considered (IMO more naturally) as randomly sampling you from “observers in your epistemic situation”, so it’s not so much “increasing the prior” but rather “caring about the absolute number of observers in your epistemic situation” rather than “caring about the proportion of observers in your epistemic situation” as SSA does.

This has the same end result as “up-weighting the prior then using the proportion of observers in your epistemic situation”, but I find it to be much more intuitive than that, as the latter seems to me to be overly circuito... (read more)

I think the point being made in the post is that there’s a ground-truth-of-the-matter as to what comprises Art-Following Discourse.

To move into a different frame which I feel may capture the distinction more clearly, the True Laws of Discourse are not socially constructed, but our norms (though they attempt to approximate the True Laws) are definitely socially constructed.

From the SIA viewpoint the anthropic update process is essentially just a prior and an update. You start with a prior on each hypothesis (possible universe) and then update by weighting each by how many observers in your epistemic situation each universe has.

This perspective sees the equalization of “anthropic probability mass” between possible universes prior to apportionment as an unnecessary distortion of the process: after all, “why would you give a hypothesis an artificial boost in likelihood just because it posits fewer observers than other hypothese... (read more)

On the question of how to modify your prior over possible universe+index combinations based on observer counts, the way that I like to think of the SSA vs SIA methods is that with SSA you are first apportioning probability mass to each possible universe, then dividing that up among possible observers within each universe, while with SIA you are directly apportioning among possible observers, irrespective of which possible universes they are in.

The numbers come out the same as considering it in the way you write in the post, but this way feels more intuitive to me (as a natural way of doing things, rather than “and then we add an arbitrary weighing to make the numbers come out right”) and maybe to others.
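A minimal numeric sketch of the two apportionments (made-up universes and observer counts, equal priors, and every observer assumed to be in your epistemic situation):

```python
# Two hypothetical universes with equal priors but different observer counts.
priors    = {"U1": 0.5, "U2": 0.5}
observers = {"U1": 1,   "U2": 100}

# SSA: apportion probability to universes first, then split it among
# each universe's observers -- P(universe) itself is unchanged.
ssa = dict(priors)

# SIA: apportion directly among observers, wherever they live.
total = sum(priors[u] * observers[u] for u in priors)
sia = {u: priors[u] * observers[u] / total for u in priors}

print(ssa)  # {'U1': 0.5, 'U2': 0.5}
print(sia)  # {'U1': ~0.01, 'U2': ~0.99} -- observer-heavy universes dominate
```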

1TobyC
That's a nice way of looking at it. It's still not very clear to me why the SIA approach of apportioning among possible observers is something you should want to do. But it definitely feels useful to know that that's one way of interpreting what SIA is saying.

If you’re adding the salt after you turn on the burner then it doesn’t actually add to the heating+cooking time.

To steelman the anti-sex-for-rent case, it could be considered that after the tenant has entered into that arrangement, the tenant could feel pressure to keep having sex with the landlord (even if they would prefer not to and would not at that later point choose to enter the contract) due to the transfer cost of moving to a new home. (Though this also applies to monetary rent, the potential for threatening the boundaries of consent is generally seen as more harmful than threatening the boundaries of one’s budget)

This could also be used as a point of levera... (read more)

2Dumbledore's Army
Thanks for the comment. I think tenants are still better off with a legal contract than not. Analogously, a money-paying tenant with a legal contract has some protections against a landlord raising rents, and gets a notice period and the option to refuse and go elsewhere; a money-paying tenant who pays cash in hand to an illegal landlord probably has less leverage to negotiate. (Although there will be exceptions.) Likewise, a sex-paying tenant is better off with a legal contract. I realise that the law won’t protect everyone and that some people will have bad outcomes no matter what - I deliberately picked this example to make people think about uncomfortable trade offs - but I still think the general approach of trying to give people more choice rather than less is preferable.

In terms of similarity between telling the truth and lying, think about how much of a change you would have to make to the mindset of a person at each level to get them to level 1 (truth):

Level 2: they’re already thinking about world models; you just need to get them to cooperate with you in seeking the truth rather than trying to manipulate you.

Level 3: you need to get them the idea of words as having some sort of correspondence with the actual world, rather than just as floating tribal signifiers. After doing that, you still have to make sure that they are f... (read more)

2Adam Zerner
Ah I see. Thanks for explaining.

Re: “best vs better”: claiming that something is the best can be a weaker claim than claiming that it is better than something else. Specifically, if two things are of equal quality (and not surpassed) then both are the best, but neither is better than the other.

Apocryphally, I’ve heard that certain types of goods are regarded by regulatory agencies as being of uniform quality, such that there’s not considered to be an objective basis for claiming that your brand is better than another. However, you can freely claim that yours is the best, as there is similarly no objective basis on which to prove that your product is inferior to another (as would be needed to show that it is not the best).
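Put formally (a minimal sketch, writing $q(x)$ for product quality): “best” needs only a weak inequality, while “better” needs a strict one.

$$\text{best}(x) \iff \forall y:\ q(y) \le q(x), \qquad \text{better}(x, y) \iff q(x) > q(y)$$

So if $q(a) = q(b)$ and nothing exceeds them, both $a$ and $b$ are the best, yet neither is better than the other.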

One other mechanism that would lead to the persistence of e.g. antibiotic resistance would be when the mutation that confers the resistance is not costly (e.g. a mutation which changes the shape of a protein targeted by an antibiotic to a different shape that, while equally functional, is not disrupted by the antibiotic). Note that I don’t actually know whether this mechanism is common in practice.

Thanks for writing this nice article. Also thanks for the “Qualia the Purple” recommendation. I’ve read it now and it really is great.

In the spirit of paying it forward, I can recommend https://imagakblog.wordpress.com/2018/07/18/suspended-in-dreams-on-the-mitakihara-loopline-a-nietzschean-reading-of-madoka-magica-rebellion-story/ as a nice analysis of themes in PMMM.

It seems like this might be double-counting uncertainty? Normal EV-type decision calculations already (should, at least) account for uncertainty about how our actions affect the future.

Adding explicit time-discounting seems like it would over-adjust in that regard, with the extra adjustment (time) just being an imperfect proxy for the first (uncertainty), when we only really care about the uncertainty to begin with.
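In symbols (a sketch, with $P(o \mid a)$ already encoding all uncertainty about how action $a$ propagates to outcome $o$ at time $t(o)$):

$$\mathbb{E}[U \mid a] = \sum_{o} P(o \mid a)\, U(o) \quad \text{vs.} \quad \sum_{o} \gamma^{t(o)}\, P(o \mid a)\, U(o)$$

The extra factor $\gamma^{t(o)}$ penalizes later outcomes a second time for an uncertainty that $P(o \mid a)$ already captures.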

Indeed humans are significantly non-aligned. For an ASI to be non-catastrophic, it would likely have to be substantially more aligned than humans are. This is probably less-than-impossible because the AI can be built from the get-go to be aligned, rather than being a bunch of barely-coherent odds and ends thrown together by natural selection.

Of course, reaching that level of alignedness remains a very hard task, hence the whole AI alignment problem.

I had another thing planned for this week, but turned out I’d already written a version of it back in 2010

What is the post that this is referring to, and what prompted thinking of those particular ideas now?

I see it in a similar light to “would you rather have more or fewer cells in your body?”. If you made me choose I probably would rather have more, but only insofar as having fewer might be associated with certain bad things (e.g. losing a limb).

Correspondingly, I don’t care intrinsically about e.g. how much algae exists except insofar as that amount being too high or low might cause problems in things I actually care about (such as human lives).

Seeing the relative lack of pickup in terms of upvotes, I just want to thank you for putting this together. I’ve only read a couple of Dath Ilan posts, and this provided a nice coverage of the AI-in-Dath-Ilan concepts, many of the specifics of which I had not read previously.

My understanding of it is that there is conflict between different “types” of the mixed population based on e.g. skin lightness and which particular blend of ethnic groups makes up a person’s ancestry.

EDIT: my knowledge on this topic mostly concerns Mexico, but should still generally apply to Brazil.

That PDF seems to be part of a spoken presentation (it’s rather abbreviated for a standalone document). Does such a presentation exist? If so, I was not successful in finding it, and would appreciate it if you could point it out.

I similarly offer myself as an author, in either the dungeon master or player role. I could possibly get involved in the management or technical side of things, but would likely not be effective in heading a project (for similar reasons to Brangus), and do not have practical experience in machine learning.

I am best reached through direct message or comment reply here on LessWrong, and can provide other contact information if someone wants to work with me.

Answer by antanaclasis

The main post on how much evidence different tests give is this one: https://www.lesswrong.com/posts/cEohkb9mqbc3JwSLW/how-much-should-you-update-on-a-covid-test-result

Also related is part of this post from Zvi (specifically the section starting “Michael Mina”): https://www.lesswrong.com/posts/CoZitvxi2ru9ehypC/covid-9-9-passing-the-peak

Combining the information from the two, it seems like insofar as you care about infectivity rather than the person having dead virus RNA still in their body, the actual amount of evidence from rapid antigen tests wil... (read more)
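For reference, the first linked post frames the update in odds form (a minimal sketch; the prior and likelihood ratio below are made-up placeholders, not figures from either post):

```python
# Bayes update on a test result, in odds form:
# posterior_odds = prior_odds * likelihood_ratio, where
# LR = P(positive | infected) / P(positive | not infected).
def posterior_prob(prior: float, likelihood_ratio: float) -> float:
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

print(posterior_prob(prior=0.05, likelihood_ratio=10))  # ~0.34
# Note: a test's LR for *currently infectious* can differ a lot from its
# LR for "has viral RNA" -- the distinction the Mina section draws.
```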

2Yoav Ravid
You can link to specific parts of posts, and thanks to the devs, it should now also show that part in the hover preview.  Example: https://www.lesswrong.com/posts/CoZitvxi2ru9ehypC/covid-9-9-passing-the-peak#NPIs_Including_Mask_and_Testing_Mandates__  Or with a hyperlink. Use the table of contents to get the link.

This is a good piece of writing. It reminds me of another piece of fiction (somewhat happier in tone) which I cannot find again. The plot involves a woman trying to rescue her boyfriend from a nemesis in a similar AI-managed world. I think it involves her jumping out of a plane, and landing in the garden of someone who eschews AI-protection for his garden, rendering it vulnerable to destruction without his consent. Does anyone recall the name/location of this story?

5Markvy
https://www.lesswrong.com/posts/sMsvcdxbK2Xqx8EHr/just-another-day-in-utopia

Copyediting: “Miriam removed off her cornea too” should probably not have the “off”.

2lsusr
Fixed. Thanks.

The part about hiring proofreading brought a question to mind: where does the operating budget for the LessWrong website come from, both for things like that and for standard server costs?

9Ruby
Our most recent round of funding was from OpenPhilanthropy and the Survival & Flourishing Fund.

Do you have any recommendations of such stories?

3Dagon
Watchmen was pretty good on this front.  Worm (https://parahumans.wordpress.com/) is LONG, but great.

If you also consider the indirect deaths due to the collapse of civilization, I would say that 95% lies within the realm of reason. You don’t need anywhere close to 95% of the population to be fully affected by the scissor to bring about 95% destruction.

Sorry if I was ambiguous in my remark. The comparison that I’m musing about is between “fierce” vs “not fierce” nerds, with no particular consideration of those who are not nerds in the first place.

It’s interesting to read posts like this and “Fierce Nerds” while myself being much less ambitious/fierce/driven than the objects of said essays. I wonder what other psychological traits are associated with the difference between those who are more vs less ambitious/fierce/driven, other things being equal.

6lsusr
Anxiety. Lack of slack. Natural amphetamines. If the natural-amphetamines correlation is true then that gets us a whole basket of correlations, including low appetite, skipping meals, high energy, high NEAT (non-exercise activity thermogenesis), and difficulty sleeping.
3Pattern
Correlation is, arguably, at odds with other things being equal.

Nice poem! It’s cool to see philosophical and mathematical concepts expressed through elegant language, though it is somewhat less common, due to the divergence of interests and skills.
