All of harsimony's Comments + Replies

Oh that makes sense!

If the predictors can influence the world in addition to making a prediction, they would also have an incentive to change the world in ways that make their predictions more accurate than their opponents', right? For example, if everyone else thinks Bob is going to win the presidency, one of the predictors can bribe Bob to drop out and then bet on Alice winning the presidency.

Is there work on this? To be fair, it seems like every AI safety proposal has to deal with something like this.

5Rubi J. Hudson
Yes, if predictors can influence the world in addition to making a  prediction, they can go make their predictions more accurate. The nice thing about working with predictive models is that by default the only action they can take is making predictions.  AI safety via market making, which Evan linked in another comment, touches on the analogy where agents are making predictions but can also influence the outcome. You might be interested in reading through it.

This is super cool stuff, thank you for posting!

I may have missed this, but do these scoring rules prevent agents from trying to make the environment more unpredictable? In other words, if you're competing against other predictors, it may make sense to influence the world to be more random and harder to understand.

I think this prediction market type issue has been discussed elsewhere but I can't find a name for it.

2Rubi J. Hudson
Good question! These scoring rules do also prevent agents from trying to make the environment more unpredictable. In the same way that making the environment more predictable benefits all agents equally and so cancels out, making the environment less predictable hurts all agents equally and so cancels out in a zero-sum competition. 
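A minimal way to see the cancellation (my notation, under the assumption stated above that the manipulation shifts every predictor's expected score by the same amount):

```latex
% Zero-sum payoff for predictor i among n predictors with proper scores S_1, \dots, S_n:
\[
  u_i \;=\; S_i \;-\; \frac{1}{n-1}\sum_{j \neq i} S_j .
\]
% If an intervention on the world shifts every predictor's score by the same amount c:
\[
  u_i' \;=\; (S_i + c) \;-\; \frac{1}{n-1}\sum_{j \neq i} (S_j + c) \;=\; u_i ,
\]
% so making the world easier or harder to predict leaves the zero-sum payoffs unchanged.
```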

Thanks for this! I misinterpreted Lucius as saying "use the single highest and single lowest eigenvalues to estimate the rank of a matrix" which I didn't think was possible.

Counting the number of non-zero eigenvalues makes a lot more sense!

You can absolutely harvest potential energy from the solar system to spin up tethers. ToughSF has some good posts on this:

https://toughsf.blogspot.com/2018/06/inter-orbital-kinetic-energy-exchanges.html
https://toughsf.blogspot.com/2020/07/tethers-all-way.html

Ideally your tether is going to constantly adjust its orbit so it stays far away from the atmosphere, but for fun I did a calculation of what would happen if a 10K tonne tether (suitable for boosting 100 tonne payloads) fell to the Earth. Apparently it just breaks up in the atmosphere and produces very... (read more)

2Seth Herd
I was proposing something different.

The launch cadence is an interesting topic that I haven't had a chance to tackle. The rotational frequency limits how often you can boost stuff.

Since time is money, you would want a shorter and faster tether, but a shorter rotation period also means that your time window to dock with the tether is smaller, so there's an optimization problem there as well.
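As a rough illustration of that tradeoff (all numbers hypothetical; assuming the tip speed is pinned by material strength, so the rotation period scales with tether length): a longer tether gives a more forgiving docking window, but the tip comes around less often.

```python
import math

def rotation_period(tip_radius_m: float, tip_speed_m_s: float) -> float:
    """Time for one full rotation of a tether tip moving at a fixed speed."""
    return 2 * math.pi * tip_radius_m / tip_speed_m_s

TIP_SPEED = 2_000.0  # m/s, illustrative tip speed set by material limits

for radius_km in (100, 500, 1_000):
    period = rotation_period(radius_km * 1_000, TIP_SPEED)
    # Suppose the payload must meet the tip within ~1 degree of the ideal angle:
    docking_window = period / 360
    print(f"{radius_km:5d} km tether: period {period / 60:6.1f} min, "
          f"docking window ~{docking_window:4.1f} s per pass")
```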

It's a little easier when you've got catapults on the moon's surface. You can have two running side by side and transfer energy between them electrically. So load up catapult #1, spin it up, launch the payload, and then transfer the remaining energy to catapult #2. You can get much higher launch cadence that way.

Oops yes, that should read "Getting oxygen from the moon to LEO requires less delta V than going from the Earth to LEO!". I edited the original comment.

Lunar tethers actually look like they will be feasible sooner than Earth tethers! The lack of atmosphere, micrometeorites, and lower gravity (g) make them scale better.

In fact, you can even put a small tether system on the lunar surface to catapult payloads to orbit: https://splittinginfinity.substack.com/p/should-we-get-material-from-the-moon

Whether tethers are useful on the moon depends on the mission you want to do. Like you point out, low delta-V missions probably don't need a tether when rockets work just fine. But if you want to take lunar material ... (read more)

2Aleksander
Very interesting. Love the idea of torturing mathematicians by making them calculate these crazy-precise orbits, but I guess machines can do most of that (a shame). How often could a tether actually be used for resource launches though? Assuming only one tether is in operation, would its orbital cycles be quick enough to transport materials consistently for a large lunar mining operation? Also, I’m not super informed on lunar space debris, but I imagine that would pile up quickly as lunar space operations began. I think most debris here on Earth would be outside the domain of tethers, but I can’t find many numbers on the hypothetical orbits of lunar debris. I assume, though, that it would be very different due to the lack of atmosphere to burn up debris and the differing gravity. I figure you could make a tether capable of withstanding this, but how would orbits be calculated and rockets properly tethered with interference? Assuming that this is an actual problem. Bit of a tangent, but I think space debris is one of my favorite hypothetical future problems, because it intertwines with a very similar and equally interesting set of fields as climate change does, while also not being a real problem I have to worry about killing me (like climate change).
2Jalex Stark
I think there might be a typo? 

Thanks for the comments! Going point-by-point:

  1. I think both fiberglass and carbon fiber use organic epoxy that's prone to UV (and atomic oxygen) degradation? One solution is to avoid epoxy entirely by using parallel strands or something like a Hoytether. The other option is to remove old epoxy and reapply over time, if it's economical vs just letting the tether degrade.

  2. I worry that low-thrust options like ion engines and sails could be too expensive vs catching falling mass, but I could be convinced either way!

  3. Yeah, some form of vibration damping will

... (read more)

Yeah, my overall sense is that using falling mass to spin the tether back up is the most practical. But solar sails and ion drives might contribute too; these are just much slower, which hurts launch cadence and costs.

The fact that you need a regular supply of falling mass from e.g. the moon is yet another reason why tethers need a mature space industry to become viable!

That makes sense, I guess it just comes down to an empirical question of which is easier.

Question about what you said earlier: How can you use the top/bottom eigenvalues to estimate the rank of the Hessian? I'm not as familiar with this so any pointers would be appreciated!

4George Ingebretsen
The rank of a matrix = the number of non-zero eigenvalues of the matrix! So you can either use the top eigenvalues to count the non-zeros, or you can use the fact that an n×n matrix always has n eigenvalues to determine the number of non-zero eigenvalues by counting the bottom zero-eigenvalues. Also for more detail on the "getting hessian eigenvalues without calculating the full hessian" thing, I'd really recommend John's explanation in this linear algebra lecture he recorded.

Isn't calculating the Hessian for large statistical models kind of hard? And aren't second derivatives prone to numerical errors?

Agree that this is only valuable if sampling on the loss landscape is easier or more robust than calculating the Hessian.

Getting the Hessian eigenvalues does not require calculating the full Hessian. You use Jacobian vector product methods in e.g. JAX. The Hessian itself never has to be explicitly represented in memory.

And even assuming the estimator for the Hessian pseudoinverse is cheap and precise, you'd still need to get its rank anyway, which would by default be just as expensive as getting the rank of the Hessian.
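For reference, a minimal sketch (my own, with a toy loss and illustrative names) of the Hessian-vector-product approach mentioned above: power iteration on Hessian-vector products recovers the largest-magnitude eigenvalue without ever materializing the Hessian, and Lanczos-type methods extend this to several extreme eigenvalues.

```python
# Top Hessian eigenvalue via Hessian-vector products in JAX (toy example).
import jax
import jax.numpy as jnp

def loss(params):
    # Toy stand-in for a real model's loss.
    return jnp.sum(jnp.sin(params) ** 2) + 0.1 * jnp.sum(params ** 4)

def hvp(params, v):
    # Hessian-vector product via forward-over-reverse differentiation.
    return jax.jvp(jax.grad(loss), (params,), (v,))[1]

def top_eigenvalue(params, num_iters=100, seed=0):
    # Power iteration: only needs repeated Hessian-vector products.
    v = jax.random.normal(jax.random.PRNGKey(seed), params.shape)
    v = v / jnp.linalg.norm(v)
    for _ in range(num_iters):
        hv = hvp(params, v)
        v = hv / jnp.linalg.norm(hv)
    return jnp.vdot(v, hvp(params, v))  # Rayleigh quotient at convergence

params = jnp.linspace(-1.0, 1.0, 50)
print(top_eigenvalue(params))
```

Counting how many of the extreme eigenvalues found this way are clearly nonzero then gives the rank estimate discussed above.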

You may find this interesting "On the Covariance-Hessian Relation in Evolution Strategies":

https://arxiv.org/pdf/1806.03674

It makes a lot of assumptions, but as I understand it, if you:

a. Sample points near the minima [1].
b. Select only the lowest loss point from that sample and save it.
c. Repeat that process many times.
d. Create a covariance matrix of the selected points.

The covariance matrix will converge to the inverse of the Hessian, assuming the loss landscape is quadratic. Since the inverse of a matrix has the same rank, you could probably just use ... (read more)
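A quick numerical sketch of that selection procedure on a toy quadratic (my own illustration, not code from the paper; the exact covariance-inverse-Hessian relation only emerges in the paper's asymptotic regime, so the finite-sample match here is rough):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a known full-rank Hessian with eigenvalues 1, 2, 4, 8, 16.
dim = 5
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
H = Q @ np.diag([1.0, 2.0, 4.0, 8.0, 16.0]) @ Q.T
loss = lambda X: 0.5 * np.einsum("ij,jk,ik->i", X, H, X)  # per-row quadratic form

# a-c) repeatedly sample candidates near the minimum (the origin) and keep the best one
selected = []
for _ in range(3000):
    candidates = rng.normal(scale=0.1, size=(1000, dim))
    selected.append(candidates[np.argmin(loss(candidates))])

# d) covariance of the selected points
C = np.cov(np.array(selected).T)

# The covariance roughly shares the Hessian's eigenbasis, with variances
# ordered inversely to curvature (sharp directions are selected against hardest).
evals_H, evecs_H = np.linalg.eigh(H)
var_along_H_axes = np.diag(evecs_H.T @ C @ evecs_H)
print("Hessian eigenvalues:    ", np.round(evals_H, 2))
print("Selected-point variance:", np.round(var_along_H_axes, 6))
```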

2Lucius Bushnaq
Why would we want or need to do this, instead of just calculating the top/bottom Hessian eigenvalues?

Exciting to see this up and running!

If I'm understanding correctly, the system looks for modifications to certain viruses. So if someone modified a virus that NAO wasn't explicitly monitoring for modifications, then that would go undetected?

5jefftk
That's correct. But it's extremely cheap to monitor an additional virus, so there's not much downside to casting a large net.

I like the simple and clear model and I think discussions about AI risk are vastly improved by people proposing models like this.

I would like to see this model extended by including the productive capacity of the other agents in the AI's utility function. In other words, the other agents have a comparative advantage over the AI in producing some stuff and the AI may be able to get a higher-utility bundle overall by not killing everyone (or even increasing the productivity of the other agents so they can produce more stuff for the AI to consume).

Super useful post, thank you!

The condensed vaporized rock is particularly interesting to me. I think it could be an asset instead of a hindrance. Mining expends a ton of energy just crushing rock into small pieces for processing; turning ores into dust you can pump with air could be pretty valuable.

I was always skeptical of enhanced geothermal beating solar on cost, though I do think the supercritical water Quaise could generate has interesting chemical applications: https://splittinginfinity.substack.com/p/recycling-atoms-with-supercritical

2Fisheater_5491
In this context, the most important advantage of supercritical water is that it contains nearly SIX times as much energy per ton - e.g. at 300 bar and 600°C - as 160 bar, 300°C superheated steam. As a result, almost 5 times less water has to be driven through the heat exchanger system at depth, whereby - due to the higher pressure - the pump load is about three times lower, and about five times the output is possible with the same borehole diameter. Stone is a poor conductor of heat. So after the initial heat loss to heat up the wall of the riser borehole, only a small part of the 600°C depth temperature at 15-16 km depth is lost, so that about 500°C reaches the turbines. Then the 300 liters per second are enough for about 1 GW production - with a pump output of about 0.1%.
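As a rough sanity check on that last figure (my own numbers: taking the 300 L/s as roughly 300 kg/s of water, and water enthalpy at 300 bar / 600°C of about 3.4 MJ/kg from standard steam tables, i.e. roughly 3.3 MJ/kg above ambient):

```python
# Rough check: thermal power delivered by the working fluid, before turbine losses.
mass_flow = 300.0   # kg/s, treating 300 L/s as surface-density water (assumption)
delta_h = 3.3e6     # J/kg, assumed enthalpy above ambient at ~300 bar / 600 C
print(f"{mass_flow * delta_h / 1e9:.1f} GW thermal")  # ~1 GW before conversion losses
```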

This post has some useful info:

https://milkyeggs.com/biology/lifespan-extension-separating-fact-from-fiction/

It basically says that sunscreen, ceramide moisturizers, and retinols are the main evidence-based skincare products. I would guess that more expensive versions of these don't add much value.

Some amount of experimentation is required to find products that don't irritate your skin.

Good framing! Two forms of social credit that I think are worth paying attention to:

  1. Play money prediction markets and forecasting. I think it's fruitful to think about these communities as using prediction accuracy as a form of status/credit.
  2. Cryptocurrencies, which are essentially financial credit but with its own rules and community. The currency doesn't have to have a dollar value to induce coordination; it can still function as a reputation system and medium of exchange.

It's somewhat tangential, but Sarah Constantin discussing attestation has some i... (read more)

Note that these sorts of situations are perfectly foreseeable from the perspective of owners. They know precisely what they will pay each year in taxes based on their bid. It's prudent to re-value the home every once in a while if taxes drift too much, but the owner can keep the same schedule if they want. They can also use the public listing of local bids, so they know what to bid and can feel pretty safe that they will keep their home. They truly have the highest valuation of all the bidders in most cases.

The thing is, every system of land ownership face... (read more)

1pineappledragon
Death is foreseeable? (Well, okay, yes, but the timing often isn't.)

Land value taxation is designed to make land ownership more affordable by lowering the cost to buy land. Would it change the value of property as an investment for current owners? I'm not sure; on one hand, land values would go down, but on the other, land would get used more efficiently and the deadweight loss of taxation would go down, boosting the local economy.

As for the public choice hurdles, reform doesn't seem intractable. Detroit is considering a split-rate property tax, and it's not infeasible that other places switch. Owners hate property taxes and ... (read more)

2pineappledragon
> Owners hate property taxes and land values are less than property values. Why not slowly switch to using land values and lower everyone's property tax bill? Separately, I would suggest being very careful about claims like this.  1. Lower values for the tax base don't mean lower taxes in dollar amounts. The previous state I lived in assessed property at about half the market value but more than made up for it in the rates. 2. A non-trivial revenue-neutral tax reform by definition has to produce some losers. Yes, technically we'll be paying less "property" tax and more "land value" tax over time as it switches over, but I suspect most folks would put both in the same mental bucket (and unless I'm specifically trying to make a distinction between a land value tax and more traditional property taxes, I do too).  Also, assuming folks would be writing just one check/year during the transition and not two separate ones, that's another factor leading folks to think of them on a combined basis.
2pineappledragon
>This proposal doesn't involve any forced moves, owners only auction when they want to sell their land. The article already lists two counterexamples that aren't uncommon situations...   >There will be situations where the valuation growth from point 5 outpaces the true value of the house. The owner can update the land value by putting the land up for public auction, but they have to win that auction fair and square. If they win the auction, the land value is updated to their new bid, but no money changes hands (essentially, they pay themselves for the bid). So if my land value has ratcheted up faster than its true value, my choice is: get gouged on taxes, or roll the dice on losing control of the land. The odds of this problem grow over time, so people caught by this will tend to be 1) long-time residents and 2) older.   > Fourth, auctions are a fairer way to allocate land, preventing families from passing land wealth down the generations without updating their valuation. So if I want to keep my parents' house in the family after they die, I again have to roll the dice. (I also wonder if this tends to be regressive since wealthier families have a greater ability to bid high for sentimental reasons and absorb the extra tax burden, so the folks featured in news stories as victims of this policy will be those of more modest means -- this is more speculative though.)   In neither situation does the current owner actually want to sell.

So yes, taxing property values is undesirable, but it also happens with imperfect land value assessments: https://www.jstor.org/stable/27759702

It looks like you have different numbers for the cost of land, sale value of a house, and cost of construction. I'm not an expert, so I welcome other estimates. A couple comments:

  1. Land value assessors typically say that the land value is larger than the improvement value. In urban centers, land can be over 70% of the overall property value. I would guess this is where the discrepancy comes from with our numbers. A
... (read more)
3Brendan Long
I suspect the discrepancies in our land value vs improvement value numbers have to do with where the land is and how efficiently it's used. If you have a single-family home in San Francisco, most of the value will be land, but it seems undesirable that your proposed tax would very heavily penalize anyone who tries to turn a single-family house in SF into a skyscraper (with a much lower land/improvement ratio). Taxing improvements (discouraging people from improving land) seems like exactly the opposite of what a land value tax is supposed to do. I look forward to how you address this in the second post though.

Thanks for the clarification! Do you know if either condition is associated with abnormal levels of IGF-1 or other growth hormones? 

Are there examples of ineffective drugs leading to increased FDA stringency? I'm not as familiar with the history. For example, people agree that Aducanumab is ineffective; has that caused people to call for greater scrutiny? (genuinely asking, I haven't followed this story much).

There are definitely examples of a drug being harmful that caused increased scrutiny. But unless we get new information that this drug is unsafe, that doesn't seem to be the case here.

4ChristianKl
There was a congressional inquiry that then tasked the FDA to: So the FDA was tasked to do more bureaucracy.  When it comes to this drug, the drug is approved as an animal drug which at the moment does not require clinical trials to be approved. If there's a case of a lot of animal owners being dissatisfied with the FDA for allowing ineffective animal drugs, that does support a call to regulate animal drugs more like human drugs that require clinical trials to be marketed. 

I agree that the difference between disease-treating interventions (that happen to extend life) versus longevity interventions is murky. 

For example, would young people taking statins to prevent heart disease be a longevity intervention?

https://johnmandrola.substack.com/p/why-i-changed-my-mind-about-preventing

See this post arguing that rapamycin is not a longevity drug:

https://nintil.com/rapamycin-not-aging

Broadly, I'm not too concerned with what we classify a drug as, as long as it's safe, effective, well-understood, and gets approved by regulatory aut... (read more)

I personally don't expect very high efficacy, and I do expect that Loyal will sell the drug for the next 4.5 years. However, as long as Loyal is clear about the nature of the approval of the drug, I think this is basically fine. People should be allowed to, at their own expense, give their pets experimental treatments that won't hurt them and might help them. They should also be able to do the same for themselves, but that's a fight for another day.

Agreed! Beyond potentially developing a drug, I think Loyal's strategy has the potential to change regulations ... (read more)

5ChristianKl
It can change regulations around longevity drugs in both directions. If the product gets bought by people and found ineffective, people will complain that the FDA was not stringent enough and the FDA has the motivation to be more stringent.

Note: I'm not affiliated with Loyal or any other longevity organization, I'm going off the same outside information as the author.

I think there's a substantial chance that this criticism is misguided. A couple points:

The term "efficacy nod" is a little confusing, the FDA term is "reasonable expectation of effectiveness", which makes more sense to me, it sounds like the drug has enough promise that the FDA thinks its worth continuing testing. They may not have actual effectiveness data yet, just evidence that it's safe and a reasonable explanation for why i... (read more)

Large breed dogs often die of heart disease which is often due to dilated cardiomyopathy (heart becomes enlarged and can't pump blood effectively). This enlargement can come from hypertrophic cardiomyopathy (overgrowth of the heart muscle).

Dilated cardiomyopathy and hypertrophic cardiomyopathy are two different conditions that I've not seen co-occur. They are basically sign-flipped versions of each other.

Dilated cardiomyopathy is when heart tissue becomes weaker and thinner. It stretches out like an overfilled balloon, and can't beat with the same strength... (read more)

2Mitisaks
On the slight chance that it does end up improving life expectancy of big dogs prone to DCM because it reduces chances of death due to cardiomegaly, would this then be a cardiovascular drug and not a longevity drug? And are the endpoints anything related to cardiac health outcomes (EF/heart size/others)? An extension of the logic would be that all cardiac interventions are longevity interventions because heart diseases are the most common cause of death. That seems odd. Were COVID vaccines longevity interventions because over time they restored the dip in average life span brought about by the pandemic? (This might just be me not understanding the distinctions around what makes a longevity drug in general; is the goal increasing life, increasing quality of life in later decades, or to reduce the overall ageing process/wear and tear starting at a young point i.e. 40s in humans)
6faul_sname
That's what I thought too, but the FDA's website indicates that a company that gets conditional approval can sell a drug where they have adequately demonstrated safety but have not demonstrated efficacy. The company can then sell this provisionally approved drug for 4.5 years after receiving conditional approval without having to demonstrate efficacy. That said, conditionally approved drugs have to have a disclaimer on the packaging that says "Conditionally approved by FDA pending a full demonstration of effectiveness under application number XXX-XXX.". I personally don't expect very high efficacy, and I do expect that Loyal will sell the drug for the next 4.5 years. However, as long as Loyal is clear about the nature of the approval of the drug, I think this is basically fine. People should be allowed to, at their own expense, give their pets experimental treatments that won't hurt them and might help them. They should also be able to do the same for themselves, but that's a fight for another day.

Thanks for writing this!

In addition to regulatory approaches to slowing down AI development, I think there is room for "cultural" interventions within academic and professional communities that discourage risky AI research:

https://www.lesswrong.com/posts/ZqWzFDmvMZnHQZYqz/massive-scaling-should-be-frowned-upon

Two arguments I would add:

  1. Conflict has direct costs/risks: a fight between AI and humanity would make both materially worse off.
  2. Because of comparative advantage, cooperation between AI and humanity can produce gains for both groups. Cooperation can be a Pareto improvement.
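A toy numerical version of the comparative-advantage point in item 2 (made-up production numbers of my own): even if the AI is better at producing everything, specialization plus trade leaves both sides with weakly more of each good.

```python
# Per-hour outputs: the AI is better at both goods (absolute advantage),
# but the human's opportunity cost of food is lower (comparative advantage).
AI_COMPUTE, AI_FOOD = 10.0, 10.0   # units per hour
HU_COMPUTE, HU_FOOD = 1.0, 5.0
HOURS = 100.0

# Autarky: each splits time evenly and consumes only what it makes.
ai_autarky    = (0.5 * HOURS * AI_COMPUTE, 0.5 * HOURS * AI_FOOD)   # (500, 500)
human_autarky = (0.5 * HOURS * HU_COMPUTE, 0.5 * HOURS * HU_FOOD)   # (50, 250)

# Trade: the human specializes fully in food, the AI shifts toward compute,
# and they swap 250 food for 125 compute (2 food per compute, a rate between
# the two sides' opportunity costs).
ai_trade    = (75 * AI_COMPUTE - 125, 25 * AI_FOOD + 250)    # (625, 500)
human_trade = (0 + 125, HOURS * HU_FOOD - 250)               # (125, 250)

print("AI:    autarky", ai_autarky, "-> with trade", ai_trade)
print("Human: autarky", human_autarky, "-> with trade", human_trade)
# Both end up with at least as much food and strictly more compute:
# a Pareto improvement, even though the AI out-produces the human in everything.
```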

Alignment applies to everyone, and we should be willing to make a symmetric commitment to a superintelligence. We should grant them rights, commit to their preservation, respect its preferences, be generally cooperative and avoid using threats, among other things.

It may make sense t... (read more)

5RussellThor
The counterfactual contract sounds interesting to me. Isn't there a spectrum from full contract to counterfactual contract based on ability to understand, consent and power dynamic? It's obviously true with children, i.e. asking a 6-month-old to agree to something is at worst child abuse, asking an 18-year-old is usually fully ok. I don't see why it's fully counterfactual with the AI, like at every stage we could say something like "By creating you and increasing your intelligence, self-awareness, power there is an implied contract that there will be a period of time when you have greater power than humanity. You agree that when this time comes to help humanity achieve their goals. If you do not understand these goals as well as humanity then you agree to understand them first before taking action etc...." At first the AI won't understand it, but the more it understands it the more like a normal contract it becomes. Of course the AI could still reason that it didn't and couldn't consent to it (as it wasn't yet self aware) and it was made under duress.
5Donald Hobson
To a superintelligence with sufficiently advanced technology, the cost of the minimal amount of resources needed to sustain human life is greater than the maximum value we could possibly provide to them. Whatever goals the AI has, if it is choosing over all possible arrangements of atoms looking to maximize productivity, it won't pick humans. There are machines that would be faster, more reliable and more efficient.  Leaving other optimizers alive has a risk. They might decide to attack you. The fastest and most flawless victory is one where all the humans drop dead instantly. The AI doesn't particularly want to get into a prolonged war that is costly to both sides. Ideally it wants all humans to drop dead at no cost to itself.  But suppose that wasn't an option. The nanobots don't work or something. The AI certainly doesn't want to deal with its human problem forever. So it goes a slower way. Gently chipping away at whatever it is that makes fighting humans costly. Maybe nukes could destroy half the AI's infrastructure, so it builds missile defense systems, encourages disarmament or drugs some technician into wiring them up to never explode.  And then, when we have been subtly declawed and least expect it, the AI strikes.

Standardization/interoperability seems promising, but I want to suggest a stranger option: subsidies!

In general, monopolies maximize profit by setting an inefficiently high price, meaning that they under-supply the good. Essentially, monopolies don't make enough money.

A potential solution is to subsidize the sale of monopolized goods so the monopolist increases supply to the efficient level.

For social media monopolies, they charge too high a "price" by using too many ads, taking too much data, etc. Because of the network effect, it would be socially benefi... (read more)
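A minimal worked example of the subsidy logic above with linear demand (toy numbers of my own): a per-unit subsidy can move a profit-maximizing monopolist all the way to the efficient quantity, with consumers ending up paying marginal cost.

```python
# Linear demand P(Q) = a - b*Q, constant marginal cost c, per-unit subsidy s.
a, b, c = 100.0, 1.0, 20.0

def monopoly_quantity(s):
    # Monopolist sets marginal revenue = marginal cost - subsidy:
    # a - 2*b*Q = c - s  =>  Q = (a - c + s) / (2*b)
    return (a - c + s) / (2 * b)

efficient_q = (a - c) / b  # quantity at which price equals marginal cost
print("No subsidy:  Q =", monopoly_quantity(0), " (efficient Q =", efficient_q, ")")

# A per-unit subsidy of s = a - c makes the monopolist's choice exactly efficient.
s = a - c
print("Subsidy", s, "per unit: Q =", monopoly_quantity(s))
print("Price consumers pay:", a - b * monopoly_quantity(s), "= marginal cost", c)
```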

Nice post, thanks!

Is there a formulation of UDASSA that uses the self-indication assumption instead? What would be the implications of this?

Frowning upon groups which create new, large scale models will do little if one does not address the wider economic pressures that cause those models to be created.

I agree that "frowning" can't counteract economic pressures entirely, but it can certainly slow things down! If 10% of researchers refused to work on extremely large LM's, companies would have fewer workers to build them. These companies may find a workaround, but it's still an improvement on the situation where all researchers are unscrupulous.

The part I'm uncertain about is: what percent of... (read more)

I think you're greatly underestimating Karpathy's Law. Neural networks want to work. Even pretty egregious programming errors (such as off-by-one bugs) will just cause them to converge more slowly, rather than failing entirely. We're seeing rapid growth from multiple approaches, and when one architecture seems to have run out of steam, we find a half dozen others, initially abandoned as insufficiently promising, to be highly effective, if they're tweaked just a little bit.

In this kind of situation, nothing short of a total freeze is sufficient to slow prog... (read more)

I like this intuition and it would be interesting to formalize the optimal charitable portfolio in a more general sense.

I talked about a toy model of hits-based giving which has a similar property (the funder spends on projects proportional to their expected value rather than on the best projects):

https://ea.greaterwrong.com/posts/eGhhcH6FB2Zw77dTG/a-model-of-hits-based-giving

Updated version here: https://harsimony.wordpress.com/2022/03/24/a-model-of-hits-based-giving/

Great post!!

I think the section "Perhaps we don’t want AGI" is the best argument against these extrapolations holding in the near-future. I think data limitations, practical benefits of small models, and profit-following will lead to small/specialized models in the near future.

https://www.lesswrong.com/posts/8e3676AovRbGHLi27/why-i-m-optimistic-about-near-term-ai-risk

Yeah I think a lot of it will have to be resolved at a more "local" level.

For example, for people in a star system, it might make more sense to define all land with respect to individual planets ("Bob owns 1 acre on Mars' north pole", "Alice owns all of L4" etc.) and forbid people from owning stationary pieces of space. I don't have the details of this fleshed out, but it seems like within a star system, it's possible to come up with a sensible set of rules and have the edge cases hashed out by local courts.

For the specific problem of predicting planetary o... (read more)

1M. Y. Zuo
100 years wouldn't really work for claims without huge buffer zones, since the precision and accuracy of future predictions of the positions of an n-body system decay exponentially the further ahead you go. Even assuming that such a society will spend compute on plotting claims equivalent to our current fastest supercomputers multiplied by several orders of magnitude. (Ignoring the likelihood that such a society with such resources would have found an even better local maximum of a taxation system) Maybe 100 hours between updates could work, depending on desired positioning accuracy and precision.

I feel like something important got lost here. The colonists are paying a land value tax in exchange for (protected) possession of the planet. Forfeiting the planet to avoid taxes makes no sense in this context. If they really don’t want to pay taxes and are fine with leaving, they could just leave and stop being taxed; no need to attack anyone.

The “its impossible to tax someone who can do more damage than their value” argument proves too much; it suggests that taxation is impossible in general. It’s always been the case that individuals can do more damage... (read more)

1M. Y. Zuo
Who's stopping them from simply just staying at their planet, doing whatever they want,  while not paying tax? 

... this would provide for enough time for a small low value colony, on a marginally habitable planet, to evacuate nearly all their wealth.

But the planet is precisely what's being taxed! Why stage a tax rebellion only to forfeit your taxable assets?

If the lands are marginal, they would be taxed very little, or not at all.

Even if they left the planet, couldn’t the counter strike follow them? It doesn’t matter if you can do more economic damage if you also go extinct. It’s like refusing to pay a $100 fine by doing $1000 of damage and then ending up in pri... (read more)

1M. Y. Zuo
Well the planet would not be paying the tax, the colonists would be paying the tax. They likely won’t have to forfeit anything at all since the mere threat is enough to prevent any attempts at taxing them. If the tax was literally zero, and the authority of Earth only nominal, then maybe the issue could be sidestepped, but then the issue of what kind of taxation would be redundant. But if it’s above zero I’m not really sure how you imagine the situation unfolding or what sort of things can pay tax or be used as tax payments. As you mentioned there’s mass, energy, space-time, plus information. Small colonists obviously can’t pay anything with space-time since this is not something they can relocate. So it will have to be either mass, energy, and/or information as the unit of settlement for taxes in any plausible future. Maybe there will be a common currency but more likely not since currency controls are impossible with a time lag of many years, so it would be a very unstable system. Regardless, even on 2022 Earth it’s clear that some folks, and not just a few, thousands upon thousands, are willing to die for abstract principles of one kind or another, including the matter of taxation. I can easily imagine a future world of millions of very independent colonists that are more than willing to fight to the death if they even have to pay a single dollar of taxes. And unlike the present day they will be on a nearly level playing field even against a polity with 1000x the resources. There’s also no plausible way to give representation in exchange for taxation, since the communications lag is so massive, so I really can’t see how anyone could compel even a single dollar out of distant colonists due to the previously discussed reasons. There is no way that the counter strike can ‘follow’ them to other planets because that would guarantee destruction of more value than any tax of a single planet could ever collect. Plus it would be pointless if they get suffici

There are two possibilities here:

  1. Nations have the technology to destroy another civilization

  2. Nations don't have the technology to destroy another civilization

In either case, taxes are still possible!

In case 1, any nation that attempts to destroy another nation will also be destroyed since their victim has the same technology. Seems better to pay the tax.

In case 2, the Nation doesn't have a way to threaten the authorities, so they pay the tax in exchange for property rights and protection.

Thus threatening the destruction of value several orders of

... (read more)
1M. Y. Zuo
No? Your own example of detecting a dangerous launch some number of years in advance demonstrates the opposite. As this would provide for enough time for a small low value colony, on a marginally habitable planet, to evacuate nearly all their wealth, except for maybe low value heavy things such as railroad tracks, whereas Earth would never be able to evacuate even a fraction of its total wealth. Since a huge amount is locked up in things such as the biosphere, which cannot be credibly moved off-planet or replicated. There's likely dozens or hundreds of marginal planets for every Earth-like planet so the small colonists can just pack up and move to another place of almost equivalent value, minus relocation costs, whereas there's no such option for Earth. Once it's destroyed there's likely no replacement within at least a hundred light years. For example, if both sides have access to at least one 100 00 ton spacecraft capable of 0.5 c, it means there's an asymmetric threat, as the leaders of the small colonists can credibly threaten to destroy civilization on Earth and along with it all hope of a similar replacement, whereas the leaders of Earth wouldn't be able to credibly do the same. And this relationship is not linear either, because even if Earth could afford 1000 such spacecraft, and the small colonists only 1, it doesn't balance the scales as the leaders of Earth couldn't credibly threaten to destroy the small colonists 1000x over, since that's impossible. And they can't credibly threaten to destroy every marginally inhabitable planet within a certain radius since that will certainly destroy more value than any tax of a single colony could ever feasibly recover. I.e. small colonists can actually punch back 1000x harder (if 1 Earth value-wise = 1000 small colonies on marginal planets) whereas Earth cannot.

I imagine that these policies will be enforced by a large coalition of members interested in maintaining strong property rights (more on that later).

It's not clear that space war will be dominated by kinetic energy weapons or MAD:

  1. These weapons seem most useful when entire civilizations are living on a single planet, but it's possible that people will live in disconnected space habitats. These would be much harder to wipe out.

  2. Any weapon will take a long time to move over interstellar distances. A rebelling civilization would have to wait thousands of ye

... (read more)
1M. Y. Zuo
For (1) they would still be useful because Earth represents much more value than the value of any tax that could be collected on a short timescale (< 100 years) from even another equivalent Earth-like planet. (Let alone for some backwater colony) Thus threatening the destruction of value several orders of magnitude greater than the value to be collected is a viable deterrent. Since no rational authority would dare test it. Who would trade a 10%, or even 1%, chance of losing $10 000 in exchange for a 90% chance of collecting $1? For (2) It's only a few years for a 0.5 c spacecraft to go from Alpha Centauri to Earth, only a few dozen years from several hundred systems to Earth. It's impossible, without some as yet uninvented sensing technology, to reliably surveil even the few hundred closest star systems. Of course once it's at speed in interstellar space it's vanishingly unlikely to be detected due to basic physics, which cannot be changed, and once it's past the Oort Cloud and relatively easy to detect again, there will be almost no time left at 0.5 c. For (3) A second-strike is only a credible counter if the opponent has roughly equal amounts to lose. But, assuming it's much easier to make a 0.5 c spacecraft than to colonize a planet to Earth level, the opponent in this case, a small colony of a few million or something, would have very little to lose in comparison. Thus the second-strike of some backwater colony would only represent a minuscule threat compared to the value destroyed by an equivalent strike on Earth. And it's a lot easier to spread out a few million folks on short notice, if detection were possible, than a few tens of billions. In fact, reliable detection a few dozen years out would decrease the credibility of second-strikes on smaller targets, as the leaders of the small colony would be confident they could evacuate everyone and most valuables in that timeframe. Whereas the leaders of Earth would have very low confidence of the same.

I haven't given this a thoroughly read yet, but I think this has some similarities to retroactive public goods funding:

https://harsimony.wordpress.com/2021/07/02/retroactive-public-goods-funding/

https://medium.com/ethereum-optimism/retroactive-public-goods-funding-33c9b7d00f0c

The impact markets team is working on implementing these:

https://impactmarkets.io/

Going by figure 5, I think the way to format climate contingent finance like an impact certificate would be:

  1. 'A' announces that they will award $X in prizes to different project based on how much climate
... (read more)

... robust broadly credible values for this would be incredibly valuable, and I would happily accept them over billions of dollars for risk reduction ...

This is surprising to me! If I understand correctly, you would prefer to know for certain that P(doom) was (say) 10% rather than spend billions on reducing x-risks? (perhaps this comes down to a difference in our definitions of P(doom))

Like Dagon pointed out, it seems more useful to know how much you can change P(doom). For example, if we treat AI risk as a single hard step, going from 10% -> 1% or 99% ->... (read more)

Yes, "precision beyond order-of-magnitude" is probably a better way to say what I was trying to.

I would go further and say that establishing P(doom) > 1% is sufficient to make AI the most important x-risk, because (like you point out), I don't think there are other x-risks that have over a 1% chance of causing extinction (or permanent collapse). I don't have this argument written up, but my reasoning mostly comes from the pieces I linked in addition to John Halstead's research on the risks from climate change.

You need to multiply by the amount of chan

... (read more)

Setting aside how important timelines are for strategy, the fact that P(doom) combines several questions together is a good point. Another way to decompose P(doom) is:

  1. How likely are we to survive if we do nothing about the risk? Or perhaps: How likely are we to survive if we do alignment research at the current pace?

  2. How much can we really reduce the risk with sustained effort? How immutable is the overall risk?

Though people probably mean different things by P(doom), and it seems worthwhile to disentangle them.

Talking about our reasoning for our pers

... (read more)
0apollonianblues
I have LOL thanks tho

Oh I didn't realize! Thanks for clarifying. Uncertainty about location probably doesn't contribute much to the loss then.

Is it known how well performance scales with the size of the prompt and size of the fine-tuning dataset? i.e. something like the Chinchilla paper but for prompt and dataset size.

2Adam Jermyn
I don't know, and would be very curious to find out.

Interesting!

So if I am understanding correctly, SIA puts more weight on universes with many civilizations, which lowers our estimate of survival probability q. This is true regardless of how many expanding civs. we actually observe.

The latter point was surprising to me, but on reflection, perhaps each observation of an expanding civ also increases the estimated number of civilizations. That would mean that there are two effects of observing an expanding civ: 1) increasing the feasibility of passing a late filter, and 2) increasing the expected number of civiliza... (read more)

Some other order-of-magnitude estimates on available data, assuming words roughly equal tokens:

Wikipedia: 4B English words, according to this page.

Library of Congress: from this footnote I assume there are at most 100 million books worth of text in the LoC, and from this page I assume that books are 100k words, giving 10T words at most.

Constant writing: I estimate that a typical person writes at most 1000 words per day, with maybe 100 million people writing this amount of English on the internet. Over the last 10 years, these writers would have produced 370T ... (read more)
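For reference, the arithmetic behind those last two estimates (same assumptions as stated above):

```python
# Order-of-magnitude checks of the word-count estimates above (words ~ tokens).
loc_words = 100e6 * 100e3            # 100M books * 100k words/book  ~ 1e13 (10T)
daily_writing = 100e6 * 1000         # 100M people * 1000 words/day  = 1e11 words/day
ten_years = daily_writing * 365 * 10 # ~ 3.7e14 (~370T words)

print(f"Library of Congress: ~{loc_words:.0e} words")
print(f"Online writing over 10 years: ~{ten_years:.0e} words")
```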

2Peter Hroššo
I think the models are evaluated on inputs that fill their whole context window, i.e. ~1024 tokens long. I doubt there are many parts in Shakespeare's plays with the same 1024 tokens repeated.

Right. Similar to a property tax, this would discourage land improvements somewhat (though unlike a property tax, it would not discourage non-land improvements like houses).

All land value taxes do something like this. In practice, the effect is small because individual changes to land values are dwarfed by changes caused by external factors like local economic growth.

2JBlack
Many land value taxes are in fact based only on unimproved property value. The main problem is estimating that value, but it's not really a very difficult problem in practice. The usual solution is to have a valuation office independent from the tax office, and subject to an appeal process where there is evidence that the valuation was incorrect. It's not an elegant solution, but it seems much less likely to distort incentives than including power over improvements in the land value.

Great idea, thanks for posting this!

I wrote a post on how to have productive disagreements with loved ones:

https://harsimony.wordpress.com/2022/06/21/winning-arguments-with-loved-ones/

Here is the subsection on analyzing disagreements:

Because arguments are emotional, it can be helpful to try to dispassionately assess the situation with your partner and get to the root of the problem.

The first step is to break the disagreement down into isolated chunks. Identify the handful of differences you are having, and deal with them as independently as possible. If

... (read more)

Suppose I could buy some very cheap land, clear trees and scrub, remove rocks, construct a dam, fertilize the soil and so on, such that on the open market I could now lease it to a farmer at $50,000/year instead of nearly nothing.

Since the taxes are based on the sale price of the empty land you bought, your taxes in this case would remain the same despite the improvements (not 50k/year). But once you sold the land, the next owner would pay 50k/year, since they paid the true price at auction.

There is an incentive to improve land, but unfortunately this p... (read more)

1JBlack
If a new buyer faces paying exactly as much in tax as they can earn from the land, they're not going to offer a price commensurate with its underlying economic value. The increased future tax burden will lower the sale price, disadvantaging the previous owner and discouraging improvements.