Yes, a few years ago I saw an article with a back-of-the-envelope estimate suggesting this would be doable if one could convert mass on the Moon more or less directly into energy and use the Moon as a gravitational tug to slowly move Earth out of the way. You can change mass almost directly into energy by feeding it into a few smallish black holes.
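To get a feel for the scale involved, here is a rough sketch of the mass-to-energy arithmetic. The figures below are standard physical constants, not the original article's numbers, which aren't reproduced here:

```python
# Back-of-the-envelope: energy available from converting lunar mass to energy.
# Illustrative only; assumes complete conversion (E = m * c^2), which black
# hole accretion would only approximate.

C = 2.998e8          # speed of light, m/s
MOON_MASS = 7.35e22  # approximate mass of the Moon, kg

def mass_to_energy(mass_kg: float) -> float:
    """Energy in joules released by total conversion of the given mass."""
    return mass_kg * C ** 2

e_total = mass_to_energy(MOON_MASS)          # ~6.6e39 J
e_one_percent = mass_to_energy(0.01 * MOON_MASS)

print(f"Full conversion of the Moon: {e_total:.2e} J")
print(f"1% conversion:               {e_one_percent:.2e} J")
```

Even one percent of the Moon's mass yields on the order of 10^37 J, several orders of magnitude more than the Sun's total annual output (roughly 10^34 J), which is why the estimate reads as "doable" in principle.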
How do they propose to move the black holes? Nothing can touch a black hole, right?
Donated $300 to the SENS foundation just now. My company matches donations, so hopefully a large cheque is going their way. Fight Aging! is running a matching challenge for SENS, so even more moolah goes to anti-aging research. Hip hip hooray!
Weird fictional theoretical scenario. Comments solicited.
In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).
We meet an alien race who are far more powerful than us; they show us their values and see ours. We seek to cooperate on the prisoner's dilemma, but they defect. In our dying gasps, one of us asks them, "We thought you were rational. WHY?..."
They reply, "We follow a version of your meta-golden rule: treat your inferiors as you would like to be treated by your superiors. In your treatment of the superintelligences that were alive amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller over them, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them."
What do we do if we could anticipate this scenario? Is it too absurd? Is the idea of extending our "empathy" to the impersonal forces that govern our life too much? What if the aliens simply don't see it that way?
Good grief. You know, we already have nation-states for this sort of thing. If people form coherent separate "groups", such that mixing the groups results in a zero-sum conflict over resources (including "utility function voting space"), then you just keep the groups separate in the first place.
EDIT: Ah, the correct word here is clusters.
So, is my understanding correct that your FAI is going to consider only your group/cluster's values?
Not to mention those who prosecuted and genocided ideological opponents.
Yes, that too.
Poland used a version of that when arguing with the European Union about its share in some commission; I don't remember which. It mentioned how much larger Poland's population might have been had they not been under attack from two fronts, the Nazis and the Communists.
Pop quiz: explain to me why I should program my FAI to consider materially-different humans to have different ethical weight, to have their values and cognitive-algorithms compose differently-weighted portions of the AI's utility function.
Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you use a strictly egalitarian weighting, you vindicate those who thoughtlessly brought children into the world, and you disincentivize, in a timeless, acausal sense, those who act sensibly today and restrict reproduction to children they can bring up properly.
I'm not very certain of this answer, but it is my best attempt at the question.
I went from straight Libertarianism to Georgism to my current position of advocacy of competitive government. I believe in the right to exit and hope to work towards a world where exit gets easier and easier for larger numbers. My current anti-democratic position is informed by the amateur study of public choice theory and incentives. My formalist position is probably due to an engineering background and liking things to be clear.
When the fundamental question arises of what keeps a genuine decision maker, a judge or a bureaucrat, honest in the government of a polity way beyond the Dunbar number, the three strands of neo-reaction appear as three possible answers: either the person believes in a higher power (religious traditionalism), or they feel that the people they are making decisions for are an extended family (ethnic nationalism), or they personally profit from it (techno-commercialism). Or a mix of the three, which is more probable.
There are discussions in NRx about whether religious traditionalism should even be given a place here, since it is mostly traditional reaction, but that is deviating from the main point. Each of these strands holds something sacred: a theocracy holds the deity supreme, an ethno-state holds the race supreme, a catallarchy holds profit supreme. And I think you really can't have a long-term governing structure which doesn't hold something really sacred. There has to be a cultural hegemony within which diversities that do not threaten it can flourish. Even Switzerland, the land of three nations democratically bound together, has a national military draft which ties its men in brotherhood.
A part of me is still populist, I think, holding out for algorithmic governance to be perfected so we need not rely on human judgement, which could be biased. But time and time again, judgement-based organizations have soundly defeated procedure-based organizations. Apple is far more valuable than Toyota; the latter is considered the pinnacle of process-based firms, while the former was, until recently, famously run by a mercurial dictator. So human judgement has to be respected, which means clear sovereignty for the humans in question, which means something like the neo-cameralism of Moldbug, until the day of FAI.
Interesting. I'm hoping that by getting a trustworthy non-profit to host the site (and paying for a security audit) we can largely side step the issues.
I spent a long time trying to create a way not to need the trusted third party, but I kept hitting dead ends. The specific dead end that hurt the most was blinding of physical product shipments.
If we can figure out a way to ship both products and placebos to people without knowing who's getting what, I think we can do this :)
I have been thinking about a lot of incentivized networks and was almost coming to the same conclusion, that the extra cost and the questionable legality in certain jurisdictions may not be worth the payoff, and then the Nielsen scandal showed up in my newsfeed. I think there is a niche; I'm just not sure where it would be most profitable. Incidentally, Steve Waldman also had a recent post on this: social science data being maintained in a neutral blockchain.
About the shipping of products and placebos to people, I see a physical way of doing it, but it is definitely not scalable.
Let's say there is a typical batch of identical products to be tested. They've been moved to the final inventory sub-inventory, but not yet to the staging area from which they are to be shipped out. People from the testing service arrive with a set of duplicate labels for the batch, plus placebos, and replace half the quantity with placebo. Now only the testing service knows which item is placebo and which is product.
This requires two things from the system: the ability to trace individual products, and the ability to print duplicate labels. The latter should be common, except in places which might have legal issues with continuous numbering. The ability to trace individual products exists in a lot of discrete manufacturing, but a whole lot of process manufacturing industries only have traceability by batch/lot.
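The bookkeeping the testing service would keep can be sketched in a few lines. This is a hypothetical illustration, assuming individually serialized units and an even batch size; the function name and serial format are made up for the example:

```python
import random

def blind_batch(serial_numbers, seed=None):
    """Randomly assign half of a batch's serials to placebo, half to product.

    Returns the secret mapping {serial: "placebo" | "product"}. Only the
    testing service keeps this mapping; the manufacturer and the recipients
    never see it, which is what makes the shipment blinded.
    """
    if len(serial_numbers) % 2 != 0:
        raise ValueError("batch size must be even for a 1:1 split")
    rng = random.Random(seed)
    shuffled = list(serial_numbers)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    mapping = {s: "placebo" for s in shuffled[:half]}
    mapping.update({s: "product" for s in shuffled[half:]})
    return mapping

# Example: an 8-unit batch. The duplicate labels carry these same serials,
# so downstream logistics cannot tell placebo from product.
assignment = blind_batch([f"SN-{i:04d}" for i in range(8)], seed=42)
```

Since the duplicate labels are indistinguishable, the only unblinding risk is the mapping itself, so it would live solely with the testing service until the trial is analyzed.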
I think this is a very important contribution. The only internal downside might be that the simulation of the overseer within the AI would be sentient. But if defined correctly, most of these simulations would not really be leading bad lives. The external downside is being overtaken by other goal-oriented AIs.
The thing is, I think in any design it is impossible to tear purpose away from a lot of the subsequent design decisions. I need to think about this a little more deeply.