Stupid Questions April 2015
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't mock them for it, as they're doing a noble thing.
To any future monthly posters of SQ threads, please remember to add the "stupid_questions" tag.
I've been a bit out of touch from the community the past year or so, so I think I've rather missed things about the "Future of Life Institute", which mostly came to my attention because I think Elon Musk gave that big donation to it.
I don't quite understand what's the precise connection of FLI with everything else? How does it relate to MIRI/LessWrong/CFAR/FHI, historically/in the present/in its planned future?
Best way to find out is to ask the LWer Vika, who I'm pretty sure was the driving force (Max Tegmark probably had something to do with it too). I think their niche is to be a more celebrity-centered face of existential risk reduction (compared to FHI), but they've also made some moves to try to be a host of discussions, and this grant really means that now they have to play funding agency.
I'm flattered, but I have to say that Max was the driving force here. The real reason FLI got started was that Max finished his book in the beginning of 2014, and didn't want to give that extra time back to his grad students ;).
MIRI / FHI / CSER are research organizations that have full-time research and admin staff. FLI is more of an outreach and meta-research organization, and is largely volunteer-run. We think of ourselves as sister organizations, and coordinate a fair bit. Most of the FLI founders are CFAR alumni, and many of the volunteers are LWers.
I am reposting my question from the February thread since it got no response last time:
Which cryonicist to thaw?
Say that, in thirty-plus years, you're still alive and I've been cryonically preserved for a while. What could I have done during my life to convince you to apply your finite resources to resurrect me, rather than someone else?
Would it make a difference if the only potentially available resurrection method was destructive mind uploading, for which a vitrified brain would happen to be an ideal test subject?
The first attempts at reviving are going to focus on testing the resurrection method. I'm not sure if you want to be in that bunch.
If you do, then it's important for the people who resurrect you to check whether your personality is intact or changed.
If you filled out a personality test every month and it produced stable values before your death, it would be interesting to check whether your personality stays stable afterwards.
Having a comprehensive Anki deck that records what information should be in your mind would also be useful for that purpose.
A personality change might simply be because of the new, futuristic environment. One could control for this by bringing personality-stable people from a poor, underdeveloped country into civilisation.
My understanding was that written personality tests tend to have low accuracy although I could easily be wrong in that belief. I think video recordings might be more useful.
As an alternative, what would you think of assuming a certain degree of advance in computation and psychology, and making arrangements to store every bit of digital data I've ever typed, or decided was worth storing in my personal e-library?
More data is likely better when you want to check whether anything in the mind is lost.
Setting up a trust fund to pay whoever resurrects you would help.
Curious whether you would basically need to architect the terms assuming that your resurrectors are unfriendly.
I'm not sure what you mean by 'architect', but as I don't believe there are any current trust funds set up in quite this way, it would likely require designing a custom legal instrument. In which case, not only would I recommend involving a contract lawyer to handle the pitfalls of terminology, but also spending some time working out the game-theory aspects of the payout to avoid perverse incentives of the more likely scenarios - eg, you don't want to incentivize would-be resurrectors to bring you back early and with brain damage, when it's plausible that waiting a bit longer would be in your own interests.
Alternately, the terms of the trust fund may be less important than choosing an executor willing to interpret those terms in the way you meant, rather than the way they were written. ... Which, should your first choice pass away or retire, brings up a whole host of other issues about how to choose replacements.
Do you know why there aren't? There are trust funds set up so that the interest pays for the cost of being cryopreserved. I would have assumed that they'd have clauses in them where, once the person is thawed, the money goes to whoever thawed them. We don't want people kept frozen just so they can get money from those trust funds.
... Um, are you sure? For the cryo organizations I'm aware of, there /is/ no continuing cost of being cryopreserved for the individual - it's all up-front cost, with the funding going to the organization so /it/ can handle those continuing costs.
People who ask this sort of question assume that the cryonics era just comes and goes in a few decades. I find it more likely that cryonics or its successor technologies will become part of mainstream medicine indefinitely. If you have an illness or injury (probably some new kind of pathology we haven't seen yet) that the health care providers in, say, the 22nd Century don't know how to treat, they would put you in some kind of biostasis for attempted revival in, say, the 24th Century, when medicine has advanced enough to know what to do.
So why would people in the 22nd Century want to revive and rejuvenate and transhumanize people from the 21st Century? Well, they might return the favor for their resuscitators in the 24th Century.
Say that, in thirty-plus years, you're still hale and hearty and I've been seriously ill for a while. What could I have done during my life so far to convince you to apply your finite resources to heal me, rather than someone else?
Given that it's questionable whether I'm going to have enough finite resources to bring my aging cat to the vet in the near future; and I live in Canada, with its single-payer health care system; it's a somewhat more complicated question than it may seem. Given past evidence, some minimal qualifications might involve me knowing that you exist, and knowing that I was able to help you, and knowing that the help I could provide would make a difference (this latter being one of the harder qualifications to satisfy). Given all of /that/... one potential qualification might be the possibility for future reciprocation, either directly, or by being part of a shared, low-population social group in which your future contribution could still end up benefiting me - such as, say, the two of us being part of a literally-one-in-a-million group working together to try to find some way to permanently cheat death.
There are probably other answers, including ones that I don't recognize due to my limited knowledge of human psychology and my finite insight into my own motivations... but that one seems to have some measure of plausibility.
You signed a contract allowing people developing resuscitation technology to use you as one of their first experimental human revivals.
ETA: Someone has to be the first attempted revival, but I suspect that it may be last in, first out. The later you get frozen, the better the freezing technology, and the sooner the technology to reverse it will be developed. By the time people can be frozen and thawed routinely, there will still be vaults full of corpsicles that no-one knows how to revive yet. All the people in Alcor today might eventually be written off as beyond salvaging.
Hm... under current law, the cryonically preserved are considered dead, and thus any contracts they signed are no more enforceable than a contract with a graveyard to perform one form of burial instead of another. The existing cryonics companies have standardized contracts. I can't think of any way to create the contract you describe. Do you have any further details in mind?
I wasn't concerned with the legal details, which will vary from time to time and place to place. At the moment, what obligates Alcor to keep their bodies frozen?
And there are wills. You can already will your body to medical research.
The legal regime that cryonics has operated under has been reasonably stable for the past thirtyish years, with some minor quibbles about registering as a cemetery or not. What reasons lead you to believe that the relevant laws will undergo any more significant changes in the next thirtyish years?
At least in part, the fact that the directors are also members, and desire for their own bodies to be kept frozen after they die.
Legally, that's essentially what the wills of cryonicists already do. (In Ontario, the relevant statute is the 'Trillium Gift of Life Act'.)
You would need to be able to provide value for me - so you would need to have skills (or the ability to gain skills) that are still expensive and in demand, and society would need to give me an enforceable right to extract that value from you. Slavery or indentured servitude, perhaps.
If I may ask, are you yourself a cryonicist who might end up facing the question from either side?
You seem to be assuming that immediate economic value is the only value worth considering; was this your intent?
Does this criterion apply to present-day questions that are in vaguely the same ballpark? That is, do you choose whom to help based on whether or not you can force them to pay you?
Good point here - I don't usually have any mechanism to force people to pay me. I usually decide whom to help based on how likely I think I am to get what I want out of it. A few examples:
I'm not sure what you mean by economic value. If you mean money, no. I think that humans value many things. I could certainly see a respected artist being revived even if the reviver could not directly tax the artist's production.
I'm not a cryonicist at this time. I do think there's a pretty good chance that either cryonics, brain uploading, or something similar will see some people from my lifetime recreated in a form after their deaths.
It's already legal to perform a medical procedure to save someone's life without their consent if they're not capable of consenting, and then demand payment. You could still go bankrupt, but that causes problems so if you're capable of repaying you probably would.
That's slightly terrifying, but I guess it makes sense as an incentive to perform life-saving medical interventions.
Is Occam's Razor a useful heuristic because we can observe a certain 'energy frugality' in nature? More complex hypotheses are possibly correlated with a higher energy demand and are thus less likely to happen.
Amusing idea, but I don't think there is any relation. For example, the discovery of nuclear structure strongly lowered the complexity of our description of nature but implied a huge amount of previously unknown available energy.
My personal epistemology says no, and that Occam's Razor is generally useful no matter which universe you find yourself in regardless of how it is structured.
Aren't there physics equations describing processes which are believed not to be driven by thermodynamics, but which are nevertheless still simple and elegant?
Where can I find recipe listings that 1. are relatively quick to make (because time is precious), 2. have ingredients that cannot be used as finger food (I have no self-control), and 3. are easily adaptable for picky eaters (there's a huge array of things I just can't abide eating)?
There's no problem with eating vegetables as finger food anyway.
Can comforting lies be justified in certain circumstances or do the downsides of this thinking habit always outweigh its benefits? (Example: Someone takes homeopathic remedies to cure pain and benefits from the placebo effect.)
Consequentialist ethics would suggest the answer is yes, but in your example perhaps a better result would be getting the same placebo effect benefits from some kind of treatment or remedy that might actually work in itself, beyond placebo. Indulging woo isn't necessary to get positive expectation health benefits.
Relevant: The Third Alternative
Knowing about the placebo effect doesn't stop the placebo effect from kicking in.
Anyway, I'd say that there are moments when comforting lies may be worth it, but I don't trust my ability to know when those moments are happening, and it would damage my overall believability if I were found out.
Especially if you know that knowing about the placebo effect doesn't stop the placebo effect from kicking in.
I'd say that there are times when it's worth having comforting lies, but you can't figure out when if you're under the effect of comforting lies, so you should follow the strategy of never listening to comforting lies.
Sorry, I've never been here before and know nothing about this place, and all the other "stupid questions" here seem super formal, so I feel really out of place. But: how common is it for the users of this site, who likely all refer to themselves as rationalists, to be misanthropes?
I hate humans. I hate humans so much. I used to think I could change them. I used to think every human who exhibited behavior I found to be inferior was simply ignorant of true rationality. Mine is a very long story that I no longer want to tell, but it was months of thinking I could change every mind I found inferior before I came to the conclusion that humans are worthless and that they've simply devolved to the lowest common denominator, to the point where they retain not the capacity to grasp the objective breadth of rationality in this universe unless they lack the very things that make them human.
I have extremely strong opinions on everything I've cared to question, the likes of which I wish to express formally before I die but I hate humans so much. I wouldn't be doing it for the human. I am probably technically depressed at the moment and have been for a long time and was just wondering how many self-proclaimed rationalists consider themselves misanthropes, or at least exhibit misanthropic views...
If this is representative of your usual conversation style, then everyone above a certain level of competence will correctly infer that they should avoid you. This will leave you with conversation partners that are far below average. Your other statements make me think that this has, in fact, happened.
This is a difficult skill. The first step, if you truly want to change someone, is to establish mutual respect. If they think you don't like them, they will resist all attempts to influence them. This is definitely the right strategic move on their part, and even if they don't think about strategy at all, their instincts and emotions will guide them to it. If you think that you should be able to convince people of things, with this style of writing or this style of writing translated into speech, then you have misunderstood the nature of social interaction and you need to study the basics with the humility of a beginner.
Our culture typically presents rationality as opposed to emotion; I believe that a disproportionate number of misanthropes are drawn to rationality for that reason.
However, logic is meaningless without having an underlying goal, and goals are generally determined by one's emotions. What are your goals?
I find that thinking of other people as inferior or irrational is not particularly helpful in accomplishing my objectives. I feel less stress and make more progress by recognizing that other people have different goals than I do.
It is possible to get others (even "irrational" others) to help you accomplish your goals by offering to help them with theirs.
Sorry, before I mention my personal goals I just want to say that I disagree with the notion that logic is meaningless without being founded on an underlying goal... Logic as I understand it is by definition merely a method of thinking, or the concept of sequencing thought to reach conclusions, and determining why some of such sequences are right. I believe logic in itself- according to the second definition I proposed- tends to the end of a goal, and that goal is rationality. Naturally, without having anything to sequence logic is nothing and has no breadth, but in this universe where the breadth of the construct "logic" is contingent on the human's ability to sequence data it should inherently have a goal, at least today as the human appears, and that goal should be rationality, in my opinion. I believe assuming your proposal is correct would mean assuming "logic" as you used it in your proposal is simply defined as a method of thinking, and not its more fundamental meaning, which I proposed.
My goal is simply to express in my lifetime my views on everything... I do not feel I can change the world. I do not feel I can simply approach every human I encounter and explain to them why I believe my opinions to be correct and all conceivable dissenting opinions to be wrong. I will just express myself in my own way one day and that will be it... I created an account on this website more or less randomly for me because I was recommended going here once, a while ago.
I do not believe that "stress" in itself is something to be considered when it comes to one's method of forcing the world to tend to the end they want to... I will explain what I mean. Please excuse any possible argument by assertion fallacies henceforth... converting everything to E-Prime is tiring but I do believe opinions have to actually be defended to be rational... If I ever simply assert that I believe something is true, that is a mistake, as I meant to rationalize its breadth in its entirety to believe it has the capacity to be defended and inherently rebut all conceivable dissenting arguments...
Obviously, the human's understanding of rationality is a consequence of themselves, to some extent. That is not to say that rationality so defined is entirely a consequence of the human and that the human literally created a portion of this universe that retains the properties of "rationality"... What I mean is, humans appear to feel emotion, and humans appear to correlate their understanding of the concepts of "good" and "evil" to what they perceive to be positive and negative emotion, respectively. Fundamentally, every human who retains the standard properties of the human lives through their own emotionality and their idea of good and evil is founded on that very thing.
Ugh... I just realized if I expound my philosophy any further I will be affirming for the first time since posting here my opinions which many will probably disagree with but basically I think that "stress" if "stress" is defined as pain(negative emotion) entirely in the head, meaning it is simply perception, ascribing emotion to certain things and feeling pain as a result, it is entirely a consequence of perception and can be manipulated to become pleasure... Perhaps it will be a certain iteration of masochism, and perhaps actually enduring perceived stress in reality will have consequences on the outside world as distinguished from your own psyche, possibly prompting an entire lifestyle change but "stress" should be irrelevant if its properties can just be totally changed with a different opinion, in my opinion.
When it comes to me, I believe so strongly that all who disagree with me are wrong that it seems extremely unnecessary to saturate my believing their being wrong with something else in an attempt to make me cope with my own emotionality. I believe there are other ways to cope with oneself than compromising on one's own beliefs. I just correlate things to good or evil freely, at face value. I really wouldn't make progress insofar as inciting a revolution is concerned by tolerating what I believe to be wrong, either. Perhaps by "goals" you mean something other than forcing the world to tend to its most rational end as you perceive it.
About your last sentence, I don't believe in manipulating via anything other than argument to entice others to do as you wish... If it is something less than true reason to think, which I believe can only be conveyed via argument of some sort, it will be blind conformity, and any society or standard based on that is doomed to conceive notions as worthless as the one it was founded on, making it inferior to what it could and I believe should be. Also, it's interesting that misanthropes are drawn to reason. I kind of expected it but I've had bad experiences with self-proclaimed misanthropes retaining the human property I hate, rendering their sub-ostracization asinine in my eyes... I probably rambled a lot in this post, sorry. I don't know what type of reply I would expect to this if any. Thanks for reading if you did.
I self-describe as a rationalist and I don't like humans that much at all. Don't know how common this is.
I like humans well enough when
-I can have a sensible interaction with them
-Or, they are willing to accommodate my needs without needing an explanation for everything
-Or, if I can manage their irrationality with a strategy that has a low cost to myself
Otherwise, I don't like humans very much or at all. Maybe disappointed? I wouldn't say hate (though the thought does come up).
I have been depressed. I've learned to deal with it, and I don't feel I'm depressed now, though I am probably at risk for depression.
Mostly I try to do things for myself. And to put myself in a position where I won't depend on any individual human for anything vital, and to have resources for as much self-reliance as possible.
Possibly I don't understand your situation ("devolved" doesn't make sense to me except as Star Trek syence, a word I just invented based on the name SyFy. It could be a more polite version of 'syfyces'.)
But I find it useful to remind myself that humans have no evolutionary reason to be perfectly rational. I tell myself that if any future I hope for comes to pass, the people there will see us (at the present time) as particularly foolish children who, rather horribly, age and die before growing up.
Sorry, I suppose I misused the word "devolve"... I've seen others use it as I have in my post here so I thought it was okay, but I suppose not. Perhaps they misused it, and if so I should not be tolerating the arbitrary and blatant misuse of words. What I meant by that word though was simply falling in stature. My using the word was to express that I believed humans have fallen in stature to the point that they cannot fall in stature anymore, and that the humans who roam the earth today will continue to breed and forge the world they want without changing very much in the next few generations of human if ever.
I just realized this site has a quoting feature. That makes responding to posts SO much easier...
Yes... I believe the same thing. One does not have to provide to anything a rational reason to copulate, and to breed. One does not need to provide a rational reason to anything to live, to kill, to force the world to tend to the end they want, or anything. Humans appear to simply do. Naturally, through generations of the human simply doing, and doing as they please, they have perhaps become incapable of actually questioning whether or not simply doing is right, but what do I know? This is just a theory, and not one I can prove with sheer logic. Even if I fancied doing so it would be a waste of time... It would be far worth my effort to simply deduce and affirm what it means to be right, and what it means to be wrong. Whether or not the human has the capacity to truly be rational, and what caused rationality and being human to be mutually exclusive if they are, can be questioned later...
For those of you who know real-life coding: I started watching CSI: Cyber and I'm hooked. I'm loving it. But is it rubbish?
I never watched CSI: Cyber. That said:
Yes.
:)
Does anybody want to write a rat!BatmanBegins fic set right after the movie ended? I think it would be a great opportunity to explore several issues we have been accustomed to in HPMOR. The premise is: Batman learns that Ras'al'Ghul (sorry if misspelled) was trying to develop industrial-strength technique to produce the psychedelic gas. (Basically, to have the poppy in an in vitro culture, maybe modify it genetically and have an almost fail proof way to obtain unlimited and cheap substance.) RaG wasn't himself a specialist in this, so he hired a lab to work out the protocol. The lab team must include at least 1 person to operate a gas-chromatograph/mass spectrometer, 1 to tinker with the culture medium, 1 statistician, 1 specializing in plant secondary metabolites and (realistically, no less than) 1 assistant.
Now, Batman doesn't know whether RaG has ever succeeded in this scheme, and cannot just check using his (rather conspicuous) personas, but he has an inventor friend. So he buys the lab for W Corp and waits for evidence of culpability/innocence/... He gets to overhear them discuss the seemingly impossible phenomenon of the honest Commissioner and from their hypotheses can at least conclude they are capable of looking for alternatives - as he himself should have when somebody approached him to train him just out of the goodness of their heart.
In reality, every member on the team has had some misgivings about the use of their project, and sabotaged it in subtle ways, but seeing as this is Gotham and nobody quite knows what happened to RaG, they mistrust all outsiders.
Recently I became active in the EA (effective altruism) movement, but I'm kind of stuck on the issue of animal welfare. While I agree that animals deserve ethical treatment and that the world would be a better place if we found a way to completely eliminate animal suffering, I do have some questions about practical aspects.
Is there any realistic scenario where we could expect the entire world population to convert to a non-meat diet, considering cultural, agricultural and economic factors?
Would it be better if, instead of trying to convert billions of people to become vegetarians/vegans, we invested more in synthetic meat research and other ways to make meat-eating non-dependent on animals?
How highly should we prioritize animal welfare in comparison to other EA issues like world poverty and existential risks?
How does the EA community view meat-eaters in general? Is there a strong bias against them? Is this a big issue inside the movement?
Disclosure: I am (still) a meat-eater, and at this point it would be really difficult for me to make consistent changes to my eating habits. I was raised in a meat-eating culture and there are almost no cheap and convenient vegetarian/vegan food options where I live. Also, my current workload prevents me from spending more time on cooking.
I do feel kind of bad though, and maybe I'm not trying hard enough. If you have some good suggestions for how I can make some common-sense changes towards a less animal-dependent diet, that would be helpful.
1) Hardly, but then again, what minimum % of the world population would you expect to be convincible? It doesn't have to be everybody. 2) What are the minuses of this technology? Illegal trade in real meat would thrive, for example, and those animals would live in even worse conditions. 3) I think poverty might contribute to meat consumption, if we're speaking not about starving people but, say, about large families with minimal income. Meat makes making nutritious soups easy.
If I cook a fixed amount of raw rice (or couscous, or other things in that genre) in a variable amount of water, what difference does the amount of water make to calories, nutrition, satiety, whatever?
For example, if I want to eat fewer calories, could I cook less rice in more water to get something just as filling but less calorific?
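The arithmetic here is worth separating out: adding water contributes zero calories, so the total calories are fixed by the amount of dry rice, and only the energy density of the cooked dish changes. A quick sketch, using illustrative numbers (the ~3.6 kcal/g figure for dry white rice and the absorption amounts are rough assumptions, not from this thread):

```python
# Cooking rice in more water adds no calories; it only dilutes them.
DRY_RICE_G = 100
KCAL_PER_G_DRY = 3.6  # rough figure for dry white rice

def cooked_energy_density(water_absorbed_g: float) -> float:
    """kcal per gram of the cooked dish, assuming all the water is absorbed."""
    total_kcal = DRY_RICE_G * KCAL_PER_G_DRY
    total_mass_g = DRY_RICE_G + water_absorbed_g
    return total_kcal / total_mass_g

print(cooked_energy_density(150))  # normally cooked rice: ~1.44 kcal/g
print(cooked_energy_density(400))  # congee-style: ~0.72 kcal/g
```

So the open question is only whether the extra water actually increases satiety per calorie; the calorie side of the ledger is unchanged either way.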
This doesn't answer your question, but if you conclude that adding water is likely to make rice more filling per calorie (I have no idea whether it will), the dish you want is called congee, and searching for that should yield many delicious recipes.
I don't know about varying the amount of water. But if you want to eat fewer calories of rice, there was an article that came out recently saying that the method you use to prepare it could affect the number of calories your body actually absorbs from it.
More water will also absorb a greater portion of water-soluble vitamins.
Does that mean I get more vitamins (e.g. because the vitamins were biologically unavailable in the rice, but available in the water) or fewer (e.g. because the reverse, or if a significant amount of water boils off)?
Water loss through boiling shouldn't make a difference, as the vitamins are not volatile and will not boil off with it.
I'm not sure. The rice is supposed to absorb (most of) the water you cook it in, which complicates giving an answer.
I hear shirataki was invented specifically for that purpose.
Okay, so I have a "self-healing" router that ostensibly reboots itself once a week to "allow channel switching" and to "promote network health", and given that this seems to NOT mess up my internet access in one of several ways every Tuesday morning only MOST of the time, it has been causing me stress absurdly out of proportion with the actual danger (of being without internet access/my ONLY link to the outside world, for a short time).
So, my question is, what the HECK does "channel switching" or "promoting network health" even mean, and is it actually important enough that I shouldn't just flat out disable my router's "self-healing" feature?
In Germany, most internet connection contracts have a clause that requires regularly reestablishing the connection, which gets you a new IP address. It's in the contracts because a changing IP address makes it harder to run a server behind a home connection.
The advantage of a changing IP address is that it makes it a lot harder for random websites to track you.
It makes sense for the router to choose a time in the night, when the connection isn't being used, to do the reconnecting. Otherwise the ISP would choose the timing on its own, which might be worse.
If your router does this when you aren't sleeping, though, see if disabling the feature helps.
I think you may have misunderstood. I'm talking about my router, which is a separate device from my modem. I have never observed the router rebooting to fix a problem, and have on several occasions observed the reboot to cause a problem. I just want to know if there is something nonobvious going on that will cause problems if the router does not reboot once a week, keeping in mind that it is a separate device from the cable modem.
"Channel-switching" is referring to the wireless channel. Modern wireless routers will "intelligently" select a wireless channel to communicate over, taking into account features of their environment. For example, if there's high competition with other wireless transmissions on one wireless channel, they'll switch to a less contested one.
"Promoting network health" is a bit of a nebulous thing to say about a home network served by a single router. As a pragmatic observation, rebooting a router can solve a variety of problems it might be experiencing. Most home users can't distinguish local machine problems from problems with the network, and automatic periodic rebooting of the router probably prevents a lot of support calls. If you're happy with rebooting your own router as and when you see fit, I don't see why you shouldn't turn this feature off.
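For what it's worth, the channel-selection part is conceptually simple. A minimal sketch of the idea (hypothetical pseudologic, not any particular router firmware's actual algorithm): scan for nearby networks, tally how many occupy each 2.4 GHz channel, and pick the least-contested one. Real firmware also weighs signal strength and channel overlap.

```python
from collections import Counter

CHANNELS_24GHZ = range(1, 12)  # channels 1-11 (North American allocation)

def pick_channel(observed_channels):
    """Return the channel with the fewest competing networks seen in a scan."""
    usage = Counter(observed_channels)  # missing channels count as zero
    return min(CHANNELS_24GHZ, key=lambda ch: usage[ch])

# A scan seeing neighbours on channels 1, 1, 6, 6, 6, 11:
print(pick_channel([1, 1, 6, 6, 6, 11]))  # picks 2, the first empty channel
```

The weekly reboot is presumably just a blunt way to re-run this scan (and clear any accumulated state), which is consistent with the advice that manually rebooting when needed works just as well.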
The amount of fossil fuels extracted in a year is equal to the amount of fossil fuels burned in a year (give or take reserves, which will even out in the long run). So if fossil fuel extraction were reduced, CO2 emissions would be reduced, regardless of any taxes, cap-and-trade, alternative energy sources, etc that may or may not be in effect. Indeed, the only way that traditional environmental measures such as the above can reduce carbon emissions is if their effect on fossil fuel prices eventually causes less extraction.
Therefore it seems logical that the best way to reduce CO2 emissions is to pay fossil fuel extractors to reduce their extraction rate. This should not cost the extractors too much, because they will still own the resources and will be able to monetise them eventually. But environmentalists do not favour such subsidies to e.g. Saudi Arabia, and when I have brought up this suggestion to environmentalists they have looked at me funny and suggested the issue was complicated, but never provided any direct reason why this should be a bad idea. This makes me think I am missing something obvious and that this is a silly idea.
Is there academic literature on this or similar concepts? Why isn't this a good idea for reducing CO2 emissions?
If you pay Saudi Arabia to produce less, then someone else will produce more unless you pay them not to, too. And any of them could secretly overproduce their lower quota.
AND once you've lowered the supply, then the price will rise, making the number of potentially profitable oil-producing states rise, increasing the number of people you need to pay off, and increasing the amount you need to pay each one.
Fossil fuels are used for purposes other than burning. They are used in making plastic, in making fertilizer and in synthesizing chemicals as well.
If nothing else, because it would be prohibitively expensive. Globally, something like 70 million barrels of oil are produced per day. The total value of all barrels produced in a year varies depending on the price of oil, but at a highish-but-realistic $100/bbl, you're talking about two and a half trillion US dollars per year. If you were to reduce the supply by introducing a 'buyer' (read: a subsidy to defer production) for some large percentage of those barrels, then the price would go even higher; this project would probably cost more than the entire global military budget combined, with no immediate practical or economic benefits.
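That back-of-the-envelope figure checks out (the 70 Mbbl/day and $100/bbl numbers are the ones assumed above):

```python
# Rough annual value of global oil production, using the figures above.
barrels_per_day = 70e6    # ~70 million barrels per day
price_per_barrel = 100.0  # highish-but-realistic price, USD

annual_value = barrels_per_day * 365 * price_per_barrel
print(f"${annual_value:,.0f} per year")  # $2,555,000,000,000 per year
```

About $2.6 trillion per year, consistent with the "two and a half trillion" estimate.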
The main things you can do to reduce fossil fuel extraction are to outlaw fracking and to make extraction harder by vetoing pipeline bills.
Paying Saudi Arabia to lower extraction rates while at the same time increasing fracking production makes no sense.
Are you saying that's true in general, or that it just so happens that Saudi Arabia drilling is more cost-effective than fracking?
I don't know what "true in general" means here.
It sounds like a thought-terminating cliche. Sort of like saying that we should solve all our problems on Earth before we start exploring space. If Saudi Arabia's marginal oil is less cost-effective than fracking, then it's better for them to stop extracting as much and us to extract more. Are you trying to say that we should stop our own production first regardless, or that fracking has the lowest cost-effectiveness and we should worry about fracking before drilling?
Changing oil extraction rates is a complex political issue where price isn't the only variable that matters. Neither of the statements you made matches the one I made above.
Deferment (producing later instead of now) is really expensive if you are using a reasonable discount rate, so this would cost a lot. I think your plan would also constrain supply, raising the oil/gas price and making the cost even higher.
If you want to ballpark costs, try deferring whatever fraction you like of US oil production for, say, 10 years. Try a discount rate of 7-8% and figure out the cost per year. I would assume an oil price of at least $80/bbl if you are trying to estimate costs on a reasonable timescale.
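A minimal sketch of that ballpark, taking the suggested 10-year deferral, the midpoint of the 7-8% discount range, and the $80/bbl price; the "half of ~9 million bbl/day of US production" scale is my own round-number assumption:

```python
# Cost of deferring one barrel: give up $80 now, receive $80 in 10 years.
price = 80.0           # USD per barrel (assumed above)
discount_rate = 0.075  # midpoint of the suggested 7-8%
years_deferred = 10

present_value_later = price / (1 + discount_rate) ** years_deferred
cost_per_barrel = price - present_value_later  # roughly $41
print(f"cost per deferred barrel: ${cost_per_barrel:.2f}")

# Scale up: defer half of ~9 million bbl/day of US production (assumption)
barrels_per_year = 0.5 * 9e6 * 365
total = cost_per_barrel * barrels_per_year
print(f"annual cost: ${total / 1e9:.0f} billion")  # tens of billions
```

Even under these assumptions the subsidy runs to tens of billions of dollars per year for the US alone.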
It costs a lot of money and only defers the problem. Extracting less coal and less oil doesn't do much to address increasing energy demands. You'll get some decrease when the price goes up from restricting supply, but once things stabilize it's going to continue rising.
Basically, you're temporarily reducing emissions without addressing the circumstances that brought about high emissions in the first place.
Do other people like clothes when they buy them and dislike them once they are in the wardrobe? I think it is true for myself: my relatives like to give me clothes as easy gifts, and I always feel like the gifts remind me that I am a child, which is why I learned to smile and say thank you regardless of whether I like something. Lately, when I have to buy something for myself, I just wander and use the Force. How do you unlearn such a habit (it seems wasteful and ungrateful not to accept a gift, but also wasteful and stupid not to learn to choose)?
The path I see is to develop more specific preferences and articulate why you prefer certain clothing over other clothing. Tell your relatives what kind of clothing you like.
You can even say, "I have made a decision to move to a new style ...", if you don't want to make them feel bad about past gifts.
Clothes I pick often depend on my mood and the social context I will have in the day. If the emotional state in which you buy radically differs from the state in which you choose clothes from your wardrobe that can lead to a disconnect.
Have you analysed why you prefer certain clothing over other clothing?
Thank you, I'll try. I prefer warm to pretty in winter, and khaki/colourful (depending on whether I am with my kid or not) to neat. I prefer pants to skirts. Generally, I like only a narrow subset of how I can look, and get annoyed when people tell me to be more flexible. My thinking goes like 'can't they see I have already defined myself and have no wish to follow aging conventions?' (I'm 29.) It's a bug and not a feature, but I find other people's clothing so... hopeless, I guess, so muted, that I can't remember the last time I envied someone.
If that's true, it's likely that you could be more specific than pants > skirts and khaki/colorful.
On the other hand, those seem like pretty straightforward rules for your relatives: don't gift her skirts, and make sure it's either khaki or colorful.
We have an economic system with N actors. Each actor has its own utility function that it uses to attempt to spend/invest money in areas that will grow. The system as a whole doesn't know these functions and the nodes can't see them internally. They just make a judgment and spend/invest. If they spend in an area that grows, more money comes to them via an agent in the system that redistributes cash as it flows to the originators of cash in a node.
For example, if N1 pays a dollar to N2 for a bottle of wine, N1 gets a share in N2. As cash flows through N2, little bits get funneled back to N1. So if N2 becomes the next big wine maker, many bits will flow to N1, and it will be rewarded for having sent money to N2 early in time.
Does it follow from Bayes' theorem that if I keep passing cash through this system, then over time the success rate will oscillate around the actual success rate of each node's utility function? In this scenario, if you fail you get your cash back slowly over time; if you succeed you get it back more quickly.
I'm anticipating that a set of actors in this situation would end up in an economy where the level of wealth for each node converges on their true ability to create value.
If I'm totally misinterpreting, I'd love some pointers to good info to read.
If N2 basically has to pay a tax on money being channeled through N2 after the node has been in use for a while, why doesn't it instead create an N3 node to use as a conduit for payments?
Great question. The 'bits' in the system I'm proposing are based on a system-wide demurrage or 'decay rate' of currency. Simply switching to a different node doesn't change the decay on cash you hold, so there isn't an incentive to create a new node. On the positive side, existing customers have a loyalty factor: N1 will be more likely to buy the same commodity from N2 than from a random Nx. This behavior has a limited life, though, because diminishing returns eventually catch up and suddenly the benefit of being one of the first contributors to Nx is greater than the loyalty to N2.
This gives a lifespan to legal entities and increases turnover, thus increasing the likelihood of more fit entities emerging (if you assume that entities can share information across generations).
You basically get the attractiveness of youth, the steadiness of adulthood, and the slow decline to oblivion (and with this an increased incentive to figure out immortality by creating enough value to outrun the diminishing returns).
I don't think the question you asked above is answerable at the level of detail you're speaking at. And I don't think what you're saying is true.
It's quite hard to believe that "There isn't an incentive to create a new node" and "younger companies offering equal goods and services will become more attractive to the general public than old established corporations" can both be true.
You also say "If someone pays from a Hypercapital account to your Hypercapital account, there is no fee," yet you say your system is built on Bitcoin, which does include fees.
I did ask it in the stupid questions thread. :)
I think that both can be true and yet still have real results. Take humans, reproduction, and marriage. Typically a man is fertile for more years than a woman. We see in marriage a tension between men staying loyal to the wife of their youth and moving on to a more fertile partner. I don't have statistics in front of me, but over history the tendency is to stay loyal. Patrimonialism has a profound evolutionary basis, and my theory is that you can use that built-in bias to form a sustainable system where legal entities have life spans instead of immortality. If the life span is too short, then it is useless.
As far as the fees go, Bitcoin's fees are non-zero but very close to zero, and many alternate payment schemes can be constructed. Typical credit card transactions cost 3%, much higher than the roughly $0.05 needed for a BTC transaction. There are also ways to convince miners to mine your transactions even though no BTC fees are provided.
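The comparison above implies a simple break-even point. This sketch assumes a flat $0.05 BTC fee against a 3% card fee, as in the comment:

```python
# Transaction size above which a flat ~$0.05 BTC fee beats a 3% card fee.
cc_rate = 0.03   # typical credit card fee (figure from the comment)
btc_fee = 0.05   # rough flat BTC fee (figure from the comment)

break_even = btc_fee / cc_rate
print(f"BTC is cheaper for transactions above ${break_even:.2f}")  # $1.67
```

So under these numbers the flat fee wins for anything but sub-$2 micropayments, which is exactly where the criticism below bites.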
If you look at how companies try to evade paying taxes, that's a bad assumption. Companies usually do whatever they legally can to avoid paying taxes, rather than paying more taxes than necessary out of loyalty to the government.
As far as your current setup seems to work, all the "pref" flows back from N2 to N1 when decay is paid. The person who owns N2 can create an N3 and transfer all the money from N2 to N3. That way N2 never pays any decay fees, while N2 gets part of the decay fees that N3 pays and can refunnel them to N3.
There are people who argue that Bitcoin fees should be $0.41 per transaction (http://www.coindesk.com/new-study-low-bitcoin-transaction-fees-unsustainable/). Even the 4 cents that currently apply can still matter.
While fees might be less than the average credit card transaction's, they are not zero. Claiming that they are zero suggests that you are not clear about how Bitcoin works at that level.
Yes, Ripple manages to work with much lower fees, but you seem to want to use a blockchain-based model.
I've tried to set up a system where tax avoidance is reduced or eliminated. Because the transaction system will reject transactions that don't pay the fee when users spend their cash, they are stuck with the decision of whether to participate in the system or not. Once the cash is in the system, they must pay the tax or the tax will be taken from them (using BTC multi-sig, where the decay-charging authority is held accountable to only charge the fee on delinquent accounts).
N2 can certainly set up Nx and move all cash over there. Let's use a real example.
N1 spends $100 with N2. N2 wants to avoid the decay (but the system always charges at least one day of decay during a transaction), so they move the cash to Nx. The transaction occurs and $0.003 goes back to N1. Now the cash is in Nx. What are they going to do with it there? If they let it sit for 30 days, they will be auto-charged a decay fee of about $0.10. This flows to N2.
Even if N2 is proactive and sends it on to N3 immediately, $0.0000032 will flow back to N1. A small amount, to be sure, but over time these small amounts add up.
And if Nx uses the cash to develop something that brings in far more cash than went in, the amounts get much bigger.
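A toy check of the "small amounts add up" claim, using only the figures from the example above (a decay of roughly $0.10 per $100 held for 30 days, i.e. ~0.1% per month). Where exactly the decay flows in the real protocol is an assumption on my part:

```python
# Cash parked in Nx decays ~0.1% per 30 days; in the example above the
# decay flows to the prior node, N2.
monthly_decay_rate = 0.10 / 100.0  # $0.10 per $100 per 30 days

balance = 100.0       # cash sitting in Nx
backflow_to_n2 = 0.0  # decay collected by the prior node

for month in range(12):
    decay = balance * monthly_decay_rate
    balance -= decay
    backflow_to_n2 += decay

print(f"backflow after one year: ${backflow_to_n2:.2f}")  # ≈ $1.19
```

So a parked $100 leaks on the order of a dollar per year at this decay rate; whether that is enough to deter shell nodes depends on the discount rates involved.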
That is beside the point, because we want to avoid entirely the situation where N2 tries to devalue N1's benefits by passing cash to a shell corporation Nx.
Nothing can keep someone from just passing cash on and on to accounts it owns except rule of law and accountability. Accountability can be observed in the blockchain and bad actors identified. Rule of law comes later. (I try to cover this in STH: Statutory Theft - https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/the_pattern_language/sth_statutory_theft.md )
Re: Fees - I don't have a great solution to this other than offering miners a share of future pref payments for any mined items that they charge no fee for. This involves them taking risk, but also provides substantial long term rewards.
All of this goes much deeper than the original question, which I think is now best framed as: 'does having a backflow of cash based on amount spent enhance the information we can get out of an economic system, over the standard capitalist model of today?' If we add too many things in, we end up in a conjunction-bias situation.
Once I've answered the first question in the affirmative, I can move on to whether the implementations of the system are rational or not. If achieving the former is a priority, there likely exists an implementation that can achieve it. At least I think so.
You don't say anything about who is supposed to have the power to enforce that statute.
It's also not quite clear in what way having a shell corporation is illegal in your system. Even if you have a fixed rule that a single individual can only own one node, people can move money to their family.
Have you done any math to show that they add up?
Also in a system like Bitcoin where it costs $0.04 to do a transaction, are you sure you can transfer $0.0000032 effectively?
Economic systems work by their agents trying to maximize returns. That means if there's a way in your system to maximize returns that you didn't anticipate, then a calculation based only on the ways you did anticipate is worthless.
If you want to have a mathematical answer you have to be clear about your assumptions.
I say a lot about it in my book. The system relies on Rule of law: https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/the_pattern_language/law_rule_of_law.md
And yes, we limit citizens to one account and legal entities, and governments have different kinds of accounts with different restrictions.
https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/citizen_accounts.md https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/legal_entity_accounts.md https://github.com/skilesare/art_and_democratic_hypercapitalism/blob/master/hyper_capitalism/state_accounts.md
I've run a computer model in a closed system. I present the results here: https://vimeo.com/user17783424/review/115279592/1bb88f885d
Yes. It can just be a few satoshis to an output, with the rest (the bigger values) going somewhere else. If the amounts are too small they can be kept off-chain.
Thus the need to experiment and try to blow the thing up. I agree 100%.
If a citizen creates a legal entity, doesn't he get a second account that way?
The wine seller creates a legal entity, "wine shop", and transfers the money from it into his citizen account whenever the shop gets any money.
Of course you can transfer a few satoshis. On the other hand, that doesn't save you from paying Bitcoin fees. The Bitcoin blockchain is incapable of doing cheap micropayment transactions.
That sounds like a corporation could issue a citizen account to someone who already has an account.
In general, if you have to trust a government to enforce the rule of law anyway, why use the expensive Bitcoin system where trust relies on the blockchain?
The assumption that the businessman doesn't do anything with his money is unrealistic. It also doesn't make sense to assume a 3-person economy. It would make more sense to run a model economy with 10,000 participants, with assumptions about how the market participants interact with each other, as an open Python script, including a miner who gets his $0.04 for every transaction.
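A skeleton of such a script, as a hedged sketch only: the $0.04 miner fee is the figure from this thread, while the starting cash, payoff sizes, and pref rate are placeholder assumptions:

```python
import random

# Toy agent-based model: 10,000 agents with hidden "skill" probabilities,
# a miner collecting a flat fee per transaction, and a tiny pref backflow
# to the payer. All parameters except the fee are illustrative.
random.seed(0)

N = 10_000
MINER_FEE = 0.04     # per transaction, from the thread
PREF_RATE = 0.00003  # fraction of created value flowing back to the payer

agents = [{"cash": 100.0, "skill": random.random(), "pref_in": 0.0}
          for _ in range(N)]
miner_income = 0.0

for step in range(10):                 # ten rounds of spending
    for payer in agents:
        if payer["cash"] < 1.0 + MINER_FEE:
            continue                   # too poor to transact this round
        payee = random.choice(agents)
        payer["cash"] -= 1.0 + MINER_FEE
        miner_income += MINER_FEE
        # payee "creates value" with probability equal to its hidden skill
        created = 1.5 if random.random() < payee["skill"] else 0.5
        payee["cash"] += created
        pref = created * PREF_RATE     # tiny backflow to the payer
        payer["cash"] += pref
        payer["pref_in"] += pref

print(f"miner income over 10 rounds: ${miner_income:,.2f}")
```

Extending this with decay fees and shell-node strategies, then checking whether `pref_in` actually tracks `skill`, would be the natural next step.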
I'm not sure I've correctly understood your question, but it's hard to see how anything much like that could follow from Bayes' theorem on its own.
Question updated. Any clearer?
Still doesn't seem like the thing that Bayes alone could possibly answer. It seems more like a question about differential equations or dynamical systems or something of the kind. All Bayes' theorem tells you is the relationship between certain conditional probabilities.
I guess the stupid question is: does it follow from Bayes that if you keep measuring the same probability over and over, you will converge on the 'actual' probability?
That's more like the Bernstein-von Mises theorem, I think. But that only applies if what you're doing is actually Bayesian updating, and it's not obvious to me that that's necessarily happening in the system you describe. (The actors might happen to be doing that, but you haven't said anything about how they make their decisions. Or there might be some more "automatic" bit of the system that's equivalent to Bayesian updating -- e.g., maybe some of the money flows might adjust themselves in ways that correspond to Bayesian updating -- but I don't see any reason to expect that from what you've said.)
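For the convergence intuition, here is a self-contained Beta-Bernoulli example of actual Bayesian updating (the 0.3 'true' success rate is an arbitrary assumption):

```python
import random

# Bayesian updating on repeated Bernoulli trials: the posterior mean of a
# Beta(1, 1) (uniform) prior converges on the true success probability.
random.seed(42)

true_p = 0.3        # hidden "actual" success rate (assumption)
alpha, beta = 1, 1  # uniform Beta prior

for trial in range(10_000):
    if random.random() < true_p:
        alpha += 1  # success observed
    else:
        beta += 1   # failure observed

posterior_mean = alpha / (alpha + beta)
print(f"posterior mean after 10,000 trials: {posterior_mean:.3f}")
```

This is the clean case; whether the economic system described above implements anything equivalent to this update rule is exactly the open question.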
This was really helpful and gives me some great stuff to look at.
Thank you.
My theory is that actors in an economy spend cash on things; some of those things produce lasting value in the economy and some don't. Each actor's probability of making a valuable choice that leads to overall growth is unknown. If we reward those that make a valuable choice with fresh cash, they then have the opportunity to succeed or fail again. If we do this over and over, the 'right' probabilities will emerge and we will see who the 'best spenders' are by who has the biggest rewards flowing back.
We optimize for value creation and in the long run have a system with better and better information.
That is interesting. What do you mean by 'on its own'? Are there some other things that affect the application of Bayes to a system?
Let me think about reformulating the question now that I'm not on an iPhone.
Duhigg's The Power of Habit is great but very hard to use. The idea is to keep the trigger, keep the reward, but change the action that leads to the reward. But it is not trivial to find less harmful or more helpful actions leading to the same rewards. Can we try to make a list together? I.e. e-cigs, non-A beer, similar ideas.
The stupid part is how incredibly hard it is to come up with replacements that in hindsight seem extremely d'uh. I mean, people are still buying the sugared version of Coke, not the Zero, right? Probably more ugh field than cognitive difficulty, still.
It's not a straightforward case:
Yes, but you know the "with fries and make it large, but diet coke, I am trying to lose weight tee hee hee" stereotype, right? :) Usually diet coke is drunk by people who are fighting their unhealthy habits, as it seems people who always had healthy ones are more content with water.
The fact that the stereotype exists doesn't mean that the strategy works. It only shows that the marketing works.
It should really depend on what is to be replaced. It's difficult to think of examples otherwise. Maybe, make yourself a special cup of tea - like, it is five o'clock, I shall drink this my very favourite cup of tea with lemon and 1 1/5 lumps of sugar on my balcony, and count the day as a win?:)
Not sure that artificial sweeteners are ok for humans. Specifically, at least one of my family members is allergic to aspartame (sp?), so I tend to consider the stuff more dangerous than sugar, which I at least know I can metabolize with fairly predictable effects.
Suppose you became deeply religious as a young adult and married someone of the same religion with a traditional promise to be loyal to them until death. Divorce was unthinkable to your spouse and you had repeatedly reassured them that you fully meant to keep your promise to never leave them, no matter what changes the future brought. You are now no longer religious and remaining married to this person makes you miserable in ways you are sure you can't fix without betraying who you currently are. Is it moral to leave your partner? Why and why not? (Don't worry, this is a hypothetical situation.)
No, since "no matter what changes the future brought" includes changes of religion.
Does it? It literally does, but you probably weren't thinking that at the time.
Good. :D
Maybe a good method to evaluate the strength of this objection would be to invent many other scenarios that people are not thinking about when they speak about "no matter what changes the future brings", and ask how they feel about those other scenarios. Then use them as an outside view for the change of religion.
Assuming they only married me because they knew I was never going to leave them, no it isn't.
ETHICAL INJUNCTION:
Any moral reasoning that results in "...and I will be miserable for the rest of my life" that is not extremely difficult to prevent and has few other tradeoffs is probably not correct, no matter how well-argued.
Identity may be continuous, but it is not unchanging. You are not the person you were back then and are not required to be bound by that person's precommitments, any more than by someone else's. To be quasi-formal, the vows made back then are only morally binding on the fraction of your current self which is left unchanged from your old self. Or something like that.
Would you not object to your neighbor's refusal to return the set of tools you lent him on account of his having had a religious conversion?
What religion would compel you to do that?
Then don't make it a set of tools but a money loan. He switches to Islam and now thinks that interest on loans is immoral.
Imagine you're elected leader of a country. The last leader defended against an invasion by putting the country into debt. If he hadn't done that, the country would now be under control of the other country's totalitarian regime. You can pay the debt, but if you don't nobody can force you. Should you repay the debt? Are you bound by the precommitments of your predecessor?
A country that is known to elect new leaders cannot credibly precommit to paying back a loan unless it is in a situation that is robust against new leaders refusing to pay back the loans. So you would in fact be bound by the precommitments of your predecessor whether you wanted to be or not, though the exact mechanism can vary depending on exactly what made the precommitment credible.
Suppose the mechanism is that they're electing people that care about the country. Would this mechanism work? Would you and the other leaders consistently pay back loans?
If the mechanism didn't work, then the precommitment wouldn't be credible, and the people making the loans would have known that there is no credible precommitment.
And thus the country will fall. Since the leaders care about the country, they'd rather pay back some loans than let it fall, so the mechanism will work, right?
That's highly misleading. Empirically, many countries have successfully raised debt, and paid it back, despite debt-holders having no defense against a new leader wanting to default.
I think one defence those debt-holders have is that those countries have traditions of repaying debts.
Another is that, regardless of whether you're formally committed to repaying loans, if you default on one then you or your successors are going to get much worse terms (if any) for future loans. So a national leader who doesn't want to screw the country over is going to be reluctant to default.
Derek Parfit, on identity, talks about psychological connectedness (examples: recalling memories, continuing to hold a belief or desire, acting on earlier intentions), and continuity, which is the ancestral of connectedness. It sounds like you are saying that commitments should be binding based primarily on connectedness, not on continuity. But this has certain disadvantages. If I take the suggested attitude, I will be a less attractive partner to make deals and commitments with.
(I didn't downvote your comment BTW. But I bet my worries are similar to those of whoever did.)
Ah, yes, connectedness is indeed what I meant. Thanks! My point was that, while legal commitments transcend connectedness, moral ones need not.
I don't consider it moral for two people to make each other suffer for years instead of admitting their mistake and moving on with their lives. That's the result of pride, not forbearance. Still worse if one party suffers while the other remains pleased.
If there are severe practical obstacles to divorce then that's one thing, but even then there are ways around that. It's nothing unusual for a couple to separate while remaining married. For example, Warren Buffett had such an arrangement for nearly 30 years--until his wife died.
--Meatloaf
This sounds like a place where Kantian ethics would give the right answer. I think there is some point at which it would be stupid not to seek divorce, and some point at which the promise you made is indeed more important; the thing that differentiates those two states is not whether you want a divorce now, but which procedure it would be better for people to follow - the one that has you stay married here, or the one that has you divorce here.
Kantian ethics would almost definitely say to never divorce. Kantianism is not the same as Rule Utilitarianism!
Even if we ignore for a moment the fact that Kantian ethics doesn't say anything because it's not well-defined, it's not at all clear to me that this is true. As it stands, your statement sounds like it's based more on popular impressions of what Kantian ethics is supposedly like than an actual attempt at Kantian reasoning.
Okay, thanks :)
The issue is with the decision, so asking "Is it moral?" is a potentially misleading framing because of the connotations of "moral" that aren't directly concerned with comparing effects of alternative actions. So the choice is between the scenario where a person made promises etc. and later stuck with them while miserable, and the scenario where they did something else.
I'm asking what would make you justify leaving or staying.
"Justify" has a similar problem. Justifications may be mistaken, even intentionally so. Calling something a justification emphasizes persuasion over accuracy.
This assumes that different kinds of religiosity tend to converge on similar ethics about marital commitments and fidelity. You could become "deeply religious" in a way which allows for divorce or outside relationships.
This also assumes that your religion's doctrine on these matters remains stable over many generations. If your religious community accepts 22nd+ Century medicine and permits its members to seek treatment for engineered negligible senescence and superlongevity, then you could live long enough to see your religion undergo a Reformation-like event which allows for a more flexible view of marriage and sexual relationships.
I think I've mentioned this before, but I find Ridley Scott's portrayal of Future Christians in the film Prometheus interesting. The space ship's archaeologist character, Elizabeth Shaw (played by Swedish actress Noomi Rapace), wears a cross and professes christian beliefs at a time when christianity has apparently gone into decline and christians have become relatively uncommon. Yet as a single christian woman she has a sexual relationship with a man on the ship, which suggests that christian sexual morality during that religion's long twilight will tend to converge with secular moral views.
First two paragraphs seem reasonable. To the third though:
Many, many self-identified Christians from pretty much all denominations have premarital sex. See e.g. here. And this isn't a new thing; even among the Puritans it was not uncommon (in their case we can tell from the extremely short times between many marriages and the recorded births of children).
File this under "things that could probably be said better, but which might be better said than not said, given that I won't action it later".
Whenever I see a post or question of the type "is X moral", I have an instinctual aversive reaction because such questions seem to leave so much that still needs to be asked, and the important questions are not even addressed, so even taking a potshot at the question requires wheeling some rather heavy equipment up to do some rather heavy digging as to the values, priorities, risk tolerance, etc of the person asking the question.
Re "the important questions are not even addressed": Fundamentally, are you trying to satisfice or maximize here? Are you trying to figure out the "optimal" action per those values that you group in the "morality" category, or are you trying to figure out which actions have an acceptable impact in terms of those values (such that you're then going to choose between the acceptable possibilities with a different set of values?) Once the meta's taken care of, what are the actual things that you value? Inferential distance is often pretty humongous in this regard, so more explicit often is better.
Maybe a more concrete example will be useful. If I ask you "what computer should I buy?", I should not take an immediate answer seriously with no further info, because I know you have no way of knowing what my decision criteria are (and it's kinda hard for your recommendation to align with them by chance). As such, I would probably want to give you a decent amount of information regarding my relevant preferences if I ask for such a recommendation... am I going to play games? Office work? It might even be useful to specify the type of games I'm playing and whether graphics are a biggie for me, etc.
When I don't see this type of info flow occurring, it feels like a charade, because if I were the one asking the question I would have to discard any answers that I got in the absence of such info about preferences, etc.
Again, apologies for going meta + possibly abrasive tone at the same time. Just trying to help discussions like this get started off on the right foot, as it feels like I see them more and more lately. Probably tapping out.
ETA punctuation.