AA is willing to pay in order to achieve a more egalitarian outcome. in other words: AA is willing to pay money in order to force others to be more like him.
my point is exactly that desire to change the payoff matrix itself: one monkey gets the banana and the other monkey cries justice. justice is formalized fairness. I can easily envision that AA would also pay in order to alter the payoff matrix.
So let's set up another trial of this with an added meta dilemma: in each case the disadvantaged member of the trial can forfeit another 5 points in order to alter the pa...
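The proposed trial can be sketched in a few lines. Everything here beyond the 5-point forfeit is a hypothetical assumption (the exact unequal and egalitarian splits are invented for illustration):

```python
# A minimal sketch of the meta-dilemma trial. The payoff numbers for the
# unequal and egalitarian splits are hypothetical; only the 5-point
# forfeit comes from the proposal above.

UNEQUAL = (50, 10)   # (advantaged, disadvantaged) points per round
EQUAL = (30, 30)     # the more egalitarian matrix that can be bought
ALTER_COST = 5       # points forfeited by the disadvantaged player

def play_round(payoffs, alter):
    """Return (advantaged, disadvantaged) scores for one round."""
    adv, dis = payoffs
    if alter:
        adv, dis = EQUAL
        dis -= ALTER_COST  # the price of altering the matrix
    return adv, dis

# Keep the unequal matrix:
print(play_round(UNEQUAL, alter=False))  # (50, 10)
# Pay 5 points to force the egalitarian matrix:
print(play_round(UNEQUAL, alter=True))   # (30, 25)
```

The interesting question is how often subjects choose the second line even though it lowers their own absolute score relative to some alternatives.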
I regretted posting the original comment immediately, but felt like your comment "maybe this is why africa stays poor" was kind of a Pandora's box for this sort of thing.
all discussions lead inexorably towards ever more fundamental issues until eventually you're talking about axiomatic beliefs. This seems to fall in line with the idea that either you have different priors or one of you has made a mistake. Since this is a community of intelligent commenters, it follows that most disagreements are probably due to different core values/assumptions.
Bu...
it's disingenuous to blame NASA, as if we couldn't afford both!
the point here is that the money the government spends is 100% wasted on these things, not that we should find ways to pay for more stuff. I don't support government spending at all. when I talk about environmentalism I'm talking about the government whipping people into a frenzy in order to justify ridiculous schemes that private enterprise would never support. If there were less taxation and people were rational about picking charities to reduce overall suffering, micronutrient and clea...
A big part of the reason Africa stays poor is that nutrition and education are so poor that sub-Saharan IQs average about 70. Environmentalism pisses me off because for a fraction of what we are spending on the public hysteria we could be providing micronutrients that would lead to huge decreases in overall suffering. Ditto for providing clean water.
What the hell is green tech? Is it just more efficient tech? Or does it have less to do with the technology and more to do with economic agents acknowledging externalities, consciously choosing to internalize some of that cost?
terrifying freedom
I believe this is one of the prime motivators for religion, conspiracy theories, and all other manner of hidden-organization schemes: the thought that this is literally IT and no one will judge the wicked, no one is guiding the leviathan, no one will care if you make a stupid mistake and it costs you your life.
"The cold, suffocating dark goes on forever and we are alone. Live our lives, lacking anything better to do. Devise reason later. Born from oblivion; bear children, hell-bound as ourselves, go into oblivion. There is nothing e...
Huh, I was unaware that the whole concept of spandrels had originated with Gould. Point taken: perhaps one can reinterpret seemingly random noise as itself an adaptation that overcomes simple hill climbing. Mutations themselves are a random walk; selection is not random. The environment acts as a hill, organisms as hill-climbing algorithms, with the top of the hill being maximally efficient use of resources for reproduction. Is this correct?
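The hill-climbing picture can be made concrete in a few lines. The fitness function, step size, and generation count below are arbitrary illustrative choices, not anything from the discussion:

```python
import random

# Toy model: the environment is a fitness hill, the organism is a
# hill-climbing algorithm. Mutation is the random walk; selection
# (keeping only non-worsening mutants) is the non-random part.

def fitness(x):
    # A single smooth hill peaked at x = 3, a stand-in for
    # "maximally efficient use of resources for reproduction".
    return -(x - 3.0) ** 2

def evolve(x, generations=2000, step=0.1, seed=0):
    rng = random.Random(seed)
    for _ in range(generations):
        mutant = x + rng.gauss(0, step)    # random mutation
        if fitness(mutant) >= fitness(x):  # non-random selection
            x = mutant
    return x

print(evolve(0.0))  # ends up near the peak at x = 3
```

Random steps plus non-random retention is enough to climb the hill, which is the asymmetry the comment is pointing at.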
yes, the easiest way to spot scientism is to look for value statements being conflated with factual statements. In many cases this is done unintentionally; the persuaders can't help it because they can't distinguish between the two. 1) you falsify data that someone thought was factual and used to support their values; they take this as an attack on said values. 2) you point out errors in the chain of logic between factual statements and values, and/or point out that there is no valid chain between their values and the facts. 3) you make a fac...
scientists fight over the division of money that has been block-allocated by governments and foundations. I should write about this later.
yes you should. this is a very serious issue. in art the artist caters to his patron. the more I see of the world of research in the U.S. the more I am disturbed by the common source of the vast majority of funding. science is being tailored and politicized.
if the SHs find humans via another colony world, blowing up Earth is still an option. I don't believe the SHs could have been bargained with. They showed no inclination toward any compromise other than whichever one they had calculated as optimal based on their understanding of humans and Babyeaters. Because the SHs don't seem to value the freedom to make sub-optimal choices (free will), they may also worry much less about making incorrect choices based on imperfect information (this is the only rational reason I can come up with for them wantin...
ZM: I'm not saying that the outcome wouldn't be bad from the perspective of current values, I'm saying that it would serve to lessen the blow of a sudden transition. The knowledge that they can get back together again in a couple of decades seems like it would placate most. And I disagree that people would cease wanting to see each other. They might prefer their new environment, but they would still want to visit each other. Even if Food A tastes better in every dimension than Food B, I'll probably still want to eat Food B every once in a while.
James: Considering the...
A few decades with superstimulus-women around for the men, and superstimulus-men for the women? I don't expect that reunification to happen.
Although that doesn't in any way say that there's anything bad about this scenario. cough
EDIT: it would be bad if they didn't manage to get rid of the genie; then humanity would be stuck in this optimised-but-not-optimal state forever. As it is, it's a step forward if only because people won't age any more.
This story would be more disturbing if the 90% threshold was in fact never reached, as more and more people chang...
rw: methods of short-circuiting the sex drive fall into two categories. the first is controlling sensory input (holodecks/virtual reality and/or cyborgs). the second is bypassing the senses and directly messing with the brain itself via implants or genetic manipulation.
the second type is more prone to unintended consequences than the first.
Our drive to do better than our neighbor is a deeply ingrained metric of how we judge ourselves. In essence we recognize that our own assessment is biased and look for cues from others. Eliminating this seems like eliminating part of the foundation of a social species.
I think you're being remarkably binary about this. I think it more realistic that non-sentient sexdroids will enable healthier relationships. When people get the urge to procreate with fitter partners they can just spend an afternoon in the holodeck. I see what you're saying as advocating keeping people a little hungry so that they appreciate food more.
an investment earning 2% annual interest for 12,000 years adds up to a googol (10^100) times as much wealth.
no, it adds up to a googol of economic units. in all likelihood the actual wealth that the investment represents will stay roughly the same, or grow and shrink within fairly small margins.
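The quoted figure itself checks out in nominal units; a quick log computation confirms the order of magnitude:

```python
import math

# Verify the quoted claim: 2% annual interest compounded for 12,000 years.
# Growth factor is (1.02)^12000; take log10 to read off the order of magnitude.
years, rate = 12_000, 0.02
log10_growth = years * math.log10(1 + rate)
print(round(log10_growth, 1))  # 103.2, i.e. growth of about 10^103
```

So the nominal balance really does exceed a googol (10^100) of units, which is exactly why it can only be a claim about units of account rather than real wealth.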
it seems you conclude with an either/or on subjective experience improvement and brain tinkering. I think it more likely that we will improve our subjective experience up to a certain point of feasibility and then start with the brain tinkering. Some will clock-out...
the value of this is most easily demonstrated in daydream scenarios. I'm guessing that other people, like me, find themselves going through some of the same fantasies time and time again, whether they be about wealth, sex, prestige or whatever else. A few days ago I banished all these familiar fantasies and spent some time thinking up new ones. Not only was it a wonderfully fun exercise, it seemed to increase my creativity when doing other activities throughout the day.
the difference between reality and this hypothetical scenario is where control resides. I take no issue with the decentralized future roulette we are playing when we have this or that kid with this or that person. all my study of economics and natural selection indicates that such decentralized methods are self-correcting. in this scenario we approach the point where the future cone could have this or that bit snuffed out by the decision of a singleton (or a functional equivalent). advocating that this sort of thing be slowed down so that we can weigh the decisions carefully seems prudent. isn't this sort of the main thrust of the friendly AI debate?
what effect would it have on the point
if rewinding is morally unacceptable (erasing could-have-been sentients) and you have unlimited power to direct the future, does this mean that all the could-have-beens from futures you didn't select are on your shoulders? This is directly related to another recent post. If I choose a future with fewer sentients who have a higher standard of living, am I responsible for the sentients that would have existed had I chosen to let a higher number of them be created? If you're a utilitarian this is the delicate...
Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.
you can make entropy run in reverse in one area as long as a compensating amount of entropy is generated somewhere else within the system. what do you think a refrigerator is? what if the extra entropy that needs to be generated in order to rewind is shunted off to some distant corner of the universe that doesn't affect the area you are worried about? I'm not talking about literally making time run in reverse. You can achieve what is functionally the same thing by reversing all the atomic reactions within a volume and shunting the entropy generated by the energy you used to do this to some other area.
I think it's worth noting that truly unlimited power means being able to undo anything. But is it wrong to rewind when things go south? if you rewind far enough you'll be erasing lives and conjuring up new different ones. Is rewinding back to before an AI explodes into a zillion copies morally equivalent to destroying them in this direction of time? unlimited power is unlimited ability to direct the future. Are the lives on every path you don't choose "on your shoulders" so to speak?
Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?
given that the vast majority of possible futures are significantly worse than this, I would be pretty happy with this outcome. but what happens when we've filled the universe? much like in the board game Risk, your attitude towards your so-called allies will abruptly change once the two of you are the only ones left.
Peter: if your change of utility function is one of domain rather than degree, you can't calculate the negative utility. the difference in utility between making 25 paperclips a day and 500 a day is a calculable difference for a paperclip-maximizing optimization process.
however, if the paperclip optimizer self-modifies and inadvertently changes its utility function to maximizing staples... well, you can't calculate paperclips in terms of staples. This outcome is of infinite negative utility from the perspective of the paperclip maximizer. And vice versa. On...
I think that an empirical approach to self-modification would quickly become prominent: alter one variable, test it, with a self-imposed timeout clause. the problem is that this does not apply to one sort of change: a change in utility function. an inadvertent change of utility function is extremely dangerous, because changing your utility function is of infinite negative utility by the standards of your current utility function, and vice versa.
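The "alter one variable, test, revert" loop can be sketched as follows. The `Agent` class, its parameters, and the benchmark are all hypothetical illustrations; the one structural point taken from the comment is that a utility-function change is the change that can never be trialed empirically:

```python
import copy

# A minimal sketch of empirical self-modification with a timeout clause.
# All names and numbers here are hypothetical illustrations.

class Agent:
    def __init__(self):
        self.params = {"planning_depth": 3, "risk_tolerance": 0.5}
        self.utility_fn = "maximize_paperclips"  # never trialed empirically

    def performance(self):
        # Stand-in benchmark; a real agent would measure achieved utility.
        return self.params["planning_depth"] - abs(self.params["risk_tolerance"] - 0.4)

def trial_modification(agent, key, value):
    """Trial one parameter change; revert unless it helps (the timeout clause)."""
    if key == "utility_fn":
        # No empirical trial is possible: the post-change agent would score
        # the change by the new utility function, not the current one.
        raise ValueError("utility function changes cannot be trialed empirically")
    snapshot = copy.deepcopy(agent.params)
    baseline = agent.performance()
    agent.params[key] = value
    if agent.performance() <= baseline:  # no improvement: roll back
        agent.params = snapshot
        return False
    return True

a = Agent()
print(trial_modification(a, "risk_tolerance", 0.4))  # True: performance improved
```

The rollback works for any parameter because the current utility function survives to do the scoring; that is exactly what fails when the utility function itself is the variable being altered.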
frelkins: in that vein, what if we could flip the switch in the brain that usually only flips when you are sleeping with a new partner? isn't that half of humanity's sex problems gone in one shot? it seems to me that the realm of sex is the one in which it is most obvious that desires shaped by natural selection are not in line with actual happiness and fulfillment.
what's more fun? a holodeck that you have complete control over, or a holodeck with built-in constraints?
playing god might be fun for a while, but I think everyone would eventually switch over to programs with built-in constraints to challenge themselves. the profession of highest prestige will probably belong to people who write really, really good holodeck programs.
Shuman: hmm, true. alright: a fission reactor with enough uranium to power everything for several lifetimes (whatever my lifetime is at that point), and accelerate the asteroid up to relativistic speeds. aim the ship out of the galactic plane. the energy required to catch up with me will make it unprofitable to do so.
the most important adaptation an ideology can make to improve its inclusive fitness for consumption by the human brain is to
1 is accomplished by making the ideology rest on a priori claims. everything that rests on top of that claim can be perfectly logical given the premise. since most people don't examine their beliefs axiomatically, few will question the premise as long as they are provided the bare minimum of comfort. 2 is accomplished by activating the "mor...
I hope I live to see a world where synchronous computing is considered a quaint artifact of the dawn of computers. cognitive bias has prevented us from seeing the full extent of what can be done with this computing thing. a limit on feasible computability (limited by our own brain capacity) that has existed for millions of years, shaping the way we assume we can solve problems in our world, is suddenly gone. we've made remarkable progress in a short time; I can't wait to see what happens next.
in the course of natural selection, conformity to social values took on a much higher priority than truth, especially for women, who are vulnerable and must adapt to please whichever males are in charge at the time. confronting the average person with the truth is a waste of time; they place a higher priority on social status. If you live in a primarily Christian community, don't expect anyone to listen: they would lose status by seriously considering your doubts.
the likely result is that pundits would start taking more care to make their predictions untestable.
this is already the norm: 1) make a qualitative prediction; 2) reject criticism with the "no true Scotsman" fallacy (x wasn't really an example of y because z).