Thanks for the summary of various models of how to figure out what to work on. While reading it, I couldn't help but focus on my frustration about the "getting paid for it" part. Personally, I want to create a new programming language. I think we are still in the dark age of computer programming and that programming languages suck. I can't make a perfect language, but I can take a solid step in the right direction. The world could sure use a better programming language if you ask me. I'm passionate about this project. I'm a skilled software developer with ...
I'm confused by the judges' failure to use the search capabilities. I think we need more information about how the judges are selected. It isn't clear to me that they are representative of the kinds of people we would expect to act as judges in future scenarios of superintelligent AI debates. For example, a simple and obvious tactic would be to ask both AIs what one ought to search for in order to verify their arguments. An AI that can make very compelling arguments still can't change the true facts known to humanity to suit its needs.
This is not sound reasoning because of selection bias. If any of those predictions had been correct, you would not be here to see it. Thus, you cannot use their failure as evidence.
As someone who believes in moral error theory, I have problems with the moral language ("responsibility to lead ethical lives of personal fulfillment", "Ethical values are derived from human need and interest as tested by experience.").
I don't agree that "Life’s fulfillment emerges from individual participation in the service of humane ideals" or that "Working to benefit society maximizes individual happiness." Rather, I would say some people find some fulfillment in those things.
I am vehemently opposed to the deathist language of "finding wonder and awe in the ...
I agree with your three premises. However, I would recommend using a different term than "humanism".
Humanism is more than just the broad set of values you described. It is also a specific movement with more specific values. See for example the latest humanist manifesto. I agree with what you described as "humanism" but strongly reject the label humanist because I do not agree with the other baggage that goes with it. If possible, try to come up with a term that directly states the value you are describing. Perhaps something along the lines of "human flourishing as the standard of value"?
I am signed up for cryonics with Alcor and did so in 2017. I checked and the two options you listed are consistent with the options I was given. I didn't have a problem with them, but I can understand your concern.
I have had a number of interactions with Alcor staff both during the signup process and since. I always found them pleasant and helpful. I'm sorry to hear that you are having a bad experience. My suggestion would be to get the representative on the phone and discuss your concerns. Obviously, final wording should be handled in writing but I think ...
While I can understand why many would view advances toward WBE as an AI-safety risk, many in the community are also concerned with cryonics. WBE is an important option for the revival of cryonics patients. So I think the desirability of WBE should be clear. It just may be the case that we need to develop safe AI first.
As someone interested in seeing WBE become a reality, I have also been disappointed by the lack of progress. I would like to understand the reasons for this better. So I was interested to read this post, but you seem to be conflating two different things: the difficulty of simulating a worm and the difficulty of uploading one. There are a few sentences hinting that both are unsolved, but they should be clearly separated.
Uploading a worm requires being able to read the synaptic weights, thresholds, and possibly other details from an individual worm. No...
A study by Alcor trained C. elegans worms to react to the smell of a chemical. They then demonstrated that the worms retained this memory even after being frozen and revived. Were it possible to upload a worm, the same exact test would show that you had successfully uploaded a worm with that memory vs. one without that memory.
Study here: Persistence of Long-Term Memory in Vitrified and Revived Caenorhabditis elegans
I think you are being overly optimistic about homomorphic encryption. The uFAI doesn't need to have absolute control over how the computation happens. Nor does it need to be able to perfectly predict the real-world results of running some computation. It only needs some amount of information leakage. The best current example I can think of is timing attacks on cryptographic protocols. The protocol itself should be secure, but a side channel causes insecurity. Another example would be the Meltdown and Spectre vulnerabilities. How do you know your computatio...
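To make the timing-attack point concrete, here is a toy sketch in Python (the function names and secret values are illustrative, not any real protocol): a logically correct comparison can still leak information through how long it takes.

```python
import hmac

# Naive comparison: returns at the first mismatched byte, so running time
# leaks how many leading bytes of the guess are correct. The function is
# logically correct, yet the side channel makes it insecure.
def leaky_equal(secret: bytes, guess: bytes) -> bool:
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

# Constant-time comparison from the standard library: examines every byte
# regardless of where the first mismatch occurs, closing the side channel.
def safe_equal(secret: bytes, guess: bytes) -> bool:
    return hmac.compare_digest(secret, guess)
```

Both functions compute the same answer; the difference is entirely in what an attacker measuring wall-clock time can infer.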
I doubt the lack of 6-door cars has much to do with aesthetics. Doors and tight door seals are some of the more complex and expensive portions of the car body. Doors also pose challenges for crash safety, as a large opening in the car body weakens the main body's structural integrity in an accident. I suspect the real reason there are so few cars with 6 doors is the extra manufacturing cost, which would lead to higher prices. Most purchasers don't value the extra convenience of the additional doors enough relative to the added price, so any company producing such a car would find a very small market, which might make it not worth it to the manufacturer.
Recently many sources have reported a "CA variant" with many of the same properties as the English and South African strains. I haven't personally investigated, but that might be something to look into. Especially given the number of rationalists in CA.
As others have already answered better than I could: first, avoid being obligated for such large unexpected charges in the first place. The customer in the example may have canceled their credit card, but they are still legally obligated to pay that money.
To answer the actual question of how to put limits in place: you can use privacy.com. They allow you to create new credit card numbers that bill to your bank account but can have limits, both on total charges and on monthly charges. You can also close any number at any time without impact on your personal finances. It is meant for safe...
I'd be interested in seeing a write-up on whether people who've had COVID need to be vaccinated. I have a friend who was sick with COVID symptoms for 3 weeks and tested positive for SARS-CoV-2 shortly after the onset of symptoms. He is now being told by medical professionals that he needs to be vaccinated just the same as everyone else. I tried to look up the data on this. Sources like the CDC, Cleveland Clinic, and Mayo Clinic all state that people need to be vaccinated even if they have had COVID. However, their messaging seems to be contradictory. There are...
It's unfortunate that we have this mess. But couldn't this have been avoided by defaulting to minimal access? Per Mozilla (https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies), if a cookie's domain isn't set, it defaults to the domain of the site excluding subdomains. If instead, this defaulted to the full domain, wouldn't that resolve the issue? The harm isn't in allowing people to create cookies that span sites, but in doing so accidentally, correct? The only concern is then tracking cookies. For this, a list of TLDs which it would be invalid to sp...
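As a concrete illustration of the default described by MDN, here is a small sketch using Python's standard http.cookies module (the cookie name and domain are made up for the example):

```python
from http.cookies import SimpleCookie

# Host-only cookie (today's default when Domain is omitted): browsers send
# it only to the exact host that set it, not to subdomains.
host_only = SimpleCookie()
host_only["session"] = "abc123"

# Domain cookie: an explicit Domain attribute widens the scope to the named
# domain and all of its subdomains.
scoped = SimpleCookie()
scoped["session"] = "abc123"
scoped["session"]["domain"] = "example.com"

print(host_only.output())  # Set-Cookie: session=abc123
print(scoped.output())     # Set-Cookie: session=abc123; Domain=example.com
```

The question in the comment is essentially whether the second, wider form should have required even more explicit opt-in than it does.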
Even after reading your post, I don't think I'm any closer to comprehending the illusionist view of reality. One of my good and most respected friends is an illusionist. I'd really like to understand his model of consciousness.
Illusionists often seem to be arguing against strawmen to me. (Notwithstanding the fact that some philosophers actually do argue for such "strawman" positions.) Dennett's argument against "mental paint" seems to be an example of this. Of course, I don't think there is something in my mental space with the property of redness. Of cours...
Do we need to RSVP in some way?
I can parse your comment a couple of different ways, so I will discuss multiple interpretations but forgive me if I've misunderstood.
If we are talking about 3^^^3 dust specks experienced by that many different people, then it doesn't change my intuition. My early exposure to the question included such unimaginably large numbers of people. I recognize scope insensitivity may be playing a role here, but I think there is more to it.
If we are talking about myself or some other individual experiencing 3^^^3 dust specks (or 3^^^3 people each experienci...
At least as applied to most people, I agree with your claim that "in practice, and to a short-term, first-order approximation, moral realists and moral anti-realists seem very similar." As a moral anti-realist myself, I suspect the likely explanation is that both are engaging in the kind of moral reasoning that evolution wired into them. The realist and the anti-realist are then each offering post hoc explanations for their behavior.
With any broad claims about humans like this, there are bound to be exceptions. Thus all the qualifications you p...
Not going to sign up with some random site. If you are the author, post a copy that doesn't require signup.
I think moving to frontpage might have broken it. I've put the link back on.
I'm not sure I agree. Sure, there are lots of problems of the "papercut" kind, but I feel like the problems that concern me the most are much more of the "dragon kind". For example:
What is going on here? Copy me
Copy me
[Yes](http://hangouts.google.com)
*hello*
http://somewhere.com
Can I write a link here [Yes](http://hangouts.google.com)
You should probably clarify that your solution assumes the variant where the god's head explodes when given an unanswerable question. If I understand correctly, you are also assuming that the god will act to prevent their head from exploding if possible. That doesn't have to be the case. The god could be suicidal but unable to die in any other way, so when you give them the opportunity to have their head explode, they will take it.
Additionally, I think it would be clearer if you could offer a final English-sentence statement of the complete question that doesn't involve self-referential variables. The variable formulation is helpful for seeing the structure, but confusing in other ways.
Oh, sorry
A couple typos:
The date you give is "(11/30)" it should be "(10/30)"
"smedium" should be "medium"
I feel strongly that link posts are an important feature that needs to be kept. There will always be significant and interesting content created on non-rationalist or mainstream sites that we will want to be able to link to and discuss on LessWrong. Additionally, while we might hope that all rationalist bloggers would be ok with cross-posting their content to LessWrong, there will likely always be those who don't want to and yet we may want to include their posts in the discussion here.
A comment of mine
What you label "implicit utility function" sounds like instrumental goals to me. Some of that is also covered under Basic AI Drives.
I'm not familiar with the pig that wants to be eaten, but I'm not sure I would describe that as a conflicted utility function. If one has a utility function that places maximum utility on an outcome that requires their death, then there is no conflict; that is the optimal choice. I think humans who believe they have such a utility function are usually mistaken, but that is a much more involved discussion.
Not...
I'm not opposed to downvote limits, but I think they need to not be too low. There are situations where I am more likely to downvote many things just because I am more heavily moderating. For example, on comments on my own post I care more and am more likely to both upvote and downvote whereas other times I might just not care that much.
I have completed the survey and upvoted everyone else on this thread
There is a flaw in your argument. I'm going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.
Your conclusions about scenarios 1, 2 and 3 are correct.
You state that Bostrom's disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion ("the principle of indifference leads us to believe that we are not in a simulation") which, as I'll argue below, is incorrect. Your disjunct sh...
We are totally blindfolded. He specified that they would be "ancestor simulations" thus in all those simulations they would appear to be in a time prior to simulation.
Looks like the poster edited the post since you took this quote. The last two sentences have been removed. Though they might not have explained it well, OP is correct on this point. I think the two removed sentences confused it, though.
Crucially you are "told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X." You are given information about your temporal position relative to all of those people. So regardless whether they were asked the question when t...
If you can afford it, it makes more sense to sign up at Alcor. Alcor's patient care trust improves the chances that you will be cared for indefinitely after cryopreservation. CI asserts its all-volunteer status as a benefit, but the cryonics community has not been growing and has been aging. It is not unlikely that there could be problems with the availability of volunteers over the next 50 years.
This post was meant to apply either when you find that your own folk ontology is incorrect, or to assist people who agree that a folk ontology is incorrect but find themselves disagreeing because they have chosen different responses. Establishing the folk ontology to be incorrect was a prerequisite and, like all beliefs, should be subject to revision based on new evidence.
This is in no way meant to dismiss genuine debate. As a moral nihilist, I might put moral realism in the category of incorrect "folk ontology". However, if I'm discussing or d...
When we find that the concepts typically held by people, termed folk ontologies, don't correspond to the territory, what should we do with those terms/words? This post discusses three possible ways of handling them. Each is described and discussed with examples from science and philosophy.
The reality today is that we are probably still a long way off from being able to revive someone. To me, the promise of cryonics has a lot to do with being a fallback plan for life extension technologies. Consequently, it is important that it be available and used today. Thus my definition of success. That said, if the cryonics movement were more successful in the way I have described, a lot more effort and money would go into cryonics research and bring us much closer to being able to revive someone. It would also mean that currently cryopreserved patients would be more likely to be cared for long enough to be revived.
I agree that signing up for cryonics is far too complicated and this is one of the things that needs to be addressed. My friend and I have a number of ideas how that might be done.
While I'm not sure about late night basic cable infomercials, existing cryonics organizations certainly don't carry out much if any advertising. There are a number of good reasons that they are not advertising. Those can and should be addressed by any future cryonics organization.
To me, success would be the number of patients signed up for cryonics, greater cultural acceptance, and recognition of cryonics as a reasonable patient choice from the medical field and government.
A friend and I are investigating why the cryonics movement hasn't been more successful and looking at what can be done to improve the situation. We have some ideas and have begun reaching out to people in the cryonics community. If you are interested in helping, message me. Right now it is mostly researching things about the existing cryonics organizations and coming up with ideas. In the future, there could be lots of other ways to contribute.
What does "successful" look like here? Number of patients in cryonic storage? Successfully revived tissues or experimental animals?
I find Jordan Peterson's views fascinating and have a rationalist friend whose thinking has recently been greatly influenced by him. So much so that my friend recently went to a church service. My problem with Peterson's view is that it ignores the on-the-ground reality that many adherents believe their religion to be true in the sense of being a proper map of the territory. This is in direct contradiction to Peterson's use of religion and truth. I warned my friend that this is what he would find in church. Sure enough, that is what he found, and he will not be returning.
I and some other rationalists have been thinking about cryonics a lot recently and how we might improve the strength of cryonics offerings and the rate of adoption. After some consideration, we came up with a couple suggestions for changes to the survey that we think would be helpful and interesting.
A question along the lines of "What impact do you believe money and attention put towards life extension or other technologies such as cryonics has on the world as a whole?" Answers:
The purpo
This is a review of the book Freezing People Is (Not) Easy by Bob Nelson. The book recounts his experiences as president of the Cryonics Society of California, during which he cryopreserved and then attempted (and failed) to maintain the cryopreservation of a number of early cryonics patients.
I think it is interesting that you think it is not very neglected. I assume you think that because languages like Rust, Kotlin, Go, Swift, and Zig have received various funding levels. Also, academic research is funding languages like Haskell, Scala, Lean, etc.
I suppose that is better than nothing. However, from my perspective, that is mostly funding the wrong things and even funding some of those languages inadequately. As I mentioned, Rust and Go show signs of being pushed to market too soon in ways that will be permanently harmful to the developers usin...