All of c0rw1n's Comments + Replies

c0rw1n10

I'd bet at odds of at least 1:20 that lung scarring and brain damage are permanent.

Answer by c0rw1n30

Please go read the most basic counterarguments to this class of objections to anti-aging at https://agingbiotech.info/objections/

c0rw1n100

In my experience as a subject of hypnosis, I always have a background thought that I could choose not to do/feel the thing, and that I choose to do/feel it as I'm told. I distinctly remember feeling that background thought there, before choosing to do, or letting myself feel, the thing I'm told. It is still surprising how much, and how many things that are usually subconscious, can be controlled through it, though.

On Wednesdays at the Princeton Graduate College, various people would come in to give talks. The speakers were often interesting, and in the discussions after the talks we used to have a lot of fun. For instance, one guy in our school was very strongly anti-Catholic, so he passed out questions in advance for people to ask a religious speaker, and we gave the speaker a hard time.
Another time somebody gave a talk about poetry. He talked about the structure of the poem and the emotions that come with it; he divided everything up into certain kinds of
... (read more)
2gjm
If you're saying "Manipulating people like that wouldn't work, because you always get to choose whether to do what a hypnotist tells you to" then I see two objections.

* The fact that you think you could choose not to do it doesn't mean you actually could in any very strong sense. Perhaps it just feels that way.
* It could be that when someone's explicitly, blatantly trying to manipulate you via your subconscious, you get to choose, but that a sufficiently skilled manipulator can do it without your ever noticing, in which case you don't have the chance to say no.

(I am not sure that that is what you're saying, though, and if it isn't then those points may be irrelevant.)
c0rw1n60

If your theory leads you to an obviously stupid conclusion, you need a better theory.

Total utilitarianism is boringly wrong for this reason, yes.

What you need is non-stupid utilitarianism.

First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither of those compensates for the other. They are not the same dimension with the sign reversed. Thi

... (read more)
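A toy sketch of the two-axes point above, representing an experience as a separate (utility, disutility) pair rather than a single signed number (the class and numbers here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Experience:
    utility: float     # positive goods, e.g. hugging a plushie
    disutility: float  # positive bads, e.g. kicking a bedpost

    def combine(self, other: "Experience") -> "Experience":
        # Goods and bads accumulate on separate axes; they do not cancel.
        return Experience(self.utility + other.utility,
                          self.disutility + other.disutility)

hug = Experience(utility=1.0, disutility=0.0)
kick = Experience(utility=0.0, disutility=1.0)
print(hug.combine(kick))  # Experience(utility=1.0, disutility=1.0), not a net zero
```

Collapsing the pair to `utility - disutility` would throw away exactly the distinction the comment is pointing at.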
1[comment deleted]
c0rw1n10

Didn't we use to call those "exokernels" before?

c0rw1n70

I'm curious who the half is and why. Is it that they are half a rationalist? Half (the time?) in Berkeley? (If it is not half the time then where is the other half?)

Also. The N should be equal to the cardinality of the entire set of rationalists you interacted with, not just of those who are going insane; so, if you have been interacting with seven and a half rationalists in total, how many of those are diving into the woo? Or if you have been interacting with dozens of rationalists, how many times more people were they than 7.5?

c0rw1n110

There was a web thing with a Big Red Button, running in Seattle, Oxford (and I think Boston also).

Each group had a cake and if they got nuked, they wouldn't get to eat the cake.

One second after the Seattle counter said the game was over, someone there punched the button for the lulz; but the Oxford counter was not at zero yet, so Oxford got nuked, and they then decided to burn the cake instead of just not eating it.

4Mati_Roy
"We have a positive singularity, let's launch nukes for fun"
1Taymon Beal
No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.
6gjm
This is both hilarious and horrifying.

I hope we all learned a valuable lesson here today.

c0rw1n20
  • Aumann's agreement theorem says that two people acting rationally (in a certain precise sense) and with common knowledge of each other's beliefs cannot agree to disagree. More specifically, if two people are genuine Bayesian rationalists with common priors, and if they each have common knowledge of their individual posterior probabilities, then their posteriors must be equal.

With common priors.

This is what does all the work there! If the disagreers have unequal priors on one of the points, then of course they'll have different posteriors.

Of course ap

... (read more)
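A minimal numeric sketch of how much work the common-priors assumption does: two agents share the evidence and the likelihoods, but start from different priors, and Bayes' rule gives them different posteriors (the numbers are arbitrary):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

p_e_given_h, p_e_given_not_h = 0.8, 0.3  # likelihoods both agents agree on

print(round(posterior(0.5, p_e_given_h, p_e_given_not_h), 2))  # prior 0.5 -> posterior ~0.73
print(round(posterior(0.1, p_e_given_h, p_e_given_not_h), 2))  # prior 0.1 -> posterior ~0.23
```

Same evidence, same update rule, very different posteriors; that is why the theorem only bites when the priors are common.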
7the gears to ascension
no, I was thinking of facebook. it needs to be a discussion platform, so it does need length, but basically what I want is "endless comment thread" type deal - a feed of discussion, as you'd get if the home page defaulted to opening to an endless open thread. as it is, open threads quarantine freeform discussion in a way that doesn't get eyes.
c0rw1n50

Thank you so much for writing this! I remember reading a tumblr post that explained the main point a while back and could never find it again (because tumblr is an unsearchable memory hole), and kept needing to link it to people who got stuck on taking Eliezer's joking one-liner seriously.

c0rw1n20

It may be that the person keeps expounding their reasons for wanting you to do the thing because it feels aversive to them to stop infodumping, and/or because they expect you to respond with your reasons for doing the thing, so that they know whether your doing the thing is an instance of 2 or of 3.

5Said Achmiz
This is no explanation. First, why do they even start infodumping? Second, if it feels aversive—why? Well, I explained why. (The motivation I described need not be conscious, note!) This is not consistent with my experience, in two ways. I have observed that either the person does not expect you to respond with anything (beyond assent and agreement that, indeed, yes, their reasons are very sound, yep, makes sense, of course, etc.); or, they do expect you to respond, but are unsatisfied if your response reveals case #2, and keep pushing, keep “persuading”, etc., until you visibly agree and acknowledge their reasons, and demonstrate that you’ve been successfully persuaded.
c0rw1n-10

The AIs still have to make atoms move for anything Actually Bad to happen.

4orthonormal
There's a lot of Actually Bad things an AI can do just by making electrons move.
2Ben Pace
(or the AIs have to at least make atoms fall apart #atomicbombs #snark)
c0rw1n170

No. Correct models are good. Or rather, more correct models, applied properly, are better than less correct models, or models applied wrongly.

All those examples, however, are bad:

  1. Calories in / Calories out is a bad model because different sources of calories are metabolized differently and have different effects on the organism. It is bad because it is incomplete and is used improperly for things it is bad at. It remains true that to get output from a mechanism you have to put some fuel into it; CICO is good enough to calculate, for example, how m

... (read more)
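As a minimal sketch of the kind of back-of-the-envelope energy-balance arithmetic point 1 says the model is adequate for (the ~7700 kcal per kg of fat figure is a rough rule of thumb, and the calculation ignores exactly the metabolic differences the comment criticizes):

```python
# Rough energy-balance arithmetic of the sort CICO can handle.
KCAL_PER_KG_FAT = 7700  # conventional rule-of-thumb figure, not a precise constant

def weeks_to_lose(kg: float, daily_deficit_kcal: float) -> float:
    """Estimate how many weeks a constant daily deficit takes to lose `kg`."""
    return kg * KCAL_PER_KG_FAT / (daily_deficit_kcal * 7)

print(round(weeks_to_lose(5, 500), 1))  # ~11.0 weeks at a 500 kcal/day deficit
```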
1NSegall
I think the title is a little bit misleading, and perhaps he didn't put much emphasis on this, but it seems he isn't claiming correct models are generally bad, just that there are also possible downsides to holding correct models and it's probably a good idea to be aware of these flaws when applying these models to reality. Also, it seems to me that he is defining 'correct model' as a model in which the reasoning is sound and which can be used for some applications, but which does not necessarily fully describe every aspect of the problem.
c0rw1n10

Not easy, no. But there is a shorter version here: http://nonsymbolic.org/PNSE-Summary-2013.pdf

1Richard_Kennaway
I'd be interested in Valentine's opinion of the paper. I read it and posted a long response to it in a reply to Elo. (Does LesserWrong support links to comments?) ETA: link to that comment.
c0rw1n30

Is this enlightenment anything like what is described in https://aellagirl.com/2017/07/07/the-abyss-of-want/ ?

Also possibly related: http://nonsymbolic.org/wp-content/uploads/2014/02/PNSE-Article.pdf (can you point out where on that map you think you found yourself?)

2malcolm.m.ocean
I clicked on a link in the first one and found my way to this post: https://aellagirl.com/2016/08/21/421/ The kind of nothingnessness that she describes in this post seems like it might be connected with why awakened states haven't been able to scale effectively. It seems to me that it is a non-obvious step to integrate a high level of awakeness with ongoing meaningness. I think that among other things, it requires having a community of people with a shared sense of awakeness. This makes sense, since humans are socio-cultural creatures. And then if the only sorts of communities that can maintain such a state tend to be unproductive (genetically or economically) because they are monasteries... then there are natural scaling limits. This raises questions like "what would an enlightened family look like?" and "what would an enlightened company look like?" and "what would an enlightened school look like?" It seems that (for lots of reasons) the cultures would be very different than what we're used to. I am interested in knowing if Valentine or others have thoughts on these questions or any other questions related to scaling or avoiding what are in essence nihilist traps!
3Valentine
The first one looks related. Hard to say though: a lot of it comes across to me as navigating conceptualizations until the concept circle broke. Something like that happened during my kenshō but it wasn't the focus for me at all. That was the distraction I had to set aside in order to Look. The second one looks longer than I want to dig through right now. Apologies. Is it easy for you to sketch what the map you're referring to is?
c0rw1n10

I'm thinking of something like a section on the main lesserwrong.com page showing the latest edits to the wiki, so that users of the site could see them and go check whether what changed in an article is worth awarding points for.

c0rw1n40

I think the lesswrong wiki was supposed to be that repository of the interesting/important things that were posted to the community blog.

It could be a good idea to make a wiki in lw2.0 and award site karma to people contributing to it.

3vedrfolnir
Right -- wikis and blogs have different uses. Blogs (or magazines, or letters...) are for hashing things out; wikis (or books, or journals [although journals can do both]...) are for writing things up once they've been hashed out. This is useful to avoid the "Facebook search problem", which IMO hasn't been getting as much attention as it should, especially given that it was one of the reasons listed for why LW ought to be revived:

> The first bottleneck for our community, and the biggest I think, is the ability to build common knowledge. On facebook, I can read an excellent and insightful discussion, yet one week later I forgot it. Even if I remember it, I don’t link to the facebook post (because linking to facebook posts/comments is hard) and it doesn’t have a title so I don’t casually refer to it in discussion with friends. On facebook, ideas don’t get archived and built upon, they get discussed and forgotten. To put this another way, the reason we cannot build on the best ideas this community had over the last five years, is because we don’t know what they are. There’s only fragments of memories of facebook discussions which maybe some other people remember. We have the sequences, and there’s no way to build on them together as a community, and thus there is stagnation.

One possible solution to the problem that awarding status for wiki edits is hard would be to abandon the wiki model entirely, in favor of a curated encyclopedia model: instead of waiting around for someone to write up X, you'd have curators (status!) who can ask someone who understands X to write (status!) the encyclopedia page on X. What do you do if their writeup is controversial or could be built upon? Well, that's an implementation detail. Maybe a comments page? How do actually-existing curated encyclopedias handle that?
2Vaniver
Per edit, per character, based on karma changes to a post after they edited it? It seems really hard to find a method of rewarding people for edits that isn't easily gameable or rely on deep taste. (You could have editors vote on edits, but you're not going to get many of those votes relative to the number of readers that might vote on articles, for example.)
c0rw1n-20

Welp, 2/4 current residents and the next one planned to come there are trans women, so, um, what gender ratio issue again?

7KPier
Are you disagreeing with my prediction? I'd be happy to bet on it and learning that two of the four initial residents are trans women does not change it.
c0rw1n100

Why yes, there should be such a list; I don't know of any existing one.

1ZeitPolizei
There is now a map and a preregistration database.
c0rw1n20

Well, so far it's ... a group house, with long late-night conversations; we're also running self-experiments (currently measuring the results of a low-carb diet) and organizing the local monthly rationalist meetup.

We are developing social tech solutions such as a database of competing access needs and a formal system for dealing with house logistics.

c0rw1n30
I am confused about what sort of blog post you are requesting people write. I assume you don't mean that people should list off a variety of interesting facts about the Bay Area, e.g. "the public transit system, while deeply inadequate, is one of the best in the country," "UCSF is one of the top hospitals in the United States for labor and delivery," "everything in San Francisco smells like urine," "adult night at the Exploratorium is awesome," "there are multiple socialist pizza places in Berkeley which sc
... (read more)
2ozymandias
So there is an enormous cultural failure because no one wrote a blog post containing knowledge that is primarily of interest to Bendini? It seems to me a true rationalist should be able to solve this problem, perhaps by commissioning a report, or by talking to people who live in Berkeley, or having a beta reader, or by editing his post to correct obvious errors when there is a much-upvoted comment pointing them out. In fact, Bendini did reblog and comment on a post I wrote, In Defense of Unreliability, in which I discussed the fact that I get places through trains and Uberpool. Perhaps he simply assumed I was a very unusual person, or perhaps he forgot, or perhaps he didn't bother to read the post he was commenting on, but either way this doesn't make me very optimistic about the plan where Bay Area rationalist bloggers transform into the Bay Area travel bureau instead of Bendini taking responsibility for not making glaring mistakes. I am personally happy to beta-read any of Bendini's posts in the future that contain claims about the Berkeley rationalist community or to signal-boost any requests he makes for such beta readers in the future.
c0rw1n10

This means you are trying to Procrustes the human squishiness into legibility, with consistent values. You should, instead, be trying to make pragmatic AIs that would frame the world for the humans, in the ways that the humans would approve*, taking into account their objectively stupid incoherence. Because that would be Friendly and parsed as such by the humans.

*=this doesn't mean that such human preferences as those that violate meta-universalizability from behind the veil of ignorance should not be factored out of the calculation of what is ethicall... (read more)

4Stuart_Armstrong
> pragmatic AIs that would frame the world for the humans, in the ways that the humans would approve

The choice of how to do that is equivalent to choosing among the human values. That's not to say that there are not better or worse ways of doing things, but as soon as human behaviour becomes legible to an AI, we have to be very specific about any squishiness we want to preserve, and encode it in AI values.
8gwern
One interesting aspect of my analysis I would like to highlight is the part on multiple selection and genetic correlations. The immediate implication is that estimates of the value of embryo selection for IQ will be considerable underestimates if they ignore the many other traits that this selection will improve, and also that it is both feasible & desirable to make selection choices based on a weighted average of many polygenic scores. But this has had much broader implications for how I conceptualize the genetics of intelligence. (The following is based on too many papers to easily list at the moment, but if you read through my genetics bibliography compilation you'll find cites for a lot of these.) I used to think that IQ variants were relatively neutral and specific to IQ, and variance in the population was maintained by selective neutrality (ie pro-IQ variants being too metabolically expensive or developmentally fragile to be selected for) and so arguments like in OP that 'we should describe IQ boosting as instead reducing stupidity or reducing the risk of intellectual disability' were, more or less, dishonest rhetorical tricks. (The ID claim is particularly questionable; most ID is from single mutations of large effect, stuff like embryo selection isn't going to override that.) Cochran had discussed the possibility of genetic load and 'grit in the gears' from rare variants, but the GCTAs indicated that most of the additive variance was explained by rather common genetic variants (common being >1% of the population having it) and whole-genome studies looking into de novo mutations and counting rare mutation load and finding it not hugely predictive eliminated that as an explanation. So it looked to me like it was more the case that the glass was half-full and there were 'genes for IQ' rather than 'lack of genes against IQ', and the highly general benefits across health & longevity were due to downstream effects like Gottfredson argued, in being able to take ca
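A minimal sketch of the multiple-selection idea described above: rank candidates by a weighted index over several polygenic scores (the traits, weights, and standardized scores here are invented for illustration, not taken from the analysis):

```python
# Toy multiple-trait selection: rank candidate embryos by a weighted sum
# of standardized polygenic scores. Negative weights penalize disease risk.
weights = {"iq": 1.0, "heart_disease_risk": -0.5, "height": 0.2}

embryos = [
    {"id": "A", "iq": 0.3, "heart_disease_risk": 0.8, "height": -0.1},
    {"id": "B", "iq": 0.1, "heart_disease_risk": -0.4, "height": 0.5},
    {"id": "C", "iq": -0.2, "heart_disease_risk": 0.1, "height": 0.9},
]

def index_score(embryo: dict) -> float:
    # Weighted index over all scored traits for one candidate.
    return sum(weights[trait] * embryo[trait] for trait in weights)

best = max(embryos, key=index_score)
print(best["id"], round(index_score(best), 2))  # B 0.4
```

Selecting on the combined index rather than on a single trait is what lets the genetic correlations between traits add to, rather than subtract from, the value of selection.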
c0rw1n40

I think the questions of the next survey should be a superset of those on the last survey. Maybe not strictly, but tracking year-on-year changes is too interesting to give up by removing questions, unless it's really, unquestionably obvious that they're superfluous.