Hi, to whom it may concern,

You could say I have a technical moat in a certain area, and I came across an idea (or cluster of ideas) that seemed unusually connected and potentially alignment-significant, but whose publication seems potentially capabilities-enhancing. (I consulted with one other person, and they also found it difficult to assess or summarize.)

I was considering writing to EY on here, as an obvious person who would both be more likely to be able to assess plausibility/risk in a less familiar domain and have an idea of what to do next. Is there any precedent for my situation, or a better idea? I suppose a general version is: "How can someone concerned with risk, who can't easily investigate things further themselves, determine whether something is relevant to alignment/capabilities without increasing risk?"

Feel free to ask any questions that would help, and thank you, anyone!

I like the idea of living in ingroupy housing (insofar as I'm correctly understanding it as also being suitable for people with low socialization-satiety thresholds).

I'm thinking a bit about AI safety lately as I'm considering writing about it for one of my college classes.

I'm hardly heavy into AI safety research and so expect flaws and mistaken assumptions in these ideas. I would be grateful for corrections.

  1. An AI told to make people smile tiles the world with smiley faces, but an AI told to do what humans would want it to do might still get it wrong, e.g. Failed Utopia #4-2. However, wouldn't it research further and correct itself (and before that, take care not to do anything un-correctable)? The reasoning is as follows: say a non-failed utopia is worth 100/100 utility points per year. A failed utopia is one which at first seems to the AI to be a true utopia (100 points) but is actually worth less (say, 90 points). Even if the cost of research were heavy, if the AI wants/expects billions or trillions of years of human existence, it should do a lot of research early on and be very careful not to be fooled. Therefore, we don't need to do anything more than tell an AI to do what humans would want it to do, and let it do the work of figuring out exactly what that is itself.

  2. Partially un-consequentialist AI/careful AI: weights harms caused by its own decisions (somewhat, but not absolutely) more heavily than other harms. (Therefore: a tendency toward protecting what humanity already has against large harms, and pursuing smaller, surer gains like curing cancer rather than instituting a new world order (haha).)
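The expected-value reasoning in point 1 can be sketched numerically. A minimal sketch, where the function name and all the numbers are illustrative choices of mine rather than anything from the original argument:

```python
# Toy expected-value sketch of point 1 (all numbers are illustrative):
# a "failed utopia" loses some utility every year relative to a true utopia,
# so over a long enough horizon even a very costly research phase pays off.

def research_pays_off(utility_gap_per_year: float,
                      horizon_years: float,
                      research_cost: float) -> bool:
    """True if closing the utility gap is worth more than the research cost."""
    return utility_gap_per_year * horizon_years > research_cost

# A 10-point/year gap (100 vs. 90) over a billion years dwarfs even a
# billion-point up-front research cost.
print(research_pays_off(10, 1_000_000_000, 1_000_000_000))  # prints True
```

The point of the sketch is just that the horizon term dominates: for any fixed research cost, a long enough expected future makes careful early research worthwhile.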

Thanks in advance. :)

Ah, I also wish there were some posts about the practical parts of signing up. An overview of options, like Alcor or CI, standby service, life insurance costs, whether to consider relocation to Phoenix or whatnot, whether to get one of those bracelet things or something, and for god's sake let the guide not be so US-centric.

Though possibly this masterpost-thing exists and I haven't heard of it, or my unusual distaste for not having every detail planned out beforehand is biasing me.

So basically, what you're saying is that it is possible for a man to "really" be a woman even though not only does all the physical/biological evidence point the other way, but he isn't even aware of it? This raises even more questions about whether your definition of "really a woman" corresponds to anything in reality.

Hm, good question! I'd say: in the same way one might discover one prefers, say, some obscure flavor of ice cream one hadn't tried before to one's previous favorite of chocolate. Does that mean the person's favorite wasn't really chocolate before? It was, but they also "actually" preferred something else... I think it comes down to the individual's narrative of their past, or somesuch.

So you agree that the claim that my explanation "necessitates a lot of people lying", which you made in the grandparent, is BS. That raises the question: why did you make it?

I think we must've talked past each other; I'm having trouble connecting the dots. In any case, to try to elucidate my meaning: in the past, being transgender was widely low-status. If transgenderism isn't real, then its becoming less low-status on average means that more and more people would lie about being transgender. If it is real, then its becoming less low-status on average means that more and more people would be exposed to the concept and feel safer about coming out.

It's similar, the difference being that "gay" properly refers to a person's behavior rather than an intrinsic property. And yes, the current attempt to claim that "gayness" is an intrinsic property is similarly problematic.

I see. I might have a different (though not diametrically opposed) idea on this, but afaict that disagreement doesn't have a bearing on the main idea of this discussion at the moment so for time and clarity's sake I think I'll not take this up, if you're amenable.

I'm not sure I follow. Is the logic that my claim necessitates more lying because people lied about not being transgender in the past (or, as I would put it, were unaware or in the closet)? The fact of it being more widely low-status in the past explains that in my explanation as well as yours. Furthermore, if that is what you mean, then do you not also think that the higher number of openly gay people these days is similar?

I suppose that wasn't a good example, then. Of course, my answer is that their greater non-existence was because it was socially unacceptable to be transgender.

So those are like two sides of a coin, no? I say that it was socially unacceptable then and less so now, so more people realize it and come forward, while you say it was sometimes high-status then and more so now, so more people say they are this made-up thing. Why do you prefer your explanation, which necessitates a lot of people lying?

I'm not exactly sure what your explanation is. That transgenderism is status-seeking? In that case, I suppose I'd ask about the existence of transgender people pre-SJ...?

In any case, I disagree with your assessment of cis-ness as unconnected to any real thing (that is what you're saying, no?). Hmm... maybe I'd put it akin to being a goth. Many non-goths would feel uncomfortable if suddenly they were forced to go about their lives clearly dressed as such. It communicates membership of a group they don't identify with.

Does that clarify anything?

When I wasn't exposed to many transgender people and viewpoints, I didn't pay attention to or connect the dots I had that pointed at my not being cis, since I'm non-binary with relatively mild dysphoria. So: I'm planning on getting top surgery in a year or two, and wouldn't have been if I hadn't introspected and found myself to be not cis. This could be seen as being perfectly happy in the body I was born with until it became fashionable to be transgender, but the connotations are very different.
