
I like this post. But also the part of it I found most interesting was this footnote bit:

Loosely speaking, you've just turned your own conscious mind into an internal hostile telepath!

because I think I do that kind of a lot. But I'm also somewhat sensitive to at least some kinds of things that feel like self-deception or thought-avoidance, and really dislike that feeling, so I do tend to probe at things that feel suspicious in that way, which sometimes adds up to pretty unhelpful thought spirals where I'm chasing my thoughts and emotions around and getting kind of stuck in them. It might be useful for me to try strategies where I let the avoidance exist, though I'm not sure how - if I'm at the point where I notice the hypothesis at all, it's already a pretty unpleasant feeling. I guess "learn to tolerate uncertainty and confusion better" is already a thing I wanted to do, and it's relevant here.

No, it feels scarier! I think if I'm interacting with a real live human being in person, I basically always instinctively worry about what they think of me even when there's no strong reason to, and higher uncertainty about what they think of me causes more worry. With friends I can somewhat lean on "well, they are continuing to be friends with me, so they must not be judging me too badly," and friends have often disclosed similarly vulnerable things to me, which makes it easier (I am somewhat more hesitant to share productivity details with friends who I feel are way more productive than me). Also, "entirely outside my circles" is likely to come with "high inferential gap about various stuff I care about." I don't think all of this is definitely insurmountable, but surmounting it is a currently slightly mysterious first step.

(I think "figure out how to tolerate talking to an LLM" might be an easier inroad actually, though that's differently aversive for me)

I also have found I am way more productive when I can do something like this, and kind of want to figure out how to do more of it. Some things that make it harder than it seems like it should be:

  • NDAs and such
  • the things I most need help thinking about are usually also ones that it feels very vulnerable/kind of scary to bring someone else into
  • and/or are ones where domain knowledge is important, such that I'd ideally want to work with someone who knows stuff about it
  • general feeling of privacy/hesitation about putting another human in my workflows, because my workflows feel very personal (...in part because of things like ~shame around being less productive than I'd like, which is a kind of silly self-sustaining cycle but not necessarily trivial to exit)

Some ways I work around this:

  • coworking with friends, with work pomos and break periods where we talk about how things are going; this is an equal relationship, and usually not one where we can get very far into the weeds on each other's work, but it helps a lot to be in a shared work zone & to have an explicit social ritual around talking through how things are going, which often leads to noticing possible improvements to strategy. Extra good if we are all working on similar stuff, though not required
  • text channels for narrating my thought process, privately or to an occasional audience (or Google docs for same but with more structure)
  • if I keep being stuck on thinking about a given thing, talk to a friend about it
  • identify specific friends who are well placed to help me with specific projects & invite them to work on a specific project together for an hour or a day
  • effort-trading where a friend and I help each other with projects on different days

It would also be nice to be able to pay for this as a service, but I haven't quite been able to convince myself to try any of this with a stranger! Very likely I'd benefit from prioritizing experiments with versions of this more highly, though.

I think the first time I encountered this post I had some kind of ~distaste for, idk, the idea that my beliefs and my aesthetics need have anything to do with each other? Maybe something about protecting my right to like things aesthetically for arbitrary reasons, without feeling like they need to fit into my broader value system in some coherent way, and/or to believe things without worrying about their aesthetics? Whereas now I guess my... aesthetics, in this post's frame... have evolved to... idk, be more okay with integrating these things with each other? To have a more expansive and higher-uncertainty set of values/beliefs/aesthetics? All these words are very uncertain, but this is interesting to encounter.

A more concrete thought I have: I've noticed that my social environments seem to organically shape my worldviews over time, in a way that I sometimes find meta-epistemically disconcerting because it makes me feel like what I believe is determined more by who I'm around than by what's true. I think this is a pretty fair reaction for me to have, but reading this post now makes me think that part of it is actually that being around people who find a given thing beautiful causes me to learn how to find that thing beautiful too? And that's not a bad thing, I think; at least as long as I don't forget how to find other things beautiful like I used to, and perhaps periodically explore how I might find yet other, more foreign things beautiful, and don't start to believe beauty is quite the same thing as truth.

I think, for me, memory is not necessary for observation, but it is necessary for that observation to... go anywhere, become part of my overall world model, interact with other observations, become something I know?

And words help me stick a thing in my memory, because my memory for words is much better than my memory for, e.g., visuals.

I guess that means the enduring world maps I carry around in my head are largely made of words, which lowers their fidelity compared to if I could carry around full visual data? But it heightens their fidelity compared to when I don't convert my observations into words - in that case they kind of dissolve into a vague cloud.


...oh, but my memory/models/maps about music are not much mediated by words, I think, because my musical memory is no worse than my verbal memory. Are my music maps better than my everything-else maps? Not sure, maybe!

For some reason these crows made me laugh uncontrollably.

This is great, thank you.

I didn't quite understand how "Beware ratchet effects" fits into/connects with the rest of the section it's in - could you spell that out a bit? Also, I'm curious whether there are concrete examples of that happening that you know about & can share, though of course it's very reasonable if not.

Oh yeah, my dispute isn't "the character in the song isn't talking about building AI" but rather "the song is not a call to accelerate building AI".

As Solstice creative lead, I neither support nor oppose tearing apart the sun for raw materials.

Take Great Transhumanist Future. It has "a coder" dismantling the sun "in another twenty years with some big old computer." This is a call to accelerate AI development, and use it for extremely transformative actions.

Super disagree with this! Neither I nor (I have not checked but am pretty certain) the author of the text wants to advocate that! (Indeed I somewhat actively tried to avoid having stuff in my program encourage this! You could argue that even though I tried to do this I did not succeed, but I think the fact that you seem to be reading ~motivations into authors' choices that aren't actually there is a sign that something in your analysis is off.) I think it's pretty standard that having a fictional character espouse an idea does not mean the author espouses it.

In the case of this song I did actually consider changing "you and I will flourish in the great transhumanist future" to "you and I MAY flourish in the great transhumanist future" to highlight the uncertainty, but I didn't want to make changes against the author's will, and Alicorn preferred to keep the "will" there because the rest of the song is written in the indicative mood. And, as I said before, Solstice is a crowdsourced endeavor and I am not willing to only include works where I do not have the slightest disagreement.

If the main problem with changing the songs is that many people in this community want to sing about AI accelerationism and want the songs to be anti-religious, then I stand by my criticisms

Hmm, I want to be able to sing songs that express an important thing, even if one can possibly read them in a way that also implies some things I disagree with.

If the main problem with changing the songs is in making them scan and rhyme, then I can probably just pay that cost.

You are extremely welcome to suggest new versions of things!

But a lot of the cost is distributed and/or necessarily borne by the organizers. Changing lines in a song that's sung at Solstice every year is a Big Deal, and it is simply not possible to do that in a way that does not cause discourse and strife.

(I guess arguably we managed the "threats and trials" line in TWTR without much discourse or strife, but I think the framing did a lot there, and I explicitly didn't frame it as a permanent change to the song, and also it was a pretty minor change.)
