All of Academian's Comments + Replies

On reflection, I endorse the conclusion and arguments in this post. I also like that it's short and direct. Stylistically, it argues for a behavior change among LessWrong readers who sometimes make surveys, rather than being targeted at general LessWrong readers. In particular, the post doesn't spend much time or space building interest in surveys or taking a circumspect view of them. For this reason, I might suggest adding something to the top of the original post like "Target audience: LessWrong readers who often or occasionally make form... (read more)

I've been trying to get MIRI to switch from calling this blackmail (extortion for information) to calling it extortion (because it's the definition of extortion). Can we use this opportunity to just make the switch?

Stuart_Armstrong
Did so, changed titles and terminology.

I support this, whole-heartedly :) CFAR has already created a great deal of value without focusing specifically on AI x-risk, and I think it's high time to start trading the breadth of perspective CFAR has gained from being fairly generalist for some more direct impact on saving the world.

"Brier scoring" is not a very natural scoring rule (log scoring is better; Jonah and Eliezer already covered the main reasons, and it's what I used when designing the Credence Game for similar reasons). It also sets off a negative reaction in me when I see someone naming their world-changing strategy after it. It makes me think the people naming their strategy don't have enough mathematician friends to advise them otherwise... which, as evidenced by these comments, is not the case for CFAR ;) Possible re-naming options that contrast well with "signal boosting"

  • Score boosting
  • Signal filtering
  • Signal vetting
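
As a rough illustration of the Brier-vs-log difference (standard textbook definitions of both rules, nothing specific to CFAR's strategy; the function names are just for the sketch):

```python
import math

def brier_score(p, outcome):
    """Brier score for a binary forecast: squared error between the stated
    probability p and the 0/1 outcome. Lower is better; the penalty is bounded by 1."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic score: log of the probability assigned to what actually
    happened. Higher is better; confident wrong answers are punished without bound."""
    return math.log(p if outcome == 1 else 1 - p)

# A 99%-confident forecast that turns out to be wrong:
print(brier_score(0.99, 0))  # 0.9801 -- penalty capped near 1
print(log_score(0.99, 0))    # about -4.6, and heads to -infinity as p -> 1
```
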
AnnaSalamon
Got any that contrast with "raising awareness" or "outreach"?

This is a cryonics-fails story, not a cryonics-works-and-is-bad story.

Seems not much worse than actual-death, given that in this scenario you could still choose to actually-die if you didn't like your post-cryonics life.

Seems not much worse than actual-death, given that in this scenario you (or the person who replaces you) could still choose to actually-die if you didn't like your post-cryonics life.

Kaj_Sotala
You're assuming that people who find life a net negative could simply choose to commit suicide. I don't think that this is a realistic assumption for most people. For many people, actively taking your own life is something that only becomes an option once it gets really, really shitty - and not necessarily even then. If someone falls into this class and puts high chance on their post-cryonics life being one of misery, but still not enough misery that they'd be ready to kill themselves, then cryonics may reasonably seem like negative expected value. (Especially if they assume that societies will maintain the trend of trying to prevent people from killing themselves when possible, and that a future society might be much better at this than ours, making suicide much harder to accomplish.)

This is an example where cryonics fails, and so not the kind of example I'm looking for in this thread. Sorry if that wasn't clear from the OP! I'm leaving this comment to hopefully prevent more such examples from distracting potential posters.

Hmm, this seems like it's not a cryonics-works-for-you scenario, and I did mean to exclude this type of example, though maybe not super clearly:

OP: There's a separate question of whether the outcome is positive enough to be worth the money, which I'd rather discuss in a different thread.

(2) A rich sadist finds it somehow legally or logistically easier to lay hands on the brains/minds of cryonics patients than of living people, and runs some virtual torture scenarios on me where I'm not allowed to die for thousands of subjective years or more.

(1) A well-meaning but slightly-too-obsessed cryonics scientist wakes up some semblance of me in a semi-conscious virtual delirium for something like 1000 very unpleasant subjective years of tinkering to try recovering me. She eventually quits, and I never wake up again.

See Nate's comment above:

http://lesswrong.com/lw/n39/why_cfar_the_view_from_2015/cz99

And, FWIW, I would also consider anything that spends less than $100k causing a small number of top-caliber researchers to become full-time AI safety researchers to be extremely "effective".

[This is in fact a surprisingly difficult problem to solve. Aside from personal experience seeing the difficulty of causing people to become safety researchers, I have also been told by some rich, successful AI companies earnestly trying to set up safety research divisions (y... (read more)

Just donated $500 and pledged $6500 more in matching funds (10% of my salary).

Thank you! We appreciate this enormously.

I would expect not for a paid workshop! Unlike CFAR's core workshops, which are highly polished and get median 9/10 and 10/10 "are you glad you came" ratings, MSFP

  • was free and experimental,

  • produced two new top-notch AI x-risk researchers for MIRI (in my personal judgement as a mathematician, and excluding myself), and

  • produced several others who were willing hires by the end of the program and who I would totally vote to hire if there were more resources available (in the form of both funding and personnel) to hire them.

IlyaShpitser
I am not saying it wasn't a worthwhile effort (and I agreed to help look into this data, right?). I am just saying that if your definition of "resounding success" is one that cannot be used to market this workshop in the future, that definition is a little peculiar... In general, it's hard to find effects of anything in the data.

1) Logical depth seems super cool to me, and is perhaps the best way I've seen for quantifying "interestingness" without mistakenly equating it with "unlikeliness" or "incompressibility".

2) Despite this, Manfred's brain-encoding-halting-times example illustrates a way a D(u/h) / D(u) optimized future could be terrible... do you think this future would not obtain because, despite being human-brain-based, it would not in fact make much use of being on a human brain? That is, it would have extremely high D(u) and therefore be pena... (read more)
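
For reference, here is one way to write down the quantity under discussion. This is a reconstruction from the comments (taking D to be Bennett-style logical depth at significance level s, with h a description of humanity), not necessarily sbenthall's exact formulation:

```latex
% Bennett's logical depth of a string x at significance level s: the running
% time of the fastest program that is at most s bits longer than the shortest
% program printing x (K(x) is Kolmogorov complexity, \ell(p) is program length).
\[
  D_s(x) \;=\; \min\bigl\{\, \mathrm{time}(p) \;:\; U(p) = x,\ \ell(p) \le K(x) + s \,\bigr\}
\]
% The objective being debated: prefer futures u whose depth is largely
% attributable to the human data h they started from.
\[
  \text{maximize} \quad \frac{D(u/h)}{D(u)}
\]
```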

sbenthall
1) Thanks, that's encouraging feedback! I love logical depth as a complexity measure. I've been obsessed with it for years and it's nice to have company.

2) Yes, my claim is that Manfred's doomsday cases would have very high D(u) and would be penalized. That is the purpose of having that term in the formula. I agree with your suspicion that our favorite futures have relatively high D(u/h) / D(u) but not the highest value of D(u/h) / D(u). I suppose I'd defend a weaker claim, that a D(u/h) / D(u) supercontroller would not be an existential threat. One reason for this is that D(u) is so difficult to compute that it would be pretty bogged down.... One reason for making a concrete proposal of an objective function is that if it's pretty good, that means maybe it's a starting point for further refinement.
Academian

Great question! It was in the winter of 2013, about a year and a half ago.

you cannot apply the category of "quantum random" to an actual coin flip, because for an object to be truly so, it must be in a superposition of at least two different pure states, a situation that has yet to be achieved with a coin at room temperature (and will continue to be so for a very long time).

Given the level of subtlety in the question, which gets at the relative nature of superposition, this claim doesn't quite make sense. If I am entangled with a state that you are not entangled with, it may "be superposed" from your perspective ... (read more)

Not justify: instead, explain.

I disagree. Justification is the act of explaining something in a way that makes it seem less dirty.

shokwave
Not sure I agree; people are often asked to justify their decisions - to argue that their choice was better than another - and calling those arguments an explanation feels like we're stretching the definition of 'explain'.

If you're curious about someone else's emotions or perspective, first, remember that there are two ways to encode knowledge of how someone else feels: by having a description of their feelings, or by empathizing and actually feeling them yourself. It is more costly --- in terms of emotional energy --- to empathize with someone, but if you care enough about them to afford them that cost, I think it's the way to go. You can ask them to help you understand how they feel, or help you to see things the way they do. If you succeed, they'll appreciate having someone who can share their perspective.

My summary of this idea has been that life is a non-convex optimization problem. Hill-climbing will only get you to the top of the hill that you're on; getting to other hills requires periodic re-initializing. Existing non-convex optimization techniques are often heuristic rather than provably optimal, and when they are provable, they're slow.
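
A concrete, if toy, version of the "periodic re-initializing" heuristic; the particular objective function below is an arbitrary bumpy stand-in, not anything from the original idea:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=300):
    """Greedy local search: move to a random nearby point only if it improves f."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def random_restart(f, n_restarts=20, lo=-10.0, hi=10.0):
    """'Periodic re-initializing': hill-climb from many random starting points
    and keep the best hilltop found. Heuristic, not provably optimal."""
    return max((hill_climb(f, random.uniform(lo, hi)) for _ in range(n_restarts)), key=f)

# A non-convex objective with many local maxima:
f = lambda x: math.sin(3 * x) - 0.1 * x * x

print(f(hill_climb(f, 5.0)))   # usually stuck on whatever hill is nearest x = 5
print(f(random_restart(f)))    # usually near the global maximum (about 0.97)
```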

And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them without regard for what filters that argument has been through, or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And certainly not to have people ignore their instincts; e.g. CFAR constantly recommends Thinking Fast and Slow by Kahneman, and teaches exercises to extract more information from emotional and physical senses.

What if we also add a requirement that the FAI doesn't make anyone worse off in expected utility compared to no FAI?

I don't think that seems reasonable at all, especially when some agents want to engage in massively negative-sum games with others (like those you describe), or have massively discrete utility functions that prevent them from compromising with others (like those you describe). I'm okay with some agents being worse off with the FAI, if that's the kind of agents they are.

Luckily, I think people, given time to reflect and grow and learn, are not like that, which is probably what made the idea seem reasonable to you.

Wei Dai
Do you see CEV as about altruism, instead of cooperation/bargaining/politics? It seems to me the latter is more relevant, since if it's just about altruism, you could use CEV instead of CEV. So, if you don't want anyone to have an incentive to shut down an FAI project, you need to make sure they are not made worse off by an FAI. Of course you could limit this to people who actually have the power to shut you down, but my point is that it's not entirely up to you which agents the FAI can make worse off.

Right, this could be another way to solve the problem: show that of the people you do have to make sure are not made worse off, their actual values (given the right definition of "actual values") are such that a VNM-rational FAI would be sufficient to not make them worse off. But even if you can do that, it might still be interesting and productive to look into why VNM-rationality doesn't seem to be "closed under bargaining".

Also, suppose I personally (according to my sense of altruism) do not want to make anyone among worse off by my actions. Depending on their actual utility functions, it seems that my preferences may not be VNM-rational. So maybe it's not safe to assume that the inputs to this process are VNM-rational either?

Non-VNM agents satisfying only axiom 1 have coherent preferences... they just don't mix well with probabilities.
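
One standard example of coherent-but-not-VNM preferences (assuming "axiom 1" refers to the ordering axiom; whether this is the kind of agent meant here isn't stated): lexicographic preferences are complete and transitive, yet admit no real-valued utility representation, which is exactly what blocks mixing them with probabilities in the expected-utility sense.

```python
# Lexicographic preferences over (primary, secondary) bundles: compare on the
# first coordinate, break ties on the second. The bundles here are illustrative.
def prefers(a, b):
    """True iff bundle a is strictly preferred to bundle b."""
    return a > b  # Python tuple comparison is exactly lexicographic order

print(prefers((1, 0), (0, 10**9)))  # True: any gain in the primary good beats
                                    # any amount of the secondary good, which is
                                    # why no real-valued utility (and hence no
                                    # expected utility) can represent this.
```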

Dumb solution: an FAI could have a sense of justice which downweights the utility function of people who are killing and/or procreating to game their representation in the AI's utility function, or something like that to disincentivize it. (It's dumb because I don't know how to operationalize justice; maybe enough people would not cheat and want to punish the cheaters that the FAI would figure that out.)

Also, given what we mostly believe about moral progress, I think defining morality in terms of the CEV of all people who ever lived is probably okay... they'd probably learn to dislike slavery in the AI's simulation of them.

I don't see how it could be true even in the sense described in the article without violating Well Foundation somehow

Here's why I think you don't get a violation of the axiom of well-foundation from Joel's answer, starting from way-back-when-things-made-sense. If you want to skim and intuit the context, just read the bold parts.

1) Humans are born and see rocks and other objects. In their minds, a language forms for talking about objects, existence, and truth. When they say "rocks" in their head, sensory neurons associated with the presence ... (read more)

testing this symbol: ∃

[This comment is no longer endorsed by its author]
Kawoomba
There's a sandbox you can use for such: below the box in which you write a new comment, click "Show help", and there's a link taking you there on the bottom right.

That was imprecise, but I was trying to comment on this part of the dialogue using the language that it had established

Ah, I was asking you because I thought using that language meant you'd made sense of it ;) The language of us "living in a (model of) set theory" is something I've heard before (not just from you and Eliezer), which made me think I was missing something. Us living in a dynamical system makes sense, and a dynamical system can contain a model of set theory, so at least we can "live with" models of set theory... we in... (read more)

Eliezer Yudkowsky
Set theory doesn't have a dynamical interpretation because it's not causal, but finite causal systems have first-order descriptions and infinite causal systems have second-order descriptions. Not everything logical is causal; everything causal is logical.

Help me out here...

One of the participants in this dialogue ... seems too convinced he knows what model he's in.

I can imagine living in a simulation... I just don't understand yet what you mean by living in a model in the sense of logic and model theory, because a model is a static thing. I heard someone once before talk about "what are we in?", as though the physical universe were a model, in the sense of model theory. He wasn't able to operationalize what he meant by it, though. So, what do you mean when you say this? Are you consideri... (read more)

Qiaochu_Yuan
That was imprecise, but I was trying to comment on this part of the dialogue using the language that it had established: I was also commenting on this part:

The point I was trying to make, and maybe I did not use sensible words to make it, is that This Guy (I don't know what his name is - who writes a dialogue with unnamed participants, by the way?) doesn't actually know that, for two reasons: first, Peano arithmetic might actually be inconsistent, and second, even if it were consistent, there might be some mysterious force preventing us from discovering this fact.

Models being static is a matter of interpretation. It is easy to write down a first-order theory of discrete dynamical systems (sets equipped with an endomap, interpreted as a successor map which describes the state of a dynamical system at time t + 1 given its state at time t). If time is discretized, our own universe could be such a thing, and even if it isn't, cellular automata are such things. Are these "static" or "dynamic"?
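
To make "discrete dynamical system" concrete (a toy instance chosen for illustration, not anything from the dialogue): a set of states plus an endomap from that set to itself, with iteration playing the role of time.

```python
# A finite discrete dynamical system: a set of states plus an endomap
# (successor function) from the set to itself; iterating it is "time".
# The particular set and map below are arbitrary choices.
STATES = range(16)
successor = lambda s: (3 * s + 1) % 16

def orbit(s0, steps=10):
    """Trajectory of state s0 under repeated application of the endomap."""
    trajectory = [s0]
    for _ in range(steps):
        trajectory.append(successor(trajectory[-1]))
    return trajectory

print(orbit(0))  # [0, 1, 4, 13, 8, 9, 12, 5, 0, 1, 4] -- eventually periodic
```
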
Academian

Until I'm destroyed, of course!

... but since Qiaochu asked that we take ultrafinitism seriously, I'll give a serious answer: something else will probably replace ultrafinitism as my preferred (maximum a posteriori) view of math and the world within 20 years or so. That is, I expect to determine that the question of whether ultrafinitism is true is not quite the right question to be asking, and have a better question by then, with a different best guess at the answer... just because similar changes of perspective have happened to me several times already in my life.

I also wish both participants in the dialogue would take ultrafinitism more seriously.

For what it's worth, I'm an ultrafinitist. Since 2005, at least as far as I've been able to tell.

How long do you expect to stay an ultrafinitist?

Kawoomba

Is that because 2005 is as far from the present time as you dare to go?

If you want to make this post even better (since apparently it's attracting massive viewage from the web-at-large!), here is some feedback:

I didn't find your description of the owl monkey experiment very compelling,

If a monkey was trained to keep a hand on the wheel that moved just the same, but he did not have to pay attention to it… the cortical map remained the same size.

because it wasn't clear that attention was causing the plasticity; the temporal association of subtle discriminations with rewards could plausibly cause plasticity directly, wit... (read more)

I'm pretty sure that the idea of the previous two paragraphs has been talked about before, but I can't find where.

On LessWrong: VNM expected utility theory: uses, abuses, and interpretation (shameless self-citation ;)

On Wikipedia: Limitations of the VNM utility theorem

AlexMennen
Thanks!

+1 for sharing; you seem the sort of person my post is aimed at: so averse to being constrained by self-image that you turn a blind eye when it affects you. It sounds to me like you are actively trying to suppress having beliefs about yourself:

People around me have a model of what Dan apparently is which is empathetic, nice, generous etc. I'm always the first to point out a bias such as racism or nonfactual emotional opinions etc. I don't have to see myself as any of those things though.

I've been there, and I can think of a number of possible... (read more)

If you're a "people-hater" who is able to easily self-modify, why do you still "hate" people? Are you sure you're not rationalizing the usefulness of your dislike of others? What do you find yourself saying to yourself and others about what being a "people-hater" achieves for you? Are there other ways for you to achieve those things without hating people? What do you find yourself saying to yourself and others about why it's hard to change? What if in a group of 20+ people interested in rationality, someone has a cool trick you ... (read more)

[anonymous]
I don't actually hate people, I'm just very averse towards socializing with unfamiliar people in that sort of environment described, and was just paraphrasing that same point for emphasis.
Academian

You're describing costly signaling. Contrary to your opening statement,

The word 'signalling' is often used in Less Wrong, and often used wrongly.

people on LessWrong are usually using the term "signalling" consistently with its standard meaning in economics and evolutionary biology. From Wikipedia,

In economics, more precisely in contract theory, signalling is the idea that one party credibly conveys some information about itself to another party

Within evolutionary biology, signalling theory is a body of theoretical work examining communic

... (read more)
Patrick
Well I'm happy to use "costly signalling". I was under the impression that costly signalling was signalling. If it isn't costly, at least for potential fakes, then I'm not sure how it can serve as an explanation for behavior. Why should I signal when the fakes can signal just as easily? What is there to gain? I think at the very least, there has to be some mechanism for keeping out cheats, even if it's rarity.

From the wikipedia article on signalling theory: "If many animals in a group send too many dishonest signals, then their entire signalling system will collapse, leading to much poorer fitness of the group as a whole. Every dishonest signal weakens the integrity of the signalling system, and thus weakens the fitness of the group."

But what am I? Some kind of prescriptivist? Evidently my understanding of the term is a minority, and people far cleverer than I don't use it my way. I'll stick to "costly signal" in future.
timtyler
I looked at http://en.wikipedia.org/wiki/Signalling_%28economics%29 Huh? That isn't what "signalling" means! If that article is correct, it looks like a case of confusing terminology.
hyporational
Both status and signaling as concepts have at least in some circles pervaded common language. This will inevitably cause new members to use these words imprecisely. I don't recall the exact quote, but Daniel Dennett has said about consciousness that everyone feels like they're an expert on it. I think this applies to signalling and status too once one learns about them, since they're such a constant and seemingly direct part of our experience. I too now feel guilty about not studying these concepts more, and seem to have quite incompetently participated in the discussion.
beoShaffer
And psychology.
Academian

tl;dr: I was excited by this post, but so far I find reading the cited literature uncompelling :( Can you point us to a study we can read where the authors reported enough of their data and procedure that we can all tell that their conclusion was justified?

I do trust you, Yvain, and I know you know stats, and I even agree with the conclusion of the post --- that people are imperfect introspectors --- but I'm discouraged to continue searching through the literature myself at the moment because the first two articles you cited just weren't clear enough on w... (read more)

Sometimes I have days of low morale where I don't get much done, and don't try to force myself to do things because I know my morale is low and I'll likely fail. I'm experimenting with a few different strategies for cutting down on low-morale days... I'd like to have ... better motivation (which might allow me to work on things with less willpower/energy expenditure),

Morale, and reducing the need for willpower / conscious effort, are things I've had success with using self-image changes, e.g. inspired by Naruto :) So...

those things seem to me to be

... (read more)
John_Maxwell
Good insight, thanks.
Academian

I'm not saying rationalists should avoid engaging in ritual like the plague; but I do a lot of promoting of CFAR and rationality to non-LW-readers, and I happen to know from experience that a post like this in Main sends bad vibes to a lot of people. Again, I think it's sad to have to worry so much about image, but I think it's a reality.

Shmi
Oh, I agree that the optics would be better if the post in question was in Discussion, not Main.

Thanks for sharing this, Quiet; I'm sad to say I agree with you. I think rationality as a movement can't afford to be associated with ritual. It's just too hard to believe that it's not a failure mode. I personally find Raemon's perspective inspiring and convincing. Raemon, it seems to me that you have a very sane perspective on the role of ritual in people's lives. And I'm all about trying to acknowledge and work with our own emotional needs, e.g. in this post. But I personally think openly associating with Ritual with a Capital R is just too sketch... (read more)

Said Achmiz
I just want to say — lest Raemon, other ritual-type-event-organizers, or people who share their values and views on this subject, get the wrong idea — that we should distinguish between these two positions:

  • "Rituals make Less Wrong look like a cult, or otherwise make the LW community look sketchy/disreputable/creepy" (optional addendum: "... and because of this, I don't want to associate with LW")
  • "I don't like rituals, am personally creeped out by them, and wish LW communities wouldn't engage in them" (optional addendum: "... and because of this, I don't want to participate in LW communities")

I, personally, am not concerned about LW's image, or my image if I associate with LW, and I make no comment about the strategic implications (for e.g. CFAR) of LW communities engaging in rituals; I just want to head off any conclusion or assertion that the only reason anyone would object to rituals is a concern about appearances, reputation, or the like.

(This, I think, is a special case of "well, people don't like X because they don't understand X" — "no, I understand X just fine and I still don't like it". Relatedly: "We shouldn't do X because people might draw the wrong conclusions about us" — "Well, let's do X and just not tell anyone" — "Actually, I think we shouldn't do X for reasons that have nothing to do with other people's opinions of us for doing X!")
Shmi
LWers are primates, too, so some of us need this pack-bonding thing in the form of a ritual. I'm not one of those, but I can see how others can feel differently. And given that rituals, whether religious or civic, are pretty much standard and often spontaneous in most communities, I don't see how having a ritual for some subgroup would harm the High Ideals of Rationality. It even might make the participants appear more human, by counteracting the perception of "straw Vulcan"-ness.

I would still tend to say that 1/3 apiece is the fair division

I'm curious why you personally just chose to use the norm-connoting term "fair" in place of the less loaded term "equal division" ... what properties does equal division have that make you want to give it special normative consideration? I could think of some, but I'm particularly interested in what your thoughts are here!

Kind of, though "intrinsic uncertainty" also suggests the possibility that the subsystems might be generating moral intuitions which simply cannot be reconciled and that the conflict might be unresolvable unless one is willing to completely cut away or rewrite parts of their own mind.

Don't you think that things being perfectly balanced in a way such that there is no resolution is sort of a measure-zero set of outcomes? In drift-diffusion models of how neural groups in human and animal brains arrive at decisions/actions (explained pretty well here... (read more)
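
A minimal simulation sketch of the drift-diffusion idea (standard textbook form with arbitrary parameter values, not taken from the linked explanation): noisy evidence accumulates until it hits a decision threshold, so the process terminates rather than staying balanced forever.

```python
import random

def drift_diffusion(drift=0.02, noise=1.0, threshold=10.0):
    """Accumulate noisy evidence until it crosses +threshold ("choose A")
    or -threshold ("choose B"). Returns the choice and the decision time."""
    x, t = 0.0, 0
    while abs(x) < threshold:
        x += drift + noise * random.gauss(0.0, 1.0)
        t += 1
    return ("A" if x > 0 else "B"), t

runs = [drift_diffusion() for _ in range(1000)]
print(sum(1 for choice, _ in runs if choice == "A") / len(runs))
# A small positive drift biases choices toward A, but every run decides;
# staying balanced forever would require exactly zero drift and zero noise,
# i.e. a measure-zero corner of parameter space.
```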

Kaj_Sotala
I don't really have any good data on this: my preliminary notion that some such conflicts might be unresolvable is mostly just based on introspection, but we all know how reliable that is. And even if it was reliable, I'm still young and it could turn out that my conflicts will eventually be resolved as well. So if there are theoretical reasons to presume that there will eventually be a resolution, I will update in that direction.

That said, based on a brief skim of the page you linked, the drift-diffusion model seems to mostly just predict that a person will eventually take some action - I'm not sure whether it excludes the possibility of a person taking an action, but regardless remaining conflicted about whether it was the right one. This seems to often be the case with moral uncertainty.

For example, my personal conflict gets rather complicated, but basically it's over the fact that I work in the x-risk field, which part of my brain considers the Right Thing To Do due to all the usual reasons that you'd expect. But I also have strong negative utilitarian intuitions which "argue" that life going extinct would in the long run be the right thing as it would eliminate suffering. I don't assign a very high probability on humanity actually surviving the Singularity regardless of what we do, so I don't exactly feel that my work is actively unethical, but I do feel that it might be a waste of time and that my efforts might be better spent on something that actually did reduce suffering while life on Earth still existed. This conflict keeps eating into my motivation and making me accomplish less, and I don't see it getting resolved anytime soon. Even if I did switch to another line of work, I expect that I would just end up conflicted and guilty over not working on AI risk. (I also have other personal conflicts, but that's the biggest one.)

Nice post. Do I understand you correctly that what you call "Intrinsic Moral Uncertainty" is the feeling of unresolved conflict between subsystems of our moral-intuition-generators? If so, I'd suggest calling it "Mere internal conflict" or "Not finished computing" or something more descriptive than "Intrinsic".

Kaj_Sotala
Thanks! Kind of, though "intrinsic uncertainty" also suggests the possibility that the subsystems might be generating moral intuitions which simply cannot be reconciled and that the conflict might be unresolvable unless one is willing to completely cut away or rewrite parts of their own mind. (Though this does not presuppose that the conflict really is unresolvable, merely that it might be.) That makes "not finished computing" somewhat ill-fitting of a name, since that seems to imply that the conflict could be eventually resolved. Not sure if "mere internal conflict" really is it, either. "Intrinsic" was meant to refer to this kind of conflict emerging from an agent holding mutually incompatible intrinsic values, and it being impossible to resolve the conflict via appeal to instrumental considerations.

The Linguistic Consistency Fallacy: claiming, implicitly or otherwise, that a word must be used in the same way in all instances.

I'm definitely talking about the concept of purpose here, not the word.

in my experience they tend to say something like "Not for anyone in particular, just sort of "ultimate" purpose"... That said, the fact that everywhere else we use the word "purpose" it is three-place is certainly a useful observation. It might make us think that perhaps the three-place usage is the original, well-supported v

... (read more)
Wei Dai
I think bryjnar is saying there may be two different concepts of purpose, which share the same word, with the grammatically 3-ary "purpose" often referring to one concept and the grammatically 2-ary "purpose" often referring to the other. This seems plausible to me, because if the 2-ary "purpose" is just intended to be a projection of the 3-ary "purpose", why would people fail to do this correctly?

I actually think people who say "Life has no meaning and everything I do is pointless" are actually making a deeper mistake than confusing connotations with denotations... I think they're actually making a denotational error in missing that e.g. "purpose" or "pointfulness" typically denotes ternary relationships of the form "The purpose of X to Y is Z." In other words, one must ask or tacitly understand "purpose to whom?" and "meaning to whom?" before the statement makes any sense. (A toy sketch of this three-place reading follows this comment.)

My favorite conn... (read more)
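
To put the three-place point in concrete terms (a toy sketch; the example entries are invented): treating "purpose" as a function of one argument is what leaves the question underspecified.

```python
# "Purpose" as a three-place relation: purpose_of[(X, Y)] = Z reads
# "the purpose of X, to agent Y, is Z". The entries are invented examples.
purpose_of = {
    ("a hammer", "a carpenter"): "driving nails",
    ("studying", "a student"): "passing the exam",
}

def purpose(x, y=None):
    if y is None:
        return "underspecified: purpose of {} *to whom*?".format(x)
    return purpose_of.get((x, y), "no purpose recorded for this pair")

print(purpose("a hammer", "a carpenter"))  # driving nails
print(purpose("life"))                     # underspecified: purpose of life *to whom*?
```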

It would help you and other commenters to have an example in mind of something you want to change about yourself, and what methods you've already tried. Do you already do everything that you think you should? Do you ever procrastinate? Do you ever over-weight short-term pains against long-term gains? Is there anything you don't enjoy such that people who enjoy that thing have better lives than you, in your estimation?

If you answer one of these questions positively, and you have not been paying attention to conscious and unconscious aspects of self-ima... (read more)

John_Maxwell
I'd like to do more, but I think I'm probably fairly close to bumping up against my time/energy constraints. It's rare for me to waste time when I'm energetic and high-morale. Sometimes I have days of low morale where I don't get much done, and don't try to force myself to do things because I know my morale is low and I'll likely fail. I'm experimenting with a few different strategies for cutting down on low-morale days.

I take breaks. I also sometimes let myself be distracted if I estimate that the time sucked up by the distraction won't be worth the willpower of forcing myself to avoid it. (I'm experimenting with daily meditation to see if it can make those willpower costs lower, since that seems to have been the case in the past.)

Entertainment is the one case where your self-image model seems to fit fairly well: I avoid listening to Britney Spears, for instance, because I don't want to be the sort of person who likes Britney Spears. (Realistically I think I could probably learn to enjoy it if I wanted to.) But that doesn't seem like a big loss--there's lots of music/movies/TV that's compatible with my self-image already. Enjoying Britney Spears would mean either telling people I liked Britney Spears or keeping my interest covert and probably generating some sort of incidental feeling of insecurity related to this. Neither option appeals to me.

I'd like to have higher energy and better motivation (which might allow me to work on things with less willpower/energy expenditure), but those things seem to me to be more about trying out a wide variety of techniques and empirically determining what works.

The relevant notion of intelligence for a singularity is optimization power, and it's not obvious that we aren't already witnessing the expansion of such an intelligence. You may have already had these thoughts, but you didn't mention them, and I think they're important to evaluating the strength of evidence we have against UFAI explosions:

What do agents with extreme optimization power look like? One way for them to look is a rapidly-expanding-space-and-resource-consuming process which at some point during our existence engulfs our region of space destro... (read more)

scarcegreengrass
This is similar to the Burning the Cosmic Commons paper by Robin Hanson, which considers whether the astronomical environment we observe might be the leftovers of a migrating extraterrestrial ecosystem that left a long time ago.