I deeply sympathize with the presumptuous philosopher but 1a feels weird.
Yep! I have the same intuition
Actually putting numbers on 2a (I have a post on this coming soon), the anthropic update seems to say (conditional on non-simulation) that there are almost certainly lots of aliens, all of which are quiet, which feels really surprising.
Nice! I look forward to seeing this. I did a similar analysis - considering both SIA + no simulations and SIA + simulations - in my work on grabby aliens.
Which of them feel wrong to you? I agree with all of them other than 3b, which I'm unsure about - I think this comment does a good job of unpacking things.
2a is Katja Grace's Doomsday argument. I think 2aii and 2aiii depend on whether we're allowing simulations; if a faster expansion speed (either the cosmic speed limit or an engineering limit on expansion) meant more ancestor simulations, then this could cancel out the fact that faster-expanding civilizations prevent more alien civilizations from coming into existence.
At the Center on Long-Term Risk we're open to remote work. Currently we're only hiring summer research fellows, and the application page states (as with other previous positions, iirc):
Location: We prefer summer research fellows to work from our London offices, but will also consider applications from people who are unable to relocate.
Last year we had one fully remote fellow.
The lifecycle of 'agents'
Epistemic status: mostly speculation and simplification, but I stand by the rough outline of 'self-unaware learners -> self-aware consequentialists struggling with multipolarity -> static rule-following not-thinking-too-hard non-learners'. The two most important transitions are "learning" and then, once you've learned enough, "committing/self-modifying (away from learning)".
I briefly sketch three phases that I guess ‘agents’ go through, and consider how two different metrics change during this progression. This is a high...
I agree. I think we should break "doom" into at least these four outcomes: {human extinction, humans remain on Earth} x {lots of utility achieved, little to no utility}.
Mmm. I'm a bit confused about the short timelines: 50% by 2030 and 75% by 2030 seem pretty short to me.
I think the medium timelines I use have a pretty long tail, but the 75% by 2060 is pretty much exactly the Metaculus community's 75% by 2059.
Thanks for sharing! I've definitely had productivity gains from using a similar setup (Logseq, which is pretty much an open source clone of Roam/Obsidian and stores stuff locally as .md files).
This is a short follow-up to my post on the optimal timing of spending on AGI safety work which, given exact values for the future real interest rate, diminishing returns, and other factors, calculated the optimal spending schedule for AI risk interventions.
This has also been added to the post’s appendix and assumes some familiarity with the post.
Here I consider the most robust spending policies, supposing uncertainty over nearly all parameters in the model.[1] Inputs that are not considered include: historic spendin...
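To make "robust under uncertainty" concrete, here is a toy sketch of the idea. It is my own construction, not the post's actual model: the functional forms, parameter ranges, and the restriction to constant spending rates are all assumptions. It samples the uncertain parameters and picks the constant spending rate that does best in expectation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the model's uncertain parameters (assumed ranges):
N = 2_000
rate = rng.normal(0.03, 0.02, N)    # real interest rate
eta = rng.uniform(0.2, 0.8, N)      # diminishing-returns exponent
horizon = rng.integers(5, 80, N)    # years of spending that matter

def total_output(s, rate, eta, horizon):
    """Sum of spend**eta each year, spending a constant fraction s
    of capital per year while the remainder grows at `rate`."""
    capital, out = 1.0, 0.0
    for _ in range(horizon):
        spend = s * capital
        out += spend ** eta
        capital = (capital - spend) * (1 + rate)
    return out

candidates = np.linspace(0.01, 0.30, 30)
scores = [np.mean([total_output(s, rate[i], eta[i], horizon[i])
                   for i in range(N)]) for s in candidates]
print(f"best constant rate: {candidates[int(np.argmax(scores))]:.2%}")
```

A more pessimistic notion of robustness would replace the mean with a low quantile of the sampled outcomes.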
Adjacent to interstice's comment about trade with neighbouring branches, if the AI is sufficiently updateless (i.e. it is reasoning from a prior where it thinks it could have human values) then it may still do nice things for us with a small fraction of the universe.
Johannes Treutlein has written about this here.
Sorry, this is very unclear notation. It's meant to be a random variable exponentially distributed with parameter 0.7.
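Concretely (taking 0.7 as the rate parameter), the density and mean are:

$$f(x) = 0.7\,e^{-0.7x} \quad \text{for } x \ge 0, \qquad \mathbb{E}[X] = \frac{1}{0.7} \approx 1.43.$$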
Using DuckDuckGo as my address bar search...
... but rarely actually searching DuckDuckGo. DuckDuckGo allows for 'bangs' in the search.
For example "London !gmaps" redirects your search to Google Maps. At least half of my searches involve "!g" to search Google since the DuckDuckGo search isn't very good.
The wildcard "!" takes you to the first result on DuckDuckGo's search. For example, "Interstellar !imdb" is slower than "Interstellar imdb !" since the latter takes you to the first page of the DuckDuckGo search whereas the former takes you to the...
Not using a web browser on my phone
I've gone nearly a year without using a web browser on my phone. I minimise the number of apps that are used for websites (e.g. I don't use the Reddit or Facebook apps but heavily rely on the Google Maps app).
This habit makes me more attached to my laptop (and I feel more helpless without it), which seems mixed. I've only rarely needed to re-enable the app, and I occasionally ask other people to do something for me (e.g. at restaurants that only have a web-based menu or ordering system).
My Android phone has Chrome installed as a system app, so it can only be disabled in the settings, not uninstalled.
Using an adblocker to block distracting or unnecessary elements of web pages
With the uBlock Origin extension (Chrome | Firefox) one can right-click, choose "Block element", and pick an element of a webpage to hide. I find this useful for removing distractions or ugly elements (though I don't think it speeds up page loading at all).
Some examples (rough filter equivalents are sketched after this list):
- the Facebook news feed (for which dedicated addons also exist) as well as the footers and left and right sidebars
- the YouTube comments, suggested video sidebar, search bar, footer
- the footer on Amazon
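For anyone who prefers typing rules directly into uBlock Origin's "My filters" pane, rough equivalents of a couple of the examples above look like the sketch below. The selectors are my guesses and go stale as sites change their markup, so verify them with the element picker:

```
! Hide YouTube comments and the suggested-video sidebar
www.youtube.com###comments
www.youtube.com###related
! Hide the Facebook news feed (selector is illustrative)
www.facebook.com##div[role="feed"]
```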
Watching videos at >1x speed
I've listened to audiobooks and podcasts at >1x speed for a while and began applying this to any video (TV or film) I watch too.
For the past few months I've been watching film and TV at 1.5x to 2.5x speed quite comfortably. I made the mistake of starting a rewatch of Breaking Bad, but powered through at 3x speed without much loss of moment-to-moment enjoyment. At faster speeds I find it very hard to follow without using subtitles.
I recommend Video Speed Controller (free & open source extension for Chrome & Firefox) f...
Thanks for putting this together! Lots of ideas I hadn't seen before.
As for the meta-level problem, I agree with MSRayne that one should do the thing that maximises EU, which leads me to the ADT/UDT approach. This assumes we can have some non-anthropic prior, which seems reasonable to me.
Anecdata: I aim never to take caffeine on two consecutive days, and when I do take it, it's normally <50 mg. This has worked well for me.
Wouldn't the respective type of utilitarian already have the corresponding expectations on future GCs? If not, then they aren't the type of utilitarian that they thought they were.
I'm not sure what you're saying here. Are you saying that in general, a [total][average] utilitarian wagers for [large][small] populations?
So there's a lower bound on the chance of meeting a GC 44e25 meters away.
Yep! (only if we become grabby though)
...Lastly, the most interesting aspect is the symmetry between abiogenesis time and the remaining habitability time (only 500 mil
The habitability of planets around longer lived stars is a crux for those using SSA, but not SIA or decision theoretic approaches with total utilitarianism.
I show in this section that if one is certain that there are planets habitable for at least a given timespan, then SSA with the reference class of observers in pre-grabby intelligent civilizations gives ~30% on us being alone in the observable universe; for longer minimum habitability this gives ~10% on being alone.
Great report. I found the high decision-worthiness vignette especially interesting.
Thanks! Glad to hear it
Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?
Yep, this is kinda what anthropic decision theory (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.
...I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded,
Could your model also include the possibility of a SETI attack: grabby aliens sending malicious radio signals with an AI description ahead of their arrival?
I briefly discuss this in Chapter 4. My tentative conclusion is that we have little to worry about in the next hundred or thousand years, especially if (a point I do not mention there) we expect malicious grabby aliens to try particularly hard to have their signals discovered.
I agree it seems plausible that SIA favours panspermia, though my rough guess is that it doesn't change the model too much.
Conditioning on panspermia happening (and so on the majority of GCs arriving through panspermia), the number of hard steps in the model can just be seen as the number of post-panspermia steps.
I then think this doesn't change the spatial distribution of ICs or GCs if (1) the post-panspermia steps are sufficiently hard and (2) a GC can quickly expand to contain the volume over which its panspermia of origin occurred. The hardnes...
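For reference, the reason the step count is the thing that matters (as I understand the model) is the standard hard-steps power law: the chance that all $n$ hard steps complete by time $t$ within a window of length $L$ scales as

$$P(\text{all } n \text{ steps done by } t) \propto \left(\frac{t}{L}\right)^{n},$$

so conditioning on panspermia just swaps $n$ for the number of post-panspermia steps without changing the functional form.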
Ah, I don't think I was very clear either.
I interpreted this comment as you saying “We could restrict our SSA reference class to only include observers for whom computers were invented 80 years ago”. (Is that right?)
What I wanted to say was: keep the reference class the same, but restrict the types of observers we are saying we are contained in (the numerator of the SSA ratio) to be only those who (amongst other things) observe the invention of the computer 80 years ago.
...And then I was trying to respond to that by saying “Well if we can do tha
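For concreteness, the ratio I have in mind (my notation, a sketch) is the SSA likelihood of a world $w$:

$$P(w \mid \text{our observations}) \;\propto\; P(w)\cdot\frac{|S_w|}{|R_w|},$$

where $R_w$ is the reference class in $w$ and $S_w \subseteq R_w$ is the set of reference-class observers whose observations match ours (e.g. seeing computers invented ~80 years ago). My suggestion restricts $S_w$ (the numerator) while leaving $R_w$ alone; yours, as I understood it, shrinks $R_w$ itself.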
Doesn't sound snarky at all :-)
Hanson et al. are conditioning on the observation that the universe is 13.8 billion years old. On page 18 they write:
...Note that by assuming a uniform distribution over our origin rank r (i.e., that we are equally likely to be any percentile rank in the GC origin time distribution), we can convert distributions over model times τ (e.g., an F(τ) over GC model origin times) into distributions over clock times t. This in effect uses our current date of 13.8 Gyr to estimate a distribution over the model timescale constant k. I...
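As I read that passage, the conversion works as follows (my notation, a sketch rather than their exact procedure):

$$r \sim U(0,1), \qquad \tau_{\text{us}} = F^{-1}(r), \qquad k = \frac{13.8\ \text{Gyr}}{\tau_{\text{us}}}, \qquad t = k\,\tau,$$

so each draw of our origin rank $r$ pins down a model origin time for us, hence a timescale constant $k$, and the distribution over $r$ induces a distribution over clock times $t$ for any model-time quantity $\tau$.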
Yep, you're exactly right.
We could further condition on something like "observing that computers were invented ~X years ago" (or something similar that distinguishes observers like us) such that the (eventual) population of civilizations doesn't matter. This conditioning means we don't have to consider that longer-lived planets will have greater populations.
I've been studying & replicating the argument in the paper [& hopefully will be sharing results in the next few weeks].
The argument implicitly uses the self-sampling assumption (SSA) with reference class of observers in civilizations that are not yet grabby (and may or may not become grabby).
Their argument is similar in structure to the Doomsday argument:
If there are no grabby aliens (and longer-lived planets are habitable) then there will be many civilizations that appear far in the future, making us highly atypical (in particular, 'early' in the ...
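In SSA terms (my sketch of the structure, not the paper's notation): if being in the earliest fraction $q$ of the reference class has probability $q$ under "no grabby aliens" but probability close to 1 under "grabby aliens arrive soon and truncate the distribution", then the likelihood ratio

$$\frac{P(\text{we are this early} \mid \text{grabby})}{P(\text{we are this early} \mid \text{no grabby})} \approx \frac{1}{q} \gg 1$$

drives a strong update towards grabby aliens, just as the Doomsday argument updates towards fewer future people.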
Something about being watched makes us more responsible. If you can find people who aren't going to distract you, working alongside them keeps you accountable. If it's over Zoom, you can mute them.
I like Focusmate for this. You book a 25-minute or 50-minute pomodoro session with another member of the site and video call for the duration. I've found sharing my screen also helps.
I've finally commented on LessWrong (after lurking for the last few years), which had been on the edge of my comfort zone. Thanks for the exercise!
Oh, that's a great idea! Me too!
Thanks for this great explainer! For the past few months I've been working on the Bayesian update from Hanson's argument and hoping to share it in the next month or two.
I use Loop Habit Tracker [Android app] for a similar purpose. It's free and open source, and allows notifications to be set and then habits ticked off. The notifications can be made sticky too.
I think this is a potentially overly strong criterion for decision theories - we should probably restrict the problems to something like a fair problem class, else we end up with no decision theory receiving any credence.
I also think "wrong answer" is doing a lot of work here. Caspar Oesterheld writes
...