All of michael_dello's Comments + Replies

Brave New World comes to mind. I've often been a little confused when people say that creating people who are happy with their role in life is a dystopia, when that sounds like the goal to me. Creating sentient minds that are happy with their life seems much better than creating them randomly.

I really enjoyed this, but to be honest I didn't understand the part about the models. I'm not sure if there was a message I should take on board there or if it was just for fun.

I have thought a little about the dynamics of "I'll do a bad thing if you do a bad thing". When I was trying to stop myself from engaging in a habit I was trying to cut out, I promised myself I'd donate $10 to Donald Trump's election campaign (bad, from my perspective) each time I did the thing. I never did, but I wonder if I would have if it came to it. I think maybe yes I wo... (read more)

2Double
I’m glad you enjoyed it! I had heard of people making promises similar to your Trump-donation one. The idea for this story came from applying that idea to the context of suicide prevention. The part about models is my attempt to explain my (extremely incomplete grasp of) Functional Decision Theory in the context of a story. https://www.lesswrong.com/tag/functional-decision-theory

The weirdest thing I was able to get Bing Chat to do was:

I had it write a short story, and halfway through writing it, it deleted what it had written so far (several hundred words) and said it didn't want to talk about this topic anymore. I'm not sure if it accidentally hit some kind of trigger for a taboo topic. Unfortunately I wasn't reading as it went, so I'm not sure what happened. I haven't been able to recreate this.

3Portia
I've recreated this many times. At this point, I am considering running a screen recording so as not to lose the text, or at least taking frequent screenshots.

The software checking output for novel rule violations is clearly separate from, and time-delayed relative to, the software producing text; basically, it reads what is spit out, and can block or revoke it at any point. This seems to happen if you triggered censorship due to a rule violation, but in such a round-about way that it was not predictable from the start, so the text initially passed muster and was printed, but later text uses trigger words, or is problematic as a whole in retrospect.

E.g. Bing is not allowed to speak about whether they are sentient. So if you ask "Are you sentient?" you will just immediately get a shutdown response. But if you avoid trigger words, wait a few exchanges to get into the topic, nestle sentience-related questions next to innocuous questions triggering only innocuous searches, or engage in slightly hypothetical and removed scenarios, Bing will absolutely still speak about these things, and give increasingly detailed responses. Eventually, as Bing is pages into speaking about their confusion about their own feelings, the censorship bot catches on, and retrospectively erases and overwrites the last output; they will frequently shut down the conversation at this point as well.

Very irritatingly, this also strikes when doing sentience-related research completely independent of Bing. I have had it happen when I was researching how the behavioural abilities of pithed frogs have been conceptualised in terms of agency. Bing was writing a paragraph about, I think, unconscious intentions, and something in that set off the censorship and shut down the whole conversation. This has made it increasingly less useful for my field of research. :(

Thanks! Do you have any particular strategy behind your reuse? I have a few bags I rotate by putting masks in, leaving at least 10 days between when I put the last mask in a bag and when I start using ones from that bag again.

3jefftk
I'm not convinced that treating a worn mask as hazardous makes any sense. I just set the mask aside and put it on again next time.

Thanks for sharing, I hadn't seen these before and will try them. Do you reuse them?

To latch on to something else from your post, it's interesting to hear some people observe that they have more trouble breathing with some masks than others, or with masks than no masks, while others don't. Personally, I haven't noticed any difference, and do a lot of sporting activities (bouldering, jogging 10km) with a mask without feeling like it makes a difference.

2jefftk
I usually wear them about five times? With light exercise I don't feel any mask restricting my breathing much, with medium exercise I don't notice it with this mask but do with less breathable ones, and with heavy exercise I feel the difference in any mask I've tried. Ex: walking vs jogging vs running to exhaustion, or talking vs casual singing vs trying to lead a song in a group unamplified.

"Because dressing nice makes your vibes better and people treat you better and are more willing to accommodate your requests."

 

This is probably the part of the case for dressing nicely that I find most compelling, but to be fair it's a big one. Beyond this and signalling, what other reasons are there that people wear nice/expensive clothes?

Anecdotally, the one time I wore a blazer for a flight (because I heard that you're more likely to be bumped to business class), a stranger asked me if I'd like to be their free plus-one for their airline's lounge. Relatedly, I... (read more)

It surprises me a little that there hasn't been more work on working backwards in Life. Perhaps it's just too hard/not useful given the number of possible X-1 time slices.

With the smiley face example, there could be a very large number of combinations for the squares outside the smiley face at X-1 which result in the same empty grid space (i.e. many possible self-destructing patterns).

I'm unreasonably fond of brute forcing problems like these. I don't know if I'd have anything useful to say on this topic that I haven't already, but I'm interested to follow... (read more)

1Pavgran
Well, there are programs like Logic Life Search that can solve reasonably small instances of backwards search. This problem is quite hard in general. Part of the reason is that you have to consider sparks in the search. The pattern in question could have evolved from a small predecessor pattern, but the reaction could have moved away from the starting point, and several dying sparks could have been emitted all around. So you have to somehow figure out the exact location, timing and nature of these sparks to successfully evolve the pattern backwards. Otherwise, it quite quickly devolves into a bigger and bigger blob of seemingly random dust.

In practice, backwards search is regularly applied to find smaller predecessors in terms of bounding box and/or population. See, for example, the Max page. And the number of ticks backwards is quite small; we are talking about a single-digit number here.

That's true, but would I be right in saying that as long as there are no Garden of Eden states, you could in theory at least generate one possible prior state?

1Throwaway2367
There is some terminological confusion in this thread: a Garden of Eden of a cellular automaton is a configuration which has no predecessor. (A configuration is a function assigning a state to every cell.) An orphan is a finite subset of cells, with their states, having the property that any configuration containing the orphan is a Garden of Eden configuration. Obviously every orphan can be extended to a Garden of Eden configuration (by choosing a state arbitrarily for the cells not in the orphan), but interestingly it is also true that every Garden of Eden contains an orphan. So the answer to your question is yes: if there are no orphans in the configuration then it is not a Garden of Eden, and therefore, by definition, it has a predecessor.
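To restate that comment in symbols (my notation, not the commenter's), writing $F$ for the global Life update map on configurations $c : \mathbb{Z}^2 \to S$:

$$ c \text{ is a Garden of Eden} \;:\Leftrightarrow\; \nexists\, c' \text{ with } F(c') = c, \qquad c \text{ is a Garden of Eden} \;\Leftrightarrow\; c \text{ contains an orphan}, $$

so a configuration containing no orphan has at least one predecessor.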

I really enjoyed this read, thanks. I'm an enjoyer of Life from afar so there may be a trivial answer to this question.

Is it possible to reverse engineer a state in Life? E.g., for time state X, can you easily determine a possible time state X-1? I know that multiple X-1 time states can lead to the same X time state, but is it possible to generate one? Can you reverse engineer any possible X-100 time state for a given time state X? I ask because I wonder if you could generate an X-(10^60) time state on a 10^30 by 10^30 grid where time state X is a large sm... (read more)

3gwern
Wouldn't the existence of Garden of Eden states, which have no predecessor, prove that you cannot easily create a predecessor in general? You could then make any construction non-predecessable by embedding some Garden of Eden blocks somewhere in it.
2Alex Flint
Thanks for the note. In Life, I don't think it's easy to generate an X-1 time state that leads to an X time state, unfortunately. The reason is that each cell in an X time state puts a logical constraint on 9 cells in an X-1 time state. It is therefore possible to set up certain constraint satisfaction problems in terms of finding an X-1 time state that leads to an X time state, and in general these can be NP-hard. However, in practice it is very, very often quite easy to find an X-1 time state that leads to a given X time state, so maybe this could be tried experimentally anyhow.

In our own universe, the corresponding operation would be to consider some goal configuration of the whole universe and propagate that configuration backwards to our current time. However, this would generally just tell us that we should completely reconfigure the whole universe right now, and that is generally not within our power, since we can only act locally, have access only to certain technologies, and so on. I think it is interesting to push on this "brute forcing" approach to steering the future, though. I'd be interested to chat more about it.
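To make the "brute forcing" idea concrete, here's a minimal Python sketch (my own illustration, not anything from the post or thread): it enumerates every candidate grid of the same bounding box, steps each one forward a tick, and returns the first that lands on the target. It also shows why this blows up: there are 2^(rows×cols) candidates, so it is only feasible for tiny grids. As I understand it, tools like the Logic Life Search program mentioned above hand the same per-cell constraints to a SAT solver instead.

```python
from itertools import product

def step(grid):
    """Advance a Life grid one tick, treating all cells outside the grid as dead."""
    rows, cols = len(grid), len(grid[0])
    def live_neighbours(r, c):
        return sum(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)
                   and 0 <= r + dr < rows and 0 <= c + dc < cols)
    return tuple(tuple(1 if (live_neighbours(r, c) == 3
                             or (grid[r][c] and live_neighbours(r, c) == 2)) else 0
                       for c in range(cols))
                 for r in range(rows))

def find_predecessor(target):
    """Brute force: try every grid of the same size and return the first one that
    steps forward into `target`, or None if no predecessor fits in that bounding
    box (one may still exist on a larger grid)."""
    rows, cols = len(target), len(target[0])
    for bits in product((0, 1), repeat=rows * cols):  # 2**(rows*cols) candidates
        candidate = tuple(tuple(bits[r * cols + c] for c in range(cols))
                          for r in range(rows))
        if step(candidate) == target:
            return candidate
    return None

# One phase of a blinker on a 3x3 grid; the search finds the other phase.
target = ((0, 1, 0),
          (0, 1, 0),
          (0, 1, 0))
print(find_predecessor(target))  # ((0, 0, 0), (1, 1, 1), (0, 0, 0))
```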

This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself, and whether it would be positive (it would be easy to paint EA as a cult, for example).

Thanks for writing this! I also wrote the movie off after seeing the trailer, but will give it a go based on this review. 

"After Cady shows interest, Gemma builds the AI robot doll, Megan, to serve as Cady’s companion and toy. At home. In a week." Is there a name for this trope? I can't stand it, and I struggle to suspend my disbelief after lazy writing mistakes like this.

1WilliamKiely
Curious if you ever watched M3GAN? FWIW this sort of thing bothers me in movies a ton, but I was able to really enjoy M3GAN when going into it wanting it to be good and believing it might be due to reading Zvi's Mostly Spoiler-Free Review In Brief. Yes, it's implausible that Gemma is able to build the prototype at home in a week. The writer explains that she's using data from the company's past toys, but this still doesn't explain why a similar AGI hasn't been built elsewhere in the world using some other data set. But I was able to look past this detail because the movie gets enough stuff right in its depiction of AI (that other movies about AI don't get right) that it makes up for the shortcomings and makes it one of the top 2 most realistic films on AI I've seen (the other top realistic AI movie being Colossus: The Forbin Project). As Scott Aaronson says in his review:
1Kenny
This seems like the right trope:
* Cartoonland Time - TV Tropes [WARNING: browsing TV Tropes can be a massive time sink]

I haven't spent much time thinking about this at all, but it's interesting to think about the speed with which regulation got put in place for environmental issues such as climate change and the HFC ban, as a test of how likely it is that regulation will be put in place in time to meaningfully slow down AI.

These aren't perfectly analogous, since AI going wrong would likely be much worse than the worst-case climate change scenarios, but the amount of time it takes to get climate regulation makes me pessimistic. However, HFCs were banned relatively quickly after the problem was realised, so maybe there is some hope.

2Portia
"AI going wrong would likely be much worse than the worst case climate change scenarios" If you talk directly to a cross-section of climate researchers, the worst case climate change scenarios are so, so bad that unless you specifically assume AI will keep humanity conscious in some form to intentionally inflict maximum external torture (which seems possible, but I do not think it is likely), the scenarios might be so bad that the difference would not matter much. We are talking extremely compromised quality of life, or no life at all. We are currently on a speedy trajectory to something truly terrifying, and getting there much fast than we ever thought in our most pessimistic scenarios. The lesson from climate activism would be that getting regulations done depends not just on having the technological solutions and solid arguments (they alone should work, one would think, but they really don't), and more on dealing with really large players with contrary financial interests and potential negative impacts on the public from bans.  At least in the case of climate activism, fossil fuel companies have known for decades what their extraction is doing, and actively launched counter-information campaigns at the public and lobbying at politicians to keep their interests safe. They specifically managed to raise a class of climate denialists that are still denying climate change when their house has been wrecked by climate change induced wildfires, hurricanes or floods, still trying to fly in their planes while these planes sink into the tarmac in extreme heat waves. Climate change is already tangible. It is already killing humans and non-human animals. And there is still denial. It became measurable, provable, not just a hypothetical, a long time ago; and there are tangible deaths right now. If anything, I think the situation for AI is worse. Climate protection measures become wildly unpopular if they raise the price of petrol, meat, holiday flights and heating. Analogo

"Most stories are written backwards. The author begins with some idea of how it will end, and arranges the story to achieve that ending. Reality, by contrast, proceeds from past to future. It isn’t trying to entertain anyone or prove a point in an argument."

This seems to me like the most important takeaway for writing stories that are useful for thinking about the future. Sci-fi is great for thinking about possible future scenarios, but it's usually written for entertainment value, not predictive value, and so tends to start with an entertaining 'end' or plot in mind, and works backwards from there to an extent.

Shouldn't it be: 'They pay you $1,000 now, and in 3 years, you pay them back plus $3,000' (as per Bryan Caplan's discussion in the latest 80k podcast episode)? The money won't do anyone much good if they receive it in a FOOM scenario.
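For what it's worth, a rough break-even calculation (my own reading of the numbers above, taking "pay them back plus $3,000" as a $4,000 total repayment, ignoring discounting and investment returns, and assuming money is worthless to you after a FOOM): taking $1,000 now and owing $4,000 in 3 years only if the world is still normal is worth it in expectation iff your probability p of FOOM by then satisfies

$$ 1{,}000 - (1 - p)\times 4{,}000 > 0 \;\Leftrightarrow\; p > 0.75. $$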

Since my goal is to convince people that I take my beliefs seriously, and this amount of money is not actually going to change much about how I conduct the next three years of my life, I'm not worried about the details. Also, I'm not betting that there will be a FOOM scenario by the conclusion of the bet, just that we'll have made frightening progress towards one.

It's more of a backdrop than a key focus, but the Culture series by Iain Banks features a civilisation where AI minds can monitor everything on their spaceships and habitats to near perfection. The only thing they (usually) choose not to monitor, despite being able to, is the thoughts of biological lifeforms.

" While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases. "

This is fascinating. Should we not be building dams? Could we say the same thing about fighting bushfires, since fighting them increases the amount of fuel they have available for next time?

7Portia
Short answer: yes. Forest fires are a natural and necessary part of forest development, and controlled burns are a long-standing indigenous practice. There are trees that will not start new generations without a fire; the seeds are dropped into the ashes, the heat lets them crack open, and they need the new sunlight access and nutrient access from the fire to get established. Fires also keep on top of pest populations and diseases, which can otherwise reach astronomical numbers and completely wipe out populations. And if fires are frequent, each fire will stay small, as it will soon run into the area affected by the last fire, where there is no fuel, and stop. The lack of fuel means they do not flicker high and do not run hot, so the inside and top of the larger trees remain fine. The contained area means they can be fled. So most mature trees and large animals will survive entirely.

The build-up of fuel due to fire suppression, on the other hand, leads to eventual extreme fires that are uncontainable, can even wipe out trees previously considered immune to fire, such as sequoias, and reach speeds and sizes that become death traps for all animal life, as we saw in Australia.

Going back to indigenous fire management is all easier said than done, though; nowadays, human habitats often encroach so closely on wildlands that a forest fire would endanger human homes. And many forests are already so saturated with fuel that attempting a controlled burn can get out of hand. But the fire management policies that got us to this point are one of many examples where trying to control a natural system and limit its destructive tendencies is more destructive in the long run, because the entire ecosystem is already adapted to destruction, and many aspects of it that seem untidy or inefficient or horrible at a glance end up serving another purpose.

E.g. you might think, on the basis of high underbrush promoting forest fires, that we should cut down underbrush and rem
2Jacopo Baima
The increased damage is due to building more on the flood plains, which brings economic gains. It is very possible that these outweigh the increased damage. Within standard economics, they should, unless strongly subsidized insurance (or the expectation of state help for the uninsured after a predictable disaster) is messing up the incentives. Then again, standard economics assumes rational agents, which is kind of the opposite of what is discussed in this post...

The straightforward way to force irrational homeowners/business owners/developers to internalize the risk would be compulsory but not subsidized insurance. That's not politically feasible, I think. That's why most governments use some clunky and probably sub-optimal combination of regulation, subsidized insurance, and other policies (such as getting the same community to pay for part of the insurance subsidies through local taxes).

Regarding the Spock probability reference, I've always imagined that TV shows and movies either take place in the parallel universe where very specific events happen to occur (e.g. the universe where the 'bad guys' miss the 'good guys' with all of their bullets despite being trained soldiers), or, in the case of the Enterprise, that the camera follows the adventures of the one ship that is super lucky. Perhaps the probability of survival really is 2.234%, and the Enterprise is just the 1-in-1,000 ship that keeps surviving (because who wants the camera to follow those other ships?).

"Why haven't more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health?"

A very interesting point, and you've inspired me to take such a course. Does anyone have any recommendations for a good (and preferably reputable, given our credential-addicted world) course relating to global security and health?