All of player_03's Comments + Replies

Yann LeCun describes LLMs as "an off-ramp on the road to AGI," Gary Marcus has argued much the same, and I'm inclined to agree. LLMs themselves aren't likely to "turn AGI." Each generation of LLMs demonstrates the same fundamental flaws, even as they get better at hiding them.

But I also completely buy the "FOOM even without superintelligence" angle, as well as the argument that they'll speed up AI research by an unpredictable amount.

I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3.

Or one might be working from different axioms. I don't know what axioms, and I'd look at you funny until you explained, but I can't rule it out. It's possible (though implausible given its length) that Principia Mathematica wasn't thorough enough, that it snuck in a hidden axiom that - if challenged - would reveal an equally-coherent alternate counting system in which 1+1=3.

I brought up Euclid's postulates as an example of a time thi... (read more)

In other words, when I say that "Murder is bad," that is a fact about the world, as true as 2+2=4 or the Pythagorean theorem.

I like this way of putting it.

In Principia Mathematica, Whitehead and Russell spent over 300 pages laying groundwork before they even attempted to prove 1+1=2. Among other things, they needed to define numbers (especially the numbers 1 and 2), equality, and addition.
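Whitehead and Russell's construction is nothing like this, but as a rough modern illustration of "define the numbers, equality, and addition first, and only then does the proof exist," here is a minimal Lean 4 sketch (my own toy example, not anything from Principia):

```lean
-- A toy construction: define the naturals and addition from scratch,
-- then 1 + 1 = 2 follows just by unfolding those definitions.
inductive N where
  | zero : N
  | succ : N → N

open N

def add : N → N → N
  | a, zero   => a
  | a, succ b => succ (add a b)

def one : N := succ zero
def two : N := succ one

-- Both sides reduce to succ (succ zero), so reflexivity closes the proof.
theorem one_add_one_eq_two : add one one = two := rfl
```

The proof itself is trivial; the work is in having the definitions at all, which is the point of the 300 pages.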

I do think "1+1=2" is an obvious fact. If someone claimed to be intelligent and also said that 1+1=3, I'd look at them funny and press for clarification. Given all the a... (read more)

omnizoid
I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3; not so with the falsity of the Orthogonality thesis.

Back when you were underestimating Covid, how much did you hear from epidemiologists? Either directly or filtered through media coverage?

I was going to give an answer about how "taking the outside view" should work, but I realized I needed this information first.

lsusr
In the early days of COVID, I listened to the This Week in Virology podcast (which is hosted by virologists) because I wanted to hear from experts. I was surprised by how they weren't much better than the mainstream media at predicting what would happen. This could be because they were virologists and not epidemiologists.

I don't think it invalidates the claim that "Without the minimum wage law, lots of people would probably be paid significantly less." (I believe that's one of the claims you were referring to. Let me know if I misinterpreted your post.)

I don't have a whole lot of time to research economies around the world, but I checked out a couple sources with varying perspectives (two struck me as neutral, two as libertarian). One of the libertarian ones made no effort to understand or explain the phenomenon, but all three others agreed that these countries rely on str... (read more)

Leafcraft
"if a country has minimum wage laws, removing those laws will in fact tend to reduce wages." You say so, but you don't justify that statement in any way. When the poster wrote: "Without the minimum wage law, lots of people would probably be paid significantly less." I assumed they meant that the lack of MWL would push to sweatshop like conditions, my observation proves that it is not ture. The reverse (MWL pushes away from sweatshop like conditions) can also be proven false, as there are plenty of countries with bad economies and MWL where people live in miserable conditions. 
TekhneMakre
I'm curious if you have independent reasons for thinking that market forces push towards sweatshop-like relations. It's not obvious to me either way.

Most of the research is aware of that limitation. Either they address it directly, or the experiment is designed to work around it, assuming mental state based on actions just as you suggest.

My point here isn't necessarily that you're wrong, but that you can make a stronger point by acknowledging and addressing the existing literature. Explain why you've settled on suicidal behavior as the best available indicator, as opposed to vocalizations and mannerisms.

This is important because, as gbear605 pointed out, most farms restrict animals' ability to attempt ... (read more)

I'm afraid I don't have time to write out my own views on this topic, but I think it's important to note that several researchers have looked into the question of whether animals experience emotion. I think your post would be a lot stronger if you addressed and/or cited some of this research.

George3d6
The problem with that research is that it's shabby. I encountered this problem when dealing with the research on animal suicide, and the research on animal emotions extends that trend. Fundamentally, it's a problem that can't be studied unless you are able to metaphorically see as a bat, which you can't. So I chose to think that the closest thing we can do is treat it much like we do with other humans: assume their mental state based on their actions and act accordingly.

I do want to add - separately - that superrational agents (not sure about EDT) can solve this problem in a roundabout way.

Imagine if some prankster erased the "1" and "2" from the signs in rooms A1 and A2, leaving just "A" in both cases. Now everyone has less information and makes better decisions. And in the real contest, (super)rational agents could achieve the same effect by keeping their eyes closed. Simply say "tails," maximize expected value, and leave the room never knowing which one it was.

None of which should be necessary. (Super)rational agents s... (read more)

Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.

It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On a... (read more)

player_03
I do want to add - separately - that superrational agents (not sure about EDT) can solve this problem in a roundabout way.

Imagine if some prankster erased the "1" and "2" from the signs in rooms A1 and A2, leaving just "A" in both cases. Now everyone has less information and makes better decisions. And in the real contest, (super)rational agents could achieve the same effect by keeping their eyes closed. Simply say "tails," maximize expected value, and leave the room never knowing which one it was.

None of which should be necessary. (Super)rational agents should win even after looking at the sign. They should be able to eliminate a possibility and still guess "tails." A flaw must exist somewhere in the argument for "heads," and even if I haven't found that flaw, a perfect logician would spot it no problem.
Answer by player_03

I'm going to rephrase this using as many integers as possible because humans are better at reasoning about those. I know I personally am.

Instead of randomness, we have four teams that perform this experiment. Teams 1 and 2 represent the first flip landing on heads. Team 3 is tails then heads, and team 4 is tails then tails. No one knows which team they've been assigned to.

Also, instead of earning $1 or $3 for both participants, a correct guess earns that same amount once. They still share finances so this shouldn't affect anyone's reaso... (read more)

Abhimanyu Pallavi Sudhir
I don't think this is right. A superrational agent exploits the symmetry between A1 and A2, correct? So it must reason that an identical agent in A2 will reason the same way as it does, and if it bets heads, so will the other agent. That's the point of bringing up EDT.

It is a stretch, which is why it needed to be explained.

And yes, it would kind of make him immune to dying... in cases where he could be accidentally rescued. Cases like a first year student's spell locking a door, which an investigator could easily dispel when trying to investigate.

Oh, and I guess once it was established, the other time travel scenes would have had to be written differently. Or at least clarify that "while Draco's murder plot was flimsy enough that the simplest timeline was the timeline in which it failed, Quirrell's mu

... (read more)
ndee

I don't mind the occasional protagonist who makes their own trouble. I agree it would be annoying if all protagonists were like that (and I agree that Harry is annoying in general), but there's room in the world for stories like this.

Now that you mention it, your first example does sound like a Deus Ex Machina. Except that

the story already established that the simplest possible time loop is preferred, and it's entirely possible that if Harry hadn't gotten out to pass a note, someone would have gone back in time to investigate his death, and

... (read more)
ndee
Sounds like too much of a stretch to me. I don't remember that part; could you point me to it?

It's been a while since I read it, but off the top of my head I can't recall any blatant cases of Deus ex Machina. I'd ask for concrete examples, but I don't think it would be useful. I'm sure you can provide an example, and in turn I'll point out reasons why it doesn't count as Deus ex Machina. We'd argue about how well the solution was explained, and whether enough clues were presented far enough in advance to count as valid foreshadowing, and ultimately it'll come down to opinion.

Instead, I can go ahead and a... (read more)

ndee
I can probably find a few more, but these two already look good enough. I do believe that rational people can always find a way to understand each other. I personally prefer protagonists who don't get into trouble mostly because of their own faults.

An example I like is the Knight Capital Group trading incident. Here are the parts that I consider relevant:

KCG deployed new code to a production environment, and while I assume this code was thoroughly tested in a sandbox, one of the production servers had some legacy code ("Power Peg") that wasn't in the sandbox and therefore wasn't tested with the new code. These two pieces of code used the same flag for different purposes: the new code set the flag during routine trading, but Power Peg interpreted that flag as a signal to buy and sell ~10,000... (read more)
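To make that failure mode concrete, here is a minimal sketch of the flag-reuse hazard. Everything in it is hypothetical (invented names, Python rather than Knight's actual system); it only illustrates how one bit can mean two different things to two code paths that were never tested together:

```python
# Hypothetical illustration of the flag-reuse hazard described above.
# All names are invented; they are not Knight Capital's actual identifiers.

ROUTINE = 0x01  # The new code repurposed this bit to mean "routine parent order."

def new_router(flags: int) -> int:
    """New code path: tags routine orders with the repurposed flag."""
    return flags | ROUTINE

def legacy_power_peg(flags: int) -> list[str]:
    """Legacy test code: reads the SAME bit as a signal to keep sending
    child orders. Absent from the sandbox, but still live on one server."""
    child_orders = []
    if flags & ROUTINE:                     # misinterprets the flag's new meaning
        for _ in range(10_000):             # stands in for an effectively unbounded loop
            child_orders.append("BUY 100")  # runaway orders against the live market
    return child_orders

flags = new_router(0)                       # a routine order, per the new code
print(len(legacy_power_peg(flags)))         # 10000 -- the untested interaction
```

Each piece behaves "correctly" in isolation; the damage comes from the combination that was never exercised in the sandbox.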

I suppose, but even then he would have to take time to review the state of the puzzle. You would still expect him to take longer to spot complex details, and perhaps he'd examine a piece or two to refresh his memory.

But that isn't my true rejection here.

If you assume that Claude's brother "spent arbitrarily much time" beforehand, the moral of the story becomes significantly less helpful: "If you're having trouble, spend an arbitrarily large amount of time working on the problem."

gwern
I don't think that's what it becomes. It remains what it was: 'a solution exists, and oddly enough, reminding yourself of this is useful'.
player_03

His brother's hint contained information that he couldn't have gotten by giving the hint to himself. The fact that his brother said this while passing by means that he spotted a low-hanging fruit. If his brother had spent more time looking before giving the hint, this would have indicated a fruit that was a little higher up.

This advice is worth trying, but when you give it to yourself, you can't be sure that there's low hanging fruit left. If someone else gives it to you, you know it's worth looking for, because you know there's something there to find. (T... (read more)

gwern
The brother could have spent arbitrarily much time on the jigsaw puzzle before Claude started playing with it.

Harry left "a portion of his life" (not an exact quote) in Azkaban, and apparently it will remain there forever. That could be the remnant that Death would fail to destroy.

Anyway, Snape drew attention to the final line in the prophecy. It talked about two different spirits that couldn't exist in the same world, or perhaps two ingredients that cannot exist in the same cauldron. That's not Harry and Voldemort; that's Harry and Death.

I mean, Harry has already sworn to put an end to death. It's how he casts his patronus. He's a lot less sure about killing Voldemort, and would prefer not to, if given the choice.

On the other hand, MIRI hit its goal three weeks early, so the amount of support is pretty obvious.

Though I have to admit, I was going to remain silent too, and upon reflection I couldn't think of any good reasons to do so. It may not be necessary, but it couldn't hurt either. So...

I donated $700 to CFAR.

player_03

So that's how Omega got the money for box B!

player_03

Well-designed traditions and protocols will contain elements that cause most competent people to not want to throw them out.

James_Miller
No. If an organization contains sub-competent people, it should take this into account when designing traditions and protocols.

Having just listened to much of the Ethical Injunctions sequence (as a podcast courtesy of George Thomas), I'm not so sure about this one. There are reasons for serious, competent people to follow ethical rules, even when they need to get things done in the real world.

Ethics aren't quite the same as tradition and protocol, but even so, sometimes all three of those things exist for good reasons.

Agreed.


Though actually, Eliezer used similar phrasing regarding Richard Loosemore and got downvoted for it (not just by me). Admittedly, "persistent troll" is less extreme than "permanent idiot," but even so, the statement could be phrased to be more useful.

I'd suggest, "We've presented similar arguments to [person] already, and [he or she] remained unconvinced. Ponder carefully before deciding to spend much time arguing with [him or her]."

Not only is it less offensive this way, it does a better job of explaining itself. (Note: the "ponder carefully" section is quoting Eliezer; that part of his post was fine.)

You and I are both bound by the terms of a scenario that someone else has set here.

Ok, if you want to pass the buck, I won't stop you. But this other person's scenario still has a faulty premise. I'll take it up with them if you like; just point out where they state that the goal code starts out working correctly.

To summarize my complaint, it's not very useful to discuss an AI with a "sincere" goal of X, because the difficulty comes from giving the AI that goal in the first place.

What you did was consider some other possibilities, such as th

... (read more)

I didn't mean to ignore your argument; I just didn't get around to it. As I said, there were a lot of things I wanted to respond to. (In fact, this post was going to be longer, but I decided to focus on your primary argument.)

Your story:

This hypothetical AI will say “I have a goal, and my goal is to get a certain class of results, X, in the real world.” [...] And we say “Hey, no problem: looks like your goal code is totally consistent with that verbal description of the desired class of results.” Everything is swell up to this point.

My version:

The A

... (read more)
[anonymous]
What you say makes sense .... except that you and I are both bound by the terms of a scenario that someone else has set here. So, the terms (as I say, this is not my doing!) of reference are that an AI might sincerely believe that it is pursuing its original goal of making humans happy (whatever that means .... the ambiguity is in the original), but in the course of sincerely and genuinely pursuing that goal, it might get into a state where it believes that the best way to achieve the goal is to do something that we humans would consider to be NOT achieving the goal.

What you did was consider some other possibilities, such as those in which the AI is actually not being sincere. Nothing wrong with considering those, but that would be a story for another day.

Oh, and one other thing that arises from your above remark: remember that what you have called the "fail-safe" is not actually a fail-safe, it is an integral part of the original goal code (X). So there is no question of this being a situation where "... it wants Z, and a fail-safe prevents it from getting Z, [so] it will find a way around that fail-safe." In fact, the check is just part of X, so it WANTS to check as much as it wants anything else involved in the goal.

I am not sure that self-modification is part of the original terms of reference here, either. When Muehlhauser (for example) went on a radio show and explained to the audience that a superintelligence might be programmed to make humans happy, but then SINCERELY think it was making us happy when it put us on a Dopamine Drip, I think he was clearly not talking about a free-wheeling AI that can modify its goal code. Surely, if he wanted to imply that, the whole scenario goes out the window. The AI could have any motivation whatsoever.

Hope that clarifies rather than obscures.

Oh, yeah, I found that myself eventually.

Anyway, I went and read the majority of that discussion (well, the parts between Richard and Rob). Here's my summary:

Richard:

I think that what is happening in this discussion [...] is a misunderstanding. [...]

[Rob responds]

Richard:

You completely miss the point that I was trying to make. [...]

[Rob responds]

Richard:

You are talking around the issue I raised. [...] There is a gigantic elephant in the middle of this room, but your back is turned to it. [...]

[Rob responds]

Richard:

[...] But each time I ex

... (read more)
[anonymous]
This entire debate is supposed to be about my argument, as presented in the original article I published on the IEET.org website ("The Fallacy of Dumb Superintelligence"). But in that case, what should I do when Rob insists on talking about something that I did not say in that article?

My strategy was to explain his mistake, but not engage in a debate about his red herring. Sensible people of all stripes would consider that a mature response. But over and over again Rob avoided the actual argument and insisted on talking about his red herring.

And then FINALLY I realized that I could write down my original claim in such a way that it is IMPOSSIBLE for Rob to misinterpret it. (That was easy, in retrospect: all I had to do was remove the language that he was using as the jumping-off point for his red herring). That final, succinct statement of my argument is sitting there at the end of his blog ..... so far ignored by you, and by him. Perhaps he will be able to respond, I don't know, but you say you have read it, so you have had a chance to actually understand why it is that he has been talking about something of no relevance to my original argument.

But you, in your wisdom, chose to (a) completely ignore that statement of my argument, and (b) give me a patronizing rebuke for not being able to understand Rob's red herring argument.

Link to the nailed-down version of the argument?

Rob Bensinger
Bottommost (September 9, 6:03 PM) comment here.

I posted elsewhere that this post made me think you're anthropomorphizing; here's my attempt to explain why.

egregiously incoherent behavior in ONE domain (e.g., the Dopamine Drip scenario)

the craziness of its own behavior (vis-a-vis the Dopamine Drip idea)

if an AI cannot even understand that "Make humans happy" implies that humans get some say in the matter

Ok, so let's say the AI can parse natural language, and we tell it, "Make humans happy." What happens? Well, it parses the instruction and decides to implement a Dopamine Drip set... (read more)

Peterdjones
Humans generally manage with those constraints. You seem to be doing something that is kind of the opposite of anthropomorphising -- treating an entity that is stipulated as having at least human intelligence as if it were as literal and rigid as a non-AI computer.
Broolucks
That's not very realistic. If you trained AI to parse natural language, you would naturally reward it for interpreting instructions the way you want it to. If the AI interpreted something in a way that was technically correct, but not what you wanted, you would not reward it, you would punish it, and you would be doing that from the very beginning, well before the AI could even be considered intelligent. Even the thoroughly mediocre AI that currently exists tries to guess what you mean, e.g. by giving you directions to the closest Taco Bell, or guessing whether you mean AM or PM. This is not anthropomorphism: doing what we want is a sine qua non condition for AI to prosper.

Suppose that you ask me to knit you a sweater. I could take the instruction literally and knit a mini-sweater, reasoning that this minimizes the amount of expended yarn. I would be quite happy with myself too, but when I give it to you, you're probably going to chew me out. I technically did what I was asked to, but that doesn't matter, because you expected more from me than just following instructions to the letter: you expected me to figure out that you wanted a sweater that you could wear.

The same goes for AI: before it can even understand the nuances of human happiness, it should be good enough to knit sweaters. Alas, the AI you describe would make the same mistake I made in my example: it would knit you the smallest possible sweater. How do you reckon such AI would make it to superintelligence status before being scrapped? It would barely be fit for clerk duty.

Realistically, AI would be constantly drilled to ask for clarification when a statement is vague. Again, before the AI is asked to make us happy, it will likely be asked other things, like building houses. If you ask it: "build me a house", it's going to draw a plan and show it to you before it actually starts building, even if you didn't ask for one. It's not in the business of surprises: never, in its whole training history, from

I did see the insult, but Eliezer (quite rightly) got plenty of downvotes for it. I'm pretty sure that's not the reason you're being rated down.

I myself gave you a downvote because I got a strong impression that you were anthropomorphizing. Note that I did so before reading Eliezer's comment.

I certainly should have explained my reasons after voting, but I was busy and the downvote button seemed convenient. Sorry about that. I'll get started on a detailed response now.

player_03

I want to upvote this for the link to further discussion, but I also want to downvote it for the passive-aggressive jab at LW users.

No vote.

[anonymous]
I'm afraid the 'troll prodrome' behaviour you observe more than cancels out the usefulness of the link (and, for that matter, it prevented me from even considering the link as having positive expected value.)
player_03

I've donated a second $1000.

lukeprog
Thanks again!!
player_03

I donated $1000 and then went and bought Facing the Intelligence Explosion for the bare minimum price. (Just wanted to put that out there.)

I've also left myself a reminder to consider another donation a few days before this runs out. It'll depend on my financial situation, but I should be able to manage it.

player_03

I've donated a second $1000.

lukeprog
Thanks very much!

A podcast entry is included for that one, but it just directs you to read the original article.

I was going to link you one of the other podcasts (which all provide samples), but then I realized you might be asking why this specific podcast doesn't have one.

player_03

Ok, yeah, in that case my response is to take as many deals as Omega offers.

AdeleneDawner and gwern provide a way to make the idea more palatable - assume MWI. That is, assume there will be one "alive" branch and a bunch of "dead" branches. That way, your utility payoff is guaranteed. (Ignoring the grief of those around you in all the "dead" branches.)

Without that interpretation, the idea becomes scarier, but the math still comes down firmly on the side of accepting all the offers. It certainly feels like a bad idea to accept that probability of death, no ... (read more)

One-in-a-million is just an estimate. Immortality is a tough proposition, but the singularity might make it happen. The important part is that it isn't completely implausible.

I'm not sure what you mean, otherwise.

Are you suggesting that Omega takes away any chance of achieving immortality even before making the offer? In that case, Omega's a jerk, but I'll shut up and multiply.

Or are you saying that 10^10,000,000,000 years could be used for other high-utility projects, like making simulated universes full of generally happy people? Immortality would allow even more time for that.

[This comment is no longer endorsed by its author]
MugaSofer
#1, although I was thinking in terms of someone from a civilization with no singularity in sight. Thanks for clarifying!
player_03

Summary of this retracted post:

Omega isn't offering an extended lifespan; it's offering an 80% chance of guaranteed death plus a 20% chance of guaranteed death. Before this offer was made, actual immortality was on the table, with maybe a one-in-a-million chance.

[This comment is no longer endorsed by its author]
MugaSofer
How about if you didn't have that one-in-a-million chance? After all, life is good for more than immortality research.
player_03

When I was a Christian, and when I began this intense period of study which eventually led to my atheism, my goal, my one and only goal, was to find the best evidence and argument I could find that would lead people to the truth of Jesus Christ. That was a huge mistake. As a skeptic now, my goal is very similar - it just stops short. My goal is to find the best evidence and argument, period. Not the best evidence and argument that leads to a preconceived conclusion. The best evidence and argument, period, and go wherever the evidence leads.

--Matt Dillahunty

simplicio
I wonder if somebody, looking at (a) his stated goal and (b) his behaviour, would consider his statement borne out. (Same goes for me, no offense to Dillahunty specifically).

Sorry I'm late... is no-one curious given the age of the universe why the 3 races are so close technologically?

Sorry I'm late in replying to this, but I'd guess the answer is that this is "the past's future." He would not have been able to tell this story with one species being that advanced, so he postulated a universe in which such a species doesn't exist (or at least isn't nearby).

Your in-universe explanations work as well, of course.

Perhaps the fact that it's the "traditional and irrational" ending is the reason Eliezer went with it as the "real" one. (Note that he didn't actually label them as "good" and "bad" endings.)

player_03

I assumed the same, based on the definition of "god" as "supernatural" and the definition of "supernatural" as "involving ontologically basic mental entities."

(Oh, and for anyone who hasn't read the relevant post, the survey is quoting this.)

I could be interpreting it entirely wrong, but I'd guess this is the list Cochran had in mind:

This reminds me of Lojban, in which the constructs meaning "good" and "bad" encourage you to specify a metric. It is still possible to say that something is "worse" without providing any detail, but I suspect most Lojban speakers would remember to provide detail if there was a chance of confusion.

Give the furries, vampire-lovers and other assorted xenophiles a few generations to chase their dreams, and you're going to start seeing groups with distinctly non-human psychology.

WHY HAVEN'T I READ THIS STORY?

Because you haven't had time to read all the Orion's Arm stories, probably. (Details)

2) Honestly, I would have been happy with the aliens' deal (even before it was implemented), and I think there is a ~60% chance that Eliezer agrees.

I'm of the opinion that pain is a bad thing, except insofar as it prevents you from damaging yourself. People argue that pain is necessary to provide contrast to happiness, and that pleasure wouldn't be meaningful without pain, but I would say that boredom and slight discomfort provide more than enough contrast.

However, this future society disagrees. The idea that "pain is important" is ingrained in t... (read more)

JackAttack1024
You should read this: http://www.nickbostrom.com/fable/dragon.html It makes your point well. This is also touched on in HPMOR.
Hul-Gil
I think that point would make more sense than the point he is apparently actually making... which is that we must keep negative aspects of ourselves (such as pain) to remain "human" (as defined by current specimens, I suppose), which is apparently something important. Either that or, as you say, Yudkowsky believes that suffering is required to appreciate happiness. I too would have been happy to take the SH deal; or, if not happy, at least happier than with any of the alternatives.

I think one of the main points Eliezer is trying to make is that we would disagree with future humans almost as much as we would disagree with the baby-eaters or superhappies.

I never had this impression; if anything, I thought that all the things Eliezer mentioned in any detail - changes in gender and sexuality, the arcane libertarian framework that replaces the state and generally all the differences that seem important by the measure of our own history - are still intended to underscore how humanity still operates against a scale recognizable to its p... (read more)

If you're trying to present any kind of information at all, you should figure out what is important about it and what presentation will make it clear.

Unfortunately, the quote above isn't at all clear, even in context. I suspect this is because Jacques Bertin isn't as good at expressing himself in English as in French, but even so I'm unable to understand the sample data he presents or how it relates to the point he was trying to make.

Raw_Power
It's Continental Philosophy at its worst. I can assure you it's exactly as messy in French.
NancyLebovitz
Unfortunately, I posted because it looked reasonable more than because I had a solid understanding. Here's where I picked it up (page down to del_c)-- the chart is definitely clearer when the person influenced by Bertin has re-arranged it.

I agree with your interpretation of the song, and to back it up, here's the chorus of "The Farthest Star" (another song by the same band).

We possess the power, if this should start to fall apart,
To mend divides, to change the world, to reach the farthest star.
If we should stay silent, if fear should win our hearts,
Our light will have long diminished before it reaches the farthest star.

This time, the message is pretty clear: we should aspire to overcome both our differences and our limitations, to avoid extinction and to expand throughout t... (read more)

ata
Streamline, from their newest album, also seems fairly transhumanist, and in a more hopeful way than most of their songs. Also, by the unholy power of confirmation bias, I hereby declare that Testament is about humanity's recklessness and apathy in the face of existential risks, and Tomorrow Never Comes is about our final desperate and ultimately futile efforts to stave off doomsday after having waited too long to act.
ata
Since the above comment of mine was posted, I actually became a big fan of VNV Nation (thanks Eliezer! :P) and downloaded the rest of their discography. "The Farthest Star" is definitely a good one. Though I do remember from one live recording of "Further" that Ronan did in fact say that it's about living forever, but given the lyrics, it sounds more like it's about what it would be like for one or two people living forever while the rest of humanity dies, and honestly that probably would suck.

If the problem here is that the entity being simulated ceases to exist, an alternative solution would be to move the entity into an ongoing simulation that won't be terminated. Clearly, this would require an ever-increasing number of resources as the number of simulations increased, but perhaps that would be a good thing - the AI's finite ability to support conscious entities would impose an upper bound on the number of simulations it would run. If it was important to be able to run such a simulation, it could, but it wouldn't do so frivolously.

Before you ... (read more)

player_03

Daniel Oppenheimer's Ig Nobel Prize acceptance speech:

My research shows that conciseness is interpreted as intelligence. So, thank you.

player_03

Until then, I'd be more interested in donating to general life extension research than paying for cryonics specifically.

This is very similar to my primary objection to cryonics.

I realize that, all factors considered, the expected utility you'd get from signing up for cryonics is extremely large. Certainly large enough to be worth the price.

However, it seems to me that there are better alternatives. Sure, paying for cryonics increases your chances of nigh-immortality by orders of magnitude. On the other hand, funding longevity research makes it more like... (read more)

Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.

I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.

The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one ... (read more)
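For what it's worth, here is the standard Bayesian identity behind "accumulating evidence can't reach absolute certainty" (my own addition, not something quoted from the thread):

```latex
% One Bayesian update on evidence E for hypothesis H:
\[
  P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                     {P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}
\]
```

If P(H) < 1 and P(E | not-H) > 0, the denominator strictly exceeds the numerator, so P(H | E) < 1 no matter how many such updates are chained together. Only tautologies (statements whose negation gets probability 0) escape this, which matches the exception MathijsJ allowed for.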