I agree that it can take a long time to prove simple things. But my claim is that one has to be very stupid to think 1+1=3.
Or one might be working from different axioms. I don't know what axioms, and I'd look at you funny until you explained, but I can't rule it out. It's possible (though implausible given its length) that Principia Mathematica wasn't thorough enough, that it snuck in a hidden axiom that - if challenged - would reveal an equally-coherent alternate counting system in which 1+1=3.
I brought up Euclid's postulates as an example of a time thi...
In other words, when I say that "Murder is bad," that is a fact about the world, as true as 2+2=4 or the Pythagorean theorem.
I like this way of putting it.
In Principia Mathematica, Whitehead and Russell spent over 300 pages laying groundwork before they even attempted to prove 1+1=2. Among other things, they needed to define numbers (especially the numbers 1 and 2), equality, and addition.
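For contrast, here's how small the same exercise looks in a modern proof assistant. This is only a toy sketch in Lean 4, not Principia's formalism, and the names (MyNat, one, two) are made up for illustration: once you've defined the naturals, addition, and the numerals, the proof of 1+1=2 reduces to a one-line computation.

```lean
-- Toy reconstruction (not Principia's system): define the naturals,
-- addition, and the numerals 1 and 2, then prove 1 + 1 = 2 by computation.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat

def MyNat.add : MyNat → MyNat → MyNat
  | n, MyNat.zero   => n
  | n, MyNat.succ m => MyNat.succ (MyNat.add n m)

def one : MyNat := MyNat.succ MyNat.zero
def two : MyNat := MyNat.succ one

-- `rfl` closes the goal because both sides reduce to the same normal form.
example : MyNat.add one one = two := rfl
```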
I do think "1+1=2" is an obvious fact. If someone claimed to be intelligent and also said that 1+1=3, I'd look at them funny and press for clarification. Given all the a...
Back when you were underestimating Covid, how much did you hear from epidemiologists? Either directly or filtered through media coverage?
I was going to give an answer about how "taking the outside view" should work, but I realized I needed this information first.
I don't think it invalidates the claim that "Without the minimum wage law, lots of people would probably be paid significantly less." (I believe that's one of the claims you were referring to. Let me know if I misinterpreted your post.)
I don't have a whole lot of time to research economies around the world, but I checked out a couple sources with varying perspectives (two struck me as neutral, two as libertarian). One of the libertarian ones made no effort to understand or explain the phenomenon, but all three others agreed that these countries rely on str...
Most of the research is aware of that limitation: it either addresses the issue directly, or the experiment is designed to work around it, inferring mental state from actions just as you suggest.
My point here isn't necessarily that you're wrong, but that you can make a stronger point by acknowledging and addressing the existing literature. Explain why you've settled on suicidal behavior as the best available indicator, as opposed to vocalizations and mannerisms.
This is important because, as gbear605 pointed out, most farms restrict animals' ability to attempt ...
I'm afraid I don't have time to write out my own views on this topic, but I think it's important to note that several researchers have looked into the question of whether animals experience emotion. I think your post would be a lot stronger if you addressed and/or cited some of this research.
I do want to add - separately - that superrational agents (not sure about EDT) can solve this problem in a roundabout way.
Imagine if some prankster erased the "1" and "2" from the signs in rooms A1 and A2, leaving just "A" in both cases. Now everyone has less information and makes better decisions. And in the real contest, (super)rational agents could achieve the same effect by keeping their eyes closed. Simply say "tails," maximize expected value, and leave the room never knowing which one it was.
None of which should be necessary. (Super)rational agents s...
Oh right, I see where you're coming from. When I said "you can't control their vote" I was missing the point, because as far as superrational agents are concerned, they do control each other's votes. And in that case, it sure seems like they'll go for the $2, earning less money overall.
It occurs to me that if team 4 didn't exist, but teams 1-3 were still equally likely, then "heads" actually would be the better option. If everyone guesses "heads," two teams are right, and they take home $4. If everyone guesses "tails," team 3 takes home $3 and that's it. On a...
I'm going to rephrase this using as many integers as possible because humans are better at reasoning about those. I know I personally am.
Instead of randomness, we have four teams that perform this experiment. Teams 1 and 2 represent the first flip landing on heads. Team 3 is tails then heads, and team 4 is tails then tails. No one knows which team they've been assigned to.
Also, instead of earning $1 or $3 for both participants, a correct guess earns that same amount once. They still share finances so this shouldn't affect anyone's reaso...
It is a stretch, which is why it needed to be explained.
And yes, it would kind of make him immune to dying... in cases where he could be accidentally rescued. Cases like a first-year student's spell locking a door, which anyone investigating could easily dispel.
Oh, and I guess once it was established, the other time travel scenes would have had to be written differently. Or at least clarify that "while Draco's murder plot was flimsy enough that the simplest timeline was the timeline in which it failed, Quirrel's mu
I don't mind the occasional protagonist who makes their own trouble. I agree it would be annoying if all protagonists were like that (and I agree that Harry is annoying in general), but there's room in the world for stories like this.
Now that you mention it, your first example does sound like a Deus ex Machina. Except that
the story already established that the simplest possible time loop is preferred, and it's entirely possible that if Harry hadn't gotten out to pass a note, someone would have gone back in time to investigate his death, and
It's been a while since I read it, but off the top of my head I can't recall any blatant cases of Deus ex Machina. I'd ask for concrete examples, but I don't think it would be useful. I'm sure you can provide an example, and in turn I'll point out reasons why it doesn't count as Deus ex Machina. We'd argue about how well the solution was explained, and whether enough clues were presented far enough in advance to count as valid foreshadowing, and ultimately it'll come down to opinion.
Instead, I can go ahead and a...
An example I like is the Knight Capital Group trading incident. Here are the parts that I consider relevant:
KCG deployed new code to a production environment, and while I assume this code was thoroughly tested in a sandbox, one of the production servers had some legacy code ("Power Peg") that wasn't in the sandbox and therefore wasn't tested with the new code. These two pieces of code used the same flag for different purposes: the new code set the flag during routine trading, but Power Peg interpreted that flag as a signal to buy and sell ~10,000...
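To make that failure mode concrete, here is a purely illustrative Python sketch; the names and structure (Order, handle_order_new, handle_order_legacy) are hypothetical, not Knight Capital's actual code. The new handler treats the repurposed flag as routine routing metadata, while the legacy handler interprets the very same flag as a command to start test trading.

```python
# Purely illustrative: two code paths interpret the same repurposed flag
# in incompatible ways (hypothetical names, not Knight Capital's code).
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    quantity: int
    flag_enabled: bool  # reused flag: "new routing" vs. legacy "test mode"

def handle_order_new(order: Order) -> str:
    """Updated servers: the flag merely selects the new routing logic."""
    if order.flag_enabled:
        return f"route {order.quantity} {order.symbol} via new logic"
    return f"route {order.quantity} {order.symbol} via old logic"

def handle_order_legacy(order: Order) -> str:
    """Stale server (Power Peg stand-in): the SAME flag means 'start test trading'."""
    if order.flag_enabled:
        # Keeps generating orders, oblivious to the flag's new meaning.
        return f"TEST MODE: repeatedly buy and sell {order.symbol}"
    return "idle"

order = Order(symbol="XYZ", quantity=100, flag_enabled=True)
print(handle_order_new(order))     # what deployment assumed would run everywhere
print(handle_order_legacy(order))  # what the one un-updated server actually did
```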
I suppose, but even then he would have to take time to review the state of the puzzle. You would still expect him to take longer to spot complex details, and perhaps he'd examine a piece or two to refresh his memory.
But that isn't my true rejection here.
If you assume that Claude's brother "spent arbitrarily much time" beforehand, the moral of the story becomes significantly less helpful: "If you're having trouble, spend an arbitrarily large amount of time working on the problem."
His brother's hint contained information that he couldn't have gotten by giving the hint to himself. The fact that his brother said this while passing by means that he spotted a low-hanging fruit. If his brother had spent more time looking before giving the hint, this would have indicated a fruit that was a little higher up.
This advice is worth trying, but when you give it to yourself, you can't be sure that there's low-hanging fruit left. If someone else gives it to you, you know it's worth looking for, because you know there's something there to find. (T...
Harry left "a portion of his life" (not an exact quote) in Azkaban, and apparently it will remain there forever. That could be the remnant that Death would fail to destroy.
Anyway, Snape drew attention to the final line in the prophecy. It talked about two different spirits that couldn't exist in the same world, or perhaps two ingredients that cannot exist in the same cauldron. That's not Harry and Voldemort; that's Harry and Death.
I mean, Harry has already sworn to put an end to death. It's how he casts his patronus. He's a lot less sure about killing Voldemort, and would prefer not to, if given the choice.
On the other hand, MIRI hit its goal three weeks early, so the amount of support is pretty obvious.
Though I have to admit, I was going to remain silent too, and upon reflection I couldn't think of any good reasons to do so. It may not be necessary, but it couldn't hurt either. So...
I donated $700 to CFAR.
So that's how Omega got the money for box B!
Well-designed traditions and protocols will contain elements that cause most competent people to not want to throw them out.
Having just listened to much of the Ethical Injunctions sequence (as a podcast courtesy of George Thomas), I'm not so sure about this one. There are reasons for serious, competent people to follow ethical rules, even when they need to get things done in the real world.
Ethics aren't quite the same as tradition and protocol, but even so, sometimes all three of those things exist for good reasons.
Agreed.
Though actually, Eliezer used similar phrasing regarding Richard Loosemore and got downvoted for it (not just by me). Admittedly, "persistent troll" is less extreme than "permanent idiot," but even so, the statement could be phrased to be more useful.
I'd suggest, "We've presented similar arguments to [person] already, and [he or she] remained unconvinced. Ponder carefully before deciding to spend much time arguing with [him or her]."
Not only is it less offensive this way, it does a better job of explaining itself. (Note: the "ponder carefully" section is quoting Eliezer; that part of his post was fine.)
You and I are both bound by the terms of a scenario that someone else has set here.
Ok, if you want to pass the buck, I won't stop you. But this other person's scenario still has a faulty premise. I'll take it up with them if you like; just point out where they state that the goal code starts out working correctly.
To summarize my complaint, it's not very useful to discuss an AI with a "sincere" goal of X, because the difficulty comes from giving the AI that goal in the first place.
...What you did was consider some other possibilities, such as th
I didn't mean to ignore your argument; I just didn't get around to it. As I said, there were a lot of things I wanted to respond to. (In fact, this post was going to be longer, but I decided to focus on your primary argument.)
Your story:
This hypothetical AI will say “I have a goal, and my goal is to get a certain class of results, X, in the real world.” [...] And we say “Hey, no problem: looks like your goal code is totally consistent with that verbal description of the desired class of results.” Everything is swell up to this point.
My version:
...The A
Oh, yeah, I found that myself eventually.
Anyway, I went and read the majority of that discussion (well, the parts between Richard and Rob). Here's my summary:
Richard:
I think that what is happening in this discussion [...] is a misunderstanding. [...]
[Rob responds]
Richard:
You completely miss the point that I was trying to make. [...]
[Rob responds]
Richard:
You are talking around the issue I raised. [...] There is a gigantic elephant in the middle of this room, but your back is turned to it. [...]
[Rob responds]
Richard:
...[...] But each time I ex
Link to the nailed-down version of the argument?
I posted elsewhere that this post made me think you're anthropomorphizing; here's my attempt to explain why.
egregiously incoherent behavior in ONE domain (e.g., the Dopamine Drip scenario)
the craziness of its own behavior (vis-a-vis the Dopamine Drip idea)
if an AI cannot even understand that "Make humans happy" implies that humans get some say in the matter
Ok, so let's say the AI can parse natural language, and we tell it, "Make humans happy." What happens? Well, it parses the instruction and decides to implement a Dopamine Drip set...
I did see the insult, but Eliezer (quite rightly) got plenty of downvotes for it. I'm pretty sure that's not the reason you're being rated down.
I myself gave you a downvote because I got a strong impression that you were anthropomorphizing. Note that I did so before reading Eliezer's comment.
I certainly should have explained my reasons after voting, but I was busy and the downvote button seemed convenient. Sorry about that. I'll get started on a detailed response now.
I want to upvote this for the link to further discussion, but I also want to downvote it for the passive-aggressive jab at LW users.
No vote.
I've donated a second $1000.
I donated $1000 and then went and bought Facing the Intelligence Explosion for the bare minimum price. (Just wanted to put that out there.)
I've also left myself a reminder to consider another donation a few days before this runs out. It'll depend on my financial situation, but I should be able to manage it.
A podcast entry is included for that one, but it just directs you to read the original article.
I was going to link you one of the other podcasts (which all provide samples), but then I realized you might be asking why this specific podcast doesn't have one.
Ok, yeah, in that case my response is to take as many deals as Omega offers.
AdeleneDawner and gwern provide a way to make the idea more palatable - assume MWI. That is, assume there will be one "alive" branch and a bunch of "dead" branches. That way, your utility payoff is guaranteed. (Ignoring the grief of those around you in all the "dead" branches.)
Without that interpretation, the idea becomes scarier, but the math still comes down firmly on the side of accepting all the offers. It certainly feels like a bad idea to accept that probability of death, no ...
One-in-a-million is just an estimate. Immortality is a tough proposition, but the singularity might make it happen. The important part is that it isn't completely implausible.
I'm not sure what you mean, otherwise.
Are you suggesting that Omega takes away any chance of achieving immortality even before making the offer? In that case, Omega's a jerk, but I'll shut up and multiply.
Or are you saying that 10^10,000,000,000 years could be used for other high-utility projects, like making simulated universes full of generally happy people? Immortality would allow even more time for that.
Summary of this retracted post:
Omega isn't offering an extended lifespan; it's offering an 80% chance of guaranteed death plus a 20% chance of guaranteed death. Before this offer was made, actual immortality was on the table, with maybe a one-in-a-million chance.
When I was a Christian, and when I began this intense period of study which eventually led to my atheism, my goal, my one and only goal, was to find the best evidence and argument I could find that would lead people to the truth of Jesus Christ. That was a huge mistake. As a skeptic now, my goal is very similar - it just stops short. My goal is to find the best evidence and argument, period. Not the best evidence and argument that leads to a preconceived conclusion. The best evidence and argument, period, and go wherever the evidence leads.
Sorry I'm late... is no-one curious given the age of the universe why the 3 races are so close technologically?
Sorry I'm late in replying to this, but I'd guess the answer is that this is "the past's future." He would not have been able to tell this story with one species being that advanced, so he postulated a universe in which such a species doesn't exist (or at least isn't nearby).
Your in-universe explanations work as well, of course.
Perhaps the fact that it's the "traditional and irrational" ending is the reason Eliezer went with it as the "real" one. (Note that he didn't actually label them as "good" and "bad" endings.)
I assumed the same, based on the definition of "god" as "supernatural" and the definition of "supernatural" as "involving ontologically basic mental entities."
(Oh, and for anyone who hasn't read the relevant post, the survey is quoting this.)
I could be interpreting it entirely wrong, but I'd guess this is the list Cochran had in mind:
This reminds me of Lojban, in which the constructs meaning "good" and "bad" encourage you to specify a metric. It is still possible to say that something is "worse" without providing any detail, but I suspect most Lojban speakers would remember to provide detail if there was a chance of confusion.
Give the furries, vampire-lovers and other assorted xenophiles a few generations to chase their dreams, and you're going to start seeing groups with distinctly non-human psychology.
WHY HAVEN'T I READ THIS STORY?
Because you haven't had time to read all the Orion's Arm stories, probably. (Details)
2) Honestly, I would have been happy with the aliens' deal (even before it was implemented), and I think there is a ~60% chance that Eliezer agrees.
I'm of the opinion that pain is a bad thing, except insofar as it prevents you from damaging yourself. People argue that pain is necessary to provide contrast to happiness, and that pleasure wouldn't be meaningful without pain, but I would say that boredom and slight discomfort provide more than enough contrast.
However, this future society disagrees. The idea that "pain is important" is ingrained in t...
I think one of the main points Eliezer is trying to make is that we would disagree with future humans almost as much as we would disagree with the baby-eaters or superhappies.
I never had this impression; if anything, I thought that all the things Eliezer mentioned in any detail - changes in gender and sexuality, the arcane libertarian framework that replaces the state, and generally all the differences that seem important by the measure of our own history - are still intended to underscore how humanity still operates against a scale recognizable to its p...
If you're trying to present any kind of information at all, you should figure out what is important about it and what presentation will make it clear.
Unfortunately, the quote above isn't at all clear, even in context. I suspect this is because Jacques Bertin isn't as good at expressing himself in English as in French, but even so I'm unable to understand the sample data he presents or how it relates to the point he was trying to make.
I agree with your interpretation of the song, and to back it up, here's the chorus of "The Farthest Star" (another song by the same band).
We possess the power, if this should start to fall apart,
To mend divides, to change the world, to reach the farthest star.
If we should stay silent, if fear should win our hearts,
Our light will have long diminished before it reaches the farthest star.
This time, the message is pretty clear: we should aspire to overcome both our differences and our limitations, to avoid extinction and to expand throughout t...
If the problem here is that the entity being simulated ceases to exist, an alternative solution would be to move the entity into an ongoing simulation that won't be terminated. Clearly, this would require an ever-increasing amount of resources as the number of simulations increased, but perhaps that would be a good thing - the AI's finite ability to support conscious entities would impose an upper bound on the number of simulations it would run. If it was important to be able to run such a simulation, it could, but it wouldn't do so frivolously.
Before you ...
Daniel Oppenheimer's Ig Nobel Prize acceptance speech:
My research shows that conciseness is interpreted as intelligence. So, thank you.
Until then, I'd be more interested in donating to general life extension research than paying for cryonics specifically.
This is very similar to my primary objection to cryonics.
I realize that, all factors considered, the expected utility you'd get from signing up for cryonics is extremely large. Certainly large enough to be worth the price.
However, it seems to me that there are better alternatives. Sure, paying for cryonics increases your chances of nigh-immortality by orders of magnitude. On the other hand, funding longevity research makes it more like...
Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article.
I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.
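To spell out the distinction (my own gloss, using generic symbols A, H, E rather than anything from the article): the probability axioms force certainty for tautologies, while Cromwell's rule concerns empirical hypotheses, where a prior of exactly 0 or 1 can never be moved by conditioning on evidence.

```latex
% Tautologies get probability 1 directly from the axioms:
P(A \lor \lnot A) = 1 \quad \text{for any proposition } A.
% Empirical hypotheses should stay strictly between 0 and 1, because a prior
% of exactly 0 (or, symmetrically, 1) is immovable under Bayes' theorem:
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = 0 \quad \text{whenever } P(H) = 0.
```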
The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one ...
Yann LeCun describes LLMs as "an off-ramp on the road to AGI," and I'm inclined to agree. LLMs themselves aren't likely to "turn AGI." Each generation of LLMs demonstrates the same fundamental flaws, even as they get better at hiding them.
But I also completely buy the "FOOM even without superintelligence" angle, as well as the argument that they'll speed up AI research by an unpredictable amount.