All of somervta's Comments + Replies

Formatting note: something seems to have deleted a couple of your time units:

I spent ~1500 working on genuinely original scientific research.

and

which subsumes the 100 old Poincare conjecture,

2JonahS
Thanks, fixed.

Why do we need the full tower? Why couldn't it be the case that just one (or some other finite number) of the Turing Oracle levels are physically possible?

0[anonymous]
Effectively, either there is some natural number n such that physics allows for n levels of physically-implementable Turing oracles, or the number is omega. Mostly, we think the number should be either zero or omega, because once you have a first-level Turing oracle, you construct the next level just by phrasing the Halting Problem for Turing machines with one oracle, and then positing an oracle for that, and so on. Likewise, having omega (the cardinality of the natural numbers) bits of algorithmic information is equivalent to having a first-level Turing oracle (knowing the value of Chaitin's Omega completely). From there, you start needing larger and larger infinities of bits to handle higher levels of the Turing hierarchy. So the question is: how large a set of bits can physics allow us to compute with? Possible answers are:

* Finite only. This is what we currently believe.
* Countably infinite (aleph-zero) or continuum infinite (aleph-one). Playing time-dilation games with General Relativity might, in certain funky situations I don't quite understand but which form the basis of some science fiction, almost allow you to get up to here. But it would require negative energy or infinite mass or things of that nature.
* Arbitrarily large infinities. Almost definitely not.
* Omega: possible, if we're completely wrong about the relationship between computation and physics as we know it.
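The ladder of oracle levels described here is the standard Turing jump hierarchy; a sketch in the usual notation (my addition, not part of the original comment):

```latex
% \emptyset' is the ordinary Halting Problem; each jump is the Halting
% Problem for machines equipped with an oracle for the level below.
\emptyset^{(0)} = \emptyset, \qquad
\emptyset^{(n+1)} = \bigl(\emptyset^{(n)}\bigr)'
  = \{\, e : \text{machine } e \text{ with oracle } \emptyset^{(n)} \text{ halts} \,\}
```

An oracle for $\emptyset^{(n+1)}$ decides the Halting Problem relativized to $\emptyset^{(n)}$, which is why granting one level immediately suggests the whole tower.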

We don't know that it's physically impossible, although it does look that way; and even if we did, that wouldn't mean it's contradictory, not to the extent that using it you'll "mostly derive paradox theorems and contradictions".

1[anonymous]
If Turing oracles are not physically impossible, then we need an explanation for how physics implements an infinite tower of Turing oracle levels. Short of that, I'm going to believe Turing oracles are impossible. If you start with something undecidable and build on it, you usually find that your results are even more undecidable (they require a higher level of Turing oracle). There's also the AIT angle, which says that a true Turing oracle possesses infinite Kolmogorov complexity, and since Shannon entropy is the expected value of Kolmogorov complexity, and Shannon entropy is closely related to physical entropy... we have strong reason to think that a Turing oracle violates basic thermodynamics.
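The entropy link invoked here is a standard result in algorithmic information theory (e.g., in Li and Vitányi's treatment): for a computable probability distribution $P$, expected Kolmogorov complexity matches Shannon entropy up to an additive term depending only on $P$. A sketch, not part of the original comment:

```latex
H(P) \;\le\; \sum_x P(x)\,K(x) \;\le\; H(P) + K(P) + O(1),
\qquad H(P) = -\sum_x P(x)\log_2 P(x)
```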

Why do you think that a hypercomputer is inherently contradictory?

0[anonymous]
A hypercomputer is a computer that can deterministically decide the Halting Problem for a Turing machine in finite time. We already know that this is physically impossible. And unfortunately, most of the FAI work I've seen under the assumption of having a hypercomputer tends to end up along the lines of, "We started by assuming we had a Turing Oracle, and proved that given a second-level Turing Oracle, we can implement UDT with blah blah blah."

The LW Tumblr contingent has a Skype group.

The relevant test is 'Do I want to see more things like this on LW', and the answer is no, because I value clarity more than seeing things I would agree with did I understand them.

Interestingly, both concepts seem worthwhile to me... and I mostly advocate a combination of hedonistic and preference utilitarianism.

Eliezer said this would just have been Harry antimatter-suiciding and Hermione waking up in a flaming crater.

2kilobug
I don't really see the point of antimatter suiciding. It wouldn't kill Voldemort, due to the Horcrux network, so it would just kill the Death Eaters while leaving Voldemort in power, and Voldemort would be so pissed off that he would do the worst he could to Harry's family and friends... how is that any better than letting Voldemort kill Harry and managing to save a couple of people by telling him a few secrets?

Is the novel content written by you, Eliezer, or others?

The novel content is nearly all my doing, with input and ideas from MIRI staff and volunteers.

Cool, that clears it up, thanks!

(I got that you were being sarcastic, but I wasn't clear on which possible sucky thing you were disapproving of.)

I got the four, but not the rectangle - I just noticed that two elements only appeared three times.

Huh. I got the same answer, but a different way.

Rnpu vgrz vf znqr hc bs gur cerfrapr be nofrapr bs bar bs fvk onfvp ryrzragf. Rnpu ryrzrag nccrnef sbhe gvzrf, rkprcg gubfr gjb.

0Magnap
I got the same answer in a third way. Gur ynfg vgrz va n ebj vf znqr sebz rirelguvat va gur svefg gjb cynprf, rkprcg gung juvpu gurl unir va pbzzba. EDIT: There's a simpler name for what I did: KBE, ubevmbagnyyl.
4IlyaShpitser
Ok, let me see if I can help. Aside from the fact that this was an incredibly rude, cultish thing to say, utterly lacking in collegiality, how do we even judge who a 'mere mortal' is here? Do we compare CVs? Citation rank? A tingly sense of impressiveness you get when in the same room? ---------------------------------------- Maybe people should find a better hobby than ordering other people from best to worst. Yes, I know this hobby stirs something deep in our social monkey hearts.
1Astazha
Betting isn't my thing. I know it's popular among many here, but I'm just on the forums to discuss HPMOR with other fans. There are other possibilities, like Harry is seeing what he desires, but I have to propose some pretty awkward complications to make that work. I think Izeinwinter is very likely correct.

There's a Welcome Thread that you might want to check out!

It is if you can get evidence about your UF.

I think you overestimate the likelihood that EY even read your comment. I doubt he reads all the comments on HPMOR discussion anymore.

1solipsist
I'd still take a 250 to 1 bet.

Why was this trolling? This was in fact true, although Wei Dai's UDT ended up giving rise to a better framework for future and more general DT work.

The link to Non-Omniscience, Probabilistic Inference, and Metamathematics isn't right. Also, 'published earlier this year' is now wrong, it should be 'midway through last year' :D

1So8res
Fixed, thanks.
gjm120

It should be "in mid-2014" so that it doesn't go out of date another year from now.

The author also needs to work on his own rationality. The car example is just bad, start to finish. You need a lot more information to even estimate net deaths from the car in question.

Which has nothing to do with the point being made.

6Jiro
Also, in the car example, the second version names the American car and says that it is 8 times as likely to injure another car's occupants as a typical family car. The readers would probably take this to mean "a typical American family car". Combined with the fact that the other car is named and that the reader knows that this named car is considered safe enough for American roads, this provides the information that a spread of 8 times is not dangerous. This information is absent when a German car is compared to an American car, especially when the German car is not named.
0buybuydandavis
He was trying to make a point about bias undermining rationality, and so was completely sloppy in asking a question that didn't have an answer determined by rationality.

IIRC, we were doing it as an initial pass-through, but that plan might have changed.

Perhaps not, but there is good evidence for drugs+therapy doing better than either alone.

0ChristianKl
I'm not aware of it. Drugs can sometimes help, but they can also destabilise brains. They have Goodhart's law problems.

I'm trying to learn Linear Algebra and some automata/computability stuff for courses, and I have basic set theory and logic on the backburner.

1SanguineEmpiricist
What's going on with your linear algebra? What texts are you looking at right now? I am interested in computability, and am working through some set theory right now.
1[anonymous]
"Actual device" in the "physically realizable" sense. It is an actual design for a device, with various pieces already written or prototyped. The deception detector is easier to implement than the AGI, obviously, and further constrains the AGI design space (the AI has to work a certain way for the detector to work). Much like how ITER is an "actual" fusion power plant even though it hasn't been built yet.

Thanks! I didn't find it with my minute of googling; good to know it's legit.

somervta180

I don't suppose you have a source for the quote? (at this point, my default is to disbelieve any attribution of a quote unknown to me to Einstein)

7Yaakov T
According to this website (http://ravallirepublic.com/news/opinion/viewpoint/article_876e97ba-1aff-11e2-9a10-0019bb2963f4.html) it is part of 'Aphorisms for Leo Baeck' (which I think is printed in 'Ideas and Opinions', but I don't have access to the book right now to check).

'Noble phantasm' is probably a reference to Fate/Stay Night, wherein a noble phantasm is a weapon or object of unusual renown that grants a certain class of beings their signature powers.

I would put such things in the bragging thread - why the separation?

1[anonymous]
The bragging thread encourages a focus on personal accomplishments and personal improvement. It becomes easy to assume that the only things of worth are self-focused actions. Not a bad thing when you want to focus on self-improvement, but it does not demonstrate any connection with saving the world at large. A "Saving the World" thread encourages considering large scale actions and effects. At least this is my opinion. As Christian said, we would just have to see what the reception is. Might be unnecessary, might be helpful.
somervta-10

I don't think it's a good idea to write things expressing opinions like this as if you're presenting the majority view, even when you think it is. I for one completely disagree with the first paragraph, and would only like transparency wrt deletions if it was unobtrusive.

0[anonymous]
If it's not actually the majority view then people will downvote it or at least upvote it less. I don't think you understand how karma systems work.
somervta250

So, after reading the comments, I figure I should speak up because selection effects

I appreciated the deleting of the original post. I thought it was silly and pointless, and not what should be on LW. I didn't realize it was being upvoted (or I would have downvoted it), and I still don't know why it was.

I endorse the unobtrusive (i.e., silent and unannounced) deleting of things like this (particularly given that the author was explicitly not taking the posting seriously - written while drunk, etc), and I suspect others do as well.

There's a thing that happ... (read more)

3jsteinhardt
Maybe a poll would be better?
-7Will_Newsome
3[anonymous]
Or upvote the parent, as I did.
somervta140

I was scrolling through, saw this comment, reread ialdabaoth's comment, and upvoted it, which I wouldn't have done without yours. Upvoted.

You mean on average? The studies I'm thinking of had small or no differences, but I'm pretty sure there are other results out there.

I don't have the citation to hand, but IIRC there's research suggesting higher variance among parents is the most significant effect.

0Carinthium
Good to know, but does that research clarify whether happiness is overall higher or lower in the long run?
7chaosmage
This fits with something Dan Savage (not a scientist, but someone worth listening to on matters of family relationships) said:

Oops, I actually didn't mean to post that! Usually when I'm making an obvious criticism, after I write it I go back and double-check that I haven't missed or misinterpreted something; I noticed that and meant to delete the unposted comment. I guess I must have hit enter at some point.

Because each additional dollar is less valuable, however, we would expect this transfer to make the group as a whole worse off.

grumble grumble, only if the people the money went from were drawn from the same or a similar distribution as the person it goes to.

[This comment is no longer endorsed by its author]
2jefftk
I wrote "take $1 from 10k randomly selected people and give that $10k to one randomly selected person". Reading it back, this implies you use the same distribution for both selections, but it sounds like that's not how you read it? How would you phrase this idea differently?
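jefftk's same-distribution reading can be checked with a quick simulation (my addition: the lognormal wealth distribution is a hypothetical choice for illustration, and log utility stands in for any concave utility function). When donors and recipient are drawn from the same distribution, the recipient's expected utility gain from the pooled $10k falls short of the donors' summed expected utility loss:

```python
import math
import random

random.seed(0)

T = 10_000  # pooled transfer: $1 from each of 10,000 donors
# Hypothetical wealth distribution: lognormal, median around $22k.
samples = [random.lognormvariate(10.0, 1.0) for _ in range(100_000)]

# Expected utility gained by a recipient, drawn from the same
# distribution, who receives the pooled $T:
gain = sum(math.log(w + T) - math.log(w) for w in samples) / len(samples)

# Expected total utility lost by T donors, each giving $1, drawn
# from the same distribution:
loss = T * sum(math.log(w) - math.log(w - 1) for w in samples) / len(samples)

print(f"recipient's expected gain: {gain:.3f}")
print(f"donors' expected total loss: {loss:.3f}")
print(f"transfer lowers expected total utility: {gain < loss}")
```

Since log(1 + T/w) < T/w < T·log(w/(w-1)) for every w, the gain is below the loss sample by sample, which is exactly the "same or similar distribution" condition in the grumble above.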

I don't see a public spectacle - the names were redacted, etc. And Kaj's post seems to be asking "what should our policy on this be" to me.

0Kawoomba
I was referring to an upvoted (at the time) comment calling for public shaming. I thought this community especially would be more sensitive to the whole public shaming thing. Also, OP should a) have messaged other editors first and b) not presumed that a valid reason for redacting private information is the "presumption of innocence". The reason for not disclosing private information is that it's private. D'uh.

I suggest you ask on the Effective Altruist Facebook page - there's usually fairly good coverage there, and if such a thing exists, someone there will know of it.

I don't see why Yudkowsky makes superintelligence a requirement for this.

Because often when we talk about 'worst-case' inputs, it would require something of this order to deliberately give you the worst-case, in theoretical CS, at least. I don't think Eliezer would object at all to this kind of reasoning where there actually was a plausible possibility of an adversary involved. In fact, one focus of things like cryptography (or systems security?) (where this is assumed) is to structure things so the adversary has to solve as hard a problem as you can m... (read more)

0Eliezer Yudkowsky
Yep! Original article said that this was a perfectly good assumption and a perfectly good reason for randomization in cryptography, paper-scissors-rock, or any other scenario where there is an actual adversary, because it is perfectly reasonable to use randomness to prevent an opponent from being intelligent.
0V_V
And what do you suggest to assume, instead? Anyway, most proofs of asymptotic security in cryptography are conditional on conjectures such as "P != NP" or "f is a one-way function".
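The worst-case-adversary point has a classic illustration (a sketch I'm adding, not from the thread): quicksort with a deterministic first-element pivot has a quadratic worst case that an adversary can hit simply by handing it sorted input, while a randomized pivot leaves no single input that is reliably bad.

```python
import random

random.seed(1)

def quicksort(xs, randomized, count):
    # count[0] accumulates comparisons, making the adversary's effect visible.
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs) if randomized else xs[0]
    count[0] += len(xs)
    left = [x for x in xs if x < pivot]
    mid = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left, randomized, count) + mid + quicksort(right, randomized, count)

# A sorted list is the adversarial input for the first-element pivot rule.
adversarial = list(range(500))
det, rnd = [0], [0]
assert quicksort(adversarial, False, det) == adversarial
assert quicksort(adversarial, True, rnd) == adversarial
print(det[0], rnd[0])  # roughly n^2/2 vs. roughly n log n comparisons
```

Forcing the quadratic case against a randomized pivot would require the adversary to predict the random choices, which is the sense in which "worst-case input" quietly assumes something adversary-like.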

Not sure how much relevant overlap there is. CM seems focused primarily on education, and on spreading relatively available but difficult to compile info to people who can derive value from it.

CFAR is largely focused on much less available content, and on the development of new content, both focused on more general needs.

Not so sure about 80K, although it is much less education focused. It also seems to have an additional purpose - providing support for people to do good, as well as to be more competent.

Neither appears to have a funding gap.

I've seen discussion of Diarist (and maybe MRI) on LW before. None of the others seem to me to be plausible enough to even bother considering, and even those seem useful primarily as a supplement for the people reconstructing you from cryonics.

that's worse than letting billions of children be tortured to death every year. that's worse than dying from a supernova.

No? The story explicitly rejects this. It is only because the Superhappies can deal with the Babyeaters on their own, and because solutions to the human problem do not prevent this, that the story is resolved in other ways.

that's worse than dying from mass suicide.

I don't see the story as advocating this - Akon does not suicide, for example. It is not that the value difference between human life before and after the change is so large (l... (read more)

Do you have planned articles for discussing? How late do you plan on going?

0free_rip
Hmm... perhaps How to be Happy - I can bring along my positive psych textbook to supplement it, and it's something everyone should be able to contribute to whether they've read the article or not. No need to stick too closely to it, though; I think for the first meetup fairly free discussion could be more fun, to see what everyone's interests are. I'd guess it will go about 3 hours, but we'll end when things naturally close, and if anyone needs to go earlier that's fine.

This has been circulating among the tumblrers for a little bit, and I wrote a quick infodump on where it comes from.

TL;DR: The article comes from (is co-authored by) a brand-new x-risk organization founded by Max Tegmark and four others, with all of the authors on its scientific advisory board.

The guys in the Australian community are truly awesome! I'd definitely recommend it if it's a viable option for you (and I'm happy to talk about the people I met at the meetup next Sunday if anyone wants)

I think the main effect wrt the former is as an introduction to rationality and the Sequences.
