Is gain-of-function research "very harmful"? I feel like it's not appropriate to nickel-and-dime this.
And also, yes, I do think it's directly harmful, in addition to being harmful in expectation down the line. It's a substantial derogation of a norm that should exist. To explain this concept further:
I don't think we disagree about the harmfulness of this kind of research. Our disagreement is about the probable consequences of going around saying "I think this research is harmful and should stop."
It's the classic disagreement about how "righteous" vs. "moderate" a movement should be: "speaking truth to power" vs. "winning hearts and minds." I don't have anything interesting to say here; I was just putting in a vote for a small move in the "moderate" direction. I defer to the judgment of people who spend more time talking to policymakers and AGI capabilities researchers, so if you're such a person, I defer to yours.
Because of the Curry-Howard correspondence, as well as for other reasons, it does not seem that the distance between solving math problems and writing AIs is large. I mean, actually, according to the correspondence, the distance is zero, but perhaps we may grant that programming an AI is a different kind of math problem from the Olympiad fare. Does this make you feel safe?
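To make the correspondence concrete, here is a minimal sketch in Haskell (my own illustration, with made-up function names, not anything from the comment I'm replying to): under Curry-Howard, a type is a proposition, and a program of that type is a proof of it, so writing ordinary code is already doing a certain kind of math.

```haskell
-- The type (a, b) -> a corresponds to the proposition "A and B implies A";
-- any terminating program of this type is a proof of that proposition.
projLeft :: (a, b) -> a
projLeft (x, _) = x

-- Function composition proves the hypothetical syllogism:
-- from (A implies B) and (B implies C), conclude (A implies C).
compose :: (a -> b) -> (b -> c) -> (a -> c)
compose f g = g . f
```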
Also, it seems that the core difficulty in alignment is more in producing definitions and statements of theorems, than in proving theorems. What should the math even look like or be about? A proof assistant is not helpful here.
I think this kind of research is very harmful and should stop.
I think it's important to repeat this even if it's common knowledge in many of our circles, because it's not in broader circles, and we should not give up on reminding people not to conduct research that leads to a net increase in the risk of destroying the world, even if it's really cool, gets you promoted, or makes you a lot of money.
Again, OpenAI people, if you're reading this, please stop.
I think it's very strange that this is the work that gets this sort of pushback—of all the capabilities research out there, I think this is some of the best from an alignment perspective. See e.g. STEM AI and Thoughts on Human Models for context on why this sort of work is good.
If our safety research is useless, this path to AGI gives me the most hope, because it may produce math that lets us solve alignment before it becomes general.
Does anyone know approx what time the event will end?
Stylebot for Chrome. Perhaps there's something better now (the UI can be a bit wonky), but I've used it for almost a decade, so...
I found, when I tried to do this over a year ago, that no matter how much effort I put into "pruning" the home screen, YouTube would always devote ~10-20% of it to stuff I didn't want to see. Either it was epsilon-exploration, or stuff that tested well with the general population, or a bunch of "mandatory modules" like popular music or "news," but whatever it was, I couldn't get rid of all of it, and some of it managed to attract my clicks despite my best efforts. These extra items filled me with a sense of violation whenever I scrolled through.
So, I wound...
See
Trump
Let's not get ahead of ourselves, friend.
You've posted the preface of the New Organon (i.e. "volume 2" of The Great Renewal), but did you know that the whole work also has a preface? To me, this preface contains some of the most compelling material. Here are some selections from the Cambridge edition (ed. Jardine and Silverthorne; try libgen):
Men seem to me to have no good sense of either their resources or their power; but to exaggerate the former and underrate the latter. Hence, either they put an insane value on the arts which they already have and look no further or, undervalui...
What's the boundary between early and late, and why is late bad?
Have you re-released "Transhumanists Don't Need Special Dispositions"? If not, can I give you a nudge to do so? It's one of my favorites.
I've been lurking here for a while, but I'd like to get more actively involved.
By the way, are there any other Yale students here? If so, I'd be interested in founding a rationalist group / LW meetup on campus.
The standard advice for starting a physical group is to just pick a timeframe and a nice location, then show up with a good book and stay for the duration. Either other people show up and you've got your meetup, or else you spend a couple hours with a good book.
PM me if you want to talk about founding a group. I ran the Boston community for a while, and it was one of the most rewarding things I've ever done.
At Yale, the situation is similar. I took a course on Gödel's incompleteness theorem and earned a humanities credit for it. The course was taught by the philosophy department and also included a segment on the equivalence of various notions of computability. Coolest humanities class ever!
I shudder to think of what politics were involved to classify it as such, though.
I often get this confused, but isn't it supposed to be the Pioneer probe?
I'm not sure that's possible. If Harry is defeated, it must be in such a way that "only a remnant of him remains," by prophecy. Crushing Harry now would not leave a remnant (even if "remnant" means "legacy," I would argue); therefore, it is not worth trying.
Also, Harry's dark side is "very good at lying." Remember Azkaban? Pretty much every proposition he uttered aloud there was a lie, straight up, and told to pursue a greater goal. If Harry can convincingly pretend, for Bellatrix, to be someone other than who he believes himself to be, convincingly feign innocence and fear when discovered by the auror, and convincingly lie to Minerva about his location, then I think he'd have no problem with this particular deception.
On the other hand, choosing the ring in particular as his hiding target strikes me...
But what about Dumbledore? If there were anyone in such a Soul Sect, I'm pretty sure Dumbledore would be one of them. Wouldn't you agree?
But as "Pretending to be Wise" suggests, and as Dumbledore's room of broken wands makes clear, Dumbledore does not, in fact, behave as if souls are real. Now "perhaps" this is all an elaborate ruse on the part of Dumbledore, and he is just pretending to behave-as-if souls are not real. Regardless of how twisty and deceptive Dumbledore is, this particular deception seems wildly out of character for him....
Um... I feel like I'm in the out-group now. What does this (and the stuff below) mean?
Was this the "PUA Controversy"?
The fanfiction.net mirror has chapter 81 posted. Meanwhile, hpmor.com has today's Author's Note up, but not #81 itself. This is a shame, since I think that hpmor.com provides a substantially better reading experience...
Edit: And now it has #81 up too. Sorry about that.
Would you mind giving a few more details? Curiosity striking...
I've been lurking for a while, and this is my first post, but:
Would you mind giving far fewer details? Consciously imposed conjunction-aversion striking...
FTFY. Instead of asking for a single detailed story, we should ask for many simple alternative stories, no?
Obviously, this doesn't countermand your complaint about inferential distance, which I totally agree with.
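To spell out the arithmetic behind that conjunction-aversion (a gloss of my own, nothing deeper): each detail added to a story multiplies in another probability factor of at most one, so a more detailed story can never be more probable than a sparser version of itself.

$$P(A \wedge B \wedge C) \;=\; P(A)\,P(B \mid A)\,P(C \mid A \wedge B) \;\le\; P(A)$$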
Mmhmm... Borges time!
...In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without som
Seems like the real test would be to do it without the television shows?
I think it would be more accurate to say that the test was meant to check whether the TV shows were effective, rather than whether the children had a maximal inherent tendency towards virtuousness.