All of Lambda's Comments + Replies

Lambda10

Seems like the real test would be to do it without the television shows?

2Pattern
I think they're a way to make an experiment easier to do. (It's not actually clear what the point of the experiment is - to figure out how to shape behavior, or what.)

I think it would be more accurate to say that the test was meant to check whether the TV shows were effective than whether the children had a maximal inherent tendency towards virtuousness.

1Measure
Maybe there are multiple test conditions, or maybe this is just the one that the market settled on.
Lambda140

Is gain-of-function research "very harmful"? I feel like it's not appropriate to nickel-and-dime this.

And also, yes, I do think it's harmful directly, in addition to eventually in expectation. It's a substantial derogation of a norm that should exist. To explain this concept further:

  • In addition to risking pandemics, participating in gain-of-function research also sullies and debases the research community, and makes it less the shape it needs to be culturally to do epidemiology. Refusing to take massive risks with minor upsides, even if they're cool, is al
... (read more)

I don't think we disagree about the harmfulness of this kind of research. Our disagreement is about the probable consequences of going around saying "I think this research is harmful and should stop."

It's the classic disagreement about how "righteous" vs. "moderate" a movement should be. "Speaking truth to power" vs. "winning hearts and minds." I don't have anything interesting to say here, I was just putting in a vote for a small move towards the "moderate" direction. I defer to the judgment of people who spend more time talking to policymakers and AGI capabilities researchers, and if you are such a person, then I defer to your judgment.

Lambda70

Because of the Curry-Howard correspondence, as well as for other reasons, it does not seem that the distance between solving math problems and writing AIs is large. I mean, actually, according to the correspondence, the distance is zero, but perhaps we may grant that programming an AI is a different kind of math problem from the Olympiad fare. Does this make you feel safe?
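As a minimal illustration of the correspondence the comment leans on (standard textbook examples, not anything from the original), Curry-Howard reads a type as a proposition and a program of that type as a proof of it. A sketch in Lean:

```lean
-- Under Curry-Howard, a proposition is a type and a proof is a program.

-- "A implies A": the identity function is the proof.
theorem impl_self (A : Prop) : A → A :=
  fun a => a

-- "A and B implies A": the first projection is the proof.
theorem and_left (A B : Prop) : A ∧ B → A :=
  fun h => h.left

-- The same projection, read as an ordinary program on data
-- rather than as a proof: "solving a math problem" and
-- "writing a program" are literally the same activity here.
def projLeft {α β : Type} : α × β → α :=
  fun p => p.1
```

In this sense the "distance" between proving theorems and writing programs really is zero, which is the point the comment is making about Olympiad-style math engines.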

Also, it seems that the core difficulty in alignment is more in producing definitions and statements of theorems, than in proving theorems. What should the math even look like or be about? A proof assistant is not helpful here.

4Gurkenglas
Writing AIs is not running them. Proving what they would do is, but we need not have the math engine design an AI and prove it safe. We need it to babble about agent foundations, in the same way that it would presumably be inspiring to hear Ramanujan talk in his sleep. The math engine I'm looking for would be able to intuit not only a lemma that helps prove a theorem, but a conjecture, which is just a lemma when you don't know the theorem. Or a definition, which is to a conjecture as sets are to truth values. A human who has proven many theorems sometimes becomes able to write them in turn; why should language models be any different? I can sense some of the math we need: an AI is more interpretable if the task of interpreting it can be decomposed into interpreting its parts; we want the assembly of descriptions to be associative; an AI design tolerates more mistakes if its behavior is more continuous in its parts than a maximizer's is in its utility function. Category theory formalizes such intuitions, and even a tool that rewrote all our math in its terms would help a lot, let alone one that invents a math language even better at CT's job of making the short sentences the useful ones.
Lambda50

I think this kind of research is very harmful and should stop.

I think it's important to repeat this even if it's common knowledge in many of our circles, because it's not in broader circles, and we should not give up on reminding people not to conduct research that leads to a net increased risk of destroying the world, even if it's really cool, gets you promoted, or makes you a lot of money.

Again, OpenAI people, if you're reading this, please stop.

evhub150

I think it's very strange that this is the work that gets this sort of pushback—of all the capabilities research out there, I think this is some of the best from an alignment perspective. See e.g. STEM AI and Thoughts on Human Models for context on why this sort of work is good.

9Daniel Kokotajlo
I think it might be better if you said "may have very harmful long-run consequences" or "is very harmful in expectation" rather than "is very harmful." I worry that people who don't already agree with you will find it easier to roll their eyes at "is very harmful."

If our safety research is useless, this path to AGI gives me the most hope, because it may produce math that lets us solve alignment before it becomes general.

Lambda10

Does anyone know approx what time the event will end?

1Drake Thomas
I think people typically hang out for as long as they want, and the size of the group gradually dwindles. There's no official termination point - I'd be a little surprised if more than half the attendees were still around by 7:30, but I'd also be surprised if nobody was still interacting by 10 PM or later.
Lambda30

Stylebot for Chrome. Perhaps there's something better now — the UI can be a bit wonky — but I've used it for almost a decade, so I've stuck with it.

Lambda90

I found, when I tried to do this over a year ago, that no matter how much effort I put into "pruning" the home screen, YouTube would always devote ~10-20% of it to stuff I didn't want to see. Either it was epsilon-exploration, or stuff that tested well with the general population, or a bunch of "mandatory modules" like popular music or "news," but whatever it was, I couldn't get rid of all of it, and some of it managed to attract my clicks despite my best efforts. These extra items filled me with a sense of violation whenever I scrolled through.

So, I wound... (read more)

2lsusr
Do you use a particular CSS editor plugin?
Lambda80

See

Lambda100

Trump

Let's not get ahead of ourselves, friend.

LambdaΩ4180
  • how suitable is the research engineering job for people with no background in ml, but who are otherwise strong engineers and mathematicians?
  • will these jobs be long-term remote? if not, on what timeframe will they be remote?
6paulfchristiano
We expect to require people to work from the office again sometime next year. An ML background is very helpful. Strong engineers who are interested in learning about ML are also welcome to apply, though no promises about how well we'll handle those applications in the current round.
Lambda340

You've posted the preface of the New Organon (i.e. "volume 2" of The Great Renewal), but did you know that the whole work also has a preface? To me, this preface contains some of the most compelling material. Here are some selections from the Cambridge edition (ed. Jardine and Silverthorne; try libgen):

Men seem to me to have no good sense of either their resources or their power; but to exaggerate the former and underrate the latter. Hence, either they put an insane value on the arts which they already have and look no further or, undervalui
... (read more)
4Ruby
This is really good! No, I didn't think to look for the preface for the entire work. Thanks for raising this. It's probably okay for us to quote some passages, though I'd be hesitant to post the whole thing from Libgen for copyright reasons. (We have the license to post the version we're posting, but I'd be surprised if Cambridge University Press was as permissive.)
Lambda50

What's the boundary between early and late, and why is late bad?

Lambda40

Have you re-released "Transhumanists Don't Need Special Dispositions"? If not, can I give you a nudge to do so? It's one of my favorites.

5Vincent_P
It was released on Dec 7.
Lambda40

I've been lurking here for a while, but I'd like to get more actively involved.

By the way, are there any other Yale students here? If so, I'd be interested in founding a rationalist group / LW meetup on campus.

The standard advice for starting a physical group is to just pick a timeframe and a nice location, then show up with a good book and stay for the duration. Either other people show up and you've got your meetup, or else you spend a couple hours with a good book.

PM me if you want to talk about founding a group. I ran the Boston community for a while, and it was one of the most rewarding things I've ever done.

0protest_boy
Alum here... glad to hear! You should do that :)
Lambda20

At Yale, the situation is similar. I took a course on Gödel's incompleteness theorem and earned a humanities credit from it. The course was taught by the philosophy department and also included a segment on the equivalence of various notions of computability. Coolest humanities class ever!

I shudder to think of what politics were involved to classify it as such, though.

2VAuroch
Probably it was that a Phil professor wanted to teach the class, and no one cared to argue. It's not things like which classes are taught that are the big political fights, to my knowledge; the fights are more often about who gets the right to teach a topic of their choosing, and who doesn't.
Lambda80

I often get this confused, but isn't it supposed to be the Pioneer probe?

1arborealhominid
You're right; it is.
Lambda20

I'm not sure that's possible. If Harry is defeated, it must be in such a way that "only a remnant of him remains," by prophecy. Crushing Harry now would not leave a remnant (even if "remnant" means "legacy," I would argue); therefore, it is not worth trying.

Lambda140

Also, Harry's dark side is "very good at lying." Remember Azkaban? Pretty much every proposition he uttered aloud there was a lie, straight up, and told to pursue a greater goal. If Harry can convincingly pretend, for Bellatrix, to be someone other than who he believes himself to be, convincingly feign innocence and fear when discovered by the auror, and convincingly lie to Minerva about his location, then I think he'd have no problem with this particular deception.

On the other hand, choosing the ring in particular as his hiding target strikes me... (read more)

Lambda40

But what about Dumbledore? If there were anyone in such a Soul Sect, I'm pretty sure Dumbledore would be one of them. Wouldn't you agree?

But as "Pretending to be Wise" suggests, and as Dumbledore's room of broken wands makes clear, Dumbledore does not, in fact, behave as if souls are real. Now "perhaps" this is all an elaborate ruse on the part of Dumbledore, and he is just pretending to behave-as-if souls are not real. Regardless of how twisty and deceptive Dumbledore is, this particular deception seems wildly out of character for him.... (read more)

2Velorien
I agree that, if knowing about the afterlife is made likelier by being an experienced and powerful wizard, Dumbledore should be expected to know about the afterlife. However, we have now gone from "it's everybody's observations of the world" to "it's Harry's observations of the general public" to "it's Harry's observations of Dumbledore". In other words, Harry's (and our) evidence base for the lack of an afterlife keeps getting narrower the more we think about it. In addition, it's worth noting that Dumbledore, for all his virtues, is also great at self-deception and confused thinking (plotting and strategy excepted). There are all manner of circumstances under which Dumbledore would be unaware of the existence of the afterlife - for example, if it led to a conclusion he was unable to accept, all his power and experience might not stop him flinching away.
Lambda10

Um.. I feel like I'm in the out-group now. What does this (and the stuff below) mean?

9Atelos
You've seen/heard about the What Would Jesus Do thing, yes? This is that, but with references to the Harry Potter as a Rationalist fanfic Yudkowsky is doing:

What Would Harry James Potter-Evans-Verres Do
What Would Professor Quirrell Do
Professor Quirrell Would Avada Kedavra (the Killing Curse, very efficient for removal of obstacles :P)
Lambda00

Was this the "PUA Controversy"?

[This comment is no longer endorsed by its author]
Lambda10

The fanfiction.net mirror has chapter 81 posted. Meanwhile, hpmor.com has today's Author's Note up, but not #81 itself. This is a shame, since I think that hpmor.com provides a substantially better reading experience...

Edit: And now it has #81 up too. Sorry about that.

Lambda20

Would you mind giving a few more details? Curiosity striking...

I've been lurking for a while, and this is my first post, but:

Would you mind giving far fewer details? Consciously imposed conjunction-aversion striking...

FTFY. Instead of asking for a single detailed story, we should ask for many simple alternative stories, no?

Obviously, this doesn't countermand your complaint about inferential distance, which I totally agree with.

Lambda110

Mmhmm... Borges time!

In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without som

... (read more)