silentbob

So, I was wondering whether this is usable in Anki, and indeed, there appears to be a simple setting for it without even having to install a plugin, as described here in 4 easy steps. I'll see if it makes a notable difference.

Not so relatedly, this made me realize a connection I hadn't really thought about before: I wish music apps like Spotify would use something vaguely like spaced repetition for Shuffle mode. In the sense of finding some good algorithm to predict, based on past listening behavior, which song in a playlist the user is most likely to currently enjoy, and weighting how often each song comes up in shuffle mode accordingly. One could, very roughly, treat skipping a song as getting a flashcard right - it will then have some exponential backoff before it returns. But not skipping the song would be roughly like getting a card wrong, and it will show up again very soon. Of course, the algorithm shouldn't quite be the same, e.g. listening to a song once without skipping shouldn't have such a drastic effect (as typically the user may not be paying much attention to the music, so not skipping is a rather weak signal). But, yeah... I kind of doubt these platforms are working on anything like this, as they most likely don't care much about such intangible value propositions that are hard to measure in A/B tests.
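To make that a bit more concrete, here's a rough sketch of the kind of weighting I have in mind (Python, with made-up names and factors - none of this is anything an actual streaming service does, as far as I know):

```python
import random
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    interval: float = 1.0     # rough "days until this song is due again"
    last_played: float = 0.0  # day index of the last time it came up

def update_after_play(track: Track, skipped: bool, now: float) -> None:
    """Update a track's interval, loosely mimicking spaced repetition.

    Skipping is treated like getting a flashcard right: the song backs off
    exponentially. Listening through is only a weak 'wrong' signal, so the
    interval shrinks mildly instead of resetting all the way.
    """
    if skipped:
        track.interval *= 2.0                            # exponential backoff
    else:
        track.interval = max(1.0, track.interval * 0.8)  # gentle nudge back
    track.last_played = now

def shuffle_weights(tracks: list[Track], now: float) -> list[float]:
    """Weight each track by how 'overdue' it is relative to its interval."""
    return [max(0.05, (now - t.last_played) / t.interval) for t in tracks]

# Toy usage: pick the next song with overdue-ness as the sampling weight.
playlist = [Track("A"), Track("B"), Track("C")]
next_song = random.choices(playlist, weights=shuffle_weights(playlist, now=3.0))[0]
```

The 2.0 and 0.8 factors are arbitrary placeholders; the point is just that skips and listen-throughs nudge the interval in opposite directions, with the weak signal (listening through) having the smaller effect.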

By the way, I had a quick look at what PersonalityMap reports about how intelligence and ethics are correlated among humans. The website provides an interface to query a pretty powerful AI model that is able to predict correlations (psychological, behavioral, etc.) very well. The most suitable starting question I found that might correlate with high intelligence was "What was your ACT score, between 1 and 36?" (although one could also just work with some made-up claim like "What's your IQ?" or "Would you describe yourself as unusually intelligent?", which the prediction model could probably handle almost as well). I then checked the correlation of this with some phrases that are vaguely related to doing good:

So, based on this, it appears that at least among humans (or rather, among the types of humans whose data is in PersonalityMap's database, which is likely primarily people from the US), intelligence and morality are not (meaningfully/positively) correlated, so locally this does look like evidence for the Orthogonality thesis holding up. Of course we can't just extrapolate this to AI, let alone AGI/ASI. But maybe still an interesting data point. (Admittedly this is only tangentially related to your actual post, so sorry if this is a little off-topic.)

Thanks for asking! Are you referring to the slightly earlier wake-up time? I just had a look at the net sleep time in the three groups, and got the following comparison:

Control: 8h 00m

0.15mg: 8h 02m

0.3mg: 7h 45m

But large p-values, as you can guess from the overlapping CIs.

(The seeming discrepancy between this data and wake-up time can be explained by the fact that wake-up time was the absolute time, whereas net sleep time is also affected by when I went to bed and how long it took me to fall asleep)
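(For reference, the comparison itself is nothing fancy - a minimal sketch, assuming the per-night data sits in a CSV with hypothetical columns group and net_sleep_minutes; the file and column names are just placeholders, not my actual setup:)

```python
import pandas as pd
from scipy import stats

# Hypothetical file and column names; the real log may look different.
df = pd.read_csv("melatonin_log.csv")  # columns: group, net_sleep_minutes

for dose in ["0.15mg", "0.3mg"]:
    control = df.loc[df["group"] == "control", "net_sleep_minutes"]
    treated = df.loc[df["group"] == dose, "net_sleep_minutes"]
    t, p = stats.ttest_ind(treated, control, equal_var=False)  # Welch's t-test
    print(f"{dose}: mean diff = {treated.mean() - control.mean():+.1f} min, p = {p:.2f}")
```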

But -- if I understand correctly, you did not take any melatonin between nights in which you randomized -- have you looked at "treatment effect vs. length of time since last experimental night"? This would be a very crude way of getting at tolerance effects.

Good idea! Had a brief look now: I filtered my data for the 40 days on which I took melatonin, then for each one calculated the time (in days) since I last took melatonin (so not the last day I ran the experiment, but the last day I ran the experiment where I was in one of the two intervention groups), and looked for a correlation between number of days since previous melatonin intake and time to fall asleep. There's maybe a tiny hint that there could be tolerance effects at play, but the data is insufficient for anything conclusive:

The point on the far right is the first day on which I took melatonin - for that one, the "days since last intake" is not really defined, so I just chose the maximum distance between days I had + 1.

We do find a very slightly negative correlation, which seems to indicate that taking a break from the experiment (or having had some control-group days recently) made the melatonin slightly more effective at reducing time to fall asleep - but then again, a [-0.4, 0.22] CI doesn't tell us much. :)

(Update: I also made a small linear regression and obtained the formula predicted_time_to_fall_asleep = 27.1 - 0.24 * days_since_last_intake (for days on which I took melatonin) - but, again, large error bars around that coefficient)
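(In case anyone wants to run the same kind of analysis on their own data, here's roughly what the calculation looks like - the file layout and column names below are just assumptions about how such a log might be structured, not my actual file:)

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: one row per experiment night, with columns
# 'date', 'group' ('control' / '0.15mg' / '0.3mg') and 'minutes_to_fall_asleep'.
df = pd.read_csv("melatonin_log.csv", parse_dates=["date"]).sort_values("date")

melatonin = df[df["group"] != "control"].copy()
# Days since the previous *melatonin* night (not just the previous experiment night).
melatonin["days_since_last_intake"] = melatonin["date"].diff().dt.days
# The first melatonin night has no predecessor; stand in the maximum gap + 1.
melatonin["days_since_last_intake"] = melatonin["days_since_last_intake"].fillna(
    melatonin["days_since_last_intake"].max() + 1
)

# Correlation plus a simple linear fit (time to fall asleep ~ days since last intake).
r, p = stats.pearsonr(melatonin["days_since_last_intake"],
                      melatonin["minutes_to_fall_asleep"])
fit = stats.linregress(melatonin["days_since_last_intake"],
                       melatonin["minutes_to_fall_asleep"])
print(f"r = {r:.2f} (p = {p:.2f}); "
      f"predicted = {fit.intercept:.1f} + {fit.slope:.2f} * days")
```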

I have a (completed) 5-year melatonin self-experiment that I will hopefully write up later this year (although... I have been saying that for 12+ months at this point), will be fun to compare notes.

Oh wow, please do!

One thing that stuck with me after reading it somewhere, probably on lesswrong, a few years ago is the framing: "does future-you have a comparative advantage to do the thing? Otherwise you may just as well do it now". Which maybe doesn't quite capture your cooking counter-example, but it seems like a useful way to address procrastination nonetheless.

The short version of my somewhat opposing viewpoint would be something along the lines of "directional effects aren't absolute truths". If moral realism is true, then a superintelligence may indeed be more likely to find these moral facts - but that doesn't mean it necessarily will, nor that it will be motivated to accept these moral facts as goals. "In the limit" (of intelligence), maybe...? But "just able to disempower humanity"-level ASI could still be very far away from that.

Your points 2-4 are all what I would consider directional effects. (Side note, do you really mean "casually" or "causally"?) They are not necessarily very strong, and opposing factors could exist as well.

And point 6 turns these qualitative/directional considerations into something close-to-quantitative ("likely") that I wouldn't see as a conclusion following from the earlier points.

I would still agree with the basic idea that moral realism may be vaguely good news wrt the orthogonality thesis, but for me that seems like a very marginal change.

Indeed, judgement seems to be a dimension of intelligence (or effectiveness? Or something?) that is distinct from creativity or problem solving, and maybe a bit neglected / less top of mind. I wonder if there are even good ways of measuring this in humans. Or some benchmark for LLMs. I really don't have a good model of judgement at all. Is it a general thing people are good or bad at? Is it highly domain-specific? Probably? To what degree is it distinct from "expertise"? And, yes, do today's frontier models maybe have some judgement capability that is just hard to elicit?

I certainly disagree about the "no evidence" part - to me, the fact that I'm an individual with preferences and ability to suffer is very strong evidence for subjective moral facts, so to speak, and if these exist subjectively, then it's not that much of a stretch to assume there's an objective way to resolve conflicts between these subjective preferences.

It's for sure too large of a topic to resolve in this random comment thread, but either way, to my knowledge the majority of philosophers believe moral realism is more likely to be true than not, and even on lesswrong I'm not aware of huge agreement on it being false (but maybe I'm mistaken?). Hence, just casually dismissing moral realism without even a hint of uncertainty seems rather overconfident.

Surely there are diminishing returns to meaning obtained per work time, and with 80% of the work you get like 95% of the meaning.

And given that you now have much more time for coping, overall that's probably a pretty positive deal!

But that's a very different question from whether moral realism is true. Sure, some (maybe large) subset of human morality can be explained through biological and cultural evolution. But that tells us nothing about moral realism. It probably indicates that if moral facts exist, then the "default" morality any human ends up with is potentially (albeit not necessarily) quite different from these facts; but I don't think it has any notable implications on the correctness of moral realism.
