links 12/23/2024: https://roamresearch.com/#/app/srcpublic/page/12-23-2024
My median expectation is that AGI[1] will be created 3 years from now. This has implications for how to behave, and I will share some useful thoughts that I and others have had on how to orient to short timelines.
I’ve led multiple small workshops on orienting to short AGI timelines and compiled the wisdom of around 50 participants (but mostly my thoughts) here. I’ve also participated in multiple short-timelines AGI wargames and co-led one wargame.
This post will assume median AGI timelines of 2027 and will not spend time arguing for this point. Instead, I focus on what the implications of 3-year timelines would be.
I didn’t update much on o3 (as my timelines were already short) but I imagine some readers did and might feel disoriented now. I hope...
make their models sufficiently safe
What does "safe" mean, in this post?
Do you mean something like "effectively controllable"? If yes: controlled by whom? Suppose AGI were controlled by some high-ranking people at (e.g.) the NSA; with what probability do you think that would be "safe" for most people?
A new article in Science Policy Forum voices concern about a particular line of biological research which, if successful in the long term, could eventually create a grave threat to humanity and to most life on Earth.
Fortunately, the threat is distant, and avoidable—but only if we have common knowledge of it.
What follows is an explanation of the threat, what we can do about it, and my comments.
Glucose, a building block of sugars and starches, looks like this:
But there is also a molecule that is the exact mirror-image of glucose. It is called simply L-glucose (in contrast, the glucose in our food and bodies is sometimes called D-glucose):
This is not just the same molecule flipped around,...
Not all forms of mirror biology would even need to be restricted. For instance, there are potential uses for mirror proteins, and those can be safely engineered in the lab. The only dangerous technologies are the creation of full mirror cells, and certain enabling technologies which could easily lead to that (such as the creation of a full mirror genome or key components of a proteome).
Once we get used to creating and dealing with mirror proteins, and once we get used to designing and building cells (and I don't know when that will happen), maybe adding 1+1 togeth...
I think you are outlining something interesting and useful, something that might be a necessary step for carrying out your original post's suggestion with less risk; this holds especially when the connection is direct and is exactly what you find yourself analyzing, rather than several links away.
Once I talked to a person who said they were asexual. They were also heavily depressed and thought about committing suicide. I repeatedly told them to eat some meat, as they had been vegan for many years; I myself had experienced veganism-induced depression. Finally, after many weeks, they ate some chicken, and the next time we spoke, they said that they were no longer asexual (they never were), nor depressed.
I was vegan or vegetarian for many consecutive years. Vegetarianism was manageable, perhaps because of cheese; I never hit the extreme low points that I did with veganism. I remember once, after not eating meat for a long time, there was a period of maybe a week when I got extremely fatigued. I took 200mg of modafinil[1] without having built up any tolerance. Usually, this would give me a lot...
they said that they were no longer asexual (they never were),
I'm somewhat skeptical of the claim in parentheses. It certainly sounds like there is a state where they demonstrated enough traits to think they were asexual, and that information tends to be worth tracking, even if only for self-diagnostics.
“Just expose yourself to more social situations!” — Ah yes, you felt anxious the first 100 times, but the 101st will be the breakthrough!
“But exposure works!” people yell from across the street. “Like for fear of snakes - you know, those things you see once a year!”
Uh, it’s pretty rational to fear things you have little experience with. But social anxiety… you interact with people every day! Why would anything change after the first 100 attempts?
I don’t doubt that a couple of exposures can often reduce anxieties. However, if you still feel anxious even after hundreds of social situations and years of trying... then maybe your fear is actually doing something presently useful and you should reconnect with your intuitions.
At a 100% eye contact workshop I led earlier this year, most...
I wrote (in Hebrew, alas) two years ago about locally useful methods that don't have a stopping condition. I'm sure there are people out there who benefit from exposure; the attitude you described comes from them, and from people whose bubble consists mostly of them.
The problem is the lack of a stopping condition. How many tries before you decide this method doesn't work? Before stopping and re-evaluating? Before trying something else instead?
See also what Scott Alexander wrote about exposure and Trapped Priors.
I think you make the same mistake the ex...
I’ve updated quite hard against computational functionalism (CF) recently (as an explanation for phenomenal consciousness), from ~80% to ~30%. Of course it’s more complicated than that, since there are different ways to interpret CF and having credences on theories of consciousness can be hella slippery.
So far in this sequence, I’ve scrutinised a couple of concrete claims that computational functionalists might make, which I called theoretical and practical CF. In this post, I want to address CF more generally.
Like most rationalists I know, I used to basically assume some kind of CF when thinking about phenomenal consciousness. I found a lot of the arguments against functionalism, like Searle’s Chinese room, unconvincing; they just further entrenched my functionalism. But as I came across and tried to explain away more and more...
A failure of practical CF can be of two kinds:
A copy is possible, but it will not have phenomenal consciousness, or at least it will have a phenomenal consciousness that is non-human or not mine, e.g., different non-human qualia.
What is your opinion about (1) – the possibility of creating a copy?
Similar to other people's shortform feeds, short stuff that people on LW might be interested in, but which doesn't feel like it's worth a separate post. (Will probably be mostly cross-posted from my Facebook wall.)
I disagree with this, in that good mathematics definitely requires at least a little understanding of the world, and if I were to think about why LLMs succeeded at math, I'd probably point to the fact that it's an unusually verifiable task, relative to the vast majority of tasks, and would also think that the fact that you can get a lot of high-quality data also helps LLMs.
Only programming shares these traits to an exceptional degree, and outside of mathematics and programming, I expect less transferability, though not effectively zero transferability.