All of Mawrak's Comments + Replies

Just looking at the list of "subtle cases of unwholesomeness" makes me not want to adopt the model of wholesomeness in my behaviour. All of these things, except the second one, seem reasonable to me: not just "sometimes", but as a standing set of available actions. The model of wholesomeness feels very restrictive and ineffective. I'm not sure I understand why wholesomeness should be implemented when we have other common moral ideologies that would condemn all of the things on the first list (the list of extreme cases) as "bad" (and I think those should be considered ... (read more)

If you can recreate even 1% of his consciousness with this kind of data, I would be surprised.

The button isn't showing up for me. Well, it shows up for about a second after I reload the page, but then it's gone. I tried the Opera GX browser and Chrome; it happens in both. Is this intended behaviour? I use Windows 7, maybe that's why...

2Rafael Harth
You shouldn't yet be able to blow up the page due to your karma. Idk if it's supposed to show up.

I would argue that the most complex information exchange system in the known Universe will be hard to emulate. I don't see how it could be any other way. We already understand neurons well enough to emulate them individually, but this is not nearly enough. You will not be able to do whole brain emulation without an understanding of the inner workings of the system.
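To make "emulating a neuron" concrete: the neuron-level modeling we already know how to do is something like a textbook leaky integrate-and-fire unit. The sketch below is a generic illustration with made-up parameter values, not a claim about any particular emulation project; the argument above is precisely that stacks of such units don't automatically add up to a brain.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: a standard single-neuron model,
# illustrating what "emulating a neuron" can mean at the simplest level.
# All parameter values here are illustrative, not biologically calibrated.

def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    """Integrate membrane voltage over time; emit a spike on threshold crossing."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Leak toward resting potential plus drive from the input current.
        v += (-(v - v_rest) + resistance * i_in) * dt / tau
        if v >= v_threshold:          # threshold crossing -> spike
            spikes.append(step * dt)  # record spike time in seconds
            v = v_reset               # reset membrane potential
    return spikes

# Constant 2 nA input for 0.5 s produces a regular spike train.
current = np.full(5000, 2e-9)
print(len(simulate_lif(current)), "spikes in 0.5 s")
```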

If we look at 17!Austin and 27!Austin as two different people, then I don't see why 27!Austin would have any obligation to do anything for 17!Austin if he doesn't want to, just like I wouldn't attend masses just because a friend from 10 years ago, who is also dead now, wanted me to.

If we look at 17!Austin and 27!Austin as a continuation of the same person, then 27!Austin can do whatever he wants, because everybody has the right to change their mind and perspective, to evolve, and to correct the mistakes of their past.

If we consider information p... (read more)

1Svyatoslav Usachev
But that's not true! Even if I don't feel obliged to comply 100% with what other people want, I certainly am affected by their desires and want to compromise. Yes, maybe it's not quite an "obligation", but I rarely experience those toward anyone anyway.
1Austin Chen
I'm not so sure about this analogy -- intuitively, aren't your obligations to yourself much stronger than to a friend? E.g. if a friend randomly asked for $5000 to pay for a vacation, I wouldn't just give it to her; but if my twin or past self spent that much, I'd be something like 10-100x more likely to oblige.

And it keeps giving me photorealistic faces as a component of images where I wasn't even asking for that, meaning that per the terms and conditions I can't share those images publicly.

Could you just blur out the faces? Or is that still not allowed?

2Swimmer963 (Miranda Dixon-Luinenburg)
I assume that would be allowed, but then it misses a lot of the point of sharing how impressive DALL-E's art is!

For typos there should be an option to just select the error in the text and submit it to the author through the web page. That's what they do on some fanfiction websites. The only downside is that a troll could potentially abuse the system.

Answer by Mawrak

I think people who are trying to accurately describe the future more than 3 years out are overestimating their predictive abilities. There are so many unknowns that just trying to come up with accurate odds of survival should make your head spin. We have no idea how exactly transformative AI will function, how soon it is coming, what future researchers will do or not do in order to keep it under control (I am talking about specific technological implementations here, not just abstract solutions), whether it will even need something... (read more)

I think this argument can and should be expanded on.  Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record.  Can we pin down exactly why- what specific kind of error futurists have been falling prey to- and then see if that applies here?

Take, for example, traditional Marxist thought.  In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-con... (read more)

That Washington Post article about Bucha... that's just insane. So many lives lost. And the pro-Russian sources are completely silent on this, which is also telling.

Inb4 rationalists intentionally develop an unaligned AI designed to destroy humanity. Maybe the real x-risks were the friends we made along the way...

The brain is the most complex information exchange system in the known Universe. Whole Brain Emulation is going to be really hard. I would probably go with a different solution. I think myopic AI has potential.

EDIT: It may also be worth considering building an AI with no long-term memory. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI c... (read more)

1Erhannis
"Complex" doesn't imply "hard to emulate".  We likely won't need to understand the encoded systems, just the behavior of the neurons.  In high school I wrote a simple simulator of charged particles - the rules I needed to encode were simple, but it displayed behavior I hadn't programmed in, nor expected, but which were, in fact, real phenomena that really happen.

Could it be possible to build an AI with no long-term memory? Just make its structure static. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to avoid losing memory, and it probably can't build a new similar AI either (remember, it's still an early AGI, not a God-like Superintelligence yet). If it doesn't remember ... (read more)

1Evan R. Murphy
This is similar to the concept of myopia. It seems a bit different though, as myopia tends to focus on constraining an AI's forward-lookingness, whereas your focus is on constraining past memory. I think myopia has potential, but I'm not sure about blocking long-term memory. Does forgetting the past really prevent an AI from having dangerous plans and objectives? (I haven't thought about this very much yet, it's just an initial reaction.)
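As a toy picture of what "static structure plus wiped parameters" could mean, here is a minimal sketch under my own assumptions: frozen weights, with all episodic state in a per-task scratch buffer that is discarded when the task ends. The names (FrozenPolicy, TaskSpec, run_task) are hypothetical, and this obviously doesn't settle the question above of whether forgetting the past blocks dangerous plans.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class FrozenPolicy:
    """Stands in for a trained model whose weights are immutable at deploy time."""
    weights: tuple  # frozen dataclass + tuple: nothing here can be rewritten in place

    def act(self, observation, scratch):
        # Decide using static weights plus within-task scratch memory only.
        scratch.append(observation)
        return f"action_for({observation})"

@dataclass
class TaskSpec:
    goal: str
    steps: list = field(default_factory=list)

def run_task(policy: FrozenPolicy, task: TaskSpec):
    scratch = []  # the ONLY mutable memory, scoped to this single task
    for obs in task.steps:
        print(policy.act(obs, scratch))
    del scratch  # goal reached (or abandoned): episodic memory is wiped
    # Nothing learned here persists; the next task starts from a blank slate.

policy = FrozenPolicy(weights=(0.1, 0.2))
run_task(policy, TaskSpec(goal="build a house that looks like this",
                          steps=["survey site", "lay foundation"]))
run_task(policy, TaskSpec(goal="another task", steps=["step one"]))
```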

This framing feels like a much more motivating strategy, even though it's pretty much identical to what Eliezer is proposing.

The NY Times article disputed the Russian alternative explanations about how the bodies were moved there. I agree that those are bullshit, and I am not disputing that.

That drone footage seems like direct evidence to me, yes.

Is there any direct evidence linking the Bucha massacre to the Russian military? The Kremlin's alternative theories being bullshit doesn't mean they did it. They just don't really have a motive, though it could've easily been soldiers acting on their own without much oversight. The Ukrainian government gave out guns to civilians, so it could've been someone else. I've seen Russian army rations lying near the bodies in some of the photos (EDIT: for the record, the photos come from here, which is far from a reliable source, I cannot verify that these photos are... (read more)

1Dirichlet-to-Neumann
https://mobile.twitter.com/MilHist_Lee/status/1510853947338199045. For a perspective on how armies turn to atrocities.
1kilotaras
Besides witnesses, there's:
1. The NY Times article showing satellite images with bodies appearing during the occupation period.
2. Drone footage of armour firing on a cyclist.
What would you consider more direct evidence?

I am Russian and I can confirm that he most certainly did not call for "killing of as many Ukrainians as possible".

He said that there can be no negotiations with Nazis outside of putting a boot on their neck because it will be seen as weakness, and that you shouldn't try to have "small talk" with them, or shake hands with them. He did say "a task is set and it should be finished". He did not explicitly say anything about killing, let alone "as many as possible", at least not in that clip.

It seems like one literally cannot trust anyone to report information about this war accurately.

I most certainly do not think that we should do nothing right now. I think that important work is being done right now. We want to be prepared for transformative AI when the time comes. We absolutely should be concerned about AI safety. What I am saying is, it's pretty hard to calculate our chances of success at this point in time due to so many unknowns about the timeline and the form the future AI will take.

I found this post to be extremely depressing and full of despair. It has made me seriously question what I personally believe about AI safety, whether I should expect the world to end within a century or two, and if I should go full hedonist mode right now.

I've come to the conclusion that it is impossible to make an accurate prediction about an event that's going to happen more than three years from the present, including predictions about humanity's end. I believe that the most important conversation will start when we actually get close to developing ear... (read more)

0superads91
"I've come to the conclusion that it is impossible to make an accurate prediction about an event that's going to happen more than three years from the present, including predictions about humanity's end." Correct. Eliezer has said this himself, check out his outstanding post "There is no fire alarm for AGI". However, you can still assign a probability distribution to it. Say, I'm 80% certain that dangerous/transformative AI (I dislike the term AGI) will happen in the next couple of decades. So the matter turns out to be just as urgent, even if you can't predict the future. Perhaps such uncertainty only makes it more urgent. ". I believe that the most important conversation will start when we actually get close to developing early AGIs (and we are not quite there yet), this is when the real safety protocols and regulations will be put in place, and when the rationalist community will have the best chance at making a difference. This is probably when the fate of humanity will be decided, and until then everything is up in the air." Well, first, like I said, you can't predict the future, i.e. There's No Fire Alarm for AGI. So we might never know that we're close till we get there. Happened with other transformative technologies before. Second, even if we could, we might not have enough time by then. Alignment seems to be pretty hard. Perhaps intractable. Perhaps straight impossible. The time to start thinking of solutions and implementing them is now. In fact, I'd even say that we're already too late. Given such monumental task, I'd say that we would need centuries, and not the few decades that we might have. You're like the 3rd person I respond to in this post saying that "we can't predict the future, so let's not panic and let's do nothing until the future is nearer". The sociologist in me tells me that this might be one of the crucial aspects of why people aren't more concerned about AI safety. And I don't blame them. If I hadn't been exposed to key concepts my

If I am doomed to fail, I have no motivation to work on the problem. If we are all about to get destroyed by an out-of-control AI, I might just go full hedonist and stop putting work into anything (fuck dignity and fuck everything). Posts like this are severely demotivating; people are interested in solving problems, and nobody is interested in working on "dying with dignity".

I'm interested in working on dying with dignity

I still take issue with the Doctor Rock. The Rock is acceptable if the primary function of the job is to be the Rock (like the bullshit security guard). The doctor's job is not to "reassure her patients and reduce their stress" (she is not a psychotherapist); her job is to actually find those three dangerous cancers among the many harmless ones. The existence of such a doctor is dangerous because it gives people false confidence that they are being treated for their medical issues by an expert when in reality the doctor is just bullshitting. Even if there are ma... (read more)

I am not an expert by any means, but here are my thoughts: while I find GPT-3 quite impressive, it's not even close to AGI. All the models you mentioned are still focused on performing specific tasks. This alone will (probably) not be enough to create AGI, even if you increase the size of the models even further. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I'd say the probability of these developments being significantly more advanced than the publicly available technologies is pretty low.

1superads91
A sober opinion (even if quite different from mine). My biggest fear is scaling a transformer + completing it with other "parts", as in an agent (even if a dumb one), etc. Thanks

I probably wouldn't do it. If I am being compensated for writing the note, it means there is a person offering to pay me to write it. Since this is a weird request, and I cannot be sure of the person's motivation, as a safety precaution I would assume malice, especially if the "compensation" is large. Nobody in their right mind would pay me a million dollars to put a note on the wall saying that I wish my family would die just to prove a point. I would assume malice. This person could be a psycho serial killer who would kill my famil... (read more)

Yes, I realized it in the dream. Since it was a lucid dream, I was fully aware that I was in a dream and remembered how the taste was supposed to be in reality, so I could compare on the fly.

I know for a fact that I see colors in dreams. When I have a lucid dream I can experiment with my experiences, and I have confirmed that I see colors. I can also feel taste, cold, and touch, hear sounds, and sometimes experience pain (I was once stabbed in a dream and it hurt like hell, even for several minutes after I woke up). In fact, I found the amount of detail objects had surprising: when I looked at a stone wall, it looked like the texture of a real wall. When I touched the wall, it felt like a stone wall. The other senses did not get the same level of detail: when I tried tasting snow, it was kinda cold-ish and kinda tasted like snow, but not really. When I tasted food, it tasted really weird and not really like I expected.

1Measure
Did you know in the dream that the taste was wrong, or only after waking?