I would argue that the most complex information exchange system in the known Universe will be hard to emulate. I don't see how it can be any other way. We already understand individual neurons well enough to emulate them, but that is not nearly enough. You will not be able to do whole brain emulation without an understanding of the inner workings of the system as a whole.
If we look at 17!Austin and 27!Austin as two different people, then I don't see why 27!Austin would have any obligation to do anything for 17!Austin if 27!Austin doesn't want to do it, just as I wouldn't attend masses just because my friend from ten years ago, who is also dead now, wanted me to.
If we look at 17!Austin and 27!Austin as a continuation of the same person, then 27!Austin can do whatever he wants, because everybody has a right to change their mind and perspective, to evolve, and to correct the mistakes of their past.
If we consider information p...
I think people who are trying to accurately describe the future that will happen more than three years from now are overestimating their predictive abilities. There are so many unknowns that just trying to come up with accurate odds of survival should make your head spin. We have no idea how exactly transformative AI will function, how soon it is coming, what future researchers will or will not do to keep it under control (I am talking about specific technological implementations here, not just abstract solutions), whether it will even need something...
I think this argument can and should be expanded on. Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record. Can we pin down exactly why (what specific kind of error futurists have been falling prey to) and then see if that applies here?
Take, for example, traditional Marxist thought. In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-con...
The brain is the most complex information exchange system in the known Universe. Whole Brain Emulation is going to be really hard. I would probably go with a different solution. I think myopic AI has potential.
EDIT: It may also be worth considering building an AI with no long-term memory. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped out once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI c...
Could it be possible to build an AI with no long-term memory? Just make its structure static. If you want it to do a thing, you put in some parameters ("build a house that looks like this"), and they are automatically wiped out once the goal is achieved. Since the neural structure is fundamentally static (not sure how to build it, but it should be possible?), the AI cannot rewrite itself to avoid losing its memory, and it probably can't build a new similar AI either (remember, it's still an early AGI, not a God-like Superintelligence yet). If it doesn't remember ...
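A minimal sketch of what this idea might look like, with every name and the toy "goal" invented purely for illustration: the weights are frozen at construction, the only mutable state is a per-task context, and that context is discarded the moment the goal check passes.

```python
from dataclasses import dataclass, field

@dataclass
class TaskContext:
    """Ephemeral working memory for a single task; never outlives it."""
    goal_spec: dict
    scratch: list = field(default_factory=list)

class MemorylessAgent:
    def __init__(self, frozen_weights: dict):
        # Weights are fixed at construction and there is no training step,
        # so nothing the agent does during a task can persist in them.
        self._weights = frozen_weights

    def run_task(self, goal_spec: dict):
        ctx = TaskContext(goal_spec)
        result = None
        while not self._goal_achieved(ctx, result):
            result = self._act(ctx)  # reads frozen weights + ctx, writes only ctx
        del ctx  # the "wipe": the only mutable state is discarded here
        return result

    def _act(self, ctx: TaskContext):
        # Toy stand-in for a policy: take one fixed-size step per call.
        ctx.scratch.append(self._weights["step_size"])
        return sum(ctx.scratch)

    def _goal_achieved(self, ctx: TaskContext, result) -> bool:
        return result is not None and result >= ctx.goal_spec["target"]

agent = MemorylessAgent(frozen_weights={"step_size": 1})
print(agent.run_task({"target": 3}))  # 3
print(agent.run_task({"target": 5}))  # 5, recomputed from scratch; no trace of task 1
```

Of course, the hard part the comment points at (making the structure genuinely unmodifiable by the AI itself) is exactly what a few lines of Python can't demonstrate.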
Is there any direct evidence linking the Bucha massacre to the Russian military? The Kremlin's alternative theories being bullshit doesn't mean they did it. They just don't really have a motive, though it could've easily been soldiers acting on their own without much oversight. The Ukrainian government gave out guns to civilians, so it could've been someone else. I've seen Russian army rations lying near the bodies in some of the photos (EDIT: for the record, the photos come from here, which is far from a reliable source, I cannot verify that these photos are...
I am Russian and I can confirm that he most certainly did not call for "killing of as many Ukrainians as possible".
He said that there can be no negotiations with Nazis beyond putting a boot on their neck, because negotiating will be seen as weakness, and that you shouldn't try to have "small talk" with them or shake hands with them. He did say "a task is set and it should be finished". He did not explicitly say anything about killing, let alone "as many as possible", at least not in that clip.
It seems like one literally cannot trust anyone to report information about this war accurately.
I most certainly do not think that we should do nothing right now. I think that important work is being done right now. We want to be prepared for transformative AI when the time comes. We absolutely should be concerned about AI safety. What I am saying is that it's pretty hard to calculate our chances of success at this point in time, due to so many unknowns about the timeline and the form the future AI will take.
I found this post to be extremely depressing and full of despair. It has made me seriously question what I personally believe about AI safety, whether I should expect the world to end within a century or two, and whether I should go full hedonist mode right now.
I've come to the conclusion that it is impossible to make an accurate prediction about an event that's going to happen more than three years from the present, including predictions about humanity's end. I believe that the most important conversation will start when we actually get close to developing ear...
If I am doomed to fail, I have no motivation to work on the problem. If we are all about to get destroyed by an out-of-control AI, I might just go full hedonist and stop putting work into anything (fuck dignity and fuck everything). Posts like this are severely demotivating: people are interested in solving problems, and nobody is interested in working on "dying with dignity".
I still take issue with the Doctor Rock. The Rock is acceptable if the primary function of the job is to be the Rock (like the bullshit security guard). The doctor's job is not to "reassure her patients and reduce their stress" (she is not a psychotherapist); her job is to actually find those three dangerous cancers among many harmless ones. The existence of such a doctor is dangerous because it gives people false confidence that they are being treated for their medical issues by an expert, when in reality the doctor is just bullshitting. Even if there are ma...
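To make the base-rate point concrete, a toy calculation (all numbers made up for illustration): a doctor who, like the Rock, always says "it's nothing" scores extremely well on accuracy while catching zero of the cancers that are the whole point of the job.

```python
# Made-up numbers, purely to illustrate why the Rock's accuracy is misleading.
n_patients = 1000   # lumps examined
n_dangerous = 3     # actual cancers among them

# The Rock-doctor always answers "it's nothing".
rock_accuracy = (n_patients - n_dangerous) / n_patients  # 0.997
cancers_caught = 0                                        # every real cancer missed

print(f"right {rock_accuracy:.1%} of the time, "
      f"catches {cancers_caught} of {n_dangerous} dangerous cancers")
```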
I am not an expert by any means, but here are my thoughts: while I find GPT-3 quite impressive, it's not even close to AGI. All the models you mentioned are still focused on performing specific tasks. This alone will (probably) not be enough to create AGI, even if you increase the size of the models further. I believe AGI is at least decades away, perhaps even a hundred years away. Now, there is a possibility of stuff being developed in secret, which is impossible to account for, but I'd say the probability of those developments being significantly more advanced than the publicly available technologies is pretty low.
I probably wouldn't do it. If I am being compensated for writing the note, it means there is a person offering to pay me to write it. Since this is a weird request and I cannot be sure of the person's motivation, as a safety precaution I would assume malice, especially if the "compensation" is large. Like, nobody in their right mind would pay me a million dollars to put a note on the wall that says I wish my family would die just to prove a point. This person could be a psycho serial killer who would kill my famil...
I know for a fact that I see colors in dreams. When I have a lucid dream I can experiment with my experiences, and I was able to confirm that I saw colors. I could also feel taste, cold, and touch, hear sounds, and sometimes experience pain (I was once stabbed in a dream and it hurt like hell, even for several minutes after I woke up). In fact, I found the amount of detail objects had surprising: when I looked at a stone wall, it looked like the texture of a real wall. When I touched the wall, it felt like a stone wall. Other senses did not have the same level of detail: when I tried tasting snow, it was kinda cold-ish and kinda tasted like snow, but not really. When I tasted food, it tasted really weird and not really like I expected.
Just looking at the list of "subtle cases of unwholesomeness" makes me not want to adopt the model of wholesomeness in my behaviour. All of these things, except the second one, seem reasonable to me, not just "sometimes" but as a standing set of available actions. The model of wholesomeness feels very restrictive and ineffective. I'm not sure I understand why wholesomeness should be implemented when we have other common ideologies of morality that would condemn all of the things on the first list (the list of extreme cases) as "bad" (and I think those should be considered ...