Nectanebo

Thanks for the detailed response. The link was very good, too.

Index funds have been recommended on LW before. I have a hard time understanding how investing in one would work, though. Do you actually own the separate stocks on the index that the fund tracks, or do you technically own something else? Where does the dividend money go?

Took the survey. I always feel like I did the last one only recently.

One of the better AMAs I've read.

Peter is an interesting guy. Is his book worth reading?

If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be 'stronger' than the other, and that there will be a winner-take-all(-of-the-universe?) resolution?

If there is some compatibility, perhaps a merge, a la Three Worlds Collide?

Or maybe they co-operate and try not to interfere with each other? This would be less likely if they are in competition for something or other (matter?), but more likely if they have difficulty assessing the risks of not co-operating, or if there is mutually assured destruction?

It's a fun question, but Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we're talking about hypothetical intelligences of this caliber, and I think he had a pretty good point there. This question takes a few extra steps beyond even that.

Isn't this kind of thing a subset of the design space of minds post? That is, we don't know exactly what kind of intelligence could end up exploding, and there are lots of different possible variations?

Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be: veterinary clinics that often perform work on wild animals, pets that don't have to worry about predation, that kind of thing. Also, I think there are probably a lot of species that have done alright for themselves since humans showed up; animals like crows, and the equivalents in their niche around the world, seem to do quite well in urban environments.

As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and is even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility of decreasing suffering in the world, including animal suffering, especially as we grow in our ability to shape the world the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims, perhaps indefinitely, rather than on ours, perhaps temporarily.

Yeah, I was thinking of Goertzel as well.

So you don't think MIRI's work is all that useful? What probability would you assign to a hard takeoff happening at the speed they're worried about?

So is this roughly one aspect of why MIRI's position on AI safety concerns differs from that of similar parties? That they're generally more sympathetic to possibilities further away from 1 than their peers are? I don't really know, but that's what the pebblesorters/value-is-fragile strain of thinking seems to suggest to me.

All the more reason to try to only consume finished works.

I agree with the sentiment because it's frustrating not being able to complete something right away, but with AnH I really did enjoy following it month by month. I think that some pieces of entertainment are suited to that style of consumption and are fun to follow, even if they don't turn out to be very good in the end and aren't worth it for those who would go back and consume it all at once.
