bortels

I think that's a cognitive illusion, but I understand that it can generate positive emotions, which are not an illusion by any means.
More a legacy kind of consideration, really - I do not imagine any meaningful part of myself, other than my genes (which frankly I was just borrowing), will live on. But - if I have done my job right, the attitudes and morals that I have should be reflected in my children, and so I have an effect on the world in some small way that lingers, even if I am not around to see it. And yes - that's comforting, a bit. Still would rather not die, but hey.
So - I am still having issues parsing this, and I am persisting because I want to understand the argument, at least. I may or may not agree, but understanding it seems a reasonable goal.
"The builders know, of course, that this is much riskier than it seems, because its success would render their own observations extremely rare."
The success of the self-modifying AI would make the observations of that AI's builders extremely rare... why? Because the AI's observations count, and it is presumably many orders of magnitude faster?
For a moment, I will assume I have interpreted that correctly. So? How is this risky, and how would creating billions of simulated humanities change...
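To pin down what "rare" would even mean here, a toy back-of-the-envelope - every number below (the count of real observers, the number of simulated observers per real one) is invented purely for illustration, not taken from the argument itself:

```python
# Toy illustration of the "observations become rare" claim under the
# self-sampling assumption (SSA). All quantities are made up for
# illustration; none of them come from the original argument.

real_observers = 10_000       # hypothetical count of pre-AI human observers
sim_per_real = 10**9          # hypothetical simulated observers per real one

total_observers = real_observers * (1 + sim_per_real)

# Under SSA you reason as if you were drawn at random from all observers,
# so the chance of finding yourself among the real builders is:
p_real = real_observers / total_observers
print(f"P(being a real builder) = {p_real:.3e}")  # ~ 1e-9
```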
Ah - I'd seen the link, but the widget just spun. I'll go look at the PDF. The below is written before I have read it - it could be amusing and humility-inducing if I read it and it changes my mind on the below (and I will surely report back if that happens).
As for the SSA being wrong on the face of it - the DA wiki page says "The doomsday argument relies on the self-sampling assumption (SSA), which says that an observer should reason as if they were randomly selected from the set of observers that actually exist." Assuming this is true (I do not know enough...
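For reference, here is the standard doomsday arithmetic that the SSA is supposed to license, as a toy sketch - the uniform-birth-rank assumption and the "humans born so far" figure are both assumptions of the illustration, not anything established above:

```python
# Toy version of the doomsday estimate the SSA is used to underwrite.
# Illustration's assumption: your birth rank n is uniform in [1, N],
# so P(n/N > 0.05) = 0.95, i.e. with 95% confidence N < 20 * n.
# The "humans born so far" figure is a commonly cited rough estimate.

n_born_so_far = 6 * 10**10   # ~60 billion humans born to date (rough)
confidence = 0.95

# With probability `confidence`, n/N exceeds (1 - confidence), so:
n_total_upper = n_born_so_far / (1 - confidence)
print(f"With {confidence:.0%} confidence, "
      f"total humans ever born < {n_total_upper:.2e}")
```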
I have an intellectual issue with applying "probably" to an event that has never happened before in the history of the universe (so far as I can tell).
And - if I am given the choice between slow, steady improvement in the lot of humanity (which seems to be the status quo) and a dice throw that results in either paradise or extinction - I'll stick with slow and steady, thanks, unless the odds were overwhelmingly positive. And - I suspect they are overwhelming, but in the opposite direction, because there are far more ways to screw up than to succeed, and once the AI is out - you no longer have a chance to...
The techniques are useful in and of themselves, without having to think about their utility in creating a friendly AI.
So, yes, by all means, work on better skills.
But - the point I'm trying to make is that while they may help, they are insufficient to provide any real degree of confidence in preventing the creation of an unfriendly AI, because the emergent effects that would likely be responsible for such an AI are not amenable to planning ahead of time.
It seems to me your original proposal is the logical equivalent of "Hey, if we can figure out how to better predict where lightning strikes - we could go there ahead of time and be ready to stop the fires quickly, before they spread". Well, sure - except that sort of prediction would depend on knowing ahead of time the outcome of very unpredictable events ("where, exactly, will the lightning strike?") - and it would be far more practical to spend the time and effort on things like lightning rods and firebreaks.
So - there's probably no good reason for you - as a mind - to care about your genes, unless you have reason to believe they are unique or superior in some way to the rest of the population.
But as a genetic machine, you "should" care deeply, for a very particular definition of "should" - simply because if you do not, and that indifference turns out to have a genetic basis, then your genes will indeed die out. The constant urge and competition to reproduce your particular set of genes is what drives evolution (well, that and some other stuff, like mutations). I like what evolution has come up with so far...
Exactly. Having a guaranteed low-but-livable-income job as a reward for serving time and not going back is hardly a career path people will aim for - but it might be attractive to someone who is out but sees few alternatives to going back to a life of crime.
I actually think training and New Deal-type employment guarantees for those in poverty are a good idea aside from the whole prison thing - in that attempts to raise people out of poverty would likely reduce crime to begin with.
The real issue here - running prisons as a profit-making business - has already been pointed out.
Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.
I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge in which you may be no more educated than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about that, and will never find the time to", you can let it go and relax a bit. Or - I have. (My favorites are microbiology and advanced mathematics. I fancy myself smart, but it is super easy to be so totally over my head it may as well be mystic sorcery they're talking about. Humbles you right out.)
Big chunks of this board do that as well, FWIW.
I spent 7 years playing a video game that started to become as important to me as the real world, at least in terms of how it emotionally affected me. If I had spent the 6ish hours a day, on average, doing something else - well, it makes me vaguely sick to think of the things I might have better spent the time and energy on. Don't get me wrong - it was fun. And I did not sink nearly so low as so many others have, and in the end, when I realized what was going on - I left. I am simply saddened by the opportunity cost. FWIW -...
Strawman?
"... idea for an indirect strategy to increase the likelihood of society acquiring robustly safe and beneficial AI." is what you said. I said preventing the creation of an unfriendly AI.
Ok - valid point. Not the same.
I would say the items described will do nothing whatsoever to "increase the likelihood of society acquiring robustly safe and beneficial AI."
They are certainly of value in normal software development, but as time passes without a proper general AI actually being created, it seems increasingly likely that such a task is far, far more difficult than anyone expected, and that if one does come into being, it will happen in a manner other than the...