iwdw
The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."
… every culture in history, in every time and every place, has operated from the assumption that it had it 95% correct and that the other 5% would arrive in five years’ time! All were wrong! All were wrong, and we gaze back at their naivety with a faint sense of our own superiority.
-- Terence McKenna, Culture and Ideology are Not Your Friends
That depends on your definition of hope, really.
I've generally been partial to Derrick Jensen's definition of hope, as given in his screed against it:
http://www.orionmagazine.org/index.php/articles/article/170/
...But what, precisely, is hope? At a talk I gave last spring, someone asked me to define it. I turned the question back on the audience, and here’s the definition we all came up with: hope is a longing for a future condition over which you have no agency; it means you are essentially powerless.
I'm not, for example, going to say I hope I eat something
It's entirely possible that there are classified analyses of the RHIC/LHC risks which won't be released for decades.
What public discussion was occurring in the 40s regarding the risks of atmospheric ignition?
I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.
Aren't there routinely multiple implementations of John?
John at epoch time 1213371457
John at 1213371458
John at 1213371459
John at 1213371460
John at 1213371461
John at 1213371462
The difference between John and John in a slightly different branch of reality is probably much smaller than the difference between John and John five seconds later within a given branch of reality (I'm not sure of the correct grammar).
bambi: You're taking the very short-term view. Eliezer has stated previously that the plan is to popularize the topic (presumably via projects like this blog and popular science books) with the intent of getting highly intelligent teenagers or college students interested. The desired result would be that a sufficient quantity of them will go and work for him after graduating.
One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "on top of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?
Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn't be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.
I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it operates at a sufficiently high level that it survives massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having al...
@Ian Maxwell: It's not about the yous in the universes where you have signed up -- it's about all of the yous that die when you're not signed up. i.e. none of the yous that die on your way to work tomorrow are going to get frozen.
(This is making me wonder if anyone has developed a corresponding grammar for many worlds yet...)
Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."
I'm pretty sure that the first experiments were with people who disagreed with him about whether AI boxing would work. The...
It's impossible for me not to perceive time, to not perceive myself as myself, to not perceive my own consciousness.
You've never been so intoxicated that you "lose time", and woken up wondering who you threw up on the previous night? You've never done any kind of hallucinogenic drug? You don't ... sleep?
Those things you listed are only true for a fairly narrow range of operational parameters of the human brain. It's very possible to not do those things, and we stop doing them every night.
The sensation of time passing only seems to exist beca...
bambi: I think this would be related to Newcomb's Problem? Just because the future is fixed relative to your current state (or decision making strategy, or whatever), doesn't mean that a successful rational agent should not try to optimize its current state (or decision making strategy) so that it comes out on the desired side of future probabilities.
It all sorts itself out in the end, of course -- if you're the kind of agent that gets paralyzed when presented with a deterministic universe, then you're not going to be as successful as your consciousness moves to a different part of the configuration as agents that act as if they can change the future.
If everything we know is but a simulation being run in a much larger world, then "everything we know" isn't a universe.
The question wasn't "what's outside the universe?", it was "where did the configuration that we are a part of come from?"
I don't think you can necessarily equate "configuration" (the mathematical entity that we are implicitly represented within), with "universe" (everything that exists).
You're not imaginative enough. If the latter is true, we're a lot more likely to see messages from outside the Matrix sometime. ("Sorry, guys, I ran out of supercomputer time.")
For various values of "a lot", I suppose. If something is simulating something the size of the universe, chances are it's not even going to notice us (unless we turn everything into paper clips, I suppose). Just because the universe could be a simulation doesn't mean that we're the point of the simulation.
Manon de Gaillande asked "Where does this configuration come from?" Seeing no answer yet, I'm also intrigued by this. Does it even make sense to ask it? If it doesn't, please help Manon and me dissolve the question.
It doesn't make sense in the strict sense, in that barring the sudden arrival of sufficiently compelling evidence, you aren't going to be able to answer it with anything but metaphysical speculation. You aren't going to come out less confused about anything on the other side of contemplating the question.
Furthermore, no answer changes ...
I can't remember which, but one of Brian Greene's books had a line that convinced me that all the configurations do exist simultaneously: "The total loaf exists". How can anything that crazy-sounding not be right?
I'm not sure that taking how crazy a given statement sounds as positively correlated with its truth is a useful strategy (in isolation). :-)
I guess I'm not sure what "exists" even means in this context. Is this in the general sense that "all mathematical objects exist"? I don't know what sin(435 rad) is offhand, ...
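(For what it's worth, the offhand question answers itself in one line; the value is perfectly well-defined whether or not anyone has ever computed it. A quick sketch, with `math.sin` taking its argument in radians:)

```python
import math

# 435 radians is about 69.2 full turns; what's left over lands near pi/2,
# so the sine is close to (but not exactly) 1.
value = math.sin(435)
print(value)  # a number in [-1, 1], close to 1
```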
I'm still trying to wrap my non-physicist brain around this.
Okay, so t is redundant, mathematically speaking. It would be as if you had an infinite series of numbers, and you were counting from the beginning. The definition of the series is recursive, and defined such that (barring new revelations in number theory) you can guarantee it will never repeat. As a trivial example, { t, i } = { 1, 1.1 }, { 2, 1.21 }, { 3, 1.4641 }.... t is redundant, in the sense that you don't need it there to calculate the next item in the series, and subtracting it mak...
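The trivial example above can be sketched in a few lines of Python (assuming the squaring rule i → i² implied by the values 1.1, 1.21, 1.4641); the point is that the update rule never reads t at all:

```python
def next_term(i):
    # The successor depends only on the current value, not on the index t.
    return i * i

series = []
i = 1.1
for t in range(1, 4):  # t is just a display counter, carried along redundantly
    series.append((t, i))
    i = next_term(i)

for t, i in series:
    print(t, i)  # (1, 1.1), (2, ~1.21), (3, ~1.4641)
```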
bambi: "Logic bomb" has the current meaning of a piece of software that acts as a time-delayed trojan horse (traditionally aimed at destruction, rather than infection or compromise), which might be causing some confusion in your analogy.
I don't think I've seen the term used to refer to an AI-like system.
Okay, trying to remember what I was thinking about 4 years ago.
A) Long term existential health would require us to secure control over our "housing". We couldn't assume that our progenitors would be interested in moving the processors running us to an off-world facility in order to ensure our survival in the case of an asteroid impact (for example).
B) It depends on the intelligence and insight and nature of our creators. If they are like us as we are now, as soon as we would attempt to control our own destiny in their "world", we would be at war with them.