The fact that I can knock 12 points off a Hamilton Depression scale with an Ambien and a Krispy Kreme should serve as a warning about the validity and generalizability of the term "antidepressant."
… every culture in history, in every time and every place, has operated from the assumption that it had it 95% correct and that the other 5% would arrive in five years’ time! All were wrong! All were wrong, and we gaze back at their naivety with a faint sense of our own superiority.
-- Terence McKenna, Culture and Ideology are Not Your Friends
… every culture in history, in every time and every place ...
We should implement a filter that changes the above phrase to "The USA in the 1950s". Because then the statements that include the phrase would generally become true.
I don't really disagree with the point he's trying to make there, and if we restrict ourselves to talking about post-Enlightenment Western cultures the argument might be largely accurate; but over all cultures in all times and places he's simply wrong.
It's actually fairly unusual for a culture to be consistently forward-looking at all, let alone to assume that the solutions to all its problems and the answers to all its open questions will arrive in a few years or decades. Most seem to have assumed that the present world is unusually debased and that thi...
That depends on your definition of hope, really.
I've generally been partial to Derrick Jensen's definition of hope, as given in his screed against it:
http://www.orionmagazine.org/index.php/articles/article/170/
...But what, precisely, is hope? At a talk I gave last spring, someone asked me to define it. I turned the question back on the audience, and here’s the definition we all came up with: hope is a longing for a future condition over which you have no agency; it means you are essentially powerless.
I'm not, for example, going to say I hope I eat something
It's entirely possible that there are classified analyses of the RHIC/LHC risks which won't be released for decades.
What public discussion was occurring in the 40s regarding the risks of atmospheric ignition?
I know the claim was that morality was implementation-independent, but I am just bothered by the idea that there can be multiple implementations of John.
Aren't there routinely multiple implementations of John?
John at 1213371457 epoch time
John at 1213371458
John at 1213371459
John at 1213371460
John at 1213371461
John at 1213371462
The difference between John and the John in a slightly different branch of reality is probably much smaller than the difference between John and the John five seconds later in a given branch of reality (I'm not sure of the correct grammar).
bambi: You're taking the very short-term view. Eliezer has stated previously that the plan is to popularize the topic (presumably via projects like this blog and popular science books) with the intent of getting highly intelligent teenagers or college students interested. The desired result would be that a sufficient number of them will go on to work for him after graduating.
One of the things that always comes up in my mind regarding this is the concept of space relative to these other worlds. Does it make sense to say that they're "on top of us" and out of phase so we can't see them, or do they propagate "sideways", or is it nonsensical to even talk about it?
Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn't be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.
I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it's at a sufficiently high level that it does survive massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having al...
@Ian Maxwell: It's not about the yous in the universes where you have signed up -- it's about all of the yous that die when you're not signed up. i.e. none of the yous that die on your way to work tomorrow are going to get frozen.
(This is making me wonder if anyone has developed a corresponding grammar for many worlds yet...)
Also, the fact that Eliezer won't tell, however understandable, makes me fear that Eliezer cheated for the sake of a greater good, i.e. he said to the other player, "In principle, a real AI might persuade you to let me out, even if I can't do it. This would be incredibly dangerous. In order to avoid this danger in real life, you should let me out, so that others will accept that a real AI would be able to do this."
I'm pretty sure that the first experiments were with people who disagreed with him about whether AI boxing would work. The...
It's impossible for me not to perceive time, to not perceive myself as myself, to not perceive my own consciousness.
You've never been so intoxicated that you "lose time", and woken up wondering who you threw up on the previous night? You've never done any kind of hallucinogenic drug? You don't ... sleep?
Those things you listed are only true for a fairly narrow range of operational parameters of the human brain. It's very possible to not do those things, and we stop doing them every night.
The sensation of time passing only seems to exist beca...
bambi: I think this would be related to Newcomb's Problem? Just because the future is fixed relative to your current state (or decision making strategy, or whatever), doesn't mean that a successful rational agent should not try to optimize its current state (or decision making strategy) so that it comes out on the desired side of future probabilities.
It all sorts itself out in the end, of course -- if you're the kind of agent that gets paralyzed when presented with a deterministic universe, then you're not going to be as successful as your consciousness moves to a different part of the configuration as agents that act as if they can change the future.
If everything we know is but a simulation being run in a much larger world, then "everything we know" isn't a universe.
The question wasn't "what's outside the universe?", it was "where did the configuration that we are a part of come from?"
I don't think you can necessarily equate "configuration" (the mathematical entity that we are implicitly represented within), with "universe" (everything that exists).
You're not imaginative enough. If the latter is true, we're a lot more likely to see messages from outside the Matrix sometime. ("Sorry, guys, I ran out of supercomputer time.")
For various values of "a lot", I suppose. If something is simulating something the size of the universe, chances are it's not even going to notice us (unless we turn everything into paper clips, I suppose). Just because the universe could be a simulation doesn't mean that we're the point of the simulation.
Manon de Gaillande asked "Where does this configuration come from?" Seeing no answer yet, I'm also intrigued by this. Does it even make sense to ask it? If it doesn't, please help Manon and me dissolve the question.
It doesn't make sense in the strict sense, in that barring the sudden arrival of sufficiently compelling evidence, you aren't going to be able to answer it with anything but metaphysical speculation. You aren't going to come out less confused about anything on the other side of contemplating the question.
Furthermore, no answer changes ...
I can't remember which, but one of Brian Greene's books had a line that convinced me that all the configurations do exist simultaneously: "The total loaf exists". How can anything that crazy-sounding not be right?
I'm not sure that treating how crazy a statement sounds as positively correlated with its truth is a useful strategy (in isolation). :-)
I guess I'm not sure what "exists" even means in this context. Is this in the general sense that "all mathematical objects exist"? I don't know what sin(435 rad) is offhand, ...
I'm still trying to wrap my non-physicist brain around this.
Okay, so t is redundant, mathematically speaking. It would be as if you had an infinite series of numbers, and you were counting from the beginning. The definition of the series is recursive, and defined such that (barring new revelations in number theory) you can guarantee it will never repeat. As a trivial example, { t, i } = { 1, 1.1 }, { 2, 1.21 }, { 3, 1.4641 }.... t is redundant, in the sense that you don't need it there to calculate the next item in the series, and subtracting it mak...
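If it helps, here's a minimal sketch of what I mean, assuming the squaring rule and starting value implied by those example values (1.1, 1.21, 1.4641): the index t gets printed as a label but is never consulted when computing the next term.

# Minimal sketch (Python). The starting value 1.1 and the squaring rule are
# assumptions read off the example values above, not anything deeper.

def next_term(i):
    # The update consults only the current value i, never the index t.
    return i * i

i = 1.1
for t in range(1, 4):
    # t is just a label here; dropping it loses no information.
    # (Floating-point rounding will add a little noise to the printed values.)
    print(t, i)
    i = next_term(i)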
bambi: "Logic bomb" has the current meaning of a piece of software that acts as a time-delayed trojan horse (traditionally aimed at destruction, rather than infection or compromise), which might be causing some confusion in your analogy.
I don't think I've seen the term used to refer to an AI-like system.
@Unknown: In the context of the current simulation story, how long would that take? Less than a year for them, researching and building technology to our specs (this is Death March-class optimism....)? So only another 150 billion years for us to wait? And that's just to start beta testing.
As for the general question, it shouldn't have one unless you can guarantee its behavior. (Mainly because you share this planet with me, and I don't especially want an AI on the loose that could (to use the dominant example here) start the process of turning the enti...
In real life if this happened, we would no doubt be careful and wouldn't want to be unplugged, and we might well like to get out of the box, but I doubt we would be interested in destroying our simulators; I suspect we would be happy to cooperate with them.
Given the scenario, I would assume the long-term goals of the human population would be to upload themselves (individually or collectively) to bodies in the "real" world -- i.e. escape the simulation.
I can't imagine our simulators being terribly cooperative in that project.
Unless you believe that the universe is being simulated in a computer (which seems like a highly unparsimonious not to mention anthropocentric assumption)
I can certainly see how it's an unparsimonious assumption, but how is it especially anthropocentric? Would you consider a given Conway Game of Life run to be "glidercentric"?
I own at least two distinct items of clothing printed with this theorem, so it must be important.
Isn't this an argumentum ad vestem fallacy?
Not a comment on the theory, but if you want to play with the experiments yourself, find some old LCD electronics (calculators, etc) that can be sacrificed on the altar of curiosity. They typically have a strip of polarizing material above the display (rather, they did when I was growing up).
It's a bit more elegant than trying to get some sunglasses oriented at 90° to each other.
@Ben Jones:
I don't disagree about the utility of the term, I'm just trying to figure out what should be considered a dimension in "thingspace" and what shouldn't. Obviously our brain's hormonal environment is a rather important and immediate aspect of the environment, so we tend to lend undue importance to those things which change it.
To continue to play Devil's Advocate, where does the line get drawn?
If you extend the hypothetical experiment out to a sufficiently sized random sampling of other people, and find that Wigginettes are more likely t...
@Ben Jones:
Remember, Thingspace doesn't morph to one's utility function - it is a representation of things in reality, outside one's head.
But... your head is part of reality, is it not?
Could you not theoretically devise an experiment that showed a correlation between the presence of black hair / green eyes and biochemical changes in your brain and hormonal systems?
This particular cluster in Thingspace - female features which Ben Jones, specifically, finds attractive - may not be of any use to anyone but you (with the possible exception of women in your soc...
Wigginettes does that for me, regardless of whether or not it describes a cluster.
Isn't it describing the cluster of women whom you expect to be attracted to? Surely one of the dimensions in the subset of thingspace that you work with can be based upon your expected reaction to a set of physical features.
"The laws of physics the universe runs on are provably Turing-equivalent."
Are there any links or references for this? That sounds like fascinating reading.
Thanks for this over the holidays. (You asked for feedback from practical applications).
It helped me come to the realization of why some stores can get away with putting horribly, stupidly expensive chocolates on display right at the counter: not only do they want you to buy it (duh), but it also lets your recipients know that you bought them a $5.99 bar of chocolate that would otherwise be indistinguishable from the larger $1.49 chocolate bars at the grocery store (assuming that your recipients have shopped at the same stores as you and are aware of how ...
My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and 0^^^3 is still 0.
However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don't you?
If I were omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car acc...
Okay, trying to remember what I was thinking about 4 years ago.
A) Long-term existential health would require us to secure control over our "housing". We couldn't assume that our progenitors would be interested in moving the processors running us to an off-world facility in order to ensure our survival in the case of an asteroid impact (for example).
B) It depends on the intelligence and insight and nature of our creators. If they are like us as we are now, as soon as we would attempt to control our own destiny in their "world", we would be at war with them.